Language

May 7, 2007

Where is the line drawn for what constitutes “what it’s like to be” something? What it’s like to see red seems the obvious example of a what-it’s-like phenomenon. But what about knowing a language or having a particular culture? Is there what-it’s-likeness in knowing a language beyond the veridical aspects, such as directly perceiving words by hearing or sight?


Representing Representation – Mary’s Room

April 6, 2007

Some comments on Pete Mandik’s paper The Neurophilosophy of Subjectivity. Here Pete struggles with some of the main issues I’m interested in, albeit more professionally and coherently. An unjust summary of points made in the paper: Phenomenal Raw Feels don’t exist as mere sensory input devoid of higher level mental processes. If this is true, then would it be possible to replicate an “experience” purely at a higher level? Pete answers in the affirmative, at least to an extent. Mary, by her study, could have more what-it’s-like knowledge than a control subject who isn’t a brilliant color scientist.

How might higher level representation represent “phenomenal” representation? Pete remarks,

If a picture is indeed worth a thousand words, then isn’t it cheating to say that whatever a picture P represents, the same thing can be represented by the description “whatever P represents”? Even if the “whatever P represents” move is somehow disqualified, the following move remains open: just add words to the description. Why couldn’t a sufficiently long description represent all of the same things without itself being a picture?

The context of that comment is a discussion he has of Dennett’s Jello box. Tear a Jello box in half, and the tear line is infinitely complex, each side becoming the detector for the other. So it is with our perception of red. Red is what it is because one side of the Jello box is a set of physical properties of the environment, and the other side is our sensory input and brains, which evolved in an environmental context. I take Dennett’s observation to imply that if we could perfectly describe one side of the tear, we’d be able to anticipate the other. That’s one way in which higher level representation might represent the knowledge of redness. But is the perfect memory, or imagined concept, exactly the same as staring at a red dot – can it duplicate the Raw Feel?

Mary’s Room frames the problem of qualia in terms of knowledge. And solving the problem in the above way has Mary knowing red the same way the rest of us know red while not presently looking at something red. It’s tempting to raise the bar and demand that Mary induce the experience of red without looking at something red, or that she “know” red the same way we know red while staring at something red.

Following up on Pete’s comment in this way: how could words, or some other mode of representation, represent the same thing as whatever is going on while staring at the red dot?

Could, as Pete suggests, words stand in for whatever is going on to create the picture? And in real time, in order to mimic the Raw Feel of looking at red?

The answer I try to convince myself is true is yes, to both. My strategy is to grapple with the kind of representation going on in a Raw Feel, the kind of hardware necessary to produce the Raw Feel, and ultimately to question the project of representation altogether by confusing the symbol with what’s being represented. Some of this is groping in the dark, I admit, as I’m not 100% certain of what I’m trying to say.

1) What we convince ourselves we see isn’t necessarily what we’re really seeing. As Dennett argues, we don’t really see every detail in a complex rug pattern. As I understand him, he argues that even a Raw Feel, say staring at a field of wheat, doesn’t put a million little wheat grains in our head .GIF style; rather, our Raw Feel works more like a .JPEG. This is a consequence of the experience happening between high and low level brain functions. At the high level, all kinds of cultural, linguistic, and other things are working to shape the experience. So the complex picture that words can’t describe is already, in part, a kind of word, or language. As in a .JPEG, let some symbol or number stand for a whole lot of wheat shafts. Let something else help shape the differing shades of brown throughout the picture and the thousands of edges that aren’t really continuously and individually “before our mind’s eye.” In this way, by lowering the bar of what constitutes a picture, we can see more easily how words might be able to access the high level functions involved during a Feel.
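To make the compression analogy a little more concrete, here is a minimal sketch of my own in Python – not anything Dennett or Pete actually proposes, just the simplest symbolic compression I can think of – where one compact token stands in for thousands of identical wheat shafts the way a word-like placeholder might stand in for detail in the Feel.

```python
# Toy illustration of the point above: instead of holding every "wheat shaft"
# individually (.GIF style), let one compact token stand in for a long run of
# them (closer to the .JPEG-style Feel). All names here are my own invention.

from itertools import groupby

def run_length_encode(pixels):
    """Collapse runs of identical elements into (symbol, count) pairs."""
    return [(symbol, len(list(run))) for symbol, run in groupby(pixels)]

# A "field of wheat" as raw detail: thousands of near-identical elements.
field = ["wheat"] * 10_000 + ["fence"] + ["wheat"] * 5_000

print(run_length_encode(field))
# -> [('wheat', 10000), ('fence', 1), ('wheat', 5000)]
# Three word-like placeholders doing the work of 15,001 individual grains.
```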

2) Hardware. It’s probably ridiculous to believe that by pure mental manipulation we can give ourselves the same thing as a stare at a field of wheat. Though the brain, devoid of real inputs, can come close, as those who have had vivid dreams know. And for those, like me, who have had intense episodes of sleep paralysis and hag dreams, it’s even easier to imagine internal experience with “experiences” that aren’t pain or sound but God knows what. So again, perhaps this lowers the bar of Raw Feels a little more.

But if we still can’t get concepts to jump high enough, we can call upon hardware substitutions to aid, so long as the hardware is sufficiently different and we rely on it for processing power rather than its specific function. For instance, if I lost my sight tomorrow, and God zapped me with the ability to echolocate, how different would my experiences be? (Assuming others couldn’t see me making the noises!)

When I’ve brought up the Cypher analogy (from The Matrix), I’ve intended to cover points 1) and 2) together. Cypher claims that all he sees is “Blonde” and “Red Dress” while he stares at the code streaming by. The hardware substitution here just beefs up his normal capacity so that it makes sense for him to supersonically speed-read and process. The visual aspect of his experience is unrelated to visually seeing red. The code is the words, and the Matrix experience is the picture. So equipped with superhardware (that I’m adding), he’s trained himself to unconsciously translate fine-grained language in real time so efficiently that it’s, by his report, no different than seeing “red.” And no doubt as time goes on, simple substitutions work their way in as placeholders for complex information, analogous to the way digital compression works, in order to form more effective experience. The same perhaps could be achieved with supermemory.

3) The final point is that if Cypher can see red by reading Matrix code, red as a standard to be achieved is tied to his history as a sighted person navigating the world. But what is the true experience of the world? It would seem that Cypher and the later Neo both understand the code perfectly well but translate in opposite directions. Neo just processes code when he effortlessly battles the agents at the end, instead of seeing a picture. The point at which they forget which mode they’re in, where working with symbol substitutions or filled-in pixels begins to run together, is where the computational thesis gains force.


Breaking up Raw Feels

February 8, 2007

There’s a pretty interesting post up on Brain Pains about the next step in the evolution of the ability replies to Mary’s Room.

I plan on talking more later about this paper by Derk Pereboom, but just want to get a couple basic ideas on the table for now.

Derk defines the phenomenal this way: “the phenomenal property is as it is introspectively represented.” This captures the supposed one-dimensional aspect of qualia, that it is what it is. Our feeling of red might distort reality, but that feeling can’t be wrong. It doesn’t matter what kind of phobia we have of needles; if we think we’re in pain, we’re in pain.

Derk makes the case that the above description isn’t guaranteed to be true. If I understand him right, he’s driving a wedge between the property and the introspection by way of representational language. He elaborates the dualist/nonreductionist definition of qualia: “An introspective mode of presentation accurately represents the qualitative nature of a phenomenal property.”

This allows him to raise the question: what if it doesn’t accurately represent? He then argues that there is good reason to believe it’s possible that it might not accurately represent, and if it doesn’t, physicalism would be saved.

One important way this argument improves on the ability arguments (he claims) is that it is iterative in a way that keeps up with the dualist’s regress. An obvious problem with the ability argument is that while it may save knowledge, there is something else, the ability to know red in a different way, that resists physicalism. The same objection could be made here: the introspective mode of presentation might be said to resist physicalism even if qualia do not. But Derk thinks the key insight in his representational definition could be extended to that case, and to any nth case beyond it, so that there is always the possibility of inaccuracy open for the dualist’s next nonreductive term.

How he achieves the “possibility” is interesting. There doesn’t seem to be a direct way to argue for it, since we can’t get outside our own introspection, but he thinks there are parallel problems which give us reason to suppose his claim might be true.

As just one example, he gives self-referencing sentences. These kinds of sentences parallel our own indubitability about our feelings. The sentence “This German sentence has six words,” he says, represents correctly in one way and incorrectly in another. So, “for all we know,” the case might be the same for our own phenomenal introspection. It’s an open possibility. And if he’s right, that would defuse qualia arguments.
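To spell out the two dimensions of that example, here is a toy sketch of my own in Python; the crude “looks German” test is a placeholder I invented to keep it self-contained, not a real language detector and not anything Pereboom supplies.

```python
# The sentence "This German sentence has six words" is evaluated along two
# dimensions: its word-count claim checks out, its language claim does not.

sentence = "This German sentence has six words"
words = sentence.split()

# Dimension 1: does it really have six words? (accurate)
has_six_words = len(words) == 6

# Dimension 2: is it really German? Crude stand-in check of my own: does it
# open with a German article or demonstrative? (inaccurate)
looks_german = words[0] in {"Der", "Die", "Das", "Dieser", "Diese", "Dieses"}

print(has_six_words)   # True  -- represents correctly in one way
print(looks_german)    # False -- and incorrectly in another
```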


Bats – The Weaker Case

January 23, 2007

Chalmers’ Zombie argument is the most surgically precise rejection of physicalism. Other thought experiments leading up to this one aren’t necessarily exhausted by it. And it is, in fact, wrong to assume that the target has remained exactly constant.

If there is something it is like to be a bat, we can’t infer outright that physicalism is false. There are ontological, epistemic, and semantic aspects of the problem of qualia which have to be taken into account. Nagel, it seems, had a problem with science explaining everything, but held to a layered view of the physical world, and his aim seems to me to be epistemic. The layers are epistemically isolated, but not necessarily ontologically isolated. Nagel believed there could be “bridge laws” which translate the layers into each other.

So it would seem that the verdict on whether we can know what it’s like to be a bat isn’t exhausted by this thought exercise.


Knowledge?

January 16, 2007

Last night I covered a big section of “The Conscious Mind” talking about various conceptions of and responses to Mary’s Room. The best responses, in Chalmers’ estimation, are the ability responses. Mary didn’t have new knowledge upon seeing red for the first time; she just had a new ability. I follow this response to an extent. But I think it gets at a deeper issue: that knowledge and abilities are closely tied together, and even the coldest scientific theories don’t represent the world in itself but as constructed by brains and as run as programs within brains which ‘contain’ the knowledge, rather than as lists of propositions on paper.

In other words, one reason why it’s hard to respond to this knowledge argument is that it assumes an impossible ideal: that a person can have absolute propositional knowledge of everything – that there is such a thing, in principle, as perfect representation.