Representing Representation – Mary’s Room

April 6, 2007

Some comments on Pete Mandik’s paper The Neurophilosophy of Subjectivity. Here Pete struggles with some of the main issues I’m interested in, albeit more professionally and coherently. An unjust summary of points made in the paper: Phenomenal Raw Feels don’t exist as mere sensory input devoid of higher-level mental processes. If this is true, then would it be possible to replicate an “experience” purely at a higher level? Pete answers in the affirmative, at least to an extent. Mary, by her study, could have more what-it’s-like knowledge than a control subject who isn’t a brilliant color scientist.

How might higher-level representation represent “phenomenal” representation? Pete remarks,

If a picture is indeed worth a thousand words, then isn’t it cheating to say that whatever a picture P represents, the same thing can be represented by the description “whatever P represents”? Even if the “whatever P represents” move is somehow disqualified, the following move remains open: just add words to the description. Why couldn’t a sufficiently long description represent all of the same things without itself being a picture?

The context of that comment is a discussion he has of Dennett’s Jello box. Tear a Jello box in half, and the tear line is infinitely complex, each side becoming the detector for the other. So it is with our perception of red. Red is what it is because one side of the Jello box is a set of physical properties of the environment, and another side is our sensory input and brains which evolved in an environmental context. I think it’s within Dennett’s observation that if we could perfectly describe one side of the tear, we’d be able to anticipate the other. That’s one way in which higher level representation might represent the knowledge of redness. But is the perfect memory, or imagined concept, exactly the same as staring at a red dot – can it duplicate the Raw Feel?

Mary’s room frames the problem of qualia in terms of knowledge. And solving the problem in the above way has Mary knowing red the same way the rest of us know red while not presently looking at something red. It’s tempting to raise the bar and demand Mary induce the experience of red without looking at something red. Or to “know” red the same way we know red while staring at something red.

Following up on Pete’s comment in this way, how could words, or some other mode of representation, represent the same thing as whatever is going on while staring at the red dot?

Could, as Pete suggests, words stand in for whatever is going on to create the picture? And in real time, in order to mimic the Raw Feel of looking at red?

The answer I try to convince myself is true is yes, to both. My strategy is to grapple with the kind of representation going on in a Raw Feel, the kind of hardware necessary to produce the Raw Feel, and ultimately, to question the project of representation altogether by confusing the symbol with what’s being represented. Some of this is groping in the dark, I admit, as I’m not 100% certain of what I’m trying to say.

1) What we convince ourselves we see isn’t necessarily what we’re really seeing. As Dennett argues, we don’t really see every detail in a complex rug pattern. As I understand him, he argues that even a Raw Feel, staring at a field of wheat, doesn’t put a million little wheat grains in our head .GIF style, but rather our Raw Feel works like a .JPEG. This is a consequence of the experience happening between high- and low-level brain functions. At the high level, all kinds of cultural, linguistic, and other things are working to shape the experience. So the complex picture that words can’t describe is already, in part, a kind of word, or language. Like in a .JPEG, let some symbol or number stand for a whole lot of wheat stalks. Let something else help shape the differing shades of brown throughout the picture and the thousands of edges that aren’t really continuously and individually “before our mind’s eye”. In this way, by lowering the bar of what constitutes a picture, we can see more easily how words might be able to access the high-level functions involved during a Feel.
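To make the lowered bar concrete, here is a toy sketch of the symbol-standing-for-wheat idea. Real JPEG compression is far more sophisticated; this is just run-length encoding, where a single symbol-count pair does the representational work of a long stretch of repeated detail:

```python
# Toy sketch of the compression analogy: instead of storing every wheat
# grain individually (.GIF style), let one (symbol, count) pair stand in
# for a whole run of detail. Not how JPEG actually works; just the idea.

def compress(field):
    """Collapse runs of identical symbols into [symbol, count] pairs."""
    runs = []
    for symbol in field:
        if runs and runs[-1][0] == symbol:
            runs[-1][1] += 1
        else:
            runs.append([symbol, 1])
    return runs

def render(runs):
    """Expand the shorthand back into the 'raw' field."""
    return "".join(symbol * count for symbol, count in runs)

field = "w" * 1000 + "e" + "w" * 500   # wheat, one edge, more wheat
runs = compress(field)

print(len(field))  # 1501 raw symbols supposedly "before the mind's eye"
print(len(runs))   # 3 shorthand tokens doing the same representational work
assert render(runs) == field
```

The point of the sketch is only that a picture-like representation can be mostly made of word-like tokens without the owner noticing any loss.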

2) Hardware. It’s probably ridiculous to believe that by pure mental manipulations we can give ourselves the same thing as a stare at a field of wheat. Though the brain, devoid of real inputs, can come close, as those who have had vivid dreams know. And for those, like me, who have had intense episodes of sleep paralysis and hag dreams, it’s even easier to imagine internal experience with “experiences” that aren’t pain or sound but God knows what. So again, perhaps this lowers the bar of Raw Feels a little more.

But if we still can’t get concepts to jump high enough, we can call upon hardware substitutions to aid, so long as the hardware is sufficiently different and we rely on it for processing power rather than its specific function. For instance, if I lost my sight tomorrow, and God zapped me with the ability to echolocate, how different would my experiences be? (Assuming others couldn’t see me making the noises!)

When I’ve brought up the Cypher analogy (from The Matrix), I’ve intended to cover points 1) and 2) together. Cypher claims that all he sees is “Blonde” and “Red Dress” while he stares at the code streaming by. The hardware substitution here is just beefing up his normal capacity so it makes sense for him to supersonically speed-read and process. The visual aspect of his experience is unrelated to visually seeing red. The code is the words, and the Matrix experience is the picture. So equipped with superhardware (that I’m adding), he’s trained himself to unconsciously translate fine-grained language in real time so efficiently that it’s, by his report, no different from seeing “red”. And no doubt as time goes on, simple substitutions work their way in as placeholders for complex information, analogous to the way digital compression works, in order to form more effective experience. The same perhaps could be achieved with supermemory.

3) The final point is that if Cypher can see red by reading Matrix code, red as a standard to be achieved is tied to his history as a sighted person navigating the world. But what is the true experience of the world? It would seem that Cypher and the later Neo both understand the code perfectly well but translate in opposite directions. Neo just processes code when he effortlessly battles the agents at the end, instead of seeing a picture. The point at which they forget which mode they’re in, where working with symbol substitutions or filled-in pixels begins to run together, is where the computational thesis gains force.


Is Modal Logic – Lame?

March 19, 2007

By this question, I don’t mean so much the ontological commitment (modal realism, etc.), but merely working out problems by adding up the sum total of a bunch of formally stated premises, especially with modal qualifiers.

I’m all for tightening up arguments and precision. I know…

But I just can’t force myself to study the subject very deeply. I’ve studied logic with quantification and all that a bit, and it’s a little interesting in its own right, but here’s the thing. How many philosophy-oriented deductive arguments past two or three steps have made a serious difference, or a contribution for that matter?

It seems to me that it’s rare to even get agreement on problems that are stated with two or three premises. And that’s just standard quantification. Now add in “possibly” and so on, and how many formally stated arguments out there past three steps are generally agreed upon as being true?
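For concreteness (this is my own illustration, not an argument from the literature), here is the shape of thing I mean, stated in the weak modal system K. The inference itself is uncontroversially valid; the trouble is that agreement on premises dressed in “necessarily” and “possibly” is even harder to come by than agreement on plainly quantified ones:

```latex
\begin{align*}
&\text{1. } \Box(p \rightarrow q) && \text{premise: necessarily, if } p \text{ then } q\\
&\text{2. } \Diamond p && \text{premise: possibly } p\\
&\text{3. } \therefore\ \Diamond q && \text{from 1 and 2, since } \Box(p \rightarrow q) \rightarrow (\Diamond p \rightarrow \Diamond q) \text{ is a theorem of K}
\end{align*}
```

Even this two-premise toy invites a fight over whether premise 1 is really a necessity or just a regularity, which is my complaint in miniature.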

Robert Lanza Discovers Idealism

March 14, 2007

See this article in The American Scholar.

A stem cell researcher has discovered the novel idea that the world, in whole or in part, is a mental construct. Anticipating Immanuel Kant a few hundred years after the fact, Lanza suggests that the mind creates the spatio-temporal world where the rest of reality plays out.

In regard to Lanza’s fascination with Zeno’s Paradox, granted, biology majors wouldn’t necessarily study Aristotle, but aren’t they required to take at least a semester of business calculus?

And quantum mechanics isn’t so mystical these days. The controversial observer-relative “collapse of the wave function” found in the Copenhagen interpretation of quantum mechanics isn’t nearly as fashionable among physicists today as it was thirty years ago. Interpretations of QM are the realm of metaphysics, not squarely laboratory and mathematical science.

However, more important than the fact that Lanza fails to slay his beast is that his arrow doesn’t even release in the right direction. Paraphrasing Chalmers, he sets up his objective:

These theories reflect some of the important work that is occurring in the fields of neuroscience and psychology, but they are theories of structure and function. They tell us nothing about how the performance of these functions is accompanied by a conscious experience.

As a solution he offers:

Space and time, not proteins and neurons, hold the answer to the problem of consciousness. When we consider the nerve impulses entering the brain, we realize that they are not woven together automatically, any more than the information is inside a computer. Our thoughts have an order, not of themselves, but because the mind generates the spatio-temporal relationships involved in every experience. We can never have any experience that does not conform to these relationships, for they are the modes of animal logic that mold sensations into objects.

In other words, like Kant, Lanza skirts physicalism and wanders the terrain of functionalism. Chalmers would no doubt have to ask him, “After you’ve delineated the function of the ‘mind’ generating spatio-temporal relationships, where is the conscious experience?” So for all of the gap theorists out there, it looks like consciousness, unfortunately, is still a mystery.

Is Functionalism Physicalism?

March 9, 2007

How does functionalism relate to physicalism? This isn’t a trivial question, and this post is just to raise the issue rather than attempt to solve it. One of the things at stake is this: if philosopher x is a functionalist and y an opponent, and they have different definitions of what functionalism is, then it will be less clear how they disagree.

Ned Block and Jerry Fodor are broadly functionalists, save for qualia and some higher cognitive functions (Fodor), while just assuming physicalism is probably true. So for instance, while the inverted spectrum argument is often wielded against physicalism, in the case of Block and Fodor it targets only functionalism. And if functionalism in a strong sense is false but physicalism true, how are they different?

David Chalmers, whose key interest is narrowly the ontology of mind, virtually interchanges the terms functionalism and physicalism. He argues the world is causally closed and that everything save conscious experience reduces to the physical. More specifically, he argues that the soft sciences are ultimately linked to the physical by their functionalizability. If something is functional, then it’s physical. How the micro world could specifically be understood in functional terms is unclear, but paragraphs like this are key:

“2. The principle of organizational invariance. This principle states that any two systems with the same fine-grained functional organization will have qualitatively identical experiences. If the causal patterns of neural organization were duplicated in silicon, for example, with a silicon chip for every neuron and the same patterns of interaction, then the same experiences would arise.”

So for him the “causal patterns of neural organization” is the same thing as the “fine-grained functional organization”. He notes later in the paragraph Searle’s disagreement, but where does Searle disagree when his own view champions “causal supervenience”? What is the difference between “causal supervenience” and “causal patterns of neural organization”? I think Searle believes that if silicon or biology can truly duplicate physical causality in the right way, then consciousness results. But then, there is no equating causality with functionality for Searle. To sharpen my point here, consider Searle’s rejection of Penrose’s quantum account of mind. Searle affirms that the brain is 1) just a machine and 2) a neural net. But isn’t that just what the functionalists have been trying to tell Searle all along?(!)

Not exactly, because the causal account of how that machine IS a machine matters. Modeling the synaptic connections perfectly is, for Searle, still just a model. But isn’t Chalmers’s model going deeper than that? Herein lies where I think they’re talking past each other. At what point are we moving from a functional model to the real thing? When Chalmers says we’ll replace a neuron with a chip performing the same function, he seems to mean down to the relevant level of physical causality, wherever that is. And when Searle rejoins, he seems to mean that a functional account can’t capture the relevant causal level. Chalmers assumes functionalism is physicalism, and Searle assumes it’s not.

Searle argues that functions are something we ascribe. Is there anything inherently “computational” about an abacus? It’s a kid’s toy or a door prop as much as it is a calculator, depending on how we interpret it. Whereas it would seem there is something more objectively real about physical causality. Now I don’t think that ultimately works, because I think that physics is also a product of our interpretation. And more importantly, some of Searle’s more ambitious attempts to trivialize functions (which can be “anything”) have been adequately refuted.

Needless to say, I think Searle and Chalmers are both right in their deeper points. But the whole discussion is problematized by the lack of agreement on what functionalism actually is, and in particular, how it relates to physicalism.

String Theory, Postmodernism, Market Efficiency

February 28, 2007

I’ve been having fun reading Not Even Wrong and following up on its links the past few days. String Theory – yeah ok, I tried to follow an introduction to gauge theory this afternoon and realized this is a topic I’ll be ignorant on until Clark gets rich from selling chocolate and does some layman’s-level write-ups on his blog – is apparently in a crisis. Or at least that’s what some physicists, Woit and Smolin in particular, say in books they’ve written. Smolin’s book on this topic draws on Kuhn’s and Feyerabend’s philosophy of science. Smolin is also the father of an alternative to String Theory, Loop Quantum Gravity. Other physicists, notably Lubos Motl, are very angry about this dissent and accuse the apostates in harsh words of all kinds of heresy, including “postmodernism” and “communism”. Well, the main points I gather from the skeptics’ case are that string theory is only becoming more complicated without having solved any real problems, and that it has little chance of making experimental predictions in the future. “Not even wrong,” I take it, disputes String Theory on Popper’s grounds of falsifiability. Smolin also apparently tries to draw this criterion out; it’s said that Loop Quantum Gravity is falsifiable whereas String Theory isn’t.

If I understand Smolin, he believes science is in a crisis mode (Kuhn) and that there exists a need for new perspectives (Feyerabend) in order to make progress. The establishment is steeped in bureaucracy and resistance to alternative ideas. Now, what Feyerabend meant by “Anything Goes” is that historical examples of scientific achievements border on relativism if our guiding light is a simplistic formulation of “scientific method”. The demarcation problem, according to him, remains an intractable one. What constitutes good and bad science is difficult to determine past a certain, blurry point. For instance, Feyerabend’s famous alternative to the Enlightenment account of the Galileo event puts Galileo in the position of being “anti-science” rather than the shallow champion of Occam’s Razor.

I see a close resemblance between Feyerabend’s idea and the Efficient Market Hypothesis of Chicago-school economics. This theory essentially states that there is no way to beat the stock market average by skill, because the information about public companies is too easily available to be valuable. No-arbitrage assumptions are at least the hope for those who believe in capitalism. As an example, if a stock follows a predictable trend, then it can’t be a money-making one, because if it were, plenty of market players would be there to buy or sell as needed, bidding the price up or down and destroying the trend. In a way, philosophers of science are “trend watching”, looking to past performance as an indicator of future results. There may be some historical themes in science, just as there are in financial markets. In retrospect, we have some excellent ideas as to what happened to bring about our current situation. But to use that information in a way that would “beat the market” in the future (predict which theory is right in a profitable way, well beyond current expectations) is very, very hard. So future history might tell us whether today’s science needed more valley crossers than hill climbers (Smolin), but if the “market of ideas” in science approximates efficiency, it would be very difficult to make this prediction today – or to make this prediction better than others who are already making forecasts by choosing graduate programs and offering grants.
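The trend-destruction point can be made concrete with a toy model (entirely mine, with made-up numbers, not anything from the Chicago-school literature): start with a perfectly predictable price climb, let traders repeatedly buy today whatever is predictably cheap relative to tomorrow, and watch the exploitable gap get bid away:

```python
# Toy no-arbitrage sketch: a predictable trend invites buying today,
# and the buying pressure bids the predictable gap away.

def arbitrage_away(prices, rounds=50):
    """Each round, traders bid today's price toward tomorrow's
    predictable level, shrinking the exploitable gap."""
    prices = list(prices)
    for _ in range(rounds):
        for t in range(len(prices) - 1):
            gap = prices[t + 1] - prices[t]  # the predictable trend
            prices[t] += 0.5 * gap           # buying today bids it away
    return prices

trend = [100 + 2 * t for t in range(10)]  # climbs $2/day, totally predictable
flattened = arbitrage_away(trend)

gaps = [b - a for a, b in zip(flattened, flattened[1:])]
print(max(abs(g) for g in gaps))  # close to zero: no trend left to trade on
```

The analogy to philosophy of science is that anyone who could reliably predict tomorrow’s winning research program would already be acting on it, flattening the very pattern the prediction relied on.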

The Chicago school does not believe in price bubbles. In other words, to them, the 2000–2001 stock market tumble wasn’t predictable. Price-to-earnings ratios and other fundamentals being “out of whack” wasn’t news to anyone who mattered. Similarly, the fact that string theory is “unfalsifiable” doesn’t appear to be news to any string theorists. Philosophers of science assessing the situation by the “fundamentals” have no more certainty than trend watchers for one-upping their peers. It may be that falsifiability, for instance, proves the Achilles’ heel of String Theory, but it may prove to be the case that it becomes falsifiable, or that a future science doesn’t follow today’s rules as we’d hope.

So whether science is really in a crisis remains to be seen. Certainly it seems to their credit that Woit and Smolin are cautious skeptics, each acknowledging his own voice as one of many. And they may prove to be right.

Qualia and Externalism

February 21, 2007

I may be generalizing here, but it seems to me the folks over at Brain Pains are both strong content externalists and proponents of qualia. I’m trying to figure out how those would go together. I’ll be honest and say today is the first time I’ve ever thought about it, and I just don’t see how they’d co-exist. I’m sure there is a good explanation, but…

Recall, content externalism is the belief that mental content is constituted in part by external factors. Following Putnam’s Twin Earth, if water is XYZ on another planet and not H2O, then the two thoughts of water by creatures in either situation with identical internal states are different. The attraction, I suppose, is to ward off relativism. If a first intension {water-H2O} is wrong, then a secondary intension {XYZ} closes the deal independent of our faults, so we don’t have to worry about the “world changing” {Kuhn} as our scientific theories update.

Now, I don’t know what Putnam explicitly thought about qualia, but I do know he invented functionalism, so that implies he either reduced or eliminated qualia. But what about those who believe in nonreductive or nonphysical qualia and content externalism?

– qualia are part of our mental content
– qualia are indubitable to us
– mental content is external

These premises result in at least what really seems like a contradiction. If “red” is red because it seems that way to me and nothing more {qualia}, then how can it be ‘outside the head’ {external}?
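To state the tension bluntly, on a deliberately strong reading of each premise (this formalization is mine, not anything from the Brain Pains folks, and premise 3 strengthens “partly external” to “not internal”), the three claims jointly rule qualia out:

```latex
\begin{align*}
&\text{1. } \forall x\,(Q(x) \rightarrow M(x)) && \text{qualia are part of our mental content}\\
&\text{2. } \forall x\,(Q(x) \rightarrow I(x)) && \text{qualia are fixed by how things seem to me, i.e. internal}\\
&\text{3. } \forall x\,(M(x) \rightarrow \neg I(x)) && \text{mental content is (strongly) external}\\
&\text{4. } \therefore\ \neg\exists x\,Q(x) && \text{from 1--3: any quale would be both internal and not internal}
\end{align*}
```

Of course the externalist will reject the strong reading in premise 3, but then the burden is to say which part of a quale’s content is the external part.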

The Matrix and Qualia

February 17, 2007

In the Movie The Matrix, Cypher discusses how he monitors what’s going on inside the virtual world:

there’s way too much information to decode the Matrix. You get used to it, though. Your brain does the translating. I don’t even see the code. All I see is blonde, brunette, and redhead.

Whenever I think of computationalism I think about this scene. I don’t, of course, think anyone could ever actually read “computer code” that fast, nor would it make any sense to work in machine language if Visual Basic will suffice. But I think there are ways in which the scene is instructive.

Let’s assume there is no such thing as qualia. It is reasonable to me that something like qualia or phenomenal experience would yet be reported anyhow. When laboring in everyday thinking about history and philosophy, we can describe many of our thoughts in a few sentences. But when it comes to the vast amount of information our sensory ASICs process, such as the visual field during freeway driving, we’d be helpless to communicate the details in language without some kind of shortcuts. As the relevant information density increases, the more would-be experiential terminology would be needed to communicate. An omniscient bicycle metaphysician who has never ridden a bike and an omnipotent BMX racer who has never studied physics would both have to take great shortcuts to coach an understudy, or even to think about coaching an understudy in concepts, and their programs, I’d wager, would be similar.

Returning to The Matrix, if Cypher really could translate all that code as it scrolls by, how else could he report it but as experience? I think there is a parallel in Dennett’s theory on blindsight. As the baud rate is turned up by the objects moving faster across the visual field, the (star) subjects report “experiencing” it. Perhaps in a similar way, communicating in a foreign language with the aid of translation dictionaries is thinking – really hard thinking – but speaking naturally in one’s own language seems to have a subtle phenomenal aspect to it in addition to a thinking aspect.

One objection might be that Cypher clearly intended his remarks to be metaphorical and not literal. But one-dimensional qualia, mistaken perceptions of mistaken perceptions to any order of iteration, couldn’t be much more than metaphor anyway. Hitting my fingers with a hammer hurts like hell. There’s no better way to put it. We can match up these experiences, but there is no intrinsic stability therein. And finally, there is the other side of the coin. Neo, the omniscient one. As Neo’s knowledge increases to superhuman proportions (think of Mary’s knowledge of color as she gulps down color equation after equation), instead of seeing “redhead” or “agent”, the phenomenal world disappears and he sees ‘reality’, the code. You know, everything is in slow motion. Slow down the baud rate of sensory input, and the illusion of qualia intuitively becomes information processing.

Contemplating a zombie world devoid of “inner life” is supposed to be possible to do, according to the gap theorists, but it’s also supposed to be an exercise in absurdity. Ha-ha-ha, zombie A.G. hits his finger with a hammer and screams but doesn’t actually “feel” any pain. The above is a way to begin conceiving of a world that is exhausted by the psychological but where phenomenal reports are essential to the way it works.