Saturday, September 20, 2014

IEET - Chalmers vs Pigliucci on the Philosophy of Mind-Uploading [2 Parts]

This comes from the Institute for Ethics and Emerging Technologies blog - a two-part series on the possibility of mind-uploading, written by John Danaher. The author pits an optimistic essay by David Chalmers against a pessimistic essay by Massimo Pigliucci. Both essays come from a recent book, Intelligence Unbound: The Future of Uploaded and Machine Minds (August, 2014).

Any reader familiar with this blog will know I side with Pigliucci in my belief that uploading consciousness into a machine is both undesirable and (even if it were desirable) impossible.

My main point: Consciousness is embodied, embedded, and enactive - and most importantly it is an emergent property of a complex biological system.

Chalmers vs Pigliucci on the Philosophy of Mind-Uploading (1): Chalmers’s Optimism

By John Danaher
Philosophical Disquisitions

Posted: Sept 17, 2014

The brain is the engine of reason and the seat of the soul. It is the substrate in which our minds reside. The problem is that this substrate is prone to decay. Eventually, our brains will cease to function and along with them so too will our minds. This will result in our deaths. Little wonder then that the prospect of transferring (or uploading) our minds to a more robust, technologically advanced, substrate has proved so attractive to futurists and transhumanists.

But is it really feasible? This is a question I’ve looked at many times before, but the recent book Intelligence Unbound: The Future of Uploaded and Machine Minds offers perhaps the most detailed, sophisticated and thoughtful treatment of the topic. It is a collection of essays, from a diverse array of authors, probing the key issues from several different perspectives. I highly recommend it.

Within its pages you will find a pair of essays debating the philosophical aspects of mind-uploading (you’ll find others too, but I want to zero in on this pair because one is a direct response to the other). The first of those essays comes from David Chalmers and is broadly optimistic about the prospect of mind-uploading. The second of them comes from Massimo Pigliucci and is much less enthusiastic. In this two-part series of posts, I want to examine the debate between Chalmers and Pigliucci. I start by looking at Chalmers’s contribution.


1. Methods of Mind-Uploading and the Issues for Debate

Chalmers starts his essay by considering the different possible methods of mind-uploading. This is useful because it helps to clarify — to some extent — exactly what we are debating. He identifies three different methods (note: in a previous post I looked at work from Seth Bamford suggesting that there were more methods of uploading, but we can ignore those other possibilities for now):
  • Destructive Uploading: As the name suggests, this is a method of mind-uploading that involves the destruction of the original (biological) mind. An example would be uploading via serial sectioning. The brain is frozen and its structure is analyzed layer by layer. From this analysis, one builds up a detailed map of the connections between neurons (and other glial cells if necessary). This information is then used to build a functional computational model of the brain.
  • Gradual Uploading: This is a method of mind-uploading in which the original copy is gradually replaced by functionally equivalent components. One example of this would be nanotransfer. Nanotechnology devices could be inserted into the brain and attached to individual neurons (and other relevant cells if necessary). They could then learn how those cells work and use this information to simulate the behaviour of the neuron. This would lead to the construction of a functional analogue of the original neuron. Once the construction is complete, the original neuron can be destroyed and the functional analogue can take its place. This process can be repeated for every neuron, until a complete copy of the original brain is constructed.
  • Nondestructive Uploading: This is a method of mind-uploading in which the original copy is retained. Some form of nanotechnology brain-scanning would be needed for this. This would build up a dynamical map of current brain function — without disrupting or destroying it — and use that dynamical map to construct a functional analogue.
Whether these forms of uploading are actually technologically feasible is anyone’s guess. They are certainly not completely implausible. I can imagine a model of the brain being built from a highly detailed scan and analysis. It might take a huge amount of computational power and technical resources, but it seems within the realm of technological possibility. The deeper question is whether our minds would really survive the process. This is where the philosophical debate kicks in.

There are, in fact, two philosophical issues to debate:
  • The Consciousness Issue: Would the uploaded mind be conscious? Would it experience the world in a roughly similar manner to how we now experience the world?
  • The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process? Would our identities be preserved?
The two issues are connected. Consciousness is valuable to us. Indeed, it is arguably the most valuable thing of all: it is what allows us to enjoy our interactions with the world, and it is what confers moral status upon us. If consciousness was not preserved by the mind-uploading process, it is difficult to see why we would care. So consciousness is a necessary condition for a valuable form of mind-uploading. That does not, however, make it a sufficient condition. After all, two beings can be conscious without sharing any important connection (you are conscious, and I am conscious, but your consciousness is not valuable to me in the same way that it is valuable to you). What we really want to preserve through uploading is our individual consciousnesses. That is to say: the stream of conscious experiences that constitutes our identity. But would this be preserved?

These two issues form the heart of the Chalmers-Pigliucci debate.

2. Would consciousness survive the uploading process?

So let’s start by looking at Chalmers’s take on the consciousness issue. Chalmers is famously one of the new-Mysterians, a group of philosophers who doubt our ability to have a fully scientific theory of consciousness. Indeed, he coined the term “The Hard Problem” of consciousness to describe the difficulty we have in accounting for the first-personal quality of conscious experience. Given his scepticism, one might have thought he’d have his doubts about the possibility of creating a conscious upload. But he actually thinks we have reason to be optimistic.

He notes that there are two leading contemporary views about the nature of consciousness (setting non-naturalist theories to the side). The first — which he calls the biological view — holds that consciousness is only instantiated in a particular kind of biological system: no nonbiological system is likely to be conscious. The second — which he (and everyone else) calls the functionalist view — holds that consciousness is instantiated in any system with the right causal structure and causal roles. The important thing is that the functionalist view allows for consciousness to be substrate independent, whereas the biological view does not. Substrate independence is necessary if an upload is going to be conscious.

So which of these views is correct? Chalmers favours the functionalist view and he has a somewhat elaborate argument for this. The argument starts with a thought experiment. The thought experiment comes in two stages. The first stage asks us to imagine a “perfect upload of a brain inside a computer” (p. 105), by which is meant a model of the brain in which every relevant component of a biological brain has a functional analogue within the computer. This computer-brain is also hooked up to the external world through the same kinds of sensory input-output channels. The result is a computer model that is a functional isomorph of a real brain. Would we doubt that such a system was conscious if the real brain was conscious?

Maybe. That brings us to the second stage of the thought experiment. Now, we are asked to imagine the construction of a functional isomorph through gradual uploading:
Here we upload different components of the brain one by one, over time. This might involve gradual replacement of entire brain areas with computational circuits, or it might involve uploading neurons one at a time. The components might be replaced with silicon circuits in their original location…It might take place over months or years or over hours.
If a gradual uploading process is executed correctly, each new component will perfectly emulate the component it replaces, and will interact with both biological and nonbiological components around it in just the same way that the previous component did. So the system will behave in exactly the same way that it would have without the uploading.
(Intelligence Unbound pp. 105-106)
Critical to this exercise in imagination is the fact that the process results in a functional isomorph and that you can make the process exceptionally gradual, both in terms of the time taken and the size of the units being replaced.

With the building blocks in place, we now ask ourselves the critical question: if we were undergoing this process of gradual replacement, what would happen to our conscious experience? There are three possibilities. Either it would suddenly stop, or it would gradually fade out, or it would be retained. The first two possibilities are consistent with the biological view of consciousness; the last is only consistent with the functionalist view. Chalmers’s argument is that the last possibility is the most plausible.


In other words, he defends the following argument:
(1) If the parts of our brain are gradually replaced by functionally isomorphic component parts, our conscious experience will either: (a) be suddenly lost; (b) gradually fade out; or (c) be retained throughout.
(2) Sudden loss and gradual fadeout are not plausible; retention is.
(3) Therefore, our conscious experience is likely to be retained throughout the process of gradual replacement.
(4) Retention of conscious experience is only compatible with the functionalist view.
(5) Therefore, the functionalist view is likely to be correct; and preservation of consciousness via mind-uploading is plausible.
Chalmers adds some detail to the conclusion, which we’ll talk about in a minute. The crucial thing for now is to focus on the key premise, number (2). What reason do we have for thinking that retention is the only plausible option?

With regard to sudden loss, Chalmers makes a simple argument. If we were to suppose, say, that the replacement of the 50,000th neuron led to the sudden loss of consciousness, we could break down the transition point into ever more gradual steps. So instead of replacing the 50,000th neuron in one go, we could divide the neuron itself into ten sub-components and replace them gradually and individually. Are we to suppose that consciousness would suddenly be lost in this process? If so, then break down those sub-components into other sub-components and start replacing them gradually. The point is that eventually we will reach some limit (e.g. when we are replacing the neuron molecule by molecule) where it is implausible to suppose that there will be a sudden loss of consciousness (unless you believe that one molecule makes a difference to consciousness: a belief that is refuted by reality since we lose brain cells all the time without thereby losing consciousness). This casts the whole notion of sudden loss into doubt.

With regard to gradual fadeout, the argument is more subtle. Remember it is critical to Chalmers’s thought experiment that the upload is functionally isomorphic to the original brain: for every brain state that used to be associated with conscious experience there will be a functionally equivalent state in the uploaded version. If we accept gradual fadeout, we would have to suppose that despite this equivalence, there is a gradual loss of certain conscious experiences (e.g. the ability to experience black and white, or certain high-pitched sounds etc.) despite the presence of functionally equivalent states. Chalmers argues that this is implausible because it asks us to imagine a system that is deeply out of touch with its own conscious experiences. I find this slightly unsatisfactory insofar as it may presuppose the functionalist view that Chalmers is trying to defend.

But, in any event, Chalmers suggests that the process of partial uploading will convince people that retention of consciousness is likely. Once we have friends and family who have had parts of their brains replaced, and who seem to retain conscious experience (or, at least, all outward signs of having conscious experience), we are likely to accept that consciousness is preserved. After all, I don’t doubt that people with cochlear or retinal implants have some sort of aural or visual experiences. Why should I doubt it if other parts of the brain are replaced by functional equivalents?

Chalmers concludes with the suggestion that all of this points to the likelihood of consciousness being an organizational invariant. What he means by this is that systems with the exact same patterns of causal organization are likely to have the same states of consciousness, no matter what those systems are made of.

I’ll hold off on the major criticisms until part two, since this is the part of the argument about which Pigliucci has the most to say. Nevertheless, I will make one comment. I’m inclined towards functionalism myself, but it seems to me that in crafting the thought experiment that supports his argument, Chalmers helps himself to a pretty colossal assumption. He assumes that we know (or can imagine) what it takes to create a “perfect” functional analogue of a conscious system like the brain. But, of course, we don’t really know what it takes. Any functional model is likely to simplify and abstract from the messy biological details. The problem is knowing which of those details is critical for ensuring functional equivalence. We can create functional models of the heart because all the critical elements of the heart are determinable from a third-person perspective (i.e. we know, from the outside, what is necessary to make the blood pump). That doesn’t seem to be the case with consciousness. In fact, that’s what Chalmers’s Hard Problem is supposed to highlight.

3. Will our identities be preserved? Will we survive the process?

Let’s assume Chalmers is right to be optimistic about consciousness. Does that mean he is right to be optimistic about identity/survival? Will the uploaded mind be the same as we are? Will it share our identity? Chalmers has more doubts about this, but again he sees some reason to be optimistic.

He starts by noting that there are three different philosophical approaches to personal identity. The first is biologism (or animalism), which holds that preservation of one’s identity depends on the preservation of the biological organism that one is. The second is psychological continuity, which holds that preservation of one’s identity depends on maintaining threads of overlapping psychological states (memories, beliefs, desires etc.). The third, slightly more unusual, is Robert Nozick’s “closest continuer” theory, which holds that preservation of identity depends on the existence of a closely-related subsequent entity (where “closeness” is defined in various ways).

Chalmers then defends two different arguments. The first gives some reason to be pessimistic about survival, at least in the case of destructive and nondestructive forms of uploading. The second gives some reason to be optimistic, at least in the case of gradual uploading. The end result is a qualified optimism about gradual uploading.

Let’s start with the pessimistic argument. Again, it involves a thought experiment. Imagine a man named Dave. Suppose that one day Dave undergoes a nondestructive uploading process. A copy of his brain is made and uploaded to a computer, but the biological brain continues to exist. There are, thus, two Daves: BioDave and DigiDave. It seems natural to suppose that BioDave is the original, and his identity is preserved in this original biological form; and it is equally natural to suppose that DigiDave is simply a branchline copy. In other words, it seems natural to suppose that BioDave and DigiDave have separate identities.

But now suppose we imagine the same scenario, only this time the original biological copy is destroyed. Do we have any reason to change our view about identity and survival? Surely not. The only difference this time round is that BioDave is destroyed. DigiDave is the same as he was in the original thought experiment. That suggests the following argument (numbering follows on from the previous argument diagram):
(9) In nondestructive uploading, DigiDave is not identical to Dave.
(10) If in nondestructive uploading, DigiDave is not identical to Dave, then in destructive uploading, DigiDave is not identical to Dave.
(11) In destructive uploading, DigiDave is not identical to Dave.
This looks pretty sound to me. And as we shall see in part two, Pigliucci takes a similar view. Nevertheless, there are two possible ways to escape the conclusion. The first would be to deny premise (10) by adopting the closest continuer theory of personal identity. The idea then would be that in destructive (but not nondestructive) uploading DigiDave is the closest continuer and hence the vessel in which identity is preserved. I think this simply reveals how odd the closest continuer theory really is.

The other option would be to argue that this is a fission case. It is a scenario in which one original identity fissions into two subsequent identities. The concept of fissioning identities was originally discussed by Derek Parfit in the case of severing and transplanting of brain hemispheres. In the brain hemisphere case, some part of the original person lives on in two separate forms. Neither is strictly identical to the original, but they do stand in “relation R” to the original, and that relation might be what is critical to survival. It is more difficult to say that nondestructive uploading involves fissioning. But it might be the best bet for the optimist. The argument then would be that the original Dave survives in two separate forms (BioDave and DigiDave), each of which stands in relation R to him. But I’d have to say this is quite a stretch, given that BioDave isn’t really some new entity. He’s simply the original Dave with a new name. The new name is unlikely to make an ontological difference.

Let’s now turn our attention to the optimistic argument. This one requires us to imagine a gradual uploading process. Fortunately, we’ve done this already so you know the drill: imagine that the subcomponents of the brain are replaced gradually (say 1% at a time), over a period of several years. It seems highly likely that each step in the replacement process preserves identity with the previous step, which in turn suggests that identity is preserved once the process is complete.

To state this in more formal terms:
(14) For all n < 100, Dave(n+1) is identical to Dave(n).
(15) If for all n < 100, Dave(n+1) is identical to Dave(n), then Dave(100) is identical to Dave.
(16) Therefore, Dave(100) is identical to Dave.
If you’re not convinced by this 1%-at-a-time version of the argument, you can adjust it until it becomes more persuasive. In other words, setting aside certain extreme physical and temporal limits, you can make the process of gradual replacement as slow as you like. Surely there is some point at which the degree of change between the steps becomes so minimal that identity is clearly being preserved? If not, then how do you explain the fact that our identities are being preserved as our body cells replace themselves over time? Maybe you explain it by appealing to the biological nature of the replacement. But if we have functionally equivalent technological analogues it’s difficult to see where the problem is.
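The logical skeleton of this gradual-replacement argument is a chain of identity claims glued together by the transitivity of identity. Writing D_n for Dave after the n-th replacement step (my notation, not Chalmers’s), the inference can be sketched as:

```latex
% Premise (14): each individual replacement step preserves identity.
\forall n < 100: \quad D_{n+1} = D_{n}
% Identity is transitive: if a = b and b = c, then a = c.
% Applying transitivity across all 99 links chains them together:
D_{100} = D_{99} = \cdots = D_{1} = D_{0}
% Hence the conclusion (16): the fully uploaded Dave is the original,
% i.e. D_{100} = D_{0}
```

Seen this way, premise (15) is just repeated transitivity, so a sceptic must either deny one of the individual links in (14) or deny that personal identity remains transitive across the whole chain, which is the same dialectical situation that arises in classic sorites arguments.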

Chalmers adds other versions of this argument. These involve speeding up the process of replacement. His intuition is that if identity is preserved over the course of a really gradual replacement, then it may well be preserved over a much shorter period of replacement too, for example one that takes a few hours or a few minutes. That said, there may be important differences when the process is sped up. It may be that too much change takes place too quickly and the new components fail to smoothly integrate with the old ones. The result is a break in the strands of continuity that are necessary for identity-preservation. I have to say I would certainly be less enthusiastic about a fast replacement. I would like the time to see whether my identity is being preserved following each replacement.

4. Conclusion

That brings us to the end of Chalmers’ contribution to the debate. He says more in his essay, particularly about cryopreservation, and the possible legal and social implications of uploading. But there is no sense in addressing those topics here. Chalmers doesn’t develop his thoughts at any great length and Pigliucci wisely ignores them in his reply. We’ll be discussing Pigliucci’s reply in part two.

~ John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics.
John blogs at Philosophical Disquisitions, and you can follow him on Twitter @JohnDanaher.

* * * * *

Chalmers vs Pigliucci on the Philosophy of Mind-Uploading (2): Pigliucci’s Pessimism

By John Danaher
Philosophical Disquisitions
Posted: Sept 20, 2014

This is the second and final part of my series about a recent exchange between David Chalmers and Massimo Pigliucci. The exchange took place in the pages of Intelligence Unbound, an edited collection of essays about mind-uploading and artificial intelligence. It concerned the philosophical plausibility of mind-uploading.

As we saw in part one, there were two issues up for debate:

The Consciousness Issue: Would an uploaded mind be conscious? Would it experience the world in a roughly similar manner to how we now experience the world?

The Identity/Survival Issue: Assuming it is conscious, would it be our consciousness (our identity) that survives the uploading process?
David Chalmers was optimistic on both fronts. Adopting a functionalist theory of consciousness, he saw no reason to think that a functional isomorph of the human brain would not be conscious. Not unless we assume that biological material has some sort of magic consciousness-conferring property. And while he had his doubts about survival via destructive or nondestructive uploading, he thought that a gradual replacement of the human brain, with functionally equivalent artificial components, could allow for our survival.
As we will see today, Pigliucci is much more pessimistic. He thinks it is unlikely that uploads would be conscious, and, even if they are, he thinks it is unlikely that we would survive the uploading process. He offers four reasons to doubt the prospect of conscious uploads, two based on criticisms of the computational theory of mind, and two based on criticisms of functionalism. He offers one main reason to doubt survival. I will suggest that some of his arguments have merit, some don’t, and some fail to engage with the arguments put forward by Chalmers.


1. Pigliucci’s Criticisms of the Computational Theory of Mind

Pigliucci assumes that the pro-uploading position depends on a computational theory of mind (and, more importantly, a computational theory of consciousness). According to this theory, consciousness is a property (perhaps an emergent property) of certain computational processes. Pigliucci believes that if he can undermine the computational theory of mind, then so too can he undermine any optimism we might have about conscious uploads.
To put it more formally, Pigliucci thinks that the following argument will work against Chalmers:

  • (1) A conscious upload is possible only if the computational theory of mind is correct.
  • (2) The computational theory of mind is not correct (or, at least, it is highly unlikely to be correct).
  • (3) Therefore, (probably) conscious uploads are not possible.
Pigliucci provides two reasons for us to endorse premise (2). The first is a — somewhat bizarre — appeal to the work of Jerry Fodor. Fodor was one of the founders of the computational theory of mind. But Fodor has, in subsequent years, pushed back against the overreach he perceives among computationalists. As Pigliucci puts it:

[Fodor distinguishes] between “modular” and “global” mental processes, and [argues] that [only] the former, but not the latter (which include consciousness), are computational in any strong sense of the term…If Fodor is right, then the CTM [computational theory of mind] cannot be a complete theory of mind, because there are a large number of mental processes that are not computational in nature. 

(Intelligence Unbound, p. 123)

In saying this, Pigliucci explicitly references Fodor’s book-length response to the work of Steven Pinker, called The Mind Doesn’t Work that Way: The Scope and Limits of Computational Psychology. I can’t say I’m a huge fan of Fodor, but even if I were I would find Pigliucci’s argument pretty unsatisfying. It is, after all, little more than a bare appeal to authority, neglecting to mention any of the detail of Fodor’s critique. It also neglects to mention that Fodor’s particular understanding of computation is disputed. Indeed, Pinker disputed it in his response to Fodor, which Pigliucci doesn’t cite and which you can easily find online. Now, my point here is not to defend the computational theory, or to suggest that Pinker is correct in his criticisms of Fodor; it is merely to suggest that appealing to the work of Fodor isn’t going to be enough. Fodor may have done much to popularise the computational theory, but he doesn’t have final authority on whether it is correct.
Let’s move on then to Pigliucci’s second reason to endorse premise (2). This one claims that the computational theory rests on a mistaken understanding of the Church-Turing thesis about universal computability. Citing the work of Jack Copeland — an expert on Turing, whose biography of Turing I recently read and recommend — Pigliucci notes that the thesis only establishes that logical computing machines (Turing Machines) “can do anything that can be described as a rule of thumb or purely mechanical (“algorithmic”)”. It does not establish that “whatever can be calculated by a machine (working on finite data in accordance with a finite program of instructions) is Turing-machine-computable”. This is said to be a problem because proponents of the computational theory of mind have tended to assume that “Church-Turing has essentially established the CTM”. 
I may not be well-qualified to evaluate the significance of this point, but it seems pretty thin to me. I think it relies on an impoverished notion of computation. It assumes that computationalists, and by proxy proponents of mind-uploading, think that a mind could be implemented on a classic digital computer architecture. While some may believe that, it doesn’t strike me as being essential to their claims. I think there is a broader notion of computation that could avoid his criticisms. To me, a computational theory is one that assumes mental processes (including, ultimately, conscious mental processes) could be implemented in some sort of mechanical architecture. The basis for the theory is the belief that mental states involve the representation of information (in either symbolic or analog forms) and that mental processes involve the manipulation and processing of the represented information. I see nothing in Pigliucci’s comments about the Church-Turing thesis that upsets that model. Pigliucci actually did a pretty good podcast on broader definitions of computation with Gerard O’Brien. I recommend it if you want to learn more.

In summary, I think Pigliucci’s criticisms of the computational theory are off the mark. Nevertheless, I concede that the broader sense of computation may in turn collapse into the broader theory of functionalism. This is where the debate is really joined.


2. Pigliucci’s Criticisms of Functionalism

I think Pigliucci is on firmer ground when he criticises functionalism. Admittedly, he doesn’t distinguish between functionalism and computationalism, but I think it is possible to separate out his criticisms. Again, there are two criticisms with which to contend. To understand them, we need to go back to something I mentioned in part one. There, I noted how Chalmers seemed to help himself to a significant assumption when defending the possibility of a conscious upload. The assumption was that we could create a “functional isomorph” of the brain. In other words, an artificial model that replicated all the relevant functional attributes of the human brain. I questioned whether it was possible to do this. This is something that Pigliucci also questions.
We can put the criticism like this:

  • (8) A conscious upload is possible only if we know how to create a functional isomorph of the brain.
  • (9) But we do not know what it takes to create a functional isomorph of the brain.
  • (10) Therefore, a conscious upload is not possible.
Pigliucci adduces two reasons for us to favour premise (9). The first has to do with the danger of conflating simulation with function. This hearkens back to his criticism of the computational theory, but can be interpreted as a critique of functionalism. The idea is that when we create functional analogues of real-world phenomena we may only be simulating them, not creating models that could take their place. The classic example here would be a computer model of rainfall or of photosynthesis. The computer models may be able to replicate those real-world processes (i.e. you might be able to put the elements of the models in a one-to-one relationship with the elements of the real-world phenomena), but they would still lack certain critical properties: they would not be wet or capable of converting sunlight into food. They would be mere simulations, not functional isomorphs. I agree with Pigliucci that the conflation of simulation with function is a real danger when it comes to creating functional isomorphs of the brain. 
Pigliucci’s second reason has to do with knowing the material constraints on consciousness. Here he draws on an analogy with life. We know that we are alive and that our being alive is the product of the complex chemical processes that take place in our body. The question is: could we create living beings from something other than this complex chemistry? Pigliucci notes that life on earth is carbon-based and that the only viable alternative is some kind of silicon-based life (because silicon is the only other element that would be capable of forming similarly complex molecule chains). So the material constraints on creating functional isomorphs of current living beings are striking: there are only two forms of chemistry that could do the trick. This, Pigliucci suggests, should provide some fuel for scepticism about creating isomorphs of the human brain:

[This] scenario requires “only” a convincing (empirical) demonstration that, say, silicon-made neurons can function just as well as carbon-based ones, which is, again, an exclusively empirical question. They might or might not, we do not know. What we do know is that not just any chemical will do, for the simple reason that neurons need to be able to do certain things (grow, produce synapses, release and respond to chemical signals) that cannot be done if we alter the brain’s chemistry too radically. 

(Intelligence Unbound, p. 125)
I don’t quite buy the analogy with life. I think we could create wholly digital living beings (indeed, we may even have done so), though this depends on what counts as “life”, which is a question Pigliucci tries to avoid. Still, I think the point here is well-taken. There is a lot going on in the human brain. There are a lot of moving parts, a lot of complex chemical mechanisms. We don’t know exactly which elements of this complex machinery need to be replicated in our functional isomorph. If we replicate everything, then we are just creating another biological brain. If we don’t, then we risk missing something critical. Thus, there is a significant hurdle when it comes to knowing whether our upload will share the consciousness of its biological equivalent. It has been a while since I read it, but as I recall, John Bickle’s work on the philosophy of neuroscience develops this point about biological constraints quite nicely.

This epistemic hurdle is heightened by the hard problem of consciousness. We are capable of creating functional isomorphs of some biological organs. For example, we can create functional isomorphs of the human heart, i.e. mechanical devices that replicate the functionality of the heart. But that’s because everything we need to know about the functionality of the heart is externally accessible (i.e. accessible from the third-person perspective). Not everything about consciousness is accessible from that perspective.

3. Pigliucci on the Identity Question

After his lengthy discussion of the consciousness issue, Pigliucci has rather less to say about the identity issue. This isn’t surprising. If you don’t think an upload is likely to be conscious, then you are unlikely to think that it will preserve your identity. But Pigliucci is sceptical even if the consciousness issue is set to the side. 
His argument focuses on the difference between destructive and non-destructive uploading. The former involves three steps: brain scan, mechanical reconstruction of the brain, and destruction of the original brain. The latter just involves the first two of those steps. Most people would agree that in the latter case your identity is not transferred to the upload. Instead, the upload is just a copy or clone of you. But if that’s what they believe about the latter case, why wouldn’t they believe it about the former too? As Pigliucci puts it:

If the only difference between the two cases is that in one the original is destroyed, then how on earth can we avoid the conclusion that when it comes to destructive uploading we just committed suicide (or murder, as the case may be)? After all, ex hypothesi there is no substantive differences between destructive and non-destructive uploading in terms of end results…I realize, of course, that to some philosophers this may seem far too simple a solution to what they regard as an intricate metaphysical problem. But sometimes even philosophers agree that problems need to be dis-solved, not solved [he then quotes from Wittgenstein]. 

(Intelligence Unbound, p. 128)
Pigliucci may be pleased with this simple, common-sensical solution to the identity issue, but I am less impressed. This is for two reasons. First, Chalmers made the exact same argument in relation to non-destructive and destructive uploading — so Pigliucci isn’t adding anything to the discussion here. Second, this criticism ignores the gradual uploading scenario. It was that scenario that Chalmers thought might allow for identity to be preserved. So I’d have to say Pigliucci has failed to engage the issue. If this were a formal debate, the points would go to Chalmers. That’s not to say that Chalmers is right; it’s just to say that we have been given no reason to suppose he is wrong.

4. Conclusion

To sum up, Pigliucci is much more pessimistic than Chalmers. He thinks it unlikely that an upload would be conscious. This is because the computational theory of mind is flawed, and because we don’t know what the material constraints on consciousness might be. He is also pessimistic about the prospect of identity being preserved through uploading, believing it is more likely to result in death or duplication. 
I have suggested that Pigliucci may be right when it comes to consciousness: whatever the merits of the computational theory of mind, it is true that we don’t know what it would take to build a functional isomorph of the human brain. But I have also suggested that he misses the point when it comes to identity. 


~ John Danaher holds a PhD from University College Cork (Ireland) and is currently a lecturer in law at NUI Galway (Ireland). His research interests are eclectic, ranging broadly from philosophy of religion to legal theory, with particular interests in human enhancement and neuroethics.
John blogs at Philosophical Disquisitions.
You can follow him on twitter @JohnDanaher.  

Donald Prothero - “Proof of Heaven”? (2 Years Later)

Two years after "Dr." Eben Alexander's book claimed that he had died (he was actually in a coma - death is a little more permanent) and gone to heaven, he is boasting about a new book called Map of Heaven. Seems like a good time to re-debunk the first book.

Donald Prothero, writing at Skeptic, offers up the high points (or is that low points?) of Alexander's fable as exposed by Esquire in 2013 (it will cost you $2.99 to read it).

“Proof of Heaven”?

Posted on Sep. 19, 2014 by Donald Prothero

It has been two years now since the best-seller lists in the “Non-Fiction” category were dominated by books claiming that the writer visited heaven, and then returned to write a book about it. The most famous was Dr. Eben Alexander’s tale, Proof of Heaven: A Neurosurgeon’s Journey into the Afterlife, which was released in October 2012 and featured on Dr. Oz, on Larry King Live, on Oprah, and on the cover of Newsweek. It sold over two million copies and had been on the best-seller list for 35 weeks as of July 2013; more recent sales figures are not available, but it is no longer near the top of the best-seller list. But almost two years since the book came out, a lot of interesting facts have emerged that make it seem less like a non-fictional account of heaven and more like a convenient fiction designed to get a doctor in trouble out of his predicament while, at the same time, making him filthy rich and immune to criticism from the scientific and medical community. Now he has a website to suck in more readers, and is bragging about his next book to come out soon, called Map of Heaven.

The basic story is that Alexander, a neurosurgeon, was infected by a virulent strain of bacteria that caused meningitis, and was put in intensive care for seven days in 2008. Doctors also used drugs to induce a coma, which shuts down part of the brain. After his infection had subsided, he awoke from the coma, sure that he had experienced heaven. He gave an elaborate account of it, which takes up most of the book, complete with descriptions of millions of butterflies, and of seeing his late sister in a peasant dress and having a conversation with her. He asserts that he was medically dead during this time, that his cerebral cortex was shut down, and that he miraculously came back to life with a memory of a pleasant short trip to celestial paradise.

But soon after his book came out, investigations into his past were conducted. In a 2013 article called “The Prophet” (paywall), Esquire contributing editor Luke Dittrich dug up a lot of facts which suggest it may all have been a fable concocted to cash in on the widespread religious belief in heaven—a fable made all the more persuasive coming from the mouth of a neurosurgeon.
Here are some of the key points established by Dittrich (given here roughly as summarized by Jerry Coyne in his useful discussion of Dittrich’s piece):
  • After repeated lawsuits, Alexander temporarily or permanently lost his surgical privileges at two different hospitals. For example, as Dittrich wrote, “In August 2003, UMass Memorial suspended Alexander’s surgical privileges ‘on the basis or allegation of improper performance of surgery.'”
  • Alexander has been repeatedly accused of falsifying evidence related to his surgeries—a “court-documented history of revising facts,” in Dittrich’s description.
  • One of the key stories which begins Alexander’s book is a near-collision with another parachutist—supposedly Alexander’s first near-death experience, and his first “proof of heaven.”  As Alexander claimed in his book,
    I had reacted in microseconds… How had I done it? … I realize now that…as marvelous a mechanism as the brain is, it was not my brain that saved my life that day at all. What sprang into action the second Chuck’s chute started to open was another, much deeper part of me. A part that could move so fast because it was not stuck in time at all the way the brain and body are.
    But rather than revealing a profound cosmic truth, this event may not have happened at all. When Dittrich dug into the story, he found that Chuck, named in the book as the other parachutist involved, had no recollection of this aerial brush with death. Confronted with this discovery, Alexander claimed that he changed the other parachutist’s name to “Chuck,” supposedly for legal reasons.
  • Some elements of the book appear to be artistic embellishments, such as the “perfect rainbow” that greeted Alexander upon his return to full consciousness. This flourish seems to be ruled out by weather records.
  • Although Alexander claimed his coma was caused by bacterial meningitis, emergency room doctor Laura Potter told Dittrich that she induced Alexander’s coma medically to stabilize his condition. Contrary to Alexander’s claims, his brain was not inactive during the coma. As Dittrich notes, “a key point of his argument for the reality of the realms he claims to have visited is that his memories could not have been hallucinations, since he didn’t possess a brain capable of creating even a hallucinatory conscious experience.” However, Dr. Potter told Dittrich that Alexander was actually “conscious but delirious” during his days under sedation.
  • One of the crucial moments in Alexander’s tale is his claim that he clearly cried out to God just before going under. According to Dittrich, Dr. Potter
    … has no recollection of this incident, or of that shouted plea. What she does remember is that she had intubated Alexander more than an hour prior to his departure from the emergency room, snaking a plastic tube down his throat, through his vocal cords, and into his trachea. Could she imagine her intubated patient being able to speak at all, let alone in a crystal-clear way?
    “No,” she says.
Dittrich’s research paints an incredibly damning picture. As Coyne sums up, “the story looks like a sham, confected by a once-brilliant but now failed neurosurgeon who reclaims his time in the spotlight by pretending that he saw heaven.”

An even more scathing commentary was provided by Sam Harris, who has done research in neurophysiology and brain function. Harris first eviscerates Newsweek magazine for running the story uncritically and providing no skeptical or scientific second opinions. In his words:
Whether you read it online or hold the physical object in your hands, this issue of Newsweek is best viewed as an archaeological artifact that is certain to embarrass us in the eyes of future generations. Its existence surely says more about our time than the editors at the magazine meant to say—for the cover alone reveals the abasement and desperation of our journalism, the intellectual bankruptcy and resultant tenacity of faith-based religion, and our ubiquitous confusion about the nature of scientific authority. The article is the modern equivalent of a 14th-century woodcut depicting the work of alchemists, inquisitors, Crusaders, and fortune-tellers. I hope our descendants understand that at least some of us were blushing.
Harris then goes on to carefully dissect Alexander’s claims, especially the assertion that his cerebral cortex was “shut down” or “inactivated.” That claim is not based on an fMRI, EEG, PET scan, or any other test that could tell whether his cerebral cortex was inactive, but only on CT scans, which tell you nothing about the activity within the cerebral cortex. If Alexander is such a great neurosurgeon, why doesn’t he know this?

Harris consulted Dr. Mark Cohen, a neurophysiologist at UCLA Medical Center, who pointed out the obvious problems with Alexander’s account:
As you correctly point out, coma does not equate to “inactivation of the cerebral cortex” or “higher-order brain functions totally offline” or “neurons of [my] cortex stunned into complete inactivity”. These describe brain death, a one hundred percent lethal condition. …
We are not privy to his EEG records, but high alpha activity is common in coma. Also common is “flat” EEG. The EEG can appear flat even in the presence of high activity, when that activity is not synchronous. For example, the EEG flattens in regions involved in direct task processing. This phenomenon is known as event-related desynchronization (hundreds of references).
As is obvious to you, this is truth by authority. Neurosurgeons, however, are rarely well-trained in brain function. Dr. Alexander cuts brains; he does not appear to study them. “There is no scientific explanation for the fact that while my body lay in coma, my mind—my conscious, inner self—was alive and well. While the neurons of my cortex were stunned to complete inactivity by the bacteria that had attacked them, my brain-free consciousness …” True, science cannot explain brain-free consciousness. Of course, science cannot explain consciousness anyway. In this case, however, it would be parsimonious to reject the whole idea of consciousness in the absence of brain activity. Either his brain was active when he had these dreams, or they are a confabulation of whatever took place in his state of minimally conscious coma.
There are many reports of people remembering dream-like states while in medical coma. They lack consistency, of course, but there is nothing particularly unique in Dr. Alexander’s unfortunate episode.
So, if we add all this up, we have a neurosurgeon who makes fundamental mistakes about how the brain works, because he is not a neuroscientist or neurophysiologist—and that is a BIG difference. On top of this, he has a history of falsifying records and was in trouble with numerous malpractice suits, so his medical career was effectively over. And when Dittrich checked with other people, many important details in the book turned out to be clearly false.

This does not seem to trouble Alexander or any of his followers who want to believe him. They, like so many others, are willing to be duped out of their money for the book and make him rich, all while he tells them fairy stories to confirm their beliefs and make them feel good. It wouldn’t be the first time some religious figure separated people from their money—but perhaps the first time it was done by a neurosurgeon in a white lab coat.

Dr. Donald Prothero taught college geology and paleontology for 35 years, at Caltech, Columbia, and Occidental, Knox, Vassar, Glendale, Mt. San Antonio, and Pierce Colleges. He earned his B.A. in geology and biology (highest honors, Phi Beta Kappa, College Award) from University of California Riverside in 1976, and his M.A. (1978), M.Phil. (1979), and Ph.D. (1982) in geological sciences from Columbia University. He is the author of over 35 books.

Friday, September 19, 2014

Ig Nobel Prizes 2014: From Jesus on Toast to Baby Faeces in Sausages

As I often wonder when I read these "studies," how the hell does anyone think up this stuff?

Via The Conversation:

Ig Nobel prizes 2014: from Jesus on toast to baby faeces in sausage

19 September 2014


Akshat Rathi
- Science and Data Editor at The Conversation

Flora Lisica
- Assistant Section Editor at The Conversation

Lord Toast. Catarina Mota, CC BY-NC-SA

The 24th Ig Nobel prizes were announced on September 18. The prizes annually honour scientific research that “first makes people laugh and then makes them think.”

The ceremony was food-themed, including competitions such as the Win-a-Date-With-a-Nobel-Laureate Contest. The awards for individual categories were presented at Harvard University by “a group of genuine, genuinely bemused Nobel Laureates.”

And the winners are:

Physics

The prize went to Kiyoshi Mabuchi of Kitasato University for his work “measuring the amount of friction between a shoe and a banana skin, and between a banana skin and the floor, when a person steps on a banana skin that’s on the floor”. Also tested were apple peels and orange skin – found to be less dangerous. Apparently the banana peels form a sugary gel under pressure that makes them more slippery. No humans were injured during the experiment.

Psychology

Creatures of the night are, on average, “more self-admiring, more manipulative and more psychopathic” than people who habitually wake up early in the morning, according to Peter Jonason of the University of Western Sydney and colleagues. More specifically, the team showed that people with the Dark Triad set of personality traits – narcissism, psychopathy, and Machiavellianism – would do well after dark, because people would generally pay less attention to their manipulations.

Public health

Researchers from the US, India, Japan, and the Czech Republic shared the prize “for investigating whether it was mentally hazardous for a human being to own a cat.” Cats rule the internet, but these researchers revealed that owning a pet cat is associated with some personality changes, including lowered intelligence in men and reduced feelings of guilt in women. Two different teams won this prize. One focused on a cat-borne parasite that can infect humans and is known to manipulate the behaviour of its victims; the second looked at whether depression correlated with being bitten by cats.

Neuroscience

Kang Lee at the University of Toronto and colleagues bagged the neuroscience prize “for trying to understand what happens in the brains of people who see the face of Jesus in a piece of toast.” Their results show that this behaviour is quite normal: the human brain is wired to quickly recognise faces in anything with the slightest suggestion of one, whether on a dog’s behind or on a piece of naan bread. To find the regions of the brain involved, the researchers created a set of images of random noise, put participants in an MRI scanner, and told them that half of the images contained a face (in a separate study, participants were told the images contained a letter). Over a third of the time, the subjects thought they saw a face.

Biology

After more than 5,000 observations, Vlastimil Hart of the Czech University of Life Sciences and colleagues found that dogs prefer to align themselves with the Earth’s north-south magnetic field while urinating and defecating. Its nomination for an Ig Nobel was probably expected as soon as the paper was published. The researchers concluded that their findings forced “biologists and physicians to seriously reconsider effects magnetic storms might pose on organisms.” And no doubt on those who have to clean up after them.

Art

The aesthetics of paintings have been a subject of scholarly interest for hundreds of years. Now Marina de Tommaso of the University of Bari and her colleagues have won an Ig Nobel prize for getting quantitative about it. The prize was given “for measuring the relative pain people suffer while looking at an ugly painting, rather than a pretty painting, while being shot [in the hand] by a powerful laser beam”. They found that the perception of pain can change depending on the aesthetic content of the painting. Technically, however, they couldn’t tell whether the art was altering the perception of the pain from the laser, or whether the pain was an additive effect of looking at a painfully ugly piece of art while the laser was on.

Medicine

Sonal Saraiya of Michigan State University and her colleagues won the Ig Nobel prize in medicine for developing nasal tampons made from bacon. Their use is specifically for Glanzmann Thrombasthenia, a blood disorder which can lead to “uncontrollable nosebleeds.”

Arctic Science

Reindeer aren’t safe in Norway. Eigil Reimers and Sindre Eftestøl of the University of Oslo noticed that polar bears were stalking them. To find out whether the reindeer were able to respond to the threat from bears, Reimers and Eftestøl had people approach the reindeer, and, to make the experimental and control conditions as similar as possible, had some of the humans dress in polar bear costumes. The results showed that the reindeer ran twice the distance when they saw a person in a polar bear costume.

Nutrition

When you think of healthy eating, you probably don’t think of sausages. But that may change, thanks to the amazing power of baby poop. With noses held tight, a team of medical researchers obtained bacteria from the faeces of infants, then tested which ones could both help to ferment sausages and also pass through the stomach to take up residence in the guts. Their finding could ultimately lead to probiotic sausages.

While the world loves the Ig Nobels, not all scientists take it in the same spirit. Peter Stahl, from the University of Victoria and a previous prizewinner, said there was a session at the end of the event where researchers are able to discuss the scientific aspects of their work. “But they could do a lot more to give people the context in which the science being mocked was done,” Stahl said.

Harry Flint, from the University of Aberdeen, said a certain amount of negativity was attached to the prize and the hard work by scientists. “Most scientists won’t want to be an award winner of the Ig Nobel Prize,” he said.

Rajita Sinha: The Stressed Brain: Hijacking Cognition, Emotion, Behavior, and Health

The video below showed up in my feed a couple of days before the article I'm sharing on how stress generates enzymes that attack the brain. Taken together, they highlight how destructive stress can be to our brains and our cognitive function.

How stress tears us apart: Enzyme attacks synaptic molecule, leading to cognitive impairment

Date: September 18, 2014
Source: Ecole Polytechnique Fédérale de Lausanne
Carmen Sandi's team at EPFL discovered an important synaptic mechanism in the effects of chronic stress. It causes the massive release of glutamate which acts on NMDA receptors, essential for synaptic plasticity. These receptors activate MMP-9 enzymes which, like scissors, cut the nectin-3 cell adhesion proteins. This prevents them from playing their regulatory role, making subjects less sociable and causing cognitive impairment. Credit: EPFL  
Why is it that when people are too stressed they are often grouchy, grumpy, nasty, distracted or forgetful? Researchers from the Brain Mind Institute (BMI) at EPFL have just highlighted a fundamental synaptic mechanism that explains the relationship between chronic stress, the loss of social skills, and cognitive impairment. When triggered by stress, an enzyme attacks a synaptic regulatory molecule in the brain. The work was published in Nature Communications.

Carmen Sandi's team went looking for answers in a region of the hippocampus known for its involvement in behaviour and cognitive skills. There, the scientists focused on a molecule, the nectin-3 cell adhesion protein, whose role is to ensure adherence, at the synaptic level, between two neurons. Positioned in the postsynaptic part of the synapse, these proteins bind to molecules on the presynaptic side, thus ensuring synaptic function. The researchers found, however, that in rat models of chronic stress, nectin-3 molecules were significantly reduced in number.

The investigations led the researchers to an enzyme involved in protein degradation: MMP-9. It was already known that chronic stress causes a massive release of glutamate, a molecule that acts on NMDA receptors, which are essential for synaptic plasticity and thus for memory. What the researchers have now found is that these receptors activate MMP-9 enzymes which, like scissors, literally cut the nectin-3 cell adhesion proteins. "When this happens, nectin-3 becomes unable to perform its role as a modulator of synaptic plasticity," explained Carmen Sandi. In turn, these effects lead subjects to lose their sociability, avoid interactions with their peers, and suffer impaired memory or understanding.

The researchers, in conjunction with Polish neuroscientists, were able to confirm this mechanism in rodents both in vitro and in vivo. By means of external treatments that either activated nectin-3 or inhibited MMP-9, they showed that stressed subjects could regain their sociability and normal cognitive skills. "The identification of this mechanism is important because it suggests potential treatments for neuropsychiatric disorders related to chronic stress, particularly depression," said Carmen Sandi, member of the NCCR-Synapsy, which studies the neurobiological roots of psychiatric disorders.

Interestingly, MMP-9 expression is also involved in other pathologies, such as neurodegenerative diseases, including ALS or epilepsy. "This result opens new research avenues on the still unknown consequences of chronic stress," concluded Carmen Sandi, the BMI's director.

Story Source:
The above story is based on materials provided by Ecole Polytechnique Fédérale de Lausanne. Note: Materials may be edited for content and length.

Journal Reference:

Michael A. van der Kooij, Martina Fantin, Emilia Rejmak, Jocelyn Grosse, Olivia Zanoletti, Celine Fournier, Krishnendu Ganguly, Katarzyna Kalita, Leszek Kaczmarek, Carmen Sandi. (2014). Role for MMP-9 in stress-induced downregulation of nectin-3 in hippocampal CA1 and associated behavioural alterations. Nature Communications; 5: 4995 DOI: 10.1038/ncomms5995
The article referenced here is open access, but it is highly technical. For those who want to read more, I am including the Discussion section below the video (at the bottom of the page).

* * * * *

Rajita Sinha: The Stressed Brain

Published on Sep 16, 2014

A Stockholm Psychiatry Lecture given by Professor Rajita Sinha, Yale University, at Karolinska Institutet, Aug 27, 2014. Title of the lecture: The Stressed Brain: Hijacking Cognition, Emotion, Behavior and Health.
* * * * *

Here is the discussion section of the article summarized above.


We tested the hypothesis that MMP gelatinase activity is involved in key proteolytic processing events induced by chronic stress in a hippocampal subfield-dependent manner and in connection with behavioural changes. We show that chronic stress leads to a CA1-specific reduction in the perisynaptic expression of nectin-3 and found that this reduction is critically involved in the stress-induced deficits in social exploration, social recognition and CA1-dependent cognition. Interestingly, we found increased MMP-9-related gelatinase activity in the hippocampal CA1 in chronically stressed animals and could show that MMP-9 itself cleaves recombinant nectin-3, a process mediated via the NMDA receptor. Consistently, intra-CA1 administration of either an MMP-9 inhibitor or an NMDA receptor antagonist during stress exposure prevented the development of stress-induced deficits in social exploration, social memory and CA1-dependent cognition. Our findings highlight a fundamental role for MMP-9 in the effects of chronic stress on brain function and behaviour.

Nectins are emerging as both targets (refs 24, 43) and mediators (ref. 25) of stress actions in hippocampal-dependent memory and structural plasticity. We found molecular-, regional-, cellular compartment- and stress duration-dependent changes, with reduced nectin-3 expression after 21 days, but not 1 day, of restraint stress in the CA1 synaptoneurosomal, but not the total, fraction. This was paralleled by deficits in several social behaviours and in a CA1-dependent cognitive task. Our results from cell culture experiments suggested that NMDA receptor activation during stress exposure might be implicated in the cleavage of nectin-3 in CA1 and its associated behavioural alterations. Previous work has implicated NMDA receptor activation in chronic stress-induced structural alterations in the hippocampus (refs 12, 44, 45). Our in vivo study involving the pharmacological administration of the NMDA receptor antagonist MK-801, either systemically or directly into the CA1 region, confirmed that this treatment prevented the stress-induced reduction of nectin-3 expression in the CA1 synaptoneurosomal fraction as well as the behavioural impairments induced by stress in the sociability and temporal order tasks.

Using AAV-induced OE of nectin-3 either in the whole hippocampus or specifically in the CA1 area, we obtained evidence for a causal role of nectin-3 reduction in chronic stress-induced behavioural alterations, with the exception of the aggressive phenotype. We confirmed that the effects of nectin-3 OE were not due to altered physiological responses to the stress procedure (for example, body weight changes or corticosterone responses) or to changes in anxiety or locomotion. We found increased nectin-1 expression associated with AAV-nectin-3 OE throughout the hippocampus, consistent with evidence in knockout mice indicating that downregulation of either nectin-1 or nectin-3 induces a parallel decrease in the levels of the other nectins in the hippocampus (ref. 46). Synaptophysin levels were not changed by nectin-3 OE and/or chronic stress, which is in line with findings described for nectin-3 knockout mice (ref. 46). In addition, using the same chronic restraint stress protocol as described here, changes in the size of postsynaptic densities, but not in synaptic density, were observed in the CA1 (ref. 5). Interestingly, consistent with evidence that nectins recruit cadherins to cooperatively promote cell adhesion (ref. 47), we found a reduction in CA1 perisynaptic N-cadherin levels. The specificity of these molecular changes in CA1 was supported by a lack of significant changes in the stressed animals’ synaptoneurosomal compartment of SynCAM-1 in the same brain region. To verify that the molecular changes specifically observed in CA1 were associated with well-established CA1-dependent behaviours, we tested animals in the temporal order task, which is sensitive to CA1, but not to CA3, lesions (ref. 40). With regard to region specificity, our findings for CA1 are in contrast with recent evidence in mice showing reduced nectin-3 expression in CA3 (refs 24, 25). This disparity may be attributed to differences in the animal species or stress procedures.

MMPs are a family of proteolytic enzymes that degrade components of the extracellular matrix and cleave specific cell-surface proteins (ref. 48), making them particularly suitable to sustain neural remodelling processes (ref. 15). The degradation of cell adhesion molecules is one of the main mechanisms whereby MMPs affect neural plasticity (refs 9, 22), and the synapse-associated nectin-3 decrease suggested the potential involvement of proteolytic processing. Nectin-1 has been shown to undergo ectodomain shedding by alpha-secretase (ref. 32); however, the molecular players involved in nectin-3 shedding remained unknown.

We found that the decreased nectin-3 expression in the hippocampal CA1, but not CA3, synaptoneurosomes of the stressed animals is accompanied by increased gelatinase activity. This suggested an increase in MMP-2 and/or MMP-9 activity, as these two MMPs are the most prominent gelatinases expressed in the brain. Our cell culture experiments also indicated that NMDA receptor stimulation leads to increased nectin-3 proteolytic cleavage that is MMP-9 dependent. The involvement of MMP-9 and not MMP-2 is consistent with a previous study showing that MMP-2 does not interact with nectin-3 (ref. 49). Furthermore, we provide direct evidence that MMP-9 cleaves recombinant nectin-3. Interestingly, MMP-9 cleaves several postsynaptic proteins involved in trans-synaptic adhesion via their interaction with presynaptic proteins. The list of such MMP-9 targets includes β-dystroglycan, which binds to neurexins (ref. 42), as well as neuroligin-1, which also binds neurexins (ref. 50). Our findings are in line with previous reports implicating hippocampal MMP-9 in changes in dendritic spine morphology (ref. 51) as well as in the cellular processes that contribute to a stressful learning task (ref. 20). Importantly, we show that intra-CA1 treatment with a specific MMP-9 inhibitor prevented the emergence of chronic stress-induced effects on social exploration and CA1-dependent cognition. Therefore, our results are consistent with other findings that indicate a crucial role for extracellular proteolysis in stress-induced behavioural alterations, with former studies highlighting the role of serine proteases, including tissue-plasminogen activator (ref. 12) and neuropsin (ref. 10).

Although deregulated social behaviour is a hallmark of many psychiatric disorders (ref. 52), research on the link between chronic stress and psychopathology has mainly concentrated on mood and cognition (refs 2, 7), whereas the effects of stress on social behaviours are much less well known. In agreement with our previous study (ref. 8), we confirm here that chronic restraint stress for 21 days leads to clear alterations in the social domain, including reduced sociability, impaired social memory and increased aggressive behaviours. The hippocampus has been implicated in social behaviours both in rodents (ref. 53) and in humans (ref. 54). Consistent with our findings, social recognition in rats was disrupted by CA1 damage (ref. 55). However, although the effects of stress on sociability and social memory were rescued by nectin-3 OE, increased aggressive behaviours were not modified by this treatment. We have recently found that targeting neuroligin-2 expression or function in the hippocampus alters aggressive behaviour (refs 8, 56), suggesting the involvement of the hippocampus in the regulation of aggression. However, it should be noted that those treatments were not confined to the CA1 area, which, on its own, might not modulate aggressive behaviours.

In summary, our findings identify a key role for MMP-9 proteolytic processing of nectin-3 in the hippocampal CA1, through a mechanism that engages NMDA receptors, among the processes leading to chronic stress-induced changes in social and cognitive behaviours. In addition to nectin-3, recently identified as a potential mediator in stress-related disorders (ref. 25), our study highlights MMP-9 activity as a novel target for the treatment of stress-related neuropsychiatric disorders, in particular depression, which is typically characterized by deficits in the social and cognitive domains.