The result is really interesting, and I would love to read more when they reach n=5. (Or even n=1 on a reasonably normal brain.)
But Crick's notion sounds a lot like what Dennett derisively calls the "Cartesian Theater", the tendency to imagine consciousness as a little room where a tiny homunculus watches the screens on which all sensory data is displayed: http://en.wikipedia.org/wiki/Cartesian_theater
In Dennett's view, consciousness is basically a distributed system. There is no single place where consciousness "really happens". That a single electrode can disrupt consciousness doesn't suggest otherwise. If a backhoe hitting a cable junction triggered an AWS failure, we wouldn't say that the cable junction is where AWS "really happens".
I would assume these researchers are well aware of such problems. I still think it's very important to research it.
Take a signal, say a certain sound that evokes an experience; we can even ask the person to do some conscious processing based on the sound. The sound could be a question like "What is the color of the sky?"
We can follow the signal: first some mechanical preprocessing steps in the ear, then some neural preprocessing steps; the signal branches out, lots of different brain areas are probably activated, and then some fine motor postprocessing steps are done so that the person finally answers "blue".
I think it would be premature to say that all the trivial, very low-level preprocessing steps in the ear are just as relevant a part of the conscious experience of hearing the question as the understanding of the question is.
Say, in a stretched analogy: if someone wanted to understand how a computer can calculate ray tracing or run a compression algorithm, it would not be that important to understand how on-die caches or PCI Express lanes work. The important bit is to understand how instructions cause the ALUs to wrangle the bits in registers; that's the core of the magic, and most of the other stuff is relatively trivial pre- and postprocessing.
So in this sense I think the question is well posed.
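To make that framing concrete, here's a minimal toy sketch of the pre/core/post decomposition (Python; every name in it is invented for illustration and taken from neither the article nor any real model):

    # The claim above: only the "core" step is really interesting for
    # conscious experience; the rest is plumbing.
    def preprocess(raw_signal):
        # stands in for the mechanical/neural preprocessing in the ear
        return "what is the color of the sky?"

    def core_processing(question):
        # stands in for whatever understanding/experiencing actually is
        known_answers = {"what is the color of the sky?": "blue"}
        return known_answers.get(question, "no idea")

    def postprocess(answer):
        # stands in for the fine motor steps that produce speech
        return 'person says: "' + answer + '"'

    print(postprocess(core_processing(preprocess("raw sound wave"))))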
There are also practical implications. Think what it would mean if we could have reliable anesthesia (or at least the hypnosis part of it; you might still need painkillers and muscle relaxants). We might have very few side effects. We would not have to give big doses just to be safe. This could mean much faster recovery after anesthesia.
The cost of anesthesia would also drop immensely if it could be controlled even better than it is now. It costs a lot to stay in a hospital, and if you're unconscious you have to be taken care of, again tying up people...
Ketamine is indeed safer than many other anesthetics in terms of the risk of respiratory depression, which is why it is a common choice for veterinary anesthesia. However, it's not uniformly better than more conventional anesthetics. Its half-life is substantially longer than that of propofol, so it actually takes longer to recover from. There is also some evidence that repeated administration of ketamine can produce brain damage, although it's unclear whether this is clinically relevant.
As long as we're on the topic of targeting the claustrum specifically with pharmaceutical agents, I'll point out that the claustrum has a particularly high density of kappa opioid receptors, which are the target of salvinorin A, the active constituent of the psychotropic plant Salvia divinorum [1]. Of course we have no evidence the plant's effects are specifically related to its action in the claustrum.
[1] Smythies, J., Edelstein, L., & Ramachandran, V. (2012). Hypotheses relating to the function of the claustrum. Frontiers in Integrative Neuroscience, 6. doi:10.3389/fnint.2012.00053
The claustrum is suspected to be a key junction that ties together different systems, making coordination between systems possible. This possibility is suggested by the claustrum's seemingly unique anatomical situation--so many other regions run into it, like no other structure.
If the claustrum is a junction connecting myriad systems, enabling coordinated forms of mental life, possibly consciousness, then in a sense, it could really be a key to consciousness.
Calling this critical juncture key to consciousness is not saying the claustrum is a theater where a homunculus watches everything come together, or that consciousness happens in the claustrum.
Consciousness is a state of the brain where disparate systems interact, and the claustrum may be key to explaining that coordination in consciousness.
Borrowing the car analogy in the article, the ignition system isn't where a car "runs". Nonetheless, the ignition is key to explaining how everything comes together to put the car in a running state. I think people are misreading what the investigator meant when he suggested the claustrum might be key to explaining the conscious state.
Am I wrong here? Somebody quote me the passage I'm missing where the investigators say the claustrum is where consciousness happens, rather than just saying that the claustrum may be key to understanding how consciousness happens.
The sentence in the article that reminded me of Dennett was this one: "Crick was working on a paper that suggested our consciousness needs something akin to an orchestra conductor to bind all of our different external and internal perceptions together."
Thanks for sourcing. I guess it's not your fault for reading the author's anthropomorphized metaphor and thinking "homunculus", particularly if you have no familiarity with Crick and Koch's project. In any case, while relating one anthropomorphic metaphor to another is not bad analogy-making, the analogy doesn't in fact correspond to what Crick and Koch are looking for. They do believe consciousness has a unified quality, where the inputs of disparate systems form a gestalt. The problem is explaining how all the input from so many systems coordinates. Functionally, there must be a juncture that makes coordination possible--hence the "conductor" metaphor--but how, and where? As it happens, there is a piece of anatomy that looks like a physical nexus--the claustrum.
Well, the cable junction is also part of what really happens.
Not quite as much. If the cable junction fails, someone downstream of that junction won't have access to the calculations, but someone upstream of it still will. Whereas if the clock in a CPU fails, there are no calculations happening anywhere.
As a classically trained scientist, I fail to see how this can be newsworthy, given how big a revelation this would be to broader scientific discovery and how statistically invalid this claim is at this point.
However, as legend would have it, psychologist B.F. Skinner was famous for performing his experiments on a single cat/dog. When prodded by other psychologists about how he could claim his results were valid when he had only tested one subject, he would say, "Well, bring me your cat."
It's hardly statistically invalid. This wasn't a drugs trial. They noted an effect on a subject from a direct stimulus and then repeated the experiment numerous times with variations that corroborated their finding. Obviously the next step is to find another test subject but I think it's pretty obvious why this is newsworthy.
Howdy! I definitely understand how a lot of neurological progress has happened with single person experiments.
I guess my main quibble is how matter-of-factly this article expresses the finding!
Then again, I may just be arguing over semantics and not taking the gist of the finding. I appreciate your simple note to remind me of the fact that the finding is the cool thing, not the words used when being reported!
I'm only looking at this field from afar, as an interested member of the public, but I seem to remember reading that the brain can actually reorganize so that an impaired zone's functionality is transferred to other regions, to compensate.
If I'm not mistaken here, doesn't this make using a single subject even more of a problem in neuroscience than in other scientific experiments?
Even in other fields, if you were just 'exploring', i.e. you didn't know what controlled experiment to perform, you would run sample-size-1 experiments in the hope of generating a testable hypothesis. That doesn't mean you can draw conclusions from this study, but it does introduce an intriguing option for a future study with controls and a larger sample size.
Well, as soon as you admit more than one consciousness into the world, you're already on shaky ground. And allowing one consciousness is perhaps being charitable.
Not that I am arguing against other minds, mind you. It's rather that the starting point of classical science doesn't really get one anywhere in regard to consciousness. We are always performing an induction from the one consciousness we have privileged access to for any claim about others.
What brudgers is saying is that there is no proof that anyone is conscious but yourself. Sure, you can experience love and feelings that lead you to believe that this partner of yours is conscious, and that other people are... But ultimately, there is a HUGE difference and leap of assumption between consciousness and reactiveness. We know animals are reactive. We only know that our own self is conscious.
Obviously these are Matrix-type thoughts. I am just trying to explain brudgers' point.
That's the direction a philosophical skeptic might head. But my point is more rooted in language; in 20th century Cambridge rather than 18th century Edinburgh - in Wittgenstein not David Hume.
Once we start talking about consciousness we're outside the realm of Newton's billiard balls and calculus. We're into psychology and navigating a linguistic sea full of terminology that owes more to Chaucer than Roger Bacon. A call for a larger sample size isn't going to give us a mathematical demonstration of any property of consciousness - it's just going to give us a claim with a tighter confidence interval. It's going to make us feel better about our beliefs, not show why they are correct.
I guess my point is that psychology is disjoint from 'classical science' simply because we cannot abstract its subject matter out of ordinary language and into mathematics. That doesn't mean we can't investigate it, just that we need to recognize the limitations imposed by the tools at our disposal. In programming terms, it's a ball of mud. Standardizing test conditions for investigating consciousness probably requires assumptions about states of consciousness up front, e.g. sleeping, drowsy, awake, alert, and distracted. It probably relies on our nonscientific intuitions as well - trees and earthworms are right out, and we're a long way from having the tools to investigate what it is like to be a bat in a manner that feels continuous with the scientific method as applied in a field such as chemistry.
It's bloody stupid, is what it is. If you understood how your own consciousness works on a scientific level, you would know exactly what evidence to check for in other people to know whether they're conscious or not. You could perform a simple medical test to find out if someone's a p-zombie (hint: they definitely aren't).
You clearly aren't very introspective; too bad. But that doesn't warrant calling the idea bloody stupid. You cannot verify consciousness, only reactiveness. Maybe you are confusing medical consciousness with consciousness in the real sense. Medical consciousness only means reactiveness; it's a misnomer.
'It's only appearances' skepticism always includes "to me". It only works by induction on the skeptic's solipsism. The difficulty of your position is compounded by HN's interface - even the claim "it appears to me that you are made of meat" is implausible.
Even with solipsism you can easily say that I'm made of meat. On the other hand, saying your consciousness is entirely meat-based is hugely assumptive. You will not know until you are dead whether there's something more.
Conscious in a medical sense, where the brain acts as if it has separate consciousness, yes. But that's not what this problem is about.
Rather, it is about consciousness in the philosophical sense, which starts with the question of whether anything outside of your own consciousness even exists and, if we posit the existence of a real world, whether other seemingly thinking, self-aware, conscious entities exist that are able to experience consciousness.
Since we don't know what gives rise to this form of consciousness, it is not evident whether or not there will ever be any evidence to determine whether they are conscious or not in that sense.
I believe Eli is coming from a position of: reality is made up of things that can be studied, and rules that are universal, and those things and rules are what gives rise to consciousness in me. If I understand the process that gives rise to consciousness in me, I can look at other people's brains and see whether the process would apply in them as well.
"Consciousness in the philosophical sense", to the extent that it's a meaningful thing to talk about, is part of reality.
>"Consciousness in the philosophical sense", to the extent that it's a meaningful thing to talk about, is part of reality.
Stronger statement: "consciousness in the philosophical sense" is either part of reality, or a meaningless construct invented by philosophers to justify metaphysical speculations, thus obtaining job security by having a permanent claim that some phenomenon actually exists that can never be reduced to science.
> If you understood how your own consciousness works on a scientific level, you would know exactly what evidence to check for in other people
Leaving aside the difficulties in unpacking the idea of scientific introspection under the classical rubric of experiment as observation: once one decomposes consciousness to where a 'scientific level' can be extracted, there's a corpse on the table, not a patient. The whole gist of consciousness is that it's unified, and once we admit a distinct 'scientific level' we ought to own up to what we have done and say "by consciousness I don't mean what is ordinarily meant, but instead I mean exactly 'x, y, z', and therefore my claims are not about consciousness in general but about this special definition."
And there's nothing wrong with that, and it might be useful.
To continue with the above thread, wouldn't Wittgenstein's response to this be something along the lines of: if scientifically dissecting consciousness results in a corpse, is it correct to say there was a body in the first place? Think of the question, "How does Helios pull the Sun across the sky?" After dissection, we resolve to question the question, not answer it.
See the "Mary's room" thought experiment. Thinking and reasoning about all the measurable properties of a phenomenon is way different than experiencing them. This doesn't necessarily mean that subjective perceptions have an immaterial existence, but it provides an approach to analyzing the mind that can't be achieved by physical measurement alone.
A better question would be "how can you try to understand consciousness without introspection?" Studying consciousness merely by performing brain scans and electroencephalograms, without asking the subject what she's experiencing, would surely provide a poor and incomplete perspective.
I think some readers might be mistaking this for a 'self-awareness on-off switch'.
The subject's state could be compared to somebody in certain phases of sleep or in a coma. Not the same as those, but quite close. A quote in the article describes her as being 'still awake', but I wonder what that means.
It's not like the subject became a zombie. According to the very brief description, she just became completely unresponsive, didn't do anything, and had no memory of what had happened.
Consciousness in the medical sense can be considered along axes of wakefulness and awareness. For example, in a coma the subject is neither aware nor awake.
In contrast, patients in a persistent vegetative state are awake, but not aware. It sounds like the researchers may have replicated this state of consciousness, which would imply that the claustrum is important in awareness, but perhaps not as critical in wakefulness, at least in this particular subject (keeping in mind that her brain is already abnormal in some sense due to epilepsy and neurosurgery that excised the left side of her hippocampus).
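To make that two-axis framing concrete, here's a tiny sketch (Python, purely illustrative; the labels are simplified and this is in no way a clinical tool):

    # Wakefulness and awareness vary separately; the combinations map
    # roughly onto familiar clinical labels (simplified, illustrative only).
    def rough_state(awake, aware):
        if awake and aware:
            return "ordinary consciousness"
        if awake and not aware:
            return "vegetative-like state (what the stimulation seems to have mimicked)"
        if not awake and not aware:
            return "coma"
        return "aware but not awake (arguably dreaming; the odd quadrant)"

    print(rough_state(awake=True, aware=False))
    print(rough_state(awake=False, aware=False))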
> A quote in the article describes her as being 'still awake', but I wonder what that means.
I suppose it means the observable physical and neurological processes that, as far as we know, define 'sleep' and 'awake': her eyes were open but with a blank stare, and so on. And since they said she did not exhibit neural signs of epilepsy, she was presumably wired to an EEG, so her brain patterns would have matched wakefulness rather than any phase of sleep.
That's all that could be inferred from the article, which is honestly quite thin and sensationalist.
Yes, it seems the claustrum integrates all your sensory input. I'd also really like to know more about the subjective experience of it being switched off.
The article repeatedly gives the impression of an abrupt change, by using words like 'switched' and 'stopped', but the description used later makes it sound like a more 'normal', gradual process.
> she gradually spoke more quietly or moved less and less until she drifted into unconsciousness.
That part of the process sounds very similar to someone undergoing anesthesia, or simply falling asleep. Though there are obvious differences.
So maybe that's what it feels like? Some people remember the feeling of going under, but maybe not the last few seconds of it.
"Koubeissi thinks that the results do indeed suggest that the claustrum plays a vital role in triggering conscious experience. "I would liken it to a car," he says. "A car on the road has many parts that facilitate its movement – the gas, the transmission, the engine – but there's only one spot where you turn the key and it all switches on and works together. So while consciousness is a complicated process created via many structures and networks – we may have found the key."
“We may have found the key” is a short phrase containing only Germanic-origin words, so it sounds “strong”; and it comes last, giving you serial-position bias.
According to Google[1], "key" in English is from an Old English word of unknown origin. Besides, a word doesn't have to come from a word of the exact same meaning in its origin language.
Evin* stated that Germanic words sound strong and that this is why I preferred this excerpt. Do you have anything to back that up? You don't know me at all, so in my opinion that statement is more speculative than the n=1 stumbled-upon discovery being discussed.
I actually didn't focus much on the end of this excerpt. And I should have added the following to my original comment - I actually homed in on this part:
"but there's only one spot where you turn the key and it all switches on and works together"
Immediately after I read this, I actually imagined the key turning off, and the car shutting down. This is precisely what the author/surgeon stated happened during the surgery.
Also, I too was going to point out that 'key' may come from the French 'clé'. Maybe an etymology dictionary says it comes from Old English, but English is of course partially derived from Old French as well as from Norse/Germanic languages [1].
Saying that I liked the metaphor because it contains Germanic words (like much of the English language) that occurred last is a nice theory. But what if I preferred softer-sounding words for some other reason and had a bias against Germanic words - say because I studied French in school, lived in France, or had grandparents who died horrifically during WWII?
I didn't downvote FYI.
But another theory as to why I liked it is that the passage instantly helped me understand what the author believes this incredibly complex system, the brain, is like - by using a simple rhetorical device called a metaphor.
On the important subject of whether or not consciousness is actually switched on and off in the claustrum, as a non-neuro-professional I have to grant his metaphor some credence and resolve to investigate further.
Hedging with “I think” unnecessarily weakens the presentation of an idea. If you actually believe it, why wouldn’t you present it as fact? Then if some evidence comes along to disprove you, you change your mind.
I don't see the importance of this study outside of the neuroscience field. They just found another way of making people stop reacting to external stimuli.
I've come to the conclusion that science will never be able to even define consciousness, let alone explain it.
For starters, if it can be induced externally without shoving wires into your brain, it could conceivably be a safer, more reliable form of surgical anesthesia. Even with "anesthesiologist" being a dedicated career path, botched anesthesia kills more people than we'd like to admit, plus there's the rare-but-horrific surgical awareness cases. Even if it can't be induced by (say) an external magnetic field, a minimally-invasive implant might be provided to a patient who's expected to need repeated surgeries.
Also, as the article points out, reversing the process could potentially be useful in awakening coma patients.
Of course all that's very speculative, and more research is called for, but that's how science is done.
Apart from being an increase in knowledge, it could have significant implications for our understanding of the experiences of non-human animals.
It appears that mice have a claustrum, and I would be very interested to see the effect of manipulating that area on their behaviour. If it has little effect, one might conclude that mice don't experience consciousness the way we do, for example, or the reverse.
Edit: here is an open access version of Crick & Koch's paper on the claustrum:
> I think the real problem is that people don't really want consciousness defined. They want to keep it as this magical, special thing.
I disagree - what makes you think so? The hard problem of consciousness is the real problem. Consciousness isn't a physical thing, that's why physics can't define it.
If you want to keep the assumption that consciousness and the physical world share causal interactions, consciousness must be a physical thing. How could consciousness and the physical world interact causally if consciousness is not part of the physical world?
The reason physics can't define consciousness isn't that consciousness isn't physical. Consciousness is just a part of the world that our current physics, and really our current philosophies, haven't developed satisfactory means to address.
Many physical phenomena, now explained by physics, were once unexplained by previous iterations of physics. Previous physics were just incomplete, until people came along with new ideas to expand what phenomena physics could explain. Doesn't mean those phenomena weren't physical things before physics evolved ways to understand them.
>Consciousness isn't a physical thing, that's why physics can't define it.
Says who? It is most certainly caused by physical neurons, which are built from particles following the laws of physics. In principle it is both understandable and explainable in terms of physics, though any explanation will likely use some other nomenclature (e.g. we don't describe weather in terms of the motions of quarks).
The first step is to understand consciousness and how it works. Once we understand how it works, it has a multitude of applications - not least in computer science (e.g. trying to create machines that display consciousness) and neuro-medicine (e.g. how do we wake people who are in a coma?).
> If so isn't this a case of the snake trying to eat it's own tail ?
Pretty much. Consciousness is one level above science in a way, so every attempt to only use science to explain consciousness will fail. Consciousness is a metaphysical concept, "meta" = an abstraction layer above.
Very interesting. Naturally I'm skeptical due to the small sample size and the fact that they seemingly only tested her auditory and motor responses. However, the apparent gradual dampening of her reactions is indeed intriguing; it'd be nice to see some sort of follow-up study.
If they indeed found an on/off switch, it'd be interesting to attempt to "kickstart" some coma victims. If it works, it'd be a major breakthrough.
Lots of neurology results are based on truly small sample sizes. You can't insert electrodes into brains or cut out brain parts at will, for ethical reasons, so you have to wait for a patient who needs an electrode inserted in just the right place for other reasons.
So, it may be a while before you see that follow-up study.
On a side note, the combination of antibiotics with fast recovery of injured soldiers has been very beneficial to neurology, as it meant that neurologists saw more living patients with horrific brain injuries.
It is unlikely that anybody would have found this in patients with traumatic injuries, though. A projectile damaging this area deep in the brain would likely take out lots of other areas too, and would be lethal even if one gets the patient to a hospital within minutes.
Auditory and motor responses, and a total cessation of internal thought and/or short-term memory formation that would provide any evidence of internal thought continuing during the stimulation.
But they seriously need to test that on someone with an intact hippocampus.
I'm not sure why there seem to be some negative comments; this is absolutely amazing - albeit a little scary, in a sci-fi, Clockwork Orange kind of way...
Additionally, consciousness gets commonly defined in philosophy of mind as the "likeness" of sensation and perception (i.e. it's "like" something to smell roses and it's "like" something to hear Mozart). So, this study cuts at the root of consciousness.
Consciousness is not found in one part. To use a software analogy, they have commented out the main() function - what makes the system what it is is all the code and its mutual interactions, but it won't run with that important function removed.
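A minimal sketch of that analogy in toy Python (everything below is invented for illustration, not from the article): all the code that makes the system what it is is still there, but with the entry point commented out, none of it ever runs.

    def perceive():
        # the subsystems still exist and still define what the system is...
        return "sensory input"

    def integrate(data):
        return "unified experience of " + data

    # def main():
    #     print(integrate(perceive()))
    #
    # main()  # ...but with the entry point commented out, nothing is ever invoked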
I'm doubtful we can really "prove" true unconsciousness exists (in the sense that subjective awareness completely ceases) until we have a much more advanced understanding of how experience arises in the brain. Right now the most we can say is "X results in an amnesic, non-responsive state". After all, if we rely on self-reporting, how can someone remember being unconscious if they have no memory of it?
Also, the article keeps saying the claustrum, but don't we have one in each hemisphere? Which one did the scientists stimulate, and what would happen if you stimulated the other one?
When people think about the future and artificial intelligence, they have all these crazy ideas about robots becoming conscious, like Skynet in Terminator or HAL in 2001: A Space Odyssey. But I suspect the opposite will be true. Things like the Human Brain Project in the EU, the BRAIN Initiative in the US, and other research are likely to explain human consciousness. I suspect it will answer long-standing questions about spirituality. And by the time we implement sophisticated AI, consciousness won't be a mystery.
Consciousness and AI have almost nothing to do with each other. Human beings only reason when awake and self-aware, but this is most likely a quirk of ours (i.e. the daemon `cognit` only runs at init level 3), not a fundamental feature of cognition.
They're pointing a zapper at different "areas" of someone's head and seeing what happens. That's what counts as brain science today. We have a long way to go.
Honestly, some of the best brain science ever came from accidental injuries removing some portion of the brain and scientists getting to study the difference. So being able to pick and choose now is groundbreaking. It's the equivalent of when genetics gained the ability to selectively knock out or insert genes (along with a fluorescent marker to know the change took).
Those things can be just as crude as each other. Genes (and proteins) can interact in very complex ways so don't be fooled into thinking that 'direct manipulation' of specific parts means you have any idea what's going on.
As a software analogy, think about a very large, complex, undocumented code base in a bizarre language you've never come across. Your only experimental tool is to pick a line of code, delete or modify it, and then see what happens. As you learn more, you can refine your experiments, but basically you're still just 'poking it with a stick'.
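A rough sketch of that 'poke it with a stick' loop in toy Python (the mystery program and every name below are made up purely for illustration): knock out one line at a time, rerun, and record what, if anything, changes.

    # Toy "lesion study" on a program: delete one line at a time and observe.
    mystery_program = [
        "x = 2",
        "y = 3",
        "z = x * y",
        "result = z + 1",
    ]

    def run(lines):
        env = {}
        try:
            exec("\n".join(lines), env)
            return env.get("result")
        except Exception as e:
            return "crashed: " + str(e)

    baseline = run(mystery_program)
    for i, line in enumerate(mystery_program):
        lesioned = mystery_program[:i] + mystery_program[i + 1:]
        print("without %r -> %s (baseline: %s)" % (line, run(lesioned), baseline))

Crude, but over enough runs you start to guess which lines matter for which behaviour - which is roughly where the zapper is today.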
Any attempt to use the above information will result in a breach of my patent USPO12345678, "A novel method of software maintenance using an IDE and a stick".
Toggling big switches vs smaller switches. Same idea, different scale, different "stick".
What is/are the appropriate scales to study brain function at? Quarks? Molecules? Brain regions? Which ones? All the above? None? We need to know the answer in advance before pronouncing a method "crude", or conversely, "too precise", by some criteria.
We don't know the answer to the scale question but poking "areas" to build up a fuzzy picture of regional function at first approximation is getting us there.
Would the knowledge gained from knock-out/stimulation experiments be more fine-grained if the investigators already had a fine-grained understanding of what they were studying and thus, the cleanest way to study it? Yes.
But breakthroughs are not made where investigators know what they're doing. Breakthroughs are made where nobody knows what they're doing--often crudely, frequently accidentally. The history of scientific advance is messy. Significance of results doesn't map neatly to sophistication of methods/tools.
I wholly agree that neuroscience has a long way to go. But to me that doesn't diminish the achievements of the neuroscientists pushing us there, using whatever ethical opportunities are available to them.
You can make anything sound daft by expressing it a certain way. The entire history of physics, all the way from cavemen to the LHC, is just a story of smashing rocks together in creative ways and watching what happens.
But obviously it's a lot more than that, and so is electric brain stimulation.
It's more interesting since they mention consciousness, though I don't know what they're really talking about.
I think stuff like 'rebar through the eye' has revealed interesting 'machine learning'-type defects, like funny ways your vision can be messed up - seeing at a very low fps, unable to perceive most quick motion.
But machine learning is popular and stuff because it works, unlike strong AI and consciousness, which are pretty difficult to get any handle on.
Fundamental physics is done by smashing particles together and seeing what happens. We've gotten tremendously far with this approach. Do you have a better idea?
Before smashing particles together, physicists use mathematical models to predict exactly what they will see given different assumptions, usually with incredibly high degrees of accuracy. There are no such models of higher-level cognitive processes in neuroscience. They are very different.
>Before smashing particles together, physicists use mathematical models to predict exactly what they will see given different assumptions, usually with incredibly high degrees of accuracy.
Not necessarily. In the 1950s and 1960s when we started exploring higher energy levels, we didn't really have a model or theory that predicted all the particles we ended up seeing. That came after.
I often think about exactly both of these points. We are so far behind particle physics in terms of research because we don't have the ability to destructively smash consciousness apart just to see what falls out, over and over again. "Zapping" a very low number of brains is going to be slow going, and unfortunately that is not far from the best we have to work with.
We absolutely have a long way to go; I wonder if we'll "solve" many of the open problems in brain science, medicine and consciousness in the next 100 years. The human brain is outrageously complicated, and much of its workings are hidden from us, in essentially a featureless grey goo.
Well... until the brain's electrical signals are decoded, they'll keep on zapping until something interesting happens. Remember when electroshock therapy was used for nearly everything in medical practice?
Well, a hammer also makes a good consciousness switch, especially when combined with needles and applied to nails. Seriously, how can you discuss a marginal case with absolutely no statistical evidence? Do apes have a claustrum? If so, run the experiment on them, then speak sense. Besides, brains are immensely resilient, and a damaged part's functionality can be taken over by other brain regions.
I recently listened to an episode of Radiolab about three "Black Boxes", and the first one is about this mysterious threshold in the brain between consciousness and unconsciousness.
I like that it was Crick who did some of the early research. Good to see that some scientists do get a second act. If he had published and survived, would this have been a second Nobel?
> "Counter-intuitively, Koubeissi's team found that the woman's loss of consciousness was associated with increased synchrony of electrical activity, or brainwaves, in the frontal and parietal regions of the brain that participate in conscious awareness. Although different areas of the brain are thought to synchronise activity to bind different aspects of an experience together, too much synchronisation seems to be bad. The brain can't distinguish one aspect from another, stopping a cohesive experience emerging."
Right, counter-intuitively... As long as we're making up reasons based on that piece of data, how about this:
Global synchrony occurs when the brain is recovering from an unknown or detected bad state, and all major parts of the brain say "HLO" to each other to announce their existence and proper operation and to establish connections. So it's not that synchrony itself is bad; rather, the lack of information from critical parts of the brain causes it to repeatedly reboot in order to recover from a bad state forced on it by electric rods in the brain.
Hey, I'm a scientist!
Seriously though, why aren't we thinking about what our experimental data would imply for a distributed computing system, which our brain is, instead of giving silly "it's like the key in a car" explanations and assigning causality randomly and without merit?
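Just to push my made-up story one step further, here's a toy sketch (Python; the node names and messages are invented and this models nothing real) of how silencing one node could make the remaining traffic more synchronized, not less:

    import random

    nodes = ["visual", "auditory", "motor", "frontal", "parietal"]

    def tick(silenced=None):
        # When one node goes silent, the rest fall back to the same "HLO"
        # handshake, so their outputs become identical, i.e. synchronized.
        out = {}
        for n in nodes:
            if n == silenced:
                continue
            out[n] = "HLO" if silenced else "data-%d" % random.randint(0, 99)
        return out

    print("normal:  ", tick())
    print("lesioned:", tick(silenced="frontal"))

Silly, sure, but at least it's a mechanism you could try to falsify.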
The result isn't counter-intuitive to those familiar with fairly basic human neuroscience. When you close your eyes, for example, low frequency, long-range synchronization also increases.
The stated "counter-intuitive"-ness is more likely an artifact of the translation of the result to layman's terms, where the traditional (since the industrial revolution) way of thinking about the brain and mind is that it is a machine, and the function of this machine is consciousness. I think several such artifacts often appear in these sorts of articles on neuroscience.
If you wish to actually understand and interpret the results for yourself, without the crude and inevitably inaccurate lens of the science writer, you have to read some of the books written by neuroscientists themselves. Rhythms of the Brain by Gyorgy Buzsaki is the best I've read so far.
And yet, nothing you said really refutes my made-up explanation; if anything, it supports it (the visual center stops sending information as it lacks input, and synchronization attempts begin).