There’s an article in the NY Times right now called *The Trouble with Brain Science*. This article does a good job of summarizing the basic unease I have about this field. The article in a nutshell:
“But neither project has grappled sufficiently with a critical question that is too often ignored in the field: What would a good theory of the brain actually look like?”
Credit to people like Giulio Tononi and Karl Friston, who are at least trying to build theories that bridge neuroscience and psychology in some way, although I’m not particularly convinced by either of them, and I think we have a long way to go.
I’m going to put something out there: Physics seems to be based on mathematical regularity, so those are the kinds of theories it has. Biology is all happenstance and evolution, so perhaps instead of physics-like theories, we should be looking for theoretical approaches that are evolutionary at the conceptual level. This tends to be unsatisfying, because such models end up appearing ad-hoc in that they are explicitly adapted to the observations, rather than designed to predict the observations from first principles. But that’s the rub: there’s no reason to think that ANY first principles will predict biology, let alone the neuroscience-psychology bridge! Any neuro-psycho bridge will perforce be ad-hoc in that it is a model of something that has no underlying first-principles necessity. What this means in practice is a few things:
- Really *useful* models in this territory will be theoretically unsatisfying to the current way of thinking, so we have to change our feelings about our science to match the reality of nature.
- Really useful models will come into existence through some sort of process of growing and evolving onto the data, rather than the kind of mathematical insight that has driven a lot of the “hard” sciences traditionally. That is, there won’t be a lot of “eureka” moments when someone realizes that a simple power law governs the relationship (for example). Again, we’ll have to adjust our expectations to match the reality of nature.
- Really useful models will, themselves, be of an order of complexity comparable to the systems they are intending to describe, in a way that flies in the face of what scientists currently think of as the definition of a “theory” at all. In short, the complexity (say, Kolmogorov) of a theory is supposed to be vastly smaller than the complexity of the systems it allows you to model; that’s part of the very idea of what it means to be a theory.
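To make the complexity point above concrete, here’s a small sketch (my illustration, not from the article) using compression as a rough, computable stand-in for Kolmogorov complexity — true Kolmogorov complexity is uncomputable, so compressed size is a common proxy. Data produced by a simple law compresses to a tiny fraction of its size (a short “theory” exists); happenstance data barely compresses at all, so any faithful model of it is about as big as the data itself:

```python
import random
import zlib

N = 10_000

# A "lawful" system: every byte follows from a tiny rule, i.e. a short program.
lawful = bytes((i * i) % 251 for i in range(N))

# A "happenstance" system: bytes with no short generating rule (pseudorandom,
# but zlib can find no exploitable regularity in them).
random.seed(0)
happenstance = bytes(random.randrange(256) for _ in range(N))

def compression_ratio(data: bytes) -> float:
    """Compressed size / original size: a crude proxy for descriptive complexity."""
    return len(zlib.compress(data, 9)) / len(data)

print(f"lawful:       {compression_ratio(lawful):.3f}")       # far below 1: a short "theory" captures it
print(f"happenstance: {compression_ratio(happenstance):.3f}")  # near 1: the shortest description is ~the data
```

The analogy to the bullet above: a brain shaped by happenstance may be more like the second string than the first, in which case any model that captures it will be comparably large.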
A quick look at the current big brain initiatives shows that we are already moving in the direction I’m describing: a computer simulating an entire brain is, obviously, a more complex system than just the brain itself. So then you could ask, after these projects succeed, who exactly is it that “understands” something more about the brain?
I don’t see any of this as being fundamentally problematic, but it is going to require a big change in how we think about science as a whole, in order for there to be room for these non-reducing models. It’s also going to lead to different dynamics in how science is done: since it will depend much less on individual understanding (in the holding-an-entire-model-in-your-head sense) than before, it seems to me that science done by large teams will become more important. Well, now that I think about it, physics already has that too, with the big collider and telescope projects.
Update: David Meyer at Michigan sends me this:
You and others should read the following document; it’s at the apt level of analysis for what’s needed to advance theoretical psychological science. More micro brain probing is currently pretty much irrelevant for that purpose… http://www.umich.edu/~bcalab/documents/MeyerKieras1999.pdf
I’ll write more after I read that.