But it’s not only in a Galway coroner’s court that the laws of nature have been suspended. They’re just as dysfunctional at the world’s most advanced scientific establishment – CERN.
Europe’s premier physics lab has measured particles travelling faster than light. Fancy that. This is a little troubling to them though, because for a physicist, matter moving faster than light makes about as much sense as God knocking on the door, presenting you with an iguana wrapped in newspaper, saying “Call me Susan, I have no legs for hosepipe” and turning into a forest of lemon trees. It doesn’t happen, it can’t happen, it won’t happen.
So as they make their lemonade, the boys and girls at CERN have to try to figure out where things went awry. Those particles can’t really have gone faster than light, can they? They have mass – which is a technical way of saying they weigh something – and a thing with mass can’t travel even as fast as light, never mind faster. This is because…
Well, this is because the world is a lot freakin’ weirder than it looks. You may not have noticed this – actually you couldn’t possibly – but the faster you move, the heavier you get. It isn’t detectable even at the speeds spacecraft travel, but the effect gets more pronounced as you approach the speed of light. So pronounced in fact that if you ever travelled at the speed of light, you’d weigh an infinite amount. Which can’t be pleasant.
To make it worse, as you go faster you shrink in the direction of travel. (So much for the symbolism of the sports car then.) At the speed of light, your length front-to-back would be zero. Something with no length at all but which weighs more than the whole universe isn’t really a possible thing, so matter never can go as fast as light. The only reason light itself can manage is that it has no mass and no length to change.
Another way to think of it: the speed of light is the infinity of speed. Saying “faster than light” is like saying “more than infinity”; it’s a meaningless statement. So if this experiment showed particles of matter going from A to B in less time than light could, you’re forced to conclude that, well, perhaps A isn’t as far from B as you thought. Or maybe the particles found some sort of short cut. Or… the universe just shrank or… something.
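Both effects described above – getting heavier and getting shorter as you speed up – grow by the same factor, the Lorentz factor of special relativity, which blows up to infinity as you approach light speed. A minimal sketch (the function name is mine) of how sharply it grows:

```python
import math

C = 299_792_458.0  # speed of light in metres per second

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2); diverges as v approaches c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# Relativistic mass is multiplied by gamma; length in the direction
# of travel is divided by it.
for fraction in (0.1, 0.9, 0.99, 0.9999):
    g = lorentz_factor(fraction * C)
    print(f"at {fraction:>6.4f}c: mass x {g:8.2f}, length x {1/g:.4f}")
```

At a tenth of light speed the factor is barely above 1, which is why nobody notices; by 0.9999c the multiplier is around seventy, and at exactly c the formula divides by zero – the “infinite weight, zero length” case in the text.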
Those are actually genuine suggestions. Most modern theories of the universe tend to have a few extra spatial dimensions lying around; not just the Up-Down, Forward-Back and Left-Right we know, but also Hoo-Hah, Abba-Dabba and Hosni-Mubarak. Say, maybe those extra dimensions form hidden spaces that the particles (called neutrinos) can cut through.
Or maybe not. Frankly no one knows. Any theory that accounts for a deviation from such a fundamental law has to be so darn theoretical that it may as well just be a particularly pretty form of hand-waving. Most likely explanation? They’ve simply made a mistake. They are some of the most intelligent people on the planet, they have the best lab in the world, and they’ve spent the last six months re-checking their results, but still the best explanation is that they put a decimal point in the wrong place somewhere. Almost anything is more likely than that their results are actually right.
I have a theory of my own. Of course.
CERN is headquartered in Geneva, Switzerland. That’s good, we expect things to be done with precision there. But in order to measure such high speeds, the neutrinos have to be sent to a target that’s some distance away. Quite a distance actually. Further than Switzerland is big. In Italy in fact.
Italy. Of course they’re getting figures that don’t reflect reality. Berlusconi is probably pocketing some of those neutrinos himself.
IBM has a really interesting – and just slightly scary – plan. In cooperation with Switzerland’s École Polytechnique Fédérale de Lausanne, they want to simulate the human brain.
They’re building a computer model. This is not the same thing as Artificial Intelligence (AI), programming a machine to act human. That would be a ‘top down’ approach; trying to understand how the mind works by looking at what it does. Instead this is ‘bottom up’, simulating the nuts and bolts of the brain, its biological wiring, its cells, even its molecules.
Which is quite an undertaking – in fact it is hard to exaggerate how big the task is. The brain is often described as the most complex thing in the known universe. Complexity is a thing that’s difficult to define but easy to perceive. Looking into the back of a TV, you’re instantly aware that it’s more complex than, say, a food mixer. Basically it looks more tricky to fix. The parts are small, numerous, and connected together in many different ways. Perhaps that’s the most intuitive shorthand measure of complexity – the number of different ways that the parts of something interconnect. The human brain has far more connected parts than any other thing known, certainly more than any computer. Even Japan’s Earth Simulator, built to model the climate of the entire planet, is nothing compared to the brain of an average person.
It’s no surprise therefore that they aren’t trying to do the whole thing at once, or anything approaching that. They are starting with the best bit though: the neocortex, the outer layer of the cerebrum that’s most recent in evolutionary terms. It’s not unique to us, but it is far more developed in humans than in any other animal and appears to be responsible for what we experience as thought.
Even alone though, this is still far too complex for current technology to tackle. All they’re hoping to simulate right now is what’s known as a neocortical column. This can be described as a single ‘circuit’ of the brain, one of its processing units. The whole neocortex contains about a million of these. And for the moment at least, they only plan to model it on the level of its cells; to get down to the molecules that make up the cells will take vastly more computational power again. Yet even this is an immensely ambitious target. To model just one circuit of the brain in this (relatively) simple way will require four whole racks of Blue Gene – the technology IBM used to take the title of world’s fastest supercomputer back from the Earth Simulator.
So how far are we then from modelling the whole brain? Well assuming this first stage succeeds – it won’t be easy – all they really need to do is scale it up. Vastly. These four Blue Gene racks would fit in a normal kitchen. Four million? They would take up a golf course, and require the energy of five medium-sized power stations.
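The scale-up is a back-of-the-envelope calculation using only the figures quoted above (four racks per column, about a million columns in the neocortex):

```python
racks_per_column = 4            # Blue Gene racks to model one neocortical column
columns_in_neocortex = 1_000_000  # rough count of columns in the whole neocortex

racks_for_whole_neocortex = racks_per_column * columns_in_neocortex
print(f"{racks_for_whole_neocortex:,} racks")  # prints "4,000,000 racks"
```

And that is only the cell-level model of the neocortex alone – molecular detail, and the rest of the brain, would multiply the figure again.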
When you consider that your actual brain fits inside your head and runs reasonably well on sandwiches and cups of tea, you realise what a gap there is between nature’s technology and our own.
What’s the point then in going to all this trouble when a brain can be made much more cheaply using just two humans? If the object were to create machines that think, this would clearly be a madly inefficient way to go about it. But that’s not the object. The fact is we know amazingly little about how our own brains work. Simulating a part of one, even a solitary neocortical circuit, will teach us so much about what is really going on in there. Modelling allows you to find out why something is the way it is, because it can show you what would happen if it were different. The beneficial applications of that are obvious; as we see how it works, we gain greater insight into why it fails – what causes schizophrenia, Alzheimer’s, autism, the things that plague our minds.
But though it’s always good when research has palpable benefits, I think we need no such excuse when it comes to researching the structure and function of the brain. To know one’s own mind – that is surely a philosophical imperative.