Discussion:
What scientific idea is ready for retirement?
Brett Hall
2014-01-15 03:39:49 UTC
Permalink
In response to this year's Edge question, "What scientific idea is ready for retirement?", Martin Rees says what I have heard a number of other physicists saying recently: there *is* a limit to human understanding. The idea he nominates for retirement is "We'll Never Hit Barriers To Scientific Understanding".

This reminds me of what the populariser of science, Neil deGrasse Tyson, says (almost!) every time he makes a public appearance: he is afraid our brains won't be good enough to understand the laws of physics, and that you'd perhaps need a brain twice the size, like some alien intelligence, to eventually grasp the nature of (say) dark energy. He is a fan of science fiction and of the possibility of alien life but, apparently, not of augmenting our own brains to be "twice the size" with a computer or whatever.
Rees writes: "Nonetheless, and here I'm sticking my neck out, maybe some aspects of reality are intrinsically beyond us, in that their comprehension would require some post-human intellect, just as Euclidean geometry is beyond non-human primates.
"Some may contest this by pointing out that there is no limit to what is computable. But being computable isn't the same as being conceptually graspable. To give a trivial example, anyone who has learnt Cartesian geometry can readily visualise a simple pattern, a line or a circle, when given its equation. But nobody given the (simple-seeming) algorithm for drawing the Mandelbrot set could visualise its amazing intricacies, even though drawing the pattern is only a modest task for a computer."
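
(As an aside, the "simple-seeming" algorithm really is only a few lines. A minimal escape-time sketch in Python; the grid size and iteration cap are arbitrary choices of mine:)

    # Minimal escape-time rendering of the Mandelbrot set.
    # Grid resolution and iteration limit are arbitrary choices here.
    def mandelbrot_rows(width=60, height=24, max_iter=50):
        rows = []
        for j in range(height):
            row = []
            for i in range(width):
                c = complex(-2.5 + 3.5 * i / width, -1.25 + 2.5 * j / height)
                z = 0
                for _ in range(max_iter):
                    z = z * z + c          # the entire "algorithm"
                    if abs(z) > 2:         # escaped: not in the set
                        row.append(" ")
                        break
                else:
                    row.append("*")        # stayed bounded: (roughly) in the set
            rows.append("".join(row))
        return rows

    for line in mandelbrot_rows():
        print(line)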
That does not seem right to me. I think Descartes had a similar idea, but came down on the other side. Descartes pointed out that when we think of a square we can visualise it *and* hold an understanding of it in our minds. But with what he called a "chiliagon" (a thousand-sided figure), although we cannot visualise such a thing, we can *understand* that such a figure is possible. Descartes was trying to argue that although we cannot visualise an infinite being (God), we can nonetheless comprehend the existence of one.

Back to Rees: he seems to be saying that the inability to *visualise* some shape is a limitation to comprehension. But this seems to be precisely what it is not. That is, his own example of the Mandelbrot set is, to my mind, exactly why our understanding need not be bounded. Just because we cannot visualise it does not mean we will fail to understand it.

Am I missing something?
Rees again: "It would be unduly anthropocentric to believe that all of science, and a proper concept of all aspects of reality, is within human mental powers to grasp. Whether the really long-range future lies with organic post-humans or with intelligent machines is a matter for debate, but either way there will be insights into reality left for them to discover."
This seems to be a mistake not of anthropocentrism but of parochialism on Rees's part. Why are we limited in this way by our abstract human minds? If we need more memory to understand stuff, or more computational speed, that is merely a matter of technology, right? But if we augment our human biology with silicon or something else, we will still be humans, or "earth people", or persons, in the relevant sense. My guess is that we already augment our brains in this way, and as more memory and speed are required, we will continue to.

Say other galaxies had been discovered well before the theory of computation was understood. One might well have argued that understanding what happens when two galaxies "collide" would forever be beyond human comprehension, because that would mean modelling 200+ billion objects interacting with another 200+ billion objects, and no human could ever understand or compute that: it would take more time than is available even to a large group of humans calculating over many lifetimes. But of course we can, and do. It just takes a little more technology than pen and paper.
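
(To make that concrete: the rule each body obeys fits in a few lines, and the "little more technology" is just the machinery to iterate it. A toy direct-summation sketch in Python/NumPy; the particle counts, masses, softening and time step are all arbitrary stand-ins of mine, not a serious galaxy model:)

    import numpy as np

    # Toy direct-summation N-body integrator (semi-implicit Euler).
    # Units, particle count, softening and time step are arbitrary choices.
    G, SOFT, DT = 1.0, 0.1, 0.01

    def accelerations(pos, mass):
        # Pairwise gravitational acceleration with softening to avoid blow-ups.
        diff = pos[None, :, :] - pos[:, None, :]           # (N, N, 3) separations
        dist3 = (np.sum(diff**2, axis=-1) + SOFT**2) ** 1.5
        np.fill_diagonal(dist3, np.inf)                    # no self-force
        return G * np.sum(mass[None, :, None] * diff / dist3[:, :, None], axis=1)

    def step(pos, vel, mass):
        vel = vel + DT * accelerations(pos, mass)
        return pos + DT * vel, vel

    # Two crude "galaxies": clumps of particles drifting toward each other.
    rng = np.random.default_rng(0)
    n = 200
    pos = np.vstack([rng.normal([-5, 0, 0], 1.0, (n, 3)),
                     rng.normal([+5, 0, 0], 1.0, (n, 3))])
    vel = np.vstack([np.tile([+0.5, +0.05, 0], (n, 1)),
                     np.tile([-0.5, -0.05, 0], (n, 1))])
    mass = np.ones(2 * n)

    for _ in range(1000):                                  # evolve the "collision"
        pos, vel = step(pos, vel, mass)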

Brett.
j***@public.gmane.org
2014-02-09 17:22:08 UTC
Permalink
The purpose of the computational ability of the brain (and in fact of the entire organization of living entities) is survival. So it is worth keeping in mind that should we augment our brain’s computational ability a thousandfold to better model the world, it might in fact impair our ability to survive in some ways - we might not be able to see the forest for the trees. “A Beautiful Mind” comes to mind. More is not always better.
But what do we mean by the term “model”? Any physical model of a system that is less than the exact size and composition of the original (e.g. a model plane in a wind tunnel) must be less accurate at predicting the response of the original. Theoretically, a computer could model a system down to some level of detail, such as atomic. That might require less space and mass than the original. However, a computer model, not being a physical model, must also model the forces and dynamics of the system, which requires additional circuitry and hence more physical mass and space. That is where the evolution of our brain has taken a critical path.

Our brain is obviously not a physical but a symbolic / dynamical modeler of the universe. Hence our predilection for developing “laws” and theories to predict the “future”. Our brain represents “entities” in the world abstractly as symbols, and computes paths and interactions among them by means we don’t fully understand; in any case it takes less space in our brain to represent a coffee cup than the coffee cup itself.

Although we “visualize” the world as 3-D, we do not really have access to anything greater than 2-D information; our brain “synthesizes” 3-D computationally. I don’t mean by perspective, but by overlap of successive images or experiences, like stitching many overlapping images into a large panorama photo. Our conception of a coffee cup is a mental hologram of many images of various cups, stored not as 2-D or 3-D information, but as an N-dimensional matrix of synaptic connections and weighting factors. Our brain can represent the world N-dimensionally, but our conception is limited to a 3-D projection of that representation. We can’t “imagine” a hypercube, but we can grasp it mathematically. Thus our brains can understand a great deal of the universe (specifically, we can predict what will happen) even though we can’t visualize it (Descartes’ point).
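
(One loose way to make the “N-dimensional matrix” idea concrete, with everything synthetic and no claim about real neural coding: treat each view of an object as a high-dimensional feature vector, pool many views into one stored vector, and recognize by similarity:)

    import numpy as np

    # Very loose illustration of a distributed, high-dimensional representation:
    # each "view" of an object is a feature vector; the stored concept is a
    # single vector pooled over many views, and recognition is similarity.
    # All data here is synthetic; nothing about real neural coding is implied.
    rng = np.random.default_rng(1)
    DIM = 512                                    # arbitrary "synaptic" dimensionality

    def views_of(prototype, n=100, noise=0.3):
        # Many noisy observations of the same underlying object.
        return prototype + noise * rng.normal(size=(n, DIM))

    cup_proto, pan_proto = rng.normal(size=DIM), rng.normal(size=DIM)
    cup_concept = views_of(cup_proto).mean(axis=0)   # the pooled "hologram"

    def similarity(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    new_cup_view = cup_proto + 0.3 * rng.normal(size=DIM)
    print(similarity(cup_concept, new_cup_view))     # high: recognized as a cup
    print(similarity(cup_concept, pan_proto))        # near zero: a different thing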

Long ago evolution “decided” our brains were better off specializing in symbolic rather than explicit modeling of the world. This path conforms to the principle of conservation of space and to predictability - in the latter case, the Heisenberg uncertainty principle. An explicit model (a photograph of a plane in the air) tells you little if anything about what will happen to the plane “next”, because there is no dynamic information associated with the photographic pixels. In contrast, a movie provides some idea of position and of velocity. We need both for prediction; hence our brain is designed to abstract both static and temporal features of entities (a minimal illustration follows at the end of this post).

Computers are “augmentations” of our brains, whether inside or outside our skulls, as long as we can communicate with them. So if we build a computer model that correctly predicts the consequences of the collision of two galaxies to some degree of precision, I would agree that we can say we “understand” that system even if we can’t “conceptualize” it explicitly. So if we find something in the universe we can’t model, we will have hit the barrier. Of course, we can’t even model the weather very well. Does that mean we don’t understand it? Perhaps. But we can survive it (well, most of the time).
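
(The photograph-versus-movie point in miniature, as promised above; the numbers are made up. One frame gives position only; two frames give a velocity estimate, and position plus velocity gives a linear prediction:)

    # One frame: position only. Two frames: a velocity estimate as well,
    # which is what a (linear) prediction of the next frame needs.
    def predict_next(pos_prev, pos_now, dt=1.0):
        velocity = [(b - a) / dt for a, b in zip(pos_prev, pos_now)]
        return [p + v * dt for p, v in zip(pos_now, velocity)]

    print(predict_next([0.0, 0.0], [1.0, 0.5]))   # -> [2.0, 1.0]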
Lee Corbin
2014-02-21 02:15:06 UTC
Permalink
jgruner writes
Post by j***@public.gmane.org
The purpose of the computational ability of the brain (and in fact of the
entire organization of living entities) is survival. So it is worth
keeping in mind that should we augment our brain’s computational ability a
thousandfold to better model the world, it might in fact impair our ability
to survive in some ways - we might not be able to see the forest for the
trees. “A Beautiful Mind” comes to mind. More is not always better.

These and the rest of your remarks seem very much on target to me.
Post by j***@public.gmane.org
But what do we mean by the term “model”?
At this point, we are encountering the common problem of over-reliance on
one term. The last thing that will help is to try to define it,
incidentally. (Whenever someone does that, not only does everyone else go on
using the term in the very same way that they always have, but even the
definer himself falls into whatever usage has become common with him.)

Though I don't disagree with any of the characteristics you adduce, perhaps
we can avoid using the term overly much? If--and sometimes it does happen--a
term becomes "indispensable", then it's a sure sign that something is very wrong.
An inability to say what we mean using alternate language implies that we
don't understand very well what we're talking about.
Post by j***@public.gmane.org
Any physical model of a system that is less than the exact size and
composition of the original (e.g. a model plane in a wind tunnel) must be
less accurate at predicting the response of the original.
Theoretically, a computer could model a system down to some level of
detail, such as atomic. That might require less space and mass than the
original....
Quite right.
Post by j***@public.gmane.org
Our conception of a coffee cup is a mental hologram of many images of
various cups, stored not as 2-D or 3-D information, but as an
N-dimensional matrix of synaptic connections and weighting factors.
I'm trying to go with the flow of that; but I'm a little reluctant to
endorse the idea of the brain using anything that actually resembles an
N-dimensional matrix. All there is, it seems to me, is just as you say: a
mass of "synaptic connections and weighting factors"; but brain
researchers, so far as I know, have only very complicated and incomplete
ideas about how the mapping of even a simple object, such as a coffee cup,
is done, to say nothing of how *democracy* or *Stalinism* might be
represented.
Post by j***@public.gmane.org
Thus our brains can understand a great deal of the universe (specifically,
we can predict what will happen) even though we can’t visualize it
(Descartes’ point).

Yes, and that's true for anything, I would think, in which human intuition
plays a strong part. Sometimes in a conversation, you just get the "feeling"
that someone is, say, anxious, or is disagreeing with you (the moron), and
we don't visualize that in any way. So perhaps there is no need to single
out hypercubes here--or is there? Perhaps I miss your meaning.
Post by j***@public.gmane.org
Long ago evolution “decided” our brains were better off specializing in
symbolic rather than explicit modeling of the world.

Shouldn't we rather say that our ancestors were better off (in the viable
reproductive arena) *adding* the capability of symbolic modeling? Or, well,
taking your point another way, shouldn't we also say that even very early
and ordinary animals didn't "model" the world around them with any
precision, but rather, as you explained, formed higher-level impressions?
Post by j***@public.gmane.org
Computers are “augmentations” of our brains, whether inside or outside our
skulls, as long as we can communicate with them. So if we build a
computer model that correctly predicts the consequences of the collision
of two galaxies to some degree of precision, I would agree that we can say
we “understand” that system even if we can’t “conceptualize” it
explicitly.

I agree. We ought to say that we have a certain sort of understanding in
this case.
Post by j***@public.gmane.org
So if we find something in the universe we can’t model, we will have hit
the barrier. Of course, we can’t even model the weather very well. Does
that mean we don’t understand it? Perhaps. But we can survive it (well,
most of the time).

It's probably fair, in trying to answer your question, to say that we don't
understand the weather, or at least that we don't understand it very well.
And once our processors or our weather models reach a certain degree of
reliability, then we understand "better"; but still, human beings in some
kind of native brain mode don't really understand what the value of
(7^9 + 2^16 - 7917270072)^26 is, at least not in the way that we understand
that 3x4 = 12. For the former, we understand "in principle", but not in
detail, the answer provided to us by computers--an answer perhaps not
available to Kepler--well, he could be pretty tenacious, so I guess I should
say Cicero--at all.
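
(As it happens, checking that big-number example is a one-liner for the
machine, which is rather the point. A quick sketch in Python, whose
integers are exact at any size:)

    # Lee's example, computed exactly: trivial for a computer, graspable by
    # a human only "in principle". Python integers have arbitrary precision.
    n = (7**9 + 2**16 - 7917270072)**26
    print(len(str(n)), "decimal digits")   # a ~258-digit number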

Lee
j***@public.gmane.org
2014-03-23 04:49:13 UTC
Permalink
Lee, sorry if I overuse the term “model”, but it is a good starting point. A core concept of philosophy is that we have a “model” of some sort in our heads of the outside world, which we compare with reality, using discrepancies between the two to adapt to the “real” world. Schizophrenia is the condition in which we cannot distinguish which is the “real” one. Hard to imagine, to be sure. But when you are “conscious” within a dream, you don’t usually think it is “unreal”; however, one can “awaken” in a dream and realize it is a dream, creating a dichotomy in which the dream can be controlled. You can fly, knowing you are not “really” flying.

Perhaps the most important question in terms of the evolution of human intelligence, and of life as a whole really, is how this model is embedded in our brain. True, for very abstract concepts like “democracy” there are no current explanations. However, for motor function involving the cerebellum, there are mathematical models that explain how directed movements of our limbs can be achieved. Reaching for an object involves many more than 3 degrees of freedom (wrist flexion / rotation, elbow, shoulder, and body). How does the brain produce a 3-D movement with 5 or more degrees of freedom? This can be modeled, as with robotic arms, using tensorial mathematics (which I only understand intuitively); see the sketch below.
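
(For flavor, here is one standard robotics treatment of that redundancy problem, offered only as a sketch: a planar arm with 3 joint angles reaching a 2-D target has infinitely many solutions, and the Jacobian pseudoinverse picks the minimum-norm joint update at each step. The link lengths, gain and target are made up:)

    import numpy as np

    # Redundant planar arm: 3 joint angles, 2-D fingertip target, so there
    # are infinitely many solutions. The Jacobian pseudoinverse chooses the
    # minimum-norm joint update at each step; all numbers here are made up.
    L = np.array([1.0, 0.8, 0.5])                 # link lengths

    def fingertip(theta):
        angles = np.cumsum(theta)                 # absolute link angles
        return np.array([np.sum(L * np.cos(angles)),
                         np.sum(L * np.sin(angles))])

    def jacobian(theta):
        angles = np.cumsum(theta)
        J = np.zeros((2, 3))
        for j in range(3):
            # joint j moves every link from j onward
            J[0, j] = -np.sum(L[j:] * np.sin(angles[j:]))
            J[1, j] = +np.sum(L[j:] * np.cos(angles[j:]))
        return J

    theta = np.array([0.3, 0.3, 0.3])
    target = np.array([1.2, 1.4])
    for _ in range(100):                          # iterate toward the target
        error = target - fingertip(theta)
        theta += 0.2 * np.linalg.pinv(jacobian(theta)) @ error

    print(fingertip(theta), "~", target)          # fingertip converges to target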


Similarly, you can reduce 1000 photographs of a cup to an overspecified N x N x N dimensional space, do a correlation analysis, and so on, to render a 3-dimensional image of the cup. BUT the final 3-D model of the cup in the computer is not a physical model; it is a binary representation of a mathematical equation. Mammalian brains have evolved in such a way as to enable an analogous process: reducing overspecified information to a manageable set of synaptic functions - and, most importantly for humans, one(s) that can be compared to similar representations of other objects (pots, pans, cars, ...) - what we call thinking.
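
(Roughly the kind of reduction described, sketched with synthetic stand-ins for the photographs: stack the flattened images as rows of a matrix, take a singular value decomposition, and keep the top few components. A whole image then collapses to a handful of coordinates that can be compared across objects:)

    import numpy as np

    # Sketch of "reducing 1000 photographs to a manageable representation":
    # stack flattened images as rows, take an SVD, keep a few components.
    # Comparisons between objects then happen in the reduced coordinates.
    # The "images" here are synthetic stand-ins, not real photographs.
    rng = np.random.default_rng(2)
    n_views, n_pixels, k = 1000, 4096, 10

    base_cup = rng.normal(size=n_pixels)          # stand-in "true" cup appearance
    photos = base_cup + 0.5 * rng.normal(size=(n_views, n_pixels))

    mean = photos.mean(axis=0)
    U, S, Vt = np.linalg.svd(photos - mean, full_matrices=False)
    components = Vt[:k]                           # top-k directions of variation

    def encode(image):
        # A whole image collapses to k numbers.
        return components @ (image - mean)

    print(encode(photos[0]).shape)                # (10,)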


Understanding social interactions has been, I think, more difficult than predicting the weather, because we don’t have a grasp of the elements involved (fear, greed, altruism, and so on) - and especially of how to measure them in humans. Do people fear another economic collapse? Do they hope we can end starvation in the Sudan? With the internet, Twitter, Facebook, etc., we now potentially have a much better ability to measure the “currents” and vortices of society and to make some predictions. Predictions that, hopefully, will improve life on our planet.