182 comments
  • You say an incompleteness theorem implies that brains are computable?

    No, I'm saying that incompleteness implies that either cause and effect does not exist, or incomputable functions exist. That follows from treating the universe, or its collection of laws, as a formal system: all such systems are bound by the incompleteness theorems once they reach a certain expressive power.

    All I said is that a plain old Turing machine wouldn't be an adequate model of human cognitive capacity in this scenario.

    Adequate in which sense? Architecturally, of course not, and neither would lambda calculus or other common models be. But I'm not talking about specific abstract machines; I'm talking about Turing completeness: the property of the set of all abstract machines that are exactly as computationally powerful as a Turing machine, and can therefore all simulate one another. Those are a dime a gazillion.
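As a toy illustration of that mutual-simulation point (my own sketch, not from the thread): Brainfuck is a famously tiny Turing-complete language, and a few dozen lines of Python suffice to simulate it. The interpreter below omits the `,` input instruction and uses 8-bit wrapping cells; both are simplifying assumptions.

```python
def run_bf(program: str, tape_size: int = 30000) -> str:
    """Interpret a Brainfuck program (without ','), returning its output."""
    tape = [0] * tape_size
    out = []
    ptr = 0   # data pointer
    pc = 0    # program counter

    # Precompute matching bracket positions for the loop instructions.
    jumps, stack = {}, []
    for i, c in enumerate(program):
        if c == '[':
            stack.append(i)
        elif c == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i

    while pc < len(program):
        c = program[pc]
        if c == '>':
            ptr += 1
        elif c == '<':
            ptr -= 1
        elif c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256   # 8-bit wrapping cell
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '.':
            out.append(chr(tape[ptr]))
        elif c == '[' and tape[ptr] == 0:
            pc = jumps[pc]                      # skip past matching ']'
        elif c == ']' and tape[ptr] != 0:
            pc = jumps[pc]                      # loop back to matching '['
        pc += 1
    return ''.join(out)

# 10 iterations of +7 give 70, plus 2 is 72, ASCII 'H':
print(run_bf("++++++++++[>+++++++<-]>++."))  # prints "H"
```

One Turing-complete system running inside another, exactly as the comment describes; the cost is only the interpretive overhead.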

    Or, see it this way: imagine a perfect virtual representation of a human brain stored on an ordinary computer, one powerful enough to simulate all the physical laws relevant to the functioning of a human brain. It might take a million years to simulate a second of brain time, but so be it. Such a system would be AGI (for ethically dubious values of "artificial"). That is why I say the "whether" is not the question: we know it is possible, and we have in fact done it for simpler organisms. The question is how to do it with reasonable efficiency, and that requires understanding how the brain does the computations it does, so we can map them directly onto silicon instead of going through several layers of one machine simulating another, each layer incurring overhead from the architectural mismatch.
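To make the overhead argument concrete (a toy model with made-up numbers, not a claim about real slowdown factors): if each simulation layer slows execution by a constant factor, the factors multiply across layers, which is why cutting out layers matters so much.

```python
def simulated_runtime(native_seconds: float, layer_overheads: list[float]) -> float:
    """Runtime after stacking simulation layers, assuming each layer
    multiplies the cost by a constant slowdown factor."""
    t = native_seconds
    for factor in layer_overheads:
        t *= factor
    return t

# Hypothetical example: three layers at 50x each turn 1 native second
# into 125,000 simulated seconds, roughly a day and a half.
print(simulated_runtime(1.0, [50, 50, 50]))  # prints 125000.0
```

Removing even one of those hypothetical layers divides the cost by its entire factor, which is the payoff of molding the computation directly into silicon.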
