More Multidimensional food for thought Ashtar Massive...
2/13/2014 ~ We are within perhaps a decade of creating computers that match or even dwarf the human brain in computing power, and that are capable of complex computations that may include something like reasoning and even a notion of self—what many would therefore consider to be autonomous, conscious machines.
When we contemplate this, most of us still have Skynet of the Terminator movies lurking in the back of our minds, and so the question that generally gets asked is whether such machines will eventually decide we are a nuisance and destroy or enslave us. Artificial intelligence (AI) researcher Hugo de Garis rather apocalyptically predicted that the question “Should we build them?” will so profoundly divide humans in the second half of the 21st century that it will result in a calamitous conflict that kills billions—what he calls an “artilect war” (artilect being his term for artificial intellect).
“Should we build them?” is not the right question to ask. For one thing, it is pointless. The whole history of our relation to technology shows that if the capability to build something and use it exists, it will be built and used (if necessary, in secret). In any case, as with many other advances, the same technological developments that threaten humanity could give us the tools to protect against those threats; technology has its self-balancing, homeostatic mechanisms, like everything else.
But obviously, we need to enter the new world of AI prepared. To do that, we need to ask much more fundamental questions about mind and consciousness than most non-scientists are used to asking: specifically, how, when, and, crucially, whether key aspects of mind, such as consciousness or feelings, can actually arise from material structures, be they man-made circuits or organic brains. If any computer-related question ends up polarizing us in the second half of this century, this one—what philosophers like to call the “hard problem” of how consciousness is produced by a brain—is likely to be it.
So instead of “Should we build them?”, a more pressing question we should be prepared to answer is, “Should we believe them?”—that is, believe computers that claim to be or act like they are conscious, and believe their inventors that consciousness is nothing more than computations performed by a machine. To many outside the scientific community, it is not self-evident that even the biological machine in our heads can accomplish that feat.
The Hard Problem
The human brain is the most complex physical structure known, having by some estimates more potential synaptic connections than there are atoms in the universe, and able to store something like 10^20 bits of information. To create an artificial, humanlike or superhuman intellect surely requires extraordinary processing power to match or approximate this, and Singularity prophets tend to focus on surmounting this specific challenge (perhaps through quantum computing) when imagining building machines that approach some humanlike threshold. Yet what exactly will that threshold be?
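The scale comparison above can be made concrete with a rough back-of-envelope calculation. A minimal sketch, using commonly cited order-of-magnitude figures that are assumptions on my part, not numbers from the article: the "more connections than atoms" claim is usually read as referring to potential wiring configurations of the brain, not to the actual count of synapses.

```python
import math

# Commonly cited order-of-magnitude estimates (assumptions, not from the article):
neurons = 8.6e10           # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4  # ~10,000 synapses per neuron
atoms_exponent = 80        # ~10^80 atoms in the observable universe (rough)

# Actual synapses: far fewer than atoms in the universe.
synapses = neurons * synapses_per_neuron  # ~8.6e14

# Potential wiring: even counting only which ordered neuron pairs could be
# linked gives ~neurons**2 possibilities, and the number of distinct wiring
# *configurations* (each pair linked or not) is 2 to that power.
potential_pairs = neurons ** 2                       # ~7.4e21
log10_configurations = potential_pairs * math.log10(2)

print(f"actual synapses:        ~{synapses:.1e}")
print(f"potential neuron pairs: ~{potential_pairs:.1e}")
print(f"wiring configurations:  ~10^({log10_configurations:.2g})")
print(f"exceeds 10^{atoms_exponent} atoms:", log10_configurations > atoms_exponent)
```

On these assumptions the configuration count dwarfs the atom count by an astronomical margin, which is the sense in which the popular comparison holds; the raw synapse count alone does not come close.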
“Intelligence” is a vague term that encompasses both the ability to manage large quantities of information and the ability to think, reason, and solve problems, and often this latter notion gets lumped in with other human attributes such as feeling, self-awareness, free will, and so on. But even on those terms there is not much agreement on how to define them, let alone what they really are. “Consciousness” is generally used as a catch-all term, replacing the old theologically and supernaturally loaded term “soul.” Before we can ever evaluate the intelligence or consciousness of a machine, we need to understand what we are talking about when we talk about our own, human consciousness...
...Yet, as Michael Hanlon recently pointed out in the pages of Aeon Magazine, Graziano and other bold materialists still can’t, and will never be able to, marshal neuroscientific evidence to account for what it is like to be an aware, thinking being—that is, not merely thinking that I exist and am aware, but actually sitting here feeling or experiencing that thought, indeed feeling or experiencing anything at all.
This philosophical position is sometimes called Mysterianism: Mysterians do not believe that consciousness can be completely reduced to or explained by brain processes. Even if certain components of consciousness, such as reflexivity, sense of self, or the attention-monitoring that Graziano describes, can be explained as the outcome of computations in the cortex (and thus could theoretically be achieved by computers), there remains this more basic phenomenological fact of experience and awareness, the feeling-ground of being.
This ground is so basic, subtle, and pervasive that it is generally overlooked and eludes verbal description...
Read the full article at: thenightshirt.com