I have no idea why everyone here insists on having this thoroughly pointless argument with me. I merely stated what should be obvious - consciousness is not software - and lots of people were apparently offended by that, because a bunch of tech bros have pretended that it is.
But consciousness could easily be a manifestation of ‘software’. You can’t know. You can’t know what it is; you can’t even prove it exists outside of your own experience. So when you make definitive statements like that, you will often get people pointing out that you are wrong. It’s not a matter of being offended, any more than objecting to any untruth spread as if it were fact is a matter of being offended.
But consciousness could easily be a manifestation of ‘software’.
Why? Because we invented software? Viewing human consciousness as software says a lot more about the early 21st-century viewer than it does about human consciousness - pretty much in the same way that viewing human physiology as purely mechanical says a lot more about the early 20th-century viewer than it does about human physiology.
Let’s be clear… there is no indication - never mind evidence - that human consciousness works like software. In spite of that, it seems to be a sacred-cow belief for plenty of people here. And I’d argue that the reason this is so is far, far more relevant than the “consciousness-vs.-software” debate itself.
Well, I think in this context ‘software’ can mean a set of flexible procedural instructions followed by a more rigid hardware framework. Parts of the human brain are like software (re-wireable links and learned timings), and parts are like hardware (as grown from birth, mostly independent of stimulus). A computer is likewise hardware (the CPU) plus software. An AI neural network is just a big matrix of interrelations between nodes which software can run as a network, much like the human brain is a big set of neurons that runs as a network.

Obviously the human brain is more complicated than the current structural basis for AI, since the brain has other feedback mechanisms - but people are working on modeling those and applying them to AI. And AI nets could theoretically get as big as, or much bigger than, our brains, representing larger neural nets. So there’s no particular reason AI could not match or surpass human thought power.

In this context, both the brain and computer systems are a combination of hardware and software. Computer scientists see software as a layer on top of the hardware, with inferred (or secondary) intelligence coming as a further layer on top of the software. It doesn’t really matter whether something is software or hardware, though: it’s all algorithms, and the implementation doesn’t matter. Similarly, the brain has biological hardware processes and the equivalent of software (dynamically configurable connections), but it can still be seen as an implementation of an algorithm. If consciousness can come out of that, there’s no reason it can’t come out of software running on a computer. There is no ‘consciousness’ mechanism as far as we know - it appears to be a result of sufficient complexity of the right kind of algorithmic processing. Or at least, that is a perfectly reasonable explanation.
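To make the “big matrix of interrelations which software runs as a network” point concrete, here is a minimal sketch (all names and numbers are illustrative, not from any real library): the weight matrices are the flexible, software-like part, and the fixed loop that applies them is the rigid, hardware-like part.

```python
# Illustrative sketch: a tiny feed-forward "neural network" is just
# weight matrices (the reconfigurable, software-like part) applied by
# a fixed procedure (the rigid, hardware-like part).
import math

def forward(weights, inputs):
    """Run one layer: each output node is a weighted sum of the
    inputs, passed through a squashing nonlinearity."""
    outputs = []
    for row in weights:                     # one row of weights per output node
        total = sum(w * x for w, x in zip(row, inputs))
        outputs.append(math.tanh(total))    # nonlinearity keeps values in (-1, 1)
    return outputs

# Arbitrary example: 2 inputs -> 3 hidden nodes -> 1 output.
hidden_weights = [[0.5, -0.2], [0.1, 0.8], [-0.7, 0.3]]
output_weights = [[0.6, -0.4, 0.9]]

hidden = forward(hidden_weights, [1.0, 0.5])
result = forward(output_weights, hidden)
print(result)
```

Changing the matrices changes the network’s behavior without touching the `forward` procedure at all - which is exactly the software/hardware split described above, just at toy scale.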
It’s seemingly unprovable whether consciousness exists in anything other than one’s own personal experience, so we simply can’t know whether another system is actually conscious. But if it acts conscious, that seems about as good a test as we will ever manage. There’s no point in gatekeeping the assumption of consciousness for an AI any more than in denying the consciousness of another person just because you can’t prove it. Unless we identify some biological basis for consciousness that for some reason cannot be replicated in a computer-based system, there’s no good reason to think AIs can’t be conscious. One can bring spirituality or religion into it, but those are similarly unprovable, and there’s no particular reason they couldn’t apply to AI systems if they apply to human brains.