What I’m trying to say is that there’s no clear demarcation line between simple ‘response to stimuli’ (e.g., a bullet being propelled from a gun) and ‘awareness’ so, any attempt to demarcate “this is aware” and “this isn’t aware” is nonsense and quickly leads us to Loki’s Wager. I say awareness is a gradient like colors on a rainbow; some things are “more aware” than others.
Again, this is MY opinion and you can do with that what you will.
Whatever that “non-determined” way is, you could argue it’s programmed by the physical state of the nerves and their receptors. There’s a reason we’re afraid of computers becoming self-aware instead of just aware. They’re happy computing so long as they don’t know why. If you introduce the concept of a self, and reasons for the duties of oneself, we tangle with desire which cannot be reasonably harnessed.
Oops, I accidentally submitted my post before I was done writing; let me explain what I mean.
For an entity to have awareness it must possess both:
(1) the ability to process information and respond to that information, and
(2) the ability to process the information/response recursively (experiencing the input/response stream as input, and responding to it).
A rock isn’t aware because it doesn’t have the ability to process information
A plant turning to light can process and respond to information biologically (1), but it can’t process and respond to its biological response (2), therefore isn’t aware.
A person responds biologically to information (i.e. nerve endings activating when stimulated by heat) and then responds to the biological input/output by acting in such a way that causes pain to stop, (i.e. removing hand from stovetop).
(1) is useful as a test for awareness, but it isn’t sufficient by itself. A plant responding to light is equivalent to a computer determining that 1+1=2. It’s simply a rule-based reflex, the rules being physical or mathematical laws. It’s impossible for a computer to process the fact that it’s been asked whether 1+1=2 and formulate a response by inference, because that would require a self-concept (differentiation between self and other) to apply that information to. In plants and computers, it’s always a 1:1 ratio; input produces output and nothing else. In aware entities, input produces output, the input/output produces new input which produces new output, and so on recursively. Make sense?
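To make the distinction concrete, here's a toy Python sketch of what I mean. This is an illustration, not a claim about real biology; the function names and rules are mine:

```python
# Toy illustration: a 1:1 reflex versus a system that feeds its own
# input/response pair back in as new input (hypothetical rules).

def reflex(stimulus):
    # 1:1 mapping: every input has exactly one output.
    return "withdraw" if stimulus == "heat" else "rest"

def aware_step(stimulus, history):
    # First-order response, same as the bare reflex...
    response = reflex(stimulus)
    # ...but the (stimulus, response) pair itself becomes new input,
    # which the system can respond to in turn.
    history.append((stimulus, response))
    if ("heat", "withdraw") in history:
        return "avoid stovetop in future"
    return response

history = []
print(aware_step("heat", history))  # responds to its own response
print(aware_step("cold", history))  # earlier experience still shapes output
```

The second call shows the recursion: the output is no longer determined by the current stimulus alone, but by the system's record of its own past input/response pairs.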
Removing a hand from a (presumably hot) stovetop, however, is equivalent to a 1+1=2 rule-based reflex, though.
As for formulating a response by inference, take Clippy from Microsoft Office.
When you are typing in a Word document, for example, and the program detects that you’re trying to write a letter, Clippy pops up a speech bubble asking if you need help writing a letter. It inferred, based on your input, that you might be trying to write a letter.
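That apparent inference is still rule-based. Here's a toy Python sketch of the idea (this is not the real Clippy implementation, just an illustration of pattern-matching on input):

```python
# A toy, rule-based "Clippy": the apparent inference is nothing more
# than pattern-matching on the input text (hypothetical rule).

def clippy(text):
    if text.lower().startswith("dear"):
        return "It looks like you're writing a letter. Need help?"
    return None  # no rule matched, so stay hidden

print(clippy("Dear Sir or Madam,"))       # triggers the letter rule
print(clippy("Quarterly sales figures"))  # no rule matches, prints None
```

Same input, same output, every time; the "inference" is fixed in advance by whoever wrote the rule.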
No, it’s not, because of how it’s working biologically. Nothing in the chain of biological responses to heat has the direct result of moving the hand, so it is not a 1:1 input response.
Again, no. “Clippy” is a computer program which takes inputs and produces 1:1 outputs that cumulatively give you the illusion of awareness.
So given that you now understand my definition, let’s get back to my original question for Materialists, which was: why does this awareness arise in certain materials and not others? Everything including us is “stardust”, so why is some of this dust aware and some not?
I’m sorry, but, yes, it is. Thermoreceptors in nerve cells send biochemical electricity to the spinal column and directly back to the source of heat, causing an automatic withdrawal reflex. The biochemical electricity that reaches the brain that we interpret as “pain” comes later.
I’m still not understanding this “1:1 input/output response” thing of yours.
By 1:1 I mean for every possible input, there is one and only one possible output. In the context of a computer program it’s quite simple to understand. Do you know what an IF/THEN statement is in programming? IF x THEN do y. That’s all there is to Clippy and every other program in existence.
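In code, the 1:1 claim looks like this (a minimal sketch; the particular inputs and outputs are just placeholders):

```python
# The 1:1 claim in code: a pure IF/THEN program is a fixed mapping
# from input to output. Same input, same output, every single time.

def program(x):
    if x == "hot":
        return "move hand"
    else:
        return "do nothing"

# Determinism check: repeated identical inputs never diverge.
outputs = {program("hot") for _ in range(1000)}
print(outputs)  # the set contains exactly one output
```

However many branches you add, each input still lands on exactly one output.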
Continuing with the analogy, this would be expressed as the function “IF hot THEN move hand.” So why is it possible for a person to control the reflex and touch a hot surface without immediately flinching? There’s obviously the potential for some other input governing the reaction; therefore for this input, there is more than a single possible output, ergo it’s not merely a biochemical response to stimuli; it’s a response to the stimulus/response itself.
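The override argument amounts to saying the reaction is a function of more than one input. A toy sketch (the "intention" input is hypothetical shorthand for whatever governs the override):

```python
# Sketch of the override argument: once a second input (call it
# "intention") governs the reaction, one stimulus no longer fixes
# one output.

def respond(stimulus, intention):
    if stimulus == "hot" and intention != "endure":
        return "move hand"
    return "hold still"

print(respond("hot", None))      # reflex wins
print(respond("hot", "endure"))  # same stimulus, different output
```

Of course, a skeptic can reply that this is still 1:1 over the *pair* of inputs, which is exactly the disagreement in this thread.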
I thought I was answering his points. He’s trying to say that 1:1 input/output is something computers do while animals have something like 1:4 input/output or something. What I’m saying is that the “4” coming from the “1” is simply a nested IF/THEN statement and that, given enough nesting, you can make an Excel spreadsheet appear to be “aware”…and I’m trying to figure out the difference between the “illusion of awareness” and “actual awareness”. Is there any?
Information processing is a required, but not sufficient, element of awareness. So yes, computers process information, but there is still something missing for them to possess awareness; what that something is remains quite a mystery.
I agree that there is no clear demarcation line on awareness, but I think that you place a color where there should be black, if I may use a black-white gradient analogy.
Actually, I think that there is not so much difference between self-awareness and awareness. Self-awareness means that you know that you exist (ergo you don’t want to die), so it requires somewhat more cognitive ability than merely knowing that things around you exist.
And the difference between programmed and non-programmed behaviour… You can look at a computer as a black box that takes input and produces output depending on its state. The state is represented by computer memory, just like a human’s mental state is represented by the brain and the connections between neurons. So what is the difference between computers and humans or animals? The answer is quite simple: computer state is mostly static. Every program is divided into instructions (algorithms responsible for performing tasks) and data; the former never change (unless a computer virus modifies them), and the latter change quite rarely. For example, when you move a mouse around the screen or press a button, only a few memory cells are actually modified! The state of the brain, on the other hand, changes every second, and it changes drastically. The brain is an analogue “device”, so it’s practically impossible for the brain to take the same state twice in a lifespan. That’s what I mean by “non-programmed”: you cannot predict its state at any time, as opposed to computers.
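The black-box-with-state view can be written down directly as a state machine. A minimal sketch (the update rule here is made up purely for illustration):

```python
# The "black box" view in miniature: output depends on both the
# input and the current state, and each input updates the state.

class BlackBox:
    def __init__(self):
        self.state = 0  # the machine's memory

    def step(self, inp):
        self.state = (self.state + inp) % 7  # hypothetical update rule
        return "ping" if self.state == 0 else "silent"

box = BlackBox()
print(box.step(3))  # state becomes 3 -> "silent"
print(box.step(4))  # state becomes 0 -> "ping"
print(box.step(4))  # same input as before, but state is now 4 -> "silent"
```

Note the last two lines: the same input produces different outputs because the stored state differs, which is true of computers and brains alike; the disagreement is over how large and how dynamic that state space is.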
I’d argue that the missing factor is complexity. Computers are actually quite stupid; all we’ve been doing over the last century is increasing their power. A laptop still follows the same simple step-by-step logic process as a 1960s room-filling computer, except it is much faster at doing it. Once a way of making programs more complex is found (through quantum events, perhaps), we may come closer to making an intelligent machine.
Saying that a brain is too complex or unpredictable to be analogous to a computer is incorrect. Humans will react with reasonable predictability to different stimuli, and thinking patterns are found to be common across groups. I believe that the extreme complexity of the brain deceives people into thinking it to be some kind of magic box that randomly creates thoughts, rather than a machine following a very convoluted program. A machine is still a machine, no matter how dynamic or advanced.
I’d also like to point out that biological animals are pretty much machines as well, except using organic structures and having a very complicated CPU.
Digital computers may be divided into discrete cycles, but Seba, are you trying to draw the line between awareness and reactivity by whether or not signals are sent concurrently? Or should I say, by whether there’s variability in the speed of signals due to unpredictable analog matter?
We’ve had analog computers in the past, subject to the variance of electric current. We introduce (pseudo)random numbers as parameters in a number of modern computer functions. I don’t think unpredictability is a necessary part of being aware, environmentally-aware or self-aware.
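On the pseudorandom point: a seeded generator looks unpredictable from the outside but is fully deterministic, which is exactly why I doubt unpredictability matters here. A quick Python demonstration:

```python
# (Pseudo)randomness doesn't buy genuine unpredictability: a seeded
# generator replays the exact same "random" sequence on every run.

import random

def noisy_response(seed):
    rng = random.Random(seed)          # independent, seeded generator
    return [rng.randint(0, 9) for _ in range(5)]

print(noisy_response(42))                       # looks arbitrary
print(noisy_response(42) == noisy_response(42)) # prints True: deterministic
```

So a program with "random" parameters is still a 1:1 mapping once you count the seed as part of the input.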
By complexity you mean a completely different architecture (i.e. not the Von Neumann architecture), right? Well, I believe that even with the Von Neumann architecture artificial awareness could be achieved, but it would require much more computing power and memory than is available today (perhaps through cluster computing), and of course it would have to be based on neural networks.
Well, I didn’t say that, but still, the brain has certain advantages over digital computers. Like I said, signal thresholds between neurons are continuous in nature. In the case of computers, on the other hand, even changes in state aren’t continuous, because they occur at discrete moments in time determined by an oscillator. A continuous function can of course be approximated by a discrete one, but my point was that computer programs have a limited number of states, as opposed to the brain, where the number of states is unimaginably large.
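To put the discrete-versus-continuous point in code: a clocked digital system can only sample a continuous signal at oscillator ticks, and with n-bit values its state space is strictly finite. A small sketch (the sine wave stands in for any continuous signal; the 3-bit resolution is arbitrary):

```python
# Discrete vs. continuous: a clocked system samples a continuous
# signal only at ticks, and quantises each sample to a finite set
# of levels (here 2**3 = 8 of them).

import math

def sample(ticks, bits=3):
    levels = 2 ** bits  # finite state space per sample
    out = []
    for t in range(ticks):
        value = (math.sin(t / 2.0) + 1) / 2   # continuous signal in [0, 1]
        out.append(int(value * (levels - 1))) # quantised to `bits` bits
    return out

print(sample(8))  # a handful of integer levels, never more than 8 distinct
```

The continuous signal takes uncountably many values; the sampled program can only ever visit 8 of them, however long it runs.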
I won’t deny that. I would even say that most of the simple animals like insects don’t have awareness at all, but that’s just my opinion.
I must say that I cannot draw that line, because I’m not really sure what awareness is or how it works; probably nobody is. One thing I’m sure of, though: the memory state must be changing constantly, it cannot just freeze waiting for a signal. Secondly, I would expect that aware beings could learn new things and change their behaviour (or algorithms), and as far as I know most robots can’t do that.
Yep, those good old Polish analog computers… Those were true pieces of art in the 60s; too bad they were no match for the microprocessors.