Brainwashing the geth


290 replies to this topic

#226
TheMufflon
  • Members
  • 2,265 posts

Solomen wrote...

TheMufflon wrote...

Solomen wrote...
In all seriousness, they have mapped and simulated the neurons of half a mouse brain. The simulation behaves exactly like the neurons of half a mouse brain.


No, they have not. 'They' haven't even been able to simulate the nervous system of a nematode yet.


Where have you been living since 2004? 


Since you are so obviously familiar with the relevant articles, why don't you provide some citations so that the rest of us might peruse them?

Edited by TheMufflon, 24 April 2010 - 08:25.


#227
Koen Casier
  • Members
  • 245 posts

abstractwhiz wrote...

I understand the current consensus is that human consciousness is a classical phenomenon, not a quantum one, so human brains are likely completely deterministic. That paper you linked to is by Stuart Hameroff, and he is kinda notorious for beating a dead horse in this regard. The mechanism he originally proposed with Roger Penrose was shown to decohere too fast to allow any meaningful part in consciousness. He's got some ideas for mechanisms that prevent that, but I'm highly suspicious. Keeping even tiny quantum systems from decohering is insanely difficult - the most miniscule external disturbance causes it - and I doubt that any naturally evolved system would use something like this. 


Honest question (sorry, I probably don't use the right words): would cumulative quantum variances in the individual atomic components of a neuron cause slight variances in how that neuron works over time? And while on average it is deterministic, would it in edge cases be far less deterministic (in other words, have more randomness)?

If that is the case (and I don't know if it is, that's why I ask), wouldn't the great number of neurons involved in a decision imply that some of those neurons are in edge cases, causing randomness/interference that makes the whole less than completely deterministic? (At least in any specific case, while at the same time being deterministic on average.)

Edit: fixed it a bit, still bad, need to practice asking questions ;)

Edited by Koen Casier, 24 April 2010 - 08:16.


#228
abstractwhiz
  • Members
  • 169 posts

Koen Casier wrote...

abstractwhiz wrote...

I understand the current consensus is that human consciousness is a classical phenomenon, not a quantum one, so human brains are likely completely deterministic. That paper you linked to is by Stuart Hameroff, and he is kinda notorious for beating a dead horse in this regard. The mechanism he originally proposed with Roger Penrose was shown to decohere too fast to play any meaningful part in consciousness. He's got some ideas for mechanisms that prevent that, but I'm highly suspicious. Keeping even tiny quantum systems from decohering is insanely difficult - the most minuscule external disturbance causes it - and I doubt that any naturally evolved system would use something like this.


Honest question (sorry, I probably don't use the right words): would cumulative quantum variances in the individual atomic components of a neuron cause slight variances in how that neuron works over time, and while on average it is deterministic, would it at edge cases be far less deterministic (in other words, have more randomness)? If that is the case (and I don't know if it is, that's why I ask), wouldn't the great number of neurons involved in a decision imply that some of them are in edge cases, causing randomness/interference that makes the whole less than completely deterministic? (At least in any specific case, while at the same time being deterministic on average.)


Well, you need a real physicist now =], but from what I know, quantum effects can essentially be ignored at macroscopic scales. From a quantum mechanical viewpoint, there is a finite probability of all the atoms of your body suddenly tunneling through the Earth and reassembling on the other side of the planet. Unfortunately, the probability is so ridiculously small that you'd have to wait millions of times longer than the current age of the universe for it to happen. I think your idea is in the same boat.
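Just to put a number on how absurdly small such macroscopic tunneling probabilities are, here is a back-of-envelope sketch (my own illustration, not from the post): the standard WKB suppression factor exp(-2*kappa*d) for a single electron against a modest 1 eV barrier only 1 mm wide. A whole body tunneling through the Earth is unimaginably less likely still.

```python
import math

# WKB-style suppression for tunneling through a rectangular barrier:
# T ~ exp(-2 * kappa * d), with kappa = sqrt(2 * m * V) / hbar.
hbar = 1.054_571_817e-34   # reduced Planck constant, J*s
m_e = 9.109_383_7e-31      # electron mass, kg
eV = 1.602_176_634e-19     # one electronvolt in joules

V = 1.0 * eV               # modest 1 eV barrier height
d = 1e-3                   # "macroscopic" barrier width: 1 mm

kappa = math.sqrt(2.0 * m_e * V) / hbar          # decay constant, ~5e9 per metre
log10_T = (-2.0 * kappa * d) / math.log(10.0)    # log10 of the probability

print(f"kappa   = {kappa:.3e} 1/m")
print(f"T ~ 10^({log10_T:.3e})")  # exponent around minus four million
```

So even for one electron the probability is roughly 10 to the power of minus several million; "wait longer than the age of the universe" is an understatement.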

#229
Koen Casier
  • Members
  • 245 posts

abstractwhiz wrote...

Koen Casier wrote...

abstractwhiz wrote...

I understand the current consensus is that human consciousness is a classical phenomenon, not a quantum one, so human brains are likely completely deterministic. That paper you linked to is by Stuart Hameroff, and he is kinda notorious for beating a dead horse in this regard. The mechanism he originally proposed with Roger Penrose was shown to decohere too fast to play any meaningful part in consciousness. He's got some ideas for mechanisms that prevent that, but I'm highly suspicious. Keeping even tiny quantum systems from decohering is insanely difficult - the most minuscule external disturbance causes it - and I doubt that any naturally evolved system would use something like this.


Honest question (sorry, I probably don't use the right words): would cumulative quantum variances in the individual atomic components of a neuron cause slight variances in how that neuron works over time, and while on average it is deterministic, would it at edge cases be far less deterministic (in other words, have more randomness)? If that is the case (and I don't know if it is, that's why I ask), wouldn't the great number of neurons involved in a decision imply that some of them are in edge cases, causing randomness/interference that makes the whole less than completely deterministic? (At least in any specific case, while at the same time being deterministic on average.)


Well, you need a real physicist now =], but from what I know, quantum effects can essentially be ignored at macroscopic scales. From a quantum mechanical viewpoint, there is a finite probability of all the atoms of your body suddenly tunneling through the Earth and reassembling on the other side of the planet. Unfortunately, the probability is so ridiculously small that you'd have to wait millions of times longer than the current age of the universe for it to happen. I think your idea is in the same boat.


What about macroscopic entropy factors? They almost certainly have an influence on the brain...

Actually, that would be irrelevant; it would only mean that the mind is random but still mechanical... You might not be able to predict with 100% accuracy what it will do the next cycle, even given all the information, but in hindsight it would still be perfectly mechanically sound... In the same way that you might not know how a die will roll when you cast it, but you can still accurately understand why it rolled that way afterwards. (Or something like that.)
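That "random but still mechanical" idea can be loosely illustrated with a pseudorandom die (my own sketch, not from the post): without access to the internal state (the seed), the next roll looks unpredictable, yet given that state every roll is perfectly reproducible in hindsight.

```python
import random

def roll_die(seed: int, n: int) -> list:
    """Roll a six-sided die n times, starting from a fixed internal state."""
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(n)]

# To an observer who doesn't know the seed, this sequence looks random...
first_run = roll_die(seed=2010, n=10)
print(first_run)

# ...but the mechanism is deterministic: replaying the same internal
# state reproduces every roll exactly ("mechanically sound in hindsight").
assert roll_die(seed=2010, n=10) == first_run
```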

#230
Guest_Shandepared_*
  • Guests

Ladi wrote...

Here's another way of looking at it:

1. The Heretics reached a consensus that it was okay to brainwash the Geth
2. They therefore could not logically find fault with the same thing being done to them

Crisis averted, no one has to die. (Cept the dudes on their ship. Plus the fact that Shep shot first.)


Again I ask: why do you base your decision on what you think is best or fairest for the geth, and not on what is more strategically beneficial for humanity (or even the rest of the galaxy)?

abstractwhiz wrote...

Yep. The two of you would diverge from that point on, though.


Fair enough, if that is how you and Dean_the_Young feel. Personally, I think the physical structures of the brain are necessary to create the actual mind.

Edited by Shandepared, 24 April 2010 - 08:47.


#231
Guest_Shandepared_*
  • Guests
Should have been more careful.

Edited by Shandepared, 24 April 2010 - 08:47.


#232
Dean_the_Young
  • Members
  • 20,676 posts
And people think you can't amicably agree to disagree, Shand. :D

#233
Koen Casier
  • Members
  • 245 posts

Shandepared wrote...

Ladi wrote...

Here's another way of looking at it:

1. The Heretics reached a consensus that it was okay to brainwash the Geth
2. They therefore could not logically find fault with the same thing being done to them

Crisis averted, no one has to die. (Cept the dudes on their ship. Plus the fact that Shep shot first.)


Again I ask: why do you base your decision on what you think is best or fairest for the geth, and not on what is more strategically beneficial for humanity (or even the rest of the galaxy)?


Well, you could assert that you are doomed no matter what; I base that on the fact that in apparently more than 30 million years no civilization has ever stopped the Reapers. If any had, there would not be a Reaper problem.

If you blow them up, they are gone. No danger, no real reward. (Except if you make your Paragon/Renegade check in ME3.)

But if you help them, you get strategic benefits in your fight against the Reapers no matter what:
The worst case is that the geth turn on you (geth: fooled you). That is bad, but they would still fight the Reapers on arrival (since the Reapers are a greater threat to their existence than organics are), provide a distraction for the Reapers, and even give your troops training.
If they stay neutral, they will still be a target for the Reapers, buying you more time.
If they ally with you, they can be integrated into a defensive formation, giving you a lot more time; additionally, you gain intelligence, since they have the memories of geth who worked closely with the Reaper envoys (Saren, Sovereign, Harbinger).

You might still be doomed even if you help them, but you get more time (they provide extra targets for the Reapers) plus extra benefits in the form of intelligence in some cases (or at least the potential for those benefits).

(Edit: deleted a sentence that was misplaced and gave a wrong meaning.)
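The structure of that argument is a dominance claim: whatever the geth do afterwards, helping them is claimed to be at least as good as destroying them. A toy sketch (the ordinal payoffs below are entirely hypothetical, chosen only to mirror the scenarios listed above, not anything from the game):

```python
# Hypothetical ordinal payoffs (higher = better for organics) for each
# option, under each way the rewritten geth might behave afterwards.
payoffs = {
    # scenario:           (destroy, rewrite)
    "geth turn on you":   (0, 1),  # even hostile geth distract the Reapers
    "geth stay neutral":  (0, 2),  # extra Reaper targets buy time
    "geth ally with you": (0, 3),  # time plus intelligence on the envoys
}

# 'Rewrite' weakly dominates 'destroy': at least as good in every
# scenario, and strictly better in at least one.
dominates = (all(r >= d for d, r in payoffs.values())
             and any(r > d for d, r in payoffs.values()))
print(dominates)  # True under these assumed payoffs
```

Of course, the whole disagreement in the thread is over whether the worst-case row really is non-negative, which is exactly what Shandepared disputes below.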

Edited by Koen Casier, 24 April 2010 - 09:01.


#234
mybudgee
  • Members
  • 23,047 posts
Holy nerd sparring!

#235
Guest_Shandepared_*
  • Guests

Koen Casier wrote...

Well, you could assert that you are doomed no matter what; I base that on the fact that in apparently more than 30 million years no civilization has ever stopped the Reapers. If any had, there would not be a Reaper problem.


Certainly the odds are very much against us.

Koen Casier wrote...

But if you help them, you get strategic benefits in your fight against the Reapers no matter what:
The worst case is that the geth turn on you (geth: fooled you). That is bad, but they would still fight the Reapers on arrival (since the Reapers are a greater threat to their existence than organics are), provide a distraction for the Reapers, and even give your troops training.


No, if they reverted to worshipping Sovereign then they would aid the Reapers.

It is the rachni choice all over again. By killing the queen, you eliminate both the best and the worst outcomes. If you save them, there is a chance for things to go either very well or horribly wrong. Personally, I will never take the risk of things going completely FUBAR. I don't think you should gamble with that kind of thing.

#236
Nu-Nu
  • Members
  • 1,574 posts
I am not geth; I do not have their extreme logic. I will have faith that they will help me and be my allies, and if someone tries to hack them, I will be there to stop them.

#237
Dean_the_Young
  • Members
  • 20,676 posts
Why only faith? Faith doesn't stop a bullet.

#238
Solomen
  • Members
  • 710 posts

TheMufflon wrote...

Solomen wrote...

TheMufflon wrote...

Solomen wrote...
In all seriousness, they have mapped and simulated the neurons of half a mouse brain. The simulation behaves exactly like the neurons of half a mouse brain.


No, they have not. 'They' haven't even been able to simulate the nervous system of a nematode yet.


Where have you been living since 2004? 


Since you are so obviously familiar with the relevant articles, why don't you provide some citations so that the rest of us might peruse them?


Here is a quick link with a basic overview.
http://en.wikipedia....wiki/Blue_Brain

#239
cruc1al
  • Members
  • 2,570 posts

Solomen wrote...

TheMufflon wrote...

Solomen wrote...

TheMufflon wrote...

Solomen wrote...
In all seriousness, they have mapped and simulated the neurons of half a mouse brain. The simulation behaves exactly like the neurons of half a mouse brain.


No, they have not. 'They' haven't even been able to simulate the nervous system of a nematode yet.


Where have you been living since 2004? 


Since you are so obviously familiar with the relevant articles, why don't you provide some citations so that the rest of us might peruse them?


Here is a quick link with a basic overview.
http://en.wikipedia....wiki/Blue_Brain


To quote:

"The initial goal of the project, completed in December 2006, was the simulation of a rat neocortical column, which can be considered the smallest functional unit of the neocortex [...] and contains about 60,000 neurons in humans; rat neocortical columns are very similar in structure but contain only 10,000 neurons."

The human neocortical column contains about 0.00006 to 0.00012 percent of the brain's total number of neurons. They've hardly mapped "half a mouse brain". Besides, they were rats.
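That percentage range checks out if you assume a total human neuron count somewhere between roughly 50 and 100 billion (the assumed totals below are mine, not from the post; around 86 billion is the commonly cited modern figure):

```python
# 60,000 neurons per human neocortical column (from the quoted article),
# expressed as a percentage of assumed whole-brain neuron totals.
column_neurons = 60_000

for total in (50e9, 86e9, 100e9):  # assumed totals: low, modern, high
    pct = 100.0 * column_neurons / total
    print(f"{total:.0e} total neurons -> {pct:.5f}% per column")

# 50e9 gives 0.00012% and 100e9 gives 0.00006%, matching the quoted range.
```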

Edited by cruc1al, 25 April 2010 - 12:22.


#240
Solomen
  • Members
  • 710 posts

cruc1al wrote...

Solomen wrote...

TheMufflon wrote...

Solomen wrote...

TheMufflon wrote...

Solomen wrote...
In all seriousness, they have mapped and simulated the neurons of half a mouse brain. The simulation behaves exactly like the neurons of half a mouse brain.


No, they have not. 'They' haven't even been able to simulate the nervous system of a nematode yet.


Where have you been living since 2004? 


Since you are so obviously familiar with the relevant articles, why don't you provide some citations so that the rest of us might peruse them?


Here is a quick link with a basic overview.
http://en.wikipedia....wiki/Blue_Brain


To quote:

"The initial goal of the project, completed in December 2006, was the simulation of a rat neocortical column, which can be considered the smallest functional unit of the neocortex [...] and contains about 60,000 neurons in humans; rat neocortical columns are very similar in structure but contain only 10,000 neurons."

The human neocortical column contains about 0.00006 to 0.00012 percent of the brain's total number of neurons. They've hardly mapped "half a mouse brain". Besides, they were rats.


That is just the quick wiki. You're splitting hares.

#241
Dean_the_Young
  • Members
  • 20,676 posts
Splitting rabbits? Where?

#242
Solomen
  • Members
  • 710 posts

Dean_the_Young wrote...

Splitting rabbits? Where?


The difference between half a mouse brain and the simulated neocortical column of a rat, completed 4 years ago.

#243
Pacifien
  • Members
  • 11,527 posts

Shandepared wrote...
It is the rachni choice all over again. By killing the queen, you eliminate both the best and the worst outcomes. If you save them, there is a chance for things to go either very well or horribly wrong. Personally, I will never take the risk of things going completely FUBAR. I don't think you should gamble with that kind of thing.


Without heavy risks there can be no--ah, screw it.

#244
Dean_the_Young
  • Members
  • 20,676 posts
That's not even true, really. Quite often, the greatest rewards carry the fewest risks. It is just so normal to take the smart path that we hardly think of it as exceptional.

#245
Andrew_Waltfeld
  • Members
  • 960 posts

Dean_the_Young wrote...

Why only faith? Faith doesn't stop a bullet.


Because sometimes you have to take that leap and run the odds. I personally blew them up; a 5% loss of population wasn't that bad to the geth (well, it's horrible, but you get my point). But I can still see reasons for saving the geth; I just didn't think it was worth the risk to convert them back to normal and have a chance of them taking even more of the geth population with them to the Reapers this time. However, other people are willing to take the leap of faith required for the rachni queen and the geth for various reasons. I took the leap of faith with the rachni queen.

#246
Guest_Shandepared_*
  • Guests

Andrew_Waltfeld wrote...

Because sometimes you have to take that leap and run the odds.


No, not when the odds mean that millions, billions, or trillions of people could die.

#247
Inquisitor Recon
  • Members
  • 11,811 posts

Dean_the_Young wrote...

That's not even true, really. Quite often, the greatest rewards carry the fewest risks. It is just so normal to take the smart path that we hardly think of it as exceptional.


Nonsense, some gambling proves that wrong.

Anyway, you robot sympathizers won't get any sympathy from the robot overlords. You will be lined up and shot along with me and the rest of the resistance. How shameful.

#248
Inquisitor Recon
  • Members
  • 11,811 posts
Double post...

Edited by ReconTeam, 25 April 2010 - 05:44.


#249
Guest_Shandepared_*
  • Guests

ReconTeam wrote...

Nonsense, some gambling proves that wrong.


The overwhelming majority of people who gamble win nothing but debt.
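That "overwhelming majority lose" claim is just what a negative expected value implies over many bets. A quick simulation sketch (the roulette-style even-money bet with an 18/38 chance of winning, and all the other numbers, are my own illustrative choices):

```python
import random

P_WIN = 18 / 38          # even-money American roulette bet (house edge ~5.3%)
ROUNDS = 1000            # one-unit bets per gambler
GAMBLERS = 2000

rng = random.Random(42)  # fixed seed so the sketch is reproducible

def net_result() -> int:
    """Net units won or lost after ROUNDS one-unit even-money bets."""
    return sum(1 if rng.random() < P_WIN else -1 for _ in range(ROUNDS))

results = [net_result() for _ in range(GAMBLERS)]
losers = sum(1 for r in results if r < 0)

print(f"expected value per bet: {2 * P_WIN - 1:+.4f} units")
print(f"fraction of gamblers who ended down: {losers / GAMBLERS:.0%}")
```

With a house edge of a few percent, the vast majority of simulated gamblers finish in the red after a thousand bets, even though any single bet is nearly a coin flip.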

#250
Inquisitor Recon
  • Members
  • 11,811 posts

Shandepared wrote...
The overwhelming majority of people who gamble win nothing but debt.


Yeah, but if you happen to be in that 0.01%, you're filthy rich. See?