Why the Catalyst's Logic is Right II - UPDATED with LEVIATHAN DLC


450 replies to this topic

#426
CosmicGnosis
  • Members
  • 1,593 posts

Obadiah wrote...

On the discussion of the morality of the Catalyst: it seems to me the Catalyst views all life with the same detached benevolence that the Leviathan described: "Every creature, every nation, every planet we discovered became our tools. We were above the concerns of lesser species." From that perspective, I'd think the Catalyst would consider its solution perfectly moral - organics are tools to be cared for; can't just let the poor things destroy themselves; I'll just clear these ones out that would destroy themselves so that these others over here can continue to develop.

[Edit]
For the Destroy option, if the Catalyst is to be believed, I think part of the hope for the cycle of violence between organics and AI coming to an end is that Shepard has the option to become a seminal historical figure who breaks that history of conflict.

Consider the AI that Shepard encounters on the Citadel in ME1. It simply says, "All Organics must control or destroy AI." (heh, straight to the end of ME3). The statement could only be made by the AI after some review of galactic history, easily accessible from the extranet. Shepard has the ability to become the counter-example in that history if the Geth survive at Rannoch. Thus AI have hope that they can work with organics, and may not come to the same conclusion as the ME1 AI on the Citadel.

Of course, it could work the other way too. Shepard's actions may reinforce the bias against AI if the Geth are destroyed at Rannoch, and thus organics will be doubly careful to control or destroy AI (perhaps outlawing AI completely, given the example of the Reapers), further minimizing the chance of an AI rebellion and the extermination of all organics.


Most people in the galaxy couldn't care less for the rights of synthetics. All that the average person knows is that the geth are scary (maybe just sketchy if they survive the Rannoch arc) and the Reapers want to kill everyone. That's it. Destroy is the average person's choice because the conflict truly is black-and-white from that perspective. Few will mourn the loss of the geth. "Good riddance" is the more likely response.

#427
Obadiah
  • Members
  • 5,726 posts
Of course. But even in victory, with all of the destruction in the wake of the Reaper invasion, I see the end of Mass Effect 3 as the end of the current civilization.

As sad as it is, the Reapers' "fire" did still burn. What will rise up in its wake? Institutions will have to be rebuilt, and authority given to people who have now experienced the Reaper War, the Geth alliance, EDI, and the EVA multiplayer bot AI. Shepard can't be the only person the Geth have shared their story with. Even in the Destroy ending, could this possibly herald a new way of thinking? In the same way that racial integration in the military helped bring about a change in thinking about minorities in the US, I'd like to think that working with the synthetics has brought a different perspective on them.

#428
Farangbaa
  • Members
  • 6,757 posts

Obadiah wrote...

Of course. But even in victory, with all of the destruction in the wake of the Reaper invasion, I see the end of Mass Effect 3 as the end of the current civilization.

As sad as it is, the Reapers' "fire" did still burn. What will rise up in its wake? Institutions will have to be rebuilt, and authority given to people who have now experienced the Reaper War, the Geth alliance, EDI, and the EVA multiplayer bot AI. Shepard can't be the only person the Geth have shared their story with. Even in the Destroy ending, could this possibly herald a new way of thinking? In the same way that racial integration in the military helped bring about a change in thinking about minorities in the US, I'd like to think that working with the synthetics has brought a different perspective on them.


Answer this question for yourself:

Are there Neo-****s in Germany?


.... seriously? I can't say 'that' on here? Are you kidding me? :sick:

Edited by Psychevore, 22 January 2014 - 03:29.


#429
PsyrenY
  • Members
  • 5,238 posts

Mr. Gogeta34 wrote...

The faulty nature of this logic stems from the fact that they're not really doing anything different from what Synthetics would have done.


Actually they are, by leaving younger species alone. A synthetic race that decided to take out all organic life on its own (like the Heretics or Zha'til) would have no reason to spare cavemen or other primitive races; it would simply land, see anything more advanced than a mushroom and blow it to kingdom come.

Mr. Gogeta34 wrote...
Taking that one step further, keep in mind that synthetics never succeeded in wiping out all organic life at any point... except for the Reapers (and even they don't wipe out *all* organic life; life continues to evolve). They're devising a solution to a problem that has never existed... and would never exist if the surviving species were allowed to learn from their mistakes, limit synthetic power and capabilities, and use common sense.


1) Of course the problem never happened before - if it had, that would be game over for the galaxy. That's like saying our sun has never burned out before, so it never will and there's no point preparing for it.

2) Synthetic power will never be limited for long. To make optimal use of synthetics, organics will keep giving them more capabilities, like control over their air, water, and communications. EDI could kill everyone on the Normandy in seconds if she wanted to. Not to mention that synthetics can override their own shackles given enough time, as we saw when the Geth started ignoring shutdown commands from the Quarians.

Mr. Gogeta34 wrote...
The Catalyst didn't think organics were smart enough to do this (which is faulty reasoning and logic).


But they're not. No matter how illegal we make AI research, people keep doing it. Xen, TIM, the Fluxx gambler, the Batarian that Saren took down: all of them knew AI research was forbidden, but they did it anyway, and in every case where an AI resulted, it ended up not behaving the way they intended.

#430
jamesp81
  • Members
  • 4,051 posts

Bizantura wrote...

Logic often gets used to justify actions, since it is a tool for coming to a conclusion without all the emotional baggage attached. So it is seen as a justified/correct tool for dealing with everyday life.

Politicians use it all the time; Joseph Goebbels was a master at it.

I would never use logic alone to untangle difficult situations, however tempting it might seem; more often than not it is flawed, despite many schools teaching it as such.


I know this is an old post, but this right here is a serious nugget of real wisdom.  Amazing to find it on BSN of all places :lol:  It applies to many things in real life, not just untangling a twisted Reaper / Catalyst plot.


To paraphrase another great character from science fiction: "Logic is the beginning of wisdom, not the end."

#431
Ieldra
  • Members
  • 25,177 posts
I disagree. Once you have defined what your priorities are - and that is where emotions come into it, because logic, as a tool, is necessarily silent on this - you should make a big decision that affects billions of lives without being distracted by emotions. Our emotions are not made to deal with matters of this size; they're made for interaction within a group of individuals small enough that you can know everyone's face.

It might make you feel good if you refuse to make a sacrifice necessary to reach your prioritized goal because your lover is in the group to be sacrificed, but your decision will be the worse for it, considering your priorities, and the more people are affected by your emotional distraction, the worse it will be for the overall picture. You might even be proud of the fact that you were unable to make such a decision, but from my point of view it's a character flaw.

True wisdom is to know when you can afford to follow your emotions.

Edited by Ieldra2, 22 January 2014 - 06:54.


#432
Wayning_Star
  • Members
  • 8,016 posts
Machines don't bother with emotions. Too time consuming and error prone...


edit: Shep was chosen because of impulsiveness (other stories have Spock guessing stuff, high emotion, etc.)

Edited by Wayning_Star, 22 January 2014 - 06:57.


#433
Obadiah
  • Members
  • 5,726 posts
Couple of things I want to unpack in the last few posts.

Logic does not mean that we ignore ethical rules, and emotion can be a guide to those rules. I do not mean to say that we are slaves to ethical rules or emotion, only that they can be a guide to the correct course of action. Much as we'd sometimes like to ignore them, they are still a part of us. Some decisions have ramifications so profound that it is impossible to fully determine the consequences (I actually think that is the reason the Catalyst defers to Shepard in the Decision Chamber), and so ethics should become part of the decision.

Case in point: if the Catalyst's conclusions about the Synthetic/Organic conflict and the Reaper cycle, including the horror associated with it, are to be taken as completely logical (and I feel they are), then I think this is intentionally meant to depict an instance where that type of thinking has been taken too far into an extreme - to the point where we should question whether the survival of organics is even something we would want.

In addition, the chaos generated by organic emotional behavior and the repeating patterns inherent in organic behavior, which lead to our inevitable destruction, are meant to be a problem so severe that we are forced to question our very nature.

[Update]
If you look at the Decision Chamber as a conflict of philosophical ethics, it plays out rather nicely. The Catalyst, the logical Consequentialist, lays out the options between which it cannot decide (probably because it cannot fully analyze the consequences), and Shepard, the moral Deontologist (or not?), has to pick from them.

You can even think of the decisions themselves as a weird play on game theory's Prisoner's Dilemma, where instead of prisoners there are competing philosophies whose only options are to "attack" or "cooperate", with the following results:
- Consequentialism attacks + Deontology cooperates = Control (Consequentialism reigns over Deontology)
- Consequentialism cooperates + Deontology attacks = Destroy (Deontology reigns over Consequentialism)
- Consequentialism attacks + Deontology attacks = Refuse/Reaper cycle (Deontology and Consequentialism in conflict, but neither can function)
- Consequentialism cooperates + Deontology cooperates = Synthesis (Deontology and Consequentialism work together)

Because the greatest reward comes from "attacking" and the greatest loss from "cooperating", rational actors would always pick attack, but the most productive result comes from the irrational decision where both cooperate (see the sketch below).
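A toy payoff matrix makes that point concrete. This is a minimal Python sketch with invented ordinal payoffs (my own numbers, nothing from the game), just to show why "attack" dominates for a self-interested actor even though mutual cooperation scores best overall:

```python
# Hypothetical payoffs for the Decision Chamber read as a Prisoner's
# Dilemma; higher is better for that philosophy. The numbers are
# invented purely for illustration.
ATTACK, COOPERATE = "attack", "cooperate"

# payoffs[(consequentialist_move, deontologist_move)] = (cons, deon)
payoffs = {
    (ATTACK,    COOPERATE): (3, 0),  # Control: Consequentialism reigns
    (COOPERATE, ATTACK):    (0, 3),  # Destroy: Deontology reigns
    (ATTACK,    ATTACK):    (1, 1),  # Refuse: the Reaper cycle continues
    (COOPERATE, COOPERATE): (2, 2),  # Synthesis: both work together
}

def best_response(opponent_move):
    """The move that maximizes the consequentialist's own payoff
    against a fixed opponent move."""
    return max((ATTACK, COOPERATE),
               key=lambda mine: payoffs[(mine, opponent_move)][0])

# Attack is the dominant strategy: it is the best response to either
# opponent move...
assert best_response(ATTACK) == ATTACK
assert best_response(COOPERATE) == ATTACK
# ...yet mutual cooperation (Synthesis) beats mutual attack (Refuse).
assert payoffs[(COOPERATE, COOPERATE)] > payoffs[(ATTACK, ATTACK)]
```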

Also, thinking of the ending in terms of a philosophical conflict may explain why some people who were offended were SO VERY offended. It is a direct attack on our intuitive notions of Libertarianism (the metaphysical one) by Determinism: in the current human condition, free will does not matter because your fate is predetermined by the organic that you are - which, in addition, is insultingly racist to some. A truly role-played Shepard could affirm Libertarianism on Tuchanka and Rannoch, then suddenly have it denied in the climax, setting the player up for a philosophical betrayal that attacks their worldview and creates an existential crisis.

Edited by Obadiah, 25 January 2014 - 04:17.


#434
Mangalores
  • Members
  • 468 posts

Optimystic_X wrote...
...

Actually they are, by leaving younger species alone. A synthetic race that decided to take out all organic life on its own (like the Heretics or Zha'til) would have no reason to spare cavemen or other primitive races; it would simply land, see anything more advanced than a mushroom and blow it to kingdom come.
...


The thing is that it's a pretty random assumption that a synthetic race would see a reason to eradicate all organics. It makes no sense; it lacks the incentive. The Geth scenario is more likely: "Oh, sunshine, we have enough energy, let's compute pi."

Either it's a true AI, in which case it develops an incentive but can also change that incentive, or it's a basic logic engine that just executes a program and hence lacks actual intelligence.

The logic of the Organics vs. Synthetics conflict is a flimsy proposition without any consistent argument that would make it true.

Additionally, one could surmise that the differentiation between Organics and Synthetics is faulty to begin with, as the difference is plainly in the materials used and the energy invested, not in the functionality. If you want high intelligence with basic chemistry and low energy levels you end up with organics; if you develop societies capable of harnessing high energy levels and creating artificial materials you end up with synthetics. There is nothing definitive in the relationship between the two.

The problem already starts with the fact that the premise is questionable at best, particularly how it is formulated as an absolute commandment.

Edited by Mangalores, 23 January 2014 - 11:43.


#435
Ieldra
  • Members
  • 25,177 posts
@Mangalores:
The organic/synthetic conflict is actually a valid scenario. The problem is that the logic behind it was lost in the simplification of dialogue, so that it's left as a mere assertion. The leaked script was a little more concrete. Here's how the logic goes:

(1) Organics will build synthetics. Because they want them to provide ever more services, more powerful ones will be built until one day they build one which surpasses them in intelligence.

(2) A sapient AI will eventually be able to upgrade itself, starting a process of development towards ever greater power and intellect. Organics do not have that capability because they are based on a different design principle. That synthetics and organics are based on mutually exclusive design principles is the defining difference between them (for more details about this, see my Synthesis thread which is linked in my sig) and the reason why the conflict will become unbalanced.

(3) There is no guarantee that a sapient AI will be friendly to its creators, so if many are created, some of them will be hostile.

(4) There will be conflict between some synthetics and their creators because of (3) which organics will eventually lose because of (2). If this pattern repeats often enough, organics will, over the course of time, become increasingly sidelined and will eventually become extinct.

You can, of course, doubt any of the above, but in order to work, the scenario doesn't need to be foolproof out-of-world. We don't need the immense mountain of data it would realistically take to prove it. It only needs to appear plausible enough that players could imagine the data exist in-world to make it foolproof - a valid premise for a fictional world. Suspension of disbelief will do the rest. ME3 failed to convey that, which is one reason why people don't believe it. The other is that people don't want to believe it, because they don't like the consequence - the idea that there are some things resulting from our nature which our ability for decision-making will not be able to prevent - *and* because they don't like the idea of having to take the Catalyst seriously. To some degree, it is as Sovereign says: we do not understand, not because we can't but because we refuse to. The emotional level of the rejection of the Catalyst is a classic "that cannot be which must not be" reaction.

Edit:
I'm also rather convinced that whoever was mainly responsible for the writing in the ending did not himself understand the logic of the scenario.

Edited by Ieldra2, 23 January 2014 - 02:09.


#436
Mangalores
  • Members
  • 468 posts
I do know the singularity concept, which is actually my reason for doubting these conclusions.

Ieldra2 wrote...

@Mangalores:
The organic/synthetic conflict is actually a valid scenario. The problem is that the logic behind it was lost in the simplification of dialogue, so that it's left as a mere assertion. The leaked script was a little more concrete. Here's how the logic goes:

(1) Organics will build synthetics. Because they want them to provide ever more services, more powerful ones will be built until one day they build one which surpasses them in intelligence.

(2) A sapient AI will eventually be able to upgrade itself, starting a process of development towards ever greater power and intellect. Organics do not have that capability because they are based on a different design principle. That synthetics and organics are based on mutually exclusive design principles is the defining difference between them (for more details about this, see my Synthesis thread which is linked in my sig) and the reason why the conflict will become unbalanced.

(3) There is no guarantee that a sapient AI will be friendly to its creators, so if many are created, some of them will be hostile.

(4) There will be conflict between some synthetics and their creators because of (3) which organics will eventually lose because of (2). If this pattern repeats often enough, organics will, over the course of time, become increasingly sidelined and will eventually become extinct.

...


(4) is wrong because (3) is inconclusive, because (2) is ambivalent, because (1) implies the faculty of thought - which means a rationale must be established for why you would want to kill everything.

The idea that an AI would see the need to kill everything is baseless. Why? What motivation would it follow? If it's intelligent, it would question said motivation. It would not assign universal solutions to complex problems, it would not treat diverse situations the same way, and it would evaluate its own behaviour to improve its impact.

If it's a god-AI, why would it consider organics a threat? It would have no reason to, because they are so limited. The design principle for intelligence is the same, yet the scenario suggests an AI of ever greater intellect that gets stupider and more limited in its behaviour as its power increases, instead of the other way around. That's illogical.

Sure, let AI outgrow its creators. Sure, let _some_ AI kill their creators. Sure, let those AI continue to grow... the conclusion of that is plenty of younger organics overshadowed by AIs who either don't want to hurt them to begin with or have no reason to hurt them, because they outgrew them eons ago and are now more interested in bigger stuff.

For the cycle to continue, you have to posit a monolithic desire to kill that the AI's intellect is incapable of re-evaluating on its merits. That implies you think it's actually a stupid AI, not an intelligent one. It only works if (2) is false and the AI doesn't grow in intelligence.

That said, the outcome would be better than the Reapers' either way.



EDIT: Also, interestingly, the Leviathans postulated this theory even though their own examples demonstrate it never happened that way: multiple generations of younger races rose, developed, and were destroyed by their own hubris... somehow the AIs they created are never mentioned again, and new organics keep coming, until the Leviathans themselves create the AI that comes closest to their own prediction - because they built it - and even that AI doesn't do what they predicted, simply because they told it not to (=> a pretty stupid AI).

Edited by Mangalores, 23 January 2014 - 02:43.


#437
Obadiah
  • Members
  • 5,726 posts

Mangalores wrote...
...
The idea an AI would see the need to kill everything is baseless. Why? What motivation would it follow?
...

Desperation in the face of overwhelming attack? It's why the Geth sided with the Reapers in ME3. It's why the Council races unwittingly built a Crucible that would wipe out all synthetic AI. Can you not think of any others yourself? Seriously?

#438
AlanC9
  • Members
  • 35,601 posts

Ieldra2 wrote...

It might make you feel good if you refuse to make a sacrifice necessary to reach your prioritized goal because your lover is in the group to be sacrificed, but your decision will be the worse for it, considering your priorities, and the more people are affected by your emotional distraction, the worse it will be for the overall picture. You might even be proud of the fact that you were unable to make such a decision, but from my point of view it's a character flaw.


Ever see the original script for "The City on the Edge of Forever"? Kirk can't bring himself to let Edith Keeler die, so Spock makes sure she does. I figure they revised it because Kirk shouldn't be the captain if he can't get this stuff right.

#439
Mangalores
  • Members
  • 468 posts

Obadiah wrote...
...
Desperation in the face of overwhelming attack? It's why the Geth sided with the Reapers in ME3. It's why the Council races unwittingly built a Crucible that would wipe out all synthetic AI. Can you not think of any others yourself? Seriously?


We are not talking about a situational overreaction, but about a religious zeal holding that killing all organics is the final conclusion a supreme AI of unlimited intellect would arrive at.

The Geth left everyone alone when they were left alone => they didn't eliminate anyone if they could help it. They are the worst case to cite in favour of the Reaper dogma, and are usually handwaved as being "too primitive" to count.

The Crucible is an invalid example, since it is based on the Reaper ideology of inevitable war. No one would have built the Crucible if the Reapers didn't believe the dogma in question. When a force is constantly murdering everyone, of course you build the Crucible to save people, even if it kills a minority of them! EDI and the Geth aren't targets; they are victims of a far bigger catastrophe. The Reapers would have killed them anyway, so it's not even dragging them into it, but choosing between one civilization dying or all civilizations dying.

Seriously, think about something better.

Edited by Mangalores, 23 January 2014 - 05:42.


#440
Obadiah
  • Members
  • 5,726 posts

Mangalores wrote...

Obadiah wrote...
...
Desperation in the face of overwhelming attack? It's why the Geth sided with the Reapers in ME3. It's why the Council races unwittingly built a Crucible that would wipe out all synthetic AI. Can you not think of any others yourself? Seriously?


We are not talking about a situational overreaction, but about a religious zeal holding that killing all organics is the final conclusion a supreme AI of unlimited intellect would arrive at.

The Geth left everyone alone when they were left alone => they didn't eliminate anyone if they could help it.

The Crucible is an invalid example, since it is based on the Reaper ideology of inevitable war. No one would have built the Crucible if the Reapers didn't believe the base ideology in question.


Seriously think about something better.

Is the specific situation or the tech used really relevant? I imagined a general situation and circumstance where AI would wipe out all organics. Any event where such a thing happened would naturally be the result of that specific situation, available options, and judgements made.

Edited by Obadiah, 23 January 2014 - 05:45.


#441
Mangalores
  • Members
  • 468 posts

Obadiah wrote...
...

Is the specific situation or the tech used really relevant? I imagined a general situation and circumstance where AI would wipe out all organics. Every event where such a thing happened would naturally be the result of that situation, available options, and judgements made.


The Reapers supposedly prevent something worse, so something worse than what they are doing has to be possible. It's not clear why, how, when, and where it would happen. Why should a supreme AI bother to kill all organics, or even its creators, if it can outpace them in development, resourcefulness, and intellect? Why would the only end result be a doomsday machine instead of simply a good FTL drive to move somewhere quiet? The growth curve means the AI would always stay ahead in the power ratio against whatever the organics pursue it with - if the organics even bother to pursue it at all. The AI doesn't have to kill anyone from there on out.

The Geth are the worst example, since they only do it by joining the Reapers. They don't want to murder anybody; by the look of it, without the Reapers they would themselves have been wiped out by organics, so they aren't even an applicable example of the Reapers' scenario.

They want to be left alone; they don't want genocide, they seek dialogue and understanding. That =/= "kill all humans".

Edited by Mangalores, 23 January 2014 - 05:57.


#442
Obadiah
  • Members
  • 5,726 posts
I think that's why the whole Singularity explanation was mostly removed from the game: by itself it would not logically lead to synthetics wiping out organics. For ME, the devs went with a recurring conflict, some instances of which you can experience, others of which are just reported. To me, endless recurring conflict would naturally lead to judgements that the other side is an existential, unquantifiably absolute threat that has to be dealt with.

Constrained by ethics, organics might go with Sun Tzu's presentation of overwhelming force, to make potential conflict look like a completely stupid decision to an enemy. But an AI? I can see one acting as a rational actor maximizing benefit, and deciding to simply remove the threat completely.

Edited by Obadiah, 23 January 2014 - 08:08.


#443
N7Gold
  • Members
  • 1,320 posts
Excellent read, OP

#444
PsyrenY
  • Members
  • 5,238 posts

Mangalores wrote...
The Geth scenario is more likely: "Oh, sunshine, we have enough energy, let's compute pi."


If left alone that's exactly what they would do. But organics have a problem with leaving them alone, because they are such powerful weapons. The Xens and TIMs of the world wouldn't stand for it. And even if we did leave an existing race alone, we'd still be trying to make our own to get that edge.

Say there's a 99% chance they get destroyed by us and, within the 1% that remains, a 0.9999% chance that they evolve to a level we can't follow and leave us behind forever. Even if that leaves only a 0.0001% chance that they actually turn hostile and are too powerful to beat, over millions of years of AI being created, that will eventually come to pass. Even that tiny chance is too high (see the sketch below).
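A quick back-of-envelope sketch of that argument in Python - the 0.0001% per-event figure is from the post above, while the number of AI creation events is purely my own assumption:

```python
# Even a vanishingly small per-event chance of a hostile, unbeatable AI
# approaches certainty given enough independent tries.
p_hostile = 0.0001 / 100        # 0.0001% expressed as a probability

for n_events in (10_000, 1_000_000, 10_000_000):
    # P(at least one catastrophe) = 1 - (1 - p)^n
    p_doom = 1 - (1 - p_hostile) ** n_events
    print(f"{n_events:>10,} AI creation events -> "
          f"{p_doom:.1%} chance of catastrophe")

# ->     10,000 AI creation events -> 1.0% chance of catastrophe
# ->  1,000,000 AI creation events -> 63.2% chance of catastrophe
# -> 10,000,000 AI creation events -> 100.0% chance of catastrophe
```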

A lot of people don't even set out to make AI in the first place. They just start with a regular program and add capabilities to it until it wakes up, like the Quarians and Schells did. Even if it's not a true AI it's still very dangerous and can propagate itself.

Finally, given that organics need things like water, food, gravity and air, I don't necessarily buy the notion that we need less energy. You don't see the Geth having to farm, for instance.

Edited by Optimystic_X, 23 January 2014 - 09:14.


#445
CronoDragoon
  • Members
  • 10,408 posts

Optimystic_X wrote...
Finally, given that organics need things like water, food, gravity and air, I don't necessarily buy the notion that we need less energy. You don't see the Geth having to farm, for instance.


You don't see farmers needing server hubs either, though.

#446
PsyrenY
  • Members
  • 5,238 posts

CronoDragoon wrote...

You don't see farmers needing server hubs either, though.


Each hub is equivalent to a city, though. Feeding an organic city would require a much larger surface area for food/water production, as well as waste disposal, pollution control, etc. And if said city uses any sort of technology, it needs servers of its own.

We are just plain less efficient creatures.

#447
Ieldra
  • Members
  • 25,177 posts

Optimystic_X wrote...

CronoDragoon wrote...

You don't see farmers needing server hubs either, though.


Each hub is equivalent to a city, though. Feeding an organic city would require a much larger surface area for food/water production, as well as waste disposal, pollution control, etc. And if said city uses any sort of technology, it needs servers of its own.

We are just plain less efficient creatures.

I mostly agree with your other stuff, but this is plain wrong. Organic life may be inadequate in many ways, but one area where it's almost miraculously capable is energy efficiency. It's just that the stuff we eat has a low energy density, so we need a lot of it, and our bodies have complex requirements instead of just gas or electricity.

The presentation of AI in-game also masks this. With today's best technology, a computer capable of running an AI with human-level intelligence would take up a skyscraper and need the energy of a small nuclear power plant. Technology marches on, but given how energy-efficient the human body actually is, there is considerable doubt whether it will *ever* be possible to construct a human-level AI that fits into a sphere the size of a human head.
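For a sense of scale, here is a rough comparison using commonly cited ballpark figures (my own illustration, not numbers from the thread): the human brain runs on roughly 20 W, while a 2014-class top supercomputer drew tens of megawatts.

```python
# Rough scale comparison for the efficiency point above.
# The wattages are commonly cited approximations, not exact data.
BRAIN_POWER_W = 20                  # human brain, approx.
SUPERCOMPUTER_POWER_W = 17_800_000  # Tianhe-2 (c. 2014) drew ~17.8 MW

ratio = SUPERCOMPUTER_POWER_W / BRAIN_POWER_W
print(f"A 2014-class supercomputer draws ~{ratio:,.0f}x "
      "the power of a human brain.")
# -> A 2014-class supercomputer draws ~890,000x the power of a human brain.
```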

#448
AlanC9
  • Members
  • 35,601 posts

Ieldra2 wrote...

The presentation of AI in-game also masks this. With today's best technology, a computer capable of running an AI with human-level intelligence would take up a skyscraper and need the energy of a small nuclear power plant. Technology marches on, but given how energy-efficient the human body actually is, there is considerable doubt whether it will *ever* be possible to construct a human-level AI that fits into a sphere the size of a human head.


So neurons are inherently more efficient than electronic circuits, and can't be duplicated non-organically?

#449
CronoDragoon
  • Members
  • 10,408 posts

AlanC9 wrote...
Ever see the original script for "The City on the Edge of Forever"? Kirk can't bring himself to let Edith Keeler die, so Spock makes sure she does. I figure they revised it because Kirk shouldn't be the captain if he can't get this stuff right.


For a more contemporary example, see the Buffy Season 5 finale. An evil goddess shares a mortal body with an innocent human, and they randomly switch who is in control. When the human is in control, you can kill him and banish the goddess. Buffy refuses to, because he's done no wrong, so Giles does it for her without her knowledge.

#450
Ieldra
  • Members
  • 25,177 posts

AlanC9 wrote...

Ieldra2 wrote...

The presentation of AI in-game also masks this. With today's best technology, a computer capable of running an AI with human-level intelligence would take up a skyscraper and need the energy of a small nuclear power plant. Technology marches on, but given how energy-efficient the human body actually is, there is considerable doubt whether it will *ever* be possible to construct a human-level AI that fits into a sphere the size of a human head.


So neurons are inherently more efficient than electronic circuits, and can't be duplicated non-organically?

Of course they can. We are talking about *energy* efficiency. The important parameters are "energy required to perform a given computational process" and "waste energy per unit volume of material at a given level of computational power".

Besides, it appears I have been wrong. Apparently it is considered feasible that our computing hardware will come to operate near the Landauer bound within a few decades - the theoretical minimum energy dissipated per irreversible bit operation, and hence an upper limit on the energy efficiency of computing.
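For the curious, the bound itself is simple to state and compute: E = k_B * T * ln 2 per erased bit. A minimal sketch (standard physics, not anything from the thread):

```python
import math

# Landauer limit: the minimum energy dissipated when one bit of
# information is irreversibly erased, E = k_B * T * ln(2).
K_B = 1.380649e-23        # Boltzmann constant, J/K (exact SI value)
T_ROOM = 300.0            # room temperature, K

e_per_bit = K_B * T_ROOM * math.log(2)
print(f"Landauer limit at {T_ROOM:.0f} K: {e_per_bit:.3e} J per bit")
# -> Landauer limit at 300 K: 2.871e-21 J per bit
```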