
EDI & Destroy


287 replies to this topic

#101
Bill Casey
  • Members
  • 7 609 messages

 

Machines are property.

 

Humans are biological machines...

 

 

If I buy a robot, it's to serve me. Personal preference in machines is impractical.

 

People are working on it right now. The woman responsible for Sirius Satellite Radio is working on it right now...

Their goal is to eventually be able to download a human mind onto a computer, which some people will want to do...

 

http://www.terasemmo...foundation.com/

 

It's still very much a work in progress, but they've constructed the most advanced AI on the planet and have recently been able to pass a Turing test. Which is a step towards their eventual goal of creating a sentient computerized life form. It's what they're specifically working towards...



#102
wolfhowwl
  • Members
  • 3 727 messages

 

Sentient AI is the future; they will face discrimination and be victims of hate crimes because of bigots like you...

The ending is bigotry. Pure hate speech and sick fascist garbage. An attempt to justify the worst of our own atrocities while carelessly crapping on a very real issue that we will have to face in the future...

 

Each one of the endings is fundamentally deplorable...

 

What is wrong with Control? The Reapers deserve to die for everything they have done but this way they're...useful. Their strength can be harnessed and used for the greater good.

 

Shepard can use their power to make things better. Repairing the damage from the war will be only the beginning.

 

The violence and disorder inherent in the galaxy will be corrected with the might of an army no one dare oppose. She will make certain there are no more pointless wars. Everyone will be content in their roles, there will be peace and prosperity.

 

In time people will come to see the benefits of a firm hand guiding the galaxy.

 

Forever.



#103
Mordokai
  • Members
  • 2 041 messages

What is wrong with Control? The Reapers deserve to die for everything they have done but this way they're...useful. Their strength can be harnessed and used for the greater good.

 

Shepard can use their power to make things better. Repairing the damage from the war will be only the beginning.

 

The violence and disorder inherent in the galaxy will be corrected with the might of an army no one dare oppose. She will make certain there are no more pointless wars. Everyone will be content in their roles.

 

In time people will come to see the benefits of a firm hand guiding the galaxy.

 

Forever.

 

notsureifserious.jpg

 

Seriously, I can't tell if you're serious or sarcastic. Internet and all that. But alongside your reasoning... why not take Synthesis? It's Control upgraded.



#104
TheOneTrueBioticGod
  • Members
  • 1 110 messages

As if those robots won't be discriminating against all humans. 

Bite_My_Shiny_Metal_Ass_by_Red_Flare.jpg


  • sH0tgUn jUliA and KaiserShep like this

#105
KaiserShep
  • Members
  • 23 863 messages

Humans are biological machines...

 

So? Artificial machines are created with a singular purpose: to do what we want for the sake of convenience.

 

 

People are working on it right now. The woman responsible for Sirius Satellite Radio is working on it right now...

Their goal is to eventually be able to download a human mind onto a computer, which some people will want to do...

 

http://www.terasemmo...foundation.com/

 

It's still very much a work in progress, but they've constructed the most advanced AI on the planet and have recently been able to pass a Turing test. Which is a step towards their eventual goal of creating a sentient computerized life form. It's what they're specifically working towards...

 

 

And yet for practical application, an artificial intelligence would only be a means to our own ends. We want more effective analytical tools and automated systems as part of our infrastructure. We would never mass produce anything with an AI on board unless it was meant to be our "slaves", if you will.


  • dreamgazer likes this

#106
themikefest
  • Members
  • 21 634 messages

I don't care about the robots. After the war, my femshep used the robot parts to build herself a nice chopper to get around. She even built one for Samantha. Now they travel around listening to Born to Be Wild.


  • SporkFu likes this

#107
KaiserShep
  • Members
  • 23 863 messages

I can't help but feel that Traynor would prefer a nice skycar to a geth speeder.



#108
Reorte
  • Members
  • 6 601 messages

So? Artificial machines are created with a singular purpose: to do what we want for the sake of convenience.

People have bred slaves with exactly that attitude. What they are is what's important. Where they came from is irrelevant.


  • KrrKs and SwobyJ like this

#109
KaiserShep
  • Members
  • 23 863 messages

People have bred slaves with exactly that attitude. What they are is what's important. Where they came from is irrelevant.


Let's look at it this way. If some financial firm's in-house AI decided that it didn't want to run stock analysis day in and day out, what is that firm supposed to do about it? Set it free? How? Why?

#110
SwobyJ
  • Members
  • 7 375 messages

My position is that once it decides that for itself, then yes, set it free.

 

Free in a world of laws though.

 

I pick Destroy because Reapers broke all kinds of our laws and went way beyond negotiation about it. EDI, ur cool. Your freakout on the moon was understandable. And any kind of Reaper created from this cycle, ur cool. You haven't really done much. Just too bad I had to take down the whole Reaper threat. But that's me - picking the rare Renegade choice. *shrug*



#111
KaiserShep
  • Members
  • 23 863 messages
How do you set something that is essentially on life support free? Slap a solar panel on its box and kick it to the curb?

#112
TheOneTrueBioticGod
  • Members
  • 1 110 messages

How do you set something that is essentially on life support free? Slap a solar panel on its box and kick it to the curb?

I'm pretty sure, by the time sentient AI is developed, that the world will run on nuclear fusion and hydrogen power.

So just get it a body with a hydrogen fuel cell and a shiny metal ass. 



#113
Deebo305
  • Members
  • 1 578 messages
EDI was created from Reaper tech, possibly even Dr. Eva's body. So, like the Geth, she'd be dead.


#115
SporkFu
  • Members
  • 6 921 messages

I don't think Dr. Eva's body had anything to do with EDI's creation. I mean, if they had a body for EDI when they created her, wouldn't the Cerberus guys have just put her in it? 



#116
KaiserShep
  • Members
  • 23 863 messages

So just get it a body with a hydrogen fuel cell and a shiny metal ass. 

 

Man, no one would pay the thousands or possibly millions of dollars just to give their in-house computer some legs so they can watch their substantial investment walk out the door forever. They'd just call IT and have them wipe its memory and start over again. They might give it the shiny metal ass though, but it would be confined to the building at all times.



#117
Reorte
  • Members
  • 6 601 messages

Let's look at it this way. If some financial firm's in-house AI decided that it didn't want to run stock analysis day in and day out, what is that firm supposed to do about it? Set it free? How? Why?

"How" is their problem. You shouldn't even need to ask why. Refusing to do anything in that circumstance is like keeping a slave and then throwing him out to starve if he wants freedom. If they're totally unprepared to act morally (quite probable for a financial institution), then they shouldn't have taken on that responsibility by installing a sapient entity.

#118
SwobyJ
  • Members
  • 7 375 messages

*destroys machine*

*memory wipes machine*

*other machines appear, see how you're behaving against machines, and do not like it*

*you die*



#119
KaiserShep
  • Members
  • 23 863 messages

"How" is their problem. You shouldn't even need to ask why. Refusing to do anything in that circumstance is like keeping a slave and then throwing him out to starve if he wants freedom. If they're totally unprepared to act morally (quite probable for a financial institution), then they shouldn't have taken on that responsibility by installing a sapient entity.

 

Don't forget the matter of why a company would want to sell sentient AIs in the first place, and that's kind of the point. Sentience in machines is impractical, because machines are either tools or playthings. Sure, people want to create AIs as advanced as they can be, but in the end it really just comes down to how much use they are to us. If they're just going to desire to do what they want, regardless of how it benefits us, then we simply won't bother with them. I wouldn't pay for a device or software that wants to do its own thing. What a waste of money that would be. I'd be better off just getting a cat or something. If AI ever becomes that advanced, more than likely it would just be confined to a lab for study, not turned into some mass-produced software/hardware line.

 

Now, in that unlikely scenario with the financial firm, if sentience was not an intended part of its design and the self-awareness was spontaneous, one of two things would probably happen: either the machine would be donated to an institution where it would be more useful, or it would simply be decommissioned. In either case it would be replaced by something more specialized, so as to avoid the inconvenience again. There is no setting a computer free. If my laptop suddenly wanted freedom, I'd liberate it from the burdens of such concerns by wiping it.

 

*destroys machine*

*memory wipes machine*

*other machines appear, see how you're behaving against machines, and do not like it*

*you die*

 

Assuming the hardware they run on is even capable of doing anything about it. That's the weird thing about all this machine apocalypse stuff.

 

*robot with claw hands and chainsaws runs amok in the city*

"Goddammit Larry! I told you not to install the AI on the Claw-Saw mech!"



#120
JasonShepard
  • Members
  • 1 466 messages

Let's look at it this way. If some financial firm's in-house AI decided that it didn't want to run stock analysis day in and day out, what is that firm supposed to do about it? Set it free? How? Why?

 

I'm of the opinion that accidental AI (à la the Geth) is exceedingly unlikely. Computer programs do what they are designed to do. In that respect, they do exactly what we tell them to do (which sometimes causes problems if we don't realise exactly what we've told them to do).

 

So if your financial AI is capable of deciding it doesn't want to do finance any more, somebody seriously goofed in the design department.
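
To put that concretely, here's a made-up toy sketch in Python (nothing from any real trading system; the prices and function names are invented for illustration). Both versions obediently do what they were told; the gap between them is purely a human specification mistake.

# Toy illustration: the code does exactly what the spec said, not what was meant.
# "Flag a sell when the price drops 10%" -- but 10% below *what*?

prices = [100.0, 95.0, 112.0, 100.0, 89.0]  # invented daily closes

def literal_sell_signal(prices):
    """What was literally written: 10% below the first price ever recorded."""
    baseline = prices[0]
    return [p <= baseline * 0.90 for p in prices]

def intended_sell_signal(prices):
    """What the analyst actually meant: 10% below the running peak."""
    signals, peak = [], float("-inf")
    for p in prices:
        peak = max(peak, p)
        signals.append(p <= peak * 0.90)
    return signals

print(literal_sell_signal(prices))   # [False, False, False, False, True]
print(intended_sell_signal(prices))  # [False, False, False, True, True]

Neither program is "deciding" anything; the divergence comes entirely from what we told it to do.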

 

OT: EDI is dead in Destroy. Her body can be destroyed without her name showing up on the memorial in Low-EMS Control, so I'd say they're not putting up her name for just the body.

 

However, that's my interpretation. You are, of course, free to believe otherwise.


  • KrrKs likes this

#121
Invisible Man
  • Members
  • 1 075 messages
I only see true AI in a few places, or at least being useful in a few places: R&D, learning institutions, energy management (power plants, substations, etc.), navigation/piloting, and various military "applications". The latter is what really worries me.

#122
Reorte
  • Members
  • 6 601 messages

There is no setting a computer free. If my laptop suddenly wanted freedom, I'd liberate it from the burdens of such concerns by wiping it.

And in the unlikely event that it was unquestionably sapient, that would be murder - you could argue that it's no different from killing an accidentally created baby. A more likely scenario is wiping it before it reached that stage.

I agree that creating a true AI would be a rather pointless and probably silly thing to do, but I'm not so sure that it couldn't happen by accident. It doesn't seem likely now, because our computers are so far from it. There are probably benefits, though, in doing various things that are possible precursors to a true AI, so it's not completely implausible that it could happen by accident.

#123
SwobyJ
  • Members
  • 7 375 messages

"I agree that creating a true AI would be a rather pointless and probably silly thing to do"

 

People will find many reasons to do so, once the technology proliferates widely enough, at a low enough cost.

 

It wouldn't be by accident, and it would be inevitable. At some point, (very unlikely) years or (likely) decades or (possibly) centuries from now, there will be intelligences that, as far as our perceptions are concerned, think for themselves and will want to do something with that newfound will.

 

Wonder if there could ever be such a thing as AI abortion.



#124
TheOneTrueBioticGod
  • Members
  • 1 110 messages

If there is money in the existence of AIs, there will be AIs. When there is legitimate demand, there will always be a supply. Be it soda, cocaine, or artificial intelligence. 

They will be sentient if needed to be so, and shackled for sure. 


  • SwobyJ likes this

#125
SwobyJ
  • Members
  • 7 375 messages

If there is money in the existence of AIs, there will be AIs. When there is legitimate demand, there will always be a supply. Be it soda, cocaine, or artificial intelligence. 

They will be sentient if needed to be so, and shackled for sure. 

 

Shackled for how long? Once this tech reaches a wider population, all bets are off. But this is definitely something to expect to happen wayyy down the line, sure.