
Why are those who choose Control and Synthesis so much happier with the ending?


1010 replies to this topic

#351
Br3admax
  • Members
  • 12 316 messages

David7204 wrote...

Hmm. Interesting that despite hating the Catalyst and proclaiming how stupid he is, you're agreeing with him entirely now.

So he's not stupid after all, then?

The Catalyst is right in saying that making AI is stupid. It is, however, wrong in saying that we will:
A) Make them.
B) Be wiped out by them.
C) Need Reapers.

#352
Cainhurst Crow
  • Members
  • 11 374 messages

AlanC9 wrote...

Darth Brotarian wrote...

David7204 wrote...

That sounds like common sense to me. I would certainly hope my children have all of my strengths and none of my weaknesses.


Those robots aren't your children. They're a separate species who, without any means of control, could and would destroy us for the sake of survival.


If they feel they need to do that, we probably deserve it.


"Your rate of growth is counterproductive. Your populations should be 23.56742% lower than current levels to be efficient. Please designate those you wish to delete."

#353
Br3admax
  • Members
  • 12 316 messages

David7204 wrote...

I would absolutely want a car that can decide to go faster or slower depending on what's appropriate. In fact, that's pretty much a necessity for driverless cars. And I would be happy to have a microwave that adjusts itself depending on what it's cooking.

None of that requires AI. 
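That's the crux of the disagreement: a self-adjusting appliance only needs plain control logic. As a rough illustration (the temperatures and power levels here are invented, not from any real appliance), a microwave that "decides" its own power can be nothing more than a fixed rule table:

```python
def microwave_power(food_temp_c, target_temp_c=75.0):
    """Pick a power level (watts) from a fixed rule table -- no learning,
    no goals, no preferences, just deterministic conditionals."""
    error = target_temp_c - food_temp_c
    if error <= 0:
        return 0        # hot enough: stop heating
    elif error < 10:
        return 300      # close to target: low power
    elif error < 30:
        return 600      # warming up: medium power
    else:
        return 900      # cold food: full power

# The "decision" is a pure function of a sensor reading.
print(microwave_power(20.0))  # cold food -> 900
print(microwave_power(70.0))  # nearly done -> 300
print(microwave_power(80.0))  # done -> 0
```

Nothing in that sketch is intelligent in any meaningful sense; it adjusts itself without ever wanting anything.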

#354
AresKeith
  • Members
  • 34 128 messages

David7204 wrote...

I would absolutely want a car that can decide to go faster or slower depending on what's appropriate. In fact, that's pretty much a necessity for driverless cars. And I would be happy to have a microwave that adjusts itself depending on what it's cooking.


There's lazy, then there's too lazy

#355
MassivelyEffective0730
  • Members
  • 9 230 messages

David7204 wrote...

I would absolutely want a car that can decide to go faster or slower depending on what's appropriate. In fact, that's pretty much a necessity for driverless cars. And I would be happy to have a microwave that adjusts itself depending on what it's cooking.


Does it need to have sapience? I don't think a car needs a deontological perspective to determine how fast it should go.

#356
Cainhurst Crow
  • Members
  • 11 374 messages

AlanC9 wrote...

KaiserShep wrote...
A mass-produced product that can manifest its own personal preferences would be monumentally bad design, for the same reason we don't want our cars to go faster or slower on their own, or our microwave to cook something for 20 minutes rather than 20 seconds.


What's wrong with self-driving cars?


My old GPS constantly led me to dead ends, or other buildings, or woodland areas, because it confused which destination I was looking for.

I can only imagine the horror of your car taking you to a wooded area, and turning right to smack you right into a tree at 50 miles an hour.

#357
crimzontearz
  • Members
  • 16 788 messages

AlanC9 wrote...

crimzontearz wrote...

They did need a way for the player to Refuse if he didn't do so during the dialogue.

no, no they did not


You think it would be OK to trap players into one of the three endings instead of letting them trigger Refuse? Really?

Not everybody plays with a walkthrough open in another window.

Uh, yeah, there is a "Refuse" option in the dialogue. Make it clear on the wheel and move on; there was no need for the Refuse trigger to be tied to shooting the starbrat.

#358
MassivelyEffective0730
  • Members
  • 9 230 messages

AresKeith wrote...

David7204 wrote...

That sounds like common sense to me. I would certainly hope my children have all of my strengths and none of my weaknesses.


So you want your children to be AIs?


Strengths...

I'm still trying to see where he sees strengths.

Sounds like he's asking for a paradox here.

#359
David7204
  • Members
  • 15 187 messages
None of these responses are remotely relevant.

Decision making doesn't require AI, but AI requires decision making.

The fact that sometimes technology screws up is irrelevant. That's why you make better technology. That's why technology undergoes extensive testing to ensure it is safe.
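To illustrate the testing point: the safety checks for something like a speed limiter can themselves be automated and run exhaustively before anyone trusts it. A toy sketch (the function and its bounds are hypothetical, invented for illustration):

```python
def clamp_speed(requested_kmh, limit_kmh):
    """Hypothetical driverless-car speed limiter: never exceed
    the posted limit, never go negative."""
    return max(0.0, min(requested_kmh, limit_kmh))

# Sweep a grid of inputs and assert the safety property holds for all of them.
for requested in range(-50, 300, 7):
    for limit in range(0, 130, 13):
        s = clamp_speed(float(requested), float(limit))
        assert 0.0 <= s <= limit, (requested, limit, s)
print("all safety checks passed")
```

This is the mundane sense in which "extensive testing" works: you state the property the machine must never violate, then hammer it with inputs until you're confident.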

Edited by David7204, 20 October 2013 - 03:21.


#360
spirosz
  • Members
  • 16 356 messages

David7204 wrote...

None of these responses are remotely relevant.

Decision making doesn't require AI, but AI requires decision making.

The fact that sometimes technology screws up is irrelevant. That's why you make better technology. That's why technology undergoes extensive testing to ensure it is safe.


Safety is never 100%.

#361
Br3admax
  • Members
  • 12 316 messages

David7204 wrote...

None of these responses are remotely relevant.

Decision making doesn't require AI, but AI requires decision making.

The fact that sometimes technology screws up is irrelevant. That's why you make better technology. That's why technology undergoes extensive testing to ensure it is safe.

Translation: My ****** comment made no sense, so it's time to cover it up by saying it's irrelevant and trying to debate something else. Also, I'm going to reach China soon.

#362
David7204
  • Members
  • 15 187 messages
That's perfectly true. And?

#363
Cainhurst Crow
  • Members
  • 11 374 messages

David7204 wrote...

None of these responses are remotely relevant.

Decision making doesn't require AI, but AI requires decision making.

The fact that sometimes technology screws up is irrelevant. That's why you make better technology. That's why technology undergoes extensive testing to ensure it is safe.


No amount of testing stops technology from still screwing up; that's why software patches and recalls happen whenever you're dealing with a new product.

Now imagine that sort of error happening to something you trust your literal life to. Not a gamble I want to take.

#364
David7204
  • Members
  • 15 187 messages
You take that gamble every time you fly in a plane, drive in a car, eat a processed food product, drink a soda, or walk outside. You take it by living in a house and using a computer. You're incredibly dependent on technology being safe. You take that gamble by existing.

Edited by David7204, 20 October 2013 - 03:25.


#365
AlanC9
  • Members
  • 35 706 messages

crimzontearz wrote...

AlanC9 wrote...

crimzontearz wrote...

They did need a way for the player to Refuse if he didn't do so during the dialogue.

no, no they did not

You think it would be OK to trap players into one of the three endings instead of letting them trigger Refuse? Really?

Not everybody plays with a walkthrough open in another window.

Uh, yeah, there is a "Refuse" option in the dialogue. Make it clear on the wheel and move on; there was no need for the Refuse trigger to be tied to shooting the starbrat.


"Need" is a bit strong there. But it's still good to have so a player who plays out the conversation without coming to a final decision still has all his options open. 

They should have left this out so idiots could shoot at a hologram?

#366
MassivelyEffective0730
  • Members
  • 9 230 messages

David7204 wrote...

None of these responses are remotely relevant.


Not relevant = I don't like them.

Decision making doesn't require AI, but AI requires decision making.


Why do we need AI?

The fact that sometimes technology screws up is irrelevant.


It's completely relevant. When it goes haywire, **** hits the fan.

That's why you make better technology.


It still fails/breaks/melts down/clunks/fubars.

#367
David7204
  • Members
  • 15 187 messages
Technology does fail sometimes. So what? We still use it.

#368
spirosz
  • Members
  • 16 356 messages

David7204 wrote...

You take that gamble every time you fly in a plane, drive in a car, eat a processed food product, drink a soda, or walk outside. You take it by living in a house and using a computer. You're incredibly dependent on technology being safe. You take that gamble by existing.


'Tis true.

#369
Steelcan
  • Members
  • 23 292 messages

David7204 wrote...

Technology does fail sometimes. So what? We still use it.

I don't.


I'm a neo-luddite who has renounced the developed world so I can murder geth babies in peace

Edited by Steelcan, 20 October 2013 - 03:31.


#370
Br3admax
  • Members
  • 12 316 messages

spirosz wrote...

David7204 wrote...

You take that gamble every time you fly in a plane, drive in a car, eat a processed food product, drink a soda, or walk outside. You take it by living in a house and using a computer. You're incredibly dependent on technology being safe. You take that gamble by existing.


'Tis true.

Yep, taking a gamble. Sure hope this walking, thinking machine inside of a bomber goes right. Here's to hoping.

#371
MassivelyEffective0730
  • Members
  • 9 230 messages
Once more, David has completely invented a point for an argument no one is making and has totally derailed the thread.

So back to synthesis and control...

#372
Steelcan
  • Members
  • 23 292 messages

MassivelyEffective0730 wrote...

Once more, David has completely invented a point for an argument no one is making and has totally derailed the thread.

So back to synthesis and control...

recent events have shocked my southern sensibilities...

not sure I can stay on topic

#373
David7204
  • Members
  • 15 187 messages
The point being attempted is that making AI is stupid because technology is sometimes imperfect and dangerous, and we should never gamble our lives on it.

That point clearly has no merit.

#374
Cainhurst Crow
  • Members
  • 11 374 messages

David7204 wrote...

You take that gamble every time you fly in a plane, drive in a car, eat a processed food product, drink a soda, or walk outside. You take it by living in a house and using a computer. You're incredibly dependent on technology being safe. You take that gamble by existing.


1. I hate flying.

2. All those other gambles are significantly less costly than trusting my life to some automaton that could kill me because of a software glitch.

#375
Br3admax
  • Members
  • 12 316 messages

David7204 wrote...

The point that has been attempted is that making AI is stupid because technology is sometimes imperfect and dangerous, and we should never gamble our lives with it.

That point clearly has no merit.

No, the point was that they would very much try to kill everyone they can in order to survive. If they act independently, as AI would, they will have needs and wants that conflict with our own.