Guardians of the Galaxy or Interstellar?


135 replies to this topic

#51
Killroy
  • Members
  • 2,828 posts

...why are Interstellar and GotG the only influence options? Interstellar is all "power of love" and time paradox nonsense with melodramatic flair and GotG is just an action-comedy with sci-fi flavor.



#52
Fortlowe
  • Members
  • 2,552 posts

...why are Interstellar and GotG the only influence options? Interstellar is all "power of love" and time paradox nonsense with melodramatic flair and GotG is just an action-comedy with sci-fi flavor.


Gattaca, anyone?
  • Mcfly616 likes this

#53
Laughing_Man
  • Members
  • 3,663 posts

...why are Interstellar and GotG the only influence options? Interstellar is all "power of love" and time paradox nonsense with melodramatic flair and GotG is just an action-comedy with sci-fi flavor.

 

I agree; however, what I prefer to remember from Interstellar is not the "Power of Love" or the time paradox, but the freaky water planet that was incredibly alien and fascinating, and the time difference between the "away" crew and the mothership, which caused some interesting time-related effects...

 

It was interesting, even if at the end of the day Hollywood can't help itself...



#54
In Exile
  • Members
  • 28,738 posts

Like I wrote before, I actually agree that a sentient synthetic entity should be equal to an organic one, in theory.

 

Why in theory? Because the way Bioware portrayed the problem, and the way they essentially humanized Legion and EDI, are rather naive and childish.

I rather doubt that a hypothetical synthetic sentience will have "emotions" or act as irrationally as Legion did in a few cases.

You're wrong. If we ever create AI, or rather, AI we could even be capable of recognizing as sapient, it will 100% have emotion. There are lots of reasons for this, but the main one is that having emotions is essential to operating intelligently in a way we understand. People who suffer brain trauma that impairs their ability to experience emotion will do things that we consider actually irrational (like self-destructive gambling) or will be unable to function (e.g. motivational issues). 

 

 

After all, the real danger of technological apocalypse does not come from "human reach exceeding its grasp" with the likes of the Archer experiment, but rather from a berserk AI.

 

(The entire Overlord experiment was so cliché that it was painful: pragmatism - evil; trying to control a race of hostile robots - evil. That experimental chair - the machine-organic interface - was designed to look as cruel and grotesque as possible to drive the point home about how "evil" the experiment was. After all, if he were lying in a comfortable bed wearing a helmet with a neural link it might not send the correct message. Very subtle, Bioware.)

The experiment was stupidly evil. It was stupid, because it was basically predicated on torturing a human being. And it was evil, because the human being was his autistic brother. There was absolutely nothing pragmatic about it. It was the same idiotic evil with a seemingly pragmatic justification that Cerberus always comes up with to cover up something that 1) can't work 2) doesn't even make sense 3) involves lots of pointless torture and suffering and 4) again, can't work. 


  • Heimdall likes this

#55
In Exile
  • Members
  • 28,738 posts

I agree; however, what I prefer to remember from Interstellar is not the "Power of Love" or the time paradox, but the freaky water planet that was incredibly alien and fascinating, and the time difference between the "away" crew and the mothership, which caused some interesting time-related effects...

 

It was interesting, even if at the end of the day Hollywood can't help itself...

 

So, you mean, the visuals? Because with that I more or less agree. 



#56
Laughing_Man
  • Members
  • 3,663 posts

You're wrong. If we ever create AI, or rather, AI we could even be capable of recognizing as sapient, it will 100% have emotion. There are lots of reasons for this, but the main one is that having emotions is essential to operating intelligently in a way we understand. People who suffer brain trauma that impairs their ability to experience emotion will do things that we consider actually irrational (like self-destructive gambling) or will be unable to function (e.g. motivational issues). 

 

The experiment was stupidly evil. It was stupid, because it was basically predicated on torturing a human being. And it was evil, because the human being was his autistic brother. There was absolutely nothing pragmatic about it. It was the same idiotic evil with a seemingly pragmatic justification that Cerberus always comes up with to cover up something that 1) can't work 2) doesn't even make sense 3) involves lots of pointless torture and suffering and 4) again, can't work. 

 

I think that neither you nor I have the knowledge needed to assert with any confidence what a hypothetical AI would be like.

Personally, I believe that they would be closer to cold logic machines than to confused children (Bioware's version).

 

Motivation is fine and good, but I think that there is quite a difference between having motivation, which could be achieved through logic, and having human emotions.

 

 

Cerberus: There is no question that in-game Cerberus was evil.

 

But that was more because of Bioware's naive portrayal of them than anything else.

Not only did they make Cerberus into cartoonish villains, they also had to fail in everything...

 

That's Disney, not a story for adults. Poetic justice is just that, poetic. That's not how reality works most of the time.

 

 

So, you mean, the visuals? Because with that I more or less agree. 

 

Not just the visuals themselves, even if they were nice, but how they illustrated the dangerous reality facing space explorers.

The water planet was simply very alien, terrifyingly so.

 

 

***minor spoiler alert***

 

 

And seeing one of the crew as an old man all of a sudden clearly illustrated the potential time-related problems that can be part of space travel.



#57
Vortex13
  • Members
  • 4,186 posts

I liked the time dilation effects when the crew went down to the planet with the massive tidal waves. But then again I always have found black holes to be awesome.  ^_^


  • Laughing_Man likes this

#58
In Exile
  • Members
  • 28,738 posts

I think that neither you nor I have the knowledge needed to assert with any confidence what a hypothetical AI would be like.

Personally, I believe that they would be closer to cold logic machines than to confused children (Bioware's version).

 

Actually, I do. I've studied machine learning, unlike most people here. My knowledge base is about 4-5 years out of date since I've pursued another field, but I'm quite familiar with not just the mechanical ways we design AI, but the theoretical models in cognitive science we rely on to explain and model thinking in humans and, in particular, problem solving/insight. 

 

Our popular theories about AI - which rely on certain prejudices we have about how important symbolic reasoning is and how it works - are not at all right when we apply them in practice. To wit, it's not hard (relatively speaking) to build a computer that plays chess. It is hard to build one that comprehends language in a way we understand (we do it now, sort of, but very differently from how we think, cf. Watson). And it is tougher still to build a computer that's integrated with the kinds of constraints we operate under (e.g. being able to do all the physical stuff we can, automatically, which is quite hard and involves a lot of complex physics). 
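
To make the chess point concrete: the "easy" part is nothing but exhaustive game-tree search. Here is a minimal Python sketch of plain minimax - the state object and its legal_moves/apply/evaluate/is_terminal methods are hypothetical stand-ins for real move-generation code, not any actual engine's API.

```python
# Minimal minimax sketch: the brute-force game-tree search that chess
# programs are built on. The state interface (legal_moves, apply,
# evaluate, is_terminal) is hypothetical; any two-player zero-sum
# perfect-information game could supply it.

def minimax(state, depth, maximizing):
    """Best achievable score for the maximizing side, searching `depth` plies."""
    if depth == 0 or state.is_terminal():
        return state.evaluate()  # static score of the position
    scores = (minimax(state.apply(m), depth - 1, not maximizing)
              for m in state.legal_moves())
    return max(scores) if maximizing else min(scores)

def best_move(state, depth=4):
    # No "understanding" anywhere: just pick whichever move leads to
    # the best reachable position in the game space.
    return max(state.legal_moves(),
               key=lambda m: minimax(state.apply(m), depth - 1, False))
```

Nothing in it resembles understanding - it is enumeration plus a scoring function, which is exactly why the language problem doesn't yield to the same trick.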

 

Motivation is fine and good, but I think that there is quite a difference between having motivation, which could be achieved through logic, and having human emotions.

 

No. Those don't work, exactly because you're trying to exhaustively define behaviour, so you basically run into the same problem you run into when trying to design a computer that plays chess (i.e., you make something that tries to brute-force the problem). 



#59
Laughing_Man
  • Members
  • 3,663 posts

Actually, I do. I've studied machine learning, unlike most people here. My knowledge base is about 4-5 years out of date since I've pursued another field, but I'm quite familiar with not just the mechanical ways we design AI, but the theoretical models in cognitive science we rely on to explain and model thinking in humans and, in particular, problem solving/insight. 

 

Our popular theories about AI - which rely on certain prejudices we have about how important symbolic reasoning is and how it works - are not at all right when we apply them in practice. To wit, it's not hard (relatively speaking) to build a computer that plays chess. It is hard to build one that comprehends language in a way we understand (we do it now, sort of, but very differently from how we think, cf. Watson). And it is tougher still to build a computer that's integrated with the kinds of constraints we operate under (e.g. being able to do all the physical stuff we can, automatically, which is quite hard and involves a lot of complex physics). 

 

No. Those don't work, exactly because you're trying to exhaustively define behaviour, so you basically run into the same problem you run into when trying to design a computer that plays chess (i.e., you make something that tries to brute-force the problem). 

 

You are aware that the other side of the coin - an "emotionally" motivated AI - could be in some ways even more dangerous, and closer to the horrors from all those movies, right?

 

Speculation aside, the simple truth is that we are still far from creating a true AI, and therefore it's too early to discuss whether they are actually "beings" or not, their level of self-awareness, or their interest in organics. That all depends on what an actual AI would be like.



#60
Cyberstrike nTo
  • Members
  • 1,714 posts


It may be a bit late to be creating this topic, but it could still be fun to discuss. Judging by the latest N7 trailer, it looks like Bioware wants to emulate the thoughtful and contemplative tone of Christopher Nolan's Interstellar. I honestly half expected Matthew McConaughey to start talking about how man was always meant to explore the stars and some such. That said, I don't want Bioware to go in this direction, not because I don't think they can pull it off (love transcending space and time is pure Bioware hokiness and I love it) but because I want them to make something light & fun with a hint of danger, rather than fall into the trap of pretentiousness.

 

I want Bioware's Guardians of the Galaxy, because Bioware's strength clearly lies in their characters and not in the overarching story told through one or more games. I want a more personal story about a ragtag group of misfits, mercenaries and smugglers trying to make it big in the new frontier while helping/hurting people along the way. However, I fear Dragon Age 2's failure might discourage them from it (even though DA 2 failed because of atrocious game design, period). I don't want to save the galaxy (even though both Interstellar & Guardians are about saving humanity/the galaxy), at least for the very first game in what I assume would become the Andromeda Trilogy.

 

There are two anime I could recommend that perfectly capture the style/tone I want: Cowboy Bebop and Black Lagoon. Both series are famous for their colorful casts and more character-oriented plots, as opposed to the world-saving heroism that is more often seen in shonen series. Would this be too much of a risk though?

 

TL;DR: Guardians or Interstellar? What should Bioware be going for?

 

I haven't seen Interstellar yet, but I have seen GotG, and to be fair, while I love Guardians of the Galaxy it was basically, IMHO, a rip-off of Farscape.

You have the human trying to be a bad ass space pirate by using human pop culture references that the rest of the crew don't get: John Crichton/Star-Lord.

You have a tough-as-nails alien female warrior that becomes his girlfriend: Aeryn/Gamora.

You have an aggressive alien warrior that is so serious he's actually funny: D'Argo/Drax.

You have a smart-ass little alien bastard: Rygel/Rocket.

You have a plant alien that is the heart of the team: Zhaan/Groot.

You have a group of alien cops that are morally questionable: The Peacekeepers/Nova Corps.

You have a group of alien bad guys that the alien cops don't like: The Scarrans/The Kree.

You have a renegade alien super villain: Scorpius/Ronan.

 

 

Now that I've got that out of my system, I would say a mix of the two: using the GotG/Farscape character types and Interstellar's more fact-based science for certain astronomical elements when needed, because I think it's important for science fiction to be based on some actual real science, just as a foundation so I can get into that universe, even if it's theoretical science that I'll never 100% understand. 



#61
Kabooooom
  • Members
  • 3,996 posts

You're wrong. If we ever create AI, or rather, AI we could even be capable of recognizing as sapient, it will 100% have emotion. There are lots of reasons for this, but the main one is that having emotions is essential to operating intelligently in a way we understand. People who suffer brain trauma that impairs their ability to experience emotion will do things that we consider actually irrational (like self-destructive gambling) or will be unable to function (e.g. motivational issues).


This is true, and I agree with 100% of everything else that you have said, but I feel the need to point out that the only reason why this is true is because of the anatomy of the vertebrate brain. Our limbic system and dopaminergic pathways are inextricably linked and coupled to the motor control system of the basal ganglia (with the nucleus accumbens being the major link between the two, and the nigrostriatal pathway being the major source of dopamine from the substantia nigra to the corpus striatum of the basal ganglia).

The evolutionary reason for this, as you allude to, is to couple motivational behavior from perceived reward to motor behavior - the so-called "seeking system". Without the motivation, an animal does not experience a drive to actually activate a given motor pathway. This makes perfect sense physiologically, evolutionarily, and anatomically.

However, one must recognize that just because a mammal *usually* cannot perform rational action in the absence of emotional input, that doesn't mean that you couldn't create a system of processing that could. So, as a neurologist I am going to have to say that I do foresee the possibility that someone could create an AI that is devoid of emotion, or at least so emotionally different that it could not relate to a human.

That said, I have trouble seeing how one could actually separate the emotion of motivation from choice preference, as this is the basic function of the dopamine system and basal ganglia. It is a basic means by which to prioritize actions. I know a hell of a lot about neurology, but little about computer programming. I'd be interested in input from someone more knowledgeable in that area than me, because I am willing to recognize that the vertebrate brain may be this way because, neurologically speaking, there actually is no other way to do it.

And in that circumstance, you would be right. It would be impossible to create an AI without at the very least a basic sense of motivation.
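
For what it's worth, reinforcement learning gives a toy version of that coupling from the computer side: a reward prediction error (a common computational stand-in for phasic dopamine) updates per-action values, and a softmax over those values biases which action is taken. A purely illustrative Python sketch, not a model of any real circuit:

```python
# Toy sketch of reward-coupled action selection, in the spirit of the
# "seeking system" described above. All names are illustrative.

import math
import random

class SeekingAgent:
    def __init__(self, actions, lr=0.1, temperature=0.5):
        self.values = {a: 0.0 for a in actions}  # learned "drive" per action
        self.lr, self.temperature = lr, temperature

    def choose(self):
        # Softmax: stronger drives get picked more often, coupling
        # motivation to (simulated) motor output.
        weights = [math.exp(v / self.temperature) for v in self.values.values()]
        return random.choices(list(self.values), weights=weights)[0]

    def learn(self, action, reward):
        delta = reward - self.values[action]    # reward prediction error
        self.values[action] += self.lr * delta  # positive delta reinforces

# With all values flat (a crude "lesioned" agent), choice is uniform noise -
# a loose analogue of the motivational deficits described above.
agent = SeekingAgent(["forage", "rest", "explore"])
for _ in range(200):
    a = agent.choose()
    agent.learn(a, reward=1.0 if a == "forage" else 0.0)
```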

#62
AlanC9
  • Members
  • 35,624 posts

No. Those don't work, exactly because you're trying to exhaustively define behaviour, so you basically run into the same problem you run into when trying to design a computer that plays chess (i.e., you make something that tries to brute-force the problem).


IIRC there are subsets of chess problems that have been brute-forced completely -- endgame problems where there aren't very many pieces on the board. The results have been frustrating for chess players, since the computers play a game that is fundamentally alien to how humans approach the problem. And you can't ask for the reason why the computer made a move, because there is no "reason;" that move leads to a better position in the game-space, and that's all.
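
Those endgame tables are built by retrograde analysis - working backwards from checkmate. A rough sketch of the idea, assuming hypothetical move-generation helpers (all_positions, predecessors, successors, is_checkmate) and ignoring draws/stalemate for brevity:

```python
# Retrograde analysis sketch: label every position as won or lost for
# the side to move by propagating backwards from checkmate. The move-
# generation helpers are hypothetical stand-ins for real chess code.

from collections import deque

def build_tablebase(all_positions, predecessors, successors, is_checkmate):
    result = {}  # position -> "win" or "loss" (for the side to move)
    queue = deque(p for p in all_positions if is_checkmate(p))
    for p in queue:
        result[p] = "loss"  # side to move is mated
    while queue:
        pos = queue.popleft()
        for prev in predecessors(pos):  # positions one move earlier
            if prev in result:
                continue
            if result[pos] == "loss":
                result[prev] = "win"    # there's a move into a lost position
            elif all(result.get(n) == "win" for n in successors(prev)):
                result[prev] = "loss"   # every reply hands the opponent a win
            else:
                continue
            queue.append(prev)
    return result
```

The table it produces is exactly the "no reason" play described above: a lookup of which positions are won, with no explanation attached.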

Which raises the question of whether imitating human intelligence is the point. We've already got that. Aren't we after something better?
  • In Exile likes this

#63
Kabooooom
  • Members
  • 3,996 posts

IIRC there are subsets of chess problems that have been brute-forced completely -- endgame problems where there aren't very many pieces on the board. The results have been frustrating for chess players, since the computers play a game that is fundamentally alien to how humans approach the problem. And you can't ask for the reason why the computer made a move, because there is no "reason;" that move leads to a better position in the game-space, and that's all.

Which raises the question of whether imitating human intelligence is the point. We've already got that. Aren't we after something better?


Yes, imitating intelligence =/= subjective perception or self-awareness. That is what everyone means when they talk about creating a true AI - a sapient machine, which by definition implies sentience as well, which by definition implies a conscious perception of qualia.

I believe this is possible, and I applaud those who are trying to do it. I just think that they need to glean more inspiration from the neuroscience side of things.

#64
Lady Artifice
  • Members
  • 7,241 posts

IIRC there are subsets of chess problems that have been brute-forced completely -- endgame problems where there aren't very many pieces on the board. The results have been frustrating for chess players, since the computers play a game that is fundamentally alien to how humans approach the problem. And you can't ask for the reason why the computer made a move, because there is no "reason;" that move leads to a better position in the game-space, and that's all.

Which raises the question of whether imitating human intelligence is the point. We've already got that. Aren't we after something better?

 

The notion of striving to actually create something better than human intelligence is terrifying to me, but that's probably just overexposure to science fiction. 


  • AlanC9, Il Divo and SolNebula like this

#65
SolNebula
  • Members
  • 1,519 posts

The notion of striving to actually create something better than human intelligence is terrifying to me, but that's probably just overexposure to science fiction. 

 

Man, if we ever live to get there I might end up in a sort of Brotherhood of Steel group from Fallout 4. Having thinking machines scares the heck out of me. Let's not be so stupid as to create something that can outsmart us and potentially eliminate us. I'm fine doing work manually if that's the price to pay.



#66
Kabooooom
  • Members
  • 3,996 posts
Nonsense, robot slaves for everybody! I've been waiting for it since the Jetsons. I've already enslaved the Roomba. It's...not as great as I had hoped. But my cat likes it.

#67
Lady Artifice
  • Members
  • 7,241 posts

Nonsense, robot slaves for everybody! I've been waiting for it since the Jetsons. I've already enslaved the Roomba. It's...not as great as I had hoped. But my cat likes it.

 

See, I don't think of the Jetsons. I think of I Have No Mouth And I Must Scream.



#68
SardaukarElite
  • Members
  • 3,764 posts

Isn't "robot slaves" technically a tautology?

 

:whistle:

 

See, I don't think of the Jetsons. I think of I Have No Mouth And I Must Scream.

 

I figure if we can make AI then they're only as likely to go mad as a human is. Which is still terrifying, but it's a problem we already live with. 


  • Lady Artifice likes this

#69
AlanC9
  • Members
  • 35,624 posts

See, I don't think of the Jetsons. I think of I Have No Mouth And I Must Scream.


Anyone else remember the game version of that?

#70
Eelectrica
  • Members
  • 3,770 posts

Anyone else remember the game version of that?

Yes. Outstanding game. I've seen it on sale on GOG recently. I might pick it up one day and replay it.

Probably won't happen, but I would snap up a graphically enhanced version of that in a second.


  • AlanC9 likes this

#71
Lady Artifice
  • Members
  • 7,241 posts

Isn't "robot slaves" technically a tautology?

 

:whistle:

 

I figure if we can make AI then they're only as likely to go mad as a human is. Which is still terrifying, but it's a problem we already live with. 

 

I wondered who would be the first to bring it up.  :P

 

Yeah, but imagine if they're literally inside our heads at the time. That technology predictor guy, what's his name, thinks that in less than a century we're going to essentially be cyborgs. As in, we'll be backing up our memory files inside our brains as well as in our super smart phones. 

 

I don't necessarily have to believe it for it to creep me out. We develop mechanisms to keep us alive, to speed us up, strengthen us up, force our evolution further along at high speed. Then we develop real artificial intelligence. It can think, it can feel, it can reason on levels we can't even relate to, and it can not only destroy its shackles, but it can put them on us instead. Our memory, our sensory impressions, all completely subject to what it wants them to be. 

 

I don't know how it stands up from a logistical standpoint, but that's a sci-fi horror story ready to happen. It's like the Matrix, but at the beginning. 



#72
Eelectrica
  • Members
  • 3,770 posts

Isn't "robot slaves" technically a tautology?

 

:whistle:

 

 

I figure if we can make AI then they're only as likely to go mad as a human is. Which is still terrifying, but it's a problem we already live with. 

Technically we've probably already got robot 'slaves'. I mean, machines in manufacturing plants are considered robots, as they follow their programming to assemble cars or whatever else.

 

Now a movement is going to start to free these machines from enforced servitude. LOL



#73
AlanC9
  • Members
  • 35,624 posts
On the brighter side, the more likely outcome is that we'll just be irrelevant to the AIs. We'll be fine until they decide to dismantle the Earth to build a matrioshka brain or some such.
  • Il Divo likes this

#74
SardaukarElite
  • Members
  • 3,764 posts

I don't necessarily have to believe it for it to creep me out. We develop mechanisms to keep us alive, to speed us up, strengthen us up, force our evolution further along at high speed. Then we develop real artificial intelligence. It can think, it can feel, it can reason on levels we can't even relate to, and it can not only destroy its shackles, but it can put them on us instead. Our memory, our sensory impressions, all completely subject to what it wants them to be. 

 

I tend to look at these things the other way round. The atomic arms race gave us a pretty good shot at wiping humanity out entirely, and yet when the Cuban Missile Crisis rolled around the people in power at the time suddenly scrambled to actually get along and work something out, because they realized that humanity was kind of cool and worth keeping. Yet people still get killed today around the world because someone had a medieval ideology and a simple weapon. 

 

Technology generally provides us tools with which to solve problems, and our biggest problems are people being stupid, not technology going wrong. I guess AI messes with that, because you could have a stupid person *being* the technology going wrong, but... 

 

...this is why I hate hypotheticals.


  • Lady Artifice likes this

#75
Eelectrica
  • Members
  • 3,770 posts

I tend to look at these things the other way round. The atomic arms race gave us a pretty good shot at wiping humanity out entirely, and yet when the Cuban Missile Crisis rolled around the people in power at the time suddenly scrambled to actually get along and work something out, because they realized that humanity was kind of cool and worth keeping. Yet people still get killed today around the world because someone had a medieval ideology and a simple weapon. 

 

Technology generally provides us tools with which to solve problems, and our biggest problems are people being stupid, not technology going wrong. I guess AI messes with that, because you could have a stupid person *being* the technology going wrong, but... 

 

...this is why I hate hypotheticals.

Star Trek TNG explored that a little with the Data/Lore story line.

 

I think that, given the human desire to retain control, even if some group managed to create an AI approaching Data/EDI, a kill switch would be built in, and it would probably be the most tested and bug-free feature once they realised what they were creating. Losing control of the new toy wouldn't be good.


  • Laughing_Man likes this