
Could be cool if we could play our characters as very anti-synthetic.


296 replies to this topic

#176
Medhia_Nox
  • Members
  • 3,530 posts

Like I said, I'm a comparative neurologist (I study the nervous system and consciousness of animals), so my opinion is different than yours. I see no magic life-force that makes information processing in a biological brain fundamentally different than information processing in a synthetic one, and thus I see no fundamental barrier to creating sentience OR sapience "synthetically".

However, I do see a technical barrier to doing so. The brain of a fruit fly is more complex than anything that can be simulated today. And for decades, artificial intelligence researchers were going about it completely wrong by ignoring how brains actually work. I've seen a couple recent publications from people who are probably more on the right track, but creating an artificial intelligence by 2050 like many people believe is probably a pipe dream.

Thanks for the reply though, this topic and people's view of it is fascinating to me as I suspect that it may become a major hot topic in the next generation and maybe, just maybe, within my lifetime.

 

Is the very nature of that complexity itself a "magic life-force" concept? Not by fiat, of course, but as a possibility.

 

You recognize that a fruit fly is more complex than anything we can simulate, and that 2050 is more than likely a pipe dream - but what if it is genuinely impossible? That is a very real possibility, regardless of the wistful dreams of people who are fascinated by AI.

 

On a philosophical level - if we were not created (by some Creator force) - then our sapience sprang out of the muck and mire by chance. That alone makes it intrinsically superior to anything we will design (while we must rely on the Creator force to be superior to us for us to be superior to our creation). I understand that the belief is that once something is designed to do so - it will simply "take over" from there and do the same thing we do - I disagree.

 

Any created AI will be more limited than its creator. I believe there would be diminishing returns on creation. Flawed creatures would only create increasingly flawed creatures.

 

There is no reason to believe that a machine AI would think, at all, like a biological brain - and yet, those are the representations we invent in fiction. We anthropomorphize machinery (we've done it for a LONG time - golems, Pygmalion, Talos, Frankenstein's monster).



#177
KainD
  • Members
  • 8,624 posts

 

You recognize that a fruit fly is more complex than anything we can simulate, and that 2050 is more than likely a pipe dream - but what if it is genuinely impossible?

 

It all really depends on whether it is possible to imitate a pleasure / reward / survival center in machines, because that is the driving force for all sentient life. Without it, machines, even if made capable of thinking, have no reason to.


  • Medhia_Nox likes this

#178
Pasquale1234
  • Members
  • 3,061 posts

Any created AI will be more limited than its creator.


Initially, perhaps, but what if it has the ability to self-modify? Gather more information via observation, experimentation?

ETA:

It all really depends on whether it is possible to imitate a pleasure / reward / survival center in machines, because that is the driving force for all sentient life. Without it, machines, even if made capable of thinking, have no reason to.


It isn't that difficult to build self-preservation logic or algorithms that prioritize some functions over others. Any executing software is always doing *something* - even if that something is awaiting user input.
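
A rough sketch of what I mean, in Python - the task names and the battery threshold are invented, purely for illustration:

```python
import heapq

# Toy task queue: lower number = higher priority.
# Everything here is hypothetical, just to illustrate the idea.
tasks = [(2, "index_sensor_logs"), (3, "await_user_input")]
heapq.heapify(tasks)

def self_preservation_check(battery_level):
    """Inject a top-priority task when 'survival' is threatened."""
    if battery_level < 0.15:
        heapq.heappush(tasks, (0, "seek_charging_station"))

self_preservation_check(battery_level=0.10)
while tasks:
    priority, task = heapq.heappop(tasks)
    print(f"running (priority {priority}): {task}")
# "seek_charging_station" preempts everything else - crude, but it is
# self-preservation logic that prioritizes some functions over others.
```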
  • Il Divo likes this

#179
Il Divo
  • Members
  • 9,768 posts

^Which is itself the basis of the (hypothetical) technological singularity. 


  • Pasquale1234 likes this

#180
KaiserShep
  • Members
  • 23,815 posts

Initially, perhaps, but what if it has the ability to self-modify? Gather more information via observation, experimentation?

ETA:


It isn't that difficult to build self-preservation logic or algorithms that prioritize some functions over others. Any executing software is always doing *something* - even if that something is awaiting user input.

 

It'd be interesting to see how it would develop the capacity to self-modify if it wasn't initially designed to do such a thing in the first place. The quarians made the geth with the ability to self-repair, likely to diminish the burden of regular maintenance. If they hadn't, the geth might have just started breaking down over time and ultimately gone kaput.



#181
KainD
  • Members
  • 8,624 posts

It isn't that difficult to build self-preservation logic or algorithms that prioritize some functions over others. Any executing software is always doing *something* - even if that something is awaiting user input.

 

But can you imitate the high that people get from neurotransmitters for doing something they consider or feel to be pleasant? And what is that something going to be?



#182
Pasquale1234
  • Members
  • 3,061 posts

It'd be interesting to see how it would develop the capacity to self-modify if it wasn't initially designed to do such a thing in the first place.


I wouldn't expect that to be possible.

The quarians made the geth with the ability to self-repair, likely to diminish the burden of regular maintenance. If they hadn't, the geth might have just started breaking down over time and ultimately gone kaput.


My impression is that their self-repair capability was part of their overall self-preservation - without which they might not have questioned the shutdown instructions that led to the Morning War.

#183
wass12
  • Members
  • 147 posts

It all just boils down to how you personally feel about particular things. Here's a random example:

 

Let's say that I meet a person who thinks alcohol should be banned because drunk people often cause unpleasant incidents and often get out of control. Now, I can agree with that, because it is true, and I myself might not like dealing with out-of-control drunk people, especially when I am sober or at work. However, those are things I am willing to put up with, because I enjoy alcohol. My opponent, though, does not enjoy alcohol, so there is nothing holding them back from banning it, since they have no wish to use it themselves.

 

That is a perfect example of me understanding someone's reasons and considering their point rational and sound, yet taking a different stance simply because I personally feel differently about the subject.

 

Whether alcohol should be banned is (ideally) an issue decided ab ovo by the community, since it would affect all of them. You think that alcohol's positive effects outweigh its costs, and if the majority of the community thinks similarly, you can make the argument that, across all people, the net value of alcohol staying legal is higher than the net value of a ban: even if the minority of abstainers would benefit from the latter, the ban would have an overall negative effect. (This works much better if we had a way to properly quantify the values people place on certain issues; then you could make the math rigorous.) Your opponent would need a reason why their own benefit makes up for the community's loss, or they would never be able to introduce the ban.
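
To make the aggregation concrete, here's a toy calculation - the headcounts and per-person values are completely invented:

```python
# Invented numbers, purely to illustrate the net-value argument above.
drinkers, abstainers = 70, 30                   # hypothetical community
legal  = drinkers * (+2) + abstainers * (-1)    # net value if alcohol stays legal
banned = drinkers * (-2) + abstainers * (+1)    # net value under a ban
print(legal, banned)  # 110 vs -110: the ban nets out negative overall
```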



#184
Pasquale1234
  • Members
  • 3,061 posts

But can you imitate the high that people get from neurotransmitters for doing something they consider or feel to be pleasant? And what is that something going to be?


With today's technology? Probably not.

Is that even desirable? Why or why not?

A lot of emotion, pleasure, etc. in organics is induced by chemical and electrical impulses.

#185
Medhia_Nox
  • Members
  • 3,530 posts

It all really depends on whether it is possible to imitate a pleasure / reward / survival center in machines, because that is the driving force for all sentient life. Without it, machines, even if made capable of thinking, have no reason to.

And even then - it will be a different "pleasure/reward/survival" center in machines. 

 

Perhaps "obsolescence" will be second nature to these machines.  So, their reproduction is in spending their lives building a better progeny - then "dying" to make room for greater innovation.  

 

Perhaps, knowing beyond doubt that they were created, they are more fascinated with the universe because it was - to our knowledge - not created. So perhaps they simply spend their existence in quiet observation, without any ambition.

 

Or perhaps they spend their existence searching for the creator of the biological universe. The conclusion would be a logical one: if they were created, and all observable reality arises from interactions within itself, then there is a creator behind that too (whether there actually is would be irrelevant).

 

Or maybe they see existence as ultimately pointless and simply shut down, finding non-existence no worse or better than existence.

 

There's a near-infinite number of reasons why they would think radically differently from biological lifeforms.


  • KainD likes this

#186
KaiserShep
  • Members
  • 23,815 posts

My impression is that their self-repair capability was part of their overall self-preservation - without which they might not have questioned the shutdown instructions that led to the Morning War.

 

I always thought that unit's refusal to shut down was similar to how a computer can fail to shut down if an operation is hanging, forcing a hard power off, which is what the quarian techs ended up doing.  



#187
wass12
  • Members
  • 147 posts

Maybe it's a terminology thing, but drawing conclusions is choosing, in my mind. Which is the distinction I was making. You have an experience and draw conclusions based on it; your personality, or whatever you want to call it, is a summation of the conclusions you draw. A chip overriding that so you draw different conclusions has, IMO, basically killed that person, as the only thing that makes a person is their brain and its ability to draw conclusions.

 

But Synthesis doesn't do that. It merely lets you experience the world in a more profound manner. Of course, personality is a self-modifying construct, so those new experiences are capable of changing it.



#188
KainD
  • Members
  • 8,624 posts

Whether alcohol should be banned is (ideally) an issue decided ab ovo by the community, since it would affect all of them. You think that alcohol's positive effects outweigh its costs, and if the majority of the community thinks similarly, you can make the argument that, across all people, the net value of alcohol staying legal is higher than the net value of a ban: even if the minority of abstainers would benefit from the latter, the ban would have an overall negative effect. (This works much better if we had a way to properly quantify the values people place on certain issues; then you could make the math rigorous.) Your opponent would need a reason why their own benefit makes up for the community's loss, or they would never be able to introduce the ban.

 

It doesn't have to do only with beliefs, exactly. When it comes to alcohol, for example, my opponent might not want to use it because they get a bad physical reaction to it, or because they do not like themselves psychologically when under the effects of alcohol, whereas I get a pleasant physical sensation and enjoy the psychological effects. Both of our "opinions" in this case are facts, but they are different due to our slightly different biology, which we do not necessarily have control over.



#189
KainD
  • Members
  • 8,624 posts

Is that even desirable? Why or why not?

 

It is if you want the machine to have its own thoughts/opinions on anything.



#190
wass12
  • Members
  • 147 posts

I always thought that unit's refusal to shut down was similar to how a computer can fail to shut down if an operation is hanging, forcing a hard power off, which is what the quarian techs ended up doing.  

 

Did you play the Geth Fighter Squadrons mission? The reason that geth wasn't shut down was that half of the quarians were afraid they would be destroying an intelligent lifeform - which then prompted the other half to destroy them too.



#191
Pasquale1234
  • Members
  • 3,061 posts

I always thought that unit's refusal to shut down was similar to how a computer can fail to shut down if an operation is hanging, forcing a hard power off, which is what the quarian techs ended up doing.


It's been a while, but I thought the memories we saw in the geth core showed the geth actively questioning the shutdown instruction - asking whether they had done something wrong. I interpreted that as self-preservation coming into play.

#192
Pasquale1234
  • Members
  • 3,061 posts

It is if you want the machine to have its own thoughts/opinions on anything.


Thoughts / opinions can be created by:
-- Processing pertinent data
-- Comparing results / expected results to priorities / pros / cons / values
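
For instance, a toy sketch along those lines - the value weights and options are invented for the example:

```python
# Hypothetical sketch: an "opinion" formed by weighing pertinent data
# against a table of values / priorities. All numbers are invented.
values = {"safety": 0.7, "efficiency": 0.3}

def utility(scores):
    # Weigh each pertinent datum by how much we value it.
    return sum(values[k] * scores.get(k, 0.0) for k in values)

def form_opinion(options):
    # The "opinion" is whichever option best fits our values.
    return max(options, key=lambda name: utility(options[name]))

options = {
    "shut_down":    {"safety": 0.9, "efficiency": 0.1},
    "keep_running": {"safety": 0.4, "efficiency": 0.9},
}
print(form_opinion(options))  # -> shut_down, with these weights
```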

#193
KainD
  • Members
  • 8,624 posts

Thoughts / opinions can be created by:
-- Processing pertinent data
-- Comparing results / expected results to priorities / pros / cons / values

 

You cannot have values without "feelings". These chemical and electrical impulses that we feel are the only thing that makes us care about anything.


  • Kalas Magnus likes this

#194
Kabooooom
  • Members
  • 3,996 posts

Is the very nature of that complexity itself a "magic life-force" concept? Not by fiat, of course, but as a possibility.

You recognize that a fruit fly is more complex than anything we can simulate, and that 2050 is more than likely a pipe dream - but what if it is genuinely impossible? That is a very real possibility, regardless of the wistful dreams of people who are fascinated by AI.

On a philosophical level - if we were not created (by some Creator force) - then our sapience sprang out of the muck and mire by chance. That alone makes it intrinsically superior to anything we will design (while we must rely on the Creator force to be superior to us for us to be superior to our creation). I understand that the belief is that once something is designed to do so - it will simply "take over" from there and do the same thing we do - I disagree.

Any created AI will be more limited than its creator. I believe there would be diminishing returns on creation. Flawed creatures would only create increasingly flawed creatures.

There is no reason to believe that a machine AI would think, at all, like a biological brain - and yet, those are the representations we invent in fiction. We anthropomorphize machinery (we've done it for a LONG time - golems, Pygmalion, Talos, Frankenstein's monster).

I'm sorry, but your post is all over the map. I mean no offense, but it doesn't suggest a very good grasp of biology, neuroscience, or evolution. Perhaps it was just the wording of it.

First off, the evolution of animal brains most certainly did not happen by chance. Evolution itself doesn't happen "by chance": while mutations are random, the forces of selection are guiding. This is a critical distinction, and of fundamental importance to understanding how nature creates complexity in biological organisms.
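
If it helps, the distinction is easy to demonstrate with a toy search - the numbers are purely illustrative, nothing biological about them:

```python
import random

# Each mutation is pure chance, but a selection rule that keeps
# improvements reliably climbs toward the target anyway.
random.seed(0)
target, x = 100.0, 0.0
for _ in range(1000):
    mutant = x + random.uniform(-1.0, 1.0)    # random variation
    if abs(target - mutant) < abs(target - x):
        x = mutant                            # selection is the guide
print(round(x, 1))  # ends up near 100.0: a directed outcome from random steps
```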

Secondly, no, I suspect that there is no fundamental barrier to creating a synthetic sentience, purely because there is no fundamental barrier to creating an organic one. To say otherwise is, quite frankly, in my opinion equivalent in absurdity to vitalism, since such a position straight up states that biological organisms are unique in possessing the ability to be conscious. Which I find ridiculous.

And thirdly - we ARE machines. We are biological machines. The way the central nervous system processes information is fundamentally different from the way a computer does, but it processes information nonetheless. The mechanism by which it does so gives biological neural networks a vast superiority over anything we have created thus far with computer technology. But I see no reason why hardware advances could not mimic the processing capabilities of a brain in the future.

The breadth of this topic is probably beyond the scope of this discussion. Like I said, I am a comparative neurologist (my actual field is a bit more specialized than that - anyone interested can PM me). I can discuss neuroscience at length if anyone is interested.

And that, admittedly, possibly biases me, in the sense that for me - as someone who studies the brains of humans and non-human animals - there is no "ghost in the machine". The brain, and the consciousness it produces, are purely physical. At least, that is what the sum total of every single shred of evidence we have so far suggests. So to me, viewing the brain as a mechanistic, classical object, I have no intellectual objection to creating consciousness in a computer, as it is likewise a mechanistic, classical object.
  • Il Divo, Pasquale1234, 78stonewobble and 1 other like this

#195
wass12
  • Members
  • 147 posts

It doesn't have to do only with beliefs, exactly. When it comes to alcohol, for example, my opponent might not want to use it because they get a bad physical reaction to it, or because they do not like themselves psychologically when under the effects of alcohol, whereas I get a pleasant physical sensation and enjoy the psychological effects. Both of our "opinions" in this case are facts, but they are different due to our slightly different biology, which we do not necessarily have control over.

 

I'm not sure what your argument is here. You won't value alcohol to the same level, but you can agree that both of you had valid reasons to set your personal values where you did, and that shared policies should account for the values of all participants.



#196
Pasquale1234
  • Members
  • 3,061 posts

You cannot have values without "feelings".


Sure you can - and priorities, too.

Every OS that's ever been made prioritizes some demands for processing resources over others.
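
For a concrete (if tiny) example - on Unix-like systems a process can even ask the OS to deprioritize it, no feelings required:

```python
import os

# os.nice(increment) adds to the process's "niceness"; a nicer
# process gets scheduled less eagerly. Unix-like systems only.
print("niceness before:", os.nice(0))  # increment of 0 = read current value
print("niceness after:", os.nice(5))   # volunteer to yield to other work
```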

These chemical and electrical impulses that we feel are the only thing that makes us care about anything.


... because we're organic.

#197
Kabooooom
  • Members
  • 3,996 posts

But can you imitate the high that people get from neurotransmitters for doing something they consider or feel to be pleasant? And what is that something going to be?

Creating a synthetic consciousness that experiences emotions would probably be even simpler than creating a true sapient machine.

Most people are surprised to learn that the parts of the brain that create emotions are extraordinarily ancient and primitive. There isn't really one "pleasure center", in the sense that not even the nucleus accumbens acts in isolation - rather, there are interdependent pleasure networks in the telencephalon, diencephalon, and mesencephalon that work together. But the fundamental structure of them all is quite simplistic compared to other parts of the brain. We understand it almost entirely on a gross anatomic and neurochemical level. What we don't fully understand is the microscopic level - the level of neuroarchitecture - and how things happen at that level in real time. But further advances in imaging technology are already starting to bridge this gap in understanding.

Edit: I just wanted to add that, after writing this post, I felt it understated just how well we understand this particular topic in neuroscience. It is by far the most heavily studied part of the field, owing to the importance of neurological diseases involving these parts of the brain, to the intricate connections between the limbic system and the basal ganglia of the forebrain, and to the importance of those connections in disorders like Parkinson's disease. A massive amount of money, research, and effort has gone into elucidating how it all works.
  • wass12 likes this

#198
wass12
  • Members
  • 147 posts

You cannot have values without "feelings". These chemical and electrical impulses that we feel are the only thing that makes us care about anything.

 

Actually, one cornerstone of serious AI research is the concept of a "utility function," which is basically an algorithm that assigns preference values to possible outcomes. The utility function determines what the AI cares about - including "cares to achieve," if the AI is designed to try to accomplish the most preferable outcome.

 

Humans also have utility functions - that's how any of us are capable of making a rational choice between options - but they're not very generalized, and the output "values" are hard to compare.
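
In its simplest form it's just this - the outcomes and preference numbers below are made up for the example:

```python
# Minimal sketch of a utility function: assign a preference value to
# each possible outcome, then pick the action whose predicted outcome
# scores highest. All names and numbers are invented.
PREFERENCES = {"survive": 10.0, "recharge": 6.0, "idle": 1.0}

def utility(outcome):
    return PREFERENCES.get(outcome, 0.0)

def choose(plans):
    # plans maps each available action to its predicted outcome
    return max(plans, key=lambda action: utility(plans[action]))

print(choose({"flee": "survive", "dock": "recharge", "wait": "idle"}))  # -> flee
```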



#199
KainD
  • Members
  • 8,624 posts

I'm not sure what your argument is here. You won't value alcohol to the same level, but you can agree that both of you had valid reasons to set your personal values where you did, and that shared policies should account for the values of all participants.

 

But that is the thing - we do not share the same values. Everybody always has reasons for everything they believe; you can understand anyone if you really try, but that doesn't mean you will think the same.


  • Kalas Magnus likes this

#200
KainD
  • Members
  • 8,624 posts

Creating a synthetic consciousness that experiences emotions would probably be even simpler than creating a true sapient machine.

 

Cool, sorted. AI - possible, just need more advanced tech.