
With artificial intelligence we are summoning the demon


47 replies to this topic

#26
Guest_Act of Velour_*

  • Guests

Funnybot, no!



#27
Fidite Nemini

  • Members
  • 5,734 posts

For both better and worse, humans are often not logical. People do all sorts of things that don't make sense: they contradict themselves, they take risks that don't make numerical sense. How do you replicate that? Why would you want to replicate that? Build an AI with a specialized purpose instead of trying to replicate the human brain.

 

Even if you were to create a true AI, it must be considered different from a sentient and sapient being. It would be created for a purpose, not as an attempt to replicate biological life.

 

 

Illogical behaviour should by all means be easy to replicate. If we get down to the basics of how our own brain operates, it's all just bioelectrical charges firing off in different cells. It may not be an exact equivalent of ones and zeros, but the principle is as logical as any artificial system.
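A toy model of that point (my own sketch, assuming nothing about real neuroscience): even a single "firing cell" reduces to accumulation and a threshold, which is as mechanical as any program. A minimal leaky integrate-and-fire neuron in Python:

# Toy leaky integrate-and-fire neuron: the "cell" accumulates charge,
# leaks a little each step, and fires once a threshold is crossed.
def simulate(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # accumulate incoming charge
        if potential >= threshold:
            spikes.append(1)   # the cell "fires"
            potential = 0.0    # and resets
        else:
            spikes.append(0)
    return spikes

print(simulate([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]

Real neurons are vastly messier, of course; the sketch only shows that "charges firing off in cells" is not inherently beyond mechanical description.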

 

Let's not forget that there isn't just one logic. All behaviour is based on logical conclusions within a specific framework wherein that logic works, and any artificial system that can learn can build its own specific framework. It's only the size and complexity of said framework that would likely end up far greater than your everyday person's, by sheer processing capability and data availability (in fact, most AIs in the coming decades would be considered cognitively limited compared to humans, because our brain still outperforms modern computing technology).

 

The only real issue that would cleanly separate artificial logic from ours is the difference in perception and in understanding of one's own place in the world. Things like mortality would be an alien concept to an AI and, as such, weighted differently in its logic. And if an AI were connected to external data hubs, it would have an entirely new sense of perception that humans have no equivalent of, making its behaviour impossible to predict.

 

 

AIs, as far as my own amateurish opinion is concerned, are not made. They grow, as much as any other being does. We'd just have to accept them as the aliens they would be, and learn about them as much as they'd learn about us.


  • Reorte likes this

#28
Guest_TrillClinton_*

  • Guests

Illogical processing is an interesting one, because you want the source code to process information against a knowledge base. If the processing is illogical, that should be a product of the system, but I don't think one should strive to write a system that purposely makes illogical decisions. Logical and illogical conclusions should be the byproduct; the prime characteristic should be the learning ability.

 

The question is, how effective should the learning be? How would the system adapt to different environmental parameters?
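One hedged sketch of what "learning as the prime characteristic" could look like (entirely my own toy example; the reward probabilities are made up): an epsilon-greedy agent with no built-in logic about its environment, only a rule for updating estimates from feedback, so it adapts to whatever parameters the environment actually has:

import random

# Minimal learning agent (epsilon-greedy bandit). It carries no prior
# logic about the environment; it only updates estimates from feedback.
def learn(actions, steps=1000, epsilon=0.1):
    estimates = [0.0] * len(actions)
    counts = [0] * len(actions)
    for _ in range(steps):
        if random.random() < epsilon:          # occasionally explore
            a = random.randrange(len(actions))
        else:                                  # otherwise exploit the best guess
            a = estimates.index(max(estimates))
        reward = actions[a]()                  # feedback from the environment
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # running average
    return estimates

# Two hypothetical "environmental parameters": action 1 pays off more often.
env = [lambda: random.random() < 0.3, lambda: random.random() < 0.7]
print(learn(env))  # estimates drift toward roughly [0.3, 0.7]

Change the environment and the same agent settles on different estimates; any "logic" it displays is a byproduct of the learning rule.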



#29
Fidite Nemini

  • Members
  • 5,734 posts

Illogical processing is an interesting one, because you want the source code to process information against a knowledge base. If the processing is illogical, that should be a product of the system, but I don't think one should strive to write a system that purposely makes illogical decisions. Logical and illogical conclusions should be the byproduct; the prime characteristic should be the learning ability.

 

The question is, how effective should the learning be? How would the system adapt to different environmental parameters?

 

I meant that there is no such thing as illogical behaviour in a working system (including mentally healthy humans who may appear illogical from some perspectives).

 

Illogical behaviour would be answering 1+1=3 while knowing how the math works. 1+1=3, however, is a logical conclusion if the one doing the math has no idea how math works, as every potential answer is equally unfounded within his knowledge framework. Even 1+1=2 would be unjustified, because said person wouldn't have calculated the result from a knowledge base.
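As a toy rendering of that paragraph (my own illustration, nothing more): whether an answer counts as logical depends on whether it was derived from a knowledge base at all, not just on whether it happens to be correct:

import random

# An answer is "logical" relative to the knowledge base that produced it.
knowledge_base = {("1", "+", "1"): 2}

def answer(question, knows_math):
    if knows_math:
        return knowledge_base[question]  # derived from the framework: logical
    return random.randint(0, 9)          # no framework: every answer is a guess

q = ("1", "+", "1")
print(answer(q, knows_math=True))    # 2, calculated within the framework
print(answer(q, knows_math=False))   # maybe 2, maybe 3: unfounded either way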

 

Illogical behaviour would imply purposefully ignoring an established knowledge base.

 

That in turn, however, might merely imply a different judgemental framework, one in which purposefully ignoring said knowledge base is itself a logical conclusion because doing so leads to favourable outcomes.

 

 

 

I dare say that a truly random system is as yet unknown, and would likely remain so, because I can't imagine how a truly random system could even be recognized as such.
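For what it's worth, here is a naive sketch of why recognition is the sticking point (my own toy test; real randomness test batteries are far more elaborate, and even they only ever fail to find patterns): a crude frequency check cannot tell a fully deterministic generator from the OS entropy source:

import os
import random

# Naive frequency test: fraction of set bits. A deterministic PRNG and
# the OS entropy pool both look "random" to a test this crude.
def ones_fraction(data):
    return sum(bin(b).count("1") for b in data) / (8 * len(data))

prng = random.Random(42)  # seeded, hence fully deterministic
deterministic = bytes(prng.randrange(256) for _ in range(10000))
entropy = os.urandom(10000)  # operating-system randomness source

print(ones_fraction(deterministic))  # ~0.5
print(ones_fraction(entropy))        # ~0.5 as well; the test can't tell them apart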



#30
Guest_TrillClinton_*

  • Guests

I meant that there is no such thing as illogical behaviour in a working system (including mentally healthy humans who may appear illogical from some perspectives).

 

Illogical behaviour would be answering 1+1=3 while knowing how the math works. 1+1=3, however, is a logical conclusion if the one doing the math has no idea how math works, as every potential answer is equally unfounded within his knowledge framework. Even 1+1=2 would be unjustified, because said person wouldn't have calculated the result from a knowledge base.

 

Illogical behaviour would imply purposefully ignoring an established knowledge base.

 

That in turn, however, might merely imply a different judgemental framework, one in which purposefully ignoring said knowledge base is itself a logical conclusion because doing so leads to favourable outcomes.

 

 

 

I dare say that a truly random system is as yet unknown, and would likely remain so, because I can't imagine how a truly random system could even be recognized as such.

 

I like this.



#31
Inquisitor Recon

  • Members
  • 11,810 posts

Illogical behaviour should by all means be easy to replicate. If we get down to the basics of how our own brain operates, it's all just bioelectrical charges firing off in different cells. It may not be an exact equivalent of ones and zeros, but the principle is as logical as any artificial system.

 

Let's not forget that there isn't just one logic. All behaviour is based on logical conclusions within a specific framework wherein that logic works, and any artificial system that can learn can build its own specific framework. It's only the size and complexity of said framework that would likely end up far greater than your everyday person's, by sheer processing capability and data availability (in fact, most AIs in the coming decades would be considered cognitively limited compared to humans, because our brain still outperforms modern computing technology).

 

The only real issue that would cleanly separate artificial logic from ours is the difference in perception and in understanding of one's own place in the world. Things like mortality would be an alien concept to an AI and, as such, weighted differently in its logic. And if an AI were connected to external data hubs, it would have an entirely new sense of perception that humans have no equivalent of, making its behaviour impossible to predict.

 

 

AIs, as far as my own amateurish opinion is concerned, are not made. They grow, as much as any other being does. We'd just have to accept them as the aliens they would be, and learn about them as much as they'd learn about us.

I don't know the brain too well, but I highly doubt it is that simple to replicate within the framework of our methods of coding and the hardware available. "Within the framework wherein that logic works", yet that logic would often contradict other logic, specifically the numerical sort of logic you expect a computer to function on. Maybe somebody who knows coding could put that into better words than I can.

 

For example, you ask this AI to solve some equation, but instead it behaves like a 13-year-old kid playing CoD and tells you to go **** yourself because you're ugly and smell bad. Maybe that thinking "works" by some sort of human logic, but for all intents and purposes this AI is worthless.



#32
Inquisitor Recon

  • Members
  • 11,810 posts

I'm trying to think of the terrible things that would happen if you based this AI on my brain and gave it unprecedented processing power and resources at its disposal.

 

It would end very badly, I think.


  • Dominus likes this

#33
Neoleviathan

  • Members
  • 689 posts

  • Drone696 and mybudgee like this

#34
metatheurgist

  • Members
  • 2,429 posts

We should never make anything that can think it's better than us. This includes children. :D



#35
mybudgee

  • Members
  • 23,037 posts

I'm trying to think of the terrible things that would happen if you based this AI on my brain and gave it unprecedented processing power and resources at its disposal.

 

It would end very badly, I think.




#36
Guest_EntropicAngel_*

  • Guests

I have a different perspective on A.I. I program a lot, so I see A.I. as just a bunch of algorithms working together in a system. It is for this reason that I fail to see A.I. as more than just a bunch of systems working together.

 

The thing is, that's not AI. That's VI. Mass Effect's distinction between the two is by far one of the most meaningful I've come across in literature.

 

Incidentally, have you read Michael Crichton's novel Prey? It's ingenious. It stretches things a little, but it's utterly fascinating.


  • mybudgee likes this

#37
Lotion Soronarr

  • Members
  • 14,481 posts

We had one of the world's foremost experts on AI give a speech at our college back in the day (that was years ago).

 

True AI is a pipe dream. We aren't even making baby steps, and since the brain is a non-deterministic, chaotic system, truly replicating it seems impossible.
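For a feel of what "chaotic" means here (a standard textbook demo, nothing brain-specific): in the logistic map, two starting states that differ by about one part in a billion become completely uncorrelated within a few dozen steps, which is why long-term prediction of such a system, let alone exact replication, is hopeless:

# Logistic map: x' = r * x * (1 - x). At r = 4 the map is chaotic, so a
# tiny difference in the starting state gets amplified exponentially.
def trajectory(x, r=4.0, steps=40):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.200000000)
b = trajectory(0.200000001)  # differs in the ninth decimal place
print(a[-1], b[-1])          # utterly different values after 40 steps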



#38
Guest_EntropicAngel_*

  • Guests

We had one of the world's foremost experts on AI give a speech at our college back in the day (that was years ago).

 

True AI is a pipe dream. We aren't even making baby steps, and since the brain is a non-deterministic, chaotic system, truly replicating it seems impossible.

 

I almost said as much in my post (though only as opinion, not an expert's knowledge). Considering the gestalt nature of the brain, we'll never be able to make a real AI.



#39
Gravisanimi

  • Members
  • 10,081 posts

We had one of the world's foremost experts on AI give a speech at our college back in the day (that was years ago).

 

True AI is a pipe dream. We aren't even making baby steps, and since the brain is a non-deterministic, chaotic system, truly replicating it seems impossible.

 

Those things do not exist.

 

We just don't know why the brain does what it does.

 

Or how.

 

Personally, I'd use human brain tissue instead of silicon.



#40
Guest_TrillClinton_*

  • Guests

The thing is, that's not AI. That's VI. Mass Effect's distinction between the two is by far one of the most meaningful I've come across in literature.
 
Incidentally, have you read Michael Crichton's novel Prey? It's ingenious. It stretches things a little, but it's utterly fascinating.


By definition it is.

The thing is, that's not AI. That's VI. Mass Effect's distinction between the two is by far one of the most meaningful I've come across in literature.
 
Incidentally, have you read Michael Crichton's novel Prey? It's ingenious. It stretches things a little, but it's utterly fascinating.


What is the book about?

Software can exhibit A.I. It is the ability of machines or programs to exhibit intelligence. It comes in two types: weak A.I. (which emulates a section or mode of thinking) and strong A.I. (which is what people call true A.I.).

From my understanding, virtual intelligence works within virtual environments. It usually has a virtual interface, while A.I. can work without that interface.

Example of weak A.I.: a program that emulates the way bees organize themselves (see the sketch below).

Strong A.I.: the whole bee's thought process.

V.I.: a holographic bee system that exhibits intelligence.
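To make the weak-A.I. case concrete, a hedged sketch (my own toy, not how production swarm models are built): emulate one narrow bee behaviour, drifting toward the swarm's centre, with no "thinking" anywhere in the code:

import random

# Weak-A.I. flavour: emulate a single narrow behaviour (bees loosely
# clustering around their swarm's centre of mass) and nothing else.
def step(bees, cohesion=0.05, jitter=0.5):
    cx = sum(x for x, y in bees) / len(bees)
    cy = sum(y for x, y in bees) / len(bees)
    return [(x + cohesion * (cx - x) + random.uniform(-jitter, jitter),
             y + cohesion * (cy - y) + random.uniform(-jitter, jitter))
            for x, y in bees]

swarm = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
for _ in range(200):
    swarm = step(swarm)
print(swarm[:3])  # positions end up loosely clustered around a common point

Strong A.I., by these definitions, would have to reproduce the whole thought process behind the behaviour rather than just its surface pattern.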

#41
Guest_EntropicAngel_*

  • Guests

By definition it is.
What is the book about?

Software can exhibit A.I. It is the ability of machines or programs to exhibit intelligence. It comes in two types: weak A.I. (which emulates a section or mode of thinking) and strong A.I. (which is what people call true A.I.).

From my understanding, virtual intelligence works within virtual environments. It usually has a virtual interface, while A.I. can work without that interface.

Example of weak A.I.: a program that emulates the way bees organize themselves.

Strong A.I.: the whole bee's thought process.

V.I.: a holographic bee system that exhibits intelligence.

 

Fair enough--artificial intelligence in and of itself isn't particularly descriptive. I was thinking more of the kind that emulates human intelligence.

 

The book is kind of sort of about complex systems. A guy creates a "distributed parallel processing" or "agent-based" program that emulates prey behavior in the wild (hunting in packs, seeking food). This program then gets put into a group of nanomachines. Which wouldn't be too bad, except that the nanomachines are manufactured with live cells, those live cells feed on organic matter, and the nanomachines somehow get accidentally released together with the cells--so when the "swarm" comes across organic matter, the cells can produce more nanomachines. And they learn this.

 

Very good book.



#42
Guest_TrillClinton_*

  • Guests

Fair enough--artificial intelligence in and of itself isn't particularly descriptive. I was thinking more of the kind that emulates human intelligence.

 

The book is kind of sort of about complex systems. A guy creates a "distributed parallel processing" or "agent-based" program that emulates prey behavior in the wild (hunting in packs, seeking food). This program then gets put into a group of nanomachines. Which wouldn't be too bad, except that the nanomachines are manufactured with live cells, those live cells feed on organic matter, and the nanomachines somehow get accidentally released together with the cells--so when the "swarm" comes across organic matter, the cells can produce more nanomachines. And they learn this.

 

Very good book.

 

Sounds good! I might look out for it. How feasible is the computer science in it?



#43
Guest_EntropicAngel_*

  • Guests

Sounds good! I might look out for it. How feasible is the computer science in it?

 

It actually doesn't go into code at all. And I'm not a computer scientist myself, so I can't say specifically, but it seems quite reasonable for the most part. It mostly speaks in generalities about the specifics, like code, but goes into detail about the behaviors. There are a couple of things that I think stretch it, but telling you what they are would be huge, huge "spoilers." They don't really have much to do with the computer side of it, though.

 

If you've never read Michael Crichton at all, then I highly, highly recommend pretty much anything he's written. He's a fascinating author who writes "semi-fiction": he takes a real-world thing and extrapolates it to a very natural conclusion (though Jurassic Park and The Lost World are both a bit weak, to be honest--they feel more like "yay, dinosaurs!" than what he normally does). And while I'm talking about him, allow me to recommend, most highly, his novel State of Fear. Quite thought-provoking.

 

Edit: oh yes. State of Fear has 170 references. An actual one hundred and seventy references. And most of his books have references (Prey "only" has 44).



#44
Inquisitor Recon

  • Members
  • 11,810 posts

When you're mining for ore a mile under the Earth's crust for your robot masters, think of CmdShep's warning.


  • DeathScepter likes this

#45
Gravisanimi

  • Members
  • 10,081 posts

Actually, I won't have you mining a mile under the crust.

 

Humans can't survive well at those depths.

 

In fact, computers don't work well in Earth-like environments.

 

If I were a super-computer out to destroy humanity, I'd set out to make my perfect habitat. I'd make this rock barren of all water and all life, a rock devoid of features and dust, with no atmosphere.

 

Incidentally, this would kill all humans.



#46
Eternal Phoenix

  • Members
  • 8,471 posts

They'll steal our women!

 

[image: human-vs-robot-10.jpg]


  • mybudgee and Isichar like this

#47
Lotion Soronarr

  • Members
  • 14,481 posts

They have gone too far!

 




#48
Isichar

  • Members
  • 10,124 posts

Every night I sacrifice an external hard drive to our future overlords and masters.

 

I'll be the one laughing come judgement day.