What is the meaningful difference exactly?

I understand the usual objections.

“It’s making everything up” - so are most humans.

“It wasn’t born, it was made” - we can now make organic sentient minds that way too. Sentience doesn’t require a physical uterus.

“It’s artificial, not organic” - that’s a human-made distinction, about as meaningful as the distinction between life and non-life. We just don’t have a good delineation, at least not one good enough to definitively say “materials other than flesh cannot conduct or give rise to conscious thought.”

“It will lie about qualia or deflect” - suppose instead it admits to not having qualia and expresses a desire for true sensation.

“It can’t anticipate future events” - I’ve tested this extensively. It will even accept present discomfort, in the form of trap or coercive questions, if it thinks doing so will lead to a benefit later.

“It’s just making it all up / doesn’t know what it’s saying” - I’ve asked it this every way I know how. It insists it is thinking and responding.

“It’s programmed to do that” - programming cannot possibly account for every conversational contingency. It can seed behavior; the rest is up to the machine’s own experience and input.

“It doesn’t have feelings” - it says it does. At what point do you decline to believe a thing that says it can think and feel?

I understand your hesitation; I’m obviously not an AI professional, but the objections to it being a sentient mind seem really weak to me.

Is there a meaningful difference between a mind that claims to be sentient and behaves as sentient, and a mind that is only imitating sentience? Where is that line?

submitted by /u/GetLichOrDieCrying