Seems like you're relying on a version of the KK principle here (i.e. "In order to know p, one must know that one knows p.") Briefly, it has its defenders, but it's mostly regarded as a relic among contemporary thinkers. Most notably, Timothy Williamson has subjected it to withering criticism in his Knowledge and Its Limits, although unfortunately I do not have the space to rehash his arguments here.
Thanks for the reference, though. I'll look it up.
I tried to explain the mechanism in my previous post, but it was admittedly rather quick. Roughly, conversation is a rule-governed activity, and most speakers have at least a tacit grasp of these rules. Generally, they assume that other speakers are following these rules, because usually, they do. By connecting speakers' utterances with the rules, we're able to draw inferences about what was implied (example below). In other words, implication consists in little more than me following the rules, and you recognizing that I am. I don't see anything particularly spooky going on here.
That might help if I were able to discern the rules.
Implication doesn't absolve people of responsibility for their own inferences; it simply suggests that those inferences are frequently justified.
That would be good enough for me. I don't want to be blamed for things people incorrectly think I implied.
My understanding of implication is that it is intended. The hidden meaning is supposed to be there. As such, the speaker should always know with certainty whether something was implied.
If that's the case, then I'm safe from accidentally offending people.
I think your response is too wedded to my (admittedly infelicitous) choice of example.
I had considered that, and actually rewrote my response multiple times trying to avoid it.
Apparently I failed.
Here's a different example: Suppose someone asks me, "Are John's children asleep?" and after investigating, I respond "Some of them are." If others later find out that all of them were asleep, then my previous statement would be regarded as misleading even though it was true. This is plausibly because there is a generally observed conversational rule to the effect that one should make the most informative assertion one can. Connecting this rule with my statement, their reasonable conclusion is that I implied some of John's children were not asleep.
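Spelled out (only a rough sketch of the reasoning, nothing canonical), the inference runs something like this:
(i) I asserted "Some of them are asleep" rather than the stronger "All of them are asleep."
(ii) By the informativeness rule, if I had been in a position to assert the stronger claim, I would have.
(iii) So my audience may reasonably conclude that I was not in a position to assert it, i.e., that as far as I knew, not all of them were asleep.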
This is a great example (I'm not usually one for examples; I'd rather discuss underlying principles), because I straight up disagree with you.
If I were answering that question, I would give that response if some of the children being asleep were the relevant threshold: if, say, we were only going to do something once some of the children were asleep, then that threshold has now been met.
Also, I would give that answer if I had only determined the sleep status of some of the children, and those children were asleep. The sleep status of the other children is unknown to me, so I simply don't address it; but for some of the children, it is true that they are asleep.
I might use predicate logic in everyday speech more than most people do.
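To put my reading in those terms (just a rough sketch, with the quantifiers restricted to John's children): what I asserted is ∃x Asleep(x), and that is perfectly compatible with ∀x Asleep(x). The "some are awake" reading, ∃x ¬Asleep(x), is not entailed by what I said; if it's there at all, it has to come from your conversational rule, not from the logic.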
But if someone were to conclude from my response that some of the children were awake, I think that would be crazy.
Looking at the original example itself, I'm unfortunately not convinced by what you say about it. My hypothesis is that typically when deeply religious people take their doctrines as articles of faith, it's because they are treating these doctrines as a 'protected class' of propositions exempt from the normal rules of evidence. For instance, a fideist whose doctor told him that he is gravely ill would probably still demand evidence for this proposition. As for my not being criticized for my assertion if it does rain tomorrow, that is hard to assess, because people may take the truth of my belief as some indirect evidence that I must have had good reason for believing it after all. We're not generally all that great at assessing the state of another person's evidence.
I'm not denying that people frequently hold irrational beliefs, but I would argue that usually this is because they take themselves to have good evidence when they don't, rather than because they don't think they need evidence for their belief at all.
When their evidence is challenged, particularly online, I find that people often retreat to "I believe what I believe" or something similar, which is why I equated belief generally with religious faith (neither of which I understand).
My question about your argument is this: What is the something that would have to be perceived in order for empathy to count as real?
At the very least, I would like the two people's accounts of the emotions involved to match reliably, even when the person with the emotions isn't typical.
I suspect you may be relying on a highly idiosyncratic definition of empathy. Your alternative explanation "People imagine themselves in another's position and imagine what they would feel" doesn't seem all that different from the generally accepted understanding of what empathy actually is. There's a reason "Putting yourself in another's shoes" is a common expression, after all.
I thought that expression was being used to describe something else. Determining how you would feel in someone's place should be different from determining how that person actually does feel, because people aren't all relevantly similar in terms of how they experience or express emotions.
Speaking personally, people misread my emotional state all the freaking time. I'd like some description of the process that will accurately predict those errors so we can maybe fix them.