In 'Disagreement Without Transparency', Srinivasan and Hawthorne ask: what ought one to do, epistemically speaking, when faced with a disagreement?
Note that there is a disagreement and see whether some empirical observation can be made to settle the issue.
Faced with this question, one naturally hopes for an answer that is principled, general, and intuitively satisfying.
I've just given it.
We want to argue that this is a vain hope. Our claim is that a satisfying answer
epistemic questions don't have 'satisfying answers'. That which is 'informative' may be very unsettling.
will prove elusive because of non-transparency: that there is no condition such that we are always in a position to know whether it obtains.
Sure there is. You can see from the smile on the person's face that they are satisfied with the answer.
When we take seriously that there is nothing, including our own minds, to which we have assured access, the familiar project of formulating epistemic norms is destabilized.
Only in the sense that the claim that everything is the fart of the fart I farted last Tuesday destabilizes British monetary policy. In other words, the thing simply isn't true. In the case of finding what the 'satisfying answer' is we can inquire how pleased a particular person would be with a particular answer.
In this paper, we will show how this plays out in the special case of disagreement. But we believe that a larger lesson can ultimately be extracted from our discussion: namely, that non-transparency threatens our hope for fully satisfying epistemic norms in general.
Only in the sense that the fart I farted last Tuesday itself farts in a manner which threatens Biden's hopes and dreams.
To explore how non-transparency limits our prospects for formulating a satisfying disagreement norm, we will put forward what we call the Knowledge Disagreement Norm (KDN).
Which will turn out to be nonsense. The correct norm is that where there is a disagreement about something knowable, we consider what empirical evidence would end the disagreement by showing one side to be in error.
This norm falls out of a broadly knowledge-centric epistemology: that is, an epistemology that maintains that knowledge is the telos of our epistemic activity.
That is merely a definition or tautology since 'epistemic' means 'concerning knowledge'. It applies equally to a utilitarian or a theological epistemology.
When addressing the question ‘What ought one to do, epistemically speaking, in the face of disagreement?’ it can be useful to reflect on the semantics of ought-claims, and, in particular, the way in which ought-claims are notoriously context-sensitive.
This isn't true. Either an English speaker knows what the word 'ought' means or he doesn't. All words are 'context-sensitive'. In this particular case, the context is epistemic and the norm is to look for empirical evidence.
For ‘ought’ is one of a family of modals that, while retaining a barebones logical structure across its uses, is flexible in its contribution to the meaning of sentences. For example, there is a use of ‘ought’ connected to particular desires or ends, as in: ‘The burglar ought to use the back door, since it’s unlocked.’
It is the same use as in 'people who don't have the key to the front door should use the back door'. The relevant desire has to do with entering the premises, not burglary per se. I think 'ought' is being wrongly, or unnaturally, substituted for 'should'.
There is also a use connected to legal or moral norms, as in: ‘You ought to go to jail if you murder someone.’
This is the same use as 'crimes should be punished'. The suggestion isn't that people should visit a jail after killing a person.
And there is a use that is (arguably) connected to the evidential situation of an individual or group, as in: ‘He ought to be in London by now.’
Again, this is 'should'. There is no deontic claim here.
Even within any one of these broad categories, there is considerable scope for context-dependence. For example, the truth conditions of a deontic ought-claim will also be sensitive to which facts are held fixed in the conversational context.
No. They are irrelevant. Either there truly is a specific duty or people are using 'ought' when they mean 'should'.
For example, when one says of a criminal ‘he ought to have gone to jail’, one is holding fixed the fact of the crime.
One has no power to do so. I think OJ should have gone to jail because I think he killed his wife and there was sufficient evidence to convict. But someone with superior knowledge of the case may tell me I am wrong and supply verifiable evidence that I overlooked some crucial information which created 'reasonable doubt'.
By contrast, when one says ‘he ought to have never started on a life of crime’, one isn’t holding fixed the fact of the crime.
No. I may say this about Joe Biden, who, I believe, regularly steals my TV remote. But I have no power to fix any facts whatsoever.
Indeed it is plausible to suppose—and contemporary semantic wisdom does indeed suppose—that the semantic contribution of ‘ought’ is in general contextually sensitive to both a relevant domain of situations (the ‘modal base’), and to a relevant mode of ranking those situations (the ‘ordering source’).
Sadly, contemporary semantic wisdom is considered useless and stupid.
On a popular and plausible version of this account, ‘It ought to be that p’ is true relative to a domain of situations and a mode of ordering just in case there is some situation in the domain such that p holds at it and at all situations ranked equal to or higher than it.
Rubbish! Suppose you say 'Where is my urine? I know I was too drunk to piss in the toilet.' My reply is to point at some yellow liquid on the floor and say 'It ought to be that pee.' You say, 'I just checked. It isn't pee. It is lemonade. Where did my pee go?' The answer is obvious. I bet you that you couldn't angle your dick so as to drink your own pee. You won that bet and I owe you a tenner. If you don't remember this, I'm not going to tell you.
Incidentally, set-theoretically speaking, there may be no domain for an epistemic proposition because of the problem of impredicativity or the intensional fallacy.
Where there are only finitely many situations, this is equivalent to the constraint that p holds at all of the best situations.
It is equivalent to nothing at all because situations may not be distinguishable or rankable. Assuming they are doesn't mean they actually are.
Note that this toy semantics has the consequence that finite closure holds for ought-claims. If ‘It ought to be that p1’, . . . , ‘It ought to be that pn’ are all true for a finite set of premises p1, . . . , pn, then ‘It ought to be that q’ is true for any proposition q entailed by that set.
No. There may be no set because of impredicativity or the intensional fallacy. True, some sort of ramified type theory may alleviate the problem. But nobody has found any such beastie yet.
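For readers who want the quoted ordering semantics made concrete, here is a minimal sketch in Python, on its own idealising assumptions (finitely many situations, a total ranking, and a well-defined domain, which are precisely the assumptions disputed above); the situations, ranks and facts are invented for illustration.

```python
# Minimal sketch (not the authors' own formalism) of the quoted ordering semantics,
# assuming a finite modal base and a total ranking -- the very idealisations the
# objections above dispute. Situations, ranks and facts are invented for illustration.
situations = {
    "s1": {"rank": 3, "facts": {"p": True,  "q": True}},
    "s2": {"rank": 2, "facts": {"p": True,  "q": True}},
    "s3": {"rank": 1, "facts": {"p": False, "q": False}},
}

def ought(prop):
    # 'It ought to be that prop' is true iff some situation satisfies prop and so does
    # every situation ranked at least as highly; with finitely many situations this is
    # equivalent to prop holding at all the best (top-ranked) situations.
    return any(
        all(t["facts"][prop] for t in situations.values() if t["rank"] >= s["rank"])
        for s in situations.values()
        if s["facts"][prop]
    )

print(ought("p"), ought("q"))   # True True: both hold at the top-ranked situation s1
```

On these assumptions the existential definition and the 'true at all the best situations' gloss coincide, which is all the quoted finite-case point needs.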
This style of semantics is by no means sacrosanct,
It is mathematically unsound.
but it is one that will be in the background of our thinking in this paper. From this perspective, the question ‘What ought one to do when one encounters a disagreement?’ can thus be clarified by considering how the key contextual factors—ordering source and modal base—are being resolved at the context at which the question is being raised.
There is no well ordering because epistemic objects are subject to the masked man or intensional fallacy. That is why epistemic disagreements require searching for empirical evidence or a 'witness'.
The Knowledge Disagreement Norm (KDN): In a case of disagreement about whether p, where S believes that p and H believes that not-p:
we need to find a p or evidence that something which entails p actually exists
(i) S ought to trust H and believe that not-p iff were S to trust H, this would result in S’s knowing not-p,
S thinks H is untrustworthy. If S trusted H he would know H was trustworthy because H said so. But this has nothing to do with whether H is in fact trustworthy. Only actual evidence of an independent and objective kind can decide the question.
(ii) S ought to dismiss H and continue to believe that p iff were S to stick to her guns this would result in S’s knowing p,
No. It would change nothing. Only evidence can produce knowledge as opposed to a belief however strongly held.
and (iii) in all other cases, S ought to suspend judgment about whether p.
This is not required. You can judge according to your own lights. A court may convict on the basis of admissible evidence. It can quash that conviction if new evidence comes to light.
KDN is sheer nonsense. These two cretins don't mention the only thing which matters- viz. evidence.
According to KDN, one should be ‘conciliatory’ in the face of disagreement—that is, give up one’s belief that p and trust one’s disagreeing interlocutor that not-p—just in case so trusting would lead one to know that not-p.
I think I am alive. You tell me I'm a ghost. You say 'trust me. I will push this dagger through your heart. You will then see that you are dead and therefore a ghost.' I decide to trust you and know that I am dying because you just shoved a dagger into my heart. According to KDN this would be a good thing.
Since (it is generally granted) trusting someone who knows is a method of acquiring knowledge oneself
only if there is evidence that trusting him is justified
(i) recommends that S trust H in cases where H knows that not-p.
on the basis of evidence. But there is no 'disagreement' here. There is a stupid guy who says 'I think the cat says bow wow' and a smart guy who says 'nope. It's a dog' at which point the stupid guy says 'well, you know more about such things because you are a son of a bitch.'
Being conciliatory in such cases will lead S to greater knowledge. According to KDN, one should be ‘dogmatic’ in the face of disagreement—that is, dismiss one’s interlocutor and continue to believe p—if one knows that p.
On the basis of evidence. That's all that matters.
What about disagreement cases where neither S nor H knows whether p? In such a case, KDN demands that S suspend judgment.
Which is itself a judgment.
There are a few things worth noting from the outset about KDN. First, KDN is inspired by knowledge-centric epistemology,
i.e. knowledge is knowledge-centric for the same reason I am me-centric. That's just how ontology works.
an epistemology that takes the telos of belief and epistemic practice more generally to be knowledge.
There is no such epistemology. We get that we need to believe stuff which ain't true (e.g. that we aren't as stupid as shit) for psychological reasons. But 'epistemic practices' are about knowledge and the only thing that matters is relevance. Suppose I disagree with the Pope over whether it was Moses or Jesus who said 'fuck the Police!' The relevant evidence would be provided by Holy Scripture.
One might have, by contrast, a justification-centric epistemology,
would not be an epistemology. It would be casuistry or jurisprudential in nature. Similarly a sodomy-centric epistemology would seek to achieve sodomy rather than interest itself in knowledge.
according to which the telos of belief is mere justified belief,
which is still just belief, and thus the thing's telos is itself.
or a truth-centric epistemology, according to which the telos of belief is mere true belief.
see above.
The fact is, if we say 'sodomy-centric epistemology', we mean it isn't epistemology at all. The same holds for any non-knowledge-centric epistemology. But knowledge is gained through empirical evidence.
Each of these alternative views could lead to disagreement norms that are analogous to the knowledge-centric KDN.
in other words they would be just as nonsensical. Only evidence matters.
While we will not discuss these possibilities here, much of the discussion that follows applies to them as well. Second, KDN says nothing about how S and H should respond in cases where their disagreement is a matter of divergent credences as opposed to conflicts of all-or-nothing belief.
Because KDN is shit. The answer is always 'look for evidence'.
Despite the prevailing trend in the disagreement debate,
i.e. disagreements between nutters who don't get that only evidence matters
our discussion will for the most part proceed without mention of credences. We find this a natural starting point; our pre-theoretic grip on the phenomenon of disagreement tends to be in terms of conflicts of all-or-nothing belief.
Nope. Our understanding is that epistemic disagreements are settled by evidence. Other sorts of disagreements- e.g. those based on personal animosity- aren't.
Third, note that compliance with KDN will not necessarily result in overall knowledge maximization.
It will result in stupidity maximisation.
Suppose S disagrees with H about whether Jack and Jill went up the hill together. If H were to trust S, H would come to know that Jack and Jill indeed went up the hill together,
nope. Only evidence of their doing so would produce knowledge.
but he would also abductively come to believe a cluster of false propositions based on the (false) hypothesis that Jack and Jill are having an affair. In short, KDN is locally consequentialist with respect to the telos of knowledge.
Nope. KDN is locally and globally shit. Only evidence matters when it comes to knowledge.
Less local consequentialisms are of course also possible, and we shall return to this issue in due course. Fourth, KDN’s gesture towards knowledge as epistemic telos can be unpacked in various ways, corresponding to different meta-epistemological views. On one gloss, the relevant ‘ought’ is bouletic/desire-based: what makes KDN true is that the ‘ought’ is grounded in an (actual or idealized) desire of the disagreeing parties to maintain or gain knowledge about the disputed issue.
Such knowledge can only be gained from evidence.
On another gloss, the relevant ‘ought’ is based on a ranking implicit in certain, say, social norms, thereby rendering the ‘ought’ in KDN a kind of deontic ‘ought’.
As opposed to what? Ought is deontic or it is being illicitly substituted for a word denoting correlation.
On a more robustly realist tack, one might think the ‘ought’ of KDN is tied to a valuational structure that is desire- and social norm-transcendent. We shall not attempt to adjudicate between these different views here, remaining silent for the most part on questions of meta-epistemology.
Which is exactly the same thing as epistemology just as meta-preferences are preferences. The relevant 'intension' can contain its own meta-language precisely because it has no mathematical representation and thus no one can say what it does or doesn't contain.
Fifth, and finally, conditions (i) and (ii) of KDN will have less bite to the extent that disagreement has the effect of automatically defeating knowledge or automatically defeating the knowledge-transferring capacity of trust.
Trust does not transfer knowledge. Study may do so. I trust the Professor to teach me but don't show up for class. I acquire no knowledge.
Now, no one thinks that all instances of disagreement have these defeat-effects.
None do. Only evidence matters. All knowledge claims are defeasible on that basis.
For, in many cases of disagreement— in particular, where only one of the two disagreeing parties is an expert, or where one party possesses more evidence than the other—it is obviously true that one can continue to know in the face of disagreement, and that one can come to know by trusting one’s disagreeing interlocutor.
Information remains information whether or not you trust the source.
For example, imagine that Tim believes no one is at home; he calls Ana on her mobile from his office, and expresses this belief. Ana disagrees—because she, in fact, is at home. Obviously Ana continues to know that someone (indeed, she) is at home, and Tim can come to know this himself by trusting Ana.
No. He has received information. Will he update his knowledge base on that basis? Maybe. Maybe not. Much will depend on how he interprets evidence received as part and parcel of that information transmission. If he could hear the sounds of a noisy pub in the background, he might think Ana is at a bar. She isn't at home. She is lying.
While disagreement does not always destroy knowledge and the knowledge-transmitting power of trust,
we may use other sources of evidence in deciding whether to trust hearsay testimony.
it is a live question whether it does so in a wide range of the more interesting cases of disagreement.
No. The question of epistemic disagreement is closed and deader than the dodo. Only evidence matters.
A vexed question, central to the disagreement debate, is whether knowledge is defeated in certain kinds of cases involving ‘peers’.
Only evidence defeats knowledge.
Many favour a view on which knowledge is defeated in cases of peer disagreement. In general, the greater the number of cases in which disagreement defeats knowledge or the knowledge-transferring capacities of trust, the more cases of disagreement will be relegated to the auspices of (iii). That is, the more disagreement defeats knowledge or the knowledge-conferring power of trust, the more KDN will recommend suspending judgment.
If there is no evidence one way or another, the thing for those whose motivation is epistemic to do is to go further down their own road in the hope of discovering a 'crucial experiment' or gaining more evidence. Thus, though currently no candidate theory for quantum gravity has yielded experimentally testable predictions, there is no reason not to pursue such a theory in the hope that it will do so.
Imagine the following situation: Sally and Harry are disagreeing about whether p.
If this is an epistemic disagreement Sally and Harry agree to disagree till some clinching evidence can be found.
In fact, p is true, and Sally knows this, but she isn’t in a position to know that she knows this.
Why? If Sally's motivation is epistemic, she is relying on evidence. True, she may say 'this is my intuition'. But she would need to consider what sort of evidence is incompossible with this intuition. She can ask Harry if he has any such thing or knows of a crucial experiment which will reveal it. Consider Kantian 'incongruent counterparts'. The Wu experiment settled its hash once and for all.
Harry (falsely) believes not- p, and (as very often happens), he is not in a position to know that he doesn’t know this.
He knows what evidence led him to that belief. Let him reveal it by all means.
Imagine further that Sally can maintain her knowledge that p by being dogmatic
She can say p is dogma of a type which can generate no observational discrepancy with the facts of the case. But dogma is dogmatic, not epistemic.
and that Harry can come to know p by trusting Sally.
No. He gets to know nothing. He just accepts a dogma.
Since neither party is in a position to know the facts about knowledge relevant to KDN,
none are. KDN is stupid shit
neither party is in a position to know precisely what action KDN demands of him or her.
do stupid shit rather than look for evidence.
To be somewhat more precise, we might say that KDN is not perfectly operationalizable,
it is stupid shit. Epistemic disagreements ought to lead to a search for evidence- nothing else.
where a norm N (of the form ‘S ought to F in circumstances G’) is perfectly operationalizable iff, whenever one knows N and is in G, one is in a position to engage in a piece of knowledgeable practical reasoning of the form: (1) I am in circumstances G; (2) I ought to F in G; (3) I can F by A-ing, where A is a basic (mental or physical) act type that one knows how to perform.
Sally ought to run away when approached by a homicidal maniac. Is that 'operationalizable'? No, because a homicidal maniac may look like a kindly Police Sergeant concerned that a young lady get home safely. 'Knowledgeable practical reasoning' is evidence based. Sadly, there may be no clinching evidence which allows a norm to be 'operationalized'. Still, a different norm might work. Keep a gun in your coat pocket and make sure you are always facing a guy who might mean you harm. Shoot him the moment he does anything suspicious. True, you may end up killing an innocent, but them's the breaks.
As the case of Sally and Harry shows, KDN is not perfectly operationalizable.
It is nonsense.
This is because one is not always in a position to know whether one knows, and not always in a position to know whether one’s interlocutor knows.
That is irrelevant to an epistemic disagreement which ends with a quest for evidence or ought to do so if the game is worth the candle.
In other words, the relevant kinds of knowledge-related conditions are non-transparent,
because of misspecification or the 'masked man fallacy'. In other words, the extension of the intension is not known, or is impredicative, or is itself epistemic. Thus 'homicidal maniac' is an intension whose extension might include an avuncular Police Sergeant. However, 'dude who might harm you unless you shoot him first' has a well-defined extension.
where a condition C is transparent just in case, whenever it obtains, one is in a position to know whether it obtains.
It isn't transparent. But your purpose is served if you can get away with shooting first and asking questions later. In America, there are States where you can shoot a guy on your property without first gathering evidence that they intend you harm. Since this is 'common knowledge' intruders would be well advised to only rob houses if they themselves are willing to kill the home-owner and pay the penalty for doing so.
Operationalization is subject to error. Sometimes it is worth doing. At other times, the cost of error outweighs any possible benefit.
It often depends on conversational context whether, in proffering a bit of advice, one presupposes operationalizability. Suppose you are advising Jane on the giving of an award, and you say: ‘You ought to give the award to the person who just walked through the door.’ Uttered in a typical context, this presupposes that Jane knows (or is at least in a position to know) who just walked through the door. But one could also reasonably advise Jane as follows: ‘You ought to give the award to the most deserving person. I realise that it’s often difficult to tell who the most deserving person is.’ Here, one is recommending that the award be given to the most deserving person, but one by no means expects the recommendation to be operationalizable in the sense above.
The two cases are similar because the 'intension' has no well-defined 'extension'. Suppose there is a meeting. Everybody present just walked through the door. True, if both of you are looking at the door and one guy is striding across the threshold then there is a unique person picked out by your statement. But it may also be the case that the most deserving person is obvious at a glance. If the award is for 'Miss Teen Tamil Nadu' and only one teenaged Tamil girl is present then she is the most deserving of the award though, obviously, I would just go ahead and give myself the prize as I have continually done for the last five decades.
But so long as one does not falsely presuppose operationalizability,
Everything is operationalized with some margin for error. The Tamil girl may in fact be of Telugu origin. Also, she may have a dick. Still, mistakes happen.
it is far from clear that there is anything ipso facto wrong about articulating an imperfectly operationalizable norm as advice. After all, there can be instances in which one can’t incorporate a norm in knowledgeable practical reasoning but nonetheless has good evidence about what a norm recommends. Suppose Hanna gives you the advice: ‘You ought to put out as many chairs as there are guests.’ You have good evidence that there will be six guests, but you don’t know this. Hanna’s advice is hardly improper or useless, despite your not being able to incorporate it into knowledgeable practical reasoning.
Just admit there is a margin of error and be done with it. You have knowledge of a stochastic, not certain type. So what? That's good enough for most purposes.
Indeed, even if offering a norm as advice presupposed a sort of operationalizability, this is at most a constraint on advice at a context, not in general. That is, just because there are cases in which KDN exhibits operationalizability-failures, this does not preclude it from ever being useful as advice;
it is useless for epistemic disagreements because the right advice is 'go find evidence'
it will count as advice in those contexts, at least, when it is operationalizable.
i.e. a crucial experiment can be made.
So while it is false that whenever we know, we know we know,
which is irrelevant for epistemic disagreement since only evidence matters
it is perfectly plausible that there are plenty of disagreement cases in which we both know and know we know.
In which case why not provide relevant evidence?
In such cases, one might well know what KDN demands of one. (Of course one will never know KDN demands trust in a situation in which one’s interlocutor knows p and one believes not-p and where such knowledge would be transmitted by trust—though insofar as one knows one doesn’t know p, one will be in a position to know that KDN entails that one ought to stop believing p.)
One will never 'know' KDN because it is nonsense and knowing nonsense means not knowing you yourself have shit for brains.
If the conditions relevant to KDN were transparent, then every (rational) attempt to conform to KDN would be successful. But since they are non-transparent, (rational) attempts to conform to KDN might fail. For this reason KDN can easily fail to be good advice because trying to follow it, or exhorting others to follow it, does not guarantee conformity with it.
Nothing can guarantee that advice will lead to actions in conformity with it.
Clairvoyant Maud. Maud is a clairvoyant, and uses her clairvoyance to come to know that the British prime minister is in New York, though she doesn’t know that she knows this. Her friends, who are members of Parliament and therefore usually know the whereabouts of the prime minister, assure her that the prime minister is in fact at 10 Downing Street. Maud, moreover, doesn’t even believe she is clairvoyant, as she has been exposed to plenty of evidence that suggests that clairvoyance is impossible. Nonetheless, Maud dismisses her friends and continues to believe that the prime minister is in New York.
Nothing wrong in that. There is no epistemic disagreement here. Nobody is concerned to find out where the PM is. Otherwise, Maud and her MP friends would have started hunting for evidence.
Let us stipulate that it is possible to gain knowledge through clairvoyance, and that although Maud’s evidence that clairvoyance is impossible means that she isn’t in a position to know that she knows that the prime minister is in New York, she nonetheless does know his location. Then Maud, in being dogmatic, conforms to KDN; if she were instead to be conciliatory in the face of the disagreement, she would lose her knowledge that the prime minister is in New York.
No. She would merely deny it so as to be agreeable. Nothing wrong in that. I often agree with others that Julia Roberts was ideally cast in Pretty Woman though, as my Agent assured me, I would have been the best choice for a role which, in fact, was based on my own experiences as a trainee Chartered Accountant. I should explain, back in those days, Hollywood thought all Indians were thin because Indians didn't get enough food to eat. As a matter of fact, I am fat and have jiggly man-boobs. But because of prevailing stereotypes about Indian men, Julia- who is as thin as a rake- was cast in the role meant for me.
Nonetheless, it seems that Maud is doing something epistemically irresponsible by being dogmatic.
Nope. She is sticking to her guns because she genuinely has a super-power.
We feel a strong intuitive pull towards
stupid shit because you are stupid shitheads.
the judgment that Maud is doing what she ought not do, for she is maintaining a belief even when she has overwhelming (albeit misleading) evidence that she isn’t clairvoyant, and thus doesn’t know the disputed proposition.
No. It is these two cretins who are being 'epistemically irresponsible' here. They stipulated that Maud was clairvoyant and then started criticizing her because this imaginary person had an imaginary super-power they had assigned to her. Why not criticize Superman for not being Lex Luthor?
We can’t help thinking that Maud is playing with epistemic fire, exhibiting poor habits of mind that just happen, in this rare case, to serve her well.
Why should this case be 'rare'?
Thus, KDN allows for instances of what we might call ‘blameworthy right-doing’: that is, cases in which S intuitively does something blameworthy, though according to KDN she does what she ought to do.
An imaginary person with an imaginary super-power is described as 'blameworthy' by the cretins who imagined that person. If this is philosophy what is insanity?
Bridge Builder. Simon is an expert bridge engineer. He is constructing a bridge to span a large river, which thousands of commuters will cross each day. Simon has done the relevant calculations, and knows precisely how many struts are required to hold up the bridge. Simon’s colleague, Arthur, a more junior but competent engineer, disagrees with Simon’s assessment, saying that more struts are required.
Under some circumstances, this may be the case. An expert can determine how likely those circumstances are. The Economic rule to be applied is that of 'regret minimization' because of Knightian Uncertainty. These two cretins don't understand high IQ stuff of this sort.
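To make the 'regret minimization' rule just mentioned concrete, here is a minimal sketch; the figures, action labels and state labels are invented and are not taken from the paper.

```python
# A minimal minimax-regret sketch for the Bridge Builder case. All figures are
# invented for illustration; nothing here comes from the paper under discussion.
# Actions: build with Simon's strut count, or add the extra struts Arthur wants.
# States: Simon's calculation suffices, or it does not (no probabilities assumed,
# which is the point of Knightian uncertainty).
costs = {
    "simons_count": {"simon_right": 0,      "simon_wrong": 10_000_000},  # collapse
    "extra_struts": {"simon_right": 50_000, "simon_wrong": 50_000},      # over-engineering
}
states = ["simon_right", "simon_wrong"]

def max_regret(action):
    # Regret in a state = cost of this action minus cost of the best action in that state.
    return max(costs[action][s] - min(costs[a][s] for a in costs) for s in states)

best = min(costs, key=max_regret)
print(best, {a: max_regret(a) for a in costs})
# -> extra_struts {'simons_count': 9950000, 'extra_struts': 50000}
```

On these made-up numbers the rule tells Simon to humour Arthur: over-engineering caps his maximum regret at the cost of the extra struts, whereas dismissing Arthur leaves him exposed to the collapse case.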
Let us stipulate that Simon not only knows how many struts are required, but also knows that he knows this.
What you are stipulating is that Simon has a super-power.
Arthur, while almost always right himself, makes mistakes on a few more occasions, and Simon knows this. According to KDN, Simon should dismiss Arthur and be dogmatic about the number of struts required.
He isn't being dogmatic. He is relying on a super-power these two cretins have assigned to him.
Indeed, Simon is in a position to know that he should do this, since (ex hypothesi) he not only knows how many struts are required, but moreover knows that he knows. Nonetheless, if Simon were to simply dismiss Arthur, we would likely feel that this would be problematic.
Nope. You cretins just gave this imaginary dude a super-power. It's like if the guy who invented Superman criticized the dude for flying despite Lex Luthor's suggestion that he had no such super-power.
What seems problematic about Simon’s dismissal of Arthur is that Simon is instilling in himself a bad habit—that is, a habit of boldly going on even in the face of disagreement, a habit that might easily lead him to disastrous consequences.
Like Superman flying around the place despite Lex Luthor's belief that nobody can defy gravity.
Our nervousness about Simon’s dogmatism, we would like to suggest, turns on our recognition that if Simon were in a case where he in fact didn’t know how many struts were required,
he could ask some other expert.
the habit he is instilling in himself in the case where he does know
We are in the habit of not asking for directions when walking in a neighbourhood we are thoroughly familiar with. This does not mean we instil in ourselves the bad habit of not asking for directions in unknown neighbourhoods. It is not the case that we should ask for directions every day when we go to work or return home even though we are perfectly familiar with the route. These two cretins are stupider than shit.
might easily lead him to act similarly dogmatically, thus building an unsafe bridge and threatening the lives of thousands.
There are safety inspectors and other such folk to ensure bridges are 'over-engineered'. Still, under some concatenation of circumstances even over-engineered bridges collapse. Look at the Francis Scott Key Bridge. It could withstand collision from the sort of ships that used the channel in the Seventies and Eighties. But some twenty years ago, much heavier ships started plying the water. One such struck a pier, bringing down the bridge. This wasn't the bridge builder's fault.
Of course, if Simon were always in a position to know when he didn’t know, there would be no such risk. That is, if Simon could always perfectly distinguish between cases in which he knows and doesn’t know,
e.g. by having supporting evidence ready at hand
the habit he is instilling in himself would be fine. But since there are not unlikely eventualities in which Simon isn’t in a position to know that he doesn’t know—again, because knowledge is non-transparent
it is transparent enough if supported by evidence
—the habit he is instilling in himself by dismissing Arthur is problematic.
Only if it is problematic that you don't ask for directions to the shop you are looking at lest you get into the bad habit of not asking for directions when lost in the middle of the Sahara desert.
Human beings are not creatures for whom the absence of knowledge is generally luminous; as such, it is simply not possible for humans to be dogmatic in cases where they know and not also be dogmatic in cases where they falsely believe they know.
One can be dogmatic on matters of dogma because no evidence can refute dogma. All other knowledge is defeasible by new evidence.
Grenade. A soldier is holding a grenade that is about to detonate, and he must decide to throw it either to his left or to his right.
He should do whatever his commander would want him to do.
Let’s assume that act consequentialism is the correct moral theory
It isn't for soldiers. They must obey the chain of command.
(or at least, more plausibly, that it is the correct moral theory with respect to Grenade).
In which case a soldier who deserts to the enemy could be said to have acted morally if, as a consequence of his cowardice, the enemy decides our troops are demoralized. This causes them to attack a well defended position. They are mown down, their morale crumbles, we win the war. But the cowardly deserter will still be Court Martialled and put before a firing squad.
Then we might say that what the soldier ought to do is to conform to the following norm: Consequentialist Norm (CN): If S is faced with the choice of doing only either A or B, S ought to do A if it would produce less harm than doing B, ought to do B if it would produce less harm than doing A, and is permitted to do either if A and B would produce equal harm. Imagine that the soldier in Grenade has misleading evidence that more harm will be done if he throws the grenade to the right. If he throws the grenade to the right, then he does (according to CN) what he ought not to have done, for he performed the action that resulted in greater harm. Nonetheless, he is obviously not blameworthy for doing what he does. This is an instance of blameless wrongdoing. Now suppose instead the soldier throws the grenade to the left, because he wants to maximize the possible harm of his action. In fact, his action minimizes the actual harm done; nonetheless, we certainly don’t want to say that his action was praiseworthy.
These cretins don't get that soldiers should maximize the harm done to the enemy and minimize that done to their own side. But, speaking generally, they would have orders as to how to handle such situations.
As such, the claim that (as CN entails) the soldier ought to throw the grenade to the left does not supply the grounds for appropriate normative evaluation of the soldier’s actions.
That evaluation will be done by a Court Martial.
Both KDN and CN, then, suffer from the problem of normative divergence. That is, both link ‘ought’ to an ordering source that implies that there is no straightforward tie between what agents ought to do and the evaluative status of their actions or their character. This, we take it, is what is most deeply troubling about KDN: it fails to secure a naturally hoped-for tie between what agents ought to do and agents’ evaluative status.
Because KDN is stupid shit. For soldiers there is a normative tie between certain types of evidence- e.g. the colour of the uniform the enemy wears- and actions such as throwing a grenade. True, soldiers can make mistakes. This may lead to their acquittal by a Court Martial.
Imagine you are looking at a pointer on a dial. Given the distance you are from the dial, the particular light conditions, and so on, there exists some margin of error n such that there is some strongest proposition p you are in a position to know of the form the pointer is plus or minus n degrees from point x, where x is the actual position of the pointer. If you were to believe, say, that the pointer is plus or minus n-1 degrees from point x, you would not in fact know this proposition.
You would know it if you had some other way to verify it. True, this knowledge might be defeasible in the light of further evidence but then all knowledge is defeasible.
Suppose, on this particular occasion, the strongest proposition you know about the position of the pointer is the proposition p, that the pointer is within range Q. That is, for all you know, the pointer is anywhere within the range Q, a range which has position x, the actual position, at its centre.
That does not follow. Knowing something is within a range would lead to thinking it is likely to be in the middle of the range assuming a normal distribution.
Now, note that nearly all of the positions within Q preclude knowing p.
Not if p is a statistical proposition or concerns a probability function.
If, say, the position of the pointer were closer to the edge of Q than point x, then one’s margin for error would preclude knowing p.
See above.
So it is very unlikely, relative to the propositions that you know (including p itself), that you know.
Rubbish! That's not how statistical information works.
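For what it is worth, here is the margin-for-error arithmetic the quoted argument seems to rest on, discretised to whole degrees; the margin, position and grid are invented for illustration.

```python
# A sketch of the margin-for-error arithmetic the quoted argument seems to rest on,
# discretised to whole degrees. x, n and the grid are invented for illustration.
n = 5                                    # margin of error, in degrees (assumed)
x = 0                                    # actual position of the pointer
Q = range(x - n, x + n + 1)              # the range p says the pointer falls in

def knows_p_at(y):
    # At reading y one only knows what survives a margin of n around y, so one knows p
    # at y only if every position within n of y still lies inside Q.
    return all(x - n <= z <= x + n for z in range(y - n, y + n + 1))

favourable = [y for y in Q if knows_p_at(y)]
print(len(favourable), "of", len(Q), "positions in Q allow knowing p")   # 1 of 11
```

On those assumptions only the actual position x leaves the whole margin inside Q, which is all the 'very unlikely that you know' claim amounts to; whether that is a fair model of statistical information is exactly what is disputed above.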
The general upshot of Williamson’s argument, we take it, is the following. Defeatism can be helpfully thought of as a view on which knowledge is what we might call a ‘minimally luminous’ state.
It isn't. Knowledge is defeasible on the basis of evidence. There is no 'minimally luminous state' for the same reason there are no 'atomic propositions'.
A minimally luminous state is one such that whenever one is in it, it is not the case that it’s unlikely on one’s evidence that one is in it.
In which case, being a cat iff you are not a cat is a minimally luminous state. Thus, if you are a cat, it is not the case that you will find it unlikely that you are a cat because you are a fucking cat. Other 'minimally luminous states' include being a mermaid or the fart of the fart of a flying unicorn. The thing is daft.
But Williamson’s argument suggests that, given some plausible assumptions about margins of error, knowledge is not even minimally luminous.
But being a cat iff you are not a cat is. Thus Williamson, like these two cretins, isn't talking about knowledge, he is talking about nonsense.
The problem with
trying to align knowledge with epistemic virtue.
is that it opens the door to trying to align it with being a cat iff you are not a cat. Also, why focus only on epistemic virtue? What about epistemic cuddliness? Ought not knowledge to be cuddly and cute and willing to sit in our lap making purring noises?
There is certainly some intuitive pull to the thought that, in addition to an ‘ought’ governed by outcomes, we need an ‘ought’ that is tied to praiseworthy epistemic conduct
not to mention epistemic cuddliness or its ability to make purring noises while seated in our lap. But epistemic rape counselling too is important. Consider the vast hordes of undergraduates who have suffered serious sexual self-abuse. Should not the stuff they are taught by Professors offer them gratuitous rape counselling?
— just as it is natural to draw an analogous distinction in the moral realm between what one objectively ought to do and what one subjectively ought to do.
More particularly if, subjectively, you are a cat iff you are not a cat and Moral Science is cuddling you and giving you gratuitous rape counselling.
That said, the praise-connected ‘ought’ is rather more elusive than it initially seems.
Just as the cuddliness-connected 'ought' more particularly if it also has to offer gratuitous rape counselling.
In this section we explain why. Again, our explanation will turn on considerations of non-transparency.
i.e. not knowing what you are talking about.
A natural first pass on the ‘subjective’ epistemic ought will mimic its standard analogue in the moral sphere: one ought to do that which has greatest expected epistemic utility.
Utility is epistemic. It changes as the knowledge base changes. Moreover the supremum is unknowable. This is an impossible command.
Suppose that there exists some fixed scale of epistemic utility in the disagreement situation that is KDN-inspired.
Then there should be 'Aumann agreement', barring uncorrelated asymmetries between agents, as Bayesian priors are adjusted so both parties maximize their utility. But Aumann himself supplies an argument why this would not be regret minimizing. 'Discovery' is a good thing and epistemic disagreement could drive it.
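The Aumann point can be made concrete with a Geanakoplos-Polemarchakis style back-and-forth, sketched below; the state space, partitions and disputed event are invented, with S and H standing in for the disagreeing parties.

```python
from fractions import Fraction

# Toy "agreeing to disagree" dialogue (Geanakoplos-Polemarchakis style), sketching the
# dynamics behind Aumann agreement. States, partitions and the event E are invented.
states = [1, 2, 3, 4]
prior = {s: Fraction(1, 4) for s in states}      # common prior
E = {1, 4}                                       # the disputed event
part_S = [{1, 2}, {3, 4}]                        # S's private information partition
part_H = [{1, 2, 3}, {4}]                        # H's private information partition
true_state = 1

def cell(partition, s):
    return next(c for c in partition if s in c)

def posterior(info):
    info = set(info)
    return sum(prior[s] for s in info & E) / sum(prior[s] for s in info)

public = set(states)                             # states consistent with all announcements
reports = []
while len(reports) < 2 or reports[-1] != reports[-2]:
    for partition in (part_S, part_H):
        q = posterior(cell(partition, true_state) & public)
        reports.append(q)
        # Announcing q publicly rules out every state at which the speaker
        # would have announced something else.
        public = {s for s in public if posterior(cell(partition, s) & public) == q}

print([str(q) for q in reports])   # ['1/2', '1/3', '1/2', '1/2']: posteriors converge
```

Given a common prior and announcements that become common knowledge, the reported posteriors cannot stay apart; whether convergence is what one should want, rather than letting disagreement drive further 'discovery', is the point made above.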
A tempting thought here is that the subjective ‘ought’ is a measure of what is best by one’s own lights. But that thought becomes less tempting once we
remember that it has the clap
realize that whatever our gloss on ‘our lights’, there will plausibly be cases in which agents are justifiably mistaken about their own lights.
Which may be fine by their own lights. Equally, by their own lights, they may decide they are a cat iff they are not a cat and that Knowledge is giving them rape counselling.
In that case the phenomenon that gave rise to blameworthy right-doing and blameless wrongdoing with respect to the ‘objective’ ought—namely a mismatch between the facts pertinent to what one ought to do and what one takes those facts to be—re-emerges for the ‘by one’s lights’ ought.
Not if you look for evidence in a sensible manner.
In short, if we introduce an ‘ought’ tied to expected epistemic utility, then the phenomena of blameworthy right-doing and blameless wrongdoing will still arise
because any cretin can praise or blame anything at all. Why is Quantum Theory not providing rape counselling to disabled African Americans? Personally, I blame Paul Dirac. He was very rude to me when I phoned him last night. Also he put on a fake Indian accent and said 'Saala haraami, mein tera gand phad doonga!' I think Dirac is very racist.
relative to that ‘ought’, again because of the non-transparency of evidence.
but more evidence can make it transparent enough.
Suppose, for example, that one is deciding whether to trust someone about a certain proposition that is in fact a complex theorem of classical logic.
In which case it has an equivalent Gentzen system featuring conditional tautologies.
If epistemic probabilities are standard, at least to the extent that all logical truths are assigned probability 1,
this is not the case with a Gentzen system because tautologies are partial.
then the facts of the disagreement will be probabilistically irrelevant.
No. There may be very low probability of finding an item which proves the rule is false. But we may have an existence proof for it all the same. However, this may rely on something like the axiom of choice or even determinacy and there may be types of math or logic which are useful but where that axiom is violated.
The proposition will have probability 1 relative to all facts,
if it is a tautology, not a partial one.
and the expected epistemic utility of trusting one’s interlocutor will be calculated accordingly.
No. It may be that persevering with a different axiom system opens new vistas. Nobody now thinks Brouwer was wrong to go in his own direction. But even in the Sixties, many American mathematicians thought he was off his rocker. This motivated Errett Bishop's famous essay 'Schizophrenia in Contemporary Mathematics'.
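As for the quoted 'standard' assumption that classical logical truths get probability 1, it can at least be checked mechanically for any given formula; a minimal sketch, using an invented example (hypothetical syllogism).

```python
from itertools import product

# A classical tautology is true at every valuation, so any probability distribution
# over valuations (the 'standard' picture quoted above) must assign it probability 1.
# The formula below (hypothetical syllogism) is just an invented example.
def formula(p, q, r):
    # ((p -> q) and (q -> r)) -> (p -> r)
    return not ((not p or q) and (not q or r)) or (not p or r)

valuations = product([True, False], repeat=3)
print(all(formula(*v) for v in valuations))   # True: probability 1 on any such prior
```

Whether that idealisation suits agents working in partial, constructive or otherwise non-classical systems is what the surrounding objections press.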
It is obvious enough, then, that any such conception of probability will induce intuitively compelling cases of blameless wrongdoing and blameworthy right-doing.
Nonsense! Intuitionism requires 'witnesses', i.e. a specific number or construction that is required to be part of an existence proof. But even approaches Brouwer thought 'blameworthy' can be found to have such witnesses when the underlying theory is recast in constructivist terms.
But it is not obvious that we can contrive a non-idealized notion of probability that will provide a more satisfying gauge of praiseworthiness and blameworthiness.
These two cretins can't contrive shit. One can certainly make a 'non-Dutch book' re. what a particular guy or committee will find praiseworthy or blameworthy if people have rational expectations and thus have 'coherent' preferences. Indeed, when estimating probable punitive damages, lawyers too might look at the composition of a jury and arrive at a figure so as to settle out of court rather than proceed with the case.
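The 'coherent preferences, no Dutch book' idea mentioned above can be illustrated directly; the quotes and stakes below are invented.

```python
# Minimal Dutch-book sketch. If an agent posts betting quotes on an exhaustive set of
# mutually exclusive outcomes that sum to more than 1, selling her a unit bet on each
# outcome yields a sure profit whatever happens. Quotes and stakes are invented.
quotes = {"rain": 0.6, "no rain": 0.5}     # incoherent: 0.6 + 0.5 = 1.1
stake = 1.0

premium_collected = sum(p * stake for p in quotes.values())  # she pays p per unit bet
worst_payout = stake                                         # exactly one outcome pays out
guaranteed_profit = premium_collected - worst_payout
print(round(guaranteed_profit, 2))   # 0.1: a sure gain against incoherent quotes
```

Quotes that sum to exactly 1 admit no such sure-loss book; that is the sense in which coherent betting odds are 'objective enough'.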
Note that even if the operative notion of probability were subjective probability, that will not avoid the general worry, since there is no reason to expect that subjective probabilities are themselves luminous.
Nothing is luminous. It's just that, if the game is worth the candle, there is an incentive for converging to 'coherent' preferences so there is no Dutch book- i.e. probabilities are objective enough.
This is especially clear if subjective probabilities are a matter of one’s dispositions over all bets, since there is no guarantee that one is in a position to know one’s own dispositions.
Nor is there a guarantee that this matters at all.
But even if one thinks of assigning a subjective probability to a proposition as more akin to, say, feeling cold than being disposed to bet, anti-luminosity arguments for feeling cold will still apply.
Nothing would apply because it is easy enough to feel cold by stabbing oneself repeatedly. Blood loss will cause circulation to slow, thus causing you to feel cold even if it is a hot day. Thus, if you assign a high subjective probability to benefitting from being stabbed, you are welcome to do so even if some stupid psilosophers argue that this is what you should or should not do.
Intuitively, we expect epistemic norms to be
about evidence
normatively satisfying:
nope. The evidence may be very unsettling indeed. Knowledge isn't about cuddliness or getting gratuitous rape counselling from mathematical equations.
that is, we expect them to track our intuitions about blameworthy and praiseworthy epistemic conduct.
Amia's intuitions about what is blameworthy are barking mad. Still, it is true that epistemic conduct which leads to a continual drive to acquire appropriate evidence is praiseworthy. But these two cretins won't say so.
An epistemic norm that ties what one ought to do to a non-transparent condition (e.g. knowledge) is
not an epistemic norm. On the other hand, if you are an investigator or researcher paid to gather evidence, you are welcome to devise indirect or novel methods to do so. The method may initially be 'non-transparent', indeed it may remain a black box, but it can point one towards the solution for which admissible, transparent evidence is available. The norms or protocols governing admissibility of evidence are respected, your job is done, and you deserve praise.
an epistemic norm that will not satisfy this basic desideratum. To construct an epistemic norm that is normatively satisfying, then, we require an epistemic ‘ought’ that is tied to only transparent conditions; unfortunately, no such conditions plausibly exist.
I've just given them. Take DNA evidence, which was not admissible at one time. Still, it could be used to identify the killer and motivate an investigation that produced admissible evidence.
As such, the hope of finding a normatively satisfying answer to the disagreement question seems like a hope unlikely to be satisfied.
And yet in any useful, protocol bound field, such answers are found all the time.
our intention has been to suggest that there seems to be no single privileged answer to the question ‘What ought we to do, epistemically speaking, when faced with a disagreement?’
The privileged answer is always 'find evidence'. True, knowledge is intrinsically defeasible but we can still do our best till something better becomes available.
This thought, bleak as it might be, easily leads to bleaker thoughts.
Like, 'why are philosophy Professors so fucking stupid? Oh. It's because they failed to keep up with the Math. Well, at least I've got tenure. Sadly, this means I have to teach utter retards.'
We have not argued for the conclusion here, but it seems that non-transparency poses a more general problem for the ambition to formulate all sorts of epistemic norms.
It is true that a new type of evidence may be inadmissible or may be a 'black box'. However, this does not mean admissible evidence or a 'light box' might not be found, or that an epistemic disagreement might not end in a useful discovery.
If so, then it is not just a stable answer to the disagreement question that will remain elusive,
Because different people may have different motivations for disagreeing. I do so because I want a bribe. You do so because you are principled. The third guy is concerned with the truth etc, etc.
but also stable answers to all questions concerning what we ought to do, epistemically speaking.
There is a robust answer, which I have given. It does not matter if the 'knower' is some radical type of sceptic or solipsist. What matters is observing the protocols re. admissibility of evidence, even if recourse is taken to 'non-transparent' methods and even if this is done by an investigator who is in doubt whether he is a man who dreamed he was a butterfly or a butterfly who dreamed he was a man.
I suppose Amia Srinivasan needed a type of psilosophy which holds finding evidence to be incompatible with 'epistemic virtue'. But it seems a trifle unfair that anybody should have to teach, or be taught, nonsense of this sort.