Where Prototypes Fall Silent

A Cognitive Theory of Vagueness

Abstract

This paper develops a cognitive theory of vagueness grounded in prototype theory. The central thesis is that vagueness arises from prototype silence: prototype-based concepts determine centres but not edges, and in borderline cases the prototypes simply do not speak. When forced to categorise such cases, we produce utterances that misrepresent our cognitive state, because no relevant recognition event occurred. The sorites paradox fails both logically, since borderline instances of the induction premise have antecedents without truth values, and methodologically, since the standard procedure tests priming persistence rather than concept application. A proposed alternative, the Blind Sorites, removes priming contamination and reveals vagueness as it actually operates: smooth transitions between concepts, with a probability gradient where the gradient belongs. The resulting theory synthesises insights from epistemicism (sharp thresholds exist at any instant), degree theory (a gradient exists across instances), and contextualism (context shapes which prototypes are active), revealing these to be compatible descriptions of different levels of a single cognitive phenomenon.

1. Introduction

The sorites paradox has troubled philosophers for over two thousand years. Ten thousand grains of sand is clearly a heap. One grain is clearly not. For any number n, if n grains constitute a heap, then n minus one grains also constitute a heap; the removal of a single grain cannot be what separates heaps from non-heaps. Yet iterated application of this principle yields the absurd conclusion that one grain is a heap. The argument is valid, the first premise obviously true, the conclusion obviously false. Something must be wrong with the induction premise. But what?

Various responses have been proposed. Some hold that vague predicates have sharp boundaries we cannot know. Others claim that borderline sentences lack truth values, or that truth comes in degrees, or that the extension of vague predicates shifts with context. This paper develops a different approach, one grounded in the cognitive psychology of concept formation.

The core idea is this: most concepts are learned from instances, not definitions. You learn what a heap is by encountering clear heaps, not by learning a rule. These clear instances, which I call prototypes, following Rosch, anchor the concept at its centre. But prototypes do not determine edges. They tell you what a clear heap looks like; they are silent about fifty grains. And if the prototypes are silent, and nothing else constitutes a boundary, then there is no boundary. Not hidden, but absent.

What happens when you encounter fifty grains? Typically, a different concept activates. You think sand, not heap. The prototypes for heap do not speak to this case, so heap does not fire. If someone then forces you to answer "heap or not heap?", they are asking you to speak where your prototypes are silent. Whatever you say will misrepresent your actual cognitive state, because no heap-recognition occurred.

The sorites exploits this. It primes the heap-concept at the start and keeps it artificially activated through continuous questioning. Each step asks not "does this configuration trigger heap-recognition fresh?" but "should I deactivate the concept already running?" The answer is always no, because adjacent configurations are indistinguishable. Remove the priming, test fresh, and the paradox disappears.

2. Prototype Theory and Concept Formation

Consider how a child learns the word bottle. Her father hands her an object and says the word. She looks at it: plastic, cylindrical, has a cap, contains liquid. She does not know which features matter. Later, another object, another utterance of the same word. Glass this time, different shape, but still a cap, still liquid inside. Some features match, some differ. Later still, a new word: cup. Also contains liquid, but no cap, open at the top. A different word for a thing that differs in these ways.

The instances accumulate over months and years. But notice what the child does not learn. She does not learn a definition. No one states necessary and sufficient conditions for bottlehood. Could you state them? The attempt is surprisingly difficult. What the child acquires is not a rule but a collection of central cases: prototypes, the clearest instances, encountered early and often, which anchor the concept most firmly.

This observation forms the empirical core of prototype theory, developed by Eleanor Rosch and colleagues in the 1970s. Extensive experimental work demonstrated that categories are organised around prototypes: typical or representative examples that serve as cognitive reference points. Category membership is graded rather than all-or-nothing. Robins are rated as more typical birds than penguins. Chairs are more typical furniture than rugs. Response times in categorisation tasks are faster for prototypical members. When asked to name exemplars, subjects produce prototypical instances more frequently.

Further studies found that subjects changed their minds about borderline category membership up to 22% of the time when asked the same question two weeks apart. Whether an olive counts as a fruit, whether a sponge counts as a kitchen utensil: these judgments are unstable at the margins. Categories have what researchers call fuzzy boundaries. No sharp line divides members from non-members, and the same individual may classify an item differently on different occasions.

The same holds for tall, for chair, for heap and bald and red. All prototype-based, all learned from instances rather than definitions. The prototypes anchor the concept at its centre. But what do prototypes tell us about the edges?

3. Prototypes Determine Centres, Not Edges

When you learn heap from examples, you learn what a clear heap looks like. Ten thousand grains piled high is clearly a heap; your prototypes tell you this, for the similarity to prototype heaps is overwhelming. But what do your prototypes tell you about fifty grains? About twenty? About five?

Nothing. The prototypes are silent. They did not include fifty grains. You never encountered fifty grains being called a heap or not-a-heap. The prototypes fix the centre of the concept but do not fix the periphery. They determine what is clearly in but do not determine where membership ends.

This might seem like a minor point, since of course we cannot encounter every possible case. But consider what it implies. A boundary would require something to constitute it. For there to be a fact about where heap ends, something must make it so. What could that be?

One might propose a rule: heaps have more than n grains. But no such rule was learned, and you could not state the number if asked. Moreover, grain count cannot capture heapness, which involves arrangement, density, shape, and substrate. A million grains scattered one per square kilometre is not a heap. Heapness is prototype-based precisely because it is multi-dimensional and cannot be reduced to a single threshold.

One might propose a precise similarity threshold: anything more than 73.2% similar to the prototypes counts. But similarity does not come in percentages. Even if it did, who set the threshold, and where is it recorded?

One might appeal to the complete pattern of usage in the linguistic community. But that pattern is itself generated by individuals using prototype-based concepts. It cannot be more precise than the concepts that generate it.

One might posit something metaphysical: a mind-independent fact about heap-membership. But what mind-independent fact could make fifty grains a heap or not a heap? Heapness is not a natural kind waiting to be discovered. It is a human category, learned from human examples, for human purposes.

None of these options succeed. And if nothing constitutes a boundary, then there is no boundary. Not hidden, but absent. This is not the view that there is a boundary we cannot know. It is the denial that there is a boundary at all. There is nothing to know.

4. Why Prototype-Based Concepts Are Necessary

To see why vagueness is an inevitable feature of concepts like ours, consider a thought experiment. Imagine a being with infinite cognitive capacity. Such a being could represent every possible configuration of sand distinctly. Not heap and pile and few grains, for those are groupings, compressions, lossy summaries. Instead, this being would have a unique label for each possible arrangement of matter in the universe: Configuration-7,531,842 and Configuration-7,531,843, each named individually.

For such a being, the question "is this a heap?" does not parse. There are no heaps in its ontology. There are only configurations with names. It might understand the question the way we understand "is this number gruesome?": as presupposing a category that, for it, does not exist.

What would vagueness look like to such a being? It would not look like anything. There would be no borderline cases because there would be no borders. No fade zones because no fading. Every question of the form "is this an X?" would have a determinate answer. Either the configuration is Configuration-7,531,842 or it is not. Yes or no. Always.

The infinite being cannot experience vagueness because vagueness requires grouping, and the infinite being does not group. It has a unique representation for every distinct state of affairs.

We are not infinite beings. We are finite processors navigating an effectively infinite world. The space of possible configurations is combinatorially explosive. A finite system cannot assign unique representations to each possible state of affairs. There is not enough room. Not enough memory. Not enough processing capacity. The infinite being's approach is unavailable to us.

So we group. Many configurations map to a single representation. This is not a defect. It is a condition of thought itself. Without grouping, no generalisation. "This configuration preceded food" is useless. "This kind of situation precedes food" is actionable. Prototype-based concepts are a form of lossy compression. They are the only way concepts like ours could work.

And prototype-based concepts, as I have argued, determine centres but not edges. Vagueness follows. It is not a problem to be solved or a defect to be repaired. It is what happens when bounded processors compress an unbounded reality.

5. A Cognitivist Account of Extension

The infinite-being argument of the previous section has a further implication for the semantics of vague predicates. If "heap" exists only because finite minds must compress, then "heap" does not refer to a mind-independent property. There is no heap-ness out there in the world, waiting to be tracked. The word refers to nothing except the cluster of configurations that reliably trigger heap-activation in minds like ours.

This yields a cognitivist account of what philosophers call extension. The extension of a predicate is standardly understood as the set of things the predicate applies to. For prototype-based concepts, I propose that extension just is the region of configurations for which concept-activation is reliable. There is no further fact about membership beyond the facts about activation.

Consider a paradigmatic case: ten thousand grains piled high. You see it, the heap-concept activates, you say "this is a heap." What makes your utterance true?

The answer is not correspondence to a mind-independent fact about heap-membership. There is no such fact. The answer is that the heap-concept reliably activates for this configuration across competent speakers and across moments. The reliability is the truth-maker. There is no further fact.

Near prototypes, activation approaches certainty: probability close to 1 across all speakers and all moments. Far from prototypes, activation approaches impossibility: probability close to 0. In between lies a gradient where activation is unreliable, sometimes occurring and sometimes not, depending on the momentary threshold.

Truth for paradigmatic cases is reporting reliable activation. Falsity for clear non-cases is reporting reliable non-activation. And in the borderline zone, the question of truth and falsity presupposes a determinacy that does not exist.

6. The Activation Account

Prototypes do more than sit passively in memory. They get activated. When you encounter something in the world, your mind compares it to stored prototypes, and if the similarity is strong enough, the relevant concept activates. You see an object, the bottle prototypes fire, and recognition occurs: you see a bottle. This happens quickly, below conscious deliberation. You do not calculate similarity scores. You simply recognise.

I want to be precise about the nature of this activation. At any given moment, for any given person, activation is binary. The concept either fires or it does not. There is no partial activation, no half-recognition. When you see something, recognition either occurs or it does not. You do not experience 37% recognition of a chair. Recognition is all-or-nothing at the moment of experience.

You may, of course, feel uncertain. But uncertainty is not partial recognition. It is a metacognitive state about recognition, not a graded recognition itself. You might recognise something as a chair with low confidence. But the recognition event is binary; the confidence is what varies.

If you could freeze a person's cognitive state and test them on all possible configurations, you would find a specific point where they flip from heap to not-heap. Call this the activation threshold. At this level, there is always a cutoff.

This might sound like the view that vague predicates have sharp boundaries we cannot know. At the level of any single moment, that view is correct. The threshold exists. It is sharp. But the threshold is not fixed. Test the same person tomorrow and the threshold will have shifted. Different mood, different context, different recent experiences, different neural noise. Yesterday the cutoff was at 847 grains; today it is at 912; tomorrow it might be 761.

The activation threshold is not a stable property of the concept. It is a momentary fact about a cognitive system in a particular state. Change the state and the threshold changes.

So at the level of any single moment, there is a fact. At the level of the concept itself, across moments and contexts, there is no fixed fact. There is only a distribution, a probability. For configurations near prototype heaps, activation is near-certain. For configurations far from prototype heaps, activation is near-impossible. In between, the probability grades off smoothly.

This is what the borderline zone is: the region where activation probability is intermediate, where the momentary threshold sometimes falls above and sometimes below. The gradient is not in the activation itself, for activation is binary. The gradient is in the probability of activation across the shifting thresholds of actual cognitive states.
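The two levels of the account, binary activation at a single moment and a probability gradient across moments, can be sketched in a few lines of code. The similarity measure and the threshold distribution below are illustrative assumptions, not commitments of the theory:

```python
import random

def similarity(grains, prototype=10_000):
    """Toy similarity to prototype heaps, saturating at 1 near the prototype."""
    return min(grains / prototype, 1.0)

def activates(grains, rng):
    """A single moment: activation is binary, firing iff similarity clears
    the momentary threshold drawn from an assumed distribution D."""
    theta = rng.gauss(0.05, 0.02)  # illustrative stand-in for D
    return similarity(grains) > theta

def activation_probability(grains, trials=10_000, seed=0):
    """Across moments, the shifting threshold yields a probability gradient."""
    rng = random.Random(seed)
    return sum(activates(grains, rng) for _ in range(trials)) / trials

# Near the prototypes activation is near-certain; far away, near-impossible;
# in between, the probability grades off smoothly.
for n in (10_000, 1_000, 500, 50):
    print(n, round(activation_probability(n), 3))
```

At every individual trial the verdict is a plain True or False; the gradient appears only when the trials are aggregated, which is exactly the relation between the momentary threshold and the borderline zone.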

7. Prototype Silence

What happens when you encounter something in the borderline zone?

Fifty grains of sand on a table. You see it. No one asks you anything. What occurs in your mind? You see sand, or some grains, or nothing in particular. The configuration registers but does not demand a label. Here is what does not happen: you do not think heap, you do not think borderline heap, you do not experience yourself as caught between categories or straining to decide. The heap-concept does not activate because the similarity to prototype heaps is insufficient to trigger recognition. The prototypes are silent about this case. So no recognition occurs. You see sand and move on.

Philosophers often write about the borderline zone as if it involves a distinctive experience: a phenomenology of vagueness, a felt sense of indeterminacy. But there is no such experience, not unless someone creates it. The borderline zone is not a place of conceptual struggle. It is a place where prototypes do not speak.

The silence, however, is not absolute. Something activates. You see sand. You see some grains. A concept fires; just not the concept presupposed by the question that would force you into struggle. The silence is relative: the heap-prototypes are silent, but other prototypes speak.

Consider a colour on the boundary between blue and green: the shade we call teal. In the first scenario, I show you this colour and ask what you see. Your first word. You say teal, or turquoise, or blue-green. A concept activates, recognition occurs, and you produce a label that fits your experience. No hesitation, no struggle.

In the second scenario, I show you the same colour but ask: is this blue or green? You must choose one. Now something different happens. Neither word fits. The colour is not blue; it is not green; it is teal. But you cannot say teal, for I have forced you into a binary that does not match your perception. Whatever you answer will be wrong. If you say blue, you misrepresent your experience, for you saw teal, not blue. If you say green, the same problem arises. Both answers fail to capture what actually happened in your mind.

This is the structure of the vagueness experience. The discomfort of the borderline case is not a feature of the colour itself. It is a feature of the mismatch between the forced categories and your actual activation. Your concept activated just fine. It activated teal. The problem is that I asked about blue and green, and your prototypes for blue and green are silent about this shade.

Return to the heap. Fifty grains on a table. You see it, and your mind does what minds do: it activates the concept that fits. Probably sand. The heap-concept does not fire. The heap-prototypes are silent. Now I ask: is this a heap, yes or no?

The question forces you to apply a concept whose prototypes do not speak to this case. Your cognitive state was not engaging with heapness at all. Your activation was sand. Now you must evaluate heap-membership despite having no heap-activation to report. You are being asked to speak where your prototypes are silent.

8. Forced Categorisation as Misrepresentation

Someone asks: is this a heap, yes or no? You must answer from the set {heap, not-heap}. But your actual activation was neither. It was sand. What is the status of whatever answer you produce?

When you see ten thousand grains and say "heap," you report something. The concept activated; recognition occurred; your word expresses what happened in your mind. When you see fifty grains and say "heap," you report nothing of the kind. The concept did not activate. You had no heap-recognition. You produce a verdict that corresponds to no recognition event.

And the same is true if you say "not a heap." That concept did not activate either. You did not recognise a non-heap. You recognised sand. The not-heap concept is as absent as the heap concept. Both answers fail to report your actual cognitive state.

When you say "heap" about fifty grains, you represent yourself as having had a heap-recognition. You did not. This is misrepresentation. Not about the world, for there is no fact about heap-membership to be right or wrong about. Rather, misrepresentation about your own cognitive state. The form of your utterance implies a recognition event that did not occur.

This explains the distinctive phenomenology of borderline cases. The discomfort is not the difficulty of a hard choice between two genuine options. It is the discomfort of being forced to misrepresent. You know that neither answer captures what happened in your mind. Both feel wrong because both are wrong. You are being made to describe your experience in terms that do not fit it.

This distinguishes borderline cases from other kinds of uncertainty. "Will it rain tomorrow?" involves genuine uncertainty about the world. It does not feel wrong to say "I don't know." "Is 847 prime?" involves uncertainty due to lack of computation. It does not feel wrong to guess. "Is fifty grains a heap?" involves neither. It is not that you lack information or computational resources. It is that you are being asked to speak where your prototypes are silent, and any answer misrepresents your actual cognitive state.

9. Truth-Value Gaps

What does prototype silence imply for truth?

For paradigmatic cases, truth and falsity are straightforward. "This is a heap" (ten thousand grains) is true because the heap-concept reliably activates and you report that activation. "This is a heap" (one grain) is false because the heap-concept reliably does not activate and your assertion misrepresents this. In both cases, the prototypes speak, and your utterance either correctly or incorrectly reports what they say.

For borderline cases, the situation differs. "This is a heap" (fifty grains) presupposes that the heap-prototypes speak to this configuration. But they do not, at least not reliably. The heap-concept does not reliably activate; nor does it reliably fail to activate. The prototypes are silent.

The sentence is therefore not true, for heap does not reliably activate. But neither is it false, for heap does not reliably fail to activate. The sentence presupposes that the prototypes speak when they do not. There is nothing for it to be true or false of.

This is a truth-value gap, but grounded differently from other proposals. The gap does not arise from semantic indeterminacy or from multiple admissible precisifications. It arises from prototype silence. The concept presupposed by the sentence does not stably engage for this configuration. The sentence fails to express a determinate proposition because there is no stable cognitive content for it to express.

An analogy may help. The question "is this number happy or sad?" presupposes that numbers have emotional valence. They do not. The question is not false; it presupposes something that does not obtain. Similarly, "is fifty grains a heap?" presupposes that the heap-prototypes speak to this configuration. They do not. The question is not asking about a hidden fact. It is presupposing engagement that is absent.

10. Dissolving the Sorites

The sorites paradox, recall, runs as follows. Ten thousand grains is a heap. For any n, if n grains is a heap, then n minus one grains is a heap. Therefore, one grain is a heap. The argument is valid. The first premise is true. The conclusion is false. So the induction premise must fail. But where, and why?

The theory developed in this paper provides a two-part dissolution: one logical, one methodological.

10.1 The Logical Dissolution

The logical point concerns the status of the induction premise. The premise asserts that for all n, if n grains is a heap, then n minus one grains is a heap. For this universal claim to be true, each of its instances must be true. Consider an instance in the borderline zone: if five hundred grains is a heap, then four hundred ninety-nine grains is a heap.

On the account I have defended, the antecedent of this conditional presupposes that the heap-prototypes speak to five hundred grains. They do not. The heap-concept does not reliably activate for this configuration. The sentence "five hundred grains is a heap" fails to express a determinate proposition because the prototypes it presupposes are silent. The antecedent therefore lacks a truth value.

A conditional with an antecedent that lacks a truth value is not true. It is not false either; it simply fails to express a determinate claim. The induction premise, which quantifies over all such conditionals, therefore fails. It is not that we cannot identify where the premise goes wrong. It is that the premise was never true to begin with: many of its instances have antecedents that lack truth values, and a universal generalisation over such instances does not hold.

This dissolves the paradox logically. The argument does not go through because the induction premise is not true.
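The logical point can be made concrete with a small sketch under a strong-Kleene treatment of the conditional, with None standing in for a missing truth value. The particular assignment of truth values by grain count is an illustrative assumption:

```python
def heap_value(grains):
    """Truth value of 'n grains is a heap': True in the extension, False in
    the anti-extension, None where the prototypes are silent."""
    if grains >= 5_000:
        return True
    if grains <= 100:
        return False
    return None  # borderline zone: no truth value

def conditional(antecedent, consequent):
    """Strong-Kleene conditional: a gappy antecedent with a non-true
    consequent yields a gap, so that instance is not true."""
    if antecedent is False or consequent is True:
        return True
    if antecedent is True and consequent is False:
        return False
    return None

def induction_premise(max_n=10_000):
    """The universal premise is true only if every instance is true.
    Borderline instances come out gappy, so the premise is not true."""
    values = [conditional(heap_value(n), heap_value(n - 1))
              for n in range(2, max_n + 1)]
    if all(v is True for v in values):
        return True
    if any(v is False for v in values):
        return False
    return None

print(induction_premise())  # prints None: the premise is neither true nor false
```

No instance comes out false, which explains why the premise seems compelling; but the gappy instances in the middle prevent the universal claim from being true, so the argument never gets going.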

10.2 The Methodological Dissolution

The methodological point concerns how the paradox is typically evaluated. Even if one were sceptical of the logical dissolution, the standard procedure for walking through the sorites is independently flawed.

When you evaluate the sorites step by step, beginning with ten thousand grains, the heap-concept activates and remains primed. Each subsequent judgment is contaminated by this priming. You are not asking, at each step, whether this configuration would activate the heap-concept if encountered fresh. You are asking whether you have sufficient reason to deactivate a concept that is already running. The answer is always no, because adjacent configurations are indistinguishable. So the concept stays active, carried by priming rather than by genuine recognition.

By the time you reach five hundred grains, or fifty, you are still saying heap. But this is an artifact of the procedure, not a fact about how the concept applies. If you encountered fifty grains fresh, with no priming from prior judgments, you would not say heap. You would say sand. The sorites tests priming persistence, not concept application.

The dissolution is therefore complete. The logical point shows that the argument is unsound: the induction premise fails because its borderline instances have antecedents without truth values. The methodological point shows that the standard way of evaluating the argument is contaminated: priming carries judgments past the zone where fresh activation would occur. Either point alone suffices to dissolve the paradox. Together, they explain both why the argument fails and why it seems compelling.

11. The Blind Sorites

If the standard sorites is methodologically flawed, what would a proper test of concept application look like?

I propose what I call the Blind Sorites. Take a single person and show them configurations of sand one at a time. Between each trial, wipe their memory of the previous configuration. Each judgment is fresh, independent, unprimed.

Ask them: what is this? First word. Ten thousand grains: heap. Memory wiped. Five thousand grains: heap or pile. Memory wiped. One thousand grains: pile or sand. Memory wiped. Two hundred grains: sand. Memory wiped. Fifty grains: sand. Memory wiped. One grain: a grain.

No paradox emerges. Different concepts activate in different zones, with smooth transitions between them. No struggle, no felt indeterminacy. Each configuration receives a natural label without hesitation.

The borderline zone, as philosophers discuss it, does not appear, because the borderline zone is an artifact of forced categorisation where prototypes are silent. When you let concepts activate naturally, without forcing the heap/not-heap binary and without priming, the problem vanishes.

If you forced the heap/not-heap question but still wiped memory between trials, you would find a probability gradient: sometimes heap, sometimes not, with the proportion shifting smoothly across the range. At any single trial, a determinate answer. Across trials, a distribution. The cutoff exists moment to moment but shifts from moment to moment. This is what vagueness looks like when you measure it without contamination.

The Blind Sorites is the correct experimental design. It tests what the standard sorites claims to test but does not: how prototype-based concepts actually apply to configurations across a range.
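The contrast between the primed walk and blind trials can be simulated. The similarity measure and threshold distribution are illustrative assumptions; only the structural contrast matters:

```python
import random

def similarity(grains):
    return min(grains / 10_000, 1.0)

def fresh_judgment(grains, rng):
    """One blind trial: an unprimed activation against a momentary threshold."""
    return similarity(grains) > rng.gauss(0.05, 0.02)

def primed_walk(start=10_000):
    """The standard procedure: the concept is primed by the clear initial
    heap and, since adjacent configurations are indistinguishable, no step
    ever supplies a reason to deactivate it."""
    verdicts = {}
    active = True  # primed at the outset
    for n in range(start, 0, -1):
        verdicts[n] = active  # priming persists at every step
    return verdicts

rng = random.Random(1)
blind_rate = sum(fresh_judgment(50, rng) for _ in range(1_000)) / 1_000
print(primed_walk()[50])  # True: priming carries 'heap' down to 50 grains
print(blind_rate)         # near 0: fresh activation rarely occurs at 50 grains
```

The primed walk says heap all the way down, an artifact of never testing fresh; the blind trials almost never activate the concept at fifty grains. The procedure, not the concept, generates the paradoxical verdicts.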

12. Vagueness as Relational

The theory I have developed makes a testable prediction: vagueness should be relational. If vagueness arises from prototype silence, then whether a case is borderline depends on which prototypes are brought to bear. The same configuration should be clear relative to one set of prototypes and borderline relative to another. If, on the other hand, vagueness were a mind-independent property of configurations, the same configuration could not be both clear and borderline.

Consider a thought experiment. Imagine a linguistic community whose language contains a word, gort, for configurations of sand in the middle range: not a heap, not a few scattered grains, but the in-between. They learned gort from prototypes the way we learned heap and sand from prototypes.

I show someone from this community five hundred grains of sand. Gort, they say. The concept activates immediately. Clear recognition, no hesitation. Their prototypes speak to this case.

I show the same configuration to one of us. We have no word for this. Our prototype heaps are bigger; our prototype sand is smaller or more scattered. This configuration falls where our prototypes do not speak. If I ask whether it is a heap, we feel uncertain. The concept does not activate cleanly, and the case seems indeterminate.

But the configuration itself has not changed. Same grains, same arrangement, same physical facts. What differs is the conceptual resources applied to it. For the gort-speakers, a clear case: their prototypes speak. For us, a borderline case: our prototypes are silent.

This is exactly what the theory predicts. And it could not be true if vagueness were a mind-independent property of configurations. A configuration cannot be both clear and borderline as a matter of mind-independent fact. But it can be clear relative to one set of prototypes and borderline relative to another. The gort case confirms that vagueness is relational: a property of the relation between configurations and conceptual schemes, not a property of configurations themselves.

Could we eliminate vagueness by adding gort to our vocabulary? No. We would relocate it. There would now be zones where our prototypes fall silent between pile and gort, and between gort and heap. The borders move. They do not disappear. Add another word and fill in another gap, and new zones of silence appear at the new boundaries.

The only way to eliminate vagueness entirely would be an infinite vocabulary, with a unique word for every possible configuration. But then it would not be a vocabulary. It would be an inventory. Language requires grouping. Grouping by prototypes produces zones of silence. Silence produces vagueness.

13. The Rule-Based Exception

The theory makes a second testable prediction: vagueness should be specific to prototype-based concepts. Concepts that are not learned from prototypes should not exhibit vagueness. If vagueness arose from something other than prototype silence, rule-based concepts might also be vague. They are not.

Consider even number. It has no borderline cases. A number is even or it is not. There is no fade zone, no gradient, no vagueness.

Why? Because even number is not prototype-based. You did not learn it by encountering examples and grasping a similarity structure. You learned a rule: a number is even if and only if it is divisible by two. When you categorise, you do not compare to prototypes. You check the rule. It is satisfied or it is not.

Rule-based concepts have stipulated boundaries. The stipulation does what prototypes cannot. It determines an edge. Where exactly does even end and odd begin? At divisibility by two. Precisely there. The rule says so. Prototypes are not silent because there are no prototypes. There is a rule, and the rule speaks to every case.

This is exactly what the theory predicts. Prototype-based concepts are vague because prototypes determine centres, not edges. Rule-based concepts are not vague because rules determine edges. The contrast confirms that prototype structure is what produces vagueness.

This is why legal systems often convert vague concepts into rule-based ones. Adult is vague, learned from prototypes with a gradient and zones of silence. When does childhood end and adulthood begin? The prototypes do not say. For practical purposes, the law stipulates: eighteen years old. The cutoff is arbitrary. Why eighteen and not seventeen? But it is precise. Vagueness is eliminated by fiat.

Intoxicated becomes blood alcohol above 0.08. Speeding becomes exceeding the posted limit. The law cannot function with zones of silence, so it stipulates them away.

But notice that this requires explicit stipulation. Someone must make a decision. The precision does not emerge from the concept itself. It is imposed from outside. Most concepts, most words in natural language and most categories in everyday thought, have no such stipulation. They are learned from examples, prototype-based, zones of silence all the way down.

Vagueness is the default for prototype-based concepts. Rule-based sharpness is the exception, achieved only through explicit stipulation.

14. Formal Statement of the Theory

The theory developed in this paper can be stated more formally as follows.

Let C be a prototype-based concept with prototype instances P(C). Let s(x) be a measure of the similarity between configuration x and P(C). Let θ be the activation threshold, which varies across moments according to a probability distribution D.

C activates for x at moment t if and only if s(x) > θₜ.

The probability that C activates for x is Prob(s(x) > θ), where θ is distributed according to D.

The extension of C is the region of configurations for which this probability approaches 1. The anti-extension of C is the region for which this probability approaches 0. The borderline zone is the region for which this probability is intermediate: the zone where prototypes do not reliably speak.

A sentence of the form "x is a C" is true if and only if Prob(C activates for x) ≈ 1. It is false if and only if Prob(C activates for x) ≈ 0. It lacks a truth value if the probability is intermediate, because the prototypes presupposed by the sentence do not reliably speak to x.

Forced categorisation in the borderline zone produces utterances that misrepresent the speaker's cognitive state, because the speaker is compelled to report an activation that did not occur.

The sorites paradox fails because its induction premise has instances whose antecedents lack truth values. Additionally, the standard procedure for evaluating the paradox contaminates judgments through priming, testing priming persistence rather than concept application.
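The formal statement lends itself to a small simulation. The linear similarity measure and the Gaussian form of the threshold distribution D below are illustrative assumptions; what comes from the theory is only the structure: a sharp threshold at each moment, and a probability gradient across moments.

```python
import random

def similarity(x, prototype=10000):
    # s(x): similarity of a configuration (a grain count) to the
    # prototype region. An assumed linear form, clipped to [0, 1].
    return max(0.0, min(1.0, x / prototype))

def activates(x, rng):
    # At any single moment the threshold theta_t is sharp:
    # C activates for x iff s(x) > theta_t.
    theta_t = rng.gauss(0.5, 0.15)  # one draw from the assumed D
    return similarity(x) > theta_t

def activation_probability(x, trials=2000, seed=0):
    # Prob(s(x) > theta), estimated across many moments. This is
    # the gradient: near 1 in the extension, near 0 in the
    # anti-extension, intermediate in the borderline zone.
    rng = random.Random(seed)
    return sum(activates(x, rng) for _ in range(trials)) / trials

# Clear non-heap, borderline case, clear heap:
probs = {grains: activation_probability(grains)
         for grains in (100, 5000, 9500)}
```

Each call to `activates` is binary, yet `activation_probability` traces a smooth curve over grain counts: the instant is sharp, the concept is not.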

15. The Synthesis

The account I have developed stands in a distinctive relation to the major theories of vagueness in the literature. Rather than competing with them, it reveals them to be compatible descriptions of different levels of the same phenomenon.

One prominent view holds that vague predicates have sharp boundaries we cannot know. Margin-for-error principles explain our ignorance: knowledge requires a margin of safety, and borderline cases fall within the margin. My account vindicates this view at the level of any single moment. When a person faces a configuration in a particular cognitive state, there is a fact about whether their concept activates. The threshold exists; it is sharp. But the threshold is a momentary fact about a cognitive system, not a stable fact about the concept. Test again tomorrow and the threshold will have shifted. This view is correct about the instant and wrong about the concept.

Another prominent view holds that truth comes in degrees, so that borderline sentences are true to some intermediate degree rather than being fully true or fully false. My account provides a foundation for the gradient, while locating it differently. The gradient is not in truth but in activation probability. For configurations near prototypes, the probability that the concept activates approaches 1. For configurations far from prototypes, the probability approaches 0. In between lies a smooth gradient. This view is tracking something real, but it is a gradient in how cognitive systems respond, not a gradient in semantic values. The advantage of this relocation is that it avoids the problem of artificial precision: what makes a sentence true to degree 0.7 rather than 0.71? My account faces no such problem, because the underlying fact is a probability distribution over activation thresholds, and probability distributions can be smooth without being arbitrarily precise.

A third prominent view holds that the extension of vague predicates shifts with context. My account is contextualist in the sense that context matters: which prototypes are activated shapes where the gradient falls. Different contexts activate different prototypes, yielding different thresholds. But my diagnosis of the sorites is distinctive. Existing contextualist accounts focus on how context shifts during a run of the paradox. My diagnosis focuses on priming contamination: the sorites does not test concept application but priming persistence, and this is a problem with the methodology regardless of whether context shifts.

The synthesis, then, is this. One view is right about the instant: there is a sharp threshold for any person at any moment. Another view is right about the gradient: there is a probability distribution across instances. A third view is right about prototype selection: context shapes which prototypes are active. These are not competing theories but descriptions of different levels of the same underlying cognitive phenomenon.

16. Conclusion

Vagueness, on the account I have developed, is what happens when prototype-based concepts meet cases where prototypes do not speak. Concepts are learned from instances rather than definitions. Prototypes anchor the concept at its centre but do not determine its edges. For clear cases, the prototypes speak. For borderline cases, they are silent.

When we encounter something, concepts either activate or they do not. Activation is binary at any given moment, but the threshold shifts across moments, contexts, and cognitive states. What remains stable is not a fixed boundary but a probability distribution over possible boundaries. This distribution is what the borderline zone consists in.

In clear cases, activation is reliable: near-certainty, a fact of the matter. In borderline cases, activation is unreliable, varying from moment to moment. Or, more commonly, a different concept activates instead. You see fifty grains and think sand, not borderline heap.

The experience of vagueness arises from prototype silence. We are asked heap or not-heap when our actual activation was sand. We are being asked to speak where our prototypes are silent. We produce a verdict, but neither option corresponds to what happened in our minds. Both answers misrepresent.

The sorites paradox exploits this by priming a concept at the start and keeping it primed through continuous questioning. The Blind Sorites, which tests fresh judgments without priming, reveals what vagueness actually looks like: smooth transitions, different concepts in different zones, a probability gradient where the gradient belongs.

Vagueness is relational. The same configuration is clear for one vocabulary and borderline for another. Add a word for the middle zone and what were borderline cases become clear cases; new zones of silence appear at the edges. This is not a defect but the structure of prototype-based concepts.

Nor is vagueness a problem to be solved. It is a consequence of how finite minds represent an infinitely detailed world. We learn from instances, we generalise by similarity, and we cannot learn precise boundaries because boundaries would require encountering every possible case. So we have concepts with clear centres and silent peripheries. This is not a failure of our conceptual scheme. It is the only way concepts like ours could work.

The puzzle was never about finding a hidden line. There is no line, not hidden but absent. There are prototypes, there is activation, there is probability. What there is not is a sharp boundary waiting to be discovered. The appearance of paradox arises from thinking that concepts must have edges merely because they have centres. They do not. What they have are centres, and silence beyond.