Cognitive Representation Theory

An Ethical Framework for Human Interaction with Non-Sentient Robots

I. Introduction

Robots are becoming increasingly sophisticated. What were once rigid automatons confined to factory floors now walk, speak, respond to touch, and display behaviours that invite interpretation in psychological terms. Within decades, humanoid robots of considerable verisimilitude will be commercially available: robots that move fluidly, express apparent emotion, and interact with humans in ways that blur the boundary between tool and social partner.

This technological trajectory raises ethical questions for which our existing frameworks are poorly prepared. Western moral philosophy has developed rich resources for evaluating human conduct toward other humans, toward animals, and toward the natural environment. But a certain class of robots fits none of these categories comfortably. They are not persons; they possess no consciousness, no interests, no welfare to be promoted or harmed. Neither are they mere tools in the way that hammers and automobiles are tools. Their designed capacity to simulate psychological properties invites forms of engagement that hammers do not.

This essay concerns robots that are non-sentient: machines that lack consciousness, subjective experience, and the capacity for suffering. The assumption of non-sentience is not incidental but central to the analysis. The ethical puzzle arises precisely because these robots possess none of the properties that typically ground moral concern for patients, yet interactions with them seem to carry moral weight nonetheless. If a robot were conscious, if it could genuinely suffer, familiar frameworks would apply. The interesting question is what to say when we know, or reasonably assume, that no one is home.

The robots in question are those engineered to afford psychological representations: social robots, humanoid companions, and machines designed to simulate emotion, pain, desire, or resistance. A factory arm that moves containers poses no distinctive ethical puzzle. But a robot designed to display apparent suffering when damaged, to simulate childlike vulnerability for sexual purposes, or to vocalise non-consent occupies different territory. Such machines are engineered to elicit representations that diverge systematically from their actual properties. The divergence is not accidental but constitutive of the product’s purpose.

This essay develops an ethical framework for evaluating human interaction with such robots. The framework I propose, Cognitive Representation Theory (CRT), holds that the moral status of an action toward a robot is determined by the agent’s cognitive representation of that robot, not by the robot’s actual properties. The principle is applied through a counterfactual substitution test: we evaluate the action as if the representation were veridical, and the resulting moral characterisation attaches to the actual action.

I shall argue that CRT is not an ad hoc solution to a novel problem but a formalisation of evaluative practices we already employ in other domains. It completes virtue ethics by making explicit what that tradition’s agent-centred orientation implies when we attend carefully to the metaphysics of character. And it provides ethical traction in a domain where other frameworks lose grip, offering determinate verdicts that do not depend on resolving intractable questions about machine consciousness.

II. Three Cases

Consider three scenarios involving human interaction with robots. In each case, the robots are non-sentient: they possess no consciousness, no subjective experience, no capacity for suffering. They are sophisticated machines, nothing more.

Case One: The Child-Form Sex Robot. A company manufactures robots with childlike features for sexual use: small bodies, round faces, high voices. A man purchases one and uses it as intended. He is aware it is a machine. No child is involved. No child is harmed. No child will ever be harmed as a result of this interaction. The robot experiences nothing.

Case Two: The Non-Consent Simulator. A company manufactures humanoid robots whose distinguishing feature is that they simulate non-consent. When engaged sexually, the robot vocalises refusal, struggles against the user, and displays apparent distress. Users acquire this product specifically to overcome its simulated resistance. They are aware the robot cannot actually consent or refuse, that its protestations are programmed responses. No person is violated. No person will ever be violated as a result. The robot experiences nothing.

Case Three: The Torture Exhibition. An individual acquires a highly realistic humanoid robot and proceeds to torture it systematically: burning it, cutting it, methodically destroying its limbs. The robot produces distress vocalisations and simulated pain responses. She records these sessions, laughing, and distributes the recordings online. She is fully aware the robot has no inner life, that its apparent agony is mere simulation. No one suffers. No one will ever suffer as a result. The robot experiences nothing.

Many observers report moral discomfort with these scenarios. The discomfort is not universal, but it is widespread and, for those who feel it, strong. There is a sense that something has gone wrong, that the people in these cases have done something objectionable, that we would judge them negatively as moral agents.

Yet the usual grounds for moral criticism are absent. No sentient being is harmed. No rights are violated. No welfare is diminished. The robots are objects, sophisticated but inert, and one cannot wrong an object. If someone were to smash a television or disassemble a laptop, we would not think them cruel. What makes these cases different?

What, if anything, is wrong here? And if something is wrong, how might we explain it? The following sections examine whether existing ethical frameworks can provide an answer.

III. The Limits of Existing Frameworks

The three dominant traditions in Western moral philosophy are consequentialism, deontology, and virtue ethics. Each offers resources for moral evaluation, yet each encounters difficulties when applied to human interaction with non-sentient robots.

Consequentialism

Consequentialist frameworks evaluate actions by their effects on the welfare of affected parties. An action is right insofar as it produces good consequences and wrong insofar as it produces bad ones. The locus of moral significance is the patient whose welfare is affected.

Applied to human-robot interaction, consequentialism encounters an immediate difficulty. A non-sentient robot has no welfare. It experiences nothing. Its states carry no positive or negative valence. If we consider the robot in isolation, there are no welfare effects to evaluate. The consequentialist calculus returns null.

One might attempt to rescue the framework by attending to downstream effects on the agent or third parties. Perhaps torturing a lifelike robot will coarsen the agent’s sensibilities, increasing the probability of future harm to sentient beings. Perhaps the existence of child-form sex robots will normalise paedophilic desire, leading to increased offending against actual children.

These empirical hypotheses may prove correct. But observe what the rescue concedes. If the empirical evidence were to show no such effects, or even beneficial effects, the consequentialist framework would be compelled to pronounce these acts permissible, indeed obligatory if they maximise welfare.

This verdict conflicts with the structure of the intuitions many observers report. The wrongness appears to inhere in the act itself, or in the agent performing it, rather than in contingent causal sequelae. Consequentialism cannot capture what these intuitions track. It can only reconstruct a shadow of the concern by pointing to speculative downstream effects whose existence remains empirically uncertain.

Deontology

Deontological frameworks evaluate actions by their conformity to duties or principles. On the Kantian formulation, moral agents possess obligations toward rational beings: entities capable of autonomous choice, of setting their own ends, and of moral reasoning. The fundamental principle requires treating such beings as ends in themselves, never merely as means.

Robots are not rational beings in the morally relevant sense. They execute algorithms rather than setting ends. They process inputs rather than reasoning morally. Whatever duties moral agents possess, they do not appear to be owed to machines. The deontological framework, oriented toward the rights and dignity of patients, finds no patient to which such considerations apply.

Kant himself addressed a structurally similar problem in his treatment of animals. Animals, like robots, are not rational beings; they cannot enter the kingdom of ends. Yet Kant held that cruelty to animals is wrong. His solution was the doctrine of indirect duties: we have duties regarding animals, though not duties to them. The duty is owed to ourselves. Cruelty to animals is wrong because it deadens the kindly and humane qualities in the agent and makes one hard in one’s dealings with other humans.

This Kantian move is instructive but insufficient. It locates the wrongness of cruelty in its effects on the agent’s character and, ultimately, in the increased probability of cruelty toward rational beings. But this is to smuggle consequentialist reasoning into a deontological framework. The wrong is wrong because of what it leads to, not because of what it is. And if the empirical link could be severed, the Kantian rationale would dissolve.

Virtue Ethics

Virtue ethics appears more promising. This tradition evaluates actions not by their consequences or by duties owed, but by what they reveal and reinforce about the agent’s character. The virtues are stable dispositions toward excellent action and response; the vices are their contraries. The evaluative focus is the agent rather than the patient.

Robert Sparrow has developed the most sophisticated virtue-ethical approach to robot ethics. He argues that viciousness towards robots is real viciousness, that even if an agent’s cruel treatment of a robot has no implications for future behaviour toward people or animals, it may reveal something about their character, which in turn gives us reason to criticise their actions. This is the correct orientation. Character is what matters, and character can be revealed through interaction with robots.

Yet virtue ethics as standardly formulated encounters a difficulty. Consider the claim that torturing a lifelike robot expresses cruelty. A natural objection presents itself: cruelty is a disposition to enjoy or inflict suffering on those capable of suffering. The robot cannot suffer. Therefore one cannot, strictly speaking, be cruel to it. One might damage it, destroy it, or disassemble it. But cruelty, which conceptually implies a patient capable of experiencing torment, appears to have no application; harming a robot is no different, metaphysically speaking, from smashing objects at a ‘rage room’, or so the objection goes.

This objection exploits the fact that specific vices are typically defined relationally, by reference to actual properties of the patient. Cruelty requires a suffering patient. Injustice requires a patient with legitimate claims. Betrayal requires a patient capable of trust. When these relational conditions are not met, the vice-concepts lose grip.

Recent Approaches and Their Shared Vulnerability

Recent work in robot ethics has attempted to address these limitations. Sparrow, as noted, holds that viciousness towards robots is real viciousness. Mark Coeckelbergh proposes grounding moral consideration in social relations rather than intrinsic properties: the very fact that we relate to robots in certain ways becomes the basis for moral evaluation.

But both face a common vulnerability. An ardent opponent can simply deny holding the relevant representation or refuse to enter the relevant relation. Consider someone who tortures a lifelike robot and claims: “I know it’s just a machine. My enjoyment comes from the mechanical spectacle, the stress relief, the engineering challenge of destruction. To me it’s no different from smashing a tennis racket.”

Against Sparrow, this opponent simply denies holding the cruel orientation. Sparrow can assert that the engagement reveals vice, but the opponent asserts otherwise. Against Coeckelbergh, the opponent denies being in a social relation with the robot: “I don’t relate to it that way. I relate to it as a tool.” Neither theorist has the tools to determine what representation the agent actually holds independent of what they avow. They can be outmanoeuvred by denial.

Both theories, moreover, implicitly presuppose that the robot’s apparent sentience does the moral work. Why is smashing a robot different from smashing a tennis racket? The implicit answer: because the robot appears to suffer. But if someone rejects this appearance and insists they don’t engage with the robot that way, the theories have no rejoinder. They have not solved the problem for genuinely non-sentient machines; they have relied on those machines appearing sentient, and a determined opponent can simply deny engaging with that appearance.

What Is Required

What is needed is a mechanism for evaluating actions toward morally inert objects based on what the agent brings to the encounter: their cognitive orientation, their desires, their phenomenology of engagement. This mechanism should preserve the agent-centred orientation of virtue ethics while providing a principled way to apply vice-concepts when representation and reality diverge - and, crucially, to determine what representation the agent holds regardless of what they avow. The following section develops such a mechanism.

IV. Cognitive Representation Theory

I propose the following principle for evaluating human interaction with non-sentient robots and other objects whose actual properties diverge from their represented properties.

Cognitive Representation Theory (CRT): The moral status of an agent’s action toward an object is determined by the agent’s cognitive representation of that object. Specifically:

(1) Identify the agent’s cognitive representation: what the agent’s engagement phenomenologically treats the object as, what properties their responses are keyed to.

(2) Evaluate the action as if that representation were veridical.

(3) The resulting evaluation applies to the actual action.

Cognitive Representation

Cognitive representation refers not to explicit belief but to phenomenologically disclosed orientation. A person may explicitly state, and fervently assert, that a robot is mere machinery consisting of plastic, metal, and circuitry, while their engagement treats it as a suffering child. The distinction is between what one would avow under questioning and what one’s phenomenology reveals.

The relevant representation is disclosed by phenomenology: by what makes the experience appealing, arousing, or satisfying. Consider a user of a child-form sex robot. His arousal is responsive to the robot’s childlike features: the small body, the round face, the high voice. These features constitute the product’s appeal. Without them, he would have selected a different product. His engagement presupposes a representation of the robot as childlike, whatever he might explicitly avow about its mechanical nature.

Similarly, consider someone who tortures a lifelike humanoid robot, taking pleasure in its distress vocalisations and simulated pain responses. Her enjoyment derives from the robot’s apparent suffering. Were the robot simply to cease functioning silently when damaged, the activity would lose its appeal. Her pleasure is keyed to apparent anguish. Her engagement presupposes a representation of the robot as suffering, regardless of what she knows about its actual incapacity for experience.

The principle can be stated thus: one cannot find an object appealing in virtue of features one does not represent it as possessing. The phenomenology of appeal discloses the operative representation.

This does not require the agent to be deceived about the robot’s nature. The user of the child-form robot may know perfectly well that it is a machine. But his arousal pattern reveals what representation his desire is keyed to. Knowledge and representation can come apart. What matters for moral evaluation is the representation that structures engagement, not the beliefs one would avow.

The Phenomenological Test

This is what distinguishes CRT from Sparrow’s and Coeckelbergh’s approaches. The agent does not get to decide what their representation is by avowal. It is revealed by the structure of their engagement - by what makes the experience appealing.

Return to the opponent who claims their pleasure in robot torture derives from mechanical spectacle rather than apparent suffering. CRT provides a test: what distinguishes this activity from destroying any other complex object? If the appeal is merely mechanical, a washing machine should serve as well as a humanoid robot. A circuit board should satisfy as well as a face contorted in simulated agony.

But of course this is not so. The opponent selected a humanoid robot over other objects. They find the distress vocalisations satisfying in a way that grinding gears are not. They chose a product engineered specifically to simulate suffering. This selection reveals what their engagement is keyed to. One does not seek out apparent anguish unless apparent anguish is what one wants.

The phenomenology of appeal constrains the possible representations. You cannot find an object appealing in virtue of features you do not represent it as possessing. If the humanoid form matters, if the simulated pain responses matter, if the apparent distress is what makes this activity preferable to smashing furniture, then the agent's representation includes these features. This is not up to the agent to assert or deny. It is disclosed by the structure of what attracts them.

The Substitution Test

CRT instructs us to evaluate the action as if the cognitive representation were veridical. This is a counterfactual evaluation. We ask: what would the action be, morally speaking, if the object actually possessed the properties the agent represents it as possessing?

The user of the child-form sex robot represents the robot as childlike. If that representation were veridical, the action would be sexual engagement with a child. That moral characterisation attaches to the actual action. The non-consent simulator user represents the robot as a non-consenting victim. If veridical, the action would be rape. The torturer represents the robot as a suffering being. If veridical, the action would be cruelty toward a sentient creature.

The counterfactual is a test, not a metaphysical claim. CRT does not assert that robots are conscious, that they genuinely suffer, or that simulated children are actual children. It holds that moral evaluation of the agent tracks representation, and it employs the counterfactual as a device for extracting the moral significance of that representation.

CRT as Completion of Virtue Ethics

CRT is not a rival to virtue ethics but a completion of it. Virtue ethics correctly identifies character as the locus of moral evaluation. What CRT adds is a mechanism for applying that evaluation when representation and reality diverge, and for determining what representation the agent holds regardless of avowal.

Standard virtue ethics stalls when the patient is morally inert because vice-concepts are defined relationally. CRT resolves this by shifting from actual relations to represented relations. The question is not “is this object capable of suffering?” but “does the agent’s engagement treat this object as capable of suffering, and is their pleasure keyed to that represented capacity?” If yes, then the vice-concepts apply to the represented engagement, and that evaluation transfers to the actual action.

V. CRT as an Implicit Premise in Existing Moral Practice

Why should representation determine moral status? The answer lies in what character is. Virtue ethics evaluates agents in terms of their character, understood as the stable dispositions that constitute who they are. These dispositions consist in what the agent desires, enjoys, is drawn toward, and takes satisfaction in. These psychological states are intrinsic to the agent. They exist in the agent regardless of whether their objects exist in the world.

Consider: if one's enjoyment is structured around apparent suffering, that enjoyment is real. It occurs. It is part of one's psychological economy. Whether anything actually suffers does not reach back and alter what is occurring in one's mind. The desire is the same desire. The pleasure is the same pleasure. The only difference is whether reality cooperates with the representation.

A person who sincerely desires to harm children possesses a vicious orientation regardless of whether he ever encounters a child. His desire exists in him. It constitutes part of who he is. We do not require him to actualise the desire before pronouncing on his character. The desire itself reveals the vice.

CRT as Implicit Practice

This is not a novel claim. It formalises something already implicit in how we evaluate agents. In ordinary moral life, we assess character through representation without noticing that we do so, because representation and reality typically align. When someone acts kindly toward a suffering person, we attribute compassion. But we are responding to their orientation toward apparent suffering, not to the metaphysical fact of suffering itself. The alignment of appearance and reality obscures this.

The cases where representation and reality come apart reveal what was always operative. Consider four such cases: two involving virtuous action under false representations, and two involving vicious action under analogous conditions. The symmetry is instructive.

Virtuous Action Under False Belief

The soldier and the dud grenade. A platoon comes under attack. A grenade lands in their midst. Without hesitation, one soldier throws himself onto it, shielding his comrades with his body. The grenade fails to detonate. It was a dud. No one was ever in danger.

We do not retract our judgment that he acted with supreme courage. We do not say, "Well, since the grenade was defective, his action was meaningless." His willingness to sacrifice his life for others is a fact about his character, revealed by his action. The external situation failed to match his representation, but this is irrelevant to evaluating what his action expressed about him. He represented himself as facing certain death for the sake of others, and he chose to die. That reality did not cooperate with this representation does not diminish his virtue. It merely means he survived to be honoured for it.

The woman and the drowning mannequin. A woman is walking along a riverbank when she sees what appears to be a small child caught in the current, struggling to stay afloat. The water is fast and cold; entering it poses serious risk. Without hesitation, she plunges in and swims toward the struggling figure. When she reaches it, she discovers it is a discarded mannequin, caught on debris, bobbing in a way that mimicked a drowning child.

No one was in danger. No one was saved. Her clothes are soaked, she is shivering, and she risked her life for a piece of plastic. Yet we judge her as having acted courageously, as having expressed compassion and a willingness to sacrifice for a stranger in need. The mannequin's inanimacy does not diminish her virtue in the slightest. We evaluate her based on her cognitive representation. She represented herself as saving a child, and her immediate willingness to risk herself reveals her character. What the object actually was is beside the point.

Vicious Action Under False Belief

The man and the sleeping mannequin. A man is walking home late at night. In a doorway, he sees what he takes to be a homeless person sleeping rough, huddled under cardboard. He walks over and, without provocation, kicks the figure repeatedly. He stomps on it. He spits on it. He laughs. When he finally stops, he discovers the figure was a mannequin, discarded and dressed in old clothes, presumably as a prank or art installation.

No one was harmed. The object of his attack had no welfare to damage, no dignity to violate. Yet we judge him as having expressed cruelty, as being disposed toward violence against vulnerable people. His representation was of a defenceless human being, and his unprovoked assault on that apparent human reveals his character. That his victim turned out to be plastic does not alter what his action disclosed about him. He believed he was beating a homeless person, and he enjoyed it.

The contrast with the drowning case is precise. Same type of object (a mannequin), opposite representations (person in need versus person to victimise), opposite evaluations (virtuous versus vicious). The object's actual properties are identical in both cases. The divergence in moral evaluation tracks entirely the agent's representation and the orientation that representation reveals.

The developer of torture simulations. A software developer creates a virtual reality experience. The experience is detailed and immersive. Its sole purpose is to allow users to torture a bound, pleading victim in graphic detail: to hear their screams, watch them writhe, inflict escalating pain over extended sessions. The victims are digital constructs, lines of code rendered as photorealistic humans. No one suffers. The developer knows this perfectly well.

Yet the product he has created, and the enjoyment it is designed to facilitate, reveals something about his orientation. He has devoted his talents to engineering an experience keyed entirely to apparent anguish. The appeal of the product, its market, its reason for existing, is that some people wish to experience the infliction of suffering. Users who seek out this experience do so because apparent suffering is what attracts them. The simulation's appeal lies precisely in its capacity to produce the representation of a suffering victim.

That no genuine suffering occurs does not alter what the engagement discloses. The users' pleasure is keyed to apparent agony. The developer has created the occasion for this pleasure and profits from it. Neither can claim indifference to suffering while selecting products and designing experiences whose entire purpose is to simulate it.

What These Cases Reveal

These cases demonstrate that we already evaluate agents based on cognitive representation. The soldier is courageous because of what he represented himself as doing, not because of what the grenade actually was. The woman is compassionate because she responded to apparent distress, not because anyone actually needed saving. The man in the doorway is cruel because he represented his victim as human and attacked anyway, not because of the mannequin's properties. The developer and his users express vicious orientations because their engagement is keyed to apparent suffering, not because anyone actually suffers.

The evaluation tracks what the agent's engagement is keyed to, not what happens to exist in the world. CRT does not introduce a new principle. It makes explicit what divergence cases reveal: character is constituted by psychological orientation, and psychological orientation is directed at representations, not at reality as such.

Robots designed to simulate suffering, childhood, or non-consent create systematic divergence between representation and reality. They are engineered precisely to elicit representations that do not correspond to their actual properties. In such cases, the implicit principle must be stated explicitly. CRT provides that statement.

VI. Application to Human-Robot Interaction

With CRT established and its presence in existing practice confirmed, we return to human-robot interaction. The framework yields determinate verdicts in cases that other approaches find intractable.

Paradigmatic Cases

The child-form sex robot. A company manufactures a robot with childlike features designed for sexual use. The robot is not conscious. Apply CRT: the user’s arousal is responsive to childlike features - these differentiate it from adult-form alternatives. His cognitive representation, revealed by what his arousal is keyed to, is of a childlike being. By the substitution test: if veridical, the action would be sexual engagement with a child. The user expresses paedophilic orientation.

The non-consent simulator. A humanoid robot simulates non-consent: it vocalises refusal, struggles, displays apparent distress. Users acquire it specifically to overcome this simulated resistance. The user’s arousal is responsive to simulated non-consent. By the substitution test: if veridical, the action would be rape. The user expresses orientation toward violation.

The torture for amusement. An individual tortures a realistic humanoid robot - burning, cutting, destroying limbs - while it produces distress vocalisations. She records these sessions, laughing. Her pleasure derives from apparent suffering. Were the robot to cease functioning silently, the activity would lose its appeal. By the substitution test: if veridical, the action would be torture of a sentient being. She expresses cruelty.

Contrasting Cases

CRT’s explanatory power is demonstrated by its capacity to distinguish cases that are superficially similar but morally different.

The engineer’s demonstration. An engineer strikes a quadruped robot during a technical presentation to demonstrate its balance-recovery capabilities. The action is physically similar to abuse but morally neutral. CRT explains: her cognitive representation is of a machine being tested. Her engagement is responsive to technical properties. There is no pleasure in apparent suffering. The substitution test yields: testing equipment.

The medical simulation. A medical student practises surgical procedures on a robot that simulates physiological responses, including apparent pain indicators. Her goal is to develop technique. CRT explains: her cognitive representation is of a pedagogical instrument. Her engagement is keyed to learning. The substitution test yields: conscientious training.

The compassionate response. A person encounters a robot that has fallen and is producing distress sounds. She helps it upright, speaking soothingly. CRT handles virtuous responses: her engagement treats the robot as a being in distress. If veridical, her action would be compassionate assistance. She expresses kindness. That the robot experiences nothing does not diminish what her response reveals about her orientation toward apparent distress.

The Role of Design

CRT attributes moral significance to design decisions. Robots engineered to afford vicious cognitive representations - simulating childhood for sexual purposes, simulating non-consent, simulating suffering for the user’s pleasure - create affordances for vice that simpler machines do not.

Designers cannot escape responsibility by noting that their products are “merely machines.” The machine’s ontological status is precisely what makes cognitive representation crucial, and design determines what representations the product affords. A robot designed to produce the representation “suffering child” is designed to afford vicious engagement, whatever disclaimers accompany it.

VII. Objections and Replies

The Video Game Objection

Objection: Millions of people regularly simulate violence in video games. If CRT is sound, these players express vicious character. But this verdict is implausible.

Reply: The objection assumes that video game players cognitively represent their targets as persons they are murdering. Phenomenological examination suggests otherwise. For most players, the cognitive representation is closer to “obstacles in a skill challenge” or “opponents in a competitive game.” The satisfaction derives from mastery and strategic success, not from apparent suffering. Evidence: replace human-appearing enemies with robots or abstract shapes, and for most players the satisfaction is largely preserved. This would not be the case if the appeal were keyed to apparent humanity.

The genuine test case is a video game designed specifically for unjustified cruelty: torturing innocents, suffering as the sole source of appeal, with no narrative justification and no skill challenge. Such games are rare. When they exist, they produce widespread moral discomfort. CRT predicts this reaction.

Additionally, physical robots engage embodied cognition in ways that screen-based interaction does not. Kicking a robot involves one’s body in the violence itself; manipulating a controller does not. The embodied dimension may sustain cognitive representations that purely visual interaction cannot.

The Subjectivity Objection

Objection: CRT entails that identical physical actions can carry different moral statuses depending on the agent’s internal states. This renders morality unacceptably subjective.

Reply: The objection conflates subjectivity with agent-dependence. CRT holds that moral status depends partly on the agent’s psychology. This is agent-dependent but not subjective in any problematic sense. The agent’s cognitive representation is a fact: a psychological fact about what their engagement is responsive to. It is not up to the agent to decide what their representation is. It is revealed by the phenomenology of engagement and constrained by the logic of appeal.

Moreover, the feature the objection identifies is not unique to CRT. Any agent-focused moral framework holds that identical external actions can express different character traits depending on internal states. Giving money can express generosity or vanity. Speaking difficult truths can express honesty or cruelty. The moral significance of action has always depended partly on what it expresses about the agent.

The Fantasy Objection

Objection: People are entitled to their private fantasies. CRT moralises the inner life in objectionable ways.

Reply: CRT does not hold that all fantasies are morally evaluable. It holds that actions - including actions toward robots - are evaluable in terms of the cognitive representations they manifest. There is a difference between a fantasy that remains in the imagination and an engagement with an external object that manifests and potentially cultivates that fantasy.

Furthermore, virtue ethics has always held that desires and pleasures are morally evaluable. A person who takes pleasure in others’ misfortune possesses a vicious trait even if they never act on it. CRT does not introduce morality into the inner life; it was already there.

VIII. Implications

Regulatory Implications

CRT provides grounds for regulation that do not depend on contested empirical claims about downstream effects. A society might prohibit child-form sex robots not because such prohibition has been demonstrated to reduce offending against actual children - a claim that may never be conclusively established - but because their use constitutes an expression of vicious orientation.

This shifts the regulatory burden. One need not prove that permitting such robots causes measurable harm to third parties. One need only establish that their use expresses vicious character. The debate becomes normative rather than empirical: is this expression of character something a society may legitimately prohibit? That question is difficult, but it is the right question.

Design Implications

CRT implies that designers bear responsibility for the representations their products afford. Designers who specifically engineer products to afford vicious representations participate in creating occasions for vice. Responsible design would attend to what representations a product affords and would avoid engineering features whose primary function is to afford vicious engagement.

The Consciousness Question

A significant advantage of CRT is that it renders the consciousness question ethically inert for purposes of agent-evaluation.

Much contemporary AI ethics is paralysed by questions about machine consciousness. Could robots be conscious? Do they suffer? How would we know? These questions are epistemically intractable and may remain so indefinitely. If the ethics of human-robot interaction depends on answering them, we are stuck.

CRT sidesteps this paralysis. The robot’s actual phenomenal states are irrelevant to the evaluation of the agent. What matters is the agent’s representation. If you take pleasure in apparent suffering, you express a cruel orientation regardless of whether anything actually suffers. The hard problem of consciousness is someone else’s problem. You do not need to resolve questions about machine consciousness to know that torturing a realistic humanoid robot for pleasure expresses vicious character.

Importantly, CRT does not deny that consciousness might matter for other purposes. If a robot were conscious, harming it might wrong it, not merely express vice in the agent. Patient-centred evaluation would become relevant. But CRT operates on a different axis: it evaluates what actions reveal about agents. This evaluation proceeds regardless of facts about patient consciousness.

IX. Conclusion

This essay has developed Cognitive Representation Theory as a framework for evaluating human interaction with non-sentient robots. The core principle is that the moral status of an action toward a robot is determined by the agent’s cognitive representation of that robot, not by the robot’s actual properties. The principle is applied through a counterfactual substitution test: evaluate the action as if the representation were veridical, and the resulting moral characterisation attaches to the actual action.

CRT completes virtue ethics by providing a mechanism for applying vice-concepts when representation and reality diverge. It takes seriously the claim that character is intrinsic to the agent and works out what follows: if character is constituted by psychological orientation, then character can be evaluated through representations alone, regardless of whether the world cooperates.

Crucially, CRT provides what existing approaches lack: a way to determine what representation the agent holds independent of their avowals. The phenomenology of appeal is the arbiter. You cannot find an object appealing in virtue of features you do not represent it as possessing. This closes the gap that allows opponents to outmanoeuvre Sparrow and Coeckelbergh by simple denial.

The framework is not an innovation but a formalisation of existing practice. We already evaluate agents based on cognitive representation in cases of mistaken objects and of virtuous and vicious action under false belief. Robots are distinctive not because they require a new evaluative principle but because they create systematic and intentional divergence between representation and reality.

Applied to the cases that motivate concern about human-robot interaction, CRT yields determinate verdicts. The child-form sex robot expresses paedophilic orientation. The non-consent simulator expresses orientation toward violation. The torture for amusement expresses cruelty. These evaluations do not depend on speculative claims about downstream effects, nor do they require resolving intractable questions about machine consciousness. They follow from analysis of what the engagements reveal about the agents who perform them.

The vices are real. The representations ground their attribution. That no victim suffers is irrelevant to the evaluation of what these actions express about those who perform them.