Individual Submission Summary

Poster #195 - Children's Attribution of Free Will to a Humanoid Robot

Fri, March 22, 2:30 to 3:45pm, Baltimore Convention Center, Floor: Level 1, Exhibit Hall B

Integrative Statement

Children ascribe psychological and social capacities to inanimate robots (Bernstein & Crowley, 2008; Chernyak & Gary, 2016; Kahn et al., 2012). However, it is unknown whether children also attribute free will to these mechanical beings. Since children endorse the concept of free will for human agents (Nichols, 2004), they might indiscriminately extend this capacity to other agentive objects. Yet children's sophisticated responses to scenarios in which free will is constrained suggest they hold a nuanced view of choice even for humans (Chernyak et al., 2013; Kushnir, 2018; Kushnir et al., 2015) and may therefore be more judicious in attributing free will to machines. The current study investigates whether children ascribe free will to humanoid robots and whether this attribution changes with context. Because robot actions are openly shaped by predetermined programming, robots offer a unique window into children's folk psychology.
Thirty-two children (5–7 years, Mage = 5.72) watched videos of an agent choosing one of two possible board games (a science game or a history game). The agent was either a humanoid robot that had been programmed, or a child who had been born, to consistently play the science game. Participants viewed the agent preparing to select a game in three within-subjects contexts: no constraints (which game would the agent choose if given the choice?), rational constraints (which game would the agent choose if the science game was broken?), and moral constraints (which game would the agent choose if the science game made another child sad?). In each context, participants predicted the agent's decision and were then asked whether the agent "chose to" play that game (denoting a sense of free will) or "had to" (denoting a lack of free will). These scenarios tested the boundaries of children's beliefs about whether different kinds of agents can go against their programming/desires and whether free will depends on various kinds of constraints (see Fig. 1).
Binomial tests demonstrated that, without constraints, participants predicted both the robot and the child would play the science game (ps < .05), though they were at chance concerning whether the agent "chose to" or "had to" act this way (ps > .2). This suggests that children accurately tracked the agents' programming/desires but were inconsistent in their beliefs about whether this tendency reflected free will. If the science game was broken, participants predicted the agent would play the unbroken history game (ps < .001), and in this context were more likely to say the agent acted because they "had to" (ps < .05). Most interestingly, participants were above chance in predicting that the child agent would play the history game if the science game made another person sad (p = .001). However, they did not hold the same expectation when the agent was a robot (p = .804), suggesting children believe that only humans, not robots, can override a prior inclination in order to avoid harming another. Participants were at chance in judging whether the agent "chose to" or "had to" play the game (ps > .45) (see Fig. 2). These findings carry implications for children's attributions of free will and their understanding of moral agents.
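The comparisons above can be reproduced with a standard two-sided exact binomial test against a .5 chance rate. The sketch below is illustrative only: the response counts are hypothetical placeholders (the abstract reports p-values, not raw counts), assuming N = 32 participants per comparison.

```python
# Sketch of the binomial tests described above.
# NOTE: counts are HYPOTHETICAL examples, not the study's actual data.
from scipy.stats import binomtest

def differs_from_chance(successes, n=32, p_chance=0.5, alpha=0.05):
    """Two-sided exact binomial test: does the response rate differ from chance?"""
    result = binomtest(successes, n, p_chance)
    return result.pvalue < alpha

# A strong majority (e.g., 28/32 predicting the science game) exceeds chance:
print(differs_from_chance(28))  # → True

# A near-even split (e.g., 17/32 "chose to" responses) does not:
print(differs_from_chance(17))  # → False
```

With p = .5, the test simply asks how surprising the observed split would be if children answered at random, which is the logic behind each "above chance" / "at chance" claim in the results.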

Authors