
Using Skin Texture Change to Design Emotion Expression in Social Robots

Yuhan Hu
Cornell University
Ithaca, NY, USA
yh758@cornell.edu

Guy Hoffman
Cornell University
Ithaca, NY, USA
hoffman@cornell.edu

Abstract

We evaluate the emotional expression capacity of skin texture change, a new design modality for social robots. In contrast to the majority of robots that use gestures and facial movements to express internal states, we developed an emotionally expressive robot that communicates using dynamically changing skin textures. The robot's shell is covered in actuated goosebumps and spikes, with programmable frequency and amplitude patterns. In a controlled study (N = 139) we presented eight texture patterns to participants in three interaction modes: online video viewing, in-person observation, and touching the texture. For most of the explored texture patterns, participants consistently perceived them as expressing specific emotions, with similar distributions across all three modes. This indicates that a texture-changing skin can be a useful new tool for robot designers. Based on the specific texture-to-emotion mappings, we provide actionable design implications, recommending using the shape of a texture to communicate emotional valence, and the frequency of texture movement to convey emotional arousal. Given that participants were most sensitive to valence when touching the texture, and were also most confident in their ratings using that mode, we conclude that touch is a promising design channel for human-robot communication.

Index Terms: Soft robotics; human-robot interaction; emotion expression; empirical study; texture change; nonverbal behavior


I. Introduction
In this paper, we explore the potential of a texture-changing skin as a design modality for robotic emotion expression. In an experimental study we find consistency in how people map skin texture movements to emotions, suggesting that texture-changing skins can be a useful component in the design of social robots.
Internal and emotional state expression is at the core of Human-Robot Interaction (HRI) [1]-[3], and many social robots are designed to convey their states not only with speech but also through nonverbal signals. To date, the vast majority of nonverbal expression in robots is in the form of facial expressions [4]-[6] and body movements [7], [8], including gaze behavior [9]. However, some robots may not necessarily be designed with anthropomorphic features and configurations that allow for such expressive behaviors [10]-[12].
To enable emotion expression for robots in a manner applicable to different robot configurations, we developed an expressive channel for social robots in the form of texture changes on a soft skin. Our approach is inspired by biological systems that alter skin textures to express emotional states. This behavior includes human goosebumps, a
Fig. 1. A social robot prototype combining facial expressions and a texture-changing skin. During the interaction, the user makes eye contact with the screen face and puts their palms on the sides of the robot to touch the actuated textures.
cat raising its back fur, a blowfish displaying its spikes, or birds ruffling their feathers [13]. While prevalent in animals, this intuitive and widespread behavior has not thus far been used to communicate expressions for social robots. Adding such expressive textures to social robots can enrich a robot's expressive spectrum: it can interact both visually and haptically, and even communicate silently, for example in situations in which the robot can be touched but not seen or heard, such as in military or low-visibility scenarios.
The soft robotic skin generates pneumatically actuated dynamic textures, deforming in response to changes in pressure inside fluidic chambers [14]. In the social robot depicted in Fig. 1, we integrated skins with two textured shapes inspired by nature: goosebumps and spikes. We can then use the shape and volume of the texture, as well as the speed of the texture's movement, to convey a variety of emotional expressions.
Given the novelty of this expressive modality, there are no design guidelines or empirical evidence as to how texture changes map to emotions. To investigate this question, we conducted a controlled study to map emotions to texture gestures, and to assess their expressive capabilities. We developed eight texture gestures using binary values for three parameters: shape, frequency, and amplitude. We then asked study participants to label these gestures with emotion words. Participants did so in three interaction modes: watching online videos of the texture, observing the texture in person, and touching it.
The study results indicate that a robot's skin texture change

can be a promising channel for communicating specific emotions across different interaction modes. The results also provide insights on how to design texture behaviors to communicate a specific emotion. The choice of texture units and the movement frequency are the two most significant parameters defining the textures' expressive content, with goosebumps conveying a more positive emotion than spikes, and higher frequencies communicating a more aroused state. Moreover, we found a variance in valence perception between interaction modes, suggesting that touching the robot's texture change is the most promising channel to evoke emotional valence.

II. Related Work
A. Nonverbal Expression in Social Robots

Generating nonverbal behaviors is a central ability for social robots to facilitate communication with humans. To achieve this, the majority of social robots use gestures and facial expressions, e.g., KOBIAN [15], Probo [16], NAO [17], Kismet [18], EMYS [19], and Nexi [20]. On the other hand, the modality of touch that communicates or evokes emotions has received less attention. Yet the sense of touch is important for humans in communicating emotions and for social bonding [21]-[23]. As a result, there are a number of social robots that use affective haptic interfaces for users. These robots include PARO [24], a seal-like robot which has touch sensors embedded under its skin and engages people through sound and movement; the Haptic Creature [25], an animal-like robot that displays its emotions by altering breathing rates and adjusting body stiffness; and Cuddlebits [26], a robot that also displays its emotions through breathing-like behaviors. In all of these cases, the robots express their states either through whole body movements or vibrotactile sensations. None of these robots change their skin textures as an expressive channel.
Outside of the robotics field, designers have explored affective haptics through "shape-changing interfaces" [27]. Recently, Van Oosterhout et al. studied the effect of shape change on emotional experience when interacting with an intelligent thermostat [28]. Davis designed an architectural shape-changing textile wall panel for emotional expression and nonverbal communication through vision and touch [29].

B. Soft Materials and Mechanisms in Social Robotics

Recently, soft materials and soft mechanisms have been explored in social robot design. The vast majority of such robots conform to a similar design that includes a soft exterior over a rigid or tensile inner mechanism. Examples include Keepon [30], a rigid linkage mechanism covered by a soft snowman-like exterior; Tofu [31], a winch mechanism embedded in a foam structure; and Blossom [32], a compliant tensile mechanism covered with handcrafted exteriors. Although some of these robots deform their soft exteriors for expression, they focus on full-body deformation at a macro scale. Texture change of the skin at a micro scale remains an unexplored channel for socially interactive robots.
Fig. 2. A goosebump TU transforms from a flat initial surface to a smooth bump under positive pressure. The volume of the bump increases with the increase of internal pressure.
Fig. 3. Spike TU inflation from negative pressure, with the haptic tip hidden inside the elastomer, to positive pressure, with the sharp thorn protruding.

C. Russell's Circumplex Emotion Model

In social and cognitive psychology, a common model for emotional states is Russell's circumplex model of emotions [33]. In Russell's theory, an emotion is composed of two orthogonal dimensions (originally three). One dimension, valence, is scaled from unpleasant (negative) to pleasant (positive), while the other, arousal, ranges from deactivated (low) to activated (high). We use this decomposition in our analysis below.
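As a rough illustration of this decomposition, each emotion label used later in the study can be placed at a (valence, arousal) coordinate. The placements below are assumptions on a -3 to +3 scale for demonstration, not values from the paper:

```python
# Illustrative (valence, arousal) placements for the nine emotion labels,
# loosely following Russell's circumplex. Coordinates are assumed values.
CIRCUMPLEX = {
    "excited": (2, 3), "happy": (3, 1), "content": (2, -1),
    "calm": (1, -2), "sleepy": (0, -3), "bored": (-2, -2),
    "sad": (-3, -1), "scared": (-2, 2), "angry": (-3, 2),
}

def quadrant(emotion):
    """Return the circumplex quadrant for an emotion label."""
    v, a = CIRCUMPLEX[emotion]
    if v >= 0:
        return "high-arousal positive" if a >= 0 else "low-arousal positive"
    return "high-arousal negative" if a >= 0 else "low-arousal negative"
```

With a mapping like this, any set of labels chosen by participants can be projected onto the two dimensions.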

III. Design of Robotic Skin and Expressions

To implement skin texture changes for social robots, we developed a design using layers of cast elastomer and inextensible films. This design allows us to map pressure to surface deformation of the skin in the form of specifically shaped texture units (TUs). These units can optionally also have rigid components embedded in them for haptic expression. Below the layers of TUs is a network of interleaved fluidic channels connecting the units of the same type for separate control.
We used this design to develop two forms of expressive textures inspired by nature: goosebumps and spikes. A goosebump unit transforms from a flat surface to a smooth bump under pressure, as presented in Fig. 2. A spike unit is structured as a cone with a rigid element embedded on the tip for providing distinct visual and haptic feedback. The spike deforms and retracts the sharp tip under negative pressure (Fig. 3). Fig. 4 shows a skin design with interleaved 2D arrays of goosebumps and spikes under different pressure levels.
Noise is an important consideration in designing the actuation system for texture units. To address possible distraction due to noise, we designed a power-screw-actuated linear displacement pump. The core of this design is a re-purposed syringe, used as a cylindrical pump with a plunger displaced by a linear stepper motor. This system afforded low noise, high control accuracy, and efficiency compared to the commonly used rotary displacement pump. A full description of the mechanism design is found in [14].
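As a rough sketch of the control mapping for such a pump, a desired texture-change frequency can be converted into a stepper step rate. The stroke length, screw pitch, and steps per revolution below are assumed values for illustration, not the actual hardware parameters:

```python
def stepper_rate(freq_rpm, stroke_mm=20.0, pitch_mm=2.0, steps_per_rev=200):
    """
    Convert a desired texture-change frequency (rises per minute) into a
    stepper step rate for a lead-screw syringe pump.

    One texture rise is one full inflate+deflate cycle, i.e. the plunger
    travels the stroke length twice per cycle. All default parameters are
    assumptions; the paper does not report them.
    """
    travel_mm_per_s = 2 * stroke_mm * freq_rpm / 60.0   # plunger linear speed
    revs_per_s = travel_mm_per_s / pitch_mm             # lead screw rotation
    return revs_per_s * steps_per_rev                   # stepper steps/second
```

For example, under these assumed parameters a 60 rises-per-minute gesture requires a plunger speed of 40 mm/s, i.e. 4000 steps per second.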
Fig. 4. Deformation of texture module in response to the inner chamber pressure. From left to right, the pressure of the spikes channel changes from negative to positive. From bottom to top, the goosebumps channel inflates from negative pressure to positive pressure.
Fig. 5. The eight expressions used in this study vary in a binary fashion along three texture parameters as independent control variables: texture shape, change frequency (not shown) and amplitude.

A. Design of Expressive Behaviors

We created eight expressions, each by selecting from binary values of the three texture parameters (Fig. 5).
  1. Texture Unit Type: We varied the two Texture Unit (TU) types described in Section III: goosebumps and spikes. We hypothesized there would be a mapping between emotion valence and the shape of textures, with negative valence mapping to the spikes TU and positive valence to the goosebumps TU. This is inspired by natural responses, where spikes represent an angry, defensive state, and goosebumps are related to pleasure and excitement. Moreover, the spikes with sharp, rigid tips deliver an unpleasant haptic experience compared to the sensation of smooth, soft bumps.
  2. Texture Change Frequency: Texture change frequency is defined as the number of texture rises per unit time, and is determined by the inflating and deflating speed of the linear displacement pump. Constrained by the maximum linear speed of the power screw, available texture change frequencies range from 0 to 60 rises per minute (rpm).
We hypothesized that the frequency of the texture gestures would affect the arousal dimension of emotional states, with higher texture change frequency mapped to a higher arousal level. This is inspired by analogies in human experience, where physiological "frequencies" such as breathing and heart rate fall to a lower rate in a low-arousal state and increase when aroused. Through piloting, we converged on two binary frequency values, low and high, for the expressions used in the experiment.
  3. Texture Change Amplitude: Research in human physiology shows that increased arousal results not only in an increased breathing rate, but also in a larger ventilation volume [34]. Moreover, natural analogies exist for various forms of amplitude, for example the loudness of a voice or the height of sea waves; in these cases, lower amplitudes are usually associated with calm and peaceful states. Inspired by these phenomena, we hypothesized that the amplitude of texture change could convey various arousal levels, with higher amplitude representing a higher arousal level.
In our system, the amplitude of texture movement is determined by the inflating and deflating volume of the pump, with a separate amplitude range for each channel. Designing the candidate expressions, we chose two amplitude values, low and high, for the spikes channel and two for the goosebumps channel. Note that the chosen amplitude values deliver a qualitatively different texture experience, especially for spike TUs: in low-amplitude spikes the rigid tip retracts fully into the skin, whereas in high-amplitude spikes it stays above the surface. This is related to the dynamics of the inflation response to pressure changes in each TU type.
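The resulting design space of eight expressions can be enumerated programmatically. The sketch below uses a compact three-letter coding (TU type, frequency, amplitude) matching the gesture codes used later in the analysis:

```python
from itertools import product

def gesture_codes():
    """Enumerate the eight texture gestures from binary values of the
    three parameters: TU type (G/S), frequency (H/L), amplitude (H/L).
    For example, "SHL" is a high-frequency, low-amplitude spikes gesture."""
    return [tu + freq + amp for tu, freq, amp in product("GS", "HL", "HL")]
```

Calling `gesture_codes()` yields the full 2 x 2 x 2 set, from GHH through SLL.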

IV. Research Questions and Hypotheses

In this study, we evaluate the following research questions and hypotheses. First, we wanted to know whether and to what extent robot texture change can be perceived as conveying specific emotions. We had the following hypotheses:
  • H1: Texture change can be consistently perceived as a certain emotion across different interaction modes.
  • H2: Consistency of people's perceptions of emotions differs across interaction modes.
Second, we wanted to explore how the three parameters of the texture change correlate with dimensions of valence and arousal. We had the following hypotheses:
  • H3: Goosebumps textures will be perceived as conveying more positive valence than spikes.
  • H4: High-frequency gestures will be perceived as conveying higher arousal states than low-frequency ones.
  • H5: High-amplitude gestures will be perceived as conveying higher arousal states than low-amplitude ones.
Please look at the video below, and choose the one (1) or two (2) emotions that most accurately describe the movement of the textured skin.
You may play the video as many times as you like.
Excited   Calm   Angry
Happy   Sleepy   Sad
Content   Scared   Bored
Fig. 6. Screen shot from the online study. Participants watched a video of a texture gesture, then labeled this behavior with emotion words out of a provided list. After this choice, they evaluated their confidence of that choice on a five-point scale from "Very not confident" to "Very confident".


V. Method
Participants experienced the eight different texture gestures under three interaction modes: online video viewing, observing the texture in person, and touching the texture. In all interaction modes, the textures were presented on the Texture Module depicted in Fig. 5. In each experiment, we asked participants to label gestures with emotion words representing nine key emotions from Russell's circumplex model [33], [35]: angry, bored, calm, content, excited, happy, sad, scared, and sleepy. These emotions were chosen to form a set that covers all four quadrants of the emotion circumplex.

A. Participants

We recruited 140 participants for this study, 100 for the online video study and 40 participants for a laboratory experiment, which included both in-person observation and touch interaction. Participants were recruited via an internal university participant system and compensated with participation credits. One sample from the online video study was dropped due to missing response data, leaving us with 99 video participants, and 139 overall participants.

B. Procedure

  1. Online Video Study: We recorded eight videos, one for each texture expression. Videos showed the Texture Module from a side angle and from a three-quarter top-down angle. Each video clip lasted 10 seconds. Participants were shown the videos in randomized order. Below each video, they were asked to choose one or two emotions that most accurately described the movement of the skin and evaluate their confidence on a 5-point scale. Participants could play the videos as often as they wanted. Fig. 6 shows a screen shot of one page in the online study.
  2. Laboratory Experiment: Participants sat in a chair facing a laptop, with the Texture Module placed to the right of the laptop. A research assistant sat nearby to activate the robot's textures. Participants were told to imagine the textured surface as the skin of a creature, and that they would see the skin's textures change during the experiment. They were asked to guess the emotional states of the creature based on the changes in its skin.
The experiment was divided into two sessions: in one, participants were asked to observe the texture expressions without touching them; in the other, they placed one palm on the textured area and experienced its motion only through touch. To avoid sequence effects, the order of the two sessions was counterbalanced. Within each session, the order of gestures was randomized across participants. Each gesture lasted for 10 seconds, followed by a pause to let participants select one or two emotion labels and report their confidence. The questions and response format were presented on the laptop in the same format as in the online study, only without the video element. Participants could ask the research assistant to replay a texture as often as they liked. After completing the survey, participants signaled the research assistant to activate the next gesture. Before each session started, participants were given a practice session, in which they observed or touched two examples of texture movements to make sure they fully understood the procedure. We used low-amplitude, low-frequency spikes and high-amplitude, high-frequency goosebumps as two examples to illustrate the range of gestures. Participants wore noise-cancelling headphones playing pink noise during the experiment to mask mechanical noises.


VI. Results
We quantitatively and qualitatively analyze the mapping between texture gestures, texture parameters, emotions, and emotion dimensions to address our research questions and hypotheses. Given the novelty of texture change as an expressive modality, the analysis is exploratory, viewing the data through a number of lenses to gain better design insights for this technology in social robotics.
In the rest of the paper, texture gestures are denoted by three-letter codes, with the first letter representing the Texture Unit type (G or S for goosebumps or spikes), the second letter representing the frequency (H or L for high or low), and the third representing the amplitude (H or L for high or low). For example, SHL is a spikes texture with high frequency and low amplitude.

A. Mapping Texture Gestures to Emotions

To assess H1, we first analyze the emotion distribution for each gesture using Pearson's Chi-square test. Table I lists the results for each of the three interaction modes. All of the gestures were significantly non-uniform, except for GHL in
Fig. 7. The emotion selection distributions of texture gestures. Numbers indicate percent of participants choosing this emotion for a specific gesture
Fig. 8. The result of multiple correspondence analysis in three interaction modes. The two categories, gesture and emotion, are plotted onto a plane with two principal dimensions. The plot visualizes the association between the categories: the closer two variable values are, the more often they co-occur in the data.
video mode. This finding broadly supports H1, namely that emotions attributed to each of the eight gestures tend to cluster. Low-frequency, low-amplitude spikes (SLL) was the most uniform, i.e., the least informative, gesture.
Comparing modes, we note that for all gestures except SLL, the video distribution was more uniform than either of the two in-person modes. This indicates that viewing video textures is less informative than experiencing them in person. Between the two in-person modes, low-amplitude gestures were more informative in observation, whereas high-amplitude gestures were more informative in the touch mode. This suggests that subtle amplitude differences are more noticeable visually than haptically.
Fig. 7 illustrates these insights via the mapping between texture gestures and emotions. For example, the GHL column in the video mode, which was not significant based on its Chi-square result, is quite uniformly distributed and thus not informative, whereas in both in-person modes, GHL was mostly seen as being either "happy" or "excited".
Visually inspecting Fig. 7 we can see several trends. Both GL· (low-frequency goosebumps, columns 3-4) and SH· (high-frequency spikes, columns 5-6) gestures seem to
TABLE I
Chi-square statistics (with p-values where reported) of the emotion selection distributions, by gesture and interaction mode.

Gesture   Video              Observe            Touch
GHH       67.81              69.68              117.5
GHL       16.96 (p = .031)   110.64             65.27
GLH       122.7              127.3              140.38
GLL       99.18              183.73             162.3
SHH       93.5               132.95             218.58
SHL       177.61             174.33             92.55
SLH       46.56              55.18              210.12
SLL       31.72              23.92 (p = .002)   43.8
evoke agreement across the three modes. As a general trend, emotion mappings seem to be determined mostly by the first two parameters: Texture Unit type and frequency.
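The uniformity tests in Table I can be sketched as a Pearson Chi-square statistic against a uniform expected distribution. This is a minimal pure-Python version; the example counts are hypothetical, not the study's data:

```python
def chi_square_uniform(counts):
    """Pearson chi-square statistic of observed emotion-label counts
    against a uniform expected distribution. With nine labels there are
    8 degrees of freedom; values above ~15.51 are significant at p < .05."""
    n = sum(counts)
    expected = n / len(counts)
    return sum((c - expected) ** 2 / expected for c in counts)

# Hypothetical counts over the nine labels for one strongly clustered gesture:
stat = chi_square_uniform([40, 5, 3, 2, 2, 1, 1, 1, 1])
```

A perfectly uniform distribution yields a statistic of zero; the more participant selections cluster on a few labels, the larger the statistic.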

B. Multiple Correspondence Analysis

To provide an additional view on the data in Fig. 7, we conduct a multiple correspondence analysis (MCA) for each mode (Fig. 8). This method is similar to Principal Component Analysis (PCA), but focuses on the co-occurrence between two categorical variables. The two most significant dimensions capture most of the cumulative inertia in each mode (video, observe, and touch). It is interesting to note that while we have not yet introduced
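A close relative of MCA, simple correspondence analysis of a gesture-by-emotion count table, can be sketched in a few lines of NumPy. The decomposition below and the example table are illustrative, not the study's data:

```python
import numpy as np

def correspondence_analysis(table, n_dims=2):
    """
    Simple correspondence analysis of a contingency table of counts
    (rows: gestures, columns: emotions). Returns the leading principal
    inertias and their share of the total inertia. Assumes all row and
    column sums are positive and at least some association exists.
    """
    P = table / table.sum()                    # correspondence matrix
    r = P.sum(axis=1, keepdims=True)           # row masses
    c = P.sum(axis=0, keepdims=True)           # column masses
    S = (P - r @ c) / np.sqrt(r @ c)           # standardized residuals
    sv = np.linalg.svd(S, compute_uv=False)    # singular values
    inertia = sv ** 2                          # principal inertias
    return inertia[:n_dims], inertia[:n_dims].sum() / inertia.sum()
```

The share returned for the first two dimensions plays the role of the "cumulative inertia" reported for the MCA plots in Fig. 8.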
Fig. 9. Kernel Density Estimation (KDE) for each gesture plotted onto a 2D emotion plane. The marginal distributions show the estimations of valence and arousal values for the expressed texture gesture.
the circumplex model, the factored MCA dimensions broadly map onto the commonly used arousal (Dimension 1) and valence (Dimension 2) dimensions. The "valence" dimension (Dimension 2) accounts for the least inertia for video, and the most for touching the texture, suggesting that valence is most communicative through touch, and least through video observation.
Some clear relationships emerge from the MCA analysis, reflecting the insights above. In all interaction modes, GL· gestures are strongly associated with sleepy and calm, and SH· gestures with angry and scared. In person, GH· gestures are associated with happy and excited, whereas in video, GHH is only associated with happy, and GHL is associated with content. Sad is only associated with spike gestures in the visual modes (SLH in video, and SLL in person), but not in the touch mode.

C. Mapping Gestures to Valence / Arousal

To further assess the relation between the gestures and the classic valence and arousal decomposition, we mapped the nine emotion labels used in this experiment to the Circumplex Model [33], [35]. There has been much debate about the mapping between emotion labels and these two dimensions, and there is no generally accepted mapping, as it seems to vary between cultures, ages, and other factors. In this exploratory work, we visually map each word to the plane using a 7-point arousal and valence scale ranging from -3 to +3, based on the original model published by Russell et al.
Fig. 9 plots the joint probability distributions, pooled for all modes, using a Kernel Density Estimate (KDE) method, with marginal distributions along two axes showing the arousal and valence estimations of the expressed emotion. These, again, show that GL· and SH· gestures are the most informative. GL· gestures are generally associated with low-arousal emotions and SH· gestures with high-arousal emotions. We can also note that GHH relates to high-valence emotions. SL· and GHL gestures are not as informative, but the former tend toward low-valence, high-arousal emotions (scared and angry), and the latter toward positive-valence emotions.
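The marginal density estimates in Fig. 9 use Gaussian kernels. A minimal one-dimensional sketch is shown below; the study's estimates are two-dimensional, and the bandwidth here is an assumed value:

```python
import math

def kde_1d(samples, x, bandwidth=0.8):
    """Gaussian kernel density estimate at point x. The marginals in
    Fig. 9 are the 2-D analogue of this over valence/arousal scores."""
    gauss = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(gauss((x - s) / bandwidth) for s in samples) / (len(samples) * bandwidth)

# Hypothetical arousal scores (on the -3..+3 scale) chosen for one gesture:
density_at_peak = kde_1d([2, 3, 2, 1, 3, 2], 2.0)
```

Evaluating the estimate over a grid of valence or arousal values yields the smooth marginal curves shown alongside the 2D plane.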

D. Texture Parameters and Valence / Arousal

Our exploratory analysis above suggests that the first two parameters (TU and frequency) most reliably relate to emotion expressions. To further dissect this relationship, we analyzed each of the three parameters (Texture Unit, frequency, and amplitude) separately vis-a-vis emotion ratings. Fig. 10 shows the percentage for each emotion by texture parameters.
We can visually note again that TU type and frequency are less uniformly distributed and carry the most emotional information. Video, overall, is less discriminative, and the amplitude parameter is the least informative.
To more specifically address H3-H5, we analyze the relationship between each texture parameter and the emotion dimensions. Fig. 11 shows these relationships split by mode. We use fixed-effects regressions to evaluate them statistically:
  1. Texture Types and Valence (H3): A fixed-effects regression for valence with Texture Unit type as a predictor, controlled for participant ID, shows that TU type strongly predicts the valence rating. Goosebumps communicate high valence and spikes low valence. This supports H3, namely that TU type relates to emotional valence. Fig. 11 also indicates that gestures with the goosebumps TU were associated with lower arousal than
Fig. 10. Emotion selection distributions (in percentage) broken down by texture control parameters: TU type, frequency, and amplitude.
Fig. 11. Comparison of average arousal and valence values between each pair of binary parameter values in three interaction modes.
those with spikes, although this was not one of our original hypotheses.
  2. Frequency and Arousal (H4): We ran a fixed-effects regression for arousal with frequency as a predictor, controlled for participant ID. Frequency strongly predicts the arousal rating. This supports H4, namely that frequency relates to emotional arousal.
  3. Amplitude and Arousal (H5): We ran a fixed-effects regression for arousal with amplitude as a predictor, controlled for participant ID. Amplitude does not significantly predict the arousal rating. H5 is thus not supported: amplitude does not reliably map onto arousal.
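The fixed-effects analyses above can be sketched with a minimal, self-contained simulation. The data below are synthetic (participant counts, effect sizes, and noise levels are illustrative assumptions, not the study's data); within-participant demeaning is used as an equivalent to controlling for participant ID with dummy variables:

```python
import random
from statistics import mean

# Synthetic ratings: each participant has a rating bias; TU type shifts
# valence and frequency shifts arousal (assumed effects mirroring H3/H4).
random.seed(0)
rows = []
for pid in range(30):
    bias = random.gauss(0, 0.3)          # per-participant rating bias
    for _ in range(8):
        tu = random.choice([0, 1])       # 0 = spikes, 1 = goosebumps
        freq = random.choice([0, 1])     # 0 = low, 1 = high
        valence = bias + (1.0 if tu else -1.0) + random.gauss(0, 0.5)
        arousal = bias + (1.0 if freq else -1.0) + random.gauss(0, 0.5)
        rows.append((pid, tu, freq, valence, arousal))

def fixed_effects_slope(rows, x_idx, y_idx):
    """Within-participant (fixed-effects) OLS slope of y on a binary x.

    Demeaning x and y inside each participant removes the participant
    intercepts, which is equivalent to adding participant dummies.
    """
    by_pid = {}
    for r in rows:
        by_pid.setdefault(r[0], []).append(r)
    sxy = sxx = 0.0
    for grp in by_pid.values():
        mx = mean(r[x_idx] for r in grp)
        my = mean(r[y_idx] for r in grp)
        for r in grp:
            sxy += (r[x_idx] - mx) * (r[y_idx] - my)
            sxx += (r[x_idx] - mx) ** 2
    return sxy / sxx

print(fixed_effects_slope(rows, 1, 3))  # TU type -> valence slope (expected near +2)
print(fixed_effects_slope(rows, 2, 4))  # frequency -> arousal slope (expected near +2)
```

With these assumed effect sizes, both slopes come out near +2, mirroring the direction of H3 (goosebumps map to higher valence) and H4 (high frequency maps to higher arousal).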

E. Confidence Between Interaction Modes

Finally, we turn to H2. We already found some support in the lower Chi-square values for the video mode, indicating that watching the textures on video was somewhat less informative than viewing them in person or touching the gestures.
To further evaluate the interaction modes, we compare self-reported confidence ratings across modes. A fixed-effects regression with mode, gesture, and the gesture-mode interaction as predictors, controlled for participant, shows that gesture, mode, and the gesture-mode interaction (6.58) all significantly predict the confidence level at the 0.0001 level. The mean confidence values for each mode are shown in Fig. 12. This also supports H2.
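As an intuition for a gesture-by-mode interaction like the one reported here, a small synthetic example shows how a mode effect that differs by gesture appears in cell means. All means and spreads below are illustrative assumptions echoing only the ordering in Fig. 12 (touch > in person > video), not the study's data:

```python
import random
from statistics import mean

# Illustrative confidence ratings per (mode, gesture) cell. The assumed
# touch-vs-video gap is larger for goosebumps than for spikes, which is
# exactly what a gesture-by-mode interaction looks like in the cell means.
random.seed(1)
cell_mu = {
    ("video", "goosebumps"): 4.0, ("video", "spikes"): 4.2,
    ("in_person", "goosebumps"): 4.8, ("in_person", "spikes"): 4.5,
    ("touch", "goosebumps"): 5.5, ("touch", "spikes"): 4.9,
}
ratings = {cell: [random.gauss(mu, 0.6) for _ in range(60)]
           for cell, mu in cell_mu.items()}
cell_mean = {cell: mean(r) for cell, r in ratings.items()}

# An interaction means the mode effect differs by gesture:
for g in ("goosebumps", "spikes"):
    gap = cell_mean[("touch", g)] - cell_mean[("video", g)]
    print(f"{g}: touch - video confidence gap = {gap:.2f}")
```

When the per-gesture gaps differ (here by construction), a regression with a mode-gesture interaction term picks this up as a significant interaction effect.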


Discussion

Fig. 12. Mean confidence rating by mode. Error bars show standard errors.

The results of our study suggest that the proposed new modality of texture change is capable of conveying specific emotional states. For most of the explored texture gestures, participants consistently rated them as expressing the same emotions, making texture a promising choice as a communicative channel in the design of future social robots.
The results indicate that interaction modes have less of an effect on humans' emotional perceptions than we anticipated. That said, the perceived intensity of emotions and people's confidence in their perception are affected by how they experience the texture gestures. Emotion attribution over video is less consistent than in the live modes. Participants were least confident in the video mode, more confident with in-person observation, and most confident when touching the robot. This finding conforms with the growing insight in HRI research that physically present robots have more impact on users and are more informative than robots depicted in videos [37], [38].
From the results of the MCA analysis, we find that the video mode has the smallest proportion of inertia for the valence dimension, indicating that valence is less likely to be communicated well by a robot that is not physically present. Some of this effect may be owing to users' physical and psychological "distance" from the robot over video. This is supported by research in psychology showing that the distance from an object affects people's "degree of concern", and thus the valence of its "representational elements" [39], [40].
Valence perception is most strongly discriminated in the touch mode. So, even though we have shown that texture changes communicate effectively both visually and haptically, when a robot designer wants to evoke valence, touching the robot's changing texture is the most promising channel. This finding can also shed additional light on using textured skins in robot design. For example, a robot may need to adjust its expressions and emotional expectations depending on whether it is physically present, and possibly on the viewing distance and the degree of psychological engagement.
In addition to psychological distance, this finding could also be due to the emotional impact of tactile sensation. In our case, spikes are more likely to be interpreted as a negative emotion when one senses something sharp and unfavorable under one's palm. The effectiveness of conveying valence by touching a changing texture skin stands out from most current social robotics research on haptic interaction using breathing-like behaviors or vibrotactile sensations: most of these were shown to be ambiguous in communicating valence [25], [26]. The tangibility and materiality of TUs, in contrast, give users direct tactile impressions and can be effective and intuitive in conveying emotional valence.
Of the parameters we chose to vary in our design, the type of TU and its movement frequency had the most significant impact on emotion recognition. We recommend using those parameters in the future design of texture skins for robots. Amplitude was not a good communicative parameter, with the exception that small amplitude changes were readable via in-person observation. The two strongest parameters also map nicely onto the commonly accepted Circumplex Model of emotion: as shown through a number of related analyses, the shape of a texture maps roughly to emotional valence, and texture change frequency maps to arousal.
We find that most emotions we tested could be expressed by a specific set of gestures; however, the expressive significance varied across emotions. Emotions with "extreme" (high or low) arousal are more strongly categorized than emotions with medium arousal. This indicates an arousal gap in the expressive content. One possible cause is that in our design, the arousal axis is mainly expressed by the frequency parameter, and we only chose two extreme levels. Designers may be able to increase expressive consistency and enrich the expressive range by varying the frequency continuously.
Based on our analysis of the findings, we can suggest the mapping between texture gestures and emotion expressions laid out in Table II.
We include the weak mapping between SL and the sad and bored emotions, but stress that we did not identify any gesture that was consistently selected as sad or bored. Also, Table II shows that several similar emotions are hard to differentiate, e.g., calm and sleepy. We believe that by adding more degrees of freedom to the texture parameters, and more optional values that they can take, a designer could increase the robot's expressive range. Some other parameters we would like to study in the future include the size of texture units, non-periodic texture movements, and asymmetric texture change patterns.

TABLE II

Texture Gesture             Emotions          Notes
Low-frequency Goosebumps    sleepy / calm
High-frequency Spikes       angry / scared
High-frequency Goosebumps   happy / excited   In person
High-frequency Goosebumps   happy / content   Over video
Low-frequency Spikes        sad / bored       Weak
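The suggested mapping in Table II can be sketched as a simple lookup a designer might start from. The key names, labels, and function below are illustrative, not part of the study's software:

```python
# Keys are (TU type, movement frequency); values map interaction mode to the
# emotion words Table II associates with that texture gesture.
TEXTURE_EMOTION_MAP = {
    ("goosebumps", "low"): {"any": ("sleepy", "calm")},
    ("spikes", "high"): {"any": ("angry", "scared")},
    ("goosebumps", "high"): {"in_person": ("happy", "excited"),
                             "video": ("happy", "content")},
    ("spikes", "low"): {"any": ("sad", "bored")},  # weak mapping
}

def suggest_emotions(tu, freq, mode="in_person"):
    """Return the emotion words Table II associates with a texture gesture."""
    by_mode = TEXTURE_EMOTION_MAP[(tu, freq)]
    # Fall back to the mode-independent entry, then to any available entry.
    return by_mode.get(mode) or by_mode.get("any") or next(iter(by_mode.values()))

print(suggest_emotions("goosebumps", "high", mode="video"))  # ('happy', 'content')
```

Following Table II's distinction, the high-frequency goosebumps gesture yields different suggestions depending on whether the texture is seen in person or over video.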
There are some limitations on the generality of the results. In the experiment, we surveyed two texture forms: goosebumps and spikes. We do not have empirical data on texture forms beyond these two. Furthermore, emotion perceptions may be affected by form factors of the robot [41]. In this paper, we used a disembodied flat textured skin module. It stands to reason that, when the texture is attached to a robot with a different form factor or when it is combined with other communicative channels such as facial expressions and gaze, the findings may not apply. The interaction between texture expressions and other modalities needs to be further studied.


Conclusion

In this paper, we empirically explored the expressive potential of a new design channel for social robots in the form of skin texture change. We conducted a study with 139 participants labeling eight texture movements with validated emotion words in three interaction modes: watching on video, observing in person, and touching live texture movements. Our results showed that most texture expressions could be perceived as specific emotions, and that perceptions were similar across the different interaction modes. We also found a strong correlation between texture patterns and Russell's emotional decomposition, with the goosebumps texture perceived as more positive in valence than spikes, and high texture change frequency mapped to higher-arousal emotions, leading to clear design implications for the use of texture as an expressive modality. Overall, our findings identify skin texture change as a promising design tool in social robot design.
Future work will compare the effectiveness of texture expression to other modalities such as facial expressions. We plan to explore more TU shapes and texture change parameters to increase the expressive range of this modality. We further would like to study the expressive capability of texture change in specific applications, such as in autonomous cars, and for developing a communicative channel of robots for visually impaired people.


Acknowledgments

We would like to thank Zhengnan Zhao, Abheek Vimal, and Philina Chen for valuable discussions, their help developing the robot prototype, and their assistance with data collection.


References

[1] T. Fong, I. Nourbakhsh, and K. Dautenhahn, "A survey of socially interactive robots," Robotics and Autonomous Systems, vol. 42, no. 3-4, pp. 143-166, 2003.
[2] C. Breazeal, K. Dautenhahn, and T. Kanda, "Social robotics," in Springer Handbook of Robotics. Springer, 2016, pp. 1935-1972.
[3] M. A. Goodrich, A. C. Schultz et al., "Human-robot interaction: a survey," Foundations and Trends in Human-Computer Interaction, vol. 1, no. 3, pp. 203-275, 2008.
[4] R. Gockley, A. Bruce, J. Forlizzi, M. Michalowski, A. Mundell, S. Rosenthal, B. Sellner, R. Simmons, K. Snipes, A. C. Schultz et al., "Designing robots for long-term social interaction," in 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005). IEEE, 2005, pp. 1338-1343.
[5] C. C. Bennett and S. Šabanović, "Deriving minimal features for humanlike facial expressions in robotic faces," International Journal of Social Robotics, vol. 6, no. 3, pp. 367-381, 2014.
[6] A. Kalegina, G. Schroeder, A. Allchin, K. Berlin, and M. Cakmak, "Characterizing the design space of rendered robot faces," in Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 2018, pp. 96-104.
[7] A. Thomaz, G. Hoffman, M. Cakmak et al., "Computational human-robot interaction," Foundations and Trends in Robotics, vol. 4, no. 2-3, pp. 105-223, 2016.
[8] C.-M. Huang and B. Mutlu, "Modeling and evaluating narrative gestures for humanlike robots," in Robotics: Science and Systems, 2013, pp. 57-64.
[9] H. Admoni, A. Dragan, S. S. Srinivasa, and B. Scassellati, "Deliberate delays during robot-to-human handovers improve compliance with gaze communication," in Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 2014, pp. 49-56.
[10] C. L. Bethel and R. R. Murphy, "Affective expression in appearance constrained robots," in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction. ACM, 2006, pp. 327-328.
[11] C. L. Bethel and R. R. Murphy, "Survey of non-facial/non-verbal affective expressions for appearance-constrained robots," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 38, no. 1, pp. 83-92, 2008.
[12] G. Hoffman and W. Ju, "Designing robots with movement in mind," Journal of Human-Robot Interaction, vol. 3, no. 1, pp. 91-122, 2014.
[13] C. Darwin, The Expression of the Emotions in Man and Animals. D. Appleton and Company, 1899.
[14] Y. Hu, Z. Zhao, A. Vimal, and G. Hoffman, "Soft skin texture modulation for social robotics," in 2018 IEEE International Conference on Soft Robotics (RoboSoft). IEEE, 2018, pp. 182-187.
[15] M. Zecca, Y. Mizoguchi, K. Endo, F. Iida, Y. Kawabata, N. Endo, K. Itoh, and A. Takanishi, "Whole body emotion expressions for KOBIAN humanoid robot-preliminary experiments with different emotional patterns," in RO-MAN 2009: The 18th IEEE International Symposium on Robot and Human Interactive Communication. IEEE, 2009, pp. 381-386.
[16] J. Saldien, K. Goris, B. Vanderborght, J. Vanderfaeillie, and D. Lefeber, "Expressing emotions with the social robot Probo," International Journal of Social Robotics, vol. 2, no. 4, pp. 377-389, 2010.
[17] M. S. Erden, "Emotional postures for the humanoid-robot Nao," International Journal of Social Robotics, vol. 5, no. 4, pp. 441-456, 2013.
[18] C. Breazeal and B. Scassellati, "A context-dependent attention system for a social robot," rn, vol. 255, p. 3, 1999.
[19] J. Kedzierski, R. Muszyński, C. Zoll, A. Oleksy, and M. Frontkiewicz, "EMYS emotive head of a social robot," International Journal of Social Robotics, vol. 5, no. 2, pp. 237-249, 2013.
[20] T. Allman, The Nexi Robot. Norwood House Press, 2009.
[21] M. J. Hertenstein, R. Holmes, M. McCullough, and D. Keltner, "The communication of emotion via touch," Emotion, vol. 9, no. 4, p. 566, 2009.
[22] M. J. Hertenstein, D. Keltner, B. App, B. A. Bulleit, and A. R. Jaskolka, "Touch communicates distinct emotions," Emotion, vol. 6, no. 3, p. 528, 2006.
[23] J. D. Fisher, M. Rytting, and R. Heslin, "Hands touching hands: Affective and evaluative effects of an interpersonal touch," Sociometry, pp. 416-421, 1976.
[24] C. D. Kidd, W. Taggart, and S. Turkle, "A sociable robot to encourage social interaction among the elderly," in Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006). IEEE, 2006, pp. 3972-3976.
[25] S. Yohanan and K. E. MacLean, "Design and assessment of the Haptic Creature's affect display," in 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 2011, pp. 473-480.
[26] P. Bucci, X. L. Cang, A. Valair, D. Marino, L. Tseng, M. Jung, J. Rantala, O. S. Schneider, and K. E. MacLean, "Sketching Cuddlebits: coupled prototyping of body and behaviour for an affective robot pet," in Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM, 2017, pp. 3681-3692.
[27] S. Follmer, D. Leithinger, A. Olwal, A. Hogge, and H. Ishii, "inFORM: dynamic physical affordances and constraints through shape and object actuation," in UIST '13, vol. 13, 2013, pp. 417-426.
[28] A. Van Oosterhout, M. Bruns Alonso, and S. Jumisko-Pyykkö, "Ripple thermostat: Affecting the emotional experience through interactive force feedback and shape change," in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 2018, p. 655.
[29] F. Davis, "FELT: communicating emotion through a shape changing textile wall panel," in Textiles for Advanced Applications. InTech, 2017.
[30] H. Kozima, M. P. Michalowski, and C. Nakagawa, "Keepon," International Journal of Social Robotics, vol. 1, no. 1, pp. 3-18, 2009.
[31] R. Wistort and C. Breazeal, "TOFU: a socially expressive robot character for child interaction," in Proceedings of the 8th International Conference on Interaction Design and Children. ACM, 2009, pp. 292-293.
[32] M. Suguitan and G. Hoffman, "Blossom: a tensile social robot design with a handcrafted shell," in Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 2018, pp. 383-383.
[33] J. A. Russell, "A circumplex model of affect," Journal of Personality and Social Psychology, vol. 39, no. 6, p. 1161, 1980.
[34] S. A. Shea, "Behavioural and arousal-related influences on breathing in humans," Experimental Physiology, vol. 81, no. 1, pp. 1-26, 1996.
[35] J. A. Russell and M. Bullock, "Multidimensional scaling of emotional facial expressions: similarity from preschoolers to adults," Journal of Personality and Social Psychology, vol. 48, no. 5, p. 1290, 1985.
[36] D. J. Benjamin, J. O. Berger, M. Johannesson, B. A. Nosek, E.-J. Wagenmakers, R. Berk, K. A. Bollen, B. Brembs, L. Brown, C. Camerer et al., "Redefine statistical significance," Nature Human Behaviour, vol. 2, no. 1, p. 6, 2018.
[37] Q. Xu, J. S. L. Ng, Y. L. Cheong, O. Y. Tan, J. B. Wong, B. T. C. Tay, and T. Park, "Effect of scenario media on human-robot interaction evaluation," in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction. ACM, 2012, pp. 275-276.
[38] W. A. Bainbridge, J. W. Hart, E. S. Kim, and B. Scassellati, "The benefits of interactions with physically present robots over video-displayed agents," International Journal of Social Robotics, vol. 3, no. 1, pp. 41-52, 2011.
[39] J. Bordarie and S. Gaymard, "Social representations and public policy: Influence of the distance from the object on representational valence," Open Journal of Social Sciences, vol. 3, no. 09, p. 300, 2015.
[40] D. Krpan and S. Schnall, "Too close for comfort: Stimulus valence moderates the influence of motivational orientation on distance perception," Journal of Personality and Social Psychology, vol. 107, no. 6, p. 978, 2014.
[41] E. Park, H. Kong, H.-t. Lim, J. Lee, S. You, and A. P. del Pobil, "The effect of robot's behavior vs. appearance on communication with humans," in Proceedings of the 6th International Conference on Human-Robot Interaction. ACM, 2011, pp. 219-220.

  1. Following Benjamin et al. [36], we consider p < 0.005 to be statistically significant, rather than the more common p < 0.05.