A survey of human-computer interaction (HCI) & natural habits-based behavioural biometric modalities for user recognition schemes

https://doi.org/10.1016/j.patcog.2023.109453

Highlights

  • The article presents a survey of the human-computer interaction and natural habits-based behavioural biometrics, namely touch-stroke, swipe, touch-signature, hand-movements, voice, gait, and single footstep, that can be acquired from smart devices equipped with motion sensors, touch screens, and microphones, or by external IoT sensors or nodes, in an unobtrusive manner.

  • The article elicits attributes and features of the aforementioned behavioral biometrics that can be exploited for designing reliable user recognition schemes. We discuss the methodologies, classifiers, datasets, and performance results of recent user recognition schemes that employ these behavioral biometrics modalities.

  • The article presents security, privacy, and usability attributes with regard to the confidentiality, integrity, and availability (CIA) properties in human-to-things recognition schemes.

  • The article discusses challenges, limitations, prospects, and opportunities associated with behavioral biometric-based user recognition schemes. The prospects and market trends indicate that behavioral biometrics can instigate innovative ways to implement implicit (frictionless), continuous (active), or risk-based (non-static) recognition schemes for IoT applications.

  • Ultimately, with the availability of smart sensors, advanced machine learning algorithms, and powerful IoT platforms, behavioral biometrics can substitute conventional recognition schemes, thus, reshaping the existing user recognition landscape.

Abstract

The proliferation of Internet of Things (IoT) systems is having a profound impact across all aspects of life. Recognising and identifying particular users is central to delivering the personalised experience that citizens want and that organisations wish to provide. This article presents a survey of human-computer interaction-based (HCI-based) and natural habits-based behavioural biometrics that can be acquired unobtrusively through smart devices or IoT sensors for user recognition purposes. Robust and usable user recognition is also a security requirement for emerging IoT ecosystems, protecting them from adversaries. Typically, it can be specified as a fundamental building block for most types of human-to-things accountability principles and access-control methods. However, end-users face numerous security and usability challenges in using currently available knowledge- and token-based recognition (i.e., authentication and identification) schemes. To address the limitations of conventional recognition schemes, biometrics naturally come as a first choice for supporting sophisticated user recognition solutions. We perform a comprehensive review of the touch-stroke, swipe, touch signature, hand-movements, voice, gait and footstep behavioural biometric modalities. This survey analyses the recent state-of-the-art research on these behavioural biometrics with the goal of identifying their attributes and features for generating unique identification signatures. Finally, we present security, privacy, and usability evaluations that can strengthen the design of robust and usable user recognition schemes for IoT applications.

Keywords

Internet of Things (IoT)
User recognition
Behavioural biometrics
Security
Privacy
Usability


1. Introduction

IoT ecosystems, integrating smart sensors, actuators, advanced communications, efficient computation, and artificial intelligence, have the power to transform the way we live and work. Almost every business vertical has started to embrace IoT technology [1]. This includes sectors as diverse as automotive, energy, entertainment, education, food, finance, healthcare, and transportation, where smart integrated systems are delivering improved quality of life and resource efficiency by providing security-sensitive services via IoT applications. Bera et al. [2] reported that user authentication, access control, key management, and intrusion detection are essential requirements to prevent unauthorised real-time data access directly from the IoT-enabled smart devices deployed in IoT ecosystems. Studies have indicated that application-layer attacks in the IoT are particularly complex to detect and deflect [3], [4]. Ultimately, any security breach of IoT ecosystems has the potential for profound consequences on consumers and society [5]. Therefore, robust and usable Authentication, Authorization and Accounting (AAA) mechanisms for applications bridging humans and IoT ecosystems, which can be specified as IoT applications, are critical for maintaining confidentiality, integrity, and availability (CIA) in the system, as illustrated in Fig. 1.


Fig. 1. AAA mechanisms vs. CIA properties in the context of IoT applications.

Many IoT ecosystems still rely on traditional Personal Identification Numbers (PINs), passwords, and token-based user recognition mechanisms [6]. This is despite users facing both security and usability challenges in using these conventional (knowledge- and token-based) recognition schemes [7], [8]. Further, the decision process in conventional authentication mechanisms is usually binary [9]. PINs and passwords can be easily guessed, shared, cloned, or stolen [10]. Conventional authentication schemes are also prone to a wide range of common attacks [11], such as dictionary, observation, and replay attacks. Weak passwords remain the major cause of botnet-based attacks, such as Mirai, on huge numbers of IoT systems [12]. Additionally, such schemes have several usability issues [13], such as placing an overwhelming cognitive load on users and being ergonomically inefficient on newer IoT end-points. As such, human-to-things recognition schemes for IoT ecosystems require rethinking, with behavioural biometrics providing an appropriate alternative for overcoming the drawbacks present in conventional authentication schemes.

This article presents a comprehensive review of the touch-stroke, swipe, touch signature, hand-movements, voice, gait and footstep behavioural biometric modalities (see Fig. 2) for designing user recognition schemes in emerging IoT ecosystems. This particular selection of modalities is motivated by the current focus of academic research and by the industrial trend towards human-computer interaction (HCI) and natural habits-based behavioural biometric recognition schemes. For instance, ViewSonic and Namirial partnered to deliver a behavioural biometric eSignature solution that incorporates the behavioural biometrics of handwritten signatures to boost electronic signature security and reliability [14].


Fig. 2. Use of behavioural biometric for Authentication, Authorization and Accounting (AAA) mechanisms for IoT applications (Source: Google Images).

Banking sectors are investigating characteristics including touch-stroke dynamics to generate trusted user profiles for distinguishing between normal and unusual user behaviour, as a means to detect fraudulent users [15]. For example, leading companies such as BehavioSec [16] and BioCatch [17] are leveraging behavioural biometrics, including swipe or touch gestures, typing rhythm, or the particular way an individual holds their device, to offer enterprise-scale security solutions for continual and risk-based authentication or fraud detection. Also, many electronic payment card providers are investigating behavioural biometrics for cutting-edge payment systems of the future [18].

Studies of biometrics to achieve intelligent, convenient, and secure solutions for smart cities and smart transportation are presented in [19] and [20], respectively. Sensor-based activity recognition [21], such as gait, can be used to verify commuters through their walking patterns, thereby replacing the need for a travel pass to access public transportation. NEC Corporation and SITA have collaborated to roll out a walk-through, contactless digital identity solution for airports, leveraging their biometric identity management platform to facilitate a non-intrusive method of identity verification [22]. So large is the potential that market studies forecast that by 2025 the behavioural biometrics market will reach 3.92 billion [23].

1.1. Objectives and survey strategy

The objective of this article is to survey HCI and natural habits-based biometrics that can be utilized by researchers and engineers to design uni-modal or multi-modal user recognition schemes (leveraging concepts such as implicit, continuous, or risk-based recognition [9]) for security-sensitive applications, thus safeguarding IoT ecosystems. Fig. 3 illustrates the timeline and Table 1 lists previous surveys related to the behavioural biometric modalities covered in this article.


Fig. 3. Timeline showing the important events related to behavioural biometric modalities covered in this article.

Table 1. Earlier behavioural biometrics surveys.

Ref | Year | Contributions
Yampolskiy & Govindaraju [24] | 2008 | This survey presented a classification of behavioural biometrics based on skills, style, preference, knowledge, motor skills, or strategy applied by humans.
Meng et al. [25] | 2015 | This survey covered the development of biometric user authentication techniques on mobile phones, presenting a study of voice, signature, gait, behaviour profiling, keystroke and touch dynamics behavioural biometrics.
Alzubaidi & Kalita [26] | 2016 | This survey investigated authentication of smartphone users based on handwaving, gait, touchscreen, keystroke, voice, signature and general profiling behavioural biometrics.
Oak [27] | 2018 | This survey analyzed persons’ behaviour, such as keystroke dynamics, mouse dynamics, haptics, gait, and log files, for designing persistent security solutions.
Dang et al. [28] | 2020 | This survey focused on human activity recognition (HAR) for designing context-aware applications for emerging domains like IoT and healthcare by analyzing sensor- and vision-based behavioural patterns.
Stylios et al. [29] | 2020 | This survey presented a classification of behavioural biometrics technologies and reviewed behavioural traits like gait, touch gestures, keystroke dynamics, hand-waving, behavioural profile, and power consumption for continuous authentication on mobile devices.

In this survey, we first elucidate attributes and features of behavioural biometric modalities that can be acquired from smart devices equipped with motion sensors, touch screens, and microphones or by external IoT sensors or nodes in an unobtrusive manner. We discuss the methodologies, classifiers, datasets, and performance results of recent user recognition schemes that employ these behavioural biometrics modalities. We then present security, privacy, and usability attributes with regard to the CIA properties in human-to-things recognition schemes. Ultimately, the challenges, limitations, prospects, and opportunities associated with behavioural biometric-based user recognition schemes are presented.

1.2. Article structure

The article is structured as follows: Section 2 discusses behavioural biometrics, sensors, human-to-things recognition mechanisms, and performance metrics. Section 3 elicits attributes and features of touch-stroke, swipe, touch signature, hand-movements, voice, gait, and footstep modalities that can be exploited for designing user recognition schemes. Section 4 presents the state of the art in user recognition schemes based on the modalities discussed in Section 3. Section 5 presents a discussion on security, privacy, and usability of behavioural biometric-based user recognition schemes. Section 6 discusses the open challenges and limitations that deserve attention, together with prospects and opportunities for evolving and designing behavioural biometric-based human-to-things recognition schemes. Section 7 concludes the article.

2. Background

Despite many advancements in recent years, human-to-things recognition (identification and authentication) remains a challenge for emerging IoT ecosystems [30]. Evidently, with improvements in sensor technology, the opportunity to evolve behavioural biometric-based human-to-things recognition schemes has increased significantly.

2.1. Behavioural biometrics

Behavioural biometrics involve human behavioural characteristics or activity patterns that are measurable and uniquely identifiable and so can be designed into user recognition schemes. Typically, behavioural biometric modalities can be considered according to persons’ skills, style, preference, knowledge, motor-skills, or strategy while they interact with an IoT application [24]. The categories that can be derived are 1) authorship; 2) HCI; 3) indirect HCI; 4) motor skills; and 5) natural habit, based on various information extracted or gathered from a person. These categories are summarised in Fig. 4.

  • Authorship-based biometrics involves verifying a person by observing peculiarities in their behaviour. This includes the vocabulary used, style of writing, punctuation, or brush strokes occurring in their writings or drawings [31].

  • HCI-based biometrics exploits a person’s inherent, distinctive, and consistent muscle actions while they use regular input devices, such as touch-devices, keyboards, computer mice, and haptics [32]. Furthermore, it leverages advanced human behaviour involving knowledge, strategies, or skills exhibited by a person during interaction with smart devices.

  • Indirect HCI-based biometrics may be considered an extension of the second category. It considers a person’s indirect interaction behaviour by monitoring low-level computer events (e.g., battery usage) [33], stack traces [34], application audits [35], or network traffic logs [36], or by mutual interaction analysis (e.g., the completely automated public Turing test to tell computers and humans apart - CAPTCHA) [37].

  • Motor-skills based behavioural biometrics can be described as the ability of a person to perform a particular action using muscle movements [38]. These muscle movements are produced as a result of coordination between the brain, skeleton, joints, and nervous system that differs from person to person [39].

  • Natural habits-based biometrics constitute purely behavioural biometrics measuring persistent human behavior such as gait [40], hand-movement [41], swipe [42], grip [43], and footstep [44].


Fig. 4. A categorization of behavioural biometrics [24].

2.2. Sensors

The rapid evolution of system-on-chip (SoC) and wireless technologies plays a vital role in evolving smarter, smaller, more accurate, and more efficient sensors for behavioural biometric data acquisition. Table 2 describes sensors that can be integrated into smart devices and portable IoT devices for acquiring the behavioural biometric modalities covered in Section 3.

Table 2. Sensors for acquiring behavioural biometric modalities.

Category | Sensor description | Sensor type
Position | Position sensors can be linear, angular, or multi-axis. They measure the position of an object, either relative (in terms of displacement) or absolute. | Proximity sensor, potentiometer, inclinometer
Motion, occupancy | Motion and occupancy sensors detect movement and the presence of people and objects, respectively. | Electric eye, RADAR, depth camera
Velocity, acceleration, direction | Velocity sensors can be linear or angular, measuring the rate of change of linear or angular displacement. Acceleration sensors measure the rate of change of velocity. A magnetometer estimates the device orientation relative to the earth’s magnetic north. A gravity sensor indicates the direction and magnitude of gravity. | Accelerometer, gyroscope, magnetometer, gravity sensor
Pressure | Pressure sensors detect force per unit area. | Barometer, bourdon gauge, piezometer
Force | Force sensors detect resistance changes when a force, pressure, or mechanical stress is applied. | Force gauge, viscometer, tactile sensor (touch sensor), capacitive touchscreen
Acoustic, voice | Acoustic sensors measure sound levels and transform them into digital or analog data signals. | Microphone, geophone, hydrophone

IoT endpoints (devices) can provide position, orientation, or other motion-based measurements to determine unique and finite hand micro-movements. These 3-D space measurements can describe device positioning and movement while users interact. Similarly, acoustic, pressure, motion, or occupancy sensors can be used for acquiring behavioural biometric modalities such as voice, gait, or footstep for user recognition. Touch screens can be utilized to acquire touch-stroke, swipe, or touch-signature data.

2.3. Human-to-things recognition process

ISO 2382-2017 [45] specifies biometric recognition, or biometrics, as the automated recognition of individuals based on their biological and behavioural characteristics. ISO 2382-2017 notes that the use of ‘authentication’ as a synonym for “biometric verification or biometric identification” is deprecated; the term biometric recognition is preferred. Thus, human-to-things recognition can be a generic term encompassing the automated identification and verification of individuals in the context of IoT applications.

  • According to ISO 2382-2017 [45], an identification process is a one-to-many comparison decision to determine whether a particular biometric data subject is in a biometric reference database. Identification systems can be employed for negative recognition (such as preventing a single person from using multiple identities) or for positive recognition for authentication purposes.

  • Similarly, ISO 2382-2017 [45] defines a verification process as a comparison decision to determine the validity of a biometric claim in a verification transaction. Thus, a verification process is a one-to-one comparison in which the biometric probe(s) of a subject are compared with the biometric reference(s) of that subject to produce a comparison score. Generally, a verification system requires a claimed identity as an input, which is compared with the stored templates (e.g., biometric templates) corresponding to the given label to assert the individual’s claim. Often, verification systems are deployed for positive identification to protect systems from zero-effort impostors and illegitimate persons.
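The two decision modes above can be contrasted in a minimal sketch. Here `verify` is a one-to-one comparison of a probe feature vector against a single enrolled template, and `identify` is a one-to-many search over a reference database; the Euclidean distance matcher, the threshold, and all names are illustrative assumptions, not a prescribed implementation.

```python
import math

def euclidean(a, b):
    # distance between two equal-length feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe, template, threshold):
    # one-to-one: accept the identity claim if the probe is close
    # enough to the claimant's enrolled template
    return euclidean(probe, template) <= threshold

def identify(probe, reference_db, threshold):
    # one-to-many: return the best-matching enrolled subject,
    # or None if no reference falls within the decision threshold
    best_id, best_score = None, float("inf")
    for subject_id, template in reference_db.items():
        score = euclidean(probe, template)
        if score < best_score:
            best_id, best_score = subject_id, score
    return best_id if best_score <= threshold else None
```

Real biometric systems replace the toy distance matcher with a modality-specific comparison-score function, but the one-to-one versus one-to-many decision structure is the same.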

2.4. Performance metrics

In a biometric system designed to distinguish between a legitimate user and an impostor, there are four possible outcomes, derived from whether the person is legitimate or not and whether they are (correctly or incorrectly) identified as such. These are termed true acceptance (TA), false rejection (FR), true rejection (TR), and false acceptance (FA) [46]. Table 3 describes the most commonly used indicators for the performance evaluation of biometric systems.

Table 3. Performance metrics for biometric systems.

Indicator | Description
True Acceptance Rate (TAR) | The ratio of TA legitimate user attempts to the overall number of attempts (TA+FR). A higher TAR indicates that the system performs better at recognizing a legitimate user.
False Rejection Rate (FRR) | The ratio of FR legitimate user attempts to the overall attempts (TA+FR). FRR is the complement of TAR and can be calculated as FRR = 1 - TAR. ISO/IEC 19795-1:2006 [47] also denotes FRR as the False Non-Match Rate (FNMR).
False Acceptance Rate (FAR) | The ratio of FA impostor attempts to overall attempts (FA+TR). A lower FAR means the system is robust to impostor attempts. ISO/IEC 19795-1:2006 [47] also specifies FAR as the False Match Rate (FMR).
True Rejection Rate (TRR) | The ratio of TR impostor attempts to all overall attempts (FA+TR). TRR is the complement of FAR and can be calculated as TRR = 1 - FAR.
Equal Error Rate (EER) | The value at which both error rates, FAR and FRR, are equal (i.e., FAR = FRR).
Half Total Error Rate (HTER) | The average of FAR and FRR [48]. HTER and EER are identical for a given threshold with a weight set to 0.5, except that HTER can be used for measuring a classifier’s performance.
Accuracy | The ratio of (TA+TR) to (TA+FR+TR+FA).
Receiver-Operating Characteristic (ROC) | An ROC plot is a visual characterization of the trade-off between FAR and TAR [47]: a plot of correctly raised alarms against incorrectly raised alarms. The curve is generated by plotting FAR versus TAR for varying thresholds to assess the classifier’s performance.
Detection Error Trade-off (DET) Curve | A DET curve is plotted using FRR and FAR for varying decision thresholds. To determine the region of error rates, both axes are scaled non-linearly [47]; deviation or logarithmic scales are the most commonly used in such graphs.
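To make the indicators above concrete, the sketch below computes the Table 3 rates from the four outcome counts, and approximates the EER by sweeping a decision threshold over genuine and impostor match scores (assuming higher scores indicate a better match; the threshold sweep is a simple illustrative estimator, not a standardized procedure).

```python
def biometric_rates(TA, FR, TR, FA):
    # rates derived from the four decision-outcome counts (Table 3)
    TAR = TA / (TA + FR)               # true acceptance rate
    FRR = 1 - TAR                      # false rejection rate
    FAR = FA / (FA + TR)               # false acceptance rate
    TRR = 1 - FAR                      # true rejection rate
    HTER = (FAR + FRR) / 2             # half total error rate
    accuracy = (TA + TR) / (TA + FR + TR + FA)
    return {"TAR": TAR, "FRR": FRR, "FAR": FAR,
            "TRR": TRR, "HTER": HTER, "accuracy": accuracy}

def equal_error_rate(genuine_scores, impostor_scores):
    # sweep the threshold over all observed scores and return the
    # operating point where FAR and FRR are (approximately) equal
    best = (2.0, None)                 # (smallest |FAR-FRR| gap, EER)
    for t in sorted(set(genuine_scores + impostor_scores)):
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]
```

For example, 90 accepted and 10 rejected genuine attempts with 95 rejected and 5 accepted impostor attempts give TAR = 0.90, FAR = 0.05, HTER = 0.075, and accuracy = 0.925.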

3. Behavioural biometric modalities’ attributes and features

This section presents the attributes and features of behavioural biometric modalities that can be exploited for conceptualizing and designing human-to-things recognition schemes. In particular, we examine behavioural biometric modalities based on HCI and natural habits that can be collected with no explicit user input using users’ smart devices (e.g., smartphones, smartwatches) or external IoT sensors/nodes (e.g., pressure sensors, cameras).

3.1. Touch-strokes dynamics

Touch-strokes can be described as touch sequences registered by a touchscreen sensor while users navigate on touchscreen-based smart devices using their fingers [49]. Studies have shown that human musculoskeletal structure can produce finger movements that can differ from person to person [50]. Thus, a unique digital signature can be obtained from individuals’ touch-points or keystrokes collected using built-in touch sensors available in smart devices. Commonly, touch-stroke features can be categorized as spatial, timing, and motion features [51].

3.1.1. Spatial features

Spatial features for touch-strokes involve physical interactions between a user’s fingertip and a device touchscreen surface, acquired when a touch event is triggered. Subsequently, a cumulative distance, i.e., the sum of lengths computed over all consecutive touch-points in 2-D space, and a speed, i.e., the cumulative distance divided by the total touch-time, can be derived from touch events [52]. Commonly used spatial features are touch positions, time-stamps, touch size, and pressure [53], [54].
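For instance, assuming touch-points are logged as (timestamp, x, y) samples, the cumulative distance and speed described above reduce to the following sketch:

```python
def cumulative_distance_and_speed(touchpoints):
    # touchpoints: list of (timestamp, x, y) samples from one touch gesture.
    # Cumulative distance: sum of Euclidean lengths between consecutive
    # touch-points; speed: that distance divided by the total touch-time.
    dist = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (_, x1, y1), (_, x2, y2) in zip(touchpoints, touchpoints[1:])
    )
    total_time = touchpoints[-1][0] - touchpoints[0][0]
    return dist, dist / total_time
```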

3.1.2. Timing features

The touch-stroke timing feature generation method can utilize dwell (press or hold) and flight (latency) times. Dwell time can be defined as the duration of a touch event on the same key, and flight time as the time interval between the touch events of two successive keys. The number of such features grows with the number of touches on the touchscreen. As an example, Fig. 5 illustrates 30 features, comprising 8 Type0 dwell-time features and 22 Type1 to Type4 flight-time features, that can be extracted from an 8-touch sequence [55].


Fig. 5. Commonly used duration-based touch-stroke timing features.

The touch-stroke timing feature generation method can also utilize different key-touch durations, as illustrated in Fig. 6. The shortest feature length is termed a uni-graph, which is the timing feature extracted from the touch-event timestamp values of a single key [56]. The timing features extracted from two, three, or more keys are termed di-graphs, tri-graphs, and n-graphs, respectively.
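As an illustrative sketch, assuming each key event is logged as a (key, press_time, release_time) triple in milliseconds, dwell times and one common flight-time variant can be computed as follows:

```python
def dwell_and_flight(events):
    # events: (key, press_time, release_time) triples in typing order.
    # Dwell: how long each key stays pressed (Type0 in Fig. 5).
    # Flight: release-to-next-press interval, one of several variants;
    # other variants pair press-press, release-release, or press-release.
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return dwell, flight
```

A sequence of n key events thus yields n dwell-time features and n-1 flight-time features per variant, consistent with the feature counts illustrated in Fig. 5.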


Fig. 6. Graph-based touch-stroke timing features.

3.1.3. Motion features

Motion features can be acquired using motion sensors, such as Accelerometer, Gyroscope, Magnetometer, or gravity sensors that are available in most smart devices. Each touch event normally inflicts some movements or rotations that can be registered to generate a unique user authentication signature [57]. However, these motion features can be associated better for other user behaviors like hold- and pick-up movement [58].

3.2. Swipe

Swipe can be defined as a finite touch-event sequence that occurs as a result of a user touching a smart device's touchscreen with their finger. Smart devices provide APIs to get the touch coordinates, velocity, and pressure data for each touch point [59].

Some of the spatial features that can be extracted from a swipe action are the touch-point timestamps, x- and y-coordinates, velocity, and acceleration. The acceleration of each touch point can be computed mathematically from the velocity data. The touch pressure of each touch point indicates how hard the finger was pressed on the screen and what the touch size was. Also, the trajectory length, duration, average velocity, average touch size, and start and end touch coordinates can be derived from swipe data [60], [61]. Additionally, statistical features, such as min, max, average, standard deviation, variance, kurtosis, and skewness, can be computed from each 2-D touch sequence, i.e., position, velocity, acceleration, and pressure, acquired for a swipe action [62].
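The derived swipe features mentioned above (trajectory length, duration, average velocity, start/end coordinates) can be sketched as below. The `(t, x, y)` point format and the sample values are assumptions for illustration, not the API of any cited scheme.

```python
# Sketch of derived swipe features from a list of (timestamp_s, x, y) points.
import math

def swipe_features(points):
    """points: list of (timestamp_s, x, y). Returns basic derived features."""
    length = sum(math.dist(points[i][1:], points[i + 1][1:])
                 for i in range(len(points) - 1))
    duration = points[-1][0] - points[0][0]
    return {
        "trajectory_length": length,
        "duration": duration,
        "avg_velocity": length / duration if duration else 0.0,
        "start": points[0][1:],
        "end": points[-1][1:],
    }

feats = swipe_features([(0.00, 0, 0), (0.05, 30, 40), (0.10, 60, 80)])
```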

3.3. Touch signature

Touch signature, i.e., a person signing on a smart device's touchscreen using their finger or a stylus, is similar to a handwritten signature. Moreover, a touch signature can utilize the features extracted for a swipe gesture, specified in Section 3.2, to generate a unique identification for users.

Typically, touch signature features can be classified as global and local features [63]. Global features include total writing time, number of strokes, and signature size. Local features include local velocity, stroke angles, etc., computed at an instant of time or over a short duration. Some of the statistical features that can be extracted for a touch signature are the minimum, maximum, and mean of the speed, acceleration, pressure, and size of the continuous strokes [64]. Further, for each stroke in a touch signature, the touch duration, segment direction, log curvature radius, and stroke length-to-width ratio can be extracted [65], [66].

Touch duration can be utilized for finding the similarity between the touch signatures of a person. The difference between two touch-duration sequences can be computed as $T_{difference}=\sum_{n=1}^{N}|T_s(n)-T_r(n)|$, where $T_s(n)$ and $T_r(n)$ are the touch durations of the $n$th touch sequence obtained from two touch signatures of a person. The direction $\theta_i$ of the $i$th segment with coordinates $(x_i,y_i;\, x_{i+1},y_{i+1})$ can be calculated as $\theta_i=\arctan\left(\frac{y_{i+1}-y_i}{x_{i+1}-x_i}\right),\ i=1 \ldots N$. After decomposing the signature into multiple strokes, the lognormal velocity distribution $v_i(t)$ of the $i$th stroke for a given starting time ($t_{0i}$), stroke length ($D_i$), log time delay ($\mu_i$), and log response time ($\sigma_i$) can be obtained as $|v_i(t)|=\frac{D_i}{\sigma_i (t-t_{0i})\sqrt{2\pi}}\exp\left(-\frac{(\ln(t-t_{0i})-\mu_i)^2}{2\sigma_i^2}\right)$.
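The touch-duration dissimilarity and segment-direction features can be sketched as follows; the sample sequences are invented for illustration, and `atan2` is used in place of the plain arctangent to avoid division by zero on vertical segments.

```python
# Sketch of the touch-duration dissimilarity and segment-direction features.
import math

def t_difference(ts, tr):
    """Sum of absolute differences between two touch-duration sequences."""
    return sum(abs(a - b) for a, b in zip(ts, tr))

def segment_directions(points):
    """Direction (radians) of each segment of an (x, y) trajectory."""
    return [math.atan2(y2 - y1, x2 - x1)  # atan2 handles vertical segments
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

d = t_difference([90, 110, 80], [100, 100, 80])       # 20
angles = segment_directions([(0, 0), (1, 1), (2, 1)])  # [pi/4, 0.0]
```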

3.4. Hand movements

Hand movements can be defined as a finite trajectory in 3-D space for gestures such as hold, upward, downward, or snap, performed while users carry out a particular activity using their smart devices. For a user's hand-movement action, a unique user-identification signature can be generated from the collected $X$, $Y$, $Z$, and $M$ streams. In this process, the $X$, $Y$, and $Z$ streams can be collected using sensors such as the Accelerometer, Gyroscope, Magnetometer, or Gravity sensors available in smart devices, whereas the magnitude ($M$) stream can be derived mathematically from each sensor sample $(X, Y, Z)$, i.e., $M=\sqrt{X^2+Y^2+Z^2}$.
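Deriving the magnitude stream from raw 3-axis samples is a one-liner; the sample values below are made up for illustration.

```python
# Minimal sketch: deriving the magnitude (M) stream from raw 3-axis samples,
# following the formula above.
import math

def magnitude_stream(samples):
    """samples: iterable of (x, y, z) sensor readings."""
    return [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]

m = magnitude_stream([(3, 4, 0), (0, 0, 9.81)])
```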

Univariate statistical features can then be extracted from each raw stream, which helps reduce the dimensionality of the raw data and improve the signal-to-noise ratio [41]. Some of the statistical features, such as min (minimum value), max (maximum value), mean (average value), standard deviation (variation from the mean value), skewness (measure of distortion or asymmetry), kurtosis (measure of tailedness), etc., for a dataset $S$ containing $N$ values can be computed using Eq. (1).
(1) $\mathrm{Min}=\min_{i=1}^{N} S_i$; $\quad\mathrm{Max}=\max_{i=1}^{N} S_i$; $\quad\mu=\frac{1}{N}\sum_{i=1}^{N} S_i$; $\quad\sigma=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(S_i-\mu)^2}$; $\quad k=\frac{1}{N}\sum_{i=1}^{N}\frac{(S_i-\mu)^4}{\sigma^4}$; $\quad s=\frac{1}{N}\sum_{i=1}^{N}\frac{(S_i-\mu)^3}{\sigma^3}$
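A direct sketch of the statistics in Eq. (1), using the population ($1/N$) convention shown there; the sample data are illustrative.

```python
# Sketch of the univariate statistics of Eq. (1), population (1/N) convention.
import math

def univariate_features(s):
    n = len(s)
    mu = sum(s) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in s) / n)
    return {
        "min": min(s),
        "max": max(s),
        "mean": mu,
        "std": sigma,
        "kurtosis": sum((x - mu) ** 4 for x in s) / (n * sigma ** 4),
        "skewness": sum((x - mu) ** 3 for x in s) / (n * sigma ** 3),
    }

f = univariate_features([1.0, 2.0, 3.0, 4.0])
```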

3.5. Voice

Speech processing can be a challenging task as people have different accents, pronunciations, styles, word rates, speech speeds, speech emphases, and emotional states. Typically, a voice-based authentication system can be either text-dependent or text-independent. Fig. 7 illustrates speech processing methods encompassing speaker identification, speaker detection, and speaker verification [67].


Fig. 7. An overview of speech processing [67].

Voice biometrics exploit human speech parametrization or pattern matching/scoring methods to generate a unique identification signature. Human speech generation involves the lungs, vocal cords, and vocal tract [68]. When a person speaks, air is expelled from the lungs and passes through the vocal cords, which dilate or contract, allowing the airflow to produce unvoiced or voiced sound. Subsequently, the air is resonated and reshaped by the vocal tract, which consists of multiple organs such as the throat, mouth, nose, tongue, teeth, and lips. The vocal cords' modulation and the interaction and movement of these organs can alter sound waves and produce unique sounds for each person. The phoneme is the smallest distinctive unit of sound of a speech [69], and pitch can be referred to as the fundamental frequency [70]. Each phoneme sound can be explained as air waves produced by the lungs that are modulated by the vocal cords and the vocal-tract system.

Speech parametrization transforms a speech signal into a set of feature vectors, such as Mel Frequency Cepstral Coefficients (MFCCs), mean Hilbert envelope coefficients (MHEC) [71], Power Normalized Cepstral Coefficients (PNCCs) [72], and non-negative matrix factorisation (NMF) [73]. MFCCs are widely used parametric features for automatic speech and speaker recognition systems [74]. A mel is a unit of pitch [75]. Sound pairs that are perceptually equidistant in pitch are separated by an equal number of mels. The mapping between frequency in hertz and the mel scale is linear below 1000 Hz and logarithmic above 1000 Hz. The mel frequency, $mel(f)=1127\ln\left(1+\frac{f}{700}\right)$, can be computed from the raw acoustic frequency.
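A quick numeric sketch of the hertz-to-mel mapping in the $1127\ln$ form given above:

```python
# Hertz-to-mel conversion as defined above; sample frequencies are illustrative.
import math

def hz_to_mel(f):
    return 1127.0 * math.log(1.0 + f / 700.0)

low = hz_to_mel(500.0)    # roughly linear region (below 1000 Hz)
high = hz_to_mel(4000.0)  # logarithmic region (above 1000 Hz)
```

Note that this mapping is approximately the identity around 1000 Hz (1000 Hz maps to roughly 1000 mels), which is why the scale is described as linear below and logarithmic above that point.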

To extract MFCCs, first the voice signal is pre-emphasized using a first-order high-pass filter to boost the high-frequency energy. The next step involves windowing, which can be performed using the Hamming function to extract spectral features from a small window of speech. Afterward, the Fast Fourier Transform (FFT) is applied to extract spectral information from the windowed signal and determine the amount of energy in each frequency band. For computing MFCCs, filter banks are created with 10 filters spaced linearly below 1000 Hz and the remaining filters spread logarithmically above 1000 Hz, collecting energy from each frequency band. The log of each mel-spectrum value is then taken. Finally, the Inverse Fast Fourier Transform (IFFT) is applied, extracting the energy and 12 cepstral coefficients for each frame.
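The first steps of this pipeline (pre-emphasis, Hamming windowing, per-band energy) can be sketched as below; the mel filter-bank and cepstral steps are omitted, the pre-emphasis coefficient 0.97 is a common but assumed choice, and a naive DFT stands in for the FFT to keep the sketch self-contained.

```python
# Sketch of the MFCC front end: pre-emphasis, Hamming window, power spectrum.
import cmath, math

def pre_emphasis(signal, alpha=0.97):
    return [signal[0]] + [signal[i] - alpha * signal[i - 1]
                          for i in range(1, len(signal))]

def hamming(n):
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def power_spectrum(frame):
    n = len(frame)
    windowed = [x * w for x, w in zip(frame, hamming(n))]
    # Naive DFT, adequate for a sketch (an FFT would be used in practice)
    return [abs(sum(windowed[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2 / n for k in range(n // 2 + 1)]

frame = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]  # 4-cycle tone
spec = power_spectrum(pre_emphasis(frame))
```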

Pattern matching/scoring methods involve probabilistic modeling (e.g., Gaussian Mixture Models (GMM) [76], Hidden Markov Models (HMMs) [77], Joint Factor Analysis (JFA), i-vectors [76]), template matching (e.g., vector quantization, nearest neighbor), and deep neural networks trained on various combinations of i-vectors, x-vectors, feature-space maximum likelihood linear regression (fMLLR) transformations [76], or Gabor filters (GF) [78]. I-vectors are low-dimensional, fixed-length, speaker-and-channel-dependent representations that result from joint factor analysis [79]. For extremely short utterances, i-vector-based approaches can provide an effective speaker identification solution using different scoring methods such as cosine distance or probabilistic linear discriminant analysis (PLDA). In an x-vector system, a DNN is trained to extract the speaker's voice features, and the extracted speaker embedding is called an x-vector [80].
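Cosine scoring between an enrollment embedding and a trial embedding (e.g., i-vectors or x-vectors) can be sketched as follows; the 4-dimensional vectors and the 0.8 acceptance threshold are invented for illustration.

```python
# Sketch of cosine-similarity scoring between two speaker embeddings.
import math

def cosine_score(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

enroll = [0.2, 0.1, 0.9, 0.3]
trial = [0.25, 0.05, 0.85, 0.35]
score = cosine_score(enroll, trial)  # close to 1 for similar embeddings
accept = score > 0.8                 # threshold is an assumed system parameter
```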

3.6. Gait

Human gait is defined as the manner and style of walking [81]. Gait can be characterized by its cadence, measured as the number of steps per unit of time. Typically, a person's gait varies across activities, e.g., walking, running, hopping, ascending, or descending [82]. A gait cycle, illustrated in Fig. 8, consists of two primary phases: stance and swing [83]. The stance phase, the period during which the foot is on the ground, constitutes approximately 60% of the gait cycle. The swing phase, the period during which the foot is in the air, constitutes the remaining 40% of the gait cycle. A stance phase can be further divided into 1) initial contact and loading response, 2) mid-contact and terminal response, and 3) pre-swing. Similarly, a swing phase can be divided into 1) initial, 2) mid, and 3) terminal swing [84]. Using these parameters, both time-based and spatial features can be extracted, as indicated in Table 4.
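The temporal features of Table 4 can be derived from per-foot gait events; a minimal sketch follows, assuming heel-strike and toe-off timestamps (in seconds) for one foot, and assuming each stride of one foot corresponds to two steps of the walker when computing cadence.

```python
# Sketch of basic temporal gait features from heel-strike/toe-off timestamps.

def gait_timing(heel_strikes, toe_offs):
    """Stance/stride durations (s) and cadence (steps/min) for one foot."""
    stance = [toe - heel for heel, toe in zip(heel_strikes, toe_offs)]
    stride = [b - a for a, b in zip(heel_strikes, heel_strikes[1:])]
    # One stride of a single foot spans two steps of the walker (assumption)
    cadence = 60.0 * 2 * len(stride) / (heel_strikes[-1] - heel_strikes[0])
    return stance, stride, cadence

stance, stride, cadence = gait_timing([0.0, 1.1, 2.2], [0.66, 1.76])
```

With a 1.1 s stride, the stance durations come out to roughly 60% of the cycle, consistent with the stance/swing split described above.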


Fig. 8. An illustration of a gait cycle.

Table 4. Gait features.

| # | Spatial | Time |
|---|---------|------|
| 1 | Stride length (cm) | Step duration (ms) |
| 2 | Step length (cm) | Stride duration (ms) |
| 3 | Stride width or base of support (cm) | Stance phase (ms) |
| 4 | Internal/external angle (deg) | Swing phase (ms) |
| 5 | Speed (m/s or cm/s) | Cadence (steps/min) |
| 6 | Walk ratio (cm/step/min) | - |

Some further gait features [40] that can be analyzed for user recognition are gait variability and angular kinematics. Gait variability (GV) can be defined as the change in gait parameters from one stride to the next. In a gait cycle, the coefficient of variation (CV), a measure of total variability, can be calculated as the root mean square (RMS) of the standard deviation ($\sigma$) of the moment over the stride period divided by the mean of the absolute moment of force over the stride period, i.e., $CV=\frac{\sqrt{\frac{1}{n}\sum_{i=1}^{n}\sigma^2}}{\frac{1}{n}\sum_{i=1}^{n}|X_i|}$.
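A numeric sketch of the CV formula above, with per-stride standard deviations and moment values invented for illustration:

```python
# Coefficient of variation: RMS of per-stride sigmas over mean absolute moment.
import math

def coefficient_of_variation(stride_sigmas, moments):
    rms_sigma = math.sqrt(sum(s * s for s in stride_sigmas) / len(stride_sigmas))
    mean_abs = sum(abs(x) for x in moments) / len(moments)
    return rms_sigma / mean_abs

cv = coefficient_of_variation([0.3, 0.4], [1.0, -1.5, 2.0])
```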

Angular kinematics of joint angles refers to the kinematic analysis of angular motion [40]. The angular displacement (the difference between the initial and final angular positions), angular velocity (the change in angular position over a period of time), and angular acceleration (the change in angular velocity over a period of time) can be obtained using Eq. (2).
(2) $\Delta\theta=\theta_{final}-\theta_{initial}$; $\quad\omega=\frac{d\theta}{dt}$; $\quad\alpha=\frac{d\omega}{dt}$
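Eq. (2) can be approximated with finite differences over uniformly sampled joint angles; the sampling interval and angle series below are assumptions for illustration.

```python
# Finite-difference sketch of Eq. (2) for uniformly sampled joint angles.

def derivative(series, dt):
    """First-order finite difference of a uniformly sampled series."""
    return [(b - a) / dt for a, b in zip(series, series[1:])]

theta = [0.00, 0.05, 0.15, 0.30]     # joint angle (rad), sampled at 100 Hz
dt = 0.01
omega = derivative(theta, dt)        # angular velocity (rad/s)
alpha = derivative(omega, dt)        # angular acceleration (rad/s^2)
displacement = theta[-1] - theta[0]  # Δθ
```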

3.7. Footstep

A footstep is defined as the combination of a single left and right stride of a person. Footstep features include stride length, stride direction, timing information, acoustic and psycho-acoustic parameters, spatial positions, and relative pressure values in foot regions. These features can be captured using a range of sensors, including floor-based sensors [85] such as piezoelectric sensors, switch sensors, or fabric-based pressure-mapping sensors.

Ground Reaction Force (GRF) is a common feature describing a person's footstep force acquired from pressure sensors [44]. The Ground Reaction Force per sensor ($GRF_i$) can be computed by accumulating each $i$th sensor's pressure amplitude from time $t=1$ to $t=T_{max}$, i.e., $GRF_i=\sum_{t=1}^{T_{max}} P_i[t]$.
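The per-sensor GRF accumulation is a simple sum over time; the pressure matrix below is invented for illustration.

```python
# Sketch of per-sensor GRF accumulation as in the formula above.

def grf_per_sensor(pressure):
    """pressure[i][t]: amplitude of sensor i at time t. Returns GRF_i list."""
    return [sum(sensor_series) for sensor_series in pressure]

pressure = [
    [0.0, 1.2, 2.5, 1.1],  # sensor 0 over T_max = 4 samples
    [0.3, 0.9, 1.8, 0.6],  # sensor 1
]
grf = grf_per_sensor(pressure)
```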

Furthermore, using Eq. (3), time-series arrays, namely the average spatial pressure ($SP_{ave}$), cumulative spatial pressure ($SP_{cumulative}$), and upper ($SP_{upper}$) and lower ($SP_{lower}$) contours, can be generated from the pressure signals acquired from $N$ sensors over a time period $T$ [86].
(3) $SP_{ave}[t]=\frac{1}{N}\sum_{i=1}^{N}P_i[t]$; $\quad SP_{cumulative}[t]=\sum_{i=1}^{N}P_i[t]+\sum_{i=1}^{N}P_i[t-1]$; $\quad SP_{upper}[t]=\max_{i=1}^{N}P_i[t]$; $\quad SP_{lower}[t]=\min_{i=1}^{N}P_i[t]$, where $P_i[t]$ is the differential pressure value from the $i$th sensor at time $t$, and $N$ is the total number of sensors. Footstep analysis is applicable to numerous applications, such as predicting human action, security, and surveillance in public places [86].
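The Eq. (3) contour arrays can be sketched as below. The $1/N$ averaging and the invented pressure values are assumptions; the cumulative term follows the two-sum form of the equation.

```python
# Sketch of the Eq. (3) contour arrays from an N-sensor pressure frame series.

def spatial_contours(frames):
    """frames[t][i]: pressure of sensor i at time t. Returns four time series."""
    n = len(frames[0])
    ave = [sum(f) / n for f in frames]
    cumulative = [sum(frames[t]) + (sum(frames[t - 1]) if t > 0 else 0.0)
                  for t in range(len(frames))]
    upper = [max(f) for f in frames]
    lower = [min(f) for f in frames]
    return ave, cumulative, upper, lower

ave, cum, up, low = spatial_contours([[1.0, 3.0], [2.0, 2.0]])
```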

4. State-of-the-art in HCI and natural habits based behavioural biometrics

This section discusses the state-of-the-art for user recognition schemes based on HCI and natural habits-based behavioural biometrics discussed in Section 2.1. We present a systematic narrative of the recent literature developing touch-stroke dynamics, swipe gesture, touch signature, hand micro-movements, voice-prints, gait, and footstep behavioural biometrics modalities for designing user recognition schemes targeting IoT applications.

Touch-stroke dynamics: User recognition methods based on touch-stroke dynamics can be readily implemented in IoT endpoints such as smartphones, tablets, smartwatches, or other devices equipped with a touchscreen. Zheng et al. [53] utilized users' tapping behavior for user verification on a passcode-enabled smartphone. They recruited 80 subjects to explore tapping behaviors using four different factors, i.e., acceleration, pressure, size, and time. They evaluated their scheme using a one-class classifier and achieved an EER of 3.65%. Further, their experiment to quantitatively measure the effect of a mimic attack revealed that only the dissimilarity scores of acceleration were reduced, whereas the score ranges of the other three features spread wider. Similarly, Teh et al. [54] investigated touch-dynamics biometrics by extracting a basic set of timing and spatial features known as First Order Features (FOF). They derived an extended Set of Features (SOF) from the FOF. They used both one-class classifiers (K-Nearest Neighbor (kNN), Support Vector Data Description (SVDD)) and binary-class classifiers (kNN, Support Vector Machine (SVM)) to evaluate their scheme on a dataset of 150 subjects. Through experiments, they demonstrated a reduction in impersonation attempts from 100% to 9.9% by integrating the touch-dynamics authentication method into a 4-digit PIN-based authentication method, in contrast to the sole use of PIN-based authentication.

Draw-a-pin comprises a PIN content analyzer and a drawing-behavior analyzer to verify the two factors of a log-in attempt [87]. The system extracts touch information, such as x-coordinates, y-coordinates, finger pressure, and touch-area size, from each 4-digit PIN. The authors claim the scheme is resilient against shoulder-surfing attacks and achieved an EER of 4.84% using the Dynamic Time Warping (DTW) algorithm on 20 subjects. Similar to the Draw-a-pin approach, Tolosana et al. [88] suggested replacing conventional authentication systems based on PINs and One-Time Passwords (OTP) with a scheme that allows users to draw each digit of the password on the device's touchscreen. They created the e-BioDigit database, consisting of 93 subjects, to conduct their experiment. The authors evaluated the scheme using DTW combined with the Sequential Forward Feature Selection (SFFS) algorithm and Recurrent Neural Network (RNN) deep learning technology that exploited various touch features; they achieved an EER of 4%.

Multi-touch authentication with TFST (touch with fingers straight and together) gestures is a simple and reliable authentication scheme for devices equipped with multi-touch screens [58]. The scheme exploits both hand geometry and behavioural characteristics, and the authors collected a large multi-touch dataset from 161 subjects. They achieved an EER of 5.48% (5 training samples) using one-class SVM and kNN classifiers. Furthermore, they performed a security analysis for zero-effort, smudge, shoulder-surfing, and statistical attacks. Touch-stroke dynamics is a relatively recent behavioural biometric when compared to well-established behavioural biometrics such as signature verification. Table 5 compares user recognition schemes based on touch-stroke dynamics.

Table 5. User recognition schemes based on touch-strokes dynamics.

| Study | Methodology/Features | Algorithm/Classifier | Dataset | Performance |
|---|---|---|---|---|
| Li et al. [89], 2021 | Single touch, touch movement, and multi-touch | SVM | 60 subjects | Average error rate = 2.9% |
| Teh et al. [54], 2019 | FOF and SOF | kNN, SVDD, and SVM | 150 subjects | Impersonation rate = 9.9% |
| Zheng et al. [53], 2014 | Tapping behaviors | One-class machine learning technique | 80 subjects | EER = 3.65% |
| Song et al. [58], 2017 | Multi-touch with TFST | One-class SVM and kNN | 161 subjects | EER = 5.48% (5 training samples) |
| Tolosana et al. [88], 2017 | Handwritten numerical digits using finger touch | DTW combined with SFFS and RNNs | e-BioDigit [90] (93 subjects) | EER = 4% |

Swipe gesture: A swipe gesture (a collection of touch-strokes from touch-down to touch-release) can be processed for user recognition. SwipeVlock authenticates users based on the way they swipe the phone screen over a background image [61]. The scheme was evaluated using a decision tree, Naive Bayes (NB), SVM, and a Back Propagation Neural Network (BPNN) on 150 subjects and achieved a success rate of 98%. DriverAuth collects and encodes a sequence of touch events as a user swipes on the touchscreen with their finger. It achieved a TAR of 87% using Quadratic SVM (Q-SVM) on a dataset of 86 subjects. Jain et al. [57] analyzed swipe gestures, such as left-to-right swipe (L2R), right-to-left swipe (R2L), scroll up (SU), scroll down (SD), zoom in (ZI), zoom out (ZO), and single tap (ST), subsequently extracting x-y coordinates, accelerometer and orientation sensor readings, and the area covered by a finger to design an authentication scheme. The scheme recruited 104 subjects for evaluation and 30 subjects for performance verification. Using a modified Hausdorff distance (MHD), they achieved an EER of 0.31% for combined gestures using score-level fusion.

Ellavarason et al. [60] proposed a swipe-gesture authentication scheme and collected a dataset under four scenarios, i.e., sitting (room and bus) and walking (outdoor and treadmill). They used SVM, kNN, and NB to evaluate the robustness of swipe gestures and achieved an EER of 1% (sitting in a room), 30% (sitting in a bus), 23% (walking on a treadmill), and 27% (walking outdoors) on 50 subjects. According to Pozo et al. [91], horizontal strokes hold more user-specific information and are more discriminating than vertical strokes. They investigated a statistical approach based on adapted Gaussian Mixture Models (GMM) for swipe gestures and achieved an EER of 20% (40 training samples) using a dataset with 90 subjects. Garbuz et al. [92] proposed an approach that analyzes both swipes and taps to provide continuous authentication. The one-class classification model is generated using one-class SVM. The scheme can detect an impostor within 2–3 gestures, whereas the legitimate user is blocked on average after 115–116 gestures.

Another scheme involved the extraction of temporal information from consecutive touch-strokes [93]. For evaluation, they used a Temporal Regression Forest (TRF) architecture and achieved EERs of 4% and 2.5% on the Serwadda and Frank datasets, having 190 and 41 subjects, respectively. Kumar et al. [94] proposed a multimodal scheme that exploited swiping gestures, typing behavior, phone-movement patterns while typing/swiping, and their possible fusion at the feature and score levels for continuously authenticating smartphone users. A multi-template classification framework (MTCF) was implemented for evaluation. They achieved accuracies of 93.33% and 89.31% using feature-level and score-level fusion, respectively, on 28 subjects. Table 6 compares user recognition schemes based on the swipe gesture.

Table 6. User recognition schemes based on swipe.

| Study | Methodology/Features | Algorithm/Classifier | Dataset | Performance |
|---|---|---|---|---|
| Jain et al. [57], 2021 | Touchscreen gestures (L2R, R2L, SU, SD, ZI, ZO, and ST) | Modified MHD | 104 subjects for evaluation and 30 subjects for performance verification | EER = 0.31% for combined gestures using score-level fusion |
| Gupta et al. [59], 2019 | Touch-events sequence | Q-SVM | 86 subjects [95] | TAR = 87% |
| Ellavarason et al. [60], 2020 | Swipe gesture in four scenarios: sitting (room and bus) and walking (outdoor and treadmill) | SVM, kNN, and NB | 50 subjects | EER = 1% (sitting in room), 30% (sitting in bus), 23% (walking on treadmill), 27% (walking outdoor) |
| Li et al. [61], 2020 | Swipe on an image | Decision tree, NB, SVM, and BPNN | 150 subjects | Success rate = 98% |
| Pozo et al. [91], 2017 | Horizontal and vertical strokes | GMM | 190 subjects | EER = 20% (40 training samples) |
| Kumar et al. [94], 2016 | Swipe, typing behavior, phone movement patterns | MTCF | 28 subjects | Accuracy = 93.33% (feature-level fusion), 89.31% (score-level fusion) |
| Ooi et al. [93], 2019 | Touch-stroke temporal information | TRF | Serwadda (190 subjects), Frank [96] (41 subjects) | EER = 4%, 2.5% |

Touch-signature: A touch signature, made using a finger or stylus on a touchscreen device, is emerging as an alternative to the long-accepted handwritten signature for user recognition. The features explained in Section 3.3 can be exploited to identify a user in a number of security-sensitive applications, such as hotel bookings, online banking, and shopping, thereby helping minimize fraudulent activities.

Tolosana et al. [65] proposed an online signature verification system that adapts to the signature complexity level. In their approach, a signature complexity detector based on the number of lognormals from the Sigma LogNormal writing generation model and a time-function extraction module are generated for each complexity level. Then, the DTW algorithm is used to compute the similarity between the time functions of the input signature and the training signatures of the claimed user. The scheme achieved EERs of 2.5% and 5.6% on the BiosecurID (pen scenario, 400 subjects) and BioSign (pen and finger scenario, 65 subjects) datasets, respectively. Yoshida et al. [66] analyzed the touch-stroke durations and segment directions of signatures using two Japanese characters. An objective measure of the difference between two touch-duration sequences is used to evaluate similarity, and the scheme achieved an EER of 7.1% on 10 subjects. Gomez et al. [97] proposed improving the performance of online signature verification systems based on the kinematic theory of rapid human movements and its associated Sigma LogNormal model. The authors used the BiosecurID multimodal database of 400 subjects, with 6400 genuine signatures and 4800 skilled forgeries, for the evaluation of their schemes using DTW.

Ren et al. [98] proposed a signature verification system leveraging a multi-touch screen for mobile transactions, extracting critical segments to capture a user's intrinsic signing behavior for accurate signature verification. They applied DTW to calculate an optimal match between two temporal sequences of different lengths and then measure the similarity between them. On 25 subjects, EERs of 2%, 1%, and 3% were achieved for single-finger, two-finger, and observation-and-imitation attack scenarios, respectively. Al-Jarrah et al. [99] proposed anomaly detectors, such as the STD Z-Score, Average Absolute Deviation (AAD), and Median Absolute Deviation (MAD) anomaly detectors, for signature verification. Using distance functions for evaluation, they achieved EERs between 3.21% and 5.44% for skilled forgeries and between 4.74% and 6.31% for random forgeries among 55 subjects. Behera et al. [100] proposed an approach based on spotting signatures within continuous air writing captured through Leap Motion depth sensors. The processed signatures are represented using convex-hull vertices, and DTW is selected for performance verification of the spotted signatures. The authors achieved an accuracy of 80% on 20 subjects. Ramachandra et al. [101] proposed user verification using a smartwatch-based writing pattern or style that exploited accelerometer data acquired from 30 participants. The accelerometer data are further transformed using the 2-D Continuous Wavelet Transform (CWT), and deep features are extracted using a pre-trained ResNet50. Table 7 compares user recognition schemes based on touch signature.

Table 7. User recognition schemes based on touch signature.

| Study | Methodology/Features | Algorithm/Classifier | Dataset | Performance |
| --- | --- | --- | --- | --- |
| Tolosana et al. [65], 2020 | Time functions for different complexity, lognormals from Sigma-LogNormal | DTW | BiosecurID (pen scenario, 400 subjects), BioSign (pen and finger scenario, 65 subjects) | EER = 2.5%, 5.6% |
| Al-Jarrah et al. [99], 2019 | Finger-drawn signature | Distance-based functions | 55 subjects | EER = 3.21% to 5.44% (skilled forgery), 4.74% to 6.31% (random forgery) |
| Van et al. [87], 2017 | Touch information from 4-digit PIN drawing | DTW | 20 subjects | EER = 4.84% |
| Yoshida et al. [66], 2017 | Signature touch-stroke durations and segment directions | Distance-based | 10 subjects | EER = 7.1% |
| Behera et al. [100], 2017 | Spot signature using Leap Motion | DTW | 20 subjects | Accuracy = 80% |
| Ren et al. [98], 2019 | Signature using multi-touch screen | DTW | 25 subjects | EER = 2% (single-finger), 1% (two-finger), 3% (observe-and-imitate attack) |

Hand-movement: IoT end-points equipped with motion sensors can acquire the micro-movements produced by a user’s unique gestures while performing certain activities. The raw data collected from various sensors during an activity can then be exploited when designing a user recognition scheme. SmartHandle utilizes the user’s hand-movement in 3-dimensional space, determining the X, Y, and Z coordinates of the hand-movement trajectory to generate a user-identification signature [41]. The classification model is evaluated using three different classifiers: the linear discriminant classifier (LDC), the uncorrelated normal-based quadratic Bayes classifier (UDC), and random forest (RF). The scheme achieved an accuracy of 87.27% on a dataset containing 11 subjects. Centeno et al. [102] designed an approach that acquires user-specific motion patterns from the accelerometer as the user interacts with a smartphone. The feature extraction process is based on autoencoders (a deep learning technique). On a dataset of 120 subjects, the scheme achieved an EER of 2.2%.
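The autoencoder idea behind such feature extraction can be illustrated with a single-hidden-layer network trained to reconstruct accelerometer feature windows; this is a sketch only, with made-up dimensions and data, not Centeno et al.’s network. The hidden code serves as a compact feature, and the reconstruction error as an anomaly score:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "accelerometer windows": 200 samples x 30-dim feature vectors
X = rng.normal(size=(200, 30))

d_in, d_hid, lr = 30, 8, 0.01
W1 = rng.normal(scale=0.1, size=(d_in, d_hid))   # encoder weights
W2 = rng.normal(scale=0.1, size=(d_hid, d_in))   # decoder weights

for _ in range(500):                  # plain gradient descent on MSE
    H = np.tanh(X @ W1)               # latent code (the learned features)
    R = H @ W2                        # reconstruction
    err = R - X
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1 - H ** 2)    # backprop through tanh
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

def score(x):
    """Reconstruction error: low for familiar inputs, high otherwise."""
    h = np.tanh(np.asarray(x) @ W1)
    return float(np.mean((h @ W2 - x) ** 2))
```

A probe window far from the training distribution yields a much larger `score`, which is the basis for accepting or rejecting the current user.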

DeepAuth leverages time- and frequency-domain features extracted from motion sensors and a long short-term memory (LSTM) model with negative sampling to build a re-authentication framework evaluated on 47 subjects [103]. The authors also compared DeepAuth with state-of-the-art classification methods such as SVM, RF, Logistic Regression (LR), and Gradient Boosting (GB) classifiers, and achieved an accuracy of 96.70% for data collected over 20 seconds. Another bimodal scheme exploited touch-tapping and hand-movements while users enter an 8-digit free-text secret [55]. For the evaluation, NB, NeuralNet (NN), and RF classifiers were used, and a TAR of 85.77% was achieved on 97 subjects. VeriNET employed motion signals as a password and leveraged a deep RNN to authenticate users [104]. The scheme was evaluated on a dataset containing 310 subjects, achieving an EER of 7.17% for PINs and 6.09% for Android locking patterns.
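The kind of time- and frequency-domain features such schemes compute over motion-sensor windows can be sketched as follows; this is a generic illustration, not DeepAuth’s exact feature set:

```python
import numpy as np

def window_features(sig, fs=50.0):
    """Simple time- and frequency-domain descriptors for one
    motion-sensor window (e.g., 2 s of accelerometer magnitude)."""
    sig = np.asarray(sig, dtype=float)
    feats = {
        "mean": sig.mean(),
        "std": sig.std(),
        "min": sig.min(),
        "max": sig.max(),
        "rms": np.sqrt(np.mean(sig ** 2)),
    }
    spec = np.abs(np.fft.rfft(sig - sig.mean())) ** 2   # power spectrum
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    feats["spectral_energy"] = spec.sum() / len(sig)
    feats["dominant_freq"] = freqs[spec.argmax()]
    return feats

fs = 50.0
t = np.arange(0, 2, 1 / fs)
walk = np.sin(2 * np.pi * 2.0 * t)                  # a 2 Hz oscillation
print(window_features(walk, fs)["dominant_freq"])   # 2.0
```

Vectors of such descriptors, computed per window, are what the classifiers compared above (SVM, RF, LR, GB) consume.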

SnapAuth profiles a user’s arm-movements as the user performs a snap action wearing a smartwatch [105]. The scheme was evaluated using Bayes Net (BN), Multilayer Perceptron (MLP), and RF classifiers on a dataset of 11 subjects and achieved a TAR of 82.34%. Li et al. [106] proposed a continuous authentication scheme based on free-text keystrokes that exploits both keystroke latency patterns and wrist motion behaviors acquired by wrist-worn smartwatches. A Dynamic Trust Model (DTM) was developed to fuse two one-vs-all RF ensemble classifiers, achieving a TAR of 98.12% on 25 subjects. Another continuous authentication scheme compares a wristband’s motion with the phone’s motion to produce a score indicating its confidence that the person holding (and using) the phone is the person wearing the wristband [107]. A two-tier classification approach (using RF and NB binary classifiers) correlating wrist motion with touch input achieved an accuracy of 96.5% when tested with 38 subjects. MotionAuth, a motion-based authentication method for smart wearable devices, constructs users’ identifiable signatures by profiling different natural gestures such as raising or lowering the arm [108]. It achieved an EER of 2.6% on a dataset of 30 users.

SilentSense exploited touch behavior (e.g., pressure, area, duration, position) and micro hand-movements (e.g., acceleration and rotation) [109]. An SVM is employed to determine the identity of the current user from each observed interaction behavior. On a dataset containing 100 subjects, SilentSense achieved an accuracy of 99%. Similarly, Hand Movement, Orientation, and Grasp (HMOG) exploited both tapping and keystroke modalities [110]. Features describing hand micro-movement, grasp, and orientation patterns are extracted when a user taps or presses keys on a touchscreen. For the evaluation of the scheme, Scaled Manhattan with Fisher Score ranking (SM-FS), Scaled Euclidean with PCA (SE-PCA), and 1-class SVM with Fisher Score ranking (OCSVM-FS) were used. The scheme achieved EERs of 7.16% and 10.05% for walking and sitting postures, respectively, using a set of 100 subjects for validation. Table 8 compares user recognition schemes based on hand-movements.
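The Scaled Manhattan verifier used in HMOG’s evaluation is a simple template-based detector; a minimal sketch (without the Fisher Score feature ranking, and on synthetic data) is:

```python
import numpy as np

class ScaledManhattan:
    """Template-based verifier: enroll with a user's genuine feature
    vectors; score a probe by the Manhattan distance, scaled per
    feature by the mean absolute deviation (lower = more genuine)."""

    def fit(self, X):
        X = np.asarray(X, dtype=float)
        self.mu = X.mean(axis=0)
        self.mad = np.mean(np.abs(X - self.mu), axis=0) + 1e-9
        return self

    def score(self, x):
        return float(np.mean(np.abs(np.asarray(x) - self.mu) / self.mad))

rng = np.random.default_rng(1)
enrolled = rng.normal(0.0, 1.0, size=(50, 10))   # genuine enrollment vectors
verifier = ScaledManhattan().fit(enrolled)

probe_genuine = rng.normal(0.0, 1.0, size=10)    # same user's distribution
probe_impostor = rng.normal(3.0, 1.0, size=10)   # shifted impostor features
print(verifier.score(probe_genuine), verifier.score(probe_impostor))
```

Thresholding this score yields the accept/reject decision; sweeping the threshold traces out the EER figures reported above.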

Table 8. User recognition schemes based on hand-movements.

| Study | Methodology/Features | Algorithm/Classifier | Dataset | Performance |
| --- | --- | --- | --- | --- |
| Centeno et al. [102], 2017 | Motion patterns using accelerometer | Autoencoders | 120 subjects | EER = 2.2% |
| Gupta et al. [41], 2019 | User’s hand-movement in 3-D space | LDC, UDC, and RF | 11 subjects | Accuracy = 87.27% |
| Bo et al. [109], 2013 | Touching behavior | SVM | 100 subjects | Accuracy = 99% |
| Amini et al. [103], 2018 | Time- and frequency-domain features from motion sensors and an LSTM model | SVM, RF, LR, and GB | 47 subjects | Accuracy = 96.70% (20 seconds) |
| Mare et al. [107], 2019 | Compares the wristband’s motion with the phone’s motion | RF and NB | 38 subjects | Accuracy = 96.5% |
| Li et al. [106], 2017 | Free-text keystroke | DTM | 25 subjects | TAR = 98.12% |
| Buriro et al. [105], 2018 | Arm-movements to perform snap action | BN, MLP, and RF | 11 subjects | TAR = 82.34% |
| Lu et al. [104], 2017 | Motion signals | Deep RNN | 310 subjects | EER = 7.17% (PINs), 6.09% (Android locking patterns) |
| Buriro et al. [55], 2021 | Touch-tapping and hand-movements | NB, NN, and RF | 97 subjects | TAR = 85.77% |
| Sitova et al. [110], 2015 | Hand movement, orientation, grasp, tap, and keystroke | SM-FS, SE-PCA, and OCSVM-FS ranking | 100 subjects (sitting and walking postures) | EERs = 7.16% (walking), 10.05% (sitting) |
| Gupta et al. [111], 2022 | Hand-movements | RF | 40 subjects | TAR = 77.25% |

Voice: Voice is an easily collectible behavioural biometric modality that can be acquired by any IoT end-point equipped with a microphone. Section 3.5 explained the features that are typically exploited when designing voice-based user recognition schemes.

An automatic voice biometric authentication scheme that recognizes a speaker using MFCCs and the Discrete Cosine Transform (DCT) is presented in [112]. On a dataset of 13 subjects, an SVM with a radial-basis function (RBF) kernel was used for evaluation, achieving a success rate of 90%. DriverAuth computed statistical features after extracting MFCCs from a bandpass-filtered voice signal containing 2 channels sampled at 44,100 Hz with 16 bits per sample [59]. The authors used Q-SVM, ETB, and Weighted kNN (W-kNN) classifiers to generate a multi-class classification model. On a dataset of 86 subjects, the system achieved a TAR of 90.5% with voice features and 95.1% with voice and swipe features combined.
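The MFCC extraction chain common to these voice schemes (framing, windowing, power spectrum, mel filterbank, log, DCT) can be sketched as follows; the parameter values here are typical defaults, not those of any cited system:

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, fs=16000, n_fft=512, n_mels=26, n_ceps=13,
         frame_len=400, hop=160):
    """Simplified MFCCs: frame -> Hamming window -> power spectrum
    -> mel filterbank -> log -> DCT (keep first n_ceps coefficients)."""
    # 1. Split into overlapping frames and window them
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)

    # 2. Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft

    # 3. Triangular mel filterbank, spaced uniformly on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    # 4. Log mel energies, then DCT to decorrelate
    feat = np.log(power @ fbank.T + 1e-10)
    return dct(feat, type=2, axis=1, norm="ortho")[:, :n_ceps]

fs = 16000
sig = np.sin(2 * np.pi * 300 * np.arange(fs) / fs)   # 1 s, 300 Hz tone
coeffs = mfcc(sig, fs=fs)
print(coeffs.shape)   # (98, 13)
```

Statistical summaries of these per-frame coefficients are then the features fed to classifiers such as the Q-SVM above.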

Doddappagol et al. [113] proposed a text-prompted voice recognition system that used MFCCs and pitch and formant techniques for feature extraction. On a dataset containing 25 subjects, with an SVM employed for user classification, an accuracy between 88.7% and 92% was achieved. BreathPrint exploits a person’s breathing audio signatures, i.e., sniffing, normal, and deep breathing [114]. A microphone sensor in close proximity to the user’s nose acquires these three audio signatures. A classification pipeline using Gammatone Frequency Cepstral Coefficients (GFCC) as features within a GMM-based classifier was used for evaluation, achieving an accuracy of 94% on a dataset comprising 10 subjects. VoiceLive performs liveness detection by measuring Time-Difference-of-Arrival (TDoA) changes across a sequence of phoneme sounds [69]. This phoneme-localization-based liveness detection system distinguishes a passphrase spoken by a live user from a replayed one, giving an accuracy of 99% on a dataset containing 12 subjects. Table 9 compares user recognition schemes based on voice-print.
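A generic way to estimate the time difference of arrival between two audio channels, illustrative only and not VoiceLive’s phoneme-level algorithm, is to locate the peak of their cross-correlation:

```python
import numpy as np

def tdoa_samples(ch1, ch2):
    """Estimate the delay (in samples) of ch2 relative to ch1 by
    finding the peak of their full cross-correlation."""
    corr = np.correlate(ch2, ch1, mode="full")
    return int(np.argmax(corr) - (len(ch1) - 1))

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
pulse = np.exp(-((t - 0.01) ** 2) / 1e-6)   # synthetic phoneme-like burst
delayed = np.roll(pulse, 8)                 # arrives 8 samples later
print(tdoa_samples(pulse, delayed))         # 8
```

Dividing the sample delay by the sampling rate gives the TDoA in seconds, from which a sound source can be coarsely localized relative to the two microphones.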

Table 9. User recognition schemes based on voice.

| Study | Methodology/Features | Algorithm/Classifier | Dataset | Performance |
| --- | --- | --- | --- | --- |
| Doddappagol et al. [113], 2016 | MFCCs, pitch, and formant technique | SVM | 25 subjects | Accuracy = 88.7% to 92% |
| Chauhan et al. [114], 2017 | Audio signatures (sniff, normal, and deep breathing) | GFCC and GMM | 10 subjects | Accuracy = 94% |
| Zhang et al. [69], 2016 | Spoken passphrase | Liveness detection by measuring TDoA changes for a sequence of phoneme sounds | 12 subjects | Accuracy = 99% |
| Barbosa et al. [112], 2015 | MFCC and DCT of voiceprint | SVM-RBF | 13 subjects | Success rate = 90% |
| Gupta et al. [59], 2019 | Statistical features from MFCCs | Q-SVM | 86 subjects | TAR = 90.5% |

Gait: The human gait is a spatio-temporal, motor-controlled biometric behavior that can be employed to recognize individuals unobtrusively using a camera, radar, or position-, motion-, or pressure-based sensors. Musale et al. [115] proposed a Lightweight Gait Authentication Technique (Li-GAT) that exploits information, such as the subconscious level of user activities, collected from IoT devices with built-in motion sensors, including an accelerometer. For evaluation, LR using deep-NN, RF, and kNN classifiers were selected, achieving an accuracy of 96.69% on a dataset containing 12 subjects. Kastaniotis et al. [116] designed a gait recognition system based on a hierarchical representation of gait trajectories acquired using depth and motion sensors. The acquired pose sequences are expressed as angular vectors (Euler angles) of eight selected limbs. These trajectories (sequences of angular vectors) are then mapped into a dissimilarity space, resulting in a vector of dissimilarities modeled via sparse representation. For verification, three criteria were evaluated: the Sparsity Concentration Index (SCI), the minimum dissimilarity (MinDiss), and their combination, achieving an EER of 3.1% on 30 subjects.

Deep Gait authenticates users based on a single walk cycle [117]. It acquires accelerometer and gyroscope readings from wearable or hand-held devices to determine a user’s gait. For evaluation, a deep-NN was used that achieved an EER of 1.8% on 51 subjects. Another smartphone-based gait recognition system, applying Subjective Logic (SL) for biometric data fusion, is presented in [118]. The gait features considered are statistical (ST), the histogram of the distribution (BIN), MFCCs, and Bark-frequency cepstral coefficients (BF1 and BF2). For evaluation, Extremely Randomized Trees (ERT), MLP, and RF classifiers were selected, giving an EER of 1.31% on 48 subjects. Lamiche et al. [119] proposed a bimodal authentication scheme based on gait patterns and keystroke dynamics. Using the smartphone’s built-in sensors, the user’s gait signals and keystroke dynamics are acquired simultaneously during walking and text-typing activities. The scheme was evaluated on 20 subjects, and an accuracy of 99.11% was achieved using an MLP classifier.
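The statistical (ST) and histogram (BIN) gait features mentioned above can be sketched as a simple feature vector over an accelerometer-magnitude trace; this illustrates the feature types only, not the cited systems’ exact pipelines:

```python
import numpy as np

def gait_features(acc_mag, n_bins=10):
    """Concatenate simple statistical (ST) descriptors with a
    normalized histogram (BIN) of an accelerometer-magnitude trace."""
    x = np.asarray(acc_mag, dtype=float)
    st = np.array([x.mean(), x.std(), x.min(), x.max(), np.median(x),
                   np.percentile(x, 25), np.percentile(x, 75)])
    hist, _ = np.histogram(x, bins=n_bins, range=(x.min(), x.max()))
    bin_feat = hist / hist.sum()          # distribution of magnitudes
    return np.concatenate([st, bin_feat])

t = np.linspace(0, 10, 500)
walk = 9.8 + np.sin(2 * np.pi * 1.8 * t)   # ~1.8 Hz cadence around 1 g
fv = gait_features(walk)
print(fv.shape)   # (17,)
```

Such fixed-length vectors are what ensemble classifiers like ERT and RF operate on, one vector per gait segment.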

Gait-Watch is a context-aware gait-based authentication system coupled with a smartwatch-based activity detector that identifies a user’s current activity [120]. Based on the real-time output of the activity detector, identification is performed against the corresponding training templates. The method extracts unique features of gait dynamics by exploiting the scale-space of gait acceleration signals using a sparse coding scheme. For identification, probabilistic sparse representation classification (PSRC) is employed; the method achieved 97.3% recognition accuracy and a 3.5% EER. An improvement of 30.21% in recognition accuracy is observed when the user’s activity is determined dynamically. Table 10 compares user recognition schemes based on a user’s gait.

Table 10. User recognition schemes based on gait.

| Study | Methodology/Features | Algorithm/Classifier | Dataset | Performance |
| --- | --- | --- | --- | --- |
| Wasnik et al. [118], 2017 | Users’ gait ST, BIN, MFCCs, BF1 and BF2 | ERT, MLP, and RF | 48 subjects | EER = 1.31% |
| Musale et al. [115], 2018 | Walking-based activities | Deep-NN, RF, kNN | 12 subjects | Accuracy = 96.69% |
| Kastaniotis et al. [116], 2015 | Gait trajectories | SCI, MinDiss, and their combination | 30 subjects | EER = 3.1% |
| Bael et al. [117], 2019 | Single walk cycle using motion sensors | Deep-NN | 51 subjects | EER = 1.8% |
| Lamiche et al. [119], 2019 | Gait patterns and keystroke dynamics | MLP | 20 subjects | Accuracy = 99.11% |

Footstep: Footstep features for recognizing a person can be collected imperceptibly using pressure-based sensors. Moreover, people can walk over the footstep sensors wearing footwear (such as shoes, trainers, or boots) and carrying weights (such as shoulder bags and files), which makes the recognition process more realistic.

Rodriguez et al. [121] proposed a scheme that exploits footstep signals in both the time and space domains. In the time domain, the extracted features include the ground reaction force (GRF), the spatial average, and the upper and lower contours of the pressure signals; the spatial domain involves features such as 3D images of the accumulated pressure. An SVM-RBF is used for evaluation. On a dataset of 120 subjects, EERs of 15.2%, 13.4%, and 7.9% were achieved by training the classification model with 40, 100, and 500 single footstep signals, respectively, after fusing the time-domain and space-domain features. Similarly, Edward et al. [44] extracted geometric and wavelet features from a footstep dataset collected by the Swansea University Speech and Image Research Group. On a dataset of 94 subjects, the scheme achieved an EER of 16.3% using the RF classifier for individual prediction.
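The time-domain footstep features above can be illustrated over a pressure-sensor array; the sensor count, sampling, and synthetic force profile here are invented for the example:

```python
import numpy as np

def footstep_time_features(pressure):
    """pressure: array of shape (T, S) - T time samples over S pressure
    sensors for one footstep.  Returns simple time-domain descriptors
    in the spirit of GRF / spatial-average / contour features."""
    p = np.asarray(pressure, dtype=float)
    grf = p.sum(axis=1)            # ground reaction force over time
    return {"grf": grf,
            "spatial_avg": p.mean(axis=1),   # spatial average per sample
            "upper": p.max(axis=1),          # upper contour of the signals
            "lower": p.min(axis=1),          # lower contour
            "peak_force": float(grf.max()),
            "peak_time": int(grf.argmax())}

# Synthetic footstep: force rises then falls uniformly across 88 sensors
T, S = 100, 88
t = np.linspace(0, 1, T)
profile = np.sin(np.pi * t) ** 2             # heel-strike to toe-off
pressure = np.outer(profile, np.ones(S))
feats = footstep_time_features(pressure)
print(feats["peak_time"])   # mid-stance, around T/2
```

These per-footstep profiles (or summaries of them) are then fused with spatial features before classification.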

Zhou et al. [122] proposed a user identification scheme based on a single-footstep biometric, without considering the shape details or inter-step relationships of users’ footprints. They utilized fabric sensors to register features such as the shifting of the center of gravity, the maximum pressure point, and the overall pressured area. The scheme was evaluated using Q-SVM and achieved an accuracy of 76.9% on a dataset containing 529 footsteps collected from 13 subjects.

One automatic biometric verification scheme leveraged spatio-temporal footstep representations acquired from floor-only sensor data [86]. For evaluation, an ensemble of a deep ResNet architecture and SVM models was used, achieving an EER of 0.7% on 120 subjects. Riwurohi et al. [123] proposed a biometric identification system based on the sound of footsteps acquired using microphone arrays. The footstep sound features of 10 participants were extracted using MFCCs, and the scheme achieved an accuracy of 98.8% using a BPNN. Table 11 compares user recognition schemes based on a user’s footsteps.

Table 11. User recognition schemes based on footsteps.

| Study | Methodology/Features | Algorithm/Classifier | Dataset | Performance |
| --- | --- | --- | --- | --- |
| Edward et al. [44], 2014 | Geometric and wavelet features extracted from a footstep | RF | 94 subjects | EER = 16.3% |
| Vera et al. [121], 2013 | Time- and space-domain footstep signals | SVM-RBF | 120 subjects | EERs = 15.2%, 13.4%, and 7.9% with 40, 100, and 500 training signals, respectively |
| Costilla et al. [86], 2018 | Spatio-temporal footstep representations | Deep ResNet architecture and SVM | 120 subjects | EER = 0.7% |
| Zhou et al. [122], 2017 | Single footstep signal without inter-step relationships | Q-SVM | 13 subjects | Accuracy = 76.9% |
| Riwurohi et al. [123], 2018 | Footsteps’ sound | BPNN | 10 subjects | Accuracy = 98.8% |
| Gupta et al. [111], 2022 | Pressure signal | SVM | 40 subjects | TAR = 72.25% |

Multi-modal biometric systems: Multi-modal biometric systems that combine two or more behavioural biometrics are listed in Table 12. The HMOG scheme exploited the micro-movement and orientation dynamics resulting from how a user grasps, holds, and taps on a smartphone to continuously authenticate smartphone users [110]. The DriverAuth scheme leveraged swipe, voice, and face modalities for authenticating drivers, making on-demand ride and ride-sharing services safer and more secure for riders [42], [59].

Table 12. Multimodal user recognition schemes.

| Study | Methodology/Features | Algorithm/Classifier | Dataset | Performance |
| --- | --- | --- | --- | --- |
| Sitova et al. [110], 2015 | Hand movement, orientation, grasp, tap, and keystroke | SM-FS, SE-PCA, and OCSVM-FS ranking | 100 subjects | EERs = 7.16% (walking), 10.05% (sitting) |
| Gupta et al. [59], 2019 | Swipe, voice, and face | Ensemble Bagged Tree | 86 subjects [95] | TAR = 96.48%, FAR = 0.02% |
| Buriro et al. [55], 2021 | Touch-tapping and hand-movements | NB, NN, and RF | 97 subjects | TAR = 85.77% |
| Gupta et al. [111], 2022 | Hand-movements and footstep | RF | 40 subjects | TAR = 97.25%, FAR = 0.01% |

The underlying mechanism of PIN/password-based authentication schemes can be strengthened by employing behavioural biometrics. Buriro et al. [55] proposed a scheme that uses the touch-timing differences of the entered strokes and the hand-movement gesture recorded during random text entry to authenticate users. The scheme thus gives users the flexibility to enter any random 8-digit alphanumeric text instead of a pre-configured PIN/password. Step & Turn exploited single-footstep and hand-movement behaviors to authenticate users, providing frictionless and smooth access to smart physical spaces [111].

5. Security, privacy and usability considerations

Security, privacy, and usability can be described as indispensable non-functional requirements when designing human-to-things recognition schemes [124]. Such schemes are required to satisfy the CIA criteria: confidentiality (ensuring access to legitimate users only), integrity (guaranteeing modification by legitimate users only), and availability (ensuring uninterrupted availability to legitimate users). With regard to these requirements, substantial improvements can be observed in evolving behavioural biometric-based user recognition schemes for Authentication, Authorization, and Accounting.

5.1. Security

This section discusses the security aspects of IoT systems employing behavioural biometrics-based authentication mechanisms. Fig. 9 illustrates a threat model highlighting sixteen attack points or vectors (APs) that can be exploited in a typical biometric recognition system [125]. The set of potential threats and vulnerabilities at each AP is provided in the caption of Fig. 9. These threat models mainly focus on attack points that are useful for addressing potential security loopholes in IoT systems employing behavioural biometrics-based authentication mechanisms. A number of security analyses have been performed to evaluate touch-based recognition mechanisms against common attacks such as impersonation, mimicking, smudge, or shoulder-surfing attacks [53], [54].

Fig. 9

Fig. 9. Attack points (AP) in a generic biometric recognition system [125]. [AP1 - Adversary attacks (spoofing, presentation, fake physical biometrics, latent print reactivation, wolf attack), administrative frauds (false enrollment, exception abuse); AP2 - Denial of service; AP3 - Hill-climbing, brute force attack, dictionary attack, replay attack, man-in-the-middle attack; AP4 - Override feature extractor, Trojan horse attack; AP5 - False data injection, reuse of residuals; AP6 - Side channel attack; AP7 - Intercept template, data reject; AP8 - Unauthorized access through an external compromised system; steal, delete, modify, substitute, or reconstruct templates; AP9 - Alter system parameters, synthesized feature vector, brute force attack, hill-climbing, fake digital biometrics, replay attack; AP10 - Intercept stored template, replay attack; AP11 - Comparison module override, side channel attack, Trojan horse attack; AP12 - Modify score; AP13 - Override decision module; AP14 - False match; AP15 - False non-match; AP16 - Denial of service].

Serwadda et al. [126] rigorously analyzed the impact of Lego-driven robotic attacks on touch-based authentication, namely population-statistics-driven and user-tailored attacks. In a population-statistics-driven attack, patterns acquired from a large database are used to train the robot, whereas in a user-specific attack, samples of a legitimate user are stolen to train it. Both attacks were then launched by a Lego robot trained to swipe on the touch screen. Further, these attack methods can be refined into standard impostor tests for touch-based recognition schemes. Song et al. [58] conducted a security analysis of their TFST gesture authentication against: the zero-effort attack, i.e., an adversary attacks without any prior knowledge of the underlying authentication scheme; the smudge attack, i.e., an adversary identifies and traces oily residues on the touchscreen; the shoulder-surfing attack, i.e., an adversary secretly observes the legitimate user; and the statistical attack, i.e., an adversary employs knowledge obtained from the statistics of a group of users.

A Continuous Smartphone Authentication method using Wristbands (CSAW) exploited motion gestures to verify whether a smartphone is in the hands of its legitimate owner [107]. The security analysis of CSAW considered: opportunistic snooping, i.e., an adversary snoops on another smartphone when the owner is not around; credential theft, i.e., an adversary steals the credentials for accessing smart devices remotely; and shadowing, i.e., an adversary shadows a user to access his/her smartphone illegitimately. The authors reported a false-positive rate of less than 2%. Yi et al. [127] performed an empirical study on the security and usability of a real-time free-form motion gesture authentication scheme (REMOTE) that leverages user-created 3D gestures. They evaluated REMOTE against: random attacks, i.e., an adversary has no prior knowledge of the victim’s gesture and attacks by random guessing; content-aware attacks, i.e., an adversary has descriptive information about the victim’s gesture obtained via social engineering or a third party; and mimicry attacks, i.e., an adversary observes a legitimate user’s gesture directly or through a recorded video. The authors reported that random attacks are ineffective against gesture-based behavioural biometric authentication. In the case of content-aware attacks, additional descriptive information provides only minimal help to adversaries. Although mimicry attacks appear more effective than random and content-aware attacks, they still achieve only negligible success in most attack attempts.

Many studies have sought to understand common attacks on voice-based recognition systems. VAuth [128] exploited users’ language, accent, or mobility to ensure that voice assistants, such as Siri, Google Now, and Cortana, execute only commands that originate from the owner’s voice. VAuth successfully averted replay, voice-mangling, and impersonation attacks using a multi-stage matching algorithm. Rahmeni et al. [129] proposed a method to mitigate spoofing attacks, such as impersonation, replay, voice conversion, and speech synthesis, independent of the attack type. Their method decomposes the speech signal into a glottal source signal and models the vocal tract filter using glottal inverse filtering. Features are obtained using Iterative Adaptive Inverse Filtering (IAIF) descriptors, which can be exploited to distinguish between genuine and spoofed input speech using an SVM and an extreme learning machine (ELM).

Chang [130] proposed a two-layer authentication method using a voiceprint to mitigate replay attacks. Similarly, the VoiceLive system addressed replay attacks by extracting the TDoA of each phoneme sound to distinguish between a passphrase spoken by a live user and a replayed one. It leverages the human speech production system and advanced smartphone audio hardware. Garg et al. [131] investigated the effectiveness of Constant-Q Cepstral Coefficient (CQCC) and MFCC features extracted from individual frequency subbands to improve the performance of replay attack detection in automatic speaker verification (ASV) systems. Tom et al. [132] proposed group delay (GD) grams, obtained by concatenating a group delay function over consecutive frames, as a novel time-frequency representation of an utterance. GD-grams provide a time-frequency representation with a high spectral resolution that can be used for the end-to-end training of deep convolutional NNs to detect audio replay attacks.

Voice conversion attacks apply synthetic speech generation or source voice morphing to achieve the same effect as human impersonation or adapted speech synthesis, thus deceiving speaker identification (SID) and speaker verification (SV) systems. One approach exploited score-level fusion of front-end features, namely, CQCCs, the all-pole group delay function (APGDF), and fundamental frequency variation (FFV), to detect synthetic speech [133]. Similarly, Yang et al. [134] investigated high-frequency-based features for the detection of spoofing attacks. The method analyzed inverted constant-Q coefficients (ICQC) and inverted CQCC, applying the DCT to the inverted octave power spectrum and the inverted linear power spectrum respectively, to detect synthetic speech. Wu et al. [135] reported that hidden Markov model (HMM) based text-dependent systems with temporal speech information provided more resistance to voice conversion attacks than systems lacking temporal modeling.

Munaz et al. [136] evaluated the security strength of a smartphone-based gait recognition system against zero-effort and live minimal-effort impersonation attacks under realistic scenarios using live visual and audio feedback. In particular, live impersonation attacks were performed by five professional actors specialized in mimicking body movements and body language. They reported no false positives under impersonation attacks, and 29% of attacks were completely unsuccessful. Gait-Watch was evaluated against an imposter attack scenario [120] and reported a false acceptance of only 3.5 per 100 impostor trials. ZEMFA [137], a zero-effort multi-factor authentication system for securing access to a terminal, leveraged a smartphone and a smartwatch (or bracelet) to acquire gait patterns, i.e., mid/lower body movements measured using the phone and wrist/arm movements measured using the watch. The scheme reported 0.2% false negatives and 0.3% false positives on average for passive attacks under benign settings. Further, the authors reported 4.55% false positives on average for active imitation attacks, such as treadmill-based attacks. Tram et al. [138] proposed a technique to prevent statistical attacks arising from the low inter-class discrimination and high intra-class variation of gait data. The technique leveraged Linear Discriminant Analysis (LDA) to enhance the discrimination of gait templates, and Gray code quantization to extract highly discriminative and stable binary templates, which can significantly improve the security and performance of inertial-sensor based gait cryptosystems.

Moreover, behavioural biometrics have been evaluated for designing implicit [139], [140], [141], continuous [92], [94], [119], and risk-based [55], [142] user recognition schemes. However, more comprehensive security evaluations of these behavioural biometric modalities are desired to avert any unauthorized intrusion by adversaries, repudiation claims by malicious users, denial-of-service to legitimate users, or users’ privacy erosion due to function creep [143].

5.2. Privacy

Privacy-preserving techniques [144], such as Template Protection Schemes, Biometric Crypto-Systems, and Pseudonymous Biometric Identities, can be implemented to safeguard users’ biometric data and to satisfy properties such as irreversibility, revocability, unlinkability, and discriminability. There is an increasing number of regional, national and international privacy protection laws and regulations, such as [145], [146], [147], that place biometric modalities under a special category of personal data. ISO/IEC 24745:2011 [148] defines the following four properties for a template protection scheme, illustrated in Fig. 10:

  • Irreversibility: Reconstruction of the original biometric features from a stored biometric template must be computationally infeasible, discouraging adversaries from reconstructing the biometric data from features stored in protected form.

  • Revocability: The ability to generate multiple versions of secure biometric templates from the same biometric data of a user, enabling instantaneous replacement of a compromised biometric template with a new one without causing any inconvenience to the user.

  • Unlinkability: Multiple biometric templates of the same subject used by different recognition systems must not allow identifying or linking the user based on the protected features.

  • Discriminability: A secure template must not degrade the recognition accuracy of a biometric-based recognition system and should maintain sufficient discriminative information with respect to the rest of the registered users.

Fig. 10. Template protection properties.

Some of the basic techniques for generating cancelable biometric templates are based on noninvertible geometric transformations, such as affine, cartesian, polar, or functional transformations [149]. Bioconvolving [150] can be useful for all behavioural biometric modalities in which the raw signals are sequences of real numbers of finite length. In this method, each transformed sequence is obtained from the corresponding original sequence of N values by dividing the original sequence into W non-overlapping segments (W<N), according to W randomly selected integers between 1 and 99 taken in ascending order, and combining the resulting segments by convolution. Zhi et al. [151] proposed learning-based Index-of-Maximum (LIoM) hashing that utilizes a supervised learning mechanism to generate a more discriminative and compact cancelable touch-stroke template. With a supervised learning approach, LIoM learns the optimized projection itself, unlike data-agnostic IoM hashing that depends on random projection for hashing. The authors reported that the classification model generated with a protected template achieved significantly better accuracy than with an original template.
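As a rough illustration of the bioconvolving idea, the sketch below splits a toy sequence according to key percentages and combines the segments by convolution (a simplified sketch, not the exact parameterization of [150]; the toy sequence, the key, and the `bioconvolving_transform` name are illustrative):

```python
import numpy as np

def bioconvolving_transform(sequence, key):
    """Sketch of a bioconvolving-style cancelable transform.

    `key` holds distinct integers in (0, 100), interpreted as
    percentages that split the sequence into non-overlapping
    segments; convolving the segments yields a transformed
    template that cannot be inverted without the key.
    """
    n = len(sequence)
    percents = sorted(key)  # ascending order, as in the original scheme
    bounds = [0] + [(p * n) // 100 for p in percents] + [n]
    segments = [np.asarray(sequence[bounds[i]:bounds[i + 1]], dtype=float)
                for i in range(len(bounds) - 1)]
    template = segments[0]
    for seg in segments[1:]:  # combine segments by convolution
        template = np.convolve(template, seg)
    return template

# Toy touch-stroke pressure sequence, transformed with key (30, 70):
raw = [0.1, 0.4, 0.9, 0.8, 0.5, 0.3, 0.2, 0.6, 0.7, 0.2]
protected = bioconvolving_transform(raw, key=(30, 70))
```

Revoking a compromised template then amounts to re-enrolling the user with a freshly drawn key.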

Chee [152] proposed Random Binary Orthogonal Matrices Projection (RBOMP) and Two-Dimensional Winner-Takes-All (2DWTA) hashing for voice template protection. RBOMP transforms a 1-D voice feature (an i-vector with a fixed-length real-valued representation) from a linear space into an ordinal space by convolving it with a binary orthogonal matrix. Further, a user-specific random token and a non-invertible function, such as prime factorization, are used to conceal the returned index, which strengthens the system security significantly. Conversely, 2DWTA hashing transforms a 2-D feature from continuous values to discrete values. 2DWTA relies on an implicit ordering of the features rather than their absolute values; that is, 2DWTA hashing defines an ordinal embedding with an associated rank-correlation measure. Billeb et al. [153] proposed a fuzzy commitment scheme employing binarized feature vectors in a cryptographic primitive for voice features extracted with a speaker recognition system based on GMM and UBM (Universal Background Model). The proposed binarization scheme generates fixed-length binary voice templates.
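The rank-order principle behind WTA-style hashing can be illustrated with a minimal 1-D sketch (hypothetical parameters; this is not the 2DWTA construction of [152], which operates on 2-D features with a user-specific token):

```python
import numpy as np

def wta_hash(features, m=4, k=3, seed=7):
    """Minimal Winner-Takes-All hash of a 1-D feature vector.

    Each of the m codes is the argmax position within the first k
    entries of a key-dependent permutation of the features, so the
    hash depends only on rank order, not on absolute values.
    """
    rng = np.random.default_rng(seed)  # the seed plays the role of a token
    x = np.asarray(features)
    codes = []
    for _ in range(m):
        perm = rng.permutation(len(x))
        codes.append(int(np.argmax(x[perm[:k]])))
    return codes

v = [0.2, 0.9, 0.1, 0.5, 0.7]
# Monotonic scaling leaves the ranks, and hence the hash, unchanged:
assert wta_hash(v) == wta_hash([10 * x for x in v])
```

This ordinal property is what makes such hashes robust to session-to-session amplitude variation in behavioural signals.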

Elrefaei et al. [154] proposed a fuzzy commitment scheme to protect gait features extracted from gait images of one complete gait cycle using a local ternary pattern (LTP). The final feature vector is produced by applying principal component analysis (PCA) to the average images concatenated using a 2D joint histogram. Further, to enhance the robustness of the system, only highly robust and reliable bits are extracted from the feature vector. Bose-Chaudhuri-Hocquenghem (BCH) codes are used for key encoding and decoding during the enrolment and verification phases, respectively. Similarly, Rúa et al. [155] proposed a Hidden Markov Model-Universal Background Model (HMM-UBM) gait authentication system that incorporates template protection based on a fuzzy commitment scheme. Authentication succeeds only when the Hamming distance between the binary representation obtained during verification and the one stored at enrollment is equal to, or less than, the error-correcting capability of the employed Error Correcting Code (ECC).
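The commit-and-verify logic common to these fuzzy commitment schemes can be sketched with a toy repetition code standing in for BCH (a simplified sketch: real systems use BCH or similar ECCs, a cryptographically random key, and salted hashing):

```python
import hashlib
import secrets

def commit(template_bits, key_bits, repeat=3):
    # Encode the key with a repetition code, then XOR it with the
    # enrolled biometric bits to form the public helper data.
    codeword = [b for b in key_bits for _ in range(repeat)]
    helper = [c ^ t for c, t in zip(codeword, template_bits)]
    return helper, hashlib.sha256(bytes(key_bits)).hexdigest()

def verify(query_bits, helper, key_hash, repeat=3):
    # XOR the helper with the query bits and decode by majority vote;
    # verification succeeds iff the recovered key hashes to key_hash,
    # i.e. the Hamming distance stayed within the code's capability.
    noisy = [h ^ q for h, q in zip(helper, query_bits)]
    recovered = [int(sum(noisy[i:i + repeat]) * 2 > repeat)
                 for i in range(0, len(noisy), repeat)]
    return hashlib.sha256(bytes(recovered)).hexdigest() == key_hash

key = [1, 0, 1, 1]
enrolled = [secrets.randbelow(2) for _ in range(12)]
helper, khash = commit(enrolled, key)
query = list(enrolled); query[5] ^= 1  # one flipped bit: within capability
assert verify(query, helper, khash)
```

The repetition code corrects one error per block, so a query within the code's Hamming-distance budget verifies while a noisier one does not.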

In addition, hardware-level encryption can be employed on client devices to establish trust between users and businesses as part of a privacy-first approach to behavioural analytics. A biometric system in an IoT setting becomes unusable if it cannot revoke biometric templates and avoid biometric template leakage, as multiple services rely upon the same biometric modalities from each user. Comparatively, user-privacy issues in employing behavioural biometrics are less invasive than those of biological biometrics; nevertheless, it is strongly recommended to include an appropriate template protection scheme when designing behavioural biometric-based user authentication schemes.

Privacy issues can vary greatly from use case to use case. Overall, behavioural biometric protection mechanisms can significantly address privacy concerns, at little to no cost, when designing Identity and Access Management (IAM) schemes for IoT systems.

5.3. Usability

This section discusses how behavioural biometrics for user recognition schemes can meet the guidelines defined by the ISO 9241-11 standard [156]. This standard defines usability as “the extent to which a product can be used by specified users to achieve specific goals with effectiveness, efficiency, and satisfaction in a specified context of use”. Furthermore, we describe how these attributes can be used for quantifying the usability of a user recognition system.

Still et al. [157] presented a set of human-centered authentication design guidelines. The guidelines for usable security included the need for a transparent authentication process, minimal modality overhead on users’ limited working memory, support for inclusivity, and faster access. Generally, usability evaluation methods (UEMs) incorporate techniques such as inspection, testing, or surveying to assess the extent to which usability objectives are achieved for a user recognition system. Usability evaluation processes can be formative, i.e., performed during the design and development phase of a system, or summative, i.e., based on users’ assessment after they use the system [158].

A number of behavioural biometric-based user recognition schemes rely on the System Usability Scale (SUS) for subjective assessments of their usability [87], [159], [160]. VAuth conducted a usability survey using Amazon Mechanical Turk [128]. TFST gesture authentication evaluated its usability by comparison with the commonly used passcode and pattern-lock mechanisms [58]. Usability was determined from four different perspectives: 1) Is it easy to memorize? 2) Is it fast to log in? 3) Is it convenient to perform authentication? and 4) Is it less error-prone? For each question, users could respond with “disagree”, “neutral” or “agree”. UEMs and surveys can help analyze the perceived usability and user experience of a user recognition scheme to ensure wider acceptance from users.

As illustrated in Fig. 11, we recommend a holistic method for computing the intrinsic usability attributes that can impact end-users’ decision to use a security mechanism: effectiveness, efficiency, satisfaction, thoroughness, validity and reliability. Eqs. 4 to 9 can be applied to measure these usability attributes empirically for a user recognition scheme by employing a UEM.

Fig. 11. Attributes for usability evaluation.

Effectiveness [161] is the degree to which users correctly and completely achieve specified goals and it can be measured using Eq. 4.
$$\text{Effectiveness} = \frac{\text{Goals achieved successfully}}{\text{Total number of goals}} \times 100\% \tag{4}$$

Efficiency [161] can be measured in terms of speed and interactiveness using Eq. 5.
$$\text{Efficiency}_{\text{speed}} = \text{StopTime}_{\text{ms}} - \text{StartTime}_{\text{ms}}, \qquad \text{Efficiency}_{\text{interactiveness}} = \text{Count}(\text{Number of steps}) \tag{5}$$

Satisfaction [161] can be measured using Eq. 6, which averages all responses to a post-task questionnaire. Questionnaire responses can be ordinal values, e.g., on a Likert scale (1 = strongly disagree to 5 = strongly agree).
$$\text{Satisfaction} = \frac{\sum_{n=1}^{N} \text{Response}_n}{N} \tag{6}$$

Thoroughness [162], concerning all of the real usability issues of a user recognition scheme, can be measured using Eq. 7. A UEM is expected to determine all possible usability issues with respect to a user recognition scheme.
$$\text{Thoroughness} = \frac{\text{Number of real usability issues identified}}{\text{Number of real usability issues that exist}} \tag{7}$$

Validity [162], asserting the correctness of the UEM results, can be measured using Eq. 8.
$$\text{Validity} = \frac{\text{Number of real usability issues identified}}{\text{Number of all usability issues identified}} \tag{8}$$

Reliability [162], determining the consistency of a UEM regardless of the individual performing the usability evaluation, can be measured using Eq. 9.
$$\text{Reliability} = \frac{\text{Number of usability issues identified by each user}}{\text{Number of usability issues identified by at least one user}} \tag{9}$$
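Applied to a hypothetical evaluation session, Eqs. 4 to 9 reduce to straightforward arithmetic (all values below are illustrative):

```python
def effectiveness(goals_achieved, total_goals):
    return goals_achieved / total_goals * 100            # Eq. 4

def efficiency_speed(start_ms, stop_ms):
    return stop_ms - start_ms                            # Eq. 5 (speed)

def satisfaction(responses):
    return sum(responses) / len(responses)               # Eq. 6 (Likert 1-5)

def issue_ratio(identified, reference):
    # Shared shape of Eqs. 7-9: thoroughness, validity, reliability
    return identified / reference

# Hypothetical session: 9 of 10 goals met, five Likert responses,
# and a UEM that found 8 of the 10 real usability issues.
print(effectiveness(9, 10))           # 90.0
print(satisfaction([4, 5, 3, 4, 5]))  # 4.2
print(issue_ratio(8, 10))             # 0.8 (thoroughness)
```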

During the design phase of a user authentication scheme, UEMs can effectively embody these attributes to indicate overall usability. A relationship between the system architecture and given sets of usability requirements can be derived using Eqs. 4 - 9. This enables both software engineers and usability specialists to evaluate whether the system is ultimately usable. These metrics enable usability specialists to determine which aspects of usability require redress. Subsequently, software engineers can evaluate how these aspects of usability can be fulfilled within the context of the architecture without affecting vital quality attributes, such as security, performance, availability, time and cost. Usability is a significant quality attribute, or non-functional requirement, since in cases where a human-to-things recognition scheme is unusable, users will either compromise the function to make it more usable, or avoid using it completely.

5.4. User recognition scheme readiness

While designing a user authentication scheme, the attributes of security, privacy, and usability are often perceived as orthogonal to each other. Studies have shown that available user recognition schemes struggle to satisfy these three attributes simultaneously [163]. We introduce a dashboard, a 2×2 matrix with usability and privacy status indicators as rows and columns, to interpret a user recognition scheme’s readiness, as illustrated in Fig. 12.

Fig. 12. A dashboard for a user recognition scheme readiness.

The dashboard can be useful when the user recognition scheme is baselined after incorporating a given set of security requirements. A user recognition scheme qualifying for the Top-Right block of the dashboard is usable and privacy-compliant, i.e., ready for deployment. Section 5.2 can be referred to if the scheme qualifies for the Top-Left block, i.e., usable but not privacy-compliant. Section 5.3 can be referred to if the scheme qualifies for the Bottom-Right block, i.e., not usable but privacy-compliant. The scheme is not ready if it only qualifies for the Bottom-Left block, i.e., neither usable nor privacy-compliant.
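The quadrant logic of the dashboard amounts to a simple two-flag lookup; a minimal sketch (the function name and verdict strings are illustrative):

```python
def scheme_readiness(usable: bool, privacy_compliant: bool) -> str:
    """Map the two Fig. 12 status indicators to a readiness verdict."""
    if usable and privacy_compliant:
        return "ready for deployment"             # Top-Right
    if usable:
        return "revisit privacy (Section 5.2)"    # Top-Left
    if privacy_compliant:
        return "revisit usability (Section 5.3)"  # Bottom-Right
    return "not ready"                            # Bottom-Left

assert scheme_readiness(True, True) == "ready for deployment"
```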

6. Open challenges and opportunities

This section presents the limitations of current approaches to designing behavioural biometric-based authentication schemes and outstanding challenges, followed by general prospects and opportunities. It is worth emphasizing that HCI and natural habit-based behavioural biometrics have the power to reshape the human-to-things recognition market in the next few years.

6.1. Challenges and limitations

Given the heterogeneity of behavioural biometric modalities, the limitations and vulnerabilities associated with each modality must be investigated during the conceptualization phase of a behavioural biometric-based user recognition system.

  • Recently, deep generative models (DGMs), such as Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), have been adopted to generate attacks on biometric-based recognition systems, and these represent a significant emerging challenge [164]. A thorough testing strategy for liveness detection, intra-class variance, and the mitigation of common attacks (e.g., malware, mimicry, impersonation, spoofing, replay, statistical, algorithmic, and robotic attacks) [29] must be developed as part of the security analysis.

  • Privacy regulation laws, such as the General Data Protection Regulation (GDPR) [145], the California Consumer Privacy Act (CCPA) [146] and the Health Insurance Portability and Accountability Act (HIPAA) [147], mandate increased responsibility and transparency in using and storing personal data. According to the GDPR, biometric data that allow or confirm the unique identification of an individual are recognized as a special category of personal data under Art. 9 [165]. Consequently, adequate measures (e.g., template protection and template storage location) must be employed for users’ privacy conformance under these laws.

  • Another important aspect that requires addressing concerns the ethical risks in the use of behavioural biometrics [166]. Recording data for behavioural biometric modalities over time could result in the dynamic behavior profiling of a person, which can reveal how the person has behaved in a certain context. This can become more critical when modalities are combined with soft biometrics, such as age, gender, height, weight and ethnicity, since this can generate a more sensitive profile of a person. The creation of sensitive profiles can lead to ethical risks, such as: discrimination, for example excluding a person from certain areas and activities; stigmatization, creating a negative interpretation of a person; and biased and unwanted confrontation through the disclosure of personal information (for example, body signals may indicate a certain disease or cognitive ability of a person).

  • Quality control of the biometric template is a prerequisite before the enrollment or verification/identification step [167]. This can support the correctness, consistency, redundancy and speed of a biometric system to overcome problems arising from the sensors, environment or users themselves.

  • Certain factors such as aging, fatigue, stress, mood, sleep deprivation, injury and disease could inhibit the effectiveness of behavioural biometric modalities. These factors also require a thorough investigation to support the evolution of behavioural biometric-based recognition systems.

  • Bias mitigation should be included early on to avoid negative consequences. Behavioural biometrics datasets are required to include all demographics, covering different age groups, cultural factors and ethnicities, to provide better objectivity. The use of synthetic and diverse datasets can be investigated to devise fairness-driven schemes that mitigate bias factors [168], [169]. Additionally, various architectures and training schemes need to be fully studied to devise bias-invariant systems [170]. Further, standards for behavioural biometrics and benchmarking of sensors must be developed and utilised, from both performance and fairness perspectives.

6.2. Prospects and opportunities

Behavioural biometrics have the potential to deliver secure, transparent, continuous and cost-effective human-to-things recognition solutions for emerging IoT ecosystems. They can offer multi-faceted benefits: 1) behavioural biometric modalities can be collected transparently (non-intrusive) [171]; 2) the availability of a wide range of sensors (e.g., Accelerometer, Gyroscope, Radar, Piezometer, Microphone and Proximity sensors) enable acquisition of behavioural biometric modalities accurately and efficiently; 3) they can be leveraged for designing implicit (frictionless) [139], continuous (active) [33], [42] or risk-based (non-static) recognition systems due to the evolution of embedded Machine Learning engines [172]; 4) they do not add cognitive load on users; 5) they cannot be easily stolen, shared, transferred, conjectured or hacked; and 6) they are, comparatively, less prone to cyber-attacks [124].

Sensors to capture behavioural biometric modalities are advancing rapidly, both in scope and technology. With the emergence of fabrication techniques such as Micro-Electro-Mechanical Systems (MEMS), microminiaturized sensors, actuators, mechanical components and electronics can be integrated into a single chip [173]. STMicroelectronics is one of the leading MEMS manufacturers, providing high-performance sensors with ultra-low power requirements [174]. The RoKiX Sensor Node integrates multiple sensors with a Bluetooth Low Energy (BLE) interface to provide measurements of 3D acceleration, 3D magnetism, 3D rotation, pressure, and temperature [175]. A wide range of touch-screen sensors (such as 5-Wire Resistive, Surface Capacitive, Projected Capacitive (P-Cap), Surface Acoustic Wave (SAW) and Infrared (IR) [176]) are available on the market and can be selected for ATMs, kiosks, vending machines, smart devices or wearables’ screens. High-performance piezoresistive, capacitive or piezoelectric pressure sensors can be miniaturized using silicon fabrication techniques, for example the piezoelectric-based insole sensor of [177]. Time-of-Flight 3D sensors utilise Light Detection and Ranging (LIDAR) to measure distances and sizes, to track motion, and to convert the shape of objects into 3D models [178], [179].

Operating systems such as Android, iOS, and Windows provide SDKs and APIs for interfacing sensors to acquire behavioural biometric modalities [180], [181], [182]. Leading system-on-a-chip (SoC) manufacturers and designers, such as Intel and ARM, provide SoCs supporting machine learning engines [23], AI-embedded chips [183] and NN-powered FPGAs [184] capable of supporting advanced algorithms for sensor data fusion, learning autonomously from existing data, acquiring knowledge for assessments, and making predictions and decisions. Further, IoT platforms such as Google Cloud, IBM Watson, Amazon AWS, and Microsoft Azure support advanced machine learning and Artificial Intelligence algorithms backed by enormous computational power, which can provide the necessary infrastructure to design behavioural biometric-based user recognition systems for a variety of applications. Thus, these advances will continue to deliver further enhanced capabilities for behavioural biometric-based user recognition.

Key market players, particularly, BehavioSec, BioCatch, EZMCOM, NEC Corporation, SecuredTouch have been exploiting behavioural biometrics to design security solutions for financial institutions, businesses, government facilities, e-commerce merchants and online businesses to support security-sensitive applications. The security solutions offered range from prevention of the use of stolen or synthetic identities in applying for credit online to making better fraud decisions. Solutions can be deployed as an extra layer of intelligence to support user recognition in the fight against cyber-crimes.

Behavioural biometrics can offer opportunities to address the security and usability issues that end-users face when using conventional user recognition schemes. Table 13 suggests IoT domains, key applications, and behavioural biometrics that can be exploited for user recognition. Even if they do not replace conventional mechanisms entirely, behavioural biometrics can minimize the burden those mechanisms place on security-sensitive IoT ecosystems [172]. Another benefit of behavioural biometrics is that they can be fused with each other, and with biological biometrics, seamlessly, to build more robust recognition schemes.
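One common way to fuse modalities is score-level fusion, where per-modality match scores are combined into a single decision. The sketch below assumes scores already normalised to [0, 1]; the modality names, weights, and threshold are illustrative, not values prescribed by any of the surveyed schemes:

```python
def fuse_scores(scores, weights):
    """Weighted-sum score-level fusion of per-modality match scores,
    each assumed normalised to [0, 1]."""
    assert set(scores) == set(weights)
    total = sum(weights.values())
    return sum(scores[m] * weights[m] for m in scores) / total

def accept(scores, weights, threshold=0.5):
    """Accept the claimed identity if the fused score clears the threshold."""
    return fuse_scores(scores, weights) >= threshold

scores = {"gait": 0.82, "touch_stroke": 0.64, "voice": 0.71}
weights = {"gait": 0.5, "touch_stroke": 0.2, "voice": 0.3}
fused = fuse_scores(scores, weights)  # 0.82*0.5 + 0.64*0.2 + 0.71*0.3 = 0.751
```

In practice the weights would be learned from validation data, and feature- or decision-level fusion are alternatives with different robustness trade-offs.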

Table 13. IoT domains, key applications and behavioural biometrics usage.

Smart infrastructure: smart homes, smart offices, smart cities, smart grid, waste management, social networking apps
Transportation: smart ticket booking, intelligent access systems, smart parking, driverless taxis
Healthcare: smart hospitals, medical records
Industrial control: smart retail, supply chain management
Security surveillance: perimeter access control, border control, intrusion detection systems

Behavioural biometrics covered: touch-stroke, swipe, touch-signature, hold movements, voice, gait, footstep.

Behavioural biometrics can provide at least two variables, i.e., a person's behaviour and the application context, whereas biological biometrics offers only a single variable, i.e., a person's trait, for designing a verification scheme. For example, a person's fingerprint remains the same whether accessing a car, home, or office, whereas a person's behaviour generates a distinct micro-behaviour for each of the three scenarios. Further, features such as transparency and seamless integration make behavioural biometrics more usable than biological biometrics for designing continuous or implicit verification schemes. Security-sensitive sectors such as smart banking, e-commerce, and finance are already leveraging behavioural biometric-based user recognition mechanisms [171]. Furthermore, HCI-based behavioural biometrics can be applied to curb cyber-abuse and online scams, such as the spread of fake news, the creation of bogus profiles on social media platforms, phishing, and similar illegal activities.
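The car/home/office example suggests a risk-based (non-static) decision rule in which the same behavioural match score is judged against a context-dependent threshold. A minimal sketch, with hypothetical context names and threshold values chosen purely for illustration:

```python
# Hypothetical per-context acceptance thresholds; a deployed scheme
# would derive these from security policy and risk assessment.
CONTEXT_THRESHOLDS = {
    "car": 0.60,     # unlocking a vehicle
    "home": 0.70,    # smart-home entry
    "office": 0.85,  # security-sensitive workplace
}

def verify(behaviour_score, context):
    """Risk-based decision: the same behavioural match score may
    pass in a low-risk context and fail in a high-risk one."""
    return behaviour_score >= CONTEXT_THRESHOLDS[context]

score = 0.75
# verify(score, "car") -> True, verify(score, "office") -> False
```

This is the essence of the risk-based schemes discussed below: the decision boundary moves with the application context rather than staying static as in conventional verification.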

7. Conclusions

Within the overall IoT security spectrum, robust and usable human-to-things recognition schemes are of increasing importance, given the highly prescriptive nature of the conventional (knowledge- or token-based) recognition schemes currently in use. The efficacy of conventional schemes remains limited because they require users to recall something they know or to possess something. As such, user recognition schemes for emerging IoT ecosystems that fulfil both security and usability criteria and comply with privacy laws are in genuine demand.

This article has summarized the state of the art in HCI- and natural habits-based biometrics, namely touch-stroke, swipe, touch-signature, hand-movements, voice, gait, and footstep. Attributes and features of each have been identified and analysed so that they can be best exploited in the design of user-friendly recognition schemes. A discussion of security, privacy, and usability evaluation indicators is also presented, together with the existing challenges and limitations that require attention before behavioural biometric-based recognition schemes can achieve widespread adoption.

Overall, the prospects and market trends cited in this article indicate that behavioural biometrics can provide innovative ways to implement implicit (frictionless), continuous (active) or risk-based (non-static) recognition schemes. With the availability of smart sensors, advanced machine learning algorithms and powerful IoT platforms, behavioural biometrics can replace conventional recognition schemes, thereby reshaping the existing user recognition landscape for IoT ecosystems.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This project has received funding from the European Union’s Horizon 2020 research and innovation programme DS 2018-2019-2020 as part of the E-Corridor project (www.e-corridor.eu) under grant agreement No 883135. Professor Maple would like to acknowledge the support of UKRI through the grants EP/R007195/1 (Academic Centre of Excellence in Cyber Security Research - University of Warwick), EP/N510129/1 (The Alan Turing Institute) and EP/S035362/1 (PETRAS, the National Centre of Excellence for IoT Systems Cybersecurity). This work was also supported, in part, by the Bill & Melinda Gates Foundation [INV- 001309].

Data availability

No data was used for the research described in the article.

References

Sandeep Gupta received his Ph.D. degree in Information & Communication Technology from the University of Trento, Italy, in 2020. He is a recipient of the prestigious Marie Sklodowska-Curie research fellowship. Since 2016, he has participated in the EU H2020 projects Collabs, E-Corridor, NeCS, and CyberSec4Europe. Previously, he worked with Samsung, Accenture, and Mentor Graphics (now Siemens) in information technology domains. His research interests include AI & machine learning, biometrics-based access control schemes, usable security & privacy for IoT, and cyber-physical systems.

Carsten Maple is the Principal Investigator of the NCSC-EPSRC Academic Centre of Excellence in Cyber Security Research at the University and Professor of Cyber Systems Engineering in WMG. He is also a co-investigator of the PETRAS National Centre of Excellence for IoT Systems Cybersecurity where he leads on Transport & Mobility. He is a Fellow of the Alan Turing Institute, the National Institute for Data Science and AI in the UK, where he is a principal investigator on a $5 million project developing trustworthy national identity to enable financial inclusion. Carsten has an international research reputation and extensive experience of institutional strategy development and interacting with external agencies. He has published over 250 peer-reviewed papers and is co-author of the UK Security Breach Investigations Report 2010, supported by the Serious Organised Crime Agency and the Police Central e-crime Unit. Carsten is also co-author of Cyberstalking in the UK, a report supported by the Crown Prosecution Service and Network for Surviving Stalking. His research has attracted millions of pounds in funding and has been widely reported through the media. He has given evidence to government committees on a variety of issues concerning safety, security, privacy and identity.

Bruno Crispo received his Ph.D. degree in computer science from the University of Cambridge, UK, in 1999, having received an M.Sc. degree in computer science from the University of Turin, Italy, in 1993. He has been a full professor at the University of Trento since September 2005; before that, he was an associate professor at Vrije Universiteit Amsterdam. He has been co-editor of the Security Protocols International Workshop proceedings since 1997 and is a member of the ACM. His main interests span the field of security and privacy. In particular, his recent work focuses on security protocols, access control in very large distributed systems, distributed policy enforcement, embedded devices, smartphone security and privacy, and privacy-breaching malware detection. He has published more than 200 papers in international journals and conferences on security-related topics.

Kiran Raja obtained his Ph.D. in Computer Science from the Norwegian University of Science and Technology (NTNU), Norway, in 2016. He is a faculty member at the Department of Computer Science at NTNU, Norway. His main research interests include statistical pattern recognition, image processing, and machine learning, with applications to biometrics, security, and privacy protection. He has participated in the EU projects SOTAMD and iMARS, as well as other national projects. He has authored several papers in his field of interest and serves as a reviewer for a number of journals and conferences. He is a member of the EAB and chairs the Academic Special Interest Group at the EAB.

Artsiom Yautsiukhin received his Ph.D. degree in Information and Communication Technology from the University of Trento in 2009. He is a researcher in the information security group at the Institute of Informatics and Telematics (IIT) of the National Research Council (CNR), Pisa, Italy. His main research interests are IT security metrics and security evaluation. He has participated in several European projects: SERENITY, SENSORIA, ANIKETOS, NESSOS, SESAMO, and CAMINO.

Fabio Martinelli (M.Sc. 1995, Ph.D. 1999) is a research director at the Istituto di Informatica e Telematica, Consiglio Nazionale delle Ricerche, Italy. He chaired the ERCIM WG on security and trust management and is involved in the steering committees of several international conferences and workshops. He was co-chair of the Italian technology platform for homeland security (SERIT) and of WG3 on Research and Innovation of the Network and Information Security (NIS) Platform promoted by the European Commission; in this role he co-edited the first strategic research agenda on cyber security at the European level. He is actively involved in European and Italian projects on information and communication security. He has co-authored more than four hundred papers in international journals and conference/workshop proceedings. His main research interests involve security and privacy in distributed and mobile systems and the foundations of security and trust.
