外文资料原文及翻译
基于工业机械手和视觉系统的机器人磨削站
A Robotic grinding station based on an industrial manipulator and vision system
学院名称: 智能工程学院
专 业: 机器人工程
班 级: 机器人工程213
学生姓名: 刁梦梦
学 号: 2108281321
指导教师姓名: 李静
指导教师职称: 讲师
2025年3月6日
原文
A Robotic grinding station based on an industrial manipulator and vision system
基于工业机械手和视觉系统的机器人磨削站
Guoyang Wan*, Guofeng Wang, Yunsheng Fan
万国阳*, 王国峰, 范云生
Department of Marine Electrical Engineering, Dalian Maritime University, Dalian, China
大连海事大学船舶电气工程系,中国 大连
ABSTRACT
摘要
Due to ever-increasing precision and automation demands in robotic grinding, the automatic and robust robotic grinding workstation has become a research hotspot. This work proposes a grinding workstation consisting of machine vision and an industrial manipulator to solve the difficulty of positioning rough metal cast objects and automatic grinding. Faced with the complex characteristics of the industrial environment, such as weak contrast and nonuniform, insufficient lighting, a coarse-to-fine two-step localization strategy was used for obtaining the object position. The deep neural network and template matching method were employed for determining the object position precisely in the presence of ambient light. Subsequently, edge extraction and contour fitting techniques were used to measure the position of the contour of the object and to locate the main burr on its surface after eliminating the influence of the burr. The grid method was employed for detecting the main burrs, and the offline grinding trajectory of the industrial manipulator was planned with the guidance of the coordinate transformation method. The system greatly improves automaticity through the entire process of loading, grinding and unloading. It can determine the object position and plan the robotic grinding trajectory according to the shape of the burr on the surface of an object. The measurements indicate that this system can work stably and efficiently, and the experimental results demonstrate the high accuracy and high efficiency of the proposed method. Meanwhile, it can well overcome the influence of the material, scratches and rust of the ground workpieces.
由于机器人磨削对精度和自动化的要求不断提高,自动化且鲁棒的机器人磨削工作站已成为研究热点。本文提出了一种由机器视觉和工业机械手构成的打磨工作站,以解决粗糙金属铸件的定位和自动打磨难题。面对工业环境中对比度弱、光照不均匀且不足等复杂特征,采用由粗到细的两步定位策略获取目标位置。采用深度神经网络和模板匹配方法,在环境光影响下精确确定物体位置。随后,采用边缘提取和轮廓拟合技术,在消除毛刺影响后测量物体轮廓的位置,并定位其表面的主要毛刺。采用网格法检测主要毛刺,并在坐标变换法的指导下规划工业机械手的离线磨削轨迹。该系统大大提高了上料、打磨、下料全过程的自动化程度,可以确定物体位置,并根据物体表面毛刺的形状规划机器人磨削轨迹。测量表明该系统能够稳定、高效地工作,实验结果证明了所提方法的高精度和高效率。同时,该方法能够很好地克服磨削工件材质、划痕和锈蚀的影响。
Ⅰ. INTRODUCTION
Ⅰ. 引言
Grinding of metal casts has always been a difficult task in industrial applications as it causes a lot of pollution, consumes a large amount of energy, and poses a high risk in processing. With the application of an industrial manipulator grinding station to the grinding process, this situation has been significantly improved. The method of grinding using an industrial manipulator makes full use of the flexibility, high speed and simple programming of the robot [1]. As one of the main sensing technologies of robots, machine vision is popular because of its noncontact nature and ability to collect a large amount of information [2]. Vision-based grinding stations can realize automated grinding, reduce labor costs, improve the consistency of the ground objects and optimize and shorten production cycles [3, 4].
金属铸件的磨削在工业应用中一直是一项艰巨的任务,因为它会产生大量污染,消耗大量能源,并且在加工过程中具有很高的风险。随着工业机械手磨削站在磨削过程中的应用,这种状况得到了显著改善。使用工业机械手的磨削方法充分利用了机器人的灵活性、高速性和编程简单的特点 [1]。机器视觉作为机器人的主要传感技术之一,因其非接触性和收集大量信息的能力而受到广泛欢迎 [2]。基于视觉的磨削站可以实现自动化磨削,降低人工成本,提高被磨物体的一致性,优化和缩短生产周期 [3, 4]。
However, there are still problems when a manipulator is used to grind metal casts: (1) The feeding process is difficult: the loading of the object is carried out by the operator and fixed to the specific fixture. Following this, the manipulator completes the task of grasping the object. However, the casts are irregular in shape, and the weight of some metal castings is above 20 kg. Thus, the manipulator operation consumes time, and manual operation is not safe; (2) Grasp positioning is difficult: it is necessary to position and grasp the same region on the object’s surface in the same posture each time. Hence, designers need to design special fixtures for each product. When the production line needs to switch to objects of different shapes and volumes, the fixture must be redesigned. This not only consumes labor but also increases the hardware expenditure [5]; (3) The grinding trajectory of the manipulator and the special equipment are complicated [6]: the shape and position of burrs are not certain. It is a common practice to polish the location where burrs may appear on the surface of the object during the grinding process. This increases the working time and workload; (4) Manipulator grinding relies on human teaching: this method is time-consuming and labor-intensive and in order to ensure accuracy, the worker must be close to the equipment while teaching the trajectory. In the process of grinding and debugging, dust and other harmful substances are easily generated, which are harmful to the health of the worker.
但是,使用机械手磨削金属铸件时仍然存在问题:(1)进料过程困难:物体的加载由操作员进行并固定在特定的夹具上。在此之后,操纵器完成抓取对象的任务。但是,铸件的形状不规则,一些金属铸件的重量在 20 公斤以上。因此,机械手操作耗时,人工操作不安全;(2) 抓取定位困难:每次都需要以相同的姿势定位和抓取物体表面的同一区域。因此,设计师需要为每个产品设计特殊的夹具。当生产线需要切换到不同形状和体积的物体时,必须重新设计夹具。这不仅消耗劳动力,还增加了硬件支出 [5];(3)机械手和专用设备的磨削轨迹复杂[6]:毛刺的形状和位置不确定。在研磨过程中,通常会对物体表面可能出现毛刺的位置进行抛光。这增加了工作时间和工作量;(4) 机械手磨削依赖人工示教:这种方法既费时又费力,为了保证精度,工人在示教轨迹时必须靠近设备。在打磨和调试过程中,容易产生灰尘和其他有害物质,对工人的健康有害。
To solve the above problems, several studies have been undertaken using industrial manipulators for grinding metal castings. Park [7] proposed a method for industrial manipulators to cut big casting objects; however, it is difficult to adapt when another object needs to be worked on. The method relies on manual loading and unloading, and the processing trajectory relies entirely on manual teaching. In another study, Gaz [8] proposed a system that uses collaboration between robots to polish an object with a human operator working in the same area. The system is safe and flexible; however, the grinding trajectory needs to be taught by a human. There are also many polishing systems that combine a force sensor and an industrial manipulator [9, 10], which can improve the grinding quality and reduce the system working time. But these systems still need substantial involvement of humans during operation. Visual technology is also often used in robotic grinding. An automatic robotic grinding system based on reverse engineering has been realized [11]; however, its trajectory is based only on the surface shape of the product, and the burr on the object is not considered. Pandiyan [12] proposed a method based on deep learning techniques for the detection of the weld seam and its removal; however, this method has a high rate of misclassification between the weld seam states and the background.
为了解决上述问题,已有大量研究使用工业机械手对金属铸件进行磨削。Park [7] 提出了一种工业机械手切割大型铸造物体的方法;但是,当需要加工另一种物体时,该方法难以适应。该方法依赖于手动上下料,加工轨迹完全依赖于人工示教。在另一项研究中,Gaz [8] 提出了一种系统,利用机器人之间的协作,在人类操作员于同一区域工作的情况下对物体进行抛光。该系统安全灵活,但磨削轨迹仍需人工示教。还有许多结合力传感器和工业机械手的抛光系统 [9, 10],可以提高磨削质量,减少系统工作时间,但这些系统在运行过程中仍然需要人工的大量参与。视觉技术也经常用于机器人磨削。基于逆向工程的自动机器人磨削系统已经实现 [11];但其轨迹仅基于产品的表面形状,未考虑物体上的毛刺。Pandiyan [12] 提出了一种基于深度学习技术的焊缝检测与去除方法;然而,该方法在焊缝状态和背景之间的误分类率很高。
To address the above issues, we propose an automatic grinding workstation based on an industrial manipulator and vision system which can be used for processing rough metal casts in this study. The system can robustly complete the positioning and grasping of workpieces placed on the pallet without manual intervention and can plan the robot grinding trajectory according to the shape of the workpiece burr to achieve the automatic operation of the grinding process. A two-step localization strategy has been proposed to obtain the position of the object in the pallet. This strategy combines the excellent generalization performance of the deep neural network with the high positioning accuracy of template matching, and it can obtain the position of a blank cast object accurately even in the presence of interference. Following this, we propose a grid method to detect the burr position and plan the grinding trajectory of the industrial manipulator offline. The system can stably and quickly position the objects placed at regular intervals on the pallet. The object's grinding trajectory can be generated offline. Our system also has an excellent price/performance ratio.
为了解决上述问题,我们提出了一种基于工业机械手和视觉系统的自动磨削工作站,可用于加工粗糙的金属铸件。该系统无需人工干预即可稳健地完成对放置在托盘上的工件的定位和抓取,并可根据工件毛刺的形状规划机器人磨削轨迹,实现磨削过程的自动化运行。我们提出了一种两步定位策略来获取物体在托盘中的位置。该策略兼具深度神经网络泛化性能好和模板匹配定位精度高的优点,即使存在干扰也能准确获取毛坯铸件的位置。在此基础上,我们提出了一种网格方法来检测毛刺位置,并离线规划工业机械手的磨削轨迹。该系统可以稳定、快速地定位托盘上以规则间隔放置的物体,物体的磨削轨迹可以离线生成。我们的系统还具有出色的性价比。
The main contribution of this paper is summarized as follows.
本文的主要贡献总结如下。
1. An automatic grinding workstation for rough casts is designed. The workstation can plan the grinding path according to the main burrs on the workpiece surface.
1. 设计了用于粗铸件的自动磨削工作站。工作站可以根据工件表面的主要毛刺规划磨削路径。
2. A strategy for visual detection and positioning of rough casts has been proposed to solve the problem that the vision system faces a complicated background and cannot use a large light source in some industrial areas. This strategy includes an improved yolov3 detector, i.e., Den-yolov3, and a modified template matching method. This approach offers favorable performance and fast speed for the identification of industrial objects.
2. 为解决视觉系统背景复杂、在某些工业区域无法使用大尺寸光源的问题,提出了一种针对粗铸件的视觉检测与定位策略。该策略包括一个改进的 yolov3 检测器(即 Den-yolov3)和一种改进的模板匹配方法。该方法在工业物体识别中兼具良好的性能和较快的速度。
3. A grinding trajectory method for industrial robot based on the grid method is proposed. The automatic grinding path planning of industrial robot based on the vision system is realized.
3. 提出了一种基于网格法的工业机器人磨削轨迹方法。实现了基于视觉系统的工业机器人自动磨削路径规划。
Methodology
方法论
Hardware architecture and architecture of the proposed method
所提方法的硬件架构和架构
The grinding station setup is composed of loading cell, grinding cell, unloading cell:
研磨站设置由上料单元、研磨单元、下料单元组成:
Loading cell: A 2D industrial camera and light are installed in the 6th axis flange of the loading industrial manipulator. The system realizes the positioning of the workpiece through the vision system, and the industrial manipulator completes the grasping operation.
加载单元:2D 工业相机和灯安装在加载工业机械手的第 6 轴法兰中。系统通过视觉系统实现工件的定位,由工业机械手完成抓取操作。
Grinding cell: It consists of two industrial manipulators and two grinding machines. They work together to finish the grinding operation.
磨削单元:它由两台工业机械手和两台磨床组成。他们共同完成磨削操作。
Unloading cell: A manipulator is used to realize the unloading and palletizing work.
卸料单元:采用机械手实现卸料和码垛工作。
The workstation is controlled by a PLC control system. Figs 1 and 2 show the system architecture and workstation layout, respectively
工作站由 PLC 控制系统控制。图 1 和图 2 分别显示了系统架构和工作站布局
The object that is to be processed by the workstation is an iron casting. The objects are uniformly placed on a pallet, approximately 18 objects per pallet, arranged in six layers with each layer separated by a partition. The size of the object to be ground is 290 mm × 155 mm, and its weight is 7 kg. The grinding requirement is that there is no burr around the object's edges. The object to be ground is shown in Fig 4. During processing, the staff only needs to place the pallet filled with rough objects in the loading area. The workstation will finish the grinding of the rough objects without manual intervention and place the processed objects on the unloading pallet. The grinding error of this method does not exceed ±0.5 mm.
工作站要处理的物体是铸铁件。物体均匀地放置在托盘上,每个托盘约 18 个物体,分六层摆放,层间以隔板隔开。被研磨物体的尺寸为 290 mm × 155 mm,重量为 7 kg。研磨要求是物体边缘周围没有毛刺。要研磨的物体如图 4 所示。在加工过程中,工作人员只需将装满粗糙物体的托盘放在上料区即可。工作站无需人工干预即可完成粗糙物体的研磨,并将加工后的物体放入下料托盘中。该方法的磨削误差不超过 ±0.5 mm。
To accurately and efficiently implement object grinding, combined with the hardware system, we designed a workstation automatic grinding strategy, which mainly consists of two parts: visual positioning and visual inspection (see Fig 3).
为了准确高效地实现物体打磨,结合硬件系统,我们设计了工作站自动打磨策略,主要由视觉定位和视觉检查两部分组成(见图 3)。
Theoretical basis
理论基础
Visual localization. The visual processing of the grinding workstation mainly includes two steps: visual localization during the loading process and burr detection during the grinding process.
视觉定位。 打磨工作站的视觉处理主要包括两个步骤:上料过程中的视觉定位和打磨过程中的毛刺检测。
In this paper, the object has burrs around its edges and the background has the same type of objects near the object that is to be grasped. The most difficult problem that needs to be solved is how to locate an object precisely when it does not have a clear contour and is in a cluttered background (see Fig 4).
在本文中,物体的边缘有毛刺,背景在要抓取的物体附近有相同类型的物体。需要解决的最困难的问题是,当物体没有清晰的轮廓并且处于杂乱的背景中时,如何精确定位它(见图 4)。
A 2D vision system is used to obtain the precise position of the object. In 2D visual positioning techniques, template matching [13] is a commonly used method for positioning. It uses a representative part of the image of the object to be identified as a "model" and uses this "model" to find the object to be identified in the searched image [14]. There are four main reasons why the template matching method alone cannot obtain the object's position precisely in an industrial environment:
2D 视觉系统用于获取物体的精确位置。在 2D 视觉定位技术中,模板匹配 [13] 是一种常用的定位方法。它使用待识别物体图像中具有代表性的部分作为“模型”,并使用该“模型”在搜索图像中查找待识别的物体 [14]。仅用模板匹配方法难以在工业环境中精确获取物体位置,主要有以下四个原因:
1. In the open industrial environment, the image of an object being captured is easily influenced by ambient light.
1. 在开放的工业环境中,所采集的物体图像很容易受到环境光的影响。
2. The surface of the rough metal casts which is used for image processing has scratches and stains.
2. 用于图像处理的粗糙金属铸件表面有划痕和污渍。
3. The burr on the surface of the rough metal cast affects the accuracy of the positioning algorithm.
3. 毛坯金属铸件表面的毛刺影响定位算法的准确性。
4. The rounded transition area of the object surface makes the object contour unstable during the imaging process.
4. 物体表面的圆形过渡区使物体轮廓在成像过程中不稳定。
In this paper, we combined the latest deep learning and classic template matching technology to propose a coarse-to-fine visual detection and positioning method. In the coarse positioning stage, we use the deep neural network to detect the approximate position of the workpiece. On this basis, we use the template matching method to precisely locate the local features of the object, thereby achieving the precise positioning of the rough castings with minimal external interference.
Coarse step. We use deep learning methods to make complete use of the features in the detected target. This makes our detection results more robust than a single template matching method.
在本文中,我们将最新的深度学习和经典的模板匹配技术相结合,提出了一种从粗到细的视觉检测和定位方法。在粗定位阶段,我们使用深度神经网络来检测工件的大致位置。在此基础上,我们采用模板匹配方法精确定位物体的局部特征,从而在最小外部干扰下实现毛坯铸件的精确定位。
粗定位步骤。我们使用深度学习方法来充分利用检测目标中的特征,这使得我们的检测结果比单一的模板匹配方法更加鲁棒。
Deep learning has gradually become the mainstream in target detection after 2012 [15]. At present, many efficient target detection networks have been proposed and applied in the industrial field, such as yolo [16–19], Faster R-CNN [20], and NAS-FPN [21]. The yolo algorithm was proposed by Redmon et al. After two years of development, it has grown from yolo to yolov3. The yolov3 algorithm extracts features based on a regression method. It is an end-to-end training process that directly regresses categories and bounding boxes on the feature layer, saving a lot of time otherwise wasted in extracting candidate boxes.
2012 年后,深度学习逐渐成为目标检测的主流 [15]。目前,许多高效的目标检测网络已被提出并在工业领域得到应用,如 yolo [16–19]、Faster R-CNN [20] 和 NAS-FPN [21]。yolo 算法由 Redmon 等人提出,经过两年的发展,已从 yolo 发展到 yolov3。yolov3 算法基于回归方法提取特征,是一个端到端的训练过程,直接在特征层上回归类别和边界框,节省了大量提取候选框的时间。
In this paper, we propose an improved yolov3 network for rough positioning of targets. Standard yolov3 uses the Darknet53 network based on the residual network structure for feature extraction and generates three feature maps of 13×13, 26×26 and 52×52, respectively (see Fig 5).
在本文中,我们提出了一种改进的 yolov3 网络,用于目标的粗略定位。标准 yolov3 使用基于残差网络结构的 Darknet53 网络进行特征提取,分别生成 13×13、26×26 和 52×52 三个特征图(见图 5)。
The three size feature maps correspond to large, medium, and small target detection in the picture.
三种尺寸的特征图分别对应图中的大、中、小目标检测。
Our work is mainly focused on the following areas:
我们的工作主要集中在以下几个方面:
1. Set up an industrial object image dataset: Through image acquisition and image enhancement, an industrial image dataset containing the object in this study and other industrial objects is established. Image enhancement is an important method to improve the performance of network recognition. Image rotation, adjustment of image brightness, image blur, and image noise are used for data enhancement in this study; an object in the dataset is shown in Fig 6. A sketch of these augmentations follows.
1. 建立工业物体图像数据集:通过图像采集和图像增强的方法,建立一个包含本研究物体和其他工业物体的工业图像数据集。图像增强是提高网络识别性能的重要方法。本研究采用图像旋转、图像亮度调整、图像模糊和图像噪声进行数据增强;数据集中的一个物体如图 6 所示。下面给出这些增强操作的示例。
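The four augmentations named above map directly onto basic OpenCV operations. The following is a minimal sketch, assuming 8-bit input images; the rotation angle, gain/offset, kernel size, and noise level are illustrative values, not the paper's settings.
上述四种增强方式可直接对应 OpenCV 的基本操作。下面是一个最小示例(假设输入为 8 位图像;旋转角度、增益/偏移、核尺寸与噪声强度均为示意值,并非论文的设置)。

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Generate augmented variants of one training image: rotation,
// brightness adjustment, Gaussian blur, and additive Gaussian noise.
std::vector<cv::Mat> augment(const cv::Mat& img) {
    std::vector<cv::Mat> out;

    // Rotation about the image center (here: +15 degrees, illustrative).
    cv::Point2f center(img.cols / 2.0f, img.rows / 2.0f);
    cv::Mat rot = cv::getRotationMatrix2D(center, 15.0, 1.0);
    cv::Mat rotated;
    cv::warpAffine(img, rotated, rot, img.size());
    out.push_back(rotated);

    // Brightness adjustment: scale by 1.2 and add an offset of 20.
    cv::Mat bright;
    img.convertTo(bright, -1, 1.2, 20);
    out.push_back(bright);

    // Gaussian blur with a 5x5 kernel.
    cv::Mat blurred;
    cv::GaussianBlur(img, blurred, cv::Size(5, 5), 0);
    out.push_back(blurred);

    // Additive Gaussian noise (mean 0, sigma 10).
    cv::Mat noise(img.size(), img.type());
    cv::randn(noise, 0, 10);
    out.push_back(img + noise);

    return out;
}
```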
2. Change the yolov3 network structure: 1) The DarkNet network was changed to the DenseNet network, and a parallel network composed of convolution kernels of different scales was added to the bottom part of the network, thereby enabling the extraction of richer features in the image. 2) The standard yolov3 detector uses three sizes of feature maps to strengthen the detection of small targets. Since the measured object is not small in the image, the feature map for small targets is removed in this method. 3) The standard yolov3 network uses a plain connection structure in this part; we introduce the residual structure instead. The experiments show that the network that introduces the residual structure has better recognition ability. The improved yolov3 is named Den-yolov3 and shown in Fig 7.
2. 改变 yolov3 网络结构:1) 将 DarkNet 网络改为 DenseNet 网络,并在网络底部增加由不同尺度的卷积核组成的并行网络,从而能够提取图像中更丰富的特征。2) 标准 yolov3 检测器使用三种尺寸的特征图来加强对小目标的检测。由于被测物体在图像中并不小,该方法去除了小目标的特征图。3) 标准 yolov3 网络在此部分使用普通连接结构,我们则引入了残差结构。实验表明,引入残差结构的网络具有更好的识别能力。改进的 yolov3 被命名为 Den-yolov3,如图 7 所示。
3. Using multiple local features for the object position: To reduce the localization failure caused by recognition errors, we consider multiple local features of the object surface at the same time and infer the position of the target object according to the detection results of the multiple local features.
3. 利用多个局部特征确定物体位置:为了减少识别错误导致的定位失败,我们同时考虑物体表面的多个局部特征,并根据多个局部特征的检测结果推断目标物体的位置。
Two region features, namely region I and region II, are selected to detect the object in this study (see Fig 4). When the detection is completed, the system calculates the positional relationship between the features in region I and region II to further improve the success rate of detection. Den-yolov3 can find the positions of all objects in the image, so when the detection is completed, we need to count the position information of the detected regions I and II in the image and select the object to be grasped, as sketched below.
在本研究中,选择了两个区域特征,即区域 I 和区域 II 来检测物体(见图 4)。检测完成后,系统计算区域 I 和区域 II 中特征的位置关系,进一步提高检测成功率。Den-yolov3 可以找到图像中所有物体的位置,因此当检测完成时,我们需要统计图像中检测到的区域 I 和区域 II 的位置信息,并选择要抓取的物体,如下方示例所示。
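The positional-relationship check between region I and region II can be sketched as follows; the expected center offset and the pixel tolerance are hypothetical parameters that would be measured from a reference image of the workpiece, not values given in the paper.
区域 I 与区域 II 之间的位置关系校验可以示意如下;期望的中心偏移量和像素容差均为假设参数,需从工件的参考图像中测得,并非论文给出的数值。

```cpp
#include <opencv2/core.hpp>
#include <cmath>
#include <optional>
#include <utility>
#include <vector>

struct Detection { cv::Rect2f box; float score; };

// Pick the object whose region-I and region-II detections have the
// expected relative placement; expectedOffset is the reference vector
// from the region-I center to the region-II center (measured offline).
std::optional<std::pair<Detection, Detection>> selectObject(
    const std::vector<Detection>& regionI,
    const std::vector<Detection>& regionII,
    cv::Point2f expectedOffset,   // region II center - region I center
    float tolerancePx)            // allowed deviation in pixels
{
    for (const auto& a : regionI) {
        cv::Point2f ca = (a.box.tl() + a.box.br()) * 0.5f;
        for (const auto& b : regionII) {
            cv::Point2f cb = (b.box.tl() + b.box.br()) * 0.5f;
            cv::Point2f d = cb - ca - expectedOffset;
            if (std::hypot(d.x, d.y) < tolerancePx)
                return std::make_pair(a, b);  // consistent pair found
        }
    }
    return std::nullopt;  // no geometrically consistent pair
}
```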
Fine step. After coarse positioning, the approximate location of the object is obtained. We can then determine the approximate location of the circular hole area on the object surface, and localization can be performed in local areas of the picture. Local positioning can improve the robustness of matching and reduce the workload of matching. The Line-MOD method [22, 23] is a contour feature-based template matching method and is one of the most convenient ways to locate objects. Line-2D is a part of Line-MOD that uses only the gradient information of an image. Its excellent and stable characteristics have attracted wide attention from the industry.
精定位步骤。粗定位后,得到物体的大致位置。然后,我们可以确定物体表面圆孔区域的大致位置,在图像的局部区域执行定位。局部定位可以提高匹配的鲁棒性,减少匹配的工作量。Line-MOD 方法 [22, 23] 是一种基于轮廓特征的模板匹配方法,是定位物体最方便的方法之一。Line-2D 是 Line-MOD 的一部分,仅使用图像的梯度信息。其优良稳定的特性引起了业界的广泛关注。
We propose the modified Line-2D method, namely, geometric-multilevel-Line-2D (GMLINE-2D), to obtain the object position precisely. The GMLINE-2D method combines a geometric model, multilevel matching, and a fast ICP registration method into the Line-2D method. It can obtain the object position quickly and robustly. The main steps of the GMLINE-2D method are as follows:
我们提出了改进的 Line-2D 方法,即 geometric-multilevel-Line-2D(GMLINE-2D),以精确获取物体位置。GMLINE-2D 方法将几何模型、多级匹配和快速 ICP 配准方法结合到 Line-2D 方法中,可以快速、稳健地获取物体位置。GMLINE-2D 方法的主要步骤如下:
The geometric model includes the main contour information of the object's feature, and it is free of influence from scratches or stains on the object surface. Thus, this geometric model is robust for template matching. However, because of the height of the object and the influence of ambient light, the object feature in the image shows shape deformation when the object is at a different place in the visual field. To accurately determine the object position, we create multiple models with different shapes to do the template matching and then select the best result.
几何模型包含物体特征的主要轮廓信息,且不受物体表面划痕或污渍的影响。因此,该几何模型对于模板匹配非常鲁棒。然而,由于物体的高度和环境光的影响,当物体位于视野中的不同位置时,图像中的物体特征会出现形状变形。为了准确确定物体的位置,我们创建了形状不同的多个模型来进行模板匹配,然后选择最佳结果。
The model map consists of a precomputed response map and multi-scale models; the precomputed response map uses the same step of the Line-MOD method [24]. By changing the length and width values of the geometric contour model, several new geometric models can be used to create and generate the model map that consists of multi-scale models (see Fig 8).
模型图由预计算响应图和多尺度模型组成;预计算响应图采用与 Line-MOD 方法相同的步骤 [24]。通过改变几何轮廓模型的长度和宽度值,可以利用若干新的几何模型来创建并生成由多尺度模型组成的模型图(见图 8)。
The second step is to use a multilevel matching method to obtain the object position. Multilevel matching is a method to improve the speed of template matching. It uses the image pyramid method to build multilevel models. The process is as follows:
第二步,使用多级匹配方法获取对象位置。多级匹配是一种提高模板匹配速度的方法。它使用 image pyramid 方法来构建多级模型。该过程如下:
Rotate the model m in steps of n degrees to generate the model set M1 = (m, …, m′). The gradient intensity and angle of the image are calculated to obtain the gradient image G1; M1 is then used to match G1.
将模型 m 以 n 度为步长旋转,生成模型集 M1 = (m, …, m′)。计算图像的梯度强度和角度,得到梯度图像 G1,然后用 M1 匹配 G1。
The similarity score is used to measure the similarity between the model and the image to be matched. Generally, the higher the score, the higher the similarity between the model and the input image. In this paper, if the similarity score of the match is greater than 60, the match is successful. We use (4) to calculate the similarity score [24].
相似度分数用于衡量模型与待匹配图像之间的相似度。通常,分数越高,模型与输入图像之间的相似度越高。在本文中,如果匹配的相似度分数大于 60,则匹配成功。我们使用式 (4) 计算相似度分数 [24]。
Among them, ori(O, r) is the gradient orientation at position r in the model O, I is the image to be matched, and P is the set of positions r in O at which the score is calculated.
其中,ori(O, r) 是模型 O 中位置 r 处的梯度方向,I 是要匹配的图像,P 是模型 O 中参与计算的位置 r 的集合。
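Equation (4) itself did not survive in this copy. Based on the gradient-orientation similarity defined in [24], from which the text says the score is taken, it plausibly has the form (omitting the orientation-spreading max used there for robustness):
式 (4) 在本文档中缺失。根据 [24] 中定义的梯度方向相似度(文中的分数即取自该文),其形式大致为(省略了其中用于增强鲁棒性的方向扩散 max 运算):

$$ \varepsilon(I, T, c) = \sum_{r \in P} \left| \cos\big(\mathrm{ori}(O, r) - \mathrm{ori}(I, c + r)\big) \right| $$

Each template position r contributes the absolute cosine of the difference between the model gradient orientation and the image gradient orientation at the shifted position c + r, making the score invariant to gradient polarity.
模板中的每个位置 r 贡献模型梯度方向与图像在平移位置 c + r 处梯度方向之差的余弦绝对值,因此该分数对梯度极性不敏感。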
We use the Gaussian pyramid to downsample G1 and the model to get G2 and m2; the template m2 is rotated in steps of 2n degrees and the model set M2 is generated. Further, the LINE-2D method is used to match M2 with G2.
我们使用高斯金字塔对 G1 和模型进行降采样,得到 G2 和 m2;模板 m2 以 2n 度为步长旋转,生成模型集 M2。然后,使用 LINE-2D 方法将 M2 与 G2 匹配。
If the matching score in the previous step is greater than 60, the previous step is repeated.
如果上一步的匹配分数大于 60,则重复上一步。
Through the above steps, we can get the multilevel model of the image. Although the multilevel model is complex in the template establishment process, it can greatly improve the matching speed and stability in the matching process. In practice, four levels of downsampling are used; a simplified sketch of the pyramid construction follows.
通过以上步骤,我们可以得到图像的多级模型。虽然多级模型在模板建立过程中比较复杂,但可以大大提高匹配过程中的匹配速度和稳定性。实际中可进行四级降采样;下面给出金字塔构建的简化示例。
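A simplified stand-in for the pyramid construction described above, using OpenCV; real Line-2D matching compares quantized gradient orientations rather than raw pixel templates, so this sketch only illustrates the level and rotation-step bookkeeping.
下面用 OpenCV 给出上述金字塔构建的简化替代示例;真实的 Line-2D 匹配比较的是量化后的梯度方向而非原始像素模板,本示例仅演示层级与旋转步长的组织方式。

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Build the multilevel model described above: at each level the gradient
// image and the model are downsampled with the Gaussian pyramid, and the
// rotation step of the model set doubles (n, 2n, 4n, ...).
struct Level {
    cv::Mat image;                 // gradient image G_i at this level
    std::vector<cv::Mat> models;   // rotated model set M_i
};

std::vector<Level> buildPyramid(const cv::Mat& gradient, const cv::Mat& model,
                                int levels, double baseStepDeg) {
    std::vector<Level> pyr;
    cv::Mat g = gradient.clone(), m = model.clone();
    double step = baseStepDeg;     // rotation step n at the finest level
    for (int i = 0; i < levels; ++i) {
        Level lvl;
        lvl.image = g;
        cv::Point2f c(m.cols / 2.0f, m.rows / 2.0f);
        for (double a = 0.0; a < 360.0; a += step) {
            cv::Mat r = cv::getRotationMatrix2D(c, a, 1.0), rotated;
            cv::warpAffine(m, rotated, r, m.size());
            lvl.models.push_back(rotated);
        }
        pyr.push_back(lvl);
        cv::Mat gNext, mNext;      // avoid in-place pyrDown
        cv::pyrDown(g, gNext);
        cv::pyrDown(m, mNext);
        g = gNext;
        m = mNext;
        step *= 2.0;               // coarser level -> coarser rotation step
    }
    return pyr;
}
```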
When we begin multilevel matching, the gradient intensity of the matched image needs to be obtained, and the kernel k is used to filter the image; then the Gaussian pyramid is used to generate all levels of the image to be matched. Matching proceeds with the corresponding template set from the highest level downwards (see Fig 9).
当我们开始多级匹配时,需要获得待匹配图像的梯度强度,用核 k 对图像进行滤波,然后用高斯金字塔生成待匹配图像的所有层级。匹配从最高层级开始,与相应的模板集进行匹配(见图 9)。
The last step is to use the fast ICP algorithm to refine the template matching result by minimizing its difference from the contour of the object in the matched image.
最后一步,使用快速 ICP 算法,通过最小化模板匹配结果与匹配图像中物体轮廓之间的差异来细化匹配结果。
Visual inspection.
目视检查。
In this work, an offline vision-based robotic trajectory generation system has been proposed. By using the visual technique, the offline trajectory planning of the industrial manipulator is performed according to the shape of the burr in each object to be ground.
在这项工作中,提出了一种基于离线视觉的机器人轨迹生成系统。利用视觉技术,根据每个待研磨物体的毛刺形状,对工业机械手进行离线轨迹规划。
Detecting the position of the main burrs. There are two kinds of burrs on the object used in the present work: sprue gate burrs, generated by demolding, and sprue line burrs, generated by flashing along the sprue line during the casting process.
检测主要毛刺的位置。本工作中使用的物体有两种毛刺:一种是浇口毛刺,由脱模产生;另一种是浇口线毛刺,由铸造过程中浇口线处的飞边产生。
Burr position analysis. Different castings may have sprue gates located at different positions. However, they are located only in specific areas on the object's surface, and thus, the burr of the sprue gate is located only at certain specific locations on the object. On the other hand, a flash burr is caused by the aging of the mold, etc., and it may appear on the mold sprue line of the object. Sprue gate burrs are difficult to grind because of their big size. The focus of our work is on how to efficiently grind the sprue gate burrs.
毛刺位置分析。不同铸件的浇口位置可能不同。但浇口仅位于物体表面的特定区域,因此浇口毛刺只出现在物体上的某些特定位置。另一方面,飞边毛刺由模具老化等原因引起,可能出现在物体的浇口线上。浇口毛刺由于尺寸较大而难以磨削。我们工作的重点是如何高效地磨削浇口毛刺。
Vision system detection can be used for locating the sprue gate burr position. The detection steps are as follows:
视觉系统检测可用于定位浇口毛刺位置。检测步骤如下:
After localization, the object is placed in a fixture by the industrial manipulator; the contour image of the object is then obtained by the vision system under ambient light.
定位后,由工业机械手将物体放置在夹具中;然后由视觉系统在环境光下获取物体的轮廓图像。
Multiple equally spaced lines are used to calculate the gradient of the object's edge. We can obtain the edge of the object precisely even though there are flash burrs on the edges of the object.
使用多条等距线计算物体边缘的梯度。即使物体边缘存在飞边毛刺,我们也可以精确地获得物体的边缘。
The sprue gate burrs are located only at specific positions on the object, as the shape of the mold is fixed. After obtaining the four edge lines of the object and combining this information with the CAD model of the object, we can obtain the possible region of sprue gate burrs and calculate its grayscale value. By checking the grayscale value of the possible region of the sprue gate, we can determine the position of the sprue gate burrs; a minimal check is sketched below.
由于模具的形状是固定的,浇口毛刺仅位于物体上的特定位置。在获得物体的四条边线并将此信息与物体的 CAD 模型相结合后,我们可以得到浇口毛刺的可能区域并计算其灰度值。通过检查浇口可能区域的灰度值,我们可以确定浇口毛刺的位置;下面给出一个最简检查示例。
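A minimal sketch of this grayscale check, assuming an 8-bit grayscale image and candidate regions derived from the CAD model; the function name, the threshold, and the brighter-burr assumption are illustrative, not taken from the paper.
下面是该灰度检查的最简示例(假设输入为 8 位灰度图像,候选区域由 CAD 模型导出;函数名、阈值以及“毛刺偏亮”的假设均为示意,并非取自论文)。

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Check each candidate sprue-gate region (known from the CAD model) by its
// mean grayscale value; regions exceeding the threshold are flagged as
// carrying a burr. The threshold is an illustrative, tunable value.
std::vector<cv::Rect> findSprueGateBurrs(const cv::Mat& gray,
                                         const std::vector<cv::Rect>& candidates,
                                         double meanThresh) {
    std::vector<cv::Rect> burrs;
    for (const auto& roi : candidates) {
        double m = cv::mean(gray(roi))[0];   // mean grayscale inside the ROI
        if (m > meanThresh)                  // assumption: burr appears brighter
            burrs.push_back(roi);
    }
    return burrs;
}
```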
Burr shape detection. After obtaining the positions of the sprue gate burrs, we need to detect their shape.
毛刺形状检测。在获得浇口毛刺的位置后,我们需要检测其形状。
The grinding process investigated in the present work uses an industrial manipulator to grasp the object and grind it on a grinding machine. We set the industrial manipulator to grind with a constant grinding feed. Before grinding, a manual test is done to determine the amount of feed, M, per grinding cycle.
本工作研究的磨削过程使用工业机械手抓取物体并在磨床上对其进行磨削。我们设定工业机械手以恒定的磨削进给量进行磨削。在磨削之前,通过手动测试确定每个磨削周期的进给量 M。
First, the grid method is used to check the contour of the object:
首先,使用 grid 方法检查对象的轮廓:
Meshing: The Zhang’s calibration method is used to obtain the conversion relationship between pixels and the actual distance [25]. A parallel grid covering a large number of lengths and widths is formed to cover the casting port at the edge of the object, and the side length, L, of the mesh is equal to the feed amount M of the industrial manipulator grinding track. The system detects the sum of the grayscale values, W (x, y), of each line of the grid and then determines whether the burr is included in the mesh area or not.
网格划分:采用 Zhang 的标定方法,得到像素与实际距离的转换关系 [25]。形成覆盖大量长度和宽度的平行网格,以覆盖物体边缘的铸造口,网格的边长 L 等于工业机械手磨削轨道的进料量 M 。系统检测网格每行的灰度值 W (x, y) 之和,然后确定毛刺是否包含在网格区域中。
Burr position analysis: The system divides each grid cell into two categories: to be ground and not to be ground. For example, if a section of the grid does not contain a burr, there is no need to polish that area, and vice versa. Each row of the grinding area in the grid has a grinding start and a grinding end position. The starting position is the first burr-boundary cell in each row along the robot grinding direction (the coordinates of the lower right corner of that cell on the grid), and the end position is the last burr-boundary cell of each row, beyond which no burr is present (see Fig 10). A sketch of this cell classification is given below.
毛刺位置分析:系统将每个网格单元分为两类:需打磨和无需打磨。例如,如果网格的某一部分没有毛刺,则无需打磨该区域,反之亦然。网格中每行打磨区域都应有打磨起点和打磨终点位置。起始位置是机器人打磨方向上每行中第一个毛刺边界单元(即该单元在网格上右下角的坐标),结束位置是每行最后一个毛刺边界单元,其后不再存在毛刺(见图 10)。下面给出该单元分类的示例。
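A minimal sketch of this cell classification, assuming an 8-bit grayscale ROI around the object edge; the threshold separating burr cells from empty cells is an illustrative parameter that would be tuned on real images.
下面是该网格单元分类的最简示例(假设输入为物体边缘附近的 8 位灰度 ROI;区分毛刺单元与空单元的阈值为示意参数,需在真实图像上调整)。

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Classify each L x L cell of the edge region as "to be ground" (contains
// burr) or not, by summing grayscale values W(x, y) per cell, then report
// the first and last burr cell of each row as the grinding start/end.
struct RowSpan { int row, startCol, endCol; };

std::vector<RowSpan> detectBurrSpans(const cv::Mat& gray, int L, double thresh) {
    std::vector<RowSpan> spans;
    for (int y = 0; y + L <= gray.rows; y += L) {
        int first = -1, last = -1;
        for (int x = 0; x + L <= gray.cols; x += L) {
            double W = cv::sum(gray(cv::Rect(x, y, L, L)))[0];
            if (W > thresh) {                   // cell contains burr material
                if (first < 0) first = x / L;   // grinding start of this row
                last = x / L;                   // keep updating grinding end
            }
        }
        if (first >= 0)
            spans.push_back({y / L, first, last});  // one pass per row
    }
    return spans;
}
```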
Vision-based grinding trajectory generation: Using the camera calibration results, the visual coordinate system {C} can be obtained. Using a straight-line extraction method combined with RANSAC, the straight lines of two adjacent edges of the object can be accurately obtained, from which their intersection point is computed. This point is taken as the origin to establish the object coordinate system {O}; a minimal sketch of the line fitting and intersection follows.
基于视觉的磨削轨迹生成:使用相机标定结果,可以获得视觉坐标系 {C}。采用直线提取方法结合 RANSAC,可以准确获得物体相邻两条边的直线,并由此计算出交点。以该点为原点建立物体坐标系 {O};下面给出直线拟合与求交的简单示例。
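Since OpenCV does not ship a dedicated 2D RANSAC line fitter, the following is a from-scratch sketch of the RANSAC fit plus line intersection; the iteration count, inlier tolerance, and fixed random seed are illustrative choices.
由于 OpenCV 并未提供专门的 2D RANSAC 直线拟合器,下面给出一个自行实现的 RANSAC 拟合与直线求交示例;迭代次数、内点容差和固定随机种子均为示意选择。

```cpp
#include <opencv2/core.hpp>
#include <cmath>
#include <random>
#include <vector>

// Minimal RANSAC line fit over extracted edge points, plus intersection of
// two fitted lines; the intersection serves as the origin of frame {O}.
struct Line { double a, b, c; };  // a*x + b*y + c = 0, (a, b) is a unit normal

Line ransacLine(const std::vector<cv::Point2f>& pts, int iters, double tol) {
    Line best{0, 0, 0};
    if (pts.size() < 2) return best;          // not enough points to fit
    std::mt19937 rng(42);
    std::uniform_int_distribution<size_t> pick(0, pts.size() - 1);
    int bestInliers = -1;
    for (int i = 0; i < iters; ++i) {
        cv::Point2f p = pts[pick(rng)], q = pts[pick(rng)];
        double dx = q.x - p.x, dy = q.y - p.y;
        double n = std::hypot(dx, dy);
        if (n < 1e-6) continue;               // degenerate sample, resample
        Line l{-dy / n, dx / n, 0};
        l.c = -(l.a * p.x + l.b * p.y);
        int inliers = 0;
        for (const auto& t : pts)             // count points near the line
            if (std::abs(l.a * t.x + l.b * t.y + l.c) < tol) ++inliers;
        if (inliers > bestInliers) { bestInliers = inliers; best = l; }
    }
    return best;
}

// Intersection of two lines; assumes they are not parallel.
cv::Point2f intersect(const Line& l1, const Line& l2) {
    double det = l1.a * l2.b - l2.a * l1.b;
    return { float((l1.b * l2.c - l2.b * l1.c) / det),
             float((l2.a * l1.c - l1.a * l2.c) / det) };
}
```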
A user coordinate system {U} can be created using the grinding wheel's grinding position P. The pose of P in the industrial manipulator base coordinate system {B} is denoted $^{B}T_{P}$.
可以使用砂轮的磨削位置 P 创建用户坐标系 {U}。P 在工业机械手基坐标系 {B} 中的位姿记为 $^{B}T_{P}$。
The object is grasped by the industrial manipulator, and its corner point C (i.e., the origin of the object coordinate system {O}) is placed horizontally at the point P. The value $T_0$ of the flange pose $^{B}T_{F}$ at this configuration is taught and recorded.
物体由工业机械手抓取,并将物体的角点 C(即物体坐标系 {O} 的原点)水平放置在点 P 处。示教并记录此时法兰位姿 $^{B}T_{F}$ 的值 $T_0$。
$^{B}T_{P}$ is the pose of the point P in the industrial manipulator base coordinate system. In the grinding process, C is a point on the surface of the object which moves as the object moves, and $^{F}T_{C}$ is its pose in the flange coordinate system at the taught configuration $T_0$. Thus, there are:
$^{B}T_{P}$ 是点 P 在工业机械手基坐标系中的位姿。在磨削过程中,C 是物体表面上的一个点,随物体的移动而移动,$^{F}T_{C}$ 是其在示教位形 $T_0$ 下法兰坐标系中的位姿。因此,有:
We have:
我们有:
$^{O}T_{C}$ is the pose of the point C in the {O} coordinate system; it can be obtained from the vision system coordinate system. When the grinding point position C is changed to C1, we have:
$^{O}T_{C}$ 是点 C 在 {O} 坐标系中的位姿,可以从视觉系统坐标系中获得。当磨削点位置 C 变为 C1 时,我们有:
Combined with (7), we obtain:
与式 (7) 结合,可得:
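The equations referenced as (5)–(8) did not survive in this copy. Under the definitions above (at the taught pose $T_0 = {}^{B}T_{F}$, the point C coincides with P), a plausible reconstruction of the transform chain is:
文中引用的式 (5)–(8) 在本文档中缺失。根据上述定义(在示教位姿 $T_0 = {}^{B}T_{F}$ 下,点 C 与 P 重合),变换链大致可重构为:

$$ {}^{B}T_{P} = T_0 \, {}^{F}T_{C}, \qquad {}^{F}T_{O} = {}^{F}T_{C} \, \big({}^{O}T_{C}\big)^{-1}, $$
$$ {}^{F}T_{C_1} = {}^{F}T_{O} \, {}^{O}T_{C_1}, \qquad {}^{B}T_{F'} = {}^{B}T_{P} \, \big({}^{F}T_{C_1}\big)^{-1}. $$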
From this, the new $^{B}T_{F'}$ can be obtained, which is the flange pose for grinding the point C1 of the object.
由此可以得到新的 $^{B}T_{F'}$,即磨削物体点 C1 时的法兰位姿。
By doing the coordinate transformation, the object's grinding key points can be transformed into positions that can be executed by the industrial manipulator; a minimal sketch of this computation follows.
通过进行坐标变换,可以将物体的磨削关键点变换为工业机械手可以执行的位置;下面给出该计算的简单示例。
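A minimal sketch of this computation using 4×4 homogeneous transforms; the helper name and the use of OpenCV's cv::Matx44d are our choices for illustration, not the paper's code, and the transform chain follows the reconstruction given above.
下面是使用 4×4 齐次变换进行该计算的最简示例;辅助函数名和 OpenCV 的 cv::Matx44d 仅为示意选择,并非论文代码,变换链遵循上文给出的重构。

```cpp
#include <opencv2/core.hpp>

// Hypothetical helper: compute the new flange pose that brings grinding
// point C1 of the object to the fixed wheel contact point P. All poses are
// 4x4 homogeneous transforms; names follow the notation in the text.
cv::Matx44d newFlangePose(const cv::Matx44d& B_T_P,   // wheel point P in {B}
                          const cv::Matx44d& F_T_C,   // taught point C in flange frame
                          const cv::Matx44d& O_T_C,   // point C in object frame (vision)
                          const cv::Matx44d& O_T_C1)  // new point C1 in object frame (vision)
{
    // Object pose in the flange frame, fixed once the part is grasped.
    cv::Matx44d F_T_O = F_T_C * O_T_C.inv();
    // New grinding point expressed in the flange frame.
    cv::Matx44d F_T_C1 = F_T_O * O_T_C1;
    // Flange pose that places C1 at the wheel contact point P.
    return B_T_P * F_T_C1.inv();
}
```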
The visual inspection process generates an offline robotic trajectory based on the shape of the burr in the object and thus replaces the traditional manual teaching method. The offline robotic trajectory generation method can improve the production efficiency, reduce the robot trajectory teaching time of the operators, and enable greater system safety.
目视检查过程根据物体中毛刺的形状生成离线机器人轨迹,从而取代了传统的人工示教方法。离线机器人轨迹生成方法可以提高生产效率,减少操作员的机器人轨迹示教时间,提高系统安全性。
Results and discussion
结果与讨论
Experimental equipment
实验设备
In the system, two EFORT ER50-C10 industrial manipulators, each with a payload of 50 kg, were used for loading and unloading, and an ABB IRB6700 series industrial manipulator, with a payload of 150 kg, was used for grinding. A vision system (including one Basler ace 3800 series camera and a 12 mm Basler lens) was mounted at the end of the 6th-axis flange of the loading industrial manipulator. It is used for obtaining the position of the object in the pallet and feeding it onto the loading platforms. The resolution of the camera is 3840×2748, the working distance of the vision system is about 700 mm, and the field of view is about 500 mm × 300 mm. Two grinding robots take the objects from the loading platforms and grind them on the grinding machines. After grinding, the objects are placed on the unloading platforms, and the unloading robot completes palletizing. The PC workstation used in the test has an i7 CPU, an RTX 2070S GPU, and 32 GB of RAM. The C++ language and the OpenCV library were used for programming. The experimental setup is shown in Fig 11.
在该系统中,两台有效载荷为 50 kg 的 EFORT ER50-C10 工业机械手用于上下料,一台有效载荷为 150 kg 的 ABB IRB6700 系列工业机械手用于磨削。视觉系统(包括一台 Basler ace 3800 系列相机和一个 12 mm Basler 镜头)安装在上料工业机械手第 6 轴法兰的末端,用于获取物体在托盘中的位置并将其送入上料平台。相机的分辨率为 3840×2748,视觉系统的工作距离约为 700 mm,视野约为 500 mm × 300 mm。两台磨削机器人分别从上料平台取出物体,在磨床上完成磨削。之后,物体被放置在下料平台上,由下料机器人完成码垛。测试中使用的 PC 工作站配置为 i7 CPU、RTX 2070S GPU、32 GB 内存,采用 C++ 语言和 OpenCV 库进行编程。实验装置如图 11 所示。
The dataset which we build in this study includes training data and test data. The training data set consists of the original images and the enhanced images, and the test data contain 200 original images of different objects. The training data are used to train the standard yolov3 and Den-yolov3 models, and the test data are used to test the model performance. The training data and the main initialization parameters are shown in Tables 1 and 2. The test data sets are used to evaluate network performance; the result is shown in Table 3.
我们在本研究中构建的数据集包括训练数据和测试数据。训练数据集由原始图像和增强图像组成,测试数据包含 200 张不同物体的原始图像。训练数据用于训练标准 yolov3 和 Den-yolov3 模型,测试数据用于测试模型性能。训练数据和主要初始化参数如表 1 和表 2 所示。测试数据集用于评估网络性能,结果如表 3 所示。
TP is the number of positive samples predicted as positive, FP is the number of negative samples predicted as positive, and FN is the number of positive samples predicted as negative. Both standard yolov3 and Den-yolov3 are trained 300 times. From Table 3 we can find that the recalls of the two detectors are the same, but Den-yolov3 shows higher precision.
TP 是被预测为正类的正类样本数,FP 是被预测为正类的负类样本数,FN 是被预测为负类的正类样本数。标准 yolov3 和 Den-yolov3 均训练 300 次。从表 3 中可以发现,两个检测器的召回率相同,但 Den-yolov3 显示出更高的精确率。
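With these counts, the precision and recall reported in Table 3 follow the standard definitions:
根据这些计数,表 3 中报告的精确率和召回率采用标准定义:

$$ \mathrm{Precision} = \frac{TP}{TP + FP}, \qquad \mathrm{Recall} = \frac{TP}{TP + FN}. $$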
FPS reflects the detection speed of the network. Table 4 shows the size and FPS of standard yolov3 and Den-yolov3. From Table 4, we can see that Den-yolov3 has a faster detection speed and smaller size than standard yolov3. This is because Den-yolov3 removes the small-target detection channel of yolov3, which gives it a smaller model size and faster detection speed.
FPS 反映了网络的检测速度。表 4 显示了标准 yolov3 和 Den-yolov3 的大小和 FPS。从表 4 中可以看出,Den-yolov3 比标准 yolov3 具有更快的检测速度和更小的体积。这是因为 Den-yolov3 去除了 yolov3 中的小目标检测通道,这使得它的模型体积更小,检测速度更快。
To determine the object position, we took 300 images of the workpiece to be grasped in the working environment and selected 200 as the training set and 100 as the test set. The Den-yolov3 and the standard yolov3 mentioned in this paper were used for comparison. Accuracy is the ratio of the number of correct classifications to the total number of samples. Table 5 shows the accuracy of the two methods after 100 and 300 rounds of training. From Table 5, we can find that Den-yolov3 is better for the object in this study.
为了确定物体位置,我们在工作环境中拍摄了 300 张待抓取工件的图像,选择 200 张作为训练集,100 张作为测试集。将本文提出的 Den-yolov3 与标准 yolov3 进行比较。准确率是正确分类数量与样本总数之比。表 5 显示了两种方法在训练 100 次和 300 次后的准确率。从表 5 中可以发现,Den-yolov3 对本研究中的物体效果更好。
Fig 12 shows that the loss function of our method decreases rapidly, and the detection success rate is higher under the same number of training iterations. This is because, compared with the standard yolov3 algorithm, the Den-yolov3 algorithm has fewer feature maps, so the loss function decreases quickly. The improved network introduces a parallel network, the DenseNet structure, and the residual structure, so it can obtain better recognition results under the same iterative training conditions.
图 12 表明,在相同的训练次数下,我们方法的损失函数迅速下降,检测成功率更高。这是因为,与标准的 yolov3 算法相比,Den-yolov3 算法的特征图较少,因此损失函数下降得很快。改进后的网络引入了并行网络、DenseNet 结构和残差结构,因此在相同的迭代训练情况下可以获得更好的识别结果。
It can be seen from Fig 13 that Den-yolov3 has a better recognition effect than yolov3 under the same number of training iterations. Fig 14 shows the detection result of Den-yolov3 for the object in this study.
从图 13 可以看出,在相同的训练次数下,Den-yolov3 的识别效果优于 yolov3。图 14 显示了 Den-yolov3 对本研究中物体的检测结果。
Experimental results of fine positioning
精细定位的实验结果
The object was placed in a fixed position and the industrial manipulator was used to capture 50 images from different positions. The single LINE-2D method and the GMLINE-2D method were used to process the images and record the matching scores. Following this, the GMLINE-2D method was used to calculate the object grasp pose for the 70 images. Since the object was fixed, the grasp poses should be similar. The results thus obtained are plotted in Fig 15, and the numerical values are given in Table 6.
将物体放置在固定位置,使用工业机械手在不同位置获取 50 张图像。采用单一 LINE-2D 方法和 GMLINE-2D 方法处理图像并记录匹配分数。随后,使用 GMLINE-2D 方法计算 70 张图像的物体抓取位姿。由于物体是固定的,抓取位姿应当相近。所得结果如图 15 所示,数值见表 6。
From Fig 15, it can be seen that the GMLINE-2D matching score is high and robust compared to the Line-2D method, which indicates that GMLINE-2D is more robust. GMLINE-2D uses multiple models to match the local features of the object and selects the highest similarity score as the final matching result, whereas Line-2D uses only one model. Thus, GMLINE-2D can obtain a better similarity score.
从图 15 中可以看出,与 Line-2D 方法相比,GMLINE-2D 的匹配分数更高且更稳定,这表明 GMLINE-2D 更加鲁棒。GMLINE-2D 使用多个模型来匹配物体的局部特征,并选择最高的相似度分数作为最终匹配结果,而 Line-2D 仅使用一个模型。因此,GMLINE-2D 可以获得更好的相似度分数。
From Table 6, it can be seen that the errors in the X and Y values of the grasp position are less than 2 mm and the error of the grasp angle about the c axis is less than 1°; these errors are sufficiently small for a rough object grasp.
从表 6 中可以看出,抓取位置的 X 和 Y 值误差小于 2 mm,绕 c 轴的抓取角度误差小于 1°;这些误差对于粗糙物体的抓取来说已经足够小了。
Fig 16 shows the matching comparison between Line-MOD and our method. It can be seen from the figure that Line-MOD only uses a small number of features.
图 16 显示了 Line-MOD 与我们的方法之间的匹配比较。从图中可以看出,Line-MOD 只使用了少量的特征。
Fig 17 shows the result of the gradient feature approximation between the template and the input image feature area after several iterations of the ICP algorithm. We choose the hole area on the surface of the object as the matching feature. From the registration results, we can find that even if there is a matching error due to interference in the image feature area, the ICP algorithm can still improve the image matching accuracy on top of the positioning given by the matching algorithm.
图 17 是使用 ICP 算法多次迭代后,模板与输入图像特征区域之间梯度特征逼近的结果。我们选择物体表面的孔区域作为匹配特征。从配准结果可以发现,即使图像特征区域因干扰而出现匹配误差,ICP 算法仍能在匹配算法定位的基础上提高图像匹配精度。
The grinding station combines the burr detection and trajectory planning functions of the vision system. Through the combined application of visual positioning, RANSAC line fitting, grid burr detection, and coordinate transformation, the automation of the grinding system is greatly improved. On the one hand, the workstation completes the robotic positioning and grasping of the rough object under complex background conditions. On the other hand, the system realizes accurate online detection of the sprue gate burr on the surface of the object, and based on the detection result, the industrial manipulator trajectory is generated offline. The grinding process is unmanned, which optimizes the working efficiency of the robot polishing and improves the safety factor of the workstation. A comparison showing the object before and after grinding is shown in Fig 18:
磨削站结合了视觉系统的毛刺检测和轨迹规划功能。通过视觉定位、RANSAC 直线拟合、网格毛刺检测和坐标变换的组合应用,打磨系统的自动化程度大大提高。一方面,工作站完成了机器人在复杂背景条件下对粗糙物体的定位和抓取;另一方面,系统实现了对物体表面浇口毛刺的准确在线检测,并基于检测结果离线生成工业机械手轨迹。打磨过程无人化,优化了机器人打磨的工作效率,提高了工作站的安全系数。图 18 显示了研磨前后物体的对比:
This paper proposes an industrial manipulator grinding station based on a vision system for robotic grasping guidance and automatic planning of the grinding trajectory. The workstation realizes the automation of the entire process from automatic loading to grinding. A novel coarse-to-fine vision positioning strategy is proposed, which consists of an improved deep neural network detector for industrial object recognition along with the modified Line-MOD method for precise positioning. This strategy can locate the rough casting position precisely and robustly. This study also proposes a grid-based method for generating the industrial manipulator grinding trajectory. By extracting the edge gradient of the object and combining this information with the RANSAC straight-line fitting method to obtain the position of the burr, the grinding positions can be generated. Combined with the industrial manipulator control logic, the workstation can complete the automated production of a blank object from the pallet through the grinding and placing processes.
本文提出了一种基于视觉系统的工业机械手磨削站,用于机器人抓取引导和磨削轨迹的自动规划。工作站实现了从自动上料到打磨全过程的自动化。提出了一种新的由粗到细的视觉定位策略,该策略由用于工业物体识别的改进深度神经网络检测器和用于精确定位的改进 Line-MOD 方法组成。该策略可以精确而稳健地定位毛坯铸件的位置。本文还提出了一种基于网格法的工业机械手磨削轨迹生成方法。通过提取物体的边缘梯度,并将此信息与 RANSAC 直线拟合方法相结合以获得毛刺的位置,即可生成磨削位置。结合工业机械手控制逻辑,工作站可以完成毛坯物体从托盘到打磨、放置过程的自动化生产。
The robot grinding station has the advantages of high grasping precision and good consistency in the quality of the ground workpieces. At the same time, the workstation can complete the work that originally required 3–4 workers per unit working time to carry out manually. The system also reduces labor during the production and debugging process. The reduced intervention time required in this setup increases the safety of the staff. In the future, we will further improve the function of the proposed algorithm, further improve the detection accuracy and detection speed of the detection network, and consider adding a rotation detection function to the network.
机器人磨削站具有抓取精度高、磨削工件质量一致性好等优点。同时,工作站可以完成原本每单位工作时间需要 3–4 名工人手动完成的工作。该系统还减少了生产和调试过程中的劳动。所需干预时间的减少提高了工作人员的安全性。未来,我们将进一步完善所提算法的功能,进一步提高检测网络的检测精度和检测速度,并考虑在网络中增加旋转检测功能。
[1].Golda G, Kampa A. Modelling of Cutting Force and Robot Load during Machining. Advanced Materials Research, 2014: 715–720.
[1].Golda G, Kampa A. 加工过程中切削力和机器人负载的建模。先进材料研究, 2014: 715–720.
[2].Fan X, Wang X, Xiao Y, et al. A combined 2D-3D vision system for automatic robot picking. international conference on advanced mechatronic systems, 2014: 513–516.
[2].Fan X, Wang X, Xiao Y, et al.用于机器人自动拣选的 2D-3D 组合视觉系统。先进机电一体化系统国际会议,2014:513–516。
[3].Pilný L, Bissacco G, De Chiffre L, et al. Acoustic emission-based in-process monitoring of surface generation in robot-assisted polishing. International Journal of Computer Integrated Manufacturing, 2016, 29(11): 1218–1226.
[3].Pilný L、Bissacco G、De Chiffre L 等人。在机器人辅助抛光中对表面生成进行基于声发射的过程内监测。计算机集成制造学报, 2016, 29(11): 1218–1226.
[4].Zhu W., Wang Y., Shen H., et al. Design and experiment of compliant parallel humanoid wrist joint polishing robot. Transactions of the Chinese Society for Agricultural Machinery, 2016.
[4].Zhu W., Wang Y., Shen H., et al.柔顺性平行人形腕关节抛光机器人的设计与实验。中国农业机械学报, 2016.
[5].Qin Z, Wang P, Sun J, et al. Precise Robotic Assembly for Large-Scale Objects Based on Automatic Guidance and Alignment. IEEE Transactions on Instrumentation and Measurement, 2016, 65(6): 1398–1411.
[5].Qin Z, Wang P, Sun J, et al. 基于自动引导和对准的大型物体精密机器人装配。IEEE 仪器仪表与测量汇刊, 2016, 65(6): 1398–1411.
[6].Wang G, Wang Y, Zhang L, et al. Development and Polishing Process of a Mobile Robot Finishing Large Mold Surface. Machining Science and Technology, 2014, 18(4): 603–625.
[6].Wang G, Wang Y, Zhang L, et al.一种移动机器人的开发和抛光工艺,用于精加工大型模具表面。机械加工科学与技术, 2014, 18(4): 603–625.
[7].Park I, Lee I, Lee J, et al. Development of the robot system for the process improvement of the castings of the runner and gate cutting. international conference on ubiquitous robots and ambient intelligence, 2013: 760–762.
[7].Park I、Lee I、Lee J 等人。开发机器人系统,用于改进流道和浇口切割铸件的工艺。泛在机器人和环境智能国际会议,2013:760–762。
[8].Gaz C, Magrini E, De Luca A, et al. A model-based residual approach for human-robot collaboration during manual polishing operations. Mechatronics, 2018: 234–247.
[8].Gaz C、Magrini E、De Luca A 等人。一种基于模型的残差方法,用于手动抛光操作期间的人机协作。机电一体化,2018:234–247。
[9].Sornmo O, Robertsson A, Wanner A, et al. Force controlled knife-grinding with industrial robot. international conference on control applications, 2012: 1356–1361.
[9].Sornmo O、Robertsson A、Wanner A 等人。使用工业机器人进行力控刀磨削。国际控制应用会议,2012:1356–1361。
[10].Du H, Sun Y, Feng D, et al. Automatic robotic polishing on titanium alloy parts with compliant force/position control. Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 2015, 229(7):1180–1192.
[10].Du H, Sun Y, Feng D, et al.对钛合金零件进行自动机器人抛光,具有柔和的力/位置控制。机械工程师学会学报,B 部分:工程制造杂志,2015,229(7):1180–1192。
[11].Zhang H, Li L, Zhao J, et al. Robot automation grinding process for nuclear reactor coolant pump based on reverse engineering. The International Journal of Advanced Manufacturing Technology, 2019: 879–891.
[11].Zhang H, Li L, Zhao J, et al.基于逆向工程的核反应堆冷却剂泵机器人自动化磨削工艺.国际先进制造技术杂志, 2019: 879–891.
[12].Pandiyan V, Murugan P, Tjahjowidodo T, et al. In-process virtual verification of weld seam removal in robotic abrasive belt grinding process using deep learning. Robotics and Computer-integrated Manufacturing, 2019: 477–487.
[12].Pandiyan V、Murugan P、Tjahjowidodo T 等人。使用深度学习对机器人砂带磨削过程中的焊缝去除进行过程虚拟验证。机器人与计算机集成制造,2019:477–487。
[13].Gottesfeld Brown L. A survey of image registration techniques. ACM Computing Surveys, 1992, 24(4): 325–376.
[13].Gottesfeld Brown L. 图像配准技术综述。ACM Computing Surveys, 1992, 24(4): 325–376。
[14].Ulrich M, Steger C, Baumgartner A, et al. Real-time object recognition using a modified generalized Hough transform. Pattern Recognition, 2003, 36(11): 2557–2570.
[14].Ulrich M、Steger C、Baumgartner A 等人。使用改进的广义 Hough 变换进行实时对象识别。模式识别, 2003, 36(11): 2557–2570。
[15].Li J, Gu J, Huang Z, et al. Application Research of Improved YOLO V3 Algorithm in PCB Electronic Component Detection. Applied Sciences, 2019, 9(18).
[15].Li J, Gu J, Huang Z, et al. 改进 YOLO V3 算法在 PCB 电子元件检测中的应用研究.应用科学, 2019, 9(18).
[16].Redmon J, Divvala S K, Girshick R, et al. You Only Look Once: Unified, Real-Time Object Detection. computer vision and pattern recognition, 2016: 779–788.
[16].Redmon J、Divvala S K、Girshick R 等人。您只需看一次:统一的实时对象检测。计算机视觉与模式识别,2016:779–788。
[17].Redmon, J. and Farhadi, A. YOLO9000: Better, Faster, Stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 21–26 July 2017, 6517–6525.
[17].Redmon, J. 和 Farhadi, A. YOLO9000:更好、更快、更强。IEEE 计算机视觉和模式识别会议论文集,檀香山,2017 年 7 月 21-26 日,6517-6525。
[18].Redmon J, Farhadi A. YOLOv3: An Incremental Improvement. arXiv: Computer Vision and Pattern Recognition, 2018.
[18].Redmon J, Farhadi A. YOLOv3:渐进式改进。arXiv:计算机视觉和模式识别,2018 年。
[19].Pang S, Ding T, Qiao S, et al. A novel YOLOv3-arch model for identifying cholelithiasis and classifying gallstones on CT images. PLoS ONE, 2019, 14(6): e0217647. pmid:31211791
[19].Pang S, Ding T, Qiao S, et al. 一种用于识别胆石症并在 CT 图像上对胆结石进行分类的新型 YOLOv3-arch 模型。PLoS ONE, 2019, 14(6): e0217647。pmid:31211791
[20].Chen Y, Li W, Sakaridis C, et al. Domain Adaptive Faster R-CNN for Object Detection in the Wild. computer vision and pattern recognition, 2018: 3339–3348.
[20].Chen Y, Li W, Sakaridis C, et al. 用于野外对象检测的域自适应更快 R-CNN。计算机视觉与模式识别,2018:3339–3348。
[21].Ghiasi G, Lin T, Le Q V, et al. NAS-FPN: Learning Scalable Feature Pyramid Architecture for Object Detection. computer vision and pattern recognition, 2019: 7036–7045.
[21].Ghiasi G, Lin T, Le Q V, et al. NAS-FPN:学习用于对象检测的可扩展特征金字塔架构。计算机视觉与模式识别,2019:7036–7045。
[22].Hinterstoisser S, Cagniart C, Ilic S, et al. Gradient Response Maps for Real-Time Detection of Textureless Objects. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2012, 34(5):p.876–888. pmid:22442120
[22].Hinterstoisser S, Cagniart C, Ilic S, et al. 用于实时检测无纹理对象的梯度响应图。模式分析和机器智能,IEEE Transactions on,2012,34(5):p.876–888。PMID:22442120
[23].Hinterstoisser S, Lepetit V, Ilic S, et al. Dominant orientation templates for real-time detection of texture-less objects. computer vision and pattern recognition, 2010: 2257–2264.
[23].Hinterstoisser S、Lepetit V、Ilic S 等人。用于实时检测无纹理对象的主导方向模板。计算机视觉与模式识别,2010:2257–2264。
[24].Hinterstoisser S, Holzer S J, Cagniart C, et al. Multimodal templates for real-time detection of texture-less objects in heavily cluttered scenes. international conference on computer vision, 2011: 858–865.
[24].Hinterstoisser S、Holzer S J、Cagniart C 等人。多模态模板,用于在高度杂乱的场景中实时检测无纹理的对象。计算机视觉国际会议,2011:858–865。
[25].Zhang Z. A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, 22(11): 1330–1334.
[25].Zhang Z. 一种灵活的相机标定新技术。IEEE 模式分析与机器智能汇刊, 2000, 22(11): 1330–1334。