
NETWORK SIMULATOR  网络模拟器

ns-3 Tutorial ns-3 教程
Release ns-3. 41 发布 ns-3.41
ns-3 project ns-3 项目
Feb 09, 2024 2024 年 2 月 9 日

CONTENTS

1 Quick Start
  1.1 Brief Summary
  1.2 Prerequisites
  1.3 Downloading ns-3
  1.4 Building and testing ns-3
2 Introduction
  2.1 About ns-3
  2.2 For ns-2 Users
  2.3 Contributing
  2.4 Tutorial Organization
3 Resources
  3.1 The Web
  3.2 Git
  3.3 CMake
  3.4 Development Environment
  3.5 Socket Programming
4 Getting Started
  4.1 Overview
  4.2 Prerequisites
  4.3 Downloading ns-3 using Git
  4.4 Building ns-3
  4.5 Testing ns-3
  4.6 Running a Script
5 Conceptual Overview
  5.1 Key Abstractions
  5.2 A First ns-3 Script
  5.3 Ns-3 Source Code
6 Tweaking
  6.1 Using the Logging Module
  6.2 Using Command Line Arguments
  6.3 Using the Tracing System
7 Building Topologies
  7.1 Building a Bus Network Topology
  7.2 Models, Attributes and Reality
  7.3 Building a Wireless Network Topology
  7.4 Queues in ns-3
8 Tracing
  8.1 Background
  8.2 Overview
  8.3 Real Example
  8.4 Trace Helpers
  8.5 Summary
9 Data Collection
  9.1 Motivation
  9.2 Example Code
  9.3 GnuplotHelper
  9.4 Supported Trace Types
  9.5 FileHelper
  9.6 Summary
10 Conclusion
  10.1 Futures
  10.2 Closing
This is the ns-3 Tutorial. Primary documentation for the ns-3 project is organized as follows:
  • Several guides that are version controlled for each release (the latest release) and the development tree:
    • Tutorial (this document)
    • Installation Guide
    • Manual
    • Model Library
    • Contributing Guide
  • ns-3 Doxygen: documentation of the public APIs of the simulator
  • ns-3 wiki
This document is written in reStructuredText for Sphinx and is maintained in the doc/tutorial directory of ns-3's source code. Source file column width is 100 columns.

ns-3 Tutorial, Release ns-3.41

QUICK START

This section is optional, for readers who want to get up and running as quickly as possible. Readers may skip forward to the Introduction chapter, followed by the Getting Started chapter, for a lengthier coverage of all of this material.

1.1 Brief Summary

ns-3 is a discrete-event simulator typically run from the command line. It is written directly in C++, not in a high-level modeling language; simulation events are simply C++ function calls, organized by a scheduler.
An ns-3 user will obtain the source code (see below), compile it into shared (or static) libraries, and link the libraries to main() programs that he or she authors. The main() program is where the specific simulation scenario configuration is performed and where the simulator is run and stopped. Several example programs are provided, which can be modified or copied to create new simulation scenarios. Users also often edit the ns-3 library code (and rebuild the libraries) to change its behavior.
ns-3 has optional Python bindings for authoring scenario configuration programs in Python (and using a Python-based workflow); this quick start does not cover those aspects.

1.2 Prerequisites

ns-3 has various optional extensions, but the main features just require a C++ compiler (g++ or clang++), Python (version 3.6 or above), CMake, and a build system (e.g. make, ninja, Xcode). We focus in this chapter only on getting up and running on a system supported by a recent C++ compiler and Python runtime support.
For Linux, use either g++ or clang++ compilers. For macOS, use clang++ (available in Xcode or Xcode Command Line Tools). For Windows, Msys2 tools with the MinGW64 toolchain can be used (since ns-3.37) for most use cases. For releases earlier than ns-3.37, or for use of emulation modes or Python bindings, we recommend either using a Linux virtual machine or the Windows Subsystem for Linux.

1.3 Downloading ns-3

ns-3 is distributed in source code only (some binary packages exist but they are not maintained by the open source project). There are two main ways to obtain the source code: 1) downloading the latest release as a source code archive from the main ns-3 web site, or 2) cloning the Git repository from GitLab.com. These two options are described next; either one or the other download option (but not both) should be followed.

1.3.1 Downloading the Latest Release

  1. Download the latest release from https://www.nsnam.org/releases/latest
  2. Unpack it in a working directory of your choice:
$ tar xjf ns-allinone-3.41.tar.bz2
  3. Change into the ns-3 directory directly; e.g.
$ cd ns-allinone-3.41/ns-3.41
The ns-allinone directory has some additional components, but we are skipping over them here; one can work directly from the ns-3 source code directory. The rest of the tutorial describes the additional components.

1.3.2 Cloning ns-3 from GitLab.com

You can perform a Git clone in the usual way:
$ git clone https://gitlab.com/nsnam/ns-3-dev.git
If you are content to work with the tip of the development tree, you need only to cd into ns-3-dev; the master branch is checked out by default.
If instead you want to try the most recent release (version 3.41 as of this writing), you can check out a branch corresponding to that git tag:
$ git checkout -b ns-3.41-branch ns-3.41

1.4 Building and testing ns-3

Once you have obtained the source either by downloading a release or by cloning a Git repository, the next step is to configure the build using the CMake build system. The commands below make use of a Python wrapper around CMake, called ns3, that simplifies the command-line syntax, resembling Waf syntax. There are several options to control the build, but enabling the example programs and the tests, for a default build profile (with asserts enabled and support for logging), is what is usually done at first:
$ ./ns3 configure --enable-examples --enable-tests
Then, use ns3 to build ns-3:
$ ./ns3 build
Once complete, you can run the unit tests to check your build:
$ ./test.py
All tests should either PASS or be SKIPped. At this point, you have a working ns-3 simulator. From here, you can start to run programs (look in the examples directory). To run the first tutorial program, whose source code is located at examples/tutorial/first.cc, use ns3 to run it (by doing so, the ns-3 shared libraries are found automatically):

$ ./ns3 run first
To view possible command-line options, specify the --PrintHelp argument:
$ ./ns3 run 'first --PrintHelp'
To continue reading about the conceptual model and architecture of ns-3, the tutorial chapter Conceptual Overview would be the next natural place to skip to, or you can learn more about the project and the various build options by continuing directly with the Introduction and Getting Started chapters.

INTRODUCTION

The ns-3 simulator is a discrete-event network simulator targeted primarily for research and educational use. The ns-3 project, started in 2006, is an open-source project developing ns-3.
The purpose of this tutorial is to introduce new ns-3 users to the system in a structured way. It is sometimes difficult for new users to glean essential information from detailed manuals and to convert this information into working simulations. In this tutorial, we will build several example simulations, introducing and explaining key concepts and features as we go.
As the tutorial unfolds, we will introduce the full ns-3 documentation and provide pointers to source code for those interested in delving deeper into the workings of the system.
We also provide a quick start guide for those who are comfortable diving right in without too much documentation.
A few key points are worth noting at the onset:
  • ns-3 is open-source, and the project strives to maintain an open environment for researchers to contribute and share their software.
  • ns-3 is not a backwards-compatible extension of ns-2; it is a new simulator. The two simulators are both written in C++, but ns-3 is a new simulator that does not support the ns-2 APIs.

2.1 About ns-3

ns-3 has been developed to provide an open, extensible network simulation platform for networking research and education. In brief, ns-3 provides models of how packet data networks work and perform, and provides a simulation engine for users to conduct simulation experiments. Some of the reasons to use ns-3 include performing studies that are more difficult or not possible to perform with real systems, studying system behavior in a highly controlled, reproducible environment, and learning about how networks work. Users will note that the available model set in ns-3 focuses on modeling how Internet protocols and networks work, but ns-3 is not limited to Internet systems; several users are using ns-3 to model non-Internet-based systems.
Many simulation tools exist for network simulation studies. Below are a few distinguishing features of ns-3 in contrast to other tools.
  • ns-3 is designed as a set of libraries that can be combined together and also with other external software libraries. While some simulation platforms provide users with a single, integrated graphical user interface environment in which all tasks are carried out, ns-3 is more modular in this regard. Several external animators and data analysis and visualization tools can be used with ns-3. However, users should expect to work at the command line and with C++ and/or Python software development tools.
  • ns-3 is primarily used on Linux or macOS systems, although support exists for BSD systems and also for Windows frameworks that can build Linux code, such as Windows Subsystem for Linux, or Cygwin. Native Windows Visual Studio is not presently supported, although a developer is working on future support. Windows users may also use a Linux virtual machine.
  • ns-3 is not an officially supported software product of any company. Support for ns-3 is done on a best-effort basis on the ns-3-users forum (ns-3-users@googlegroups.com).

2.2 For ns-2 Users

For those familiar with ns-2 (a popular tool that preceded ns-3), the most visible outward change when moving to ns-3 is the choice of scripting language. Programs in ns-2 are scripted in OTcl, and results of simulations can be visualized using the Network Animator nam. It is not possible to run a simulation in ns-2 purely from C++ (i.e., as a main() program without any OTcl). Moreover, some components of ns-2 are written in C++ and others in OTcl. In ns-3, the simulator is written entirely in C++, with optional Python bindings. Simulation scripts can therefore be written in C++ or in Python. New animators and visualizers are available and under current development. Since ns-3 generates pcap packet trace files, other utilities can be used to analyze traces as well. In this tutorial, we will first concentrate on scripting directly in C++ and interpreting results via trace files.
But there are similarities as well (both, for example, are based on C++ objects, and some code from ns-2 has already been ported to ns-3). We will try to highlight differences between ns-2 and ns-3 as we proceed in this tutorial.
A question that we often hear is "Should I still use ns-2 or move to ns-3?" In this author's opinion, unless the user is somehow vested in ns-2 (either based on existing personal comfort with and knowledge of ns-2, or based on a specific simulation model that is only available in ns-2), a user will be more productive with ns-3 for the following reasons:
  • ns-3 is actively maintained with an active, responsive users mailing list, while ns-2 is only lightly maintained and has not seen significant development in its main code tree for over a decade.
  • ns-3 provides features not available in ns-2, such as an implementation code execution environment (allowing users to run real implementation code in the simulator).
  • ns-3 provides a lower base level of abstraction compared with ns-2, allowing it to align better with how real systems are put together. Some limitations found in ns-2 (such as supporting multiple types of interfaces on nodes correctly) have been remedied in ns-3.
If in doubt, a good guideline would be to look at both simulators (as well as other simulators), and in particular the models available for your research, but keep in mind that your experience may be better using the tool that is being actively developed and maintained (ns-3).

2.3 Contributing

ns-3 is a research and educational simulator, by and for the research community. It will rely on the ongoing contributions of the community to develop new models, debug or maintain existing ones, and share results. There are a few policies that we hope will encourage people to contribute to ns-3 like they have for ns-2:
  • Open source licensing based on GNU GPLv2 compatibility
  • An app store
  • A Contributed Code page, similar to ns-2's popular Contributed Code page
  • Documentation on how to contribute
  • Use of Git hosting at GitLab.com, including an issue tracker
We realize that if you are reading this document, contributing back to the project is probably not your foremost concern at this point, but we want you to be aware that contributing is in the spirit of the project and that even the act of dropping us a note about your early experience with ns-3 (e.g. "this tutorial section was not clear..."), reports of stale documentation or comments in the code, etc. are much appreciated. The preferred way to submit patches is either to fork our project on GitLab.com and generate a Merge Request, or to open an issue on our issue tracker and append a patch.

2.4 Tutorial Organization

The tutorial assumes that new users might initially follow a path such as the following:
  • Try to download and build a copy;
  • Try to run a few sample programs;
  • Look at simulation output, and try to adjust it.
As a result, we have tried to organize the tutorial along the above broad sequence of events.
RESOURCES

3.1 The Web

There are several important resources of which any ns-3 user must be aware. The main web site is located at https://www.nsnam.org and provides access to basic information about the ns-3 system. Detailed documentation is available through the main web site at https://www.nsnam.org/documentation/. You can also find documents relating to the system architecture from this page.
There is a Wiki that complements the main ns-3 web site, which you will find at https://www.nsnam.org/wiki/. You will find user and developer FAQs there, as well as troubleshooting guides, third-party contributed code, papers, etc.
The source code may be found and browsed at GitLab.com: https://gitlab.com/nsnam/. There you will find the current development tree in the repository named ns-3-dev. Past releases and experimental repositories of the core developers may also be found at the project's old Mercurial site at http://code.nsnam.org.

3.2 Git

Complex software systems need some way to manage the organization and changes to the underlying code and documentation. There are many ways to perform this feat, and you may have heard of some of the systems that are currently used to do this. Until recently, the ns-3 project used Mercurial as its source code management system, but in December 2018, it switched to using Git. Although you do not need to know much about Git in order to complete this tutorial, we recommend becoming familiar with Git and using it to access the source code. GitLab.com provides resources to get started at: https://docs.gitlab.com/ee/gitlab-basics/.

3.3 CMake

Once you have source code downloaded to your local system, you will need to compile that source to produce usable programs. Just as in the case of source code management, there are many tools available to perform this function. Probably the most well known of these tools is make. Along with being the most well known, make is probably the most difficult to use in a very large and highly configurable system. Because of this, many alternatives have been developed.
The build system CMake is used on the ns-3 project.
For those interested in the details of CMake, the CMake documentation is available at https://cmake.org/cmake/help/latest/index.html and the current code at https://gitlab.kitware.com/cmake/cmake.

3.4 Development Environment

As mentioned above, scripting in ns-3 is done in C++ or Python. Most of the ns-3 API is available in Python, but the models are written in C++ in either case. A working knowledge of C++ and object-oriented concepts is assumed in this document. We will take some time to review some of the more advanced concepts or possibly unfamiliar language features, idioms, and design patterns as they appear. We don't want this tutorial to devolve into a C++ tutorial, though, so we do expect a basic command of the language. There is a wide variety of sources of information on C++ available on the web or in print.
If you are new to C++, you may want to find a tutorial- or cookbook-based book or web site and work through at least the basic features of the language before proceeding.
On Linux, the ns-3 system uses several components of the GNU "toolchain" for development. A software toolchain is the set of programming tools available in the given environment. For a quick review of what is included in the GNU toolchain, see http://en.wikipedia.org/wiki/GNU_toolchain. ns-3 uses gcc, GNU binutils, and gdb. However, we do not use the GNU build system tools, nor make directly. We use CMake for these functions.
On macOS, the toolchain used is Xcode. ns-3 users on a Mac are strongly encouraged to install Xcode and the command-line tools packages from the Apple App Store, and to look at the ns-3 installation guide for more information (https://www.nsnam.org/docs/installation/html/).
Typically an ns-3 author will work in Linux or a Unix-like environment. For those running under Windows, there do exist environments which simulate the Linux environment to various degrees. The ns-3 installation guide has information about Windows support (https://www.nsnam.org/docs/installation/html/windows.html).

3.5 Socket Programming

We will assume a basic facility with the Berkeley Sockets API in the examples used in this tutorial. If you are new to sockets, we recommend reviewing the API and some common usage cases. For a good overview of programming TCP/IP sockets we recommend TCP/IP Sockets in C, by Donahoo and Calvert.
There is an associated web site that includes source for the examples in the book, which you can find at: http://cs.baylor.edu/~donahoo/practical/CSockets/.
If you understand the first four chapters of the book (or, for those who do not have access to a copy of the book, the echo clients and servers shown on the website above), you will be in good shape to understand the tutorial. There is a similar book on multicast sockets (Multicast Sockets, by Makofske and Almeroth) that covers material you may need to understand if you look at the multicast examples in the ns-3 distribution.

GETTING STARTED

This section is aimed at getting a user to a working state starting with a machine that may never have had ns-3 installed. It covers supported platforms, prerequisites, ways to obtain ns-3, ways to build ns-3, and ways to verify your build and run simple programs.

4.1 Overview

ns-3 is built as a system of software libraries that work together. User programs can be written that link with (or import from) these libraries. User programs are written in either the C++ or Python programming languages.
ns-3 is distributed as source code, meaning that the target system needs to have a software development environment to build the libraries first, then build the user program. ns-3 could in principle be distributed as pre-built libraries for selected systems, and in the future it may be distributed that way, but at present, many users actually do their work by editing ns-3 itself, so having the source code around to rebuild the libraries is useful. If someone would like to undertake the job of making pre-built libraries and packages for operating systems, please contact the ns-developers mailing list.
In the following, we'll look at three ways of downloading and building ns-3. The first is to download and build an official release from the main web site. The second is to fetch and build development copies of a basic ns-3 installation. The third is to use an additional build tool to download more extensions for ns-3. We'll walk through each since the tools involved are slightly different.
Experienced Linux users may wonder at this point why ns-3 is not provided like most other libraries, using a package management tool. Although there exist some binary packages for various Linux distributions (e.g. Debian), most users end up editing and having to rebuild the ns-3 libraries themselves, so having the source code available is more convenient. We will therefore focus on a source installation in this tutorial.
For most uses of ns-3, root permissions are not needed, and the use of a non-privileged user account is recommended.

4.2 Prerequisites

The entire set of available ns-3 libraries has a number of dependencies on third-party libraries, but most of ns-3 can be built and used with support for a few common (often installed by default) components: a C++ compiler, an installation of Python, a source code editor (such as vim, emacs, or Eclipse) and, if using the development repositories, an installation of the Git source code control system. Most beginning users need not concern themselves if their configuration reports some missing optional features of ns-3, but for those wishing a full installation, the project provides an installation guide for various systems, available at https://www.nsnam.org/docs/installation/html/index.html.
As of the most recent release (ns-3.41), the following tools are needed to get started with ns-3:
Prerequisite    Package/version
C++ compiler    clang++ or g++ (g++ version 9 or greater)
Python          python3 (version 3.6 or above)
CMake           cmake (any recent version)
Build system    make, ninja, xcodebuild (XCode)
Git             any recent version (to access ns-3 from GitLab.com)
tar             any recent version (to unpack an ns-3 release)
bunzip2         any recent version (to uncompress an ns-3 release)
To check the default version of Python, type python3 -V. To check the default version of CMake, type cmake --version. If your installation is missing or too old, please consult the ns-3 installation guide for guidance.
From this point forward, we are going to assume that the reader is working in Linux, macOS, or a Linux emulation environment, and has at least the above prerequisites.
ns-3 should be placed in a directory path that contains no spaces. For example, do not use a directory path such as the below, because one of the parent directories contains a space in the directory name:
$ pwd
/home/user/5G simulations/ns-3-allinone/ns-3-dev

4.2.1 Downloading a release of ns-3 as a source archive

This option is for the new user who wishes to download and experiment with the most recently released and packaged version of . publishes its releases as compressed source archives, sometimes referred to as a tarball. A tarball is a particular format of software archive where multiple files are bundled together and the archive is usually compressed. The process for downloading via tarball is simple; you just have to pick a release, download it and uncompress it.
此选项适用于希望下载并尝试最近发布和打包的 版本的新用户。 将其发布为压缩的源代码存档,有时称为 tarball。 Tarball 是一种特定格式的软件存档,其中多个文件捆绑在一起,通常进行了压缩。 通过 tarball 下载 的过程很简单;您只需选择一个版本,下载并解压缩即可。
Let's assume that you, as a user, wish to build ns-3 in a local directory called workspace. If you adopt the workspace directory approach, you can get a copy of a release by typing the following into your Linux shell (substitute the appropriate version numbers, of course):
$ cd
$ mkdir workspace
$ cd workspace
$ wget https://www.nsnam.org/release/ns-allinone-3.41.tar.bz2
$ tar xjf ns-allinone-3.41.tar.bz2
Notice the use above of the wget utility, which is a command-line tool to fetch objects from the web; if you do not have this installed, you can use a browser for this step.
Following these steps, if you change into the directory ns-allinone-3.41, you should see a number of files and directories:
$ cd ns-allinone-3.41
$ ls
bake build.py constants.py netanim-3.109 ns-3.41 README.md util.py
You are now ready to build the base ns-3 distribution and may skip ahead to the section on building ns-3.

4.3 Downloading ns-3 using Git

The ns-3 code is available in Git repositories on the GitLab.com service at https://gitlab.com/nsnam/. The group name nsnam organizes the various repositories used by the open source project.
The simplest way to get started using Git repositories is to fork or clone the ns-3-allinone environment. This is a set of scripts that manages the downloading and building of the most commonly used subsystems of ns-3 for you. If you are new to Git, the terminology of fork and clone may be foreign to you; if so, we recommend that you simply clone (create your own replica of) the repository found on GitLab.com, as follows:
$ mkdir workspace
$ cd workspace
$ git clone https://gitlab.com/nsnam/ns-3-allinone.git
$ cd ns-3-allinone
At this point, your view of the ns-3-allinone directory is slightly different from the release archive described above: only the maintenance scripts are present, because the ns-3 sources themselves have not yet been fetched.
Note the presence of the download.py script, which will further fetch the ns-3 and related source code. At this point, you have a choice: either download the most recent development snapshot of ns-3:
$ python3 download.py
or specify a release of ns-3, using the -n flag to give a release number:
$ python3 download.py -n ns-3.41
After this step, the additional repositories of ns-3, bake, pybindgen, and netanim will be downloaded to the ns-3-allinone directory.

4.3.1 Downloading ns-3 Using Bake

The above two techniques (source archive, or ns-3-allinone repository via Git) are useful to get the most basic installation of ns-3 with a few add-ons (pybindgen for generating Python bindings, and netanim for network animations). The third repository provided by default in ns-3-allinone is called bake.
Bake is a tool for coordinated software building from multiple repositories, developed for the ns-3 project. Bake can be used to fetch development versions of the ns-3 software, and to download and build extensions to the base ns-3 distribution, such as the Direct Code Execution environment, the Network Simulation Cradle, the ability to create new Python bindings, and various ns-3 "apps". If you envision that your ns-3 installation may use advanced or optional features, you may wish to follow this installation path.
In recent ns-3 releases, Bake has been included in the release tarball. The configuration file included in the released version will allow one to download any software that was current at the time of the release. That is, for example, the version of Bake that is distributed with the ns-3.30 release can be used to fetch components for that ns-3 release or earlier, but can't be used to fetch components for later releases (unless the bakeconf.xml package description file is updated).
You can also get the most recent copy of bake by typing the following into your Linux shell (assuming you have installed Git):
$ cd
$ mkdir workspace
$ cd workspace
$ git clone https://gitlab.com/nsnam/bake.git
As the git command executes, you should see something like the following displayed:
Cloning into 'bake'...
remote: Enumerating objects: 2086, done.
remote: Counting objects: 100% (2086/2086), done.
remote: Compressing objects: 100% (649/649), done.
remote: Total 2086 (delta 1404), reused 2078 (delta 1399)
Receiving objects: 100% (2086/2086), 2.68 MiB | 3.82 MiB/s, done.
Resolving deltas: 100% (1404/1404), done.
After the clone command completes, you should have a directory called bake, the contents of which should look something like the following:
$ cd bake
$ ls
bake bakeconf.xml bake.py doc examples generate-binary.py test TODO
Notice that you have downloaded some Python scripts, a Python module called bake, and an XML configuration file. The next step will be to use those scripts to download and build the ns-3 distribution of your choice.
There are a few configuration targets available:
  1. ns-3.41: the code corresponding to the release
  2. ns-3-dev: a similar module but using the development code tree
  3. ns-allinone-3.41: the module that includes other optional features such as the bake build system, the netanim animator, and pybindgen
  4. ns-3-allinone: similar to the released version of the allinone module, but for development code
The current development snapshot (unreleased) of ns-3 may be found and cloned from https://gitlab.com/nsnam/ns-3-dev.git. The developers attempt to keep these repositories in consistent, working states, but they are in a development area with unreleased code present, so you may want to consider staying with an official release if you do not need newly-introduced features.
You can find the latest version of the code either by inspection of the repository list or by going to the "ns-3 Releases" web page and clicking on the latest release link. We'll proceed in this tutorial example with ns-3.41.
We are now going to use the bake tool to pull down the various pieces of ns-3 you will be using. First, we'll say a word about running bake.
Bake works by downloading source packages into a source directory and installing libraries into a build directory. Bake can be run by referencing the bake.py script directly, but if one chooses to run bake from outside of the directory it was downloaded into, it is advisable to put bake into your path, such as follows (Linux bash shell example). First, change into the 'bake' directory, and then set the following environment variables:
$ export BAKE_HOME=`pwd`
$ export PATH=$PATH:$BAKE_HOME/build/bin
$ export PYTHONPATH=$BAKE_HOME/build/lib
$ export LD_LIBRARY_PATH=$BAKE_HOME/build/lib
This will put the bake.py program into the shell's path, and will allow other programs to find executables and libraries created by bake. Although several bake use cases do not require setting PATH and PYTHONPATH as above, full builds of ns-3-allinone (with the optional packages) typically do.
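A small sanity check can confirm that the PATH change took effect. A sketch, assuming bake was cloned under ~/workspace/bake as in the steps above (substitute your own workspace path):

```shell
# Assumed location of the bake clone from the steps above.
BAKE_HOME="$HOME/workspace/bake"
PATH="$PATH:$BAKE_HOME/build/bin"

# Check whether bake's bin directory is now a PATH component.
case ":$PATH:" in
  *":$BAKE_HOME/build/bin:"*) on_path=yes ;;
  *)                          on_path=no ;;
esac
echo "bake bin dir on PATH: $on_path"
```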
Step into the workspace directory and type the following into your shell:
$ ./bake.py configure -e ns-allinone-3.41
Next, we'll ask bake to check whether we have enough tools to download various components. Type:
$ ./bake.py check
You should see something like the following:
> Python - OK
> GNU C++ compiler - OK
> Git - OK
> Tar tool - OK
> Unzip tool - OK
> Make - OK
> cMake - OK
> patch tool - OK
> Path searched for tools: /usr/local/sbin /usr/local/bin /usr/sbin /usr/bin /sbin ...
Please install missing tools at this stage, in the usual way for your system (if you are able to), or contact your system administrator as needed to install these tools.
Next, try to download the software:
$ ./bake.py download
should yield something like:
>> Searching for system dependency libxml2-dev - OK
>> Searching for system dependency gi-cairo - OK
>> Searching for system dependency gir-bindings - OK
>> Searching for system dependency pygobject - OK
>> Searching for system dependency pygraphviz - OK
>> Searching for system dependency python3-dev - OK
>> Searching for system dependency qt - OK
>> Searching for system dependency g++ - OK
>> Searching for system dependency cmake - OK
>> Downloading netanim-3.109 - OK
>> Downloading click-ns-3.37 - OK
>> Downloading BRITE - OK
>> Downloading openflow-dev - OK
>> Downloading ns-3.41 (target directory:ns-3.41) - OK
The above output shows that several sources have been downloaded. Check the source directory now and type ls; one should see:
$ cd source
$ ls
BRITE click-ns-3.37 netanim-3.109 ns-3.41 openflow-dev
You are now ready to build the ns-3 distribution.

4.4 Building ns-3
As with downloading ns-3, there are a few ways to build ns-3. The main thing that we wish to emphasize is the following: ns-3 is built with a build tool called CMake, described below. Most users will end up working most directly with the ns3 command-line wrapper for CMake, for the sake of convenience. Therefore, please have a look at build.py and building with bake before reading about CMake and the ns3 wrapper below.

4.4.1 Building with build.py

Note: This build step is only available from a source archive release described above; not from downloading via git or bake.
When working from a released tarball, a convenience script available as part of ns-3-allinone can orchestrate a simple build of the ns-3 components. This program is called build.py. It will get the project configured for you in the most commonly useful way. However, please note that more advanced configuration and work with ns-3 will typically involve using the native ns-3 build system, CMake, to be introduced later in this tutorial.
If you downloaded ns-3 using a tarball, you should have a directory called something like ns-allinone-3.41 under your workspace directory. Type the following:
$ ./build.py --enable-examples --enable-tests
Because we are working with examples and tests in this tutorial, and because they are not built by default in ns-3, the arguments for build.py tell it to build them for us. The program also defaults to building all available modules. Later, you can build ns-3 without examples and tests, or eliminate the modules that are not necessary for your work, if you wish.
You will see lots of compiler output messages displayed as the build script builds the various pieces you downloaded. First, the script will attempt to build the netanim animator, and then ns-3.

4.4.2 Building with bake

If you used bake above to fetch source code from project repositories, you may continue to use it to build ns-3. Type:
$ ./bake.py build
and you should see something like:
Building netanim-3.109 - OK
Building ns-3.41 - OK
There may be failures to build all components, but the build will proceed anyway if the component is optional.
If there happens to be a failure, please have a look at what the following command tells you; it may give a hint as to a missing dependency:
$ ./bake.py show
This will list out the various dependencies of the packages you are trying to build.

4.4.3 Building with the ns3 CMake wrapper

Up to this point, we have used either the build.py script or the bake tool to get started with building ns-3. These tools are useful for building ns-3 and the supporting libraries, and they call into the ns-3 directory to invoke the CMake build tool to do the actual building. CMake must be installed before building ns-3. To proceed, please change your working directory to the ns-3 directory that you initially built.
It's not strictly required at this point, but it will be valuable to take a slight detour and look at how to make changes to the configuration of the project. Probably the most useful configuration change you can make will be to build the optimized version of the code. By default, the project is configured with the default build profile, an optimized build that retains debug information (CMAKE_BUILD_TYPE=relwithdebinfo). Let's tell the project to make an optimized build.
To maintain a similar interface for command-line users, we include a wrapper script for CMake, ns3. To tell ns3 that it should do optimized builds that include the examples and tests, you will need to execute the following commands:
$ ./ns3 clean
$ ./ns3 configure --build-profile=optimized --enable-examples --enable-tests
This runs CMake out of the local directory (which is provided as a convenience for you). The first command to clean out the previous build is not typically strictly necessary but is good practice (but see Build Profiles, below); it will remove the previously built libraries and object files found in directory build/. When the project is reconfigured and the build system checks for various dependencies, you should see output that looks similar to the following:
$ ./ns3 configure --build-profile=optimized --enable-examples --enable-tests
-- CCache is enabled. Precompiled headers are disabled by default.
-- The CXX compiler identification is GNU 11.2.0
-- The C compiler identification is GNU 11.2.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Using default output directory /mnt/dev/tools/source/ns-3-dev/build
-- Found GTK3_GTK: /usr/lib/x86_64-linux-gnu/libgtk-3.so
-- GTK3 was found.
-- LibXML2 was found.
-- LibRT was found.
-- Visualizer requires Python bindings
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.74.0/BoostConfig.cmake (found version "1.74.0")
-- Found PkgConfig: /usr/bin/pkg-config (found version "0.29.2")
-- GSL was found.
-- Found Sphinx: /usr/bin/sphinx-build
-- Looking for sys/types.h
-- Looking for sys/types.h - found
-- Looking for stdint.h
-- Looking for stdint.h - found
-- Looking for stddef.h
-- Looking for stddef.h - found
-- Check size of long long
-- Check size of long long - done
-- Check size of int128_t
-- Check size of int128_t - failed
-- Check size of __int128_t
-- Check size of __int128_t - done
-- Performing Test has_hash___int128_t
-- Performing Test has_hash___int128_t - Success
-- Check size of unsigned long long
-- Check size of unsigned long long - done
-- Check size of uint128_t
-- Check size of uint128_t - failed
-- Check size of __uint128_t
-- Check size of __uint128_t - done
-- Performing Test has_hash___uint128_t
-- Performing Test has_hash___uint128_t - Success
-- Looking for C++ include inttypes.h
-- Looking for C++ include inttypes.h - found
-- Looking for C++ include stat.h
-- Looking for C++ include stat.h - not found
-- Looking for C++ include dirent.h
-- Looking for C++ include dirent.h - found
-- Looking for C++ include stdlib.h
-- Looking for C++ include stdlib.h - found
-- Looking for C++ include signal.h
-- Looking for C++ include signal.h - found
-- Looking for C++ include netpacket/packet.h
-- Looking for C++ include netpacket/packet.h - found
-- Looking for getenv
-- Looking for getenv - found
-- Processing src/antenna
-- Processing src/aodv
-- Processing src/applications
-- Processing src/bridge
-- Processing src/brite
-- Brite was not found
-- Processing src/buildings
-- Processing src/click
-- Click was not found
-- Processing src/config-store
-- Processing src/core
-- Looking for C++ include boost/units/quantity.hpp
-- Looking for C++ include boost/units/quantity.hpp - found
-- Looking for C++ include boost/units/systems/si.hpp
-- Looking for C++ include boost/units/systems/si.hpp - found
-- Boost Units have been found.
-- Processing src/csma
-- Processing src/csma-layout
-- Processing src/dsdv
-- Processing src/dsr
-- Processing src/energy
-- Processing src/fd-net-device
-- Looking for C++ include net/ethernet.h
-- Looking for C++ include net/ethernet.h - found
-- Looking for C++ include netpacket/packet.h
-- Looking for C++ include netpacket/packet.h - found
-- Looking for C++ include net/if.h
-- Looking for C++ include net/if.h - found
-- Looking for C++ include linux/if_tun.h
-- Looking for C++ include linux/if_tun.h - found
-- Looking for C++ include net/netmap_user.h
-- Looking for C++ include net/netmap_user.h - not found
-- Looking for C++ include sys/ioctl.h
-- Looking for C++ include sys/ioctl.h - found
-- Checking for module 'libdpdk'
-- No package 'libdpdk' found
-- Processing src/flow-monitor
-- Processing src/internet
-- Processing src/internet-apps
-- Processing src/lr-wpan
-- Processing src/lte
-- Processing src/mesh
-- Processing src/mobility
-- Processing src/netanim
-- Processing src/network
-- Processing src/nix-vector-routing
-- Processing src/olsr
-- Processing src/openflow
-- Openflow was not found
-- Processing src/point-to-point
-- Processing src/point-to-point-layout
-- Processing src/propagation
-- Processing src/sixlowpan
-- Processing src/spectrum
-- Processing src/stats
-- Processing src/tap-bridge
-- Processing src/test
-- Processing src/topology-read
-- Processing src/traffic-control
-- Processing src/uan
-- Processing src/virtual-net-device
-- Processing src/wifi
-- Processing src/wimax
-- ---- Summary of optional NS-3 features:
Build profile : optimized
Build directory : /mnt/dev/tools/source/ns-3-dev/build
BRITE Integration : OFF (missing dependency)
DES Metrics event collection : OFF (not requested)
DPDK NetDevice : OFF (missing dependency)
Emulation FdNetDevice : ON
Examples : ON
File descriptor NetDevice : ON
GNU Scientific Library (GSL) : ON
GtkConfigStore : ON
MPI Support : OFF (not requested)
NS-3 Click Integration : OFF (missing dependency)
NS-3 OpenFlow Integration : OFF (missing dependency)
Netmap emulation FdNetDevice : OFF (missing dependency)
PyViz visualizer : OFF (missing dependency)
Python Bindings : OFF (not requested)
Real Time Simulator : ON
SQLite stats support : ON
Tap Bridge : ON
Tap FdNetDevice : ON
Tests : ON
Modules configured to be built:
antenna                   aodv                      applications
bridge                    buildings                 config-store
core                      csma                      csma-layout
dsdv                      dsr                       energy
fd-net-device             flow-monitor              internet
internet-apps             lr-wpan                   lte
mesh                      mobility                  netanim
network                   nix-vector-routing        olsr
point-to-point            point-to-point-layout     propagation
sixlowpan                 spectrum                  stats
tap-bridge                test                      topology-read
traffic-control           uan                       virtual-net-device
wifi                      wimax
Modules that cannot be built:
brite click mpi
openflow visualizer
-- Configuring done
-- Generating done
-- Build files have been written to: /mnt/dev/tools/source/ns-3-dev/cmake-cache
Finished executing the following commands:
mkdir cmake-cache
cd cmake-cache; /usr/bin/cmake -DCMAKE_BUILD_TYPE=release -DNS3_NATIVE_OPTIMIZATIONS=ON -DNS3_EXAMPLES=ON -DNS3_TESTS=ON -G Unix Makefiles .. ; cd ..
Note the last part of the above output. Some ns-3 options are not enabled by default or require support from the underlying system to work properly (OFF (not requested)). Other options depend on third-party libraries; if a required library is not found, the corresponding ns-3 feature is disabled (OFF (missing dependency)) and a message is displayed. Note further that there is a feature to use the program sudo to set the suid bit of certain programs. This is not enabled by default, and so this feature is reported as "not enabled." Finally, to reprint this summary of which optional features are enabled, use the show config option to ./ns3.
Now go ahead and switch back to the debug build that includes the examples and tests.
$ ./ns3 clean
$ ./ns3 configure --build-profile=debug --enable-examples --enable-tests
The build system is now configured, and you can build the debug versions of the ns-3 programs by simply typing:
$ ./ns3 build
Although the above steps made you build the ns-3 part of the system twice, you now know how to change the configuration and build optimized code.
A command exists for checking which profile is currently active for an already configured project:
$ ./ns3 show profile
Build profile: debug
The build.py script discussed above also supports the --enable-examples and --enable-tests arguments and passes them through to the ns-3 configuration, but in general it does not directly support other ns3 options; for example, this will not work:
$ ./build.py --enable-asserts
will result in:
build.py: error: no such option: --enable-asserts
However, the special operator -- can be used to pass additional configure options through to ns3, so instead of the above, the following will work:
$ ./build.py -- --enable-asserts
as it generates the underlying command ./ns3 configure --enable-asserts.
Here are a few more introductory tips about CMake.

Handling build errors

ns-3 releases are tested against the most recent C++ compilers available in the mainstream Linux and macOS distributions at the time of the release. However, over time, newer distributions are released with newer compilers, and these newer compilers tend to be more pedantic about warnings. ns-3 configures its build to treat all warnings as errors, so it is sometimes the case, if you are using an older release version on a newer system, that a compiler warning will cause the build to fail.
For instance, ns-3.28 was released prior to Fedora 28, which included a new major version of gcc (gcc-8). Building ns-3.28 or older releases on Fedora 28, when GTK+2 is installed, will result in an error such as:
/usr/include/gtk-2.0/gtk/gtkfilechooserbutton.h:59:8: error: unnecessary parentheses in declaration of '__gtk_reserved1' [-Werror=parentheses]
 void (*__gtk_reserved1);
In releases starting with ns-3.28.1, an option is available in CMake to work around these issues. The option disables the inclusion of the '-Werror' flag to g++ and clang++. The option is '--disable-werror' and must be used at configure time; e.g.:
$ ./ns3 configure --disable-werror --enable-examples --enable-tests

Configure vs. Build

Some CMake commands are only meaningful during the configure phase and some are valid in the build phase. For example, if you wanted to use the emulation features of ns-3, you might want to enable setting the suid bit using sudo, as described above. This turns out to be a configuration-time command, and so you could reconfigure using the following command, which also includes the examples and tests.
$ ./ns3 configure --enable-sudo --enable-examples --enable-tests
If you do this, ns3 will have run sudo to change the socket creator programs of the emulation code to run as root.
There are many other configure- and build-time options available in ns3. To explore these options, type:

$ ./ns3 --help
We'll use some of the testing-related commands in the next section.

Build Profiles

We already saw how you can configure CMake for debug or optimized builds:
$ ./ns3 configure --build-profile=debug
There is also an intermediate build profile, release. -d is a synonym for --build-profile.
The build profile controls the use of logging, assertions, and compiler optimization:
Table 1: Build profiles

Feature             debug                     default                   release                   optimized
Enabled features    NS3_BUILD_PROFILE_DEBUG   NS3_BUILD_PROFILE_DEBUG   NS3_BUILD_PROFILE_RELEASE NS3_BUILD_PROFILE_OPTIMIZED
                    NS_LOG..., NS_ASSERT...   NS_LOG..., NS_ASSERT...
Code wrapper macro  NS_BUILD_DEBUG(code)      NS_BUILD_DEBUG(code)      NS_BUILD_RELEASE(code)    NS_BUILD_OPTIMIZED(code)
Compile flags       -g                        -O2 -g                    -O3                       -O3 -march=native -mtune=native
As you can see, logging and assertions are only configured by default in the debug and default build profiles, although they can be selectively enabled in other build profiles by using the --enable-logs and --enable-asserts flags at CMake configuration time. Recommended practice is to develop your scenario in debug mode, then conduct repetitive runs (for statistics or changing parameters) in the optimized build profile.
If you have code that should only run in specific build profiles, use the indicated Code Wrapper macro:
NS_BUILD_DEBUG(std::cout << "Part of an output line..." << std::flush; timer.Start());
DoLongInvolvedComputation();
NS_BUILD_DEBUG(timer.Stop(); std::cout << "Done: " << timer << std::endl;)
By default, ns3 puts the build artifacts in the build directory. You can specify a different output directory with the --out option, e.g.
$ ./ns3 configure --out=my-build-dir
Combining this with build profiles lets you switch between the different compile options in a clean way:
$ ./ns3 configure --build-profile=debug --out=build/debug
$ ./ns3 build
...
$ ./ns3 configure --build-profile=optimized --out=build/optimized
$ ./ns3 build
This allows you to work with multiple builds rather than always overwriting the last build. When you switch, ns3 will only compile what it has to, instead of recompiling everything.
When you do switch build profiles like this, you have to be careful to give the same configuration parameters each time. It may be convenient to define some environment variables to help you avoid mistakes:
$ export NS3CONFIG="--enable-examples --enable-tests"
$ export NS3DEBUG="--build-profile=debug --out=build/debug"
$ export NS3OPT="--build-profile=optimized --out=build/optimized"
$ ./ns3 configure $NS3CONFIG $NS3DEBUG
$ ./ns3 build
$ ./ns3 configure $NS3CONFIG $NS3OPT
$ ./ns3 build
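Before running them, you can echo the expanded configure commands to confirm the variables are spelled correctly. A dry-run sketch, with the variable values mirroring those above:

```shell
# Same settings as the environment variables above.
NS3CONFIG="--enable-examples --enable-tests"
NS3DEBUG="--build-profile=debug --out=build/debug"
NS3OPT="--build-profile=optimized --out=build/optimized"

# Build the full command strings without invoking ./ns3.
debug_cmd="./ns3 configure $NS3CONFIG $NS3DEBUG"
opt_cmd="./ns3 configure $NS3CONFIG $NS3OPT"
echo "$debug_cmd"
echo "$opt_cmd"
```

A typo such as NS3OPT== or $NS30PT (zero instead of the letter O) shows up immediately in the echoed command.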

Compilers and Flags

In the examples above, CMake uses the GCC C++ compiler, g++, for building ns-3. However, it's possible to change the C++ compiler used by CMake by defining the CXX environment variable. For example, to use the Clang C++ compiler, clang++:
$ CXX="clang++" ./ns3 configure
$ ./ns3 build
One can also set up ns3 to do distributed compilation with distcc in a similar way:
$ CXX="distcc g++" ./ns3 configure
$ ./ns3 build
More info on distcc and distributed compilation can be found on its project page under the Documentation section.
To add compiler flags, use the CXXFLAGS_EXTRA environment variable when you configure ns-3.

Install

ns3 may be used to install libraries in various places on the system. The default location where libraries and executables are built is in the build directory, and because ns3 knows the location of these libraries and executables, it is not necessary to install the libraries elsewhere.
If users choose to install things outside of the build directory, users may issue the ./ns3 install command. By default, the prefix for installation is /usr/local, so ./ns3 install will install programs into /usr/local/bin, libraries into /usr/local/lib, and headers into /usr/local/include. Superuser privileges are typically needed to install to the default prefix, so the typical command would be sudo ./ns3 install. When running programs with ns3, ns3 will first prefer to use shared libraries in the build directory, then will look for libraries in the library path configured in the local environment. So when installing libraries to the system, it is good practice to check that the intended libraries are being used.
Users may choose to install to a different prefix by passing the --prefix option at configure time, such as:
$ ./ns3 configure --prefix=/opt/local
If the user later issues the ./ns3 install command after the build, the prefix /opt/local will be used.
The ./ns3 clean command should be used prior to reconfiguring the project if ns3 will be used to install things at a different prefix.
In summary, it is not necessary to call ./ns3 install to use ns-3. Most users will not need this command since ns3 will pick up the current libraries from the build directory, but some users may find it useful if their use case involves working with programs outside of the ns-3 directory.
Clean
Cleaning refers to the removal of artifacts (e.g. files) generated or edited by the build process. There are different levels of cleaning possible:
Scope       Command            Description
clean       ./ns3 clean        Remove artifacts generated by the CMake configuration and the build
distclean   ./ns3 distclean    Remove artifacts from the configuration, build, documentation, test and Python caches
ccache      ccache --clear     Remove all compiled artifacts from the ccache
clean can be used if the focus is on reconfiguring the way that ns-3 is presently being compiled. distclean can be used if the focus is on restoring the ns-3 directory to an original state.
The ccache lies outside of the ns-3 directory (typically in a hidden directory at ~/.cache/ccache) and is shared across projects. Users should be aware that cleaning the ccache will cause cache misses on other build directories outside of the current working directory. Cleaning this cache periodically may be helpful to reclaim disk space. Cleaning the ccache is completely separate from cleaning any files within the ns-3 directory.
Because clean operations involve removing files, ns3 conservatively refuses to remove files if one of the deleted files or directories lies outside of the current working directory. When in doubt about what a clean command will do, users may wish to precede the actual clean with a dry run, because a dry run will print the warning if one exists. For example:
$ ./ns3 clean --dry-run
$ ./ns3 clean

One ns3

There is only one ns3 script, at the top level of the source tree. As you work, you may find yourself spending a lot of time in scratch/, or deep in src/..., and needing to invoke ns3. You could just remember where you are, and invoke ns3 via its relative path (something like ../../../ns3), but that gets tedious and error prone, and there are better solutions.
One common way when using a text-based editor such as emacs or vim is to open two terminal sessions and use one to build ns-3 and one to edit source code.
If you only have the tarball, an environment variable can help:
$ export NS3DIR="$PWD"
$ function ns3f { cd $NS3DIR && ./ns3 $* ; }
$ cd scratch
$ ns3f build
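One side effect of the function above is that every call leaves you in $NS3DIR. Wrapping the body in parentheses runs it in a subshell, so your working directory stays untouched. The sketch below demonstrates the subshell behaviour, with /tmp standing in for the real ns-3 checkout (the real body would be `cd "$NS3DIR" && ./ns3 "$@"`):

```shell
NS3DIR="/tmp"                          # stand-in for the real ns-3 checkout
# Parentheses run the body in a subshell, so the caller's cwd is preserved.
ns3f() { ( cd "$NS3DIR" && pwd ) ; }

before=$(pwd)
ns3f                                   # runs inside $NS3DIR
[ "$before" = "$(pwd)" ] && echo "cwd unchanged"
```

This keeps the convenience of the wrapper without silently moving you out of the directory you were editing in.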
It might be tempting in a module directory to add a trivial ns3 script along the lines of exec ../../ns3. Please don't. It's confusing to newcomers, and when done poorly it leads to subtle build errors. The solutions above are the way to go.

4.4.4 Building with CMake

The ns3 wrapper script calls CMake directly, mapping Waf-like options to the verbose settings used by CMake. Calling ./ns3 will execute a series of commands, that will be shown at the end of their execution. The execution of those underlying commands can be skipped to just print them using ./ns3 --dry-run.
Here are a few examples showing why we suggest the use of the ns3 wrapper script.

Configuration command

$ ./ns3 configure --enable-tests --enable-examples -d optimized
Corresponds to:

$ cd cmake-cache/
$ cmake -DCMAKE_BUILD_TYPE=release -DNS3_NATIVE_OPTIMIZATIONS=ON -DNS3_ASSERT=OFF -DNS3_LOG=OFF -DNS3_TESTS=ON -DNS3_EXAMPLES=ON ..

Build command

To build a specific target such as test-runner we use the following ns3 command:

$ ./ns3 build test-runner
Which corresponds to the following commands:

$ cmake --build . -j 16 --target test-runner # This command builds the test-runner target with the underlying build system
To build all targets such as modules, examples and tests, we use the following ns3 command:

$ ./ns3 build
Which corresponds to:
$ cd /ns-3-dev/cmake-cache/
$ cmake --build . -j 16 # This command builds all the targets with the underlying build system

Run command

$ ./ns3 run test-runner

Corresponds to:

$ cd /ns-3-dev/cmake-cache/
$ cmake --build . -j 16 --target test-runner # This command builds the test-runner target calling the underlying build system
$ export PATH=$PATH:/ns-3-dev/build/:/ns-3-dev/build/lib:/ns-3-dev/build/bindings/python # export library paths
$ export LD_LIBRARY_PATH=/ns-3-dev/build/:/ns-3-dev/build/lib:/ns-3-dev/build/bindings/python
$ export PYTHONPATH=/ns-3-dev/build/:/ns-3-dev/build/lib:/ns-3-dev/build/bindings/python
$ /ns-3-dev/build/utils/ns3-dev-test-runner-optimized # call the executable with the real path
Note: the command above would fail if ./ns3 build was not executed first, since the examples won't be built by the test-runner target.
On Windows, the Msys2/MinGW64/bin directory path must be on the PATH environment variable, otherwise the DLLs required by the C++ runtime will not be found, resulting in crashes without any explicit reason.
Note: The ns-3 script adds only the ns-3 lib directory path to the PATH, ensuring the ns-3 DLLs will be found by running programs. If you are using CMake directly or an IDE, make sure to also include the path to ns-3-dev/build/lib in the PATH variable.
If you are using one of Windows's terminals (CMD, PowerShell or Terminal), you can use the setx command to change environment variables permanently, or set to set them temporarily for that shell:
C:\Windows\system32>echo %PATH%
C:\Windows\system32;C:\Windows;D:\tools\msys64\mingw64\bin;

C:\Windows\system32>setx PATH "%PATH%;D:\tools\msys64\usr\bin;" /m

C:\Windows\system32>echo %PATH%
C:\Windows\system32;C:\Windows;D:\tools\msys64\mingw64\bin;D:\tools\msys64\usr\bin;
Note: running on an administrator terminal will change the system PATH, while a user terminal will change the user PATH, unless the /m flag is added.

4.4.5 Building with IDEs

With CMake, IDE integration is much easier. We list the steps on how to use ns-3 with a few IDEs.

Microsoft Visual Studio Code

Start by downloading VS Code.

Then install it, followed by the CMake and C++ plugins. This can be done by accessing the extensions menu button on the left. It will take a while, but it will locate the available toolchains for you to use.

After that, open the ns-3-dev folder. It should run CMake automatically and preconfigure the project. After this happens, you can choose ns-3 features by opening the CMake cache and toggling them on or off. Just as an example, this is how to enable examples.

After saving the cache, CMake will run, refreshing the cache. Then VS Code will update its list of targets on the left side of the screen in the CMake menu.

After selecting a target in the left side menu, there are options to build, run or debug it. Any of them will automatically build the selected target. If you choose run or debug, the executable targets will be executed. You can open the source files you want, set some breakpoints and then click debug to visually debug programs.

Note: If you are running on Windows, you need to manually add your ns-3 library directory to the PATH environment variable. This can be accomplished in two ways.
The first is to set VS Code's settings.json file to include the following:

"cmake.environment": {
    "PATH": "${env:PATH};${workspaceFolder}/build/lib"
}
The second, a more permanent solution, is to use the following command:
> echo %PATH%
C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;
C:\Windows\System32\OpenSSH\;C:\Program Files\dotnet\;C:\Program Files\PuTTY\;C:\Program Files\VSCodium\bin;
C:\Program Files\Meld\;C:\Users\username\AppData\Local\Microsoft\WindowsApps;

> setx PATH "%PATH%;C:\path\to\ns-3-dev\build\lib"
SUCCESS: Specified value was saved.

> echo %PATH%
C:\Windows\System32\WindowsPowerShell\v1.0;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;
...
C:\Program Files\VSCodium\bin;C:\Program Files\Meld;C:\Users\username\AppData\Local\Microsoft\WindowsApps;
C:\tools\source\ns-3-dev\build\lib;

If you do not set up your PATH environment variable, you may run into debugging problems that look like the following:

=thread-group-added,id="i1"
GNU gdb (GDB) 14.1
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
...
ERROR: Unable to start debugging. GDB exited unexpectedly.
The program 'C:\tools\source\ns-3-dev\build\examples\wireless\ns3-dev-wifi-he-network-debug.exe' has exited with code 0 (0x00000000).
ERROR: During startup program exited with code 0xc0000135.
Or:

=thread-group-added,id="i1"
GNU gdb (GDB) 14.1
Copyright (C) 2023 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
...
ERROR: Unable to start debugging. Unexpected GDB output from command "-exec-run". During startup program exited with code 0xc0000135.
The program 'C:\tools\source\ns-3-dev\build\examples\wireless\ns3-dev-wifi-he-network-debug.exe' has exited with code 0 (0x00000000).
JetBrains CLion

Start by downloading CLion.

The following image contains the toolchain configuration window for CLion running on Windows (only WSLv2 is currently supported).

CLion uses Makefiles for your platform as the default generator. Here you can choose a better generator like Ninja by setting the CMake options flag to -G Ninja. You can also set options to enable examples (-DNS3_EXAMPLES=ON) and tests (-DNS3_TESTS=ON).

To refresh the CMake cache, triggering the discovery of new targets (libraries, executables and/or modules), you can either configure to re-run CMake automatically after editing CMake files (pretty slow and easily triggered) or reload it manually. The following image shows how to trigger the CMake cache refresh.

[Screenshot: CLion Settings > Build, Execution, Deployment > Toolchains, showing a WSL toolchain (default) with its CMake, C/C++ compilers and GDB debugger detected.]

After configuring the project, the available targets are listed in a drop-down list on the top right corner. Select the target you want and then click the hammer symbol to build, as shown in the image below.


If you have selected an executable target, you can click either the play button to execute the program; the bug button to debug the program; or the play button with a chip to run Valgrind and analyze memory usage, leaks and so on.

Code::Blocks

Start by installing Code::Blocks.

Code::Blocks does not support CMake projects natively, but we can use the corresponding CMake generator to produce a project file for it. The generator name depends on the operating system and underlying build system: https://cmake.org/cmake/help/latest/generator/CodeBlocks.html

$ ./ns3 configure -G"CodeBlocks - Ninja" --enable-examples
...
-- Build files have been written to: /ns-3-dev/cmake-cache

There will be a NS3.cbp file inside the cache folder used during configuration (in this case cmake-cache). This is a Code::Blocks project file that can be opened by the IDE.

When you first open the IDE, you will be greeted by a window asking you to select the compiler you want.

After that you will get into the landing page where you can open the project.

Loading it will take a while.

After that we can select a target in the top menu (where it says "all") and click to build, run or debug. We can also set breakpoints on the source code.

After clicking to build, the build commands of the underlying build system will be printed in the tab at the bottom. If you clicked to debug, the program will start automatically and stop at the first breakpoint.


[Screenshot: Code::Blocks compilers auto-detection dialog. GNU GCC and LLVM Clang are detected; the other listed compilers (Intel C/C++, Small Device C, Tiny C, and the GCC cross-compilers for ARM, AVR, LM8 and LM32) are not found. A note warns that at least one compiler's master path is still empty and should be corrected later in the compiler options.]

You can inspect memory and the current stack by enabling those views in Debug->Debugging Windows->Watches and Call Stack. Using the debugging buttons, you can advance line by line, or continue until the next breakpoint.


Note: as Code::Blocks doesn't natively support CMake projects, it doesn't refresh the CMake cache, which means you will need to close the project and re-run the ./ns3 configure command to refresh the CMake caches after adding/removing source files to/from the CMakeLists.txt files, adding a new module, or adding dependencies between modules.

Apple XCode

Start by installing XCode. Then open it for the first time and accept the license. Then open Xcode->Preferences->Locations and select the command-line tools location.


XCode does not support CMake projects natively, but we can use the corresponding CMake generator to produce a project file for it. The generator name depends on the operating system and underlying build system: https://cmake.org/cmake/help/latest/generator/Xcode.html

$ ./ns3 configure -GXcode --enable-examples
...
-- Build files have been written to: /ns-3-dev/cmake-cache

There will be a NS3.xcodeproj file inside the cache folder used during configuration (in this case cmake-cache). This is a XCode project file that can be opened by the IDE.

Loading the project will take a while, and you will be greeted with the following prompt. Select to automatically create the schemes.


After that we can select a target in the top menu and click run, which will build and run it (if executable), or debug it if built with debugging symbols.

After clicking to build, the build will start and progress is shown in the top bar.

Before debugging starts, Xcode will request for permissions to attach to the process (as an attacker could pretend to be a debugging tool and steal data from other processes).

After attaching, we are greeted with profiling information and call stack on the left panel, source code, breakpoint and warnings on the central panel. At the bottom there are the memory watches panel in the left and the output panel on the right, which is also used to read the command line.

Note: as XCode doesn't natively support CMake projects, it doesn't refresh the CMake cache, which means you will need to close the project and re-run the ./ns3 configure command to refresh the CMake caches after adding/removing source files to/from the CMakeLists.txt files, adding a new module, or adding dependencies between modules.

4.5 Testing ns-3

You can run the unit tests of the ns-3 distribution by running the ./test.py script:


$ ./test.py --no-build

These tests are run in parallel by ns3. You should eventually see a report saying that

92 of 92 tests passed (92 passed, 0 failed, 0 crashed, 0 valgrind errors)

This is the important message to check for; failures, crashes, or valgrind errors indicate problems with the code or incompatibilities between the tools and the code.
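In scripted runs (e.g. continuous integration), that summary line can be checked mechanically. The helper below is a hypothetical sketch, not part of ns-3; it simply pattern-matches the "0 failed" and "0 crashed" counts:

```shell
# Hypothetical CI check: succeed only if the summary line reports
# zero failures and zero crashes.
check_summary() {
    case "$1" in
        *" 0 failed,"*" 0 crashed,"*) echo "OK" ;;
        *) echo "FAILURES" ; return 1 ;;
    esac
}

check_summary "739 of 742 tests passed (739 passed, 3 skipped, 0 failed, 0 crashed, 0 valgrind errors)"
# OK
```

Skipped tests are tolerated by this check, matching the guidance above that only failures, crashes and valgrind errors indicate a real problem.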

You will also see the summary output from ns3 and the test runner executing each test, which will actually look something like:

-- CCache is enabled
-- The CXX compiler identification is GNU 11.2.0
-- The C compiler identification is GNU 11.2.0
...
-- Configuring done
-- Generating done
-- Build files have been written to: /ns-3-dev/cmake-cache
Scanning dependencies of target tap-creator
[  1%] Building CXX object src/fd-net-device/CMakeFiles/tap-device-creator.dir/helper/tap-device-creator.cc.o
[  1%] Building CXX object src/tap-bridge/CMakeFiles/tap-creator.dir/model/tap-creator.cc.o
[  1%] Building CXX object src/fd-net-device/CMakeFiles/raw-sock-creator.dir/helper/creator-utils.cc.o
[  1%] Building CXX object src/tap-bridge/CMakeFiles/tap-creator.dir/model/tap-encode-decode.cc.o
[  1%] Linking CXX executable ../../../build/src/fd-net-device/ns3-dev-tap-device-creator
...
[100%] Linking CXX executable ../../../build/examples/matrix-topology/ns3-dev-matrix-topology
[100%] Built target manet-routing-compare
[100%] Built target matrix-topology

[1/742] PASS: TestSuite aodv-routing-id-cache
[2/742] PASS: TestSuite routing-aodv
[3/742] PASS: TestSuite uniform-planar-array-test
[4/742] PASS: TestSuite angles
...
[740/742] PASS: Example src/wifi/examples/wifi-manager-example --wifiManager=MinstrelHt --standard=802.11ax-6GHz --serverChannelWidth=160 --clientChannelWidth=160 --serverShortGuardInterval=3200 --clientShortGuardInterval=3200 --serverNss=4 --clientNss=4 --stepTime=0.1
[741/742] PASS: Example src/lte/examples/lena-radio-link-failure --numberOfEnbs=2 --useIdealRrc=0 --interSiteDistance=700 --simTime=17
[742/742] PASS: Example src/lte/examples/lena-radio-link-failure --numberOfEnbs=2 --interSiteDistance=700 --simTime=17
739 of 742 tests passed (739 passed, 3 skipped, 0 failed, 0 crashed, 0 valgrind errors)

This command is typically run by users to quickly verify that an ns-3 distribution has built correctly. (Note that the order of the PASS: ... lines can vary, which is okay. What's important is that the summary line at the end reports that all tests passed; none failed or crashed.)

Both ns3 and test.py will split up the job across the available CPU cores of the machine, running in parallel.

4.6 Running a Script

We typically run scripts under the control of ns3. This allows the build system to ensure that the shared library paths are set correctly and that the libraries are available at run time. To run a program, simply use the --run option in ns3. Let's run the ns-3 equivalent of the ubiquitous hello world program by typing the following:

$ ./ns3 run hello-simulator

ns3 first checks to make sure that the program is built correctly and executes a build if required. ns3 then executes the program, which produces the following output.

Hello Simulator

Congratulations! You are now an ns-3 user!

What do I do if I don't see the output?

If you see ns3 messages indicating that the build was completed successfully, but do not see the "Hello Simulator" output, chances are that you have switched your build mode to optimized in the Building with the ns3 CMake wrapper section, but have missed the change back to debug mode. All of the console output used in this tutorial uses a special ns-3 logging component that is useful for printing user messages to the console. Output from this component is automatically disabled when you compile optimized code - it is "optimized out." If you don't see the "Hello Simulator" output, type the following:

\$ ./ns3 configure --build-profile=debug --enable-examples --enable-tests

to tell ns3 to build the debug versions of the ns-3 programs that include the examples and tests. You must still build the actual debug version of the code by typing

$ ./ns3

Now, if you run the hello-simulator program, you should see the expected output.

\subsection*{4.6.1 Program Arguments}

To feed command line arguments to an ns-3 program, use this pattern:

\$ ./ns3 run <ns3-program> --command-template="\%s <args>"

Substitute your program name for <ns3-program>, and the arguments for <args>. The --command-template argument to ns3 is basically a recipe for constructing the actual command line ns3 should use to execute the program. ns3 checks that the build is complete, sets the shared library paths, then invokes the executable using the provided command line template, inserting the program name for the %s placeholder.

If you find the above to be syntactically complicated, a simpler variant exists, which is to include the ns-3 program and its arguments enclosed by single quotes, such as:

\$ ./ns3 run '<ns3-program> --arg1=value1 --arg2=value2 ...'

Another particularly useful example is to run a test suite by itself. Let's assume that a mytest test suite exists (it doesn't). Above, we used the ./test.py script to run a whole slew of tests in parallel, by repeatedly invoking the real testing program, test-runner. To invoke test-runner directly for a single test:

\$ ./ns3 run test-runner --command-template="\%s --suite=mytest --verbose"

This passes the arguments to the test-runner program. Since mytest does not exist, an error message will be generated. To print the available test-runner options:

\$ ./ns3 run test-runner --command-template="\%s --help"

\subsection*{4.6.2 Debugging}

To run ns-3 programs under the control of another utility, such as a debugger (e.g. gdb) or memory checker (e.g. valgrind), use a similar --command-template="..." form.

For example, to run your ns-3 program hello-simulator with the arguments <args> under the gdb debugger:

\$ ./ns3 run hello-simulator --command-template="gdb \%s --args <args>"

Notice that the ns-3 program name goes with the --run argument, and the control utility (here gdb) is the first token in the --command-template argument. The --args option tells gdb that the remainder of the command line belongs to the "inferior" program. (Some versions of gdb don't understand the --args feature. In this case, omit the program arguments from the --command-template and use the gdb command set args.)
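For the set args alternative just mentioned, a gdb session might look like the following (an illustrative transcript, not captured output; <args> stands for your program's actual arguments):

```text
$ ./ns3 run hello-simulator --command-template="gdb %s"
(gdb) set args <args>
(gdb) run
```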

We can combine this recipe and the previous one to run a test under the debugger:

\$ ./ns3 run test-runner --command-template="gdb \%s --args --suite=mytest --verbose"

\subsection*{4.6.3 Working Directory}

ns3 needs to run from its location at the top of the ns-3 tree. This becomes the working directory where output files will be written. But what if you want to keep those files out of the ns-3 source tree? Use the --cwd argument:

\$ ./ns3 run program-name --cwd=...

We mention this --cwd command for completeness; most users will simply run ns3 from the top-level directory and generate the output data files there.

\subsection*{4.6.4 Running without Building}

As of the ns-3.30 release, a new ns3 option was introduced to allow the running of programs while skipping the build step. This can reduce the time to run programs when, for example, running the same program repeatedly through a shell script, or when demonstrating program execution. The option --no-build modifies the run option, skipping the build steps of the program and required ns-3 libraries.

\$ ./ns3 run '<ns3-program> --arg1=value1 --arg2=value2 ...' --no-build

\subsection*{4.6.5 Build version}

As of the ns-3.32 release, a new ns3 configure option --enable-build-version was introduced which inspects the local ns-3 git repository during builds and adds version metadata to the core module.

This configuration option has the following prerequisites:
- The ns-3 directory must be part of a local git repository
- The local git repository must have at least one ns-3 release tag

or
- A file named version.cache, containing version information, is located in the src/core directory

If these prerequisites are not met, the configuration will fail.

When these prerequisites are met and ns-3 is configured with the --enable-build-version option, the ns3 command show version can be used to query the local git repository and display the current version metadata.

$ ./ns3 show version

ns3 will collect information about the build and print out something similar to the output below.

ns-3.33+249@g80e0dd0-dirty-debug

If show version is run when --enable-build-version was not configured, an error message indicating that the option is disabled will be displayed instead.

Build version support is not enabled, reconfigure with --enable-build-version flag

The build information is generated by examining the current state of the git repository. The output of show version will change whenever the state of the active branch changes.

The output of show version has the following format:

<version_tag>[+closest_tag][+distance_from_tag]@<commit_hash>[-tree_state]-<profile>

version_tag version_tag contains the version of the ns-3 code. The version tag is defined as a git tag with the format ns-3*. If multiple git tags match the format, the tag on the active branch which is closest to the current commit is chosen.

closest_tag closest_tag is similar to version_tag except it is the first tag found, regardless of format. The closest tag is not included in the output when closest_tag and version_tag have the same value.

distance_from_tag distance_from_tag contains the number of commits between the current commit and closest_tag. distance_from_tag is not included in the output when the value is 0 (i.e. when closest_tag points to the current commit).

commit_hash commit_hash is the hash of the commit at the tip of the active branch. The value is 'g' followed by the first 7 characters of the commit hash. The 'g' prefix is used to indicate that this is a git hash.

tree_state tree_state indicates the state of the working tree. When the working tree has uncommitted changes this field has the value 'dirty'. The tree state is not included in the version output when the working tree is clean (e.g. when there are no uncommitted changes).

profile The build profile specified in the --build-profile option passed to ns3 configure
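As an illustration of this format, the fields can be picked apart with ordinary string handling. The helpers below are hypothetical (standalone C++, not part of ns-3) and simply assume the layout described above:

```cpp
#include <cassert>
#include <regex>
#include <string>

// Hypothetical helpers (plain C++, not part of ns-3) that extract fields from
// a version string shaped like:
//   <version_tag>[+closest_tag][+distance_from_tag]@<commit_hash>[-tree_state]-<profile>

// The version tag runs from the start of the string to the first '+' or '@'.
std::string ExtractVersionTag(const std::string& version)
{
    return version.substr(0, version.find_first_of("+@"));
}

// The commit hash is the token following '@', up to the next '-'.
std::string ExtractCommitHash(const std::string& version)
{
    std::smatch match;
    static const std::regex hash("@([^-]+)");
    return std::regex_search(version, match, hash) ? match[1].str() : "";
}
```

For the sample string shown earlier, ExtractVersionTag yields "ns-3.33" and ExtractCommitHash yields "g80e0dd0".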

A new class, named Version, has been added to the core module. The Version class contains functions to retrieve individual fields of the build version as well as functions to print the full build version like show version. The build-version-example application provides an example of how to use the Version class to retrieve the various build version fields. See the documentation for the Version class for specifics on the output of the Version class functions.

The version information stored in the Version class is updated every time the git repository changes. This may lead to frequent recompilations/linking of the core module when the --enable-build-version option is configured.

build-version-example:
Program Version (according to CommandLine): ns-3.33+249@g80e0dd0-dirty-debug

Version fields:
LongVersion:        ns-3.33+249@g80e0dd0-dirty-debug
ShortVersion:       ns-3.33+*
BuildSummary:       ns-3.33+*
VersionTag:         ns-3.33
Major:              3
Minor:              33
Patch:              0
ReleaseCandidate:
ClosestAncestorTag: ns-3.33
TagDistance:        249
CommitHash:         g80e0dd0
BuildProfile:       debug
WorkingTree:        dirty
The CommandLine class has also been updated to support the --version option which will print the full build version and exit.

$ ./ns3 run "command-line-example --version" --no-build

ns-3.33+249@g80e0dd0-dirty-debug

If the --enable-build-version option was not configured, --version will print out a message similar to show version indicating that the build version option is not enabled.

\subsection*{4.6.6 Source version}

An alternative to storing build version information in the ns-3 libraries is to track the source code version used to build the code. When using Git, the following recipe can be added to Bash shell scripts to create a version.txt file with Git revision information, appended with a patch of any changes to that revision if the repository is dirty. The resulting text file can then be saved with any corresponding ns-3 simulation results.

echo `git describe` > version.txt
gitDiff=`git diff`
if [[ $gitDiff ]]
then
    echo "$gitDiff" >> version.txt
fi

\section*{CONCEPTUAL OVERVIEW}

The first thing we need to do before actually starting to look at or write ns-3 code is to explain a few core concepts and abstractions in the system. Much of this may appear transparently obvious to some, but we recommend taking the time to read through this section just to ensure you are starting on a firm foundation.

\subsection*{5.1 Key Abstractions}

In this section, we'll review some terms that are commonly used in networking, but have a specific meaning in ns-3.

\subsection*{5.1.1 Node}

In Internet jargon, a computing device that connects to a network is called a host or sometimes an end system. Because ns-3 is a network simulator, not specifically an Internet simulator, we intentionally do not use the term host since it is closely associated with the Internet and its protocols. Instead, we use a more generic term also used by other simulators that originates in Graph Theory - the node.

In ns-3 the basic computing device abstraction is called the node. This abstraction is represented in C++ by the class Node. The Node class provides methods for managing the representations of computing devices in simulations.

You should think of a Node as a computer to which you will add functionality. One adds things like applications, protocol stacks and peripheral cards with their associated drivers to enable the computer to do useful work. We use the same basic model in ns-3.

\subsection*{5.1.2 Application}

Typically, computer software is divided into two broad classes. System Software organizes various computer resources such as memory, processor cycles, disk, network, etc., according to some computing model. System software usually does not use those resources to complete tasks that directly benefit a user. A user would typically run an application that acquires and uses the resources controlled by the system software to accomplish some goal.

Often, the line of separation between system and application software is made at the privilege level change that happens in operating system traps. In ns-3 there is no real concept of operating system and especially no concept of privilege levels or system calls. We do, however, have the idea of an application. Just as software applications run on computers to perform tasks in the "real world," ns-3 applications run on ns-3 Nodes to drive simulations in the simulated world.

In ns-3 the basic abstraction for a user program that generates some activity to be simulated is the application. This abstraction is represented in C++ by the class Application. The Application class provides methods for managing the representations of our version of user-level applications in simulations. Developers are expected to specialize the Application class in the object-oriented programming sense to create new applications. In this tutorial, we will use specializations of class Application called UdpEchoClientApplication and UdpEchoServerApplication.

As you might expect, these applications compose a client/server application set used to generate and echo simulated network packets.
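"Specializing the Application class in the object-oriented programming sense" simply means ordinary C++ inheritance. The toy classes below (not the real ns-3 implementations, which carry much more state and behavior) sketch the pattern:

```cpp
#include <cassert>
#include <string>

// Toy classes (not the real ns-3 implementations) showing what it means to
// specialize an Application base class in the object-oriented sense.
class Application
{
  public:
    virtual ~Application() = default;

    // The base class defines the interface the simulator drives.
    virtual std::string Name() const { return "Application"; }
};

class UdpEchoServerApplication : public Application
{
  public:
    // The specialization overrides behavior while keeping the interface.
    std::string Name() const override { return "UdpEchoServerApplication"; }
};
```

A base-class pointer or reference to the specialized object still invokes the overridden behavior, which is how the simulator can manage many different application types uniformly.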

\subsection*{5.1.3 Channel}

In the real world, one can connect a computer to a network. Often the media over which data flows in these networks are called channels. When you connect your Ethernet cable to the plug in the wall, you are connecting your computer to an Ethernet communication channel. In the simulated world of ns-3, one connects a Node to an object representing a communication channel. Here the basic communication subnetwork abstraction is called the channel and is represented in C++ by the class Channel.

The Channel class provides methods for managing communication subnetwork objects and connecting nodes to them. Channels may also be specialized by developers in the object oriented programming sense. A channel specialization may model something as simple as a wire. The specialized channel can also model things as complicated as a large Ethernet switch, or three-dimensional space full of obstructions in the case of wireless networks.

We will use specialized versions of the Channel called CsmaChannel, PointToPointChannel and WifiChannel in this tutorial. The CsmaChannel, for example, models a version of a communication subnetwork that implements a carrier sense multiple access communication medium. This gives us Ethernet-like functionality.

\subsection*{5.1.4 Net Device}

It used to be the case that if you wanted to connect a computer to a network, you had to buy a specific kind of network cable and a hardware device called (in PC terminology) a peripheral card that needed to be installed in your computer. If the peripheral card implemented some networking function, they were called Network Interface Cards, or NICs. Today most computers come with the network interface hardware built in and users don't see these building blocks.

A NIC will not work without a software driver to control the hardware. In Unix (or Linux), a piece of peripheral hardware is classified as a device. Devices are controlled using device drivers, and network devices (NICs) are controlled using network device drivers collectively known as net devices. In Unix and Linux you refer to these net devices by names such as eth0.

In ns-3 the net device abstraction covers both the software driver and the simulated hardware. A net device is "installed" in a Node in order to enable the Node to communicate with other Nodes in the simulation via Channels. Just as in a real computer, a Node may be connected to more than one Channel via multiple NetDevices.

The net device abstraction is represented in C++ by the class NetDevice. The NetDevice class provides methods for managing connections to Node and Channel objects; and may be specialized by developers in the object-oriented programming sense. We will use several specialized versions of the NetDevice called CsmaNetDevice, PointToPointNetDevice, and WifiNetDevice in this tutorial. Just as an Ethernet NIC is designed to work with an Ethernet network, the CsmaNetDevice is designed to work with a CsmaChannel; the PointToPointNetDevice is designed to work with a PointToPointChannel, and a WifiNetDevice is designed to work with a WifiChannel.

\subsection*{5.1.5 Topology Helpers}

In a real network, you will find host computers with added (or built-in) NICs. In ns-3 we would say that you will find Nodes with attached NetDevices. In a large simulated network you will need to arrange many connections between Nodes, NetDevices and Channels.

Since connecting NetDevices to Nodes, NetDevices to Channels, assigning IP addresses, etc., are such common tasks in ns-3, we provide what we call topology helpers to make this as easy as possible. For example, it may take many distinct ns-3 core operations to create a NetDevice, add a MAC address, install that net device on a Node, configure the node's protocol stack, and then connect the NetDevice to a Channel. Even more operations would be required to connect multiple devices onto multipoint channels and then to connect individual networks together into internetworks. We provide topology helper objects that combine those many distinct operations into an easy to use model for your convenience.

\subsection*{5.2 A First ns-3 Script}

If you downloaded the system as was suggested above, you will have a release of ns-3 in a directory called workspace under your home directory. Change into that release directory, and you should find a directory structure something like the following:

\begin{tabular}{lllll} 
AUTHORS & CMakeLists.txt & examples & RELEASE_NOTES.md & testpy.supp \\
bindings & contrib & LICENSE & scratch & utils \\
build-support & CONTRIBUTING.md & ns3 & src & utils.py \\
CHANGES.md & doc & README.md & test.py & VERSION
\end{tabular}

Change into the examples/tutorial directory. You should see a file named first.cc located there. This is a script that will create a simple point-to-point link between two nodes and echo a single packet between the nodes. Let's take a look at that script line by line, so go ahead and open first.cc in your favorite editor.

\subsection*{5.2.1 Copyright}

The ns-3 simulator is licensed using the GNU General Public License version 2. You will see the appropriate GNU legalese at the head of every file in the ns-3 distribution. Often you will see a copyright notice for one of the institutions involved in the ns-3 project above the GPL text and an author listed below.

/*
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation;
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 */

\subsection*{5.2.2 Module Includes}

The code proper starts with a number of include statements.

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"

To help our high-level script users deal with the large number of include files present in the system, we group includes according to relatively large modules. We provide a single include file that will recursively load all of the include files used in each module. Rather than having to look up exactly what header you need, and possibly have to get a
number of dependencies right, we give you the ability to load a group of files at a large granularity. This is not the most efficient approach but it certainly makes writing scripts much easier.

Each of the ns-3 include files is placed in a directory called ns3 (under the build directory) during the build process to help avoid include file name collisions. The ns3/core-module.h file corresponds to the ns-3 module you will find in the directory src/core in your downloaded release distribution. If you list this directory you will find a large number of header files. When you do a build, ns3 will place public header files in an ns3 directory under the appropriate build/debug or build/optimized directory depending on your configuration. CMake will also automatically generate a module include file to load all of the public header files.

Since you are, of course, following this tutorial religiously, you will already have run the following command from the top-level directory:

\$ ./ns3 configure -d debug --enable-examples --enable-tests

in order to configure the project to perform debug builds that include examples and tests. You will also have called

$ ./ns3 build

to build the project. So now if you look in the directory ../../build/include/ns3 you will find the module include files shown above (among many other header files). You can take a look at the contents of these files and find that they do include all of the public include files in their respective modules.

\subsection*{5.2.3 Ns3 Namespace}

The next line in the first.cc script is a namespace declaration.

using namespace ns3;

The ns-3 project is implemented in a C++ namespace called ns3. This groups all ns-3-related declarations in a scope outside the global namespace, which we hope will help with integration with other code. The C++ using statement introduces the ns-3 namespace into the current (global) declarative region. This is a fancy way of saying that after this declaration, you will not have to type the ns3:: scope resolution operator before all of the ns-3 code in order to use it. If you are unfamiliar with namespaces, please consult almost any C++ tutorial and compare the ns3 namespace and usage here with instances of the std namespace and the using namespace std; statements you will often find in discussions of cout and streams.
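If the comparison with std is helpful, the effect of a using directive can be seen in a few lines of standalone C++; the demo namespace here is just a stand-in for ns3:

```cpp
#include <cassert>

// Stand-in namespace (demo code, not ns-3) showing the effect of a using
// directive, exactly as "using namespace ns3;" works in first.cc.
namespace demo
{
int Answer()
{
    return 42;
}
} // namespace demo

using namespace demo; // after this, the demo:: prefix is optional

int CallUnqualified()
{
    return Answer(); // resolved through the using directive
}

int CallQualified()
{
    return demo::Answer(); // explicit qualification still works
}
```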

\subsection*{5.2.4 Logging}

The next line of the script is the following,

NS_LOG_COMPONENT_DEFINE ("FirstScriptExample");

We will use this statement as a convenient place to talk about our Doxygen documentation system. If you look at the project web site, ns-3 project, you will find a link to "Documentation" in the navigation bar. If you select this link, you will be taken to our documentation page. There is a link to "Latest Release" that will take you to the documentation for the latest stable release of ns-3. If you select the "API Documentation" link, you will be taken to the ns-3 API documentation page.

Along the left side, you will find a graphical representation of the structure of the documentation. A good place to start is the NS-3 Modules "book" in the ns-3 navigation tree. If you expand Modules you will see a list of ns-3 module documentation. The concept of module here ties directly into the module include files discussed above. The ns-3 logging subsystem is discussed in the Using the Logging Module section, so we'll get to it later in this tutorial, but you can find out about the above statement by looking at the Core module, then expanding the Debugging tools book, and then selecting the Logging page.

You should now be looking at the Doxygen documentation for the Logging module. In the list of Macros at the top of the page you will see the entry for NS_LOG_COMPONENT_DEFINE. Before jumping in, it would probably be good to look for the "Detailed Description" of the logging module to get a feel for the overall operation. You can either scroll down or select the "More..." link under the collaboration diagram to do this.

Once you have a general idea of what is going on, go ahead and take a look at the specific NS_LOG_COMPONENT_DEFINE documentation. I won't duplicate the documentation here, but to summarize, this line declares a logging component called FirstScriptExample that allows you to enable and disable console message logging by reference to the name.

\subsection*{5.2.5 Main Function}

The next lines of the script you will find are,

int
main(int argc, char *argv[])
{

This is just the declaration of the main function of your program (script). Just as in any C++ program, you need to define a main function that will be the first function run. There is nothing at all special here. Your ns-3 script is just a C++ program.

The next line sets the time resolution to one nanosecond, which happens to be the default value:

Time::SetResolution(Time::NS);

The resolution is the smallest time value that can be represented (as well as the smallest representable difference between two time values). You can change the resolution exactly once. The mechanism enabling this flexibility is somewhat memory hungry, so once the resolution has been set explicitly we release the memory, preventing further updates. (If you don't set the resolution explicitly, it will default to one nanosecond, and the memory will be released when the simulation starts.)

The next two lines of the script are used to enable two logging components that are built into the Echo Client and Echo Server applications:

LogComponentEnable("UdpEchoClientApplication", LOG_LEVEL_INFO);

LogComponentEnable("UdpEchoServerApplication", LOG_LEVEL_INFO);

If you have read over the Logging component documentation you will have seen that there are a number of levels of logging verbosity/detail that you can enable on each component. These two lines of code enable debug logging at the INFO level for echo clients and servers. This will result in the application printing out messages as packets are sent and received during the simulation.

Now we will get directly to the business of creating a topology and running a simulation. We use the topology helper objects to make this job as easy as possible.

\subsection*{5.2.6 Topology Helpers}

\section*{NodeContainer}

The next two lines of code in our script will actually create the ns-3 Node objects that will represent the computers in the simulation.

NodeContainer nodes;

nodes.Create (2);

Let's find the documentation for the NodeContainer class before we continue. Another way to get into the documentation for a given class is via the Classes tab in the Doxygen pages. If you still have the Doxygen handy, just scroll up to the top of the page and select the Classes tab. You should see a new set of tabs appear, one of which is Class List. Under that tab you will see a list of all of the ns-3 classes. Scroll down, looking for ns3::NodeContainer. When you find the class, go ahead and select it to go to the documentation for the class.

You may recall that one of our key abstractions is the Node. This represents a computer to which we are going to add things like protocol stacks, applications and peripheral cards. The NodeContainer topology helper provides a convenient way to create, manage and access any Node objects that we create in order to run a simulation. The first line above just declares a NodeContainer which we call nodes. The second line calls the Create method on the nodes object and asks the container to create two nodes. As described in the Doxygen, the container calls down into the ns-3 system proper to create two Node objects and stores pointers to those objects internally.
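The idea of a container that creates objects and keeps pointers to them can be sketched in a few lines of plain C++. This is a toy, not the real ns3::NodeContainer, though the Create, GetN and Get names mirror its API:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Toy container (not the real ns3::NodeContainer) illustrating the pattern:
// Create(n) builds n Node objects and the container keeps pointers to them.
struct Node
{
};

class NodeContainer
{
  public:
    void Create(std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            m_nodes.push_back(std::make_shared<Node>());
        }
    }

    std::size_t GetN() const { return m_nodes.size(); }

    std::shared_ptr<Node> Get(std::size_t i) const { return m_nodes.at(i); }

  private:
    std::vector<std::shared_ptr<Node>> m_nodes;
};
```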

The nodes as they stand in the script do nothing. The next step in constructing a topology is to connect our nodes together into a network. The simplest form of network we support is a single point-to-point link between two nodes. We'll construct one of those links here.

\section*{PointToPointHelper}

We are constructing a point to point link, and, in a pattern which will become quite familiar to you, we use a topology helper object to do the low-level work required to put the link together. Recall that two of our key abstractions are the NetDevice and the Channel. In the real world, these terms correspond roughly to peripheral cards and network cables. Typically these two things are intimately tied together and one cannot expect to interchange, for example, Ethernet devices and wireless channels. Our Topology Helpers follow this intimate coupling and therefore you will use a single PointToPointHelper to configure and connect ns-3 PointToPointNetDevice and PointToPointChannel objects in this script.

The next three lines in the script are,

PointToPointHelper pointToPoint;

pointToPoint.SetDeviceAttribute("DataRate", StringValue("5Mbps"));

pointToPoint.SetChannelAttribute("Delay", StringValue("2ms"));

The first line

PointToPointHelper pointToPoint;

instantiates a PointToPointHelper object on the stack. From a high-level perspective the next line,

pointToPoint.SetDeviceAttribute("DataRate", StringValue("5Mbps"));

tells the PointToPointHelper object to use the value "5Mbps" (five megabits per second) as the "DataRate" when it creates a PointToPointNetDevice object.

From a more detailed perspective, the string "DataRate" corresponds to what we call an Attribute of the PointToPointNetDevice. If you look at the Doxygen for class ns3::PointToPointNetDevice and find the documentation for the GetTypeId method, you will find a list of Attributes defined for the device. Among these is the "DataRate" Attribute. Most user-visible ns-3 objects have similar lists of Attributes. We use this mechanism to easily configure simulations without recompiling as you will see in a following section.
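The mechanism amounts to setting named values on an object before it is used. A toy sketch in plain C++ follows; the real ns-3 attribute system is typed and tied to GetTypeId, so this only illustrates the name-to-value idea:

```cpp
#include <cassert>
#include <map>
#include <string>

// Toy device (not the real ns-3 attribute system) showing the idea of
// configuring objects through named Attributes before they are used.
class ToyDevice
{
  public:
    void SetAttribute(const std::string& name, const std::string& value)
    {
        m_attributes[name] = value;
    }

    std::string GetAttribute(const std::string& name) const
    {
        auto it = m_attributes.find(name);
        return it == m_attributes.end() ? std::string() : it->second;
    }

  private:
    std::map<std::string, std::string> m_attributes;
};
```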

Similar to the "DataRate" on the PointToPointNetDevice you will find a "Delay" Attribute associated with the PointToPointChannel. The final line,

pointToPoint.SetChannelAttribute("Delay", StringValue("2ms"));

tells the PointToPointHelper to use the value "2ms" (two milliseconds) as the value of the propagation delay of every point to point channel it subsequently creates.

\section*{NetDeviceContainer}

At this point in the script, we have a NodeContainer that contains two nodes. We have a PointToPointHelper that is primed and ready to make PointToPointNetDevices and wire PointToPointChannel objects between them. Just as we used the NodeContainer topology helper object to create the Nodes for our simulation, we will ask the PointToPointHelper to do the work involved in creating, configuring and installing our devices for us. We will need to have a list of all of the NetDevice objects that are created, so we use a NetDeviceContainer to hold them just as we used a NodeContainer to hold the nodes we created. The following two lines of code,

NetDeviceContainer devices;

devices = pointToPoint.Install(nodes);

will finish configuring the devices and channel. The first line declares the device container mentioned above and the second does the heavy lifting. The Install method of the PointToPointHelper takes a NodeContainer as a parameter. Internally, a NetDeviceContainer is created. For each node in the NodeContainer (there must be exactly two for a point-to-point link) a PointToPointNetDevice is created and saved in the device container. A PointToPointChannel is created and the two PointToPointNetDevices are attached. When objects are created by the PointToPointHelper, the Attributes previously set in the helper are used to initialize the corresponding Attributes in the created objects.

After executing the pointToPoint.Install(nodes) call we will have two nodes, each with an installed point-to-point net device and a single point-to-point channel between them. Both devices will be configured to transmit data at five megabits per second over the channel, which has a two millisecond transmission delay.

\section*{InternetStackHelper}

We now have nodes and devices configured, but we don't have any protocol stacks installed on our nodes. The next two lines of code will take care of that.

InternetStackHelper stack;

stack.Install(nodes);

The InternetStackHelper is a topology helper that is to internet stacks what the PointToPointHelper is to point-to-point net devices. The Install method takes a NodeContainer as a parameter. When it is executed, it will install an Internet Stack (TCP, UDP, IP, etc.) on each of the nodes in the node container.

\section*{Ipv4AddressHelper}

Next we need to associate the devices on our nodes with IP addresses. We provide a topology helper to manage the allocation of IP addresses. The only user-visible API is to set the base IP address and network mask to use when performing the actual address allocation (which is done at a lower level inside the helper).

The next two lines of code in our example script, first.cc,

Ipv4AddressHelper address;

address.SetBase("10.1.1.0", "255.255.255.0");

declare an address helper object and tell it that it should begin allocating IP addresses from the network 10.1.1.0 using the mask 255.255.255.0 to define the allocatable bits. By default the addresses allocated will start at one and increase monotonically, so the first address allocated from this base will be 10.1.1.1, followed by 10.1.1.2, etc. The low level ns-3 system actually remembers all of the IP addresses allocated and will generate a fatal error if you accidentally cause the same address to be generated twice (which is a very hard to debug error, by the way).

The next line of code,

Ipv4InterfaceContainer interfaces = address.Assign(devices);

performs the actual address assignment. In ns-3 we make the association between an IP address and a device using an Ipv4Interface object. Just as we sometimes need a list of net devices created by a helper for future reference, we sometimes need a list of Ipv4Interface objects. The Ipv4InterfaceContainer provides this functionality.

Now we have a point-to-point network built, with stacks installed and IP addresses assigned. What we need at this point are applications to generate traffic.

\subsection*{5.2.7 Applications}

Another one of the core abstractions of the ns-3 system is the Application. In this script we use two specializations of the core ns-3 class Application called UdpEchoServerApplication and UdpEchoClientApplication. Just as we have in our previous explanations, we use helper objects to help configure and manage the underlying objects. Here, we use UdpEchoServerHelper and UdpEchoClientHelper objects to make our lives easier.

\section*{UdpEchoServerHelper}

The following lines of code in our example script, first.cc, are used to set up a UDP echo server application on one of the nodes we have previously created.

UdpEchoServerHelper echoServer(9);

ApplicationContainer serverApps = echoServer.Install(nodes.Get(1));

serverApps.Start(Seconds(1.0));

serverApps.Stop(Seconds(10.0));

The first line of code in the above snippet declares the UdpEchoServerHelper. As usual, this isn't the application itself, it is an object used to help us create the actual applications. One of our conventions is to place required Attributes in the helper constructor. In this case, the helper can't do anything useful unless it is provided with a port number that the client also knows about. Rather than just picking one and hoping it all works out, we require the port number as a parameter to the constructor. The constructor, in turn, simply does a SetAttribute with the passed value. If you want, you can set the "Port" Attribute to another value later using SetAttribute.

Similar to many other helper objects, the UdpEchoServerHelper object has an Install method. It is the execution of this method that actually causes the underlying echo server application to be instantiated and attached to a node. Interestingly, the Install method takes a NodeContainer as a parameter just as the other Install methods we have seen. This is actually what is passed to the method even though it doesn't look so in this case. There is a C++ implicit conversion at work here that takes the result of nodes.Get(1) (which returns a smart pointer to a node object - Ptr<Node>) and uses that in a constructor for an unnamed NodeContainer that is then passed to Install. If you are ever at a loss to find a particular method signature in C++ code that compiles and runs just fine, look for these kinds of implicit conversions.

We now see that echoServer.Install is going to install a UdpEchoServerApplication on the node found at index number one of the NodeContainer we used to manage our nodes. Install will return a container that holds pointers to all of the applications (one in this case since we passed a NodeContainer containing one node) created by the helper.

Applications require a time to "start" generating traffic and may take an optional time to "stop". We provide both. These times are set using the ApplicationContainer methods Start and Stop. These methods take Time parameters. In this case, we use an explicit C++ conversion sequence to take the C++ double 1.0 and convert it to an ns-3 Time object using a Seconds cast. Be aware that the conversion rules may be controlled by the model author, and C++ has its own rules, so you can't always just assume that parameters will be happily converted for you. The two lines,

serverApps.Start(Seconds(1.0));

serverApps.Stop(Seconds(10.0));

will cause the echo server application to Start (enable itself) at one second into the simulation and to Stop (disable itself) at ten seconds into the simulation. By virtue of the fact that we have declared a simulation event (the application stop event) to be executed at ten seconds, the simulation will last at least ten seconds.

\section*{UdpEchoClientHelper}

The echo client application is set up in a method substantially similar to that for the server. There is an underlying UdpEchoClientApplication that is managed by an UdpEchoClientHelper.

UdpEchoClientHelper echoClient(interfaces.GetAddress(1), 9);
echoClient.SetAttribute("MaxPackets", UintegerValue(1));
echoClient.SetAttribute("Interval", TimeValue(Seconds(1.0)));

echoClient.SetAttribute("PacketSize", UintegerValue(1024));

ApplicationContainer clientApps = echoClient.Install(nodes.Get(0));
clientApps.Start(Seconds(2.0));
clientApps.Stop(Seconds(10.0));

For the echo client, however, we need to set five different Attributes. The first two Attributes are set during construction of the UdpEchoClientHelper. We pass parameters that are used (internally to the helper) to set the "RemoteAddress" and "RemotePort" Attributes in accordance with our convention to make required Attributes parameters in the helper constructors.

Recall that we used an Ipv4InterfaceContainer to keep track of the IP addresses we assigned to our devices. The zeroth interface in the interfaces container is going to correspond to the IP address of the zeroth node in the nodes container. The first interface in the interfaces container corresponds to the IP address of the first node in the nodes container. So, in the first line of code (from above), we are creating the helper and telling it to set the remote address of the client to be the IP address assigned to the node on which the server resides. We also tell it to arrange to send packets to port nine.

The "MaxPackets" Attribute tells the client the maximum number of packets we allow it to send during the simulation. The "Interval" Attribute tells the client how long to wait between packets, and the "PacketSize" Attribute tells the client how large its packet payloads should be. With this particular combination of Attributes, we are telling the client to send one 1024-byte packet.

Just as in the case of the echo server, we tell the echo client to Start and Stop, but here we start the client one second after the server is enabled (at two seconds into the simulation).

\subsection*{5.2.8 Simulator}

What we need to do at this point is to actually run the simulation. This is done using the global function Simulator::Run.

Simulator::Run();

When we previously called the methods,

serverApps.Start(Seconds(1.0));

serverApps.Stop(Seconds(10.0));
...
clientApps.Start(Seconds(2.0));
clientApps.Stop(Seconds(10.0));
we actually scheduled events in the simulator at 1.0 seconds, 2.0 seconds and two events at 10.0 seconds. When Simulator::Run is called, the system will begin looking through the list of scheduled events and executing them. First it will run the event at 1.0 seconds, which will enable the echo server application (this event may, in turn, schedule many other events). Then it will run the event scheduled for t = 2.0 seconds which will start the echo client application. Again, this event may schedule many more events. The start event implementation in the echo client application will begin the data transfer phase of the simulation by sending a packet to the server.

The act of sending the packet to the server will trigger a chain of events that will be automatically scheduled behind the scenes and which will perform the mechanics of the packet echo according to the various timing parameters that we have set in the script.

Eventually, since we only send one packet (recall the MaxPackets Attribute was set to one), the chain of events triggered by that single client echo request will taper off and the simulation will go idle. Once this happens, the remaining events will be the Stop events for the server and the client. When these events are executed, there are no further events to process and Simulator::Run returns. The simulation is then complete.

All that remains is to clean up. This is done by calling the global function Simulator::Destroy. As the helper functions (or low level ns-3 code) executed, they arranged it so that hooks were inserted in the simulator to destroy all of the objects that were created. You did not have to keep track of any of these objects yourself - all you had to do was to call Simulator::Destroy and exit. The ns-3 system took care of the hard part for you. The remaining lines of our first ns-3 script, first.cc, do just that:

    Simulator::Destroy();
    return 0;
}

\section*{When will the simulator stop?}

ns-3 is a Discrete Event (DE) simulator. In such a simulator, each event is associated with its execution time, and the simulation proceeds by executing events in the temporal order of simulation time. Events may cause future events to be scheduled (for example, a timer may reschedule itself to expire at the next interval).
The initial events are usually triggered by each object, e.g., IPv6 will schedule Router Advertisements, Neighbor Solicitations, etc., an Application schedules its first packet sending event, etc.
When an event is processed, it may generate zero, one or more events. As a simulation executes, events are consumed, but more events may (or may not) be generated. The simulation will stop automatically when no further events are in the event queue, or when a special Stop event is found. The Stop event is created through the Simulator::Stop(stopTime); function.
There is a typical case where Simulator::Stop is absolutely necessary to stop the simulation: when there is a self-sustaining event. Self-sustaining (or recurring) events are events that always reschedule themselves. As a consequence, they always keep the event queue non-empty.
There are many protocols and modules containing recurring events, e.g.:
  • FlowMonitor - periodic check for lost packets
  • RIPng - periodic broadcast of routing tables update
  • etc.
In these cases, Simulator::Stop is necessary to gracefully stop the simulation. In addition, when ns-3 is in emulation mode, the RealtimeSimulator is used to keep the simulation clock aligned with the machine clock, and Simulator::Stop is necessary to stop the process.
Many of the simulation programs in the tutorial do not explicitly call Simulator::Stop, since the event queue will automatically run out of events. However, these programs will also accept a call to Simulator::Stop. For example, the following additional statement in the first example program will schedule an explicit stop at 11 seconds:
+ Simulator::Stop(Seconds(11.0));
    Simulator::Run();

    return 0;
}
The above will not actually change the behavior of this program, since this particular simulation naturally ends after 10 seconds. But if you were to change the stop time in the above statement from 11 seconds to 1 second, you would notice that the simulation stops before any output is printed to the screen (since the output occurs around time 2 seconds of simulation time).
It is important to call Simulator::Stop before calling Simulator::Run; otherwise, Simulator::Run may never return control to the main program to execute the stop!

\subsection*{5.2.9 Building Your Script}

We have made it trivial to build your simple scripts. All you have to do is to drop your script into the scratch directory and it will automatically be built if you run ns3. Let's try it. Copy examples/tutorial/first.cc into the scratch directory after changing back into the top level directory.
$ cp examples/tutorial/first.cc scratch/myfirst.cc
Now build your first example script using ns3:
$ ./ns3 build
You should see messages reporting that your myfirst example was built successfully.
Scanning dependencies of target scratch_myfirst
[ 0%] Building CXX object scratch/CMakeFiles/scratch_myfirst.dir/myfirst.cc.o
[ 0%] Linking CXX executable ../../build/scratch/ns3.36.1-myfirst-debug
Finished executing the following commands:
cd cmake-cache; cmake --build . -j 7 ; cd ..
You can now run the example (note that if you build your program in the scratch directory you must run it out of the scratch directory):
$ ./ns3 run scratch/myfirst
You should see some output:
At time +2s client sent 1024 bytes to 10.1.1.2 port 9
At time +2.00369s server received 1024 bytes from 10.1.1.1 port 49153
At time +2.00369s server sent 1024 bytes to 10.1.1.1 port 49153
At time +2.00737s client received 1024 bytes from 10.1.1.2 port 9
Here you see the logging component on the echo client indicate that it has sent one 1024 byte packet to the Echo Server on 10.1.1.2. You also see the logging component on the echo server say that it has received the 1024 bytes from 10.1.1.1. The echo server silently echoes the packet and you see the echo client log that it has received its packet back from the server.

5.3 Ns-3 Source Code

Now that you have used some of the ns-3 helpers you may want to have a look at some of the source code that implements that functionality.
Our example scripts are in the examples directory. If you change into the examples directory, you will see a list of subdirectories. One of the files in the tutorial subdirectory is first.cc. If you open first.cc you will find the code you just walked through.
The source code is mainly in the src directory. The core of the simulator is in the src/core/model subdirectory. The first file you will find there (as of this writing) is abort.h. If you open that file, you can view macros for exiting scripts if abnormal conditions are detected.
The source code for the helpers we have used in this chapter can be found in the src/applications/helper directory. Feel free to poke around in the directory tree to get a feel for what is there and the style of ns-3 programs.
TWEAKING

6.1 Using the Logging Module

We have already taken a brief look at the ns-3 logging module while going over the first.cc script. We will now take a closer look and see what kind of use-cases the logging subsystem was designed to cover.

6.1.1 Logging Overview

Many large systems support some kind of message logging facility, and ns-3 is not an exception. In some cases, only error messages are logged to the "operator console" (which is typically stderr in Unix-based systems). In other systems, warning messages may be output as well as more detailed informational messages. In some cases, logging facilities are used to output debug messages which can quickly turn the output into a blur.
ns-3 takes the view that all of these verbosity levels are useful and we provide a selectable, multi-level approach to message logging. Logging can be disabled completely, enabled on a component-by-component basis, or enabled globally; and it provides selectable verbosity levels. The ns-3 log module provides a straightforward, relatively easy to use way to get useful information out of your simulation.
You should understand that we do provide a general purpose mechanism - tracing - to get data out of your models which should be preferred for simulation output (see the tutorial section Using the Tracing System for more details on our tracing system). Logging should be preferred for debugging information, warnings, error messages, or any time you want to easily get a quick message out of your scripts or models.
There are currently seven levels of log messages of increasing verbosity defined in the system.
  • LOG_ERROR - Log error messages (associated macro: NS_LOG_ERROR);
  • LOG_WARN - Log warning messages (associated macro: NS_LOG_WARN);
  • LOG_DEBUG — Log relatively rare, ad-hoc debugging messages (associated macro: NS_LOG_DEBUG);
  • LOG_INFO — Log informational messages about program progress (associated macro: NS_LOG_INFO);
  • LOG_FUNCTION - Log a message describing each function called (two associated macros: NS_LOG_FUNCTION, used for member functions, and NS_LOG_FUNCTION_NOARGS, used for static functions);
  • LOG_LOGIC - Log messages describing logical flow within a function (associated macro: NS_LOG_LOGIC);
  • LOG_ALL — Log everything mentioned above (no associated macro).
For each LOG_TYPE there is also LOG_LEVEL_TYPE that, if used, enables logging of all the levels above it in addition to its level. (As a consequence of this, LOG_ERROR and LOG_LEVEL_ERROR and also LOG_ALL and LOG_LEVEL_ALL are functionally equivalent.) For example, enabling LOG_INFO will only enable messages provided by the NS_LOG_INFO macro, while enabling LOG_LEVEL_INFO will also enable messages provided by the NS_LOG_DEBUG, NS_LOG_WARN and NS_LOG_ERROR macros.
We also provide an unconditional logging macro that is always displayed, irrespective of logging levels or component selection.
  • NS_LOG_UNCOND - Log the associated message unconditionally (no associated log level).
Each level can be requested singly or cumulatively; and logging can be set up using a shell environment variable (NS_LOG) or by logging system function call. As was seen earlier in the tutorial, the logging system has Doxygen documentation and now would be a good time to peruse the Logging Module documentation if you have not done so.
Now that you have read the documentation in great detail, let's use some of that knowledge to get some interesting information out of the scratch/myfirst.cc example script you have already built.

6.1.2 Enabling Logging

Let's use the NS_LOG environment variable to turn on some more logging, but first, just to get our bearings, go ahead and run the last script just as you did previously,
$ ./ns3 run scratch/myfirst
You should see the now familiar output of the first ns-3 example program:
At time +2s client sent 1024 bytes to 10.1.1.2 port 9
At time +2.00369s server received 1024 bytes from 10.1.1.1 port 49153
At time +2.00369s server sent 1024 bytes to 10.1.1.1 port 49153
At time +2.00737s client received 1024 bytes from 10.1.1.2 port 9
It turns out that the "Sent" and "Received" messages you see above are actually logging messages from the UdpEchoClientApplication and UdpEchoServerApplication. We can ask the client application, for example, to print more information by setting its logging level via the NS_LOG environment variable.
I am going to assume from here on that you are using an sh-like shell that uses the "VARIABLE=value" syntax. If you are using a csh-like shell, then you will have to convert my examples to the "setenv VARIABLE value" syntax required by those shells.
Right now, the UDP echo client application is responding to the following line of code in scratch/myfirst.cc,
LogComponentEnable("UdpEchoClientApplication", LOG_LEVEL_INFO);
This line of code enables the LOG_LEVEL_INFO level of logging. When we pass a logging level flag, we are actually enabling the given level and all lower levels. In this case, we have enabled NS_LOG_INFO, NS_LOG_DEBUG, NS_LOG_WARN and NS_LOG_ERROR. We can increase the logging level and get more information without changing the script and recompiling by setting the NS_LOG environment variable like this:
$ export NS_LOG=UdpEchoClientApplication=level_all
This sets the shell environment variable NS_LOG to the string,
UdpEchoClientApplication=level_all
The left hand side of the assignment is the name of the logging component we want to set, and the right hand side is the flag we want to use. In this case, we are going to turn on all of the debugging levels for the application. If you run the script with NS_LOG set this way, the ns-3 logging system will pick up the change and you should see something similar to the following output:
UdpEchoClientApplication:UdpEchoClient(0xef90d0)
UdpEchoClientApplication:SetDataSize(0xef90d0, 1024)
UdpEchoClientApplication:StartApplication(0xef90d0)
UdpEchoClientApplication:ScheduleTransmit(0xef90d0, +0ns)
UdpEchoClientApplication:Send(0xef90d0)
At time +2s client sent 1024 bytes to 10.1.1.2 port 9
At time +2.00369s server received 1024 bytes from 10.1.1.1 port 49153
At time +2.00369s server sent 1024 bytes to 10.1.1.1 port 49153
UdpEchoClientApplication:HandleRead(0xef90d0, 0xee7b20)
At time +2.00737s client received 1024 bytes from 10.1.1.2 port 9
UdpEchoClientApplication:StopApplication(0xef90d0)
UdpEchoClientApplication:DoDispose(0xef90d0)
UdpEchoClientApplication:~UdpEchoClient(0xef90d0)
The additional debug information provided by the application is from the NS_LOG_FUNCTION level. This shows every time a function in the application is called during script execution. Generally, use of (at least) NS_LOG_FUNCTION(this) in member functions is preferred. Use NS_LOG_FUNCTION_NOARGS() only in static functions. Note, however, that there are no requirements in the system that models must support any particular logging functionality. The decision regarding how much information is logged is left to the individual model developer. In the case of the echo applications, a good deal of log output is available.
You can now see a list of the function calls that were made to the application. If you look closely you will notice a single colon between the string UdpEchoClientApplication and the method name where you might have expected a C++ scope operator (::). This is intentional.
The name is not actually a class name, it is a logging component name. When there is a one-to-one correspondence between a source file and a class, this will generally be the class name but you should understand that it is not actually a class name, and there is a single colon there instead of a double colon to remind you in a relatively subtle way to conceptually separate the logging component name from the class name.
It turns out that in some cases, it can be hard to determine which method actually generates a log message. If you look in the text above, you may wonder where the string "Received 1024 bytes from 10.1.1.2" comes from. You can resolve this by OR'ing the prefix_func level into the NS_LOG environment variable. Try doing the following,
$ export 'NS_LOG=UdpEchoClientApplication=level_all|prefix_func'
Note that the quotes are required since the vertical bar we use to indicate an OR operation is also a Unix pipe connector.
Now, if you run the script you will see that the logging system makes sure that every message from the given log component is prefixed with the component name.
UdpEchoClientApplication:UdpEchoClient(0xea8e50)
UdpEchoClientApplication:SetDataSize(0xea8e50, 1024)
UdpEchoClientApplication:StartApplication(0xea8e50)
UdpEchoClientApplication:ScheduleTransmit(0xea8e50, +0ns)
UdpEchoClientApplication:Send(0xea8e50)
UdpEchoClientApplication:Send(): At time +2s client sent 1024 bytes to 10.1.1.2 port 9
At time +2.00369s server received 1024 bytes from 10.1.1.1 port 49153
At time +2.00369s server sent 1024 bytes to 10.1.1.1 port 49153
UdpEchoClientApplication:HandleRead(0xea8e50, 0xea5b20)
UdpEchoClientApplication:HandleRead(): At time +2.00737s client received 1024 bytes
\hookrightarrowfrom 10.1.1.2 port 9
UdpEchoClientApplication:StopApplication(0xea8e50)
UdpEchoClientApplication:DoDispose(0xea8e50)
UdpEchoClientApplication:~UdpEchoClient(0xea8e50)
You can now see all of the messages coming from the UDP echo client application are identified as such. The message "Received 1024 bytes from 10.1.1.2" is now clearly identified as coming from the echo client application. Also, in most log statements, you will see a hexadecimal value printed such as 0xea8e50; this is because most log statements print out the value of the C++ this pointer, so that objects can be distinguished from one another.
The remaining message must be coming from the UDP echo server application. We can enable that component by entering a colon separated list of components in the NS_LOG environment variable.
$ export 'NS_LOG=UdpEchoClientApplication=level_all|prefix_func:
UdpEchoServerApplication=level_all|prefix_func'
Warning: You will need to remove the newline after the : in the example text above which is only there for document formatting purposes.
Now, if you run the script you will see all of the log messages from both the echo client and server applications. You may see that this can be very useful in debugging problems.
UdpEchoServerApplication:UdpEchoServer(0x2101590)
UdpEchoClientApplication:UdpEchoClient(0x2101820)
UdpEchoClientApplication:SetDataSize(0x2101820, 1024)
UdpEchoServerApplication:StartApplication(0x2101590)
UdpEchoClientApplication:StartApplication(0x2101820)
UdpEchoClientApplication:ScheduleTransmit(0x2101820, +0ns)
UdpEchoClientApplication:Send(0x2101820)
UdpEchoClientApplication:Send(): At time +2s client sent 1024 bytes to 10.1.1.2 port 9
UdpEchoServerApplication:HandleRead(0x2101590, 0x2106240)
UdpEchoServerApplication:HandleRead(): At time +2.00369s server received 1024 bytes
\hookrightarrowfrom 10.1.1.1 port 49153
UdpEchoServerApplication:HandleRead(): Echoing packet
UdpEchoServerApplication:HandleRead(): At time +2.00369s server sent 1024 bytes to 10.
\hookrightarrow1.1.1 port 49153
UdpEchoClientApplication:HandleRead(0x2101820, 0x21134b0)
UdpEchoClientApplication:HandleRead(): At time +2.00737s client received 1024 bytes
\hookrightarrowfrom 10.1.1.2 port 9
UdpEchoClientApplication:StopApplication(0x2101820)
UdpEchoServerApplication:StopApplication(0x2101590)
UdpEchoClientApplication:DoDispose(0x2101820)
UdpEchoServerApplication:DoDispose(0x2101590)
UdpEchoClientApplication:~UdpEchoClient(0x2101820)
UdpEchoServerApplication:~UdpEchoServer(0x2101590)
It is also sometimes useful to be able to see the simulation time at which a log message is generated. You can do this by ORing in the prefix_time bit.
$ export 'NS_LOG=UdpEchoClientApplication=level_all|prefix_func|prefix_time:
UdpEchoServerApplication=level_all|prefix_func|prefix_time'
Again, you will have to remove the newline above. If you run the script now, you should see the following output:
+0.000000000s UdpEchoServerApplication:UdpEchoServer(0x8edfc0)
+0.000000000s UdpEchoClientApplication:UdpEchoClient(0x8ee210)
+0.000000000s UdpEchoClientApplication:SetDataSize(0x8ee210, 1024)
+1.000000000s UdpEchoServerApplication:StartApplication(0x8edfc0)
+2.000000000s UdpEchoClientApplication:StartApplication(0x8ee210)
+2.000000000s UdpEchoClientApplication:ScheduleTransmit(0x8ee210, +0ns)
+2.000000000s UdpEchoClientApplication:Send(0x8ee210)
+2.000000000s UdpEchoClientApplication:Send(): At time +2s client sent 1024 bytes to 10.1.1.2 port 9
+2.003686400s UdpEchoServerApplication:HandleRead(0x8edfc0, 0x936770)
+2.003686400s UdpEchoServerApplication:HandleRead(): At time +2.00369s server received 1024 bytes from 10.1.1.1 port 49153
+2.003686400s UdpEchoServerApplication:HandleRead(): Echoing packet
+2.003686400s UdpEchoServerApplication:HandleRead(): At time +2.00369s server sent 1024 bytes to 10.1.1.1 port 49153
+2.007372800s UdpEchoClientApplication:HandleRead(0x8ee210, 0x8f3140)
+2.007372800s UdpEchoClientApplication:HandleRead(): At time +2.00737s client received 1024 bytes from 10.1.1.2 port 9
+10.000000000s UdpEchoClientApplication:StopApplication(0x8ee210)
+10.000000000s UdpEchoServerApplication:StopApplication(0x8edfc0)
UdpEchoClientApplication:DoDispose(0x8ee210)
UdpEchoServerApplication:DoDispose(0x8edfc0)
UdpEchoClientApplication:~UdpEchoClient(0x8ee210)
UdpEchoServerApplication:~UdpEchoServer(0x8edfc0)
You can see that the constructor for the UdpEchoServer was called at a simulation time of 0 seconds. This is actually happening before the simulation starts, but the time is displayed as zero seconds. The same is true for the UdpEchoClient constructor message.
Recall that the scratch/myfirst.cc script started the echo server application at one second into the simulation. You can now see that the StartApplication method of the server is, in fact, called at one second. You can also see that the echo client application is started at a simulation time of two seconds as we requested in the script.
You can now follow the progress of the simulation from the ScheduleTransmit call in the client that calls Send to the HandleRead callback in the echo server application. Note that the elapsed time for the packet to be sent across the point-to-point link is 3.69 milliseconds. You see the echo server logging a message telling you that it has echoed the packet and then, after another channel delay, you see the echo client receive the echoed packet in its HandleRead method.
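The 3.69 ms figure can be checked by hand. The 1024-byte payload picks up 8 bytes of UDP header, 20 bytes of IPv4 header, and 2 bytes of PPP header (the framing is visible in the ASCII traces later in this chapter), for 1054 bytes on the wire. At the 5 Mbps DataRate set in the script, plus the 2 ms channel Delay:

```latex
\frac{1054 \times 8~\text{bit}}{5 \times 10^{6}~\text{bit/s}} + 2~\text{ms}
  = 1.6864~\text{ms} + 2~\text{ms}
  = 3.6864~\text{ms}
```

which is exactly why a packet sent at +2 s is received at +2.003686400 s in the log.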
There is a lot that is happening under the covers in this simulation that you are not seeing as well. You can very easily follow the entire process by turning on all of the logging components in the system. Try setting the NS_LOG variable to the following,
$ export 'NS_LOG=*=level_all|prefix_func|prefix_time'
The asterisk above is the logging component wildcard. This will turn on all of the logging in all of the components used in the simulation. I won't reproduce the output here (as of this writing it produces thousands of lines of output for the single packet echo) but you can redirect this information into a file and look through it with your favorite editor if you like,
$ ./ns3 run scratch/myfirst > log.out 2>&1
I personally use this extremely verbose version of logging when I am presented with a problem and I have no idea where things are going wrong. I can follow the progress of the code quite easily without having to set breakpoints and step through code in a debugger. I can just edit up the output in my favorite editor and search around for things I expect, and see things happening that I don't expect. When I have a general idea about what is going wrong, I transition into a debugger for a fine-grained examination of the problem. This kind of output can be especially useful when your script does something completely unexpected. If you are stepping using a debugger you may miss an unexpected excursion completely. Logging the excursion makes it quickly visible.
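As a sketch of that kind of post-processing (the sample lines are copied from the output shown earlier and inlined so the pipeline runs standalone; in practice you would read the real log.out), standard text tools go a long way:

```shell
# Filter a verbose ns-3 log for one component's events.
# Replace the printf with "cat log.out" on a real run.
printf '%s\n' \
  '+2.000000000s UdpEchoClientApplication:Send(0x8ee210)' \
  '+2.003686400s UdpEchoServerApplication:HandleRead(): Echoing packet' \
  '+2.007372800s UdpEchoClientApplication:HandleRead(0x8ee210, 0x8f3140)' |
  grep 'UdpEchoServerApplication'
```

Combined with the time prefix, a grep like this gives you a per-component timeline without a debugger.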

6.1.3 Adding Logging to your Code

You can add new logging to your simulations by making calls to the log component via several macros. Let's do so in the myfirst.cc script we have in the scratch directory.
Recall that we have defined a logging component in that script:
NS_LOG_COMPONENT_DEFINE("FirstScriptExample");
You now know that you can enable all of the logging for this component by setting the NS_LOG environment variable to the various levels. Let's go ahead and add some logging to the script. The macro used to add an informational level log message is NS_LOG_INFO. Go ahead and add one (just before we start creating the nodes) that tells you that the script is "Creating Topology." This is done as in this code snippet,
Open scratch/myfirst.cc in your favorite editor and add the line,
NS_LOG_INFO("Creating Topology");
right before the lines,
NodeContainer nodes;
nodes.Create(2);
Now build the script using ns3 and clear the NS_LOG variable to turn off the torrent of logging we previously enabled:
$ export NS_LOG=""
Now, if you run the script,
$ ./ns3 run scratch/myfirst
you will not see your new message since its associated logging component (FirstScriptExample) has not been enabled. In order to see your message you will have to enable the FirstScriptExample logging component with a level greater than or equal to NS_LOG_INFO. If you just want to see this particular level of logging, you can enable it by,
$ export NS_LOG=FirstScriptExample=info
If you now run the script you will see your new "Creating Topology" log message,
Creating Topology
At time +2s client sent 1024 bytes to 10.1.1.2 port 9
At time +2.00369s server received 1024 bytes from 10.1.1.1 port 49153
At time +2.00369s server sent 1024 bytes to 10.1.1.1 port 49153
At time +2.00737s client received 1024 bytes from 10.1.1.2 port 9

6.2 Using Command Line Arguments

6.2.1 Overriding Default Attributes

Another way you can change how scripts behave without editing and building is via command line arguments. We provide a mechanism to parse command line arguments and automatically set local and global variables based on those arguments.
The first step in using the command line argument system is to declare the command line parser. This is done quite simply (in your main program) as in the following code,
int
main(int argc, char *argv[])
{
    ...
    CommandLine cmd;
    cmd.Parse(argc, argv);
    ...
}
This simple two line snippet is actually very useful by itself. It opens the door to the ns-3 global variable and Attribute systems. Go ahead and add those two lines of code to the scratch/myfirst.cc script at the start of main. Go ahead and build the script and run it, but ask the script for help in the following way,
$ ./ns3 run "scratch/myfirst --PrintHelp"
This will ask ns3 to run the scratch/myfirst script and pass the command line argument --PrintHelp to the script. The quotes are required to sort out which program gets which argument. The command line parser will now see the --PrintHelp argument and respond with,
myfirst [General Arguments]
General Arguments:
    --PrintGlobals: Print the list of globals.
    --PrintGroups: Print the list of groups.
    --PrintGroup=[group]: Print all TypeIds of group.
    --PrintTypeIds: Print all TypeIds.
    --PrintAttributes=[typeid]: Print all attributes of typeid.
    --PrintVersion: Print the ns-3 version.
    --PrintHelp: Print this help message.
Let's focus on the --PrintAttributes option. We have already hinted at the Attribute system while walking through the first.cc script. We looked at the following lines of code,
PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute("DataRate", StringValue("5Mbps"));
pointToPoint.SetChannelAttribute("Delay", StringValue("2ms"));
and mentioned that DataRate was actually an Attribute of the PointToPointNetDevice. Let's use the command line argument parser to take a look at the Attributes of the PointToPointNetDevice. The help listing says that we should provide a TypeId. This corresponds to the class name of the class to which the Attributes belong. In this case it will be ns3::PointToPointNetDevice. Let's go ahead and type in,
$ ./ns3 run "scratch/myfirst --PrintAttributes=ns3::PointToPointNetDevice"
The system will print out all of the Attributes of this kind of net device. Among the Attributes you will see listed is,
--ns3::PointToPointNetDevice::DataRate=[32768bps]:
The default data rate for point to point links
This is the default value that will be used when a PointToPointNetDevice is created in the system. We overrode this default with the Attribute setting in the PointToPointHelper above. Let's use the default values for the point-to-point devices and channels by deleting the SetDeviceAttribute call and the SetChannelAttribute call from the myfirst.cc we have in the scratch directory.
Your script should now just declare the PointToPointHelper and not do any set operations as in the following example,
...
NodeContainer nodes;
nodes.Create(2);
PointToPointHelper pointToPoint;
NetDeviceContainer devices;
devices = pointToPoint.Install(nodes);
...
Go ahead and build the new script with ns3 and let's go back and enable some logging from the UDP echo server application and turn on the time prefix.
$ export 'NS_LOG=UdpEchoServerApplication=level_all|prefix_time'
If you run the script, you should now see the following output,
+0.000000000s UdpEchoServerApplication:UdpEchoServer(0x20d0d10)
+1.000000000s UdpEchoServerApplication:StartApplication(0x20d0d10)
At time +2s client sent 1024 bytes to 10.1.1.2 port 9
+2.257324218s UdpEchoServerApplication:HandleRead(0x20d0d10, 0x20900b0)
+2.257324218s At time +2.25732s server received 1024 bytes from 10.1.1.1 port 49153
+2.257324218s Echoing packet
+2.257324218s At time +2.25732s server sent 1024 bytes to 10.1.1.1 port 49153
At time +2.51465s client received 1024 bytes from 10.1.1.2 port 9
+10.000000000s UdpEchoServerApplication:StopApplication(0x20d0d10)
UdpEchoServerApplication:DoDispose(0x20d0d10)
UdpEchoServerApplication:~UdpEchoServer(0x20d0d10)
Recall that the last time we looked at the simulation time at which the packet was received by the echo server, it was at 2.0073728 seconds.
+2.007372800s UdpEchoServerApplication:HandleRead(): Received 1024 bytes from 10.1.1.1
Now it is receiving the packet at 2.25732 seconds. This is because we just dropped the data rate of the PointToPointNetDevice down to its default of 32768 bits per second from five megabits per second.
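The new receive time follows from the same arithmetic as before: the frame is still 1054 bytes on the wire (1024 bytes of payload plus UDP, IPv4, and PPP headers), but it now serializes at 32768 bit/s, and the channel's Delay attribute defaults to zero:

```latex
\frac{1054 \times 8~\text{bit}}{32768~\text{bit/s}}
  = \frac{8432}{32768}~\text{s}
  \approx 0.257324~\text{s}
```

so the packet sent at +2 s arrives at +2.257324218 s, matching the log.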
If we were to provide a new DataRate using the command line, we could speed our simulation up again. We do this in the following way, according to the formula implied by the help item:
$ ./ns3 run "scratch/myfirst --ns3::PointToPointNetDevice::DataRate=5Mbps"
This will set the default value of the DataRate Attribute back to five megabits per second. Are you surprised by the result? It turns out that in order to get the original behavior of the script back, we will have to set the speed-of-light delay of the channel as well. We can ask the command line system to print out the Attributes of the channel just like we did for the net device:
$ ./ns3 run "scratch/myfirst --PrintAttributes=ns3::PointToPointChannel"
We discover the Delay Attribute of the channel is set in the following way:
--ns3::PointToPointChannel::Delay=[0ns]:
    Transmission delay through the channel
We can then set both of these default values through the command line system,
$ ./ns3 run "scratch/myfirst
    --ns3::PointToPointNetDevice::DataRate=5Mbps
    --ns3::PointToPointChannel::Delay=2ms"

in which case we recover the timing we had when we explicitly set the DataRate and Delay in the script:
+0.000000000s UdpEchoServerApplication:UdpEchoServer(0x1df20f0)
+1.000000000s UdpEchoServerApplication:StartApplication(0x1df20f0)
At time +2s client sent 1024 bytes to 10.1.1.2 port 9
+2.003686400s UdpEchoServerApplication:HandleRead(0x1df20f0, 0x1de0250)
+2.003686400s At time +2.00369s server received 1024 bytes from 10.1.1.1 port 49153
+2.003686400s Echoing packet
+2.003686400s At time +2.00369s server sent 1024 bytes to 10.1.1.1 port 49153
At time +2.00737s client received 1024 bytes from 10.1.1.2 port 9
+10.000000000s UdpEchoServerApplication:StopApplication(0x1df20f0)
UdpEchoServerApplication:DoDispose(0x1df20f0)
UdpEchoServerApplication:~UdpEchoServer(0x1df20f0)
Note that the packet is again received by the server at 2.00369 seconds. We could actually set any of the Attributes used in the script in this way. In particular we could set the UdpEchoClient Attribute MaxPackets to some other value than one.
How would you go about that? Give it a try. Remember you have to comment out the place we override the default Attribute and explicitly set MaxPackets in the script. Then you have to rebuild the script. You will also have to find the syntax for actually setting the new default attribute value using the command line help facility. Once you have this figured out you should be able to control the number of packets echoed from the command line. Since we're nice folks, we'll tell you that your command line should end up looking something like,
$ ./ns3 run "scratch/myfirst
    --ns3::PointToPointNetDevice::DataRate=5Mbps
    --ns3::PointToPointChannel::Delay=2ms
    --ns3::UdpEchoClient::MaxPackets=2"
A natural question to arise at this point is how to learn about the existence of all of these attributes. Again, the command line help facility has a feature for this. If we ask for command line help we should see:
$ ./ns3 run "scratch/myfirst --PrintHelp"
myfirst [General Arguments]
General Arguments:
    --PrintGlobals: Print the list of globals.
    --PrintGroups: Print the list of groups.
    --PrintGroup=[group]: Print all TypeIds of group.
    --PrintTypeIds: Print all TypeIds.
    --PrintAttributes=[typeid]: Print all attributes of typeid.
    --PrintVersion: Print the ns-3 version.
    --PrintHelp: Print this help message.
If you select the "PrintGroups" argument, you should see a list of all registered TypeId groups. The group names are aligned with the module names in the source directory (although with a leading capital letter). Printing out all of the information at once would be too much, so a further filter is available to print information on a per-group basis. So, focusing again on the point-to-point module:
$ ./ns3 run "scratch/myfirst --PrintGroup=PointToPoint"
TypeIds in group PointToPoint:
    ns3::PointToPointChannel
    ns3::PointToPointNetDevice
    ns3::PppHeader
and from here, one can find the possible TypeId names to search for attributes, such as in the --PrintAttributes=ns3::PointToPointChannel example shown above.
Another way to find out about attributes is through the ns-3 Doxygen; there is a page that lists out all of the registered attributes in the simulator.

6.2.2 Hooking Your Own Values

You can also add your own hooks to the command line system. This is done quite simply by using the AddValue method of the command line parser.
Let's use this facility to specify the number of packets to echo in a completely different way. Let's add a local variable called nPackets to the main function. We'll initialize it to one to match our previous default behavior. To allow the command line parser to change this value, we need to hook the value into the parser. We do this by adding a call to AddValue. Go ahead and change the scratch/myfirst.cc script to start with the following code,
int
main(int argc, char *argv[])
{
    uint32_t nPackets = 1;
    CommandLine cmd;
    cmd.AddValue("nPackets", "Number of packets to echo", nPackets);
    cmd.Parse(argc, argv);
    ...
Scroll down to the point in the script where we set the MaxPackets Attribute and change it so that it is set to the nPackets variable instead of the constant 1, as is shown below.
echoClient.SetAttribute("MaxPackets", UintegerValue(nPackets));
Now if you run the script and provide the --PrintHelp argument, you should see your new User Argument listed in the help display.
Try,

$ ./ns3 build
$ ./ns3 run "scratch/myfirst --PrintHelp"

myfirst [Program Options] [General Arguments]

Program Options:
    --nPackets:  Number of packets to echo [1]

General Arguments:
    --PrintGlobals: Print the list of globals.
    --PrintGroups: Print the list of groups.
    --PrintGroup=[group]: Print all TypeIds of group.
    --PrintTypeIds: Print all TypeIds.
    --PrintAttributes=[typeid]: Print all attributes of typeid.
    --PrintVersion: Print the ns-3 version.
    --PrintHelp: Print this help message.
If you want to specify the number of packets to echo, you can now do so by setting the --nPackets argument in the command line,
$ ./ns3 run "scratch/myfirst --nPackets=2"
You should now see
+0.000000000s UdpEchoServerApplication:UdpEchoServer(0x836e50)
+1.000000000s UdpEchoServerApplication:StartApplication(0x836e50)
At time +2s client sent 1024 bytes to 10.1.1.2 port 9
+2.003686400s UdpEchoServerApplication:HandleRead(0x836e50, 0x8450c0)
+2.003686400s At time +2.00369s server received 1024 bytes from 10.1.1.1 port 49153
+2.003686400s Echoing packet
+2.003686400s At time +2.00369s server sent 1024 bytes to 10.1.1.1 port 49153
At time +2.00737s client received 1024 bytes from 10.1.1.2 port 9
At time +3s client sent 1024 bytes to 10.1.1.2 port 9
+3.003686400s UdpEchoServerApplication:HandleRead(0x836e50, 0x8450c0)
+3.003686400s At time +3.00369s server received 1024 bytes from 10.1.1.1 port 49153
+3.003686400s Echoing packet
+3.003686400s At time +3.00369s server sent 1024 bytes to 10.1.1.1 port 49153
At time +3.00737s client received 1024 bytes from 10.1.1.2 port 9
+10.000000000s UdpEchoServerApplication:StopApplication(0x836e50)
UdpEchoServerApplication:DoDispose(0x836e50)
UdpEchoServerApplication:~UdpEchoServer(0x836e50)
You have now echoed two packets. Pretty easy, isn't it?
You can see that if you are an ns-3 user, you can use the command line argument system to control global values and Attributes. If you are a model author, you can add new Attributes to your objects and they will automatically be available for setting by your users through the command line system. If you are a script author, you can add new variables to your scripts and hook them into the command line system quite painlessly.

6.3 Using the Tracing System

The whole point of simulation is to generate output for further study, and the ns-3 tracing system is a primary mechanism for this. Since ns-3 is a C++ program, standard facilities for generating output from C++ programs could be used:
#include <iostream>
...
int main()
{
    ...
    std::cout << "The value of x is " << x << std::endl;
    ...
}
You could even use the logging module to add a little structure to your solution. There are many well-known problems generated by such approaches and so we have provided a generic event tracing subsystem to address the issues we thought were important.
The basic goals of the tracing system are:
  • For basic tasks, the tracing system should allow the user to generate standard tracing for popular tracing sources, and to customize which objects generate the tracing;
  • Intermediate users must be able to extend the tracing system to modify the output format generated, or to insert new tracing sources, without modifying the core of the simulator;
  • Advanced users can modify the simulator core to add new tracing sources and sinks.
The tracing system is built on the concepts of independent tracing sources and tracing sinks, and a uniform mechanism for connecting sources to sinks. Trace sources are entities that can signal events that happen in a simulation and provide access to interesting underlying data. For example, a trace source could indicate when a packet is received by a net device and provide access to the packet contents for interested trace sinks.
Trace sources are not useful by themselves; they must be "connected" to other pieces of code that actually do something useful with the information provided by the source. Trace sinks are consumers of the events and data provided by the trace sources. For example, one could create a trace sink that would (when connected to the trace source of the previous example) print out interesting parts of the received packet.
The rationale for this explicit division is to allow users to attach new types of sinks to existing tracing sources, without requiring editing and recompilation of the core of the simulator. Thus, in the example above, a user could define a new tracing sink in her script and attach it to an existing tracing source defined in the simulation core by editing only the user script.
In this tutorial, we will walk through some pre-defined sources and sinks and show how they may be customized with little user effort. See the ns-3 manual or how-to sections for information on advanced tracing configuration including extending the tracing namespace and creating new tracing sources.

6.3.1 ASCII Tracing

ns-3 provides helper functionality that wraps the low-level tracing system to help you with the details involved in configuring some easily understood packet traces. If you enable this functionality, you will see output in ASCII files, thus the name. For those familiar with ns-2 output, this type of trace is analogous to the out.tr generated by many scripts.
Let's just jump right in and add some ASCII tracing output to our scratch/myfirst.cc script. Right before the call to Simulator::Run(), add the following lines of code:
AsciiTraceHelper ascii;
pointToPoint.EnableAsciiAll(ascii.CreateFileStream("myfirst.tr"));
Like in many other ns-3 idioms, this code uses a helper object to help create ASCII traces. The second line contains two nested method calls. The "inside" method, CreateFileStream(), uses an unnamed object idiom to create a file stream object on the stack (without an object name) and pass it down to the called method. We'll go into this more in the future, but all you have to know at this point is that you are creating an object representing a file named "myfirst.tr" and passing it into ns-3. You are telling ns-3 to deal with the lifetime issues of the created object and also to deal with problems caused by a little-known (intentional) limitation of C++ ofstream objects relating to copy constructors.
The outside call, to EnableAsciiAll(), tells the helper that you want to enable ASCII tracing on all point-to-point devices in your simulation; and you want the (provided) trace sinks to write out information about packet movement in ASCII format.
For those familiar with ns-2, the traced events are equivalent to the popular trace points that log "+", "-", "d", and "r" events.
You can now build the script and run it from the command line:
$ ./ns3 run scratch/myfirst
Just as you have seen many times before, you will see some messages from ns3 and then "'build' finished successfully" with some number of messages from the running program.
When it ran, the program will have created a file named myfirst.tr. Because of the way that ns3 works, the file is not created in the local directory, it is created at the top-level directory of the repository by default. If you want to control where the traces are saved you can use the --cwd option of ns3 to specify this. We have not done so, thus we need to change into the top-level directory of our repo and take a look at the ASCII trace file myfirst.tr in your favorite editor.

Parsing Ascii Traces

There's a lot of information there in a pretty dense form, but the first thing to notice is that there are a number of distinct lines in this file. It may be difficult to see this clearly unless you widen your window considerably.
Each line in the file corresponds to a trace event. In this case we are tracing events on the transmit queue present in every point-to-point net device in the simulation. The transmit queue is a queue through which every packet destined for a point-to-point channel must pass. Note that each line in the trace file begins with a lone character (has a space after it). This character will have the following meaning:
  • +: An enqueue operation occurred on the device queue;
  • -: A dequeue operation occurred on the device queue;
  • d: A packet was dropped, typically because the queue was full;
  • r: A packet was received by the net device.
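Because every line starts with one of these characters, the trace is easy to summarize with standard tools. The sample lines below are inlined so the pipeline runs standalone; on a real trace you would read myfirst.tr instead:

```shell
# Count how many enqueue (+), dequeue (-), drop (d) and receive (r)
# events a trace contains, keyed on the first column of each line.
printf '%s\n' \
  '+ 2 /NodeList/0/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Enqueue' \
  '- 2 /NodeList/0/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Dequeue' \
  'r 2.25732 /NodeList/1/DeviceList/0/$ns3::PointToPointNetDevice/MacRx' |
  awk '{count[$1]++} END {for (op in count) print op, count[op]}' | sort
```

A sudden surplus of d lines in such a summary, for example, is a quick hint that a queue is overflowing.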
Let's take a more detailed view of the first line in the trace file. I'll break it down into sections (indented for clarity) with a reference number on the left side:
00 +
01 2
02 /NodeList/0/DeviceList/0/$ns3::PointToPointNetDevice/TxQueue/Enqueue
03 ns3::PppHeader (
04   Point-to-Point Protocol: IP (0x0021))
05 ns3::Ipv4Header (
06   tos 0x0 ttl 64 id 0 protocol 17 offset 0 flags [none]
07   length: 1052 10.1.1.1 > 10.1.1.2)
08 ns3::UdpHeader (
09   length: 1032 49153 > 9)
10 Payload (size=1024)
The first section of this expanded trace event (reference number 00) is the operation. We have a + character, so this corresponds to an enqueue operation on the transmit queue. The second section (reference 01) is the simulation time expressed in seconds. You may recall that we asked the UdpEchoClientApplication to start sending packets at two seconds. Here we see confirmation that this is, indeed, happening.
The next section of the example trace (reference 2) tell us which trace source originated this event (expressed in the tracing namespace). You can think of the tracing namespace somewhat like you would a filesystem namespace. The root of the namespace is the NodeList. This corresponds to a container managed in the core code that contains all of the nodes that are created in a script. Just as a filesystem may have directories under the root, we may have node numbers in the NodeList. The string /NodeList/o therefore refers to the zeroth node in the NodeList which we typically think of as "node 0 ". In each node there is a list of devices that have been installed. This list appears next in the namespace. You can see that this trace event comes from DeviceList/o which is the zeroth device installed in the node.
The next string, $ns3::PointToPointNetDevice, tells you what kind of device is in the zeroth position of the device list for node zero. Recall that the operation + found at reference 0 meant that an enqueue operation happened on the transmit queue of the device. This is reflected in the final segments of the "trace path" which are TxQueue/Enqueue.
The remaining sections in the trace should be fairly intuitive. References 3-4 indicate that the packet is encapsulated in the point-to-point protocol. References 5-7 show that the packet has an IP version four header and has originated from IP address 10.1.1.1 and is destined for 10.1.1.2. References 8-9 show that this packet has a UDP header and, finally, reference 10 shows that the payload is the expected 1024 bytes.
The next line in the trace file shows the same packet being dequeued from the transmit queue on the same node.
The third line in the trace file shows the packet being received by the net device on the node with the echo server. I have reproduced that event below.
r
2.25732
/NodeList/1/DeviceList/0/$ns3::PointToPointNetDevice/MacRx
    ns3::Ipv4Header (
        tos 0x0 ttl 64 id 0 protocol 17 offset 0 flags [none]
        length: 1052 10.1.1.1 > 10.1.1.2)
        ns3::UdpHeader (
            length: 1032 49153 > 9)
            Payload (size=1024)
Notice that the trace operation is now r and the simulation time has increased to 2.25732 seconds. If you have been following the tutorial steps closely this means that you have left the DataRate of the net devices and the channel Delay set to their default values. This time should be familiar as you have seen it before in a previous section.
The trace source namespace entry (reference 2) has changed to reflect that this event is coming from node 1 (/NodeList/1) and the packet reception trace source (/MacRx). It should be quite easy for you to follow the progress of the packet through the topology by looking at the rest of the traces in the file.

6.3.2 PCAP Tracing

The device helpers can also be used to create trace files in the .pcap format. The acronym pcap (usually written in lower case) stands for packet capture, and is actually an API that includes the definition of a .pcap file format. The most popular program that can read and display this format is Wireshark (formerly called Ethereal). However, there are many traffic trace analyzers that use this packet format. We encourage users to exploit the many tools available for analyzing pcap traces. In this tutorial, we concentrate on viewing pcap traces with tcpdump.
The code used to enable pcap tracing is a one-liner.
pointToPoint.EnablePcapAll("myfirst");
Go ahead and insert this line of code after the ASCII tracing code we just added to scratch/myfirst.cc. Notice that we only passed the string "myfirst," and not "myfirst.pcap" or something similar. This is because the parameter is a prefix, not a complete file name. The helper will actually create a trace file for every point-to-point device in the simulation. The file names will be built using the prefix, the node number, the device number and a ".pcap" suffix.
In our example script, we will eventually see files named "myfirst-0-0.pcap" and "myfirst-1-0.pcap" which are the pcap traces for node 0-device 0 and node 1-device 0, respectively.
Once you have added the line of code to enable pcap tracing, you can run the script in the usual way:
$ ./ns3 run scratch/myfirst
If you look at the top level directory of your distribution, you should now see three log files: myfirst.tr is the ASCII trace file we have previously examined. myfirst-0-0.pcap and myfirst-1-0.pcap are the new pcap files we just generated.

Reading output with tcpdump

The easiest thing to do at this point will be to use tcpdump to look at the pcap files.
$ tcpdump -nn -tt -r myfirst-0-0.pcap
reading from file myfirst-0-0.pcap, link-type PPP (PPP)
2.000000 IP 10.1.1.1.49153 > 10.1.1.2.9: UDP, length 1024
2.514648 IP 10.1.1.2.9 > 10.1.1.1.49153: UDP, length 1024
$ tcpdump -nn -tt -r myfirst-1-0.pcap
reading from file myfirst-1-0.pcap, link-type PPP (PPP)
2.257324 IP 10.1.1.1.49153 > 10.1.1.2.9: UDP, length 1024
2.257324 IP 10.1.1.2.9 > 10.1.1.1.49153: UDP, length 1024
2.257324 IP 10.1.1.2.9 > 10.1.1.1.49153:UDP,长度 1024
You can see in the dump of myfirst-0-0.pcap (the client device) that the echo packet is sent at 2 seconds into the simulation. If you look at the second dump (myfirst-1-0 . pcap) you can see that packet being received at 2.257324 seconds. You see the packet being echoed back at 2.257324 seconds in the second dump, and finally, you see the packet being received back at the client in the first dump at 2.514648 seconds.

Reading output with Wireshark

If you are unfamiliar with Wireshark, there is a web site available from which you can download programs and documentation: http://www.wireshark.org/.
Wireshark is a graphical user interface which can be used for displaying these trace files. If you have Wireshark available, you can open each of the trace files and display the contents as if you had captured the packets using a packet sniffer.

BUILDING TOPOLOGIES

7.1 Building a Bus Network Topology

In this section we are going to expand our mastery of ns-3 network devices and channels to cover an example of a bus network. ns-3 provides a net device and channel we call CSMA (Carrier Sense Multiple Access).
The CSMA device models a simple network in the spirit of Ethernet. A real Ethernet uses a CSMA/CD (Carrier Sense Multiple Access with Collision Detection) scheme with exponentially increasing backoff to contend for the shared transmission medium. The ns-3 CSMA device and channel model only a subset of this.
Just as we have seen point-to-point topology helper objects when constructing point-to-point topologies, we will see equivalent CSMA topology helpers in this section. The appearance and operation of these helpers should look quite familiar to you.
We provide an example script in our examples/tutorial directory. This script builds on the first.cc script and adds a CSMA network to the point-to-point simulation we've already considered. Go ahead and open examples/ tutorial/second.cc in your favorite editor. You will have already seen enough ns-3 code to understand most of what is going on in this example, but we will go over the entire script and examine some of the output.
Just as in the first.cc example (and in all ns-3 examples) the file begins with an emacs mode line and some GPL boilerplate.
The actual code begins by loading module include files just as was done in the first.cc example.
#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/csma-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"
#include "ns3/ipv4-global-routing-helper.h"
One thing that can be surprisingly useful is a small bit of ASCII art that shows a cartoon of the network topology constructed in the example. You will find a similar "drawing" in most of our examples.
In this case, you can see that we are going to extend our point-to-point example (the link between nodes n0 and n1 below) by hanging a bus network off of the right side. Notice that this is the default network topology since you can actually vary the number of nodes created on the LAN. If you set nCsma to one, there will be a total of two nodes on the LAN (CSMA channel): one required node and one "extra" node. By default there are three "extra" nodes as seen below:
// Default Network Topology
//
//       10.1.1.0
// n0 -------------- n1   n2   n3   n4
//    point-to-point  |    |    |    |
//                    ================
//                      LAN 10.1.2.0
Then the ns-3 namespace is used and a logging component is defined. This is all just as it was in first.cc, so there is nothing new yet.
using namespace ns3;
NS_LOG_COMPONENT_DEFINE("SecondScriptExample");
The main program begins with a slightly different twist. We use a verbose flag to determine whether or not the UdpEchoClientApplication and UdpEchoServerApplication logging components are enabled. This flag defaults to true (the logging components are enabled) but allows us to turn off logging during regression testing of this example.
You will see some familiar code that will allow you to change the number of devices on the CSMA network via command line argument. We did something similar when we allowed the number of packets sent to be changed in the section on command line arguments. The last line makes sure you have at least one "extra" node.
The code consists of variations of previously covered API so you should be entirely comfortable with the following code at this point in the tutorial.
bool verbose = true;
uint32_t nCsma = 3;
CommandLine cmd;
cmd.AddValue("nCsma", "Number of \"extra\" CSMA nodes/devices", nCsma);
cmd.AddValue("verbose", "Tell echo applications to log if true", verbose);
cmd.Parse(argc, argv);
if (verbose)
    {
        LogComponentEnable("UdpEchoClientApplication", LOG_LEVEL_INFO);
        LogComponentEnable("UdpEchoServerApplication", LOG_LEVEL_INFO);
    }
nCsma = nCsma == 0 ? 1 : nCsma;
The next step is to create two nodes that we will connect via the point-to-point link. The NodeContainer is used to do this just as was done in first.cc.
NodeContainer p2pNodes;
p2pNodes.Create(2);
Next, we declare another NodeContainer to hold the nodes that will be part of the bus (CSMA) network. First, we just instantiate the container object itself.
NodeContainer csmaNodes;
csmaNodes.Add(p2pNodes.Get(1));
csmaNodes.Create(nCsma);
The next line of code Gets the first node (as in having an index of one) from the point-to-point node container and adds it to the container of nodes that will get CSMA devices. The node in question is going to end up with a point-to-point device and a CSMA device. We then create a number of "extra" nodes that compose the remainder of the CSMA network. Since we already have one node in the CSMA network - the one that will have both a point-to-point
and CSMA net device, the number of "extra" nodes means the number of nodes you desire in the CSMA section minus one.
The next bit of code should be quite familiar by now. We instantiate a PointToPointHelper and set the associated default Attributes so that we create a five megabit per second transmitter on devices created using the helper and a two millisecond delay on channels created by the helper.
PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute("DataRate", StringValue("5Mbps"));
pointToPoint.SetChannelAttribute("Delay", StringValue("2ms"));
NetDeviceContainer p2pDevices;
p2pDevices = pointToPoint.Install(p2pNodes);
We then instantiate a NetDeviceContainer to keep track of the point-to-point net devices and we Install devices on the point-to-point nodes.
We mentioned above that you were going to see a helper for CSMA devices and channels, and the next lines introduce them. The CsmaHelper works just like a PointToPointHelper, but it creates and connects CSMA devices and channels. In the case of a CSMA device and channel pair, notice that the data rate is specified by a channel Attribute instead of a device Attribute. This is because a real CSMA network does not allow one to mix, for example, 10Base-T and 100Base-T devices on a given channel. We first set the data rate to 100 megabits per second, and then set the speed-of-light delay of the channel to 6560 nanoseconds (arbitrarily chosen as 1 nanosecond per foot over a 2000 meter segment). Notice that you can set an Attribute using its native data type.
CsmaHelper csma;
csma.SetChannelAttribute("DataRate", StringValue("100Mbps"));
csma.SetChannelAttribute("Delay", TimeValue(NanoSeconds(6560)));
NetDeviceContainer csmaDevices;
csmaDevices = csma.Install(csmaNodes);
Just as we created a NetDeviceContainer to hold the devices created by the PointToPointHelper we create a NetDeviceContainer to hold the devices created by our CsmaHelper. We call the Install method of the CsmaHelper to install the devices into the nodes of the csmaNodes NodeContainer.
We now have our nodes, devices and channels created, but we have no protocol stacks present. Just as in the first.cc script, we will use the InternetStackHelper to install these stacks.
InternetStackHelper stack;
stack.Install(p2pNodes.Get(0));
stack.Install(csmaNodes);
Recall that we took one of the nodes from the p2pNodes container and added it to the csmaNodes container. Thus we only need to install the stacks on the remaining p2pNodes node, and all of the nodes in the csmaNodes container to cover all of the nodes in the simulation.
Just as in the first.cc example script, we are going to use the Ipv4AddressHelper to assign IP addresses to our device interfaces. First we use the network 10.1.1.0 to create the two addresses needed for our two point-to-point devices.
Ipv4AddressHelper address;
address.SetBase("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer p2pInterfaces;
p2pInterfaces = address.Assign(p2pDevices);
Recall that we save the created interfaces in a container to make it easy to pull out addressing information later for use in setting up the applications.
We now need to assign IP addresses to our CSMA device interfaces. The operation works just as it did for the point-to-point case, except we now are performing the operation on a container that has a variable number of CSMA devices - remember we made the number of CSMA devices changeable by command line argument. The CSMA devices will be associated with IP addresses from network number 10.1.2.0 in this case, as seen below.

Ipv4InterfaceContainer csmaInterfaces;
csmaInterfaces = address.Assign(csmaDevices);

Now we have a topology built, but we need applications. This section is going to be fundamentally similar to the applications section of first.cc but we are going to instantiate the server on one of the nodes that has a CSMA device and the client on the node having only a point-to-point device.

First, we set up the echo server. We create a UdpEchoServerHelper and provide a required Attribute value to the constructor which is the server port number. Recall that this port can be changed later using the SetAttribute method if desired, but we require it to be provided to the constructor.

UdpEchoServerHelper echoServer(9);

ApplicationContainer serverApps = echoServer.Install(csmaNodes.Get(nCsma));

serverApps.Start(Seconds(1.0));

serverApps.Stop(Seconds(10.0));

Recall that the csmaNodes NodeContainer contains one of the nodes created for the point-to-point network and nCsma "extra" nodes. What we want to get at is the last of the "extra" nodes. The zeroth entry of the csmaNodes container will be the point-to-point node. The easy way to think of this, then, is if we create one "extra" CSMA node, then it will be at index one of the csmaNodes container. By induction, if we create nCsma "extra" nodes the last one will be at index nCsma. You see this exhibited in the Get of the first line of code.

The client application is set up exactly as we did in the first.cc example script. Again, we provide required Attributes to the UdpEchoClientHelper in the constructor (in this case the remote address and port). We tell the client to send packets to the server we just installed on the last of the "extra" CSMA nodes. We install the client on the leftmost point-to-point node seen in the topology illustration.

UdpEchoClientHelper echoClient(csmaInterfaces.GetAddress(nCsma), 9);
echoClient.SetAttribute("MaxPackets", UintegerValue(1));
echoClient.SetAttribute("Interval", TimeValue(Seconds(1.0)));
echoClient.SetAttribute("PacketSize", UintegerValue(1024));
ApplicationContainer clientApps = echoClient.Install(p2pNodes.Get(0));
clientApps.Start(Seconds(2.0));
clientApps.Stop(Seconds(10.0));

Since we have actually built an internetwork here, we need some form of internetwork routing. ns-3 provides what we call global routing to help you out. Global routing takes advantage of the fact that the entire internetwork is accessible in the simulation and runs through all of the nodes created for the simulation - it does the hard work of setting up routing for you without having to configure routers.

Basically, what happens is that each node behaves as if it were an OSPF router that communicates instantly and magically with all other routers behind the scenes. Each node generates link advertisements and communicates them directly to a global route manager which uses this global information to construct the routing tables for each node. Setting up this form of routing is a one-liner:

Ipv4GlobalRoutingHelper::PopulateRoutingTables();

Next we enable pcap tracing. The first line of code to enable pcap tracing in the point-to-point helper should be familiar to you by now. The second line enables pcap tracing in the CSMA helper and there is an extra parameter you haven't encountered yet.

pointToPoint.EnablePcapAll("second");

csma.EnablePcap("second", csmaDevices.Get(1), true);

The CSMA network is a multi-point-to-point network. This means that there can (and are in this case) multiple endpoints on a shared medium. Each of these endpoints has a net device associated with it. There are two basic alternatives to gathering trace information from such a network. One way is to create a trace file for each net device and store only the packets that are emitted or consumed by that net device. Another way is to pick one of the devices and place it in promiscuous mode. That single device then "sniffs" the network for all packets and stores them in a single pcap file. This is how tcpdump, for example, works. That final parameter tells the CSMA helper whether or not to arrange to capture packets in promiscuous mode.

In this example, we are going to select one of the devices on the CSMA network and ask it to perform a promiscuous sniff of the network, thereby emulating what tcpdump would do. If you were on a Linux machine you might do something like tcpdump -i eth0 to get the trace. In this case, we specify the device using csmaDevices.Get(1), which selects the device at index one in the container (the device on the first "extra" CSMA node). Setting the final parameter to true enables promiscuous captures.

The last section of code just runs and cleans up the simulation just like the first.cc example.

Simulator::Run();
Simulator::Destroy();
return 0;
}

In order to run this example, copy the second.cc example script into the scratch directory and use the ns3 build script to build just as you did with the first.cc example. If you are in the top-level directory of the repository you just type,

$ cp examples/tutorial/second.cc scratch/mysecond.cc

$ ./ns3 build

Warning: We use the file second.cc as one of our regression tests to verify that it works exactly as we think it should in order to make your tutorial experience a positive one. This means that an executable named second already exists in the project. To avoid any confusion about what you are executing, please do the renaming to mysecond.cc suggested above.

If you are following the tutorial closely, you will still have the NS_LOG variable set, so go ahead and clear that variable and run the program.

$ export NS_LOG=""

$ ./ns3 run scratch/mysecond

Since we have set up the UDP echo applications to log just as we did in first.cc, you will see similar output when you run the script.

At time +2s client sent 1024 bytes to 10.1.2.4 port 9

At time +2.0078s server received 1024 bytes from 10.1.1.1 port 49153

At time +2.0078s server sent 1024 bytes to 10.1.1.1 port 49153

At time +2.01761s client received 1024 bytes from 10.1.2.4 port 9

Recall that the first message, "Sent 1024 bytes to 10.1.2.4," is the UDP echo client sending a packet to the server. In this case, the server is on a different network (10.1.2.0). The second message, "Received 1024 bytes from 10.1.1.1," is from the UDP echo server, generated when it receives the echo packet. The final message, "Received 1024 bytes from 10.1.2.4," is from the echo client, indicating that it has received its echo back from the server.

If you now look in the top level directory, you will find three trace files:
second-0-0.pcap second-1-0.pcap second-2-0.pcap

Let's take a moment to look at the naming of these files. They all have the same form, <name>-<node>-<device>.pcap. For example, the first file in the listing is second-0-0.pcap which is the pcap trace from node zero, device zero. This is the point-to-point net device on node zero. The file second-1-0.pcap is the pcap trace for device zero on node one, also a point-to-point net device; and the file second-2-0.pcap is the pcap trace for device zero on node two.

If you refer back to the topology illustration at the start of the section, you will see that node zero is the leftmost node of the point-to-point link and node one is the node that has both a point-to-point device and a CSMA device. You will see that node two is the first "extra" node on the CSMA network and its device zero was selected as the device to capture the promiscuous-mode trace.

Now, let's follow the echo packet through the internetwork. First, do a tcpdump of the trace file for the leftmost point-to-point node - node zero.

$ tcpdump -nn -tt -r second-0-0.pcap

You should see the contents of the pcap file displayed:

reading from file second-0-0.pcap, link-type PPP (PPP)

2.000000 IP 10.1.1.1.49153 > 10.1.2.4.9: UDP, length 1024

2.017607 IP 10.1.2.4.9 > 10.1.1.1.49153: UDP, length 1024

The first line of the dump indicates that the link type is PPP (point-to-point) which we expect. You then see the echo packet leaving node zero via the device associated with IP address 10.1.1.1 headed for IP address 10.1.2.4 (the rightmost CSMA node). This packet will move over the point-to-point link and be received by the point-to-point net device on node one. Let's take a look:

$ tcpdump -nn -tt -r second-1-0.pcap

You should now see the pcap trace output of the other side of the point-to-point link:

reading from file second-1-0.pcap, link-type PPP (PPP)

2.003686 IP 10.1.1.1.49153 > 10.1.2.4.9: UDP, length 1024

2.013921 IP 10.1.2.4.9 > 10.1.1.1.49153: UDP, length 1024

Here we see that the link type is also PPP as we would expect. You see the packet from IP address 10.1.1.1 (that was sent at 2.000000 seconds) headed toward IP address 10.1.2.4 appear on this interface. Now, internally to this node, the packet will be forwarded to the CSMA interface and we should see it pop out on that device headed for its ultimate destination.

Remember that we selected node 2 as the promiscuous sniffer node for the CSMA network so let's then look at second-2-0.pcap and see if it's there.

$ tcpdump -nn -tt -r second-2-0.pcap

You should now see the promiscuous dump of node two, device zero:

reading from file second-2-0.pcap, link-type EN10MB (Ethernet)

2.007698 ARP, Request who-has 10.1.2.4 (ff:ff:ff:ff:ff:ff) tell 10.1.2.1, length 50

2.007710 ARP, Reply 10.1.2.4 is-at 00:00:00:00:00:06, length 50

2.007803 IP 10.1.1.1.49153 > 10.1.2.4.9: UDP, length 1024

2.013815 ARP, Request who-has 10.1.2.1 (ff:ff:ff:ff:ff:ff) tell 10.1.2.4, length 50

2.013828 ARP, Reply 10.1.2.1 is-at 00:00:00:00:00:03, length 50

2.013921 IP 10.1.2.4.9 > 10.1.1.1.49153: UDP, length 1024

As you can see, the link type is now "Ethernet". Something new has appeared, though. The bus network needs ARP, the Address Resolution Protocol. Node one knows it needs to send the packet to IP address 10.1.2.4, but it doesn't know the MAC address of the corresponding node. It broadcasts on the CSMA network (ff:ff:ff:ff:ff:ff) asking for the device that has IP address 10.1.2.4. In this case, the rightmost node replies saying it is at MAC address 00:00:00:00:00:06. Note that node two is not directly involved in this exchange, but is sniffing the network and reporting all of the traffic it sees.

This exchange is seen in the following lines,

2.007698 ARP, Request who-has 10.1.2.4 (ff:ff:ff:ff:ff:ff) tell 10.1.2.1, length 50

2.007710 ARP, Reply 10.1.2.4 is-at 00:00:00:00:00:06, length 50

Then node one, device one goes ahead and sends the echo packet to the UDP echo server at IP address 10.1.2.4.

2.007803 IP 10.1.1.1.49153 > 10.1.2.4.9: UDP, length 1024

The server receives the echo request and turns the packet around trying to send it back to the source. The server knows that this address is on another network that it reaches via IP address 10.1.2.1. This is because we initialized global routing and it has figured all of this out for us. But, the echo server node doesn't know the MAC address of the first CSMA node, so it has to ARP for it just like the first CSMA node had to do.

2.013815 ARP, Request who-has 10.1.2.1 (ff:ff:ff:ff:ff:ff) tell 10.1.2.4, length 50

2.013828 ARP, Reply 10.1.2.1 is-at 00:00:00:00:00:03, length 50

The server then sends the echo back to the forwarding node.

2.013921 IP 10.1.2.4.9 > 10.1.1.1.49153: UDP, length 1024

Looking back at the rightmost node of the point-to-point link,

$ tcpdump -nn -tt -r second-1-0.pcap

You can now see the echoed packet coming back onto the point-to-point link as the last line of the trace dump.

reading from file second-1-0.pcap, link-type PPP (PPP)

2.003686 IP 10.1.1.1.49153 > 10.1.2.4.9: UDP, length 1024

2.013921 IP 10.1.2.4.9 > 10.1.1.1.49153: UDP, length 1024

Lastly, you can look back at the node that originated the echo

$ tcpdump -nn -tt -r second-0-0.pcap

and see that the echoed packet arrives back at the source at 2.017607 seconds,

reading from file second-0-0.pcap, link-type PPP (PPP)

2.000000 IP 10.1.1.1.49153 > 10.1.2.4.9: UDP, length 1024

2.017607 IP 10.1.2.4.9 > 10.1.1.1.49153: UDP, length 1024

Finally, recall that we added the ability to control the number of CSMA devices in the simulation by command line argument. You can change this argument in the same way as when we looked at changing the number of packets echoed in the first.cc example. Try running the program with the number of "extra" devices set to four, instead of the default value of three (extra nodes):

$ ./ns3 run "scratch/mysecond --nCsma=4"

You should now see,

At time +2s client sent 1024 bytes to 10.1.2.5 port 9

At time +2.0118s server received 1024 bytes from 10.1.1.1 port 49153

At time +2.0118s server sent 1024 bytes to 10.1.1.1 port 49153

At time +2.02461s client received 1024 bytes from 10.1.2.5 port 9

Notice that the echo server has now been relocated to the last of the CSMA nodes, which is 10.1.2.5 instead of the default case, 10.1.2.4.

It is possible that you may not be satisfied with a trace file generated by a bystander in the CSMA network. You may really want to get a trace from a single device and you may not be interested in any other traffic on the network. You can do this fairly easily.

Let's take a look at scratch/mysecond.cc and add that code enabling us to be more specific. ns-3 helpers provide methods that take a node number and device number as parameters. Go ahead and replace the EnablePcap calls with the calls below.

pointToPoint.EnablePcap("second", p2pNodes.Get(0)->GetId(), 0);

csma.EnablePcap("second", csmaNodes.Get(nCsma)->GetId(), 0, false);

csma.EnablePcap("second", csmaNodes.Get(nCsma-1)->GetId(), 0, false);

We know that we want to create a pcap file with the base name "second" and we also know that the device of interest in both cases is going to be zero, so those parameters are not really interesting.

In order to get the node number, you have two choices. First, nodes are numbered in a monotonically increasing fashion starting from zero in the order in which you created them, so one way to get a node number is to figure it out "manually" by contemplating the order of node creation. If you take a look at the network topology illustration at the beginning of the file, we did this for you and you can see that the last CSMA node is going to be node number nCsma + 1. This approach can become annoyingly difficult in larger simulations.

An alternate way, which we use here, is to realize that the NodeContainers contain pointers to ns-3 Node Objects. The Node Object has a method called GetId which will return that node's ID, which is the node number we seek. Let's go take a look at the Doxygen for the Node and locate that method, which is further down in the ns-3 core code than we've seen so far; but sometimes you have to search diligently for useful things.

Go to the Doxygen documentation for your release (recall that you can find it on the project web site). You can get to the Node documentation by looking through the "Classes" tab and scrolling down the "Class List" until you find ns3::Node. Select ns3::Node and you will be taken to the documentation for the Node class. If you now scroll down to the GetId method and select it, you will be taken to the detailed documentation for the method. Using the GetId method can make determining node numbers much easier in complex topologies.

Let's clear the old trace files out of the top-level directory to avoid confusion about what is going on,

$ rm *.pcap

On line 110, notice the following command to enable tracing on one node (the index 1 corresponds to the second CSMA node in the container):

csma.EnablePcap("second", csmaDevices.Get(1), true);

Change the index to the quantity nCsma, corresponding to the last node in the topology- the node that contains the echo server:

csma.EnablePcap("second", csmaDevices.Get(nCsma), true);

If you build the new script and run the simulation setting nCsma to 100,

$ ./ns3 build

\$ ./ns3 run "scratch/mysecond --nCsma=100"
you will see the following output:

At time +2s client sent 1024 bytes to 10.1.2.101 port 9
At time +2.0068s server received 1024 bytes from 10.1.1.1 port 49153
At time +2.0068s server sent 1024 bytes to 10.1.1.1 port 49153
At time +2.01761s client received 1024 bytes from 10.1.2.101 port 9

Note that the echo server is now located at 10.1.2.101 which corresponds to having 100 "extra" CSMA nodes with the echo server on the last one. If you list the pcap files in the top level directory you will see,

second-0-0.pcap second-1-0.pcap second-101-0.pcap

The trace file second-0-0.pcap is the "leftmost" point-to-point device which is the echo packet source. The file second-101-0.pcap corresponds to the rightmost CSMA device which is where the echo server resides. You may have noticed that the final parameter on the call to enable pcap tracing on the echo server node was true. This means that the trace gathered on that node was in promiscuous mode.

To illustrate the difference between promiscuous and non-promiscuous traces, let's add a non-promiscuous trace for the next-to-last node. Add the following line before or after the existing PCAP trace line; the last argument of false indicates that you would like a non-promiscuous trace:

csma.EnablePcap("second", csmaDevices.Get(nCsma - 1), false);

Now build and run as before:

$ rm *.pcap

$ ./ns3 build

\$ ./ns3 run "scratch/mysecond --nCsma=100"

This will produce a new PCAP file, second-100-0.pcap. Go ahead and take a look at the tcpdump for second-100-0.pcap.

\$ tcpdump -nn -tt -r second-100-0.pcap

You can now see that node 100 is really a bystander in the echo exchange. The only packets that it receives are the ARP requests which are broadcast to the entire CSMA network.

reading from file second-100-0.pcap, link-type EN10MB (Ethernet)

2.006698 ARP, Request who-has 10.1.2.101 (ff:ff:ff:ff:ff:ff) tell 10.1.2.1, length 50

2.013815 ARP, Request who-has 10.1.2.1 (ff:ff:ff:ff:ff:ff) tell 10.1.2.101, length 50

Now take a look at the tcpdump for second-101-0.pcap.

\$ tcpdump -nn -tt -r second-101-0.pcap

Node 101 is really the participant in the echo exchange; the following trace will exist regardless of whether promiscuous mode is set on that PCAP statement.

reading from file second-101-0.pcap, link-type EN10MB (Ethernet)
2.006698 ARP, Request who-has 10.1.2.101 (ff:ff:ff:ff:ff:ff) tell 10.1.2.1, length 50
2.006698 ARP, Reply 10.1.2.101 is-at 00:00:00:00:00:67, length 50
2.006803 IP 10.1.1.1.49153 > 10.1.2.101.9: UDP, length 1024
2.013803 ARP, Request who-has 10.1.2.1 (ff:ff:ff:ff:ff:ff) tell 10.1.2.101, length 50
2.013828 ARP, Reply 10.1.2.1 is-at 00:00:00:00:00:03, length 50
2.013828 IP 10.1.2.101.9 > 10.1.1.1.49153: UDP, length 1024

\subsection*{7.2 Models, Attributes and Reality}

This is a convenient place to make a small excursion and make an important point. It may or may not be obvious to you, but whenever one is using a simulation, it is important to understand exactly what is being modeled and what is not. It is tempting, for example, to think of the CSMA devices and channels used in the previous section as if they were real Ethernet devices; and to expect a simulation result to directly reflect what will happen in a real Ethernet. This is not the case.

A model is, by definition, an abstraction of reality. It is ultimately the responsibility of the simulation script author to determine the so-called "range of accuracy" and "domain of applicability" of the simulation as a whole, and therefore its constituent parts.

In some cases, like csma, it can be fairly easy to determine what is not modeled. By reading the model description (csma.h) you can find that there is no collision detection in the CSMA model and decide on how applicable its use will be in your simulation or what caveats you may want to include with your results. In other cases, it can be quite easy to configure behaviors that might not agree with any reality you can go out and buy. It will prove worthwhile to spend some time investigating a few such instances, and how easily you can swerve outside the bounds of reality in your simulations.

As you have seen, ns-3 provides Attributes which a user can easily set to change model behavior. Consider two of the Attributes of the CsmaNetDevice: Mtu and EncapsulationMode. The Mtu attribute indicates the Maximum Transmission Unit of the device. This is the size of the largest Protocol Data Unit (PDU) that the device can send.

The MTU defaults to 1500 bytes in the CsmaNetDevice. This default corresponds to a number found in RFC 894, "A Standard for the Transmission of IP Datagrams over Ethernet Networks." The number is actually derived from the maximum packet size for 10Base5 (full-spec Ethernet) networks - 1518 bytes. If you subtract the DIX encapsulation overhead for Ethernet packets (18 bytes) you will end up with a maximum possible data size (MTU) of 1500 bytes. One can also find that the MTU for IEEE 802.3 networks is 1492 bytes. This is because LLC/SNAP encapsulation adds an extra eight bytes of overhead to the packet. In both cases, the underlying hardware can only send 1518 bytes, but the data size is different.

In order to set the encapsulation mode, the CsmaNetDevice provides an Attribute called EncapsulationMode which can take on the values Dix or Llc. These correspond to Ethernet and LLC/SNAP framing respectively.

If one leaves the Mtu at 1500 bytes and changes the encapsulation mode to Llc, the result will be a network that encapsulates 1500 byte PDUs with LLC/SNAP framing resulting in packets of 1526 bytes, which would be illegal in many networks, since they can transmit a maximum of 1518 bytes per packet. This would most likely result in a simulation that quite subtly does not reflect the reality you might be expecting.

Just to complicate the picture, there exist jumbo frames (1500 < MTU <= 9000 bytes) and super-jumbo (MTU > 9000 bytes) frames that are not officially sanctioned by IEEE but are available in some high-speed (Gigabit) networks and NICs. One could leave the encapsulation mode set to Dix, and set the Mtu Attribute on a CsmaNetDevice to 64000 bytes - even though an associated CsmaChannel DataRate was set at 10 megabits per second. This would essentially model an Ethernet switch made out of vampire-tapped 1980s-style 10Base5 networks that support super-jumbo datagrams. This is certainly not something that was ever made, nor is likely to ever be made, but it is quite easy for you to configure.
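
For example, nothing in the helper API stops you from configuring exactly that device. The fragment below is a sketch of such a (deliberately unrealistic) configuration, not something the tutorial scripts actually do:

```cpp
CsmaHelper csma;
// A 10 Mbps channel...
csma.SetChannelAttribute("DataRate", StringValue("10Mbps"));
// ...whose devices are nevertheless configured to send 64000-byte PDUs.
csma.SetDeviceAttribute("Mtu", UintegerValue(64000));
```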

In the previous example, you used the command line to create a simulation that had 100 CSMA nodes. You could have just as easily created a simulation with 500 nodes. If you were actually modeling that 10Base5 vampire-tap network, the maximum length of a full-spec Ethernet cable is 500 meters, with a minimum tap spacing of 2.5 meters. That means there could only be 200 taps on a real network. You could have quite easily built an illegal network in that way as well. This may or may not result in a meaningful simulation depending on what you are trying to model.

Similar situations can occur in many places in ns-3 and in any simulator. For example, you may be able to position nodes in such a way that they occupy the same space at the same time, or you may be able to configure amplifiers or noise levels that violate the basic laws of physics. ns-3 generally favors flexibility, and many models will allow freely setting Attributes without trying to enforce any arbitrary consistency or particular underlying spec.

The thing to take home from this is that ns-3 is going to provide a super-flexible base for you to experiment with. It is up to you to understand what you are asking the system to do and to make sure that the simulations you create have some meaning and some connection with a reality defined by you.

\subsection*{7.3 Building a Wireless Network Topology}

In this section we are going to further expand our knowledge of ns-3 network devices and channels to cover an example of a wireless network. ns-3 provides a set of 802.11 models that attempt to provide an accurate MAC-level implementation of the 802.11 specification and a "not-so-slow" PHY-level model of the 802.11a specification.

Just as we have seen both point-to-point and CSMA topology helper objects when constructing point-to-point topologies, we will see equivalent Wifi topology helpers in this section. The appearance and operation of these helpers should look quite familiar to you.

We provide an example script in our examples/tutorial directory. This script builds on the second.cc script and adds a Wi-Fi network. Go ahead and open examples/tutorial/third.cc in your favorite editor. You will have already seen enough ns-3 code to understand most of what is going on in this example, but there are a few new things, so we will go over the entire script and examine some of the output.

Just as in the second.cc example (and in all ns-3 examples) the file begins with an emacs mode line and some GPL boilerplate.

Take a look at the ASCII art (reproduced below) that shows the default network topology constructed in the example. You can see that we are going to further extend our example by hanging a wireless network off of the left side. Notice that this is a default network topology since you can actually vary the number of nodes created on the wired and wireless networks. Just as in the second.cc script case, if you change nCsma, it will give you a number of "extra" CSMA nodes. Similarly, you can set nWifi to control how many STA (station) nodes are created in the simulation. There will always be one AP (access point) node on the wireless network. By default there are three "extra" CSMA nodes and three wireless STA nodes.

The code begins by loading module include files just as was done in the second.cc example. There are a couple of new includes corresponding to the wifi module and the mobility module which we will discuss below.

#include "ns3/core-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/network-module.h"
#include "ns3/applications-module.h"
#include "ns3/wifi-module.h"
#include "ns3/mobility-module.h"
#include "ns3/csma-module.h"
#include "ns3/internet-module.h"

The network topology illustration follows:

// Default Network Topology
//
//   Wifi 10.1.3.0
//                 AP
//  *    *    *    *
//  |    |    |    |    10.1.1.0
// n5   n6   n7   n0 -------------- n1   n2   n3   n4
//                   point-to-point  |    |    |    |
//                                   ================
//                                     LAN 10.1.2.0

You can see that we are adding a new network device to the node on the left side of the point-to-point link that becomes the access point for the wireless network. A number of wireless STA nodes are created to fill out the new 10.1.3.0 network as shown on the left side of the illustration.

After the illustration, the ns-3 namespace is used and a logging component is defined. This should all be quite familiar by now.

using namespace ns3;

NS_LOG_COMPONENT_DEFINE("ThirdScriptExample");

The main program begins just like second.cc by adding some command line parameters for enabling or disabling logging components and for changing the number of devices created.

bool verbose = true;
uint32_t nCsma = 3;
uint32_t nWifi = 3;
CommandLine cmd;
cmd.AddValue("nCsma", "Number of \"extra\" CSMA nodes/devices", nCsma);
cmd.AddValue("nWifi", "Number of wifi STA devices", nWifi);
cmd.AddValue("verbose", "Tell echo applications to log if true", verbose);

cmd.Parse(argc, argv);
if (verbose)
{
LogComponentEnable("UdpEchoClientApplication", LOG_LEVEL_INFO);
LogComponentEnable("UdpEchoServerApplication", LOG_LEVEL_INFO);
}

Just as in all of the previous examples, the next step is to create two nodes that we will connect via the point-to-point link.

NodeContainer p2pNodes;

p2pNodes.Create(2);

Next, we see an old friend. We instantiate a PointToPointHelper and set the associated default Attributes so that we create a five megabit per second transmitter on devices created using the helper and a two millisecond delay on channels created by the helper. We then Install the devices on the nodes and the channel between them.

PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute("DataRate", StringValue("5Mbps"));
pointToPoint.SetChannelAttribute("Delay", StringValue("2ms"));
NetDeviceContainer p2pDevices;
p2pDevices = pointToPoint.Install(p2pNodes);

Next, we declare another NodeContainer to hold the nodes that will be part of the bus (CSMA) network.

NodeContainer csmaNodes;
csmaNodes.Add(p2pNodes.Get(1));
csmaNodes.Create(nCsma);

The next line of code Gets the first node (as in having an index of one) from the point-to-point node container and adds it to the container of nodes that will get CSMA devices. The node in question is going to end up with a point-to-point device and a CSMA device. We then create a number of "extra" nodes that compose the remainder of the CSMA network.

We then instantiate a CsmaHelper and set its Attributes as we did in the previous example. We create a NetDeviceContainer to keep track of the created CSMA net devices and then we Install CSMA devices on the selected nodes.

CsmaHelper csma;
csma.SetChannelAttribute("DataRate", StringValue("100Mbps"));
csma.SetChannelAttribute("Delay", TimeValue(NanoSeconds(6560)));
NetDeviceContainer csmaDevices;
csmaDevices = csma.Install(csmaNodes);

Next, we are going to create the nodes that will be part of the Wi-Fi network. We are going to create a number of "station" nodes as specified by the command line argument, and we are going to use the "leftmost" node of the point-to-point link as the node for the access point.

NodeContainer wifiStaNodes;
wifiStaNodes.Create(nWifi);
NodeContainer wifiApNode = p2pNodes.Get(0);

The next bit of code constructs the wifi devices and the interconnection channel between these wifi nodes. First, we configure the PHY and channel helpers:

YansWifiChannelHelper channel = YansWifiChannelHelper::Default();
YansWifiPhyHelper phy;

For simplicity, this code uses the default PHY layer configuration and channel models, which are documented in the API Doxygen documentation for the YansWifiChannelHelper::Default method and the YansWifiPhyHelper class. Once these objects are created, we create a channel object and associate it to our PHY layer object manager to make sure that all the PHY layer objects created by the YansWifiPhyHelper share the same underlying channel, that is, they share the same wireless medium and can communicate and interfere:

phy.SetChannel(channel.Create());

Once the PHY helper is configured, we can focus on the MAC layer. The WifiMacHelper object is used to set MAC parameters. The second statement below creates an 802.11 service set identifier (SSID) object that will be used to set the value of the "Ssid" Attribute of the MAC layer implementation.

WifiMacHelper mac;

Ssid ssid = Ssid("ns-3-ssid");

WifiHelper will, by default, configure the standard in use to be 802.11ax (known commercially as Wi-Fi 6) and configure a compatible rate control algorithm (IdealWifiManager).

WifiHelper wifi;

We are now ready to install Wi-Fi models on the nodes, using these four helper objects (YansWifiChannelHelper, YansWifiPhyHelper, WifiMacHelper, WifiHelper) and the Ssid object created above. These helpers have encapsulated a lot of default configuration, and can be further tailored using additional attribute configuration if desired. We will also create NetDevice containers to store pointers to the WifiNetDevice objects that the helpers create.

mac.SetType("ns3::StaWifiMac",
            "Ssid", SsidValue(ssid),
            "ActiveProbing", BooleanValue(false));

In the above code, the specific kind of MAC layer that will be created by the helper is specified by the TypeId value of ns3::StaWifiMac type. The "QosSupported" attribute is set to true by default for WifiMacHelper objects when the standard is at least 802.11n or newer. The combination of these two configurations means that the MAC instance next created will be a QoS-aware, non-AP station (STA) in an infrastructure BSS (i.e., a BSS with an AP). Finally, the "ActiveProbing" Attribute is set to false. This means that probe requests will not be sent by MACs created by this helper, and stations will listen for AP beacons.

Once all the station-specific parameters are fully configured, both at the MAC and PHY layers, we can invoke our now-familiar Install method to create the Wi-Fi devices of these stations:

NetDeviceContainer staDevices;

staDevices = wifi.Install(phy, mac, wifiStaNodes);

We have configured Wi-Fi for all of our STA nodes, and now we need to configure the AP (access point) node. We begin this process by changing the default Attributes of the WifiMacHelper to reflect the requirements of the AP.

mac.SetType("ns3::ApWifiMac",
"Ssid", SsidValue(ssid));

In this case, the WifiMacHelper is going to create MAC layers of the "ns3::ApWifiMac", the latter specifying that a MAC instance configured as an AP should be created.

The next lines create the single AP which shares the same set of PHY-level Attributes (and channel) as the stations:

NetDeviceContainer apDevices;

apDevices = wifi.Install(phy, mac, wifiApNode);

Now, we are going to add mobility models. We want the STA nodes to be mobile, wandering around inside a bounding box, and we want to make the AP node stationary. We use the MobilityHelper to make this easy for us. First, we instantiate a MobilityHelper object and set some Attributes controlling the "position allocator" functionality.

MobilityHelper mobility;

mobility.SetPositionAllocator("ns3::GridPositionAllocator",
                              "MinX", DoubleValue(0.0),
                              "MinY", DoubleValue(0.0),
                              "DeltaX", DoubleValue(5.0),
                              "DeltaY", DoubleValue(10.0),
                              "GridWidth", UintegerValue(3),
                              "LayoutType", StringValue("RowFirst"));

This code tells the mobility helper to use a two-dimensional grid to initially place the STA nodes. Feel free to explore the Doxygen for class ns3::GridPositionAllocator to see exactly what is being done.

We have arranged our nodes on an initial grid, but now we need to tell them how to move. We choose the RandomWalk2dMobilityModel which has the nodes move in a random direction at a random speed around inside a bounding box.

mobility.SetMobilityModel("ns3::RandomWalk2dMobilityModel",
                          "Bounds", RectangleValue(Rectangle(-50, 50, -50, 50)));

We now tell the MobilityHelper to install the mobility models on the STA nodes.

mobility.Install(wifiStaNodes);

We want the access point to remain in a fixed position during the simulation. We accomplish this by setting the mobility model for this node to be the ns3::ConstantPositionMobilityModel:

mobility.SetMobilityModel("ns3::ConstantPositionMobilityModel");

mobility.Install(wifiApNode);

We now have our nodes, devices and channels created, and mobility models chosen for the Wi-Fi nodes, but we have no protocol stacks present. Just as we have done previously many times, we will use the InternetStackHelper to install these stacks.

InternetStackHelper stack;
stack.Install(csmaNodes);
stack.Install(wifiApNode);
stack.Install(wifiStaNodes);

Just as in the second.cc example script, we are going to use the Ipv4AddressHelper to assign IP addresses to our device interfaces. First we use the network 10.1.1.0 to create the two addresses needed for our two point-to-point devices. Then we use network 10.1.2.0 to assign addresses to the CSMA network and then we assign addresses from network 10.1.3.0 to both the STA devices and the AP on the wireless network.

Ipv4AddressHelper address;

address.SetBase("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer p2pInterfaces;
p2pInterfaces = address.Assign(p2pDevices);

address.SetBase("10.1.2.0", "255.255.255.0");
Ipv4InterfaceContainer csmaInterfaces;
csmaInterfaces = address.Assign(csmaDevices);

address.SetBase("10.1.3.0", "255.255.255.0");
address.Assign(staDevices);
address.Assign(apDevices);

We put the echo server on the "rightmost" node in the illustration at the start of the file. We have done this before.

UdpEchoServerHelper echoServer(9);
ApplicationContainer serverApps = echoServer.Install(csmaNodes.Get(nCsma));
serverApps.Start(Seconds(1.0));
serverApps.Stop(Seconds(10.0));

And we put the echo client on the last STA node we created, pointing it to the server on the CSMA network. We have also seen similar operations before.

UdpEchoClientHelper echoClient(csmaInterfaces.GetAddress(nCsma), 9);
echoClient.SetAttribute("MaxPackets", UintegerValue(1));
echoClient.SetAttribute("Interval", TimeValue(Seconds(1.0)));
echoClient.SetAttribute("PacketSize", UintegerValue(1024));
ApplicationContainer clientApps =
echoClient.Install(wifiStaNodes.Get(nWifi - 1));
clientApps.Start(Seconds(2.0));
clientApps.Stop(Seconds(10.0));

Since we have built an internetwork here, we need to enable internetwork routing just as we did in the second.cc example script.

Ipv4GlobalRoutingHelper::PopulateRoutingTables();

One thing that can surprise some users is the fact that the simulation we just created will never "naturally" stop. This is because we asked the wireless access point to generate beacons. It will generate beacons forever, and this will result in simulator events being scheduled into the future indefinitely, so we must tell the simulator to stop even though it
may have beacon generation events scheduled. The following line of code tells the simulator to stop so that we don't simulate beacons forever and enter what is essentially an endless loop.

Simulator::Stop(Seconds(10.0));

We create just enough tracing to cover all three networks:

pointToPoint.EnablePcapAll("third");

phy.EnablePcap("third", apDevices.Get(0));

csma.EnablePcap("third", csmaDevices.Get(0), true);

These three lines of code will start pcap tracing on both of the point-to-point nodes that serves as our backbone, will start a promiscuous (monitor) mode trace on the Wi-Fi network, and will start a promiscuous trace on the CSMA network. This will let us see all of the traffic with a minimum number of trace files.

Finally, we actually run the simulation, clean up and then exit the program.

Simulator::Run();
Simulator::Destroy();
return 0;
}

In order to run this example, you have to copy the third.cc example script into the scratch directory and use CMake to build just as you did with the second.cc example. If you are in the top-level directory of the repository, type the following:

\$ cp examples/tutorial/third.cc scratch/mythird.cc

$ ./ns3 run 'scratch/mythird --tracing=1'

Again, since we have set up the UDP echo applications just as we did in the second.cc script, you will see similar output.

At time +2s client sent 1024 bytes to 10.1.2.4 port 9
At time +2.01624s server received 1024 bytes from 10.1.3.3 port 49153
At time +2.01624s server sent 1024 bytes to 10.1.3.3 port 49153
At time +2.02849s client received 1024 bytes from 10.1.2.4 port 9

Recall that the first message, "Sent 1024 bytes to 10.1.2.4," is the UDP echo client sending a packet to the server. In this case, the client is on the wireless network (10.1.3.0). The second message, "Received 1024 bytes from 10.1.3.3," is from the UDP echo server, generated when it receives the echo packet. The final message, "Received 1024 bytes from 10.1.2.4," is from the echo client, indicating that it has received its echo back from the server.

If you now look in the top level directory, and you enabled tracing at the command-line as suggested above, you will find four trace files from this simulation, two from node zero and two from node one:

third-0-0.pcap third-0-1.pcap third-1-0.pcap third-1-1.pcap

The file "third-0-0.pcap" corresponds to the point-to-point device on node zero - the left side of the "backbone". The file "third-1-0.pcap" corresponds to the point-to-point device on node one - the right side of the "backbone". The file "third-0-1.pcap" will be the promiscuous (monitor mode) trace from the Wi-Fi network and the file "third-1-1.pcap" will be the promiscuous trace from the CSMA network. Can you verify this by inspecting the code?

Since the echo client is on the Wi-Fi network, let's start there. Let's take a look at the promiscuous (monitor mode) trace we captured on that network.

\$ tcpdump -nn -tt -r third-0-1.pcap

You should see some wifi-looking contents you haven't seen here before:

reading from file third-0-1.pcap, link-type IEEE802_11_RADIO (802.11 plus radiotap header)
0.033119 33119us tsft 6.0 Mb/s 5210 MHz 11a Beacon (ns-3-ssid) [6.0* 9.0 12.0* 18.0 24.0* 36.0 48.0 54.0 Mbit] ESS
0.120504 120504us tsft 6.0 Mb/s 5210 MHz 11a -62dBm signal -94dBm noise Assoc Request (ns-3-ssid) [6.0 9.0 12.0 18.0 24.0 36.0 48.0 54.0 Mbit]
0.120520 120520us tsft 6.0 Mb/s 5210 MHz 11a Acknowledgment RA:00:00:00:00:00:08
0.120632 120632us tsft 6.0 Mb/s 5210 MHz 11a -62dBm signal -94dBm noise CF-End RA:ff:ff:ff:ff:ff:ff
0.120666 120666us tsft 6.0 Mb/s 5210 MHz 11a Assoc Response AID(1) :: Successful

You can see that the link type is now 802.11 as you would expect. You can probably understand what is going on and find the IP echo request and response packets in this trace. We leave it as an exercise to completely parse the trace dump.

Now, look at the pcap file of the left side of the point-to-point link,

\$ tcpdump -nn -tt -r third-0-0.pcap

Again, you should see some familiar looking contents:

reading from file third-0-0.pcap, link-type PPP (PPP)
2.006440 IP 10.1.3.3.49153 > 10.1.2.4.9: UDP, length 1024
2.025048 IP 10.1.2.4.9 > 10.1.3.3.49153: UDP, length 1024

This is the echo packet going from left to right (from Wi-Fi to CSMA) and back again across the point-to-point link.

Now, look at the pcap file of the right side of the point-to-point link,

\$ tcpdump -nn -tt -r third-1-0.pcap

Again, you should see some familiar looking contents:

reading from file third-1-0.pcap, link-type PPP (PPP)

2.010126 IP 10.1.3.3.49153 > 10.1.2.4.9: UDP, length 1024

2.021361 IP 10.1.2.4.9 > 10.1.3.3.49153: UDP, length 1024

This is also the echo packet going from left to right (from Wi-Fi to CSMA) and back again across the point-to-point link with slightly different timings as you might expect.

The echo server is on the CSMA network, let's look at the promiscuous trace there:

\$ tcpdump -nn -tt -r third-1-1.pcap

You should see some familiar looking contents:

reading from file third-1-1.pcap, link-type EN10MB (Ethernet)
2.016126 ARP, Request who-has 10.1.2.4 (ff:ff:ff:ff:ff:ff) tell 10.1.2.1, length 50
2.016151 ARP, Reply 10.1.2.4 is-at 00:00:00:00:00:06, length 50
2.016151 IP 10.1.3.3.49153 > 10.1.2.4.9: UDP, length 1024
2.021255 ARP, Request who-has 10.1.2.1 (ff:ff:ff:ff:ff:ff) tell 10.1.2.4, length 50
2.021255 ARP, Reply 10.1.2.1 is-at 00:00:00:00:00:03, length 50
2.021361 IP 10.1.2.4.9 > 10.1.3.3.49153: UDP, length 1024

This should be easily understood. If you've forgotten, go back and look at the discussion in second.cc. This is the same sequence.

Now, we spent a lot of time setting up mobility models for the wireless network, so it would be a shame to finish up without showing that the STA nodes actually move around during the simulation. Let's do this by hooking into the MobilityModel "CourseChange" trace source. This is just a sneak peek into the detailed tracing section which is coming up, but this seems a very nice place to get an example in.

As mentioned in the "Tweaking ns-3" section, the ns-3 tracing system is divided into trace sources and trace sinks, and we provide functions to connect the two. We will use the mobility model's predefined "CourseChange" trace source to originate the trace events. We will need to write a trace sink to connect to that source that will display some pretty information for us. Despite its reputation as being difficult, it's really quite simple. Just before the main program of the scratch/mythird.cc script (i.e., just after the NS_LOG_COMPONENT_DEFINE statement), add the following function:

void
CourseChange(std::string context, Ptr<const MobilityModel> model)
{
    Vector position = model->GetPosition();
    NS_LOG_UNCOND(context <<
                  " x = " << position.x << ", y = " << position.y);
}

This code just pulls the position information from the mobility model and unconditionally logs the x and y position of the node. We are going to arrange for this function to be called every time the wireless node with the echo client changes its position. We do this using the Config::Connect function. Add the following lines of code to the script just before the Simulator::Run call.

std::ostringstream oss;
oss << "/NodeList/" << wifiStaNodes.Get(nWifi - 1)->GetId()
<< "/$ns3::MobilityModel/CourseChange";
Config::Connect(oss.str(), MakeCallback(&CourseChange));

What we do here is create a string containing the tracing namespace path of the event to which we want to connect. First, we have to figure out which node we want using the GetId method as described earlier. In the case of the default number of CSMA and wireless nodes, this turns out to be node seven, and the tracing namespace path to the mobility model would look like,

/NodeList/7/$ns3::MobilityModel/CourseChange

Based on the discussion in the tracing section, you may infer that this trace path references the seventh node in the global NodeList. It specifies what is called an aggregated object of type ns3::MobilityModel. The dollar sign prefix implies that the MobilityModel is aggregated to node seven. The last component of the path means that we are hooking into the "CourseChange" event of that model.

We make a connection between the trace source in node seven and our trace sink by calling Config::Connect and passing this namespace path. Once this is done, every course change event on node seven will be hooked into our trace sink, which will in turn print out the new position.

If you now run the simulation, you will see the course changes displayed as they happen.

$ ./ns3 build
$ ./ns3 run scratch/mythird
/NodeList/7/$ns3::MobilityModel/CourseChange x = 9.36083, y = -0.769065
/NodeList/7/$ns3::MobilityModel/CourseChange x = 9.42533, y = 1.17601
/NodeList/7/$ns3::MobilityModel/CourseChange x = 7.79244, y = 1.55559
/NodeList/7/$ns3::MobilityModel/CourseChange x = 8.72774, y = 2.06461
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10.523, y = 2.77665
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10.143, y = 2.93301
/NodeList/7/$ns3::MobilityModel/CourseChange x = 11.2152, y = 1.73647
/NodeList/7/$ns3::MobilityModel/CourseChange x = 10.4491, y = 0.971199
/NodeList/7/$ns3::MobilityModel/CourseChange x = 9.11607, y = 2.32513
/NodeList/7/$ns3::MobilityModel/CourseChange x = 8.79149, y = 1.05934
/NodeList/7/$ns3::MobilityModel/CourseChange x = 9.83369, y = -0.631617
/NodeList/7/$ns3::MobilityModel/CourseChange x = 8.32714, y = 0.665266
/NodeList/7/$ns3::MobilityModel/CourseChange x = 7.40394, y = -0.837367
/NodeList/7/$ns3::MobilityModel/CourseChange x = 7.62062, y = -2.49388
/NodeList/7/$ns3::MobilityModel/CourseChange x = 7.99793, y = -1.56779

\subsection*{7.4 Queues in ns-3}

The selection of queueing disciplines in ns-3 can have a large impact on performance, and it is important for users to understand what is installed by default, how to change the defaults, and how to observe the performance.

Architecturally, ns-3 separates the device layer from the IP or traffic control layers of an Internet host. In recent releases of ns-3, outgoing packets traverse two queueing layers before reaching the channel object. The first queueing layer encountered is what is called the 'traffic control layer' in ns-3; here, active queue management (RFC 7567) and prioritization due to quality-of-service (QoS) take place in a device-independent manner through the use of queueing disciplines. The second queueing layer is typically found in the NetDevice objects. Different devices (e.g. LTE, Wi-Fi) have different implementations of these queues. This two-layer approach mirrors what is found in practice (software queues providing prioritization, and hardware queues specific to a link type). In practice, it may be even more complex than this. For instance, address resolution protocols have a small queue, and Wi-Fi in Linux has four layers of queueing (https://lwn.net/Articles/705884/).

The traffic control layer is effective only if it is notified by the NetDevice when the device queue is full, so that the traffic control layer can stop sending packets to the NetDevice. Otherwise, the backlog of the queueing disciplines is always null and they are ineffective. Currently, flow control, i.e., the ability to notify the traffic control layer, is supported by the following NetDevices, which use Queue objects (or objects of Queue subclasses) to store their packets:
- Point-To-Point
- Csma
- Wi-Fi
- SimpleNetDevice
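The flow-control idea above can be sketched in plain C++. This is a toy illustration of the mechanism only, not ns-3 code; the class and function names below are invented for the example:

```cpp
#include <cstddef>
#include <queue>

// Toy sketch of flow control between the traffic-control layer and a device
// queue: when the device queue reports "full", the packet stays in the queue
// disc backlog instead of being dropped, so the queue disc can actually work.
class ToyDeviceQueue
{
  public:
    explicit ToyDeviceQueue(std::size_t capacity)
        : m_capacity(capacity)
    {
    }

    // Returns false when full; the caller (the "queue disc") must hold the packet.
    bool Enqueue(int packet)
    {
        if (m_queue.size() >= m_capacity)
        {
            return false;
        }
        m_queue.push(packet);
        return true;
    }

    bool IsFull() const { return m_queue.size() >= m_capacity; }

  private:
    std::size_t m_capacity;
    std::queue<int> m_queue;
};

// The traffic-control layer only builds a backlog because the device
// signals "full"; without that notification the backlog would stay empty.
std::size_t
SendBurst(ToyDeviceQueue& device, std::queue<int>& qdiscBacklog, int nPackets)
{
    for (int pkt = 0; pkt < nPackets; ++pkt)
    {
        if (!device.Enqueue(pkt))
        {
            qdiscBacklog.push(pkt);  // flow control: hold it at the traffic-control layer
        }
    }
    return qdiscBacklog.size();
}
```

With a device queue of capacity 2 and a burst of 5 packets, 3 packets remain in the queue disc backlog, which is exactly the backlog a real queue disc needs in order to schedule or drop intelligently.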

The performance of queueing disciplines is highly impacted by the size of the queues used by the NetDevices. Currently, queues in ns-3 are not autotuned by default for the configured link properties (bandwidth, delay), and are typically the simplest variants (e.g. FIFO scheduling with drop-tail behavior). However, the size of the device queues can be dynamically adjusted by enabling BQL (Byte Queue Limits), the algorithm implemented in the Linux kernel to adjust the size of the device queues to fight bufferbloat while avoiding starvation. Currently, BQL is supported by the NetDevices that support flow control. An analysis of the impact of the size of the device queues on the effectiveness of the queueing disciplines, conducted by means of ns-3 simulations and real experiments, is reported in:

P. Imputato and S. Avallone. An analysis of the impact of network device buffers on packet schedulers through experiments and simulations. Simulation Modelling Practice and Theory, 80(Supplement C):1-18, January 2018. DOI: 10.1016/j.simpat.2017.09.008

\subsection*{7.4.1 Available queueing models in ns-3}

At the traffic-control layer, these are the options:
- PFifoFastQueueDisc: The default maximum size is 1000 packets
- FifoQueueDisc: The default maximum size is 1000 packets
- RedQueueDisc: The default maximum size is 25 packets
- CoDelQueueDisc: The default maximum size is 1500 kilobytes
- FqCoDelQueueDisc: The default maximum size is 10240 packets
- PieQueueDisc: The default maximum size is 25 packets
- MqQueueDisc: This queue disc has no limits on its capacity
- TbfQueueDisc: The default maximum size is 1000 packets

By default, a pfifo_fast queueing discipline is installed on a NetDevice when an IPv4 or IPv6 address is assigned to an interface associated with the NetDevice, unless a queueing discipline has been already installed on the NetDevice.

At the device layer, there are device specific queues:
- PointToPointNetDevice: The default configuration (as set by the helper) is to install a DropTail queue of default size (100 packets)
- CsmaNetDevice: The default configuration (as set by the helper) is to install a DropTail queue of default size (100 packets)
- WiFiNetDevice: The default configuration is to install a DropTail queue of default size (100 packets) for nonQoS stations and four DropTail queues of default size (100 packets) for QoS stations
- SimpleNetDevice: The default configuration is to install a DropTail queue of default size (100 packets)
- LteNetDevice: Queueing occurs at the RLC layer (RLC UM default buffer is 10 * 1024 bytes, RLC AM does not have a buffer limit).
- UanNetDevice: There is a default 10 packet queue at the MAC layer

\subsection*{7.4.2 Changing from the defaults}
- The type of queue used by a NetDevice can be usually modified through the device helper:

NodeContainer nodes;
nodes.Create(2);
PointToPointHelper p2p;
p2p.SetQueue("ns3::DropTailQueue", "MaxSize", StringValue("50p"));
NetDeviceContainer devices = p2p.Install(nodes);
- The type of queue disc installed on a NetDevice can be modified through the traffic control helper:

InternetStackHelper stack;
stack.Install(nodes);
TrafficControlHelper tch;
tch.SetRootQueueDisc("ns3::CoDelQueueDisc", "MaxSize", StringValue("1000p"));
tch.Install(devices);
- BQL can be enabled on a device that supports it through the traffic control helper:

InternetStackHelper stack;
stack.Install(nodes);
TrafficControlHelper tch;
tch.SetRootQueueDisc("ns3::CoDelQueueDisc", "MaxSize", StringValue("1000p"));
tch.SetQueueLimits("ns3::DynamicQueueLimits", "HoldTime", StringValue("4ms"));
tch.Install(devices);
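Beyond the helpers shown above, defaults can also be changed globally through the attribute system with Config::SetDefault, before devices and queue discs are created. The following is a sketch; the attribute paths used here match current ns-3 releases to the best of our knowledge, but you should verify them against your version (e.g. with ./ns3 run "scratch/myscript --PrintAttributes=ns3::PfifoFastQueueDisc"):

```cpp
#include "ns3/config.h"
#include "ns3/queue-size.h"

using namespace ns3;

// Globally change queue defaults before any devices or queue discs exist.
// Attribute paths are illustrative; check them against your ns-3 version.
void
ConfigureQueueDefaults()
{
    // Device layer: shrink the default DropTail device queue to 25 packets.
    Config::SetDefault("ns3::DropTailQueue<Packet>::MaxSize",
                       QueueSizeValue(QueueSize("25p")));
    // Traffic-control layer: cap the default pfifo_fast disc at 500 packets.
    Config::SetDefault("ns3::PfifoFastQueueDisc::MaxSize",
                       QueueSizeValue(QueueSize("500p")));
}
```

Unlike the helper-based approach, Config::SetDefault changes the default for every instance created afterwards, which is convenient when a topology helper does not expose the queue you want to tune.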

\section*{TRACING}

\subsection*{8.1 Background}

As mentioned in Using the Tracing System, the whole point of running an ns-3 simulation is to generate output for study. You have two basic strategies to obtain output from ns-3: using generic pre-defined bulk output mechanisms and parsing their content to extract interesting information; or somehow developing an output mechanism that conveys exactly (and perhaps only) the information wanted.

Using pre-defined bulk output mechanisms has the advantage of not requiring any changes to ns-3, but it may require writing scripts to parse and filter for data of interest. Often, PCAP or NS_LOG output messages are gathered during simulation runs and separately run through scripts that use grep, sed or awk to parse the messages and reduce and transform the data to a manageable form. Programs must be written to do the transformation, so this does not come for free. NS_LOG output is not considered part of the ns-3 API, and can change without warning between releases. In addition, NS_LOG output is only available in debug builds, so relying on it imposes a performance penalty. Of course, if the information of interest does not exist in any of the pre-defined output mechanisms, this approach fails.

If you need to add some tidbit of information to the pre-defined bulk mechanisms, this can certainly be done; and if you use one of the ns-3 mechanisms, you may get your code added as a contribution.

ns-3 provides another mechanism, called Tracing, that avoids some of the problems inherent in the bulk output mechanisms. It has several important advantages. First, you can reduce the amount of data you have to manage by only tracing the events of interest to you (for large simulations, dumping everything to disk for post-processing can create I/O bottlenecks). Second, if you use this method, you can control the format of the output directly so you avoid the post-processing step with sed, awk, perl or python scripts. If you desire, your output can be formatted directly into a form acceptable by gnuplot, for example (see also GnuplotHelper). You can add hooks in the core which can then be accessed by other users, but which will produce no information unless explicitly asked to do so. For these reasons, we believe that the ns-3 tracing system is the best way to get information out of a simulation and is also therefore one of the most important mechanisms to understand in ns-3.

\subsection*{8.1.1 Blunt Instruments}

There are many ways to get information out of a program. The most straightforward way is to just print the information directly to the standard output, as in:

#include <iostream>
...
void
SomeFunction()
{
    uint32_t x = SOME_INTERESTING_VALUE;
    ...
    std::cout << "The value of x is " << x << std::endl;
    ...
}

Nobody is going to prevent you from going deep into the core of ns-3 and adding print statements. This is insanely easy to do and, after all, you have complete control of your own ns-3 branch. This will probably not turn out to be very satisfactory in the long term, though.

As the number of print statements increases in your programs, the task of dealing with the large number of outputs will become more and more complicated. Eventually, you may feel the need to control what information is being printed in some way, perhaps by turning on and off certain categories of prints, or increasing or decreasing the amount of information you want. If you continue down this path you may discover that you have re-implemented the NS_LOG mechanism (see Using the Logging Module). In order to avoid that, one of the first things you might consider is using NS_LOG itself.

We mentioned above that one way to get information out of ns-3 is to parse existing NS_LOG output for interesting information. If you discover that some tidbit of information you need is not present in existing log output, you could edit the core of ns-3 and simply add your interesting information to the output stream. Now, this is certainly better than adding your own print statements since it follows ns-3 coding conventions and could potentially be useful to other people as a patch to the existing core.

Let's pick a random example. If you wanted to add more logging to the ns-3 TCP socket (tcp-socket-base.cc) you could just add a new message down in the implementation. Notice that in TcpSocketBase::ProcessEstablished() there is no log message for the reception of a SYN+ACK in ESTABLISHED state. You could simply add one, changing the code. Here is the original:

/* Received a packet upon ESTABLISHED state. This function is mimicking the
   role of tcp_rcv_established() in tcp_input.c in Linux kernel. */
void
TcpSocketBase::ProcessEstablished(Ptr<Packet> packet, const TcpHeader& tcpHeader)
{
    NS_LOG_FUNCTION(this << tcpHeader);
    ...
    else if (tcpflags == (TcpHeader::SYN | TcpHeader::ACK))
    { // No action for received SYN+ACK, it is probably a duplicated packet
    }
    ...

To log the SYN+ACK case, you can add a new NS_LOG_LOGIC in the if statement body:

/* Received a packet upon ESTABLISHED state. This function is mimicking the
   role of tcp_rcv_established() in tcp_input.c in Linux kernel. */
void
TcpSocketBase::ProcessEstablished(Ptr<Packet> packet, const TcpHeader& tcpHeader)
{
    NS_LOG_FUNCTION(this << tcpHeader);
    ...
    else if (tcpflags == (TcpHeader::SYN | TcpHeader::ACK))
    { // No action for received SYN+ACK, it is probably a duplicated packet
        NS_LOG_LOGIC("TcpSocketBase " << this << " ignoring SYN+ACK");
    }
    ...

This may seem fairly simple and satisfying at first glance, but something to consider is that you will be writing code to add NS_LOG statements and you will also have to write code (as in grep, sed or awk scripts) to parse the log output in order to isolate your information. This is because even though you have some control over what is output by the logging system, you only have control down to the log component level, which is typically an entire source code file.

If you are adding code to an existing module, you will also have to live with the output that every other developer has found interesting. You may find that in order to get the small amount of information you need, you may have to wade through huge amounts of extraneous messages that are of no interest to you. You may be forced to save huge log files to disk and process them down to a few lines whenever you want to do anything.

Since there are no guarantees in ns-3 about the stability of NS_LOG output, you may also discover that pieces of log output which you depend on disappear or change between releases. If you depend on the structure of the output, you may find other messages being added or deleted which may affect your parsing code.

Finally, NS_LOG output is only available in debug builds; you can't get log output from optimized builds, which run about twice as fast. Relying on NS_LOG imposes a performance penalty.

For these reasons, we consider prints to std::cout and NS_LOG messages to be quick and dirty ways to get more information out of ns-3, but not suitable for serious work.

It is desirable to have a stable facility using stable APIs that allow one to reach into the core system and only get the information required. It is desirable to be able to do this without having to change and recompile the core system. Even better would be a system that notified user code when an item of interest changed or an interesting event happened so the user doesn't have to actively poke around in the system looking for things.

The ns-3 tracing system is designed to work along those lines and is well-integrated with the Attribute and Config subsystems, allowing for relatively simple use scenarios.

\subsection*{8.2 Overview}

The ns-3 tracing system is built on the concepts of independent tracing sources and tracing sinks, along with a uniform mechanism for connecting sources to sinks.

Trace sources are entities that can signal events that happen in a simulation and provide access to interesting underlying data. For example, a trace source could indicate when a packet is received by a net device and provide access to the packet contents for interested trace sinks. A trace source might also indicate when an interesting state change happens in a model. For example, the congestion window of a TCP model is a prime candidate for a trace source. Every time the congestion window changes, connected trace sinks are notified with the old and new value.

Trace sources are not useful by themselves; they must be connected to other pieces of code that actually do something useful with the information provided by the source. The entities that consume trace information are called trace sinks. Trace sources are generators of data and trace sinks are consumers. This explicit division allows for large numbers of trace sources to be scattered around the system in places which model authors believe might be useful. Inserting trace sources introduces a very small execution overhead.

There can be zero or more consumers of trace events generated by a trace source. One can think of a trace source as a kind of point-to-multipoint information link. Your code looking for trace events from a particular piece of core code could happily coexist with other code doing something entirely different from the same information.

Unless a user connects a trace sink to one of these sources, nothing is output. By using the tracing system, both you and other people hooked to the same trace source are getting exactly what they want and only what they want out of the system. Neither of you are impacting any other user by changing what information is output by the system. If you happen to add a trace source, your work as a good open-source citizen may allow other users to provide new utilities that are perhaps very useful overall, without making any changes to the ns-3 core.

\subsection*{8.2.1 Simple Example}

Let's take a few minutes and walk through a simple tracing example. We are going to need a little background on Callbacks to understand what is happening in the example, so we have to take a small detour right away.

\section*{Callbacks}

The goal of the Callback system in ns-3 is to allow one piece of code to call a function (or method in C++) without any specific inter-module dependency. This ultimately means you need some kind of indirection - you treat the address of the called function as a variable. This variable is called a pointer-to-function variable. The relationship between function and pointer-to-function is really no different than that of object and pointer-to-object.

In C the canonical example of a pointer-to-function is a pointer-to-function-returning-integer (PFI). For a PFI taking one int parameter, this could be declared like,

int (*pfi)(int arg) = 0;

(But read the C++-FAQ Section 33 before writing code like this!) What you get from this is a variable named simply pfi that is initialized to the value 0. If you want to initialize this pointer to something meaningful, you need to have a function with a matching signature. In this case, you could provide a function that looks like:

int MyFunction(int arg) {}

If you have this target, you can initialize the variable to point to your function:

pfi = MyFunction;

You can then call MyFunction indirectly using the more suggestive form of the call:

int result = (*pfi)(1234);

This is suggestive since it looks like you are dereferencing the function pointer just like you would dereference any pointer. Typically, however, people take advantage of the fact that the compiler knows what is going on and will just use a shorter form:

int result = pfi(1234);

This looks like you are calling a function named pfi, but the compiler is smart enough to know to call through the variable pfi indirectly to the function MyFunction.
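Putting the fragments above together, a complete, compilable sketch might look like the following. The function body returning arg * 2 is our own addition, so that there is something observable to print:

```cpp
#include <iostream>

// A function with the matching "PFI" signature (int in, int out).
// The body is illustrative; the tutorial's fragment leaves it empty.
int MyFunction(int arg)
{
    return arg * 2;
}

// Exercise the pointer-to-function exactly as in the fragments above.
int CallThroughPfi()
{
    int (*pfi)(int arg) = 0;  // pointer-to-function variable, initially null
    pfi = MyFunction;         // point it at a function with a matching signature

    int a = (*pfi)(1234);     // explicit dereference form
    int b = pfi(1234);        // shorter, equivalent form
    std::cout << "a = " << a << ", b = " << b << std::endl;
    return a + b;
}
```

Both call forms invoke MyFunction(1234) and therefore yield the same value, 2468.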

Conceptually, this is almost exactly how the tracing system works. Basically, a trace sink is a callback. When a trace sink expresses interest in receiving trace events, it adds itself as a Callback to a list of Callbacks internally held by the trace source. When an interesting event happens, the trace source invokes its operator() providing zero or more arguments. The operator() eventually wanders down into the system and does something remarkably like the indirect call you just saw, providing zero or more parameters, just as the call to pfi above passed one parameter to the target function MyFunction.

The important difference that the tracing system adds is that for each trace source there is an internal list of Callbacks. Instead of just making one indirect call, a trace source may invoke multiple Callbacks. When a trace sink expresses interest in notifications from a trace source, it basically just arranges to add its own function to the callback list.
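The callback-list idea can be sketched in a few lines of plain C++. This is a toy model only; the names are invented for the example, and the real ns-3 machinery is the ns3::TracedCallback template:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Toy sketch of a trace source holding a list of callbacks: "hitting" the
// source fires every connected sink, point-to-multipoint style.
class ToyTraceSource
{
  public:
    using Sink = std::function<void(int oldValue, int newValue)>;

    // A trace sink "expresses interest" by adding itself to the list.
    void Connect(Sink sink) { m_sinks.push_back(std::move(sink)); }

    // Invoke every connected callback with the event's parameters.
    void Fire(int oldValue, int newValue)
    {
        for (const auto& sink : m_sinks)
        {
            sink(oldValue, newValue);
        }
    }

  private:
    std::vector<Sink> m_sinks;
};
```

Two independent sinks connected to the same ToyTraceSource each receive every Fire() event, which is exactly the zero-or-more-consumers behavior described above.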

If you are interested in more details about how this is actually arranged in ns-3, feel free to peruse the Callback section of the ns-3 Manual.

Walkthrough: fourth.cc

We have provided some code to implement what is really the simplest example of tracing that can be assembled. You can find this code in the tutorial directory as fourth.cc. Let's walk through it:

/*
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation;
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 */

#include "ns3/object.h"
#include "ns3/uinteger.h"
#include "ns3/traced-value.h"
#include "ns3/trace-source-accessor.h"

#include <iostream>

using namespace ns3;

Most of this code should be quite familiar to you. As mentioned above, the trace system makes heavy use of the Object and Attribute systems, so you will need to include them. The first two includes above bring in the declarations for those systems explicitly. You could use the core module header to get everything at once, but we do the includes explicitly here to illustrate how simple this all really is.

The file, traced-value.h brings in the required declarations for tracing of data that obeys value semantics. In general, value semantics just means that you can pass the object itself around, rather than passing the address of the object. What this all really means is that you will be able to trace all changes made to a TracedValue in a really simple way.

Since the tracing system is integrated with Attributes, and Attributes work with Objects, there must be an ns-3 Object for the trace source to live in. The next code snippet declares and defines a simple Object we can work with.

class MyObject : public Object
{
  public:
    static TypeId GetTypeId()
    {
        static TypeId tid = TypeId("MyObject")
                                .SetParent(Object::GetTypeId())
                                .SetGroupName("MyGroup")
                                .AddConstructor<MyObject>()
                                .AddTraceSource("MyInteger",
                                                "An integer value to trace.",
                                                MakeTraceSourceAccessor(&MyObject::m_myInt),
                                                "ns3::TracedValueCallback::Int32");
        return tid;
    }

    MyObject() {}
    TracedValue<int32_t> m_myInt;
};

The two important lines of code above, with respect to tracing, are the .AddTraceSource and the TracedValue<> declaration of m_myInt.

The .AddTraceSource provides the "hooks" used for connecting the trace source to the outside world through the Config system. The first argument is a name for this trace source, which makes it visible in the Config system. The second argument is a help string. Now look at the third argument, in fact focus on the argument of the third argument: &MyObject::m_myInt. This is the TracedValue which is being added to the class; it is always a class data member. (The final argument is the name of a typedef for the TracedValue type, as a string. This is used to generate documentation for the correct Callback function signature, which is useful especially for more general types of Callbacks.)

The TracedValue<> declaration provides the infrastructure that drives the callback process. Any time the underlying value is changed, the TracedValue mechanism will provide both the old and the new value of that variable, in this case an int32_t value. The trace sink function traceSink for this TracedValue will need the signature

void (*traceSink)(int32_t oldValue, int32_t newValue);

All trace sinks hooking this trace source must have this signature. We'll discuss below how you can determine the required callback signature in other cases.

Sure enough, continuing through fourth.cc we see:

void
IntTrace(int32_t oldValue, int32_t newValue)
{
    std::cout << "Traced " << oldValue << " to " << newValue << std::endl;
}

This is the definition of a matching trace sink. It corresponds directly to the callback function signature. Once it is connected, this function will be called whenever the TracedValue changes.

We have now seen the trace source and the trace sink. What remains is code to connect the source to the sink, which happens in main:

int
main(int argc, char *argv[])
{
    Ptr<MyObject> myObject = CreateObject<MyObject>();
    myObject->TraceConnectWithoutContext("MyInteger", MakeCallback(&IntTrace));
    myObject->m_myInt = 1234;
}

Here we first create the MyObject instance in which the trace source lives.

The next step, the TraceConnectWithoutContext, forms the connection between the trace source and the trace sink. The first argument is just the trace source name "MyInteger" we saw above. Notice the MakeCallback template function. This function does the magic required to create the underlying ns-3 Callback object and associate it with the function IntTrace. TraceConnect makes the association between your provided function and the overloaded operator() in the traced variable referred to by the "MyInteger" Attribute. After this association is made, the trace source will "fire" your provided callback function.

The code to make all of this happen is, of course, non-trivial, but the essence is that you are arranging for something that looks just like the pfi() example above to be called by the trace source. The declaration of the TracedValue<int32_t> m_myInt; in the Object itself performs the magic needed to provide the overloaded assignment operators that will use the operator() to actually invoke the Callback with the desired parameters. The .AddTraceSource performs the magic to connect the Callback to the Config system, and TraceConnectWithoutContext performs the magic to connect your function to the trace source, which is specified by Attribute name.

Let's ignore the bit about context for now.

Finally, the line assigning a value to m_myInt:

myObject->m_myInt = 1234;

should be interpreted as an invocation of operator= on the member variable m_myInt with the integer 1234 passed as a parameter.

Since m_myInt is a TracedValue, this operator is defined to execute a callback that returns void and takes two integer values as parameters - an old value and a new value for the integer in question. That is exactly the function signature for the callback function we provided - IntTrace.
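As a toy model of this mechanism in plain C++ (with invented names; this is not the real ns3::TracedValue template), an overloaded assignment operator that fires a sink with the old and new values might look like:

```cpp
#include <cstdint>
#include <functional>
#include <utility>

// Toy model of the TracedValue idea: assigning a new value runs a callback
// with the old and new values. A sketch of the mechanism only.
class ToyTracedValue
{
  public:
    using Sink = std::function<void(int32_t oldValue, int32_t newValue)>;

    void ConnectSink(Sink sink) { m_sink = std::move(sink); }

    // The overloaded assignment operator is where the "magic" happens:
    // it records the old value, stores the new one, then fires the sink.
    ToyTracedValue& operator=(int32_t newValue)
    {
        int32_t oldValue = m_value;
        m_value = newValue;
        if (m_sink)
        {
            m_sink(oldValue, newValue);
        }
        return *this;
    }

    int32_t Get() const { return m_value; }

  private:
    int32_t m_value = 0;
    Sink m_sink;
};
```

With a sink like the tutorial's IntTrace connected, the statement value = 1234; would report an old value of 0 and a new value of 1234, matching the fourth.cc output shown below.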

To summarize, a trace source is, in essence, a variable that holds a list of callbacks. A trace sink is a function used as the target of a callback. The Attribute and object type information systems are used to provide a way to connect trace sources to trace sinks. The act of "hitting" a trace source is executing an operator on the trace source, which fires the callbacks. This results in the trace sink callbacks that registered interest in the source being called with the parameters provided by the source.

If you now build and run this example,

$ ./ns3 run fourth

you will see the output from the IntTrace function execute as soon as the trace source is hit:

Traced 0 to 1234

When we executed the code myObject->m_myInt = 1234;, the trace source fired and automatically provided the before and after values to the trace sink. The function IntTrace then printed this to the standard output.

\subsection*{8.2.2 Connect with Config}

The TraceConnectWithoutContext call shown above in the simple example is actually very rarely used in the system. More typically, the Config subsystem is used to select a trace source in the system using what is called a Config path. We saw an example of this in the previous section where we hooked the "CourseChange" event when we were experimenting with third.cc.

Recall that we defined a trace sink to print course change information from the mobility models of our simulation. It should now be a lot more clear to you what this function is doing:

void
CourseChange(std::string context, Ptr<const MobilityModel> model)
{
    Vector position = model->GetPosition();

    NS_LOG_UNCOND(context <<
        " x = " << position.x << ", y = " << position.y);
}

When we connected the "CourseChange" trace source to the above trace sink, we used a Config path to specify the source when we arranged a connection between the pre-defined trace source and the new trace sink:

std::ostringstream oss;
oss << "/NodeList/"
<< wifiStaNodes.Get(nWifi - 1)->GetId()
<< "/$ns3::MobilityModel/CourseChange";
Config::Connect(oss.str(), MakeCallback(&CourseChange));

Let's try and make some sense of what is sometimes considered relatively mysterious code. For the purposes of discussion, assume that the Node number returned by GetId() is "7". In this case, the path above turns out to be

"/NodeList/7/\$ns3::MobilityModel/CourseChange"

The last segment of a config path must be an Attribute of an Object. In fact, if you had a pointer to the object that has the "CourseChange" Attribute handy, you could write this just like we did in the previous example. You know by now that we typically store pointers to our Nodes in a NodeContainer. In the third.cc example, the Nodes of interest are stored in the wifiStaNodes NodeContainer. In fact, while putting the path together, we used this container to get a Ptr<Node> which we used to call GetId(). We could have used this Ptr<Node> to call a Connect method directly:

Ptr<Object> theObject = wifiStaNodes.Get(nWifi - 1);
theObject->GetObject<MobilityModel>()->TraceConnectWithoutContext("CourseChange",
    MakeCallback(&CourseChange));

In the third.cc example, we actually wanted an additional "context" to be delivered along with the Callback parameters (which will be explained below) so we could actually use the following equivalent code:

Ptr<Object> theObject = wifiStaNodes.Get(nWifi - 1);
theObject->GetObject<MobilityModel>()->TraceConnect("CourseChange", "context",
    MakeCallback(&CourseChange));

It turns out that the internal code for Config::ConnectWithoutContext and Config::Connect actually finds a Ptr<Object> and calls the appropriate TraceConnect method at the lowest level.

The Config functions take a path that represents a chain of object pointers. Each segment of a path corresponds to an Object Attribute. The last segment is the Attribute of interest, and prior segments must be typed to contain or find Objects. The Config code parses and "walks" this path until it gets to the final segment of the path. It then interprets the last segment as an Attribute on the last Object it found while walking the path. The Config functions then call the appropriate TraceConnect or TraceConnectWithoutContext method on the final Object. Let's see what happens in a bit more detail when the above path is walked.

The leading "/" character in the path refers to a so-called namespace. One of the predefined namespaces in the config system is "NodeList", which is a list of all of the nodes in the simulation. Items in the list are referred to by indices into the list, so "/NodeList/7" refers to the eighth Node in the list of nodes created during the simulation (recall indices start at 0). This reference is actually a Ptr<Node> and so is a subclass of an ns3::Object.

As described in the Object Model section of the ns-3 Manual, we make widespread use of object aggregation. This allows us to form an association between different Objects without building a complicated inheritance tree or predeciding what objects will be part of a Node. Each Object in an Aggregation can be reached from the other Objects.

In our example the next path segment being walked begins with the "\$" character. This indicates to the config system that the segment is the name of an object type, so a GetObject call should be made looking for that type. It turns out that the MobilityHelper used in third.cc arranges to Aggregate, or associate, a mobility model to each of the wireless Nodes. When you add the "\$" you are asking for another Object that has presumably been previously aggregated. You can think of this as switching pointers from the original Ptr<Node> as specified by "/NodeList/7" to its associated mobility model — which is of type ns3::MobilityModel. If you are familiar with GetObject, we have asked the system to do the following:

Ptr<MobilityModel> mobilityModel = node->GetObject<MobilityModel>();

We are now at the last Object in the path, so we turn our attention to the Attributes of that Object. The MobilityModel class defines an Attribute called "CourseChange". You can see this by looking at the source code in src/mobility/model/mobility-model.cc and searching for "CourseChange" in your favorite editor. You should find

.AddTraceSource("CourseChange",
                "The value of the position and/or velocity vector changed",
                MakeTraceSourceAccessor(&MobilityModel::m_courseChangeTrace),
                "ns3::MobilityModel::CourseChangeCallback")

which should look very familiar at this point.

If you look for the corresponding declaration of the underlying traced variable in mobility-model.h you will find

TracedCallback<Ptr<const MobilityModel>> m_courseChangeTrace;

The type declaration TracedCallback identifies m_courseChangeTrace as a special list of Callbacks that can be hooked using the Config functions described above. The typedef for the callback function signature is also defined in the header file:

typedef void (* CourseChangeCallback)(Ptr<const MobilityModel> model);

The MobilityModel class is designed to be a base class providing a common interface for all of the specific subclasses. If you search down to the end of the file, you will see a method defined called NotifyCourseChange():

void
MobilityModel::NotifyCourseChange() const
{
m_courseChangeTrace(this);
}

Derived classes will call into this method whenever they do a course change to support tracing. This method invokes operator() on the underlying m_courseChangeTrace, which will, in turn, invoke all of the registered Callbacks, calling all of the trace sinks that have registered interest in the trace source by calling a Config function.

So, in the third.cc example we looked at, whenever a course change is made in one of the RandomWalk2dMobilityModel instances installed, there will be a NotifyCourseChange() call which calls up into the MobilityModel base class. As seen above, this invokes operator() on m_courseChangeTrace, which in turn calls any registered trace sinks. In the example, the only code registering an interest was the code that provided the Config path. Therefore, the CourseChange function that was hooked from Node number seven will be the only Callback called.

The final piece of the puzzle is the "context". Recall that we saw an output looking something like the following from third.cc:

/NodeList/7/$ns3::MobilityModel/CourseChange x = 7.27897, y = 2.22677

The first part of the output is the context. It is simply the path through which the config code located the trace source. In the case we have been looking at there can be any number of trace sources in the system corresponding to any number of nodes with mobility models. There needs to be some way to identify which trace source is actually the one that fired the Callback. The easy way is to connect with Config::Connect, instead of Config::ConnectWithoutContext.

\subsection*{8.2.3 Finding Sources}

The first question that inevitably comes up for new users of the Tracing system is, "Okay, I know that there must be trace sources in the simulation core, but how do I find out what trace sources are available to me?"

The second question is, "Okay, I found a trace source, how do I figure out the Config path to use when I connect to it?"

The third question is, "Okay, I found a trace source and the Config path, how do I figure out what the return type and formal arguments of my callback function need to be?"

The fourth question is, "Okay, I typed that all in and got this incredibly bizarre error message, what in the world does it mean?"

We'll address each of these in turn.

\subsection*{8.2.4 Available Sources}

Okay, I know that there must be trace sources in the simulation core, but how do I find out what trace sources are available to me?

The answer to the first question is found in the ns-3 API documentation. If you go to the project web site, ns-3 project, you will find a link to "Documentation" in the navigation bar. If you select this link, you will be taken to our documentation page. There is a link to "Latest Release" that will take you to the documentation for the latest stable release of ns-3. If you select the "API Documentation" link, you will be taken to the ns-3 API documentation page.

In the sidebar you should see a hierarchy that begins
- ns-3
- ns-3 Documentation
- All TraceSources
- All Attributes
- All GlobalValues

The list of interest to us here is "All TraceSources". Go ahead and select that link. You will see, perhaps not too surprisingly, a list of all of the trace sources available in ns-3.

As an example, scroll down to ns3::MobilityModel. You will find an entry for

CourseChange: The value of the position and/or velocity vector changed

You should recognize this as the trace source we used in the third.cc example. Perusing this list will be helpful.

\subsection*{8.2.5 Config Paths}

Okay, I found a trace source, how do I figure out the Config path to use when I connect to it?

If you know which object you are interested in, the "Detailed Description" section for the class will list all available trace sources. For example, starting from the list of "All TraceSources," click on the ns3::MobilityModel link, which will take you to the documentation for the MobilityModel class. Almost at the top of the page is a one line brief description of the class, ending in a link "More...". Click on this link to skip the API summary and go to the "Detailed Description" of the class. At the end of the description will be (up to) three lists:
- Config Paths: a list of typical Config paths for this class.
- Attributes: a list of all attributes supplied by this class.
- TraceSources: a list of all TraceSources available from this class.

First we'll discuss the Config paths.

Let's assume that you have just found the "CourseChange" trace source in the "All TraceSources" list and you want to figure out how to connect to it. You know that you are using (again, from the third.cc example) an ns3::RandomWalk2dMobilityModel. So either click on the class name in the "All TraceSources" list, or
find ns3::RandomWalk2dMobilityModel in the "Class List". Either way you should now be looking at the "ns3::RandomWalk2dMobilityModel Class Reference" page.

If you now scroll down to the "Detailed Description" section, after the summary list of class methods and attributes (or just click on the "More..." link at the end of the class brief description at the top of the page) you will see the overall documentation for the class. Continuing to scroll down, find the "Config Paths" list:

\section*{Config Paths}

ns3::RandomWalk2dMobilityModel is accessible through the following paths with Config::Set and Config::Connect:
- "/NodeList/[i]/\$ns3::MobilityModel/\$ns3::RandomWalk2dMobilityModel"

The documentation tells you how to get to the RandomWalk2dMobilityModel Object. Compare the string above with the string we actually used in the example code:

"/NodeList/7/\$ns3::MobilityModel"

The difference is due to the fact that two GetObject calls are implied in the string found in the documentation. The first, for \$ns3::MobilityModel, will query the aggregation for the base class. The second implied GetObject call, for \$ns3::RandomWalk2dMobilityModel, is used to cast the base class to the concrete implementation class. The documentation shows both of these operations for you. It turns out that the actual trace source you are looking for is found in the base class.

Look further down in the "Detailed Description" section for the list of trace sources. You will find

No TraceSources are defined for this type.

\section*{TraceSources defined in parent class "ns3::MobilityModel"}
- CourseChange: The value of the position and/or velocity vector changed.

Callback signature: ns3::MobilityModel::CourseChangeCallback

This is exactly what you need to know. The trace source of interest is found in ns3::MobilityModel (which you knew anyway). The interesting thing this bit of API Documentation tells you is that you don't need that extra cast in the config path above to get to the concrete class, since the trace source is actually in the base class. Therefore the additional GetObject is not required and you simply use the path:

"/NodeList/[i]/\$ns3::MobilityModel"

which perfectly matches the example path:

"/NodeList/7/\$ns3::MobilityModel"

As an aside, another way to find the Config path is to grep around in the ns-3 codebase for someone who has already figured it out. You should always try to copy someone else's working code before you start to write your own. Try something like:

\$ find . -name '*.cc' | xargs grep CourseChange | grep Connect

and you may find your answer along with working code. For example, in this case, src/mobility/examples/main-random-topology.cc has something just waiting for you to use:

Config::Connect("/NodeList/*/$ns3::MobilityModel/CourseChange",
                MakeCallback(&CourseChange));

We'll return to this example in a moment.

\subsection*{8.2.6 Callback Signatures}

Okay, I found a trace source and the Config path, how do I figure out what the return type and formal arguments of my callback function need to be?

The easiest way is to examine the callback signature typedef, which is given in the "Callback signature" of the trace source in the "Detailed Description" for the class, as shown above.

Repeating the "CourseChange" trace source entry from ns3::RandomWalk2dMobilityModel we have:
- CourseChange: The value of the position and/or velocity vector changed.

Callback signature: ns3::MobilityModel::CourseChangeCallback

The callback signature is given as a link to the relevant typedef, where we find

typedef void (* CourseChangeCallback)(std::string context, Ptr<const MobilityModel> model);

TracedCallback signature for course change notifications.

If the callback is connected using ConnectWithoutContext omit the context argument from the signature.

\section*{Parameters:}

[in] context The context string supplied by the Trace source.

[in] model The MobilityModel which is changing course.

As above, to see this in use grep around in the \(n s-3\) codebase for an example. The example above, from src/mobility/examples/main-random-topology.cc, connects the "CourseChange" trace source to the CourseChange function in the same file:

static void
CourseChange(std::string context, Ptr<const MobilityModel> model)
{
    ...

Notice that this function:
- Takes a "context" string argument, which we'll describe in a minute. (If the callback is connected using the ConnectWithoutContext function the context argument will be omitted.)
- Has the MobilityModel supplied as the last argument (or only argument if ConnectWithoutContext is used).
- Returns void.

If, by chance, the callback signature hasn't been documented, and there are no examples to work from, determining the right callback function signature can be, well, challenging to actually figure out from the source code.

Before embarking on a walkthrough of the code, I'll be kind and just tell you a simple way to figure this out: The return value of your callback will always be void. The formal parameter list for a TracedCallback can be found from the template parameter list in the declaration. Recall that for our current example, this is in mobility-model.h, where we have previously found:

TracedCallback<Ptr<const MobilityModel>> m_courseChangeTrace;

There is a one-to-one correspondence between the template parameter list in the declaration and the formal arguments of the callback function. Here, there is one template parameter, which is a Ptr<const MobilityModel>. This tells you that you need a function that returns void and takes a Ptr<const MobilityModel>. For example:

void
CourseChange(Ptr<const MobilityModel> model)
{
    ...

That's all you need if you want to Config::ConnectWithoutContext. If you want a context, you need to Config::Connect and use a Callback function that takes a string context followed by the template arguments:

void
CourseChange(std::string context, Ptr<const MobilityModel> model)
{
    ...
}

If you want to ensure that your CourseChangeCallback function is only visible in your local file, you can add the keyword static and come up with:

static void
CourseChange(std::string path, Ptr<const MobilityModel> model)
{
    ...
which is exactly what we used in the third.cc example.

\section*{Implementation}

This section is entirely optional. It is going to be a bumpy ride, especially for those unfamiliar with the details of templates. However, if you get through this, you will have a very good handle on a lot of the ns-3 low level idioms.

So, again, let's figure out what signature of callback function is required for the "CourseChange" trace source. This is going to be painful, but you only need to do this once. After you get through this, you will be able to just look at a TracedCallback and understand it.

The first thing we need to look at is the declaration of the trace source. Recall that this is in mobility-model.h, where we have previously found:

TracedCallback<Ptr<const MobilityModel>> m_courseChangeTrace;

This declaration is for a template. The template parameter is inside the angle-brackets, so we are really interested in finding out what that TracedCallback<> is. If you have absolutely no idea where this might be found, grep is your friend.

We are probably going to be interested in some kind of declaration in the ns-3 source, so first change into the src directory. Then, we know this declaration is going to have to be in some kind of header file, so just grep for it using:

\$ find . -name '*.h' | xargs grep TracedCallback

You'll see 303 lines fly by (I piped this through wc to see how bad it was). Although that may seem like a lot, that's not really a lot. Just pipe the output through more and start scanning through it. On the first page, you will see some very suspiciously template-looking stuff.

TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::TracedCallback()
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::ConnectWithoutContext(c ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::Connect(const CallbackB ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::DisconnectWithoutContext ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::Disconnect(const Callba ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator()() const ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator()(T1 a1) const ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator()(T1 a1, T2 a2 ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator()(T1 a1, T2 a2 ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator()(T1 a1, T2 a2 ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator()(T1 a1, T2 a2 ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator()(T1 a1, T2 a2 ...
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::operator()(T1 a1, T2 a2 ...

It turns out that all of this comes from the header file traced-callback.h, which sounds very promising. You can then take a look at mobility-model.h and see that there is a line which confirms this hunch:

\#include "ns3/traced-callback.h"

Of course, you could have gone at this from the other direction and started by looking at the includes in mobility-model.h and noticing the include of traced-callback.h and inferring that this must be the file you want.

In either case, the next step is to take a look at src/core/model/traced-callback.h in your favorite editor to see what is happening.

You will see a comment at the top of the file that should be comforting:

An ns3::TracedCallback has almost exactly the same API as a normal ns3::Callback but instead of forwarding calls to a single function (as an ns3::Callback normally does), it forwards calls to a chain of ns3::Callback.

This should sound very familiar and let you know you are on the right track.

Just after this comment, you will find

template<typename T1 = empty, typename T2 = empty,
         typename T3 = empty, typename T4 = empty,
         typename T5 = empty, typename T6 = empty,
         typename T7 = empty, typename T8 = empty>
class TracedCallback
{
    ...

This tells you that TracedCallback is a templated class. It has eight possible type parameters with default values. Go back and compare this with the declaration you are trying to understand:

TracedCallback<Ptr<const MobilityModel>> m_courseChangeTrace;

The typename T1 in the templated class declaration corresponds to the Ptr<const MobilityModel> in the declaration above. All of the other type parameters are left as defaults. Looking at the constructor really doesn't tell you much. The one place where you have seen a connection made between your Callback function and the tracing system is in the Connect and ConnectWithoutContext functions. If you scroll down, you will see a ConnectWithoutContext method here:

template<typename T1, typename T2,
         typename T3, typename T4,
         typename T5, typename T6,
         typename T7, typename T8>
void
TracedCallback<T1,T2,T3,T4,T5,T6,T7,T8>::ConnectWithoutContext(const CallbackBase & callback)
{
    Callback<void,T1,T2,T3,T4,T5,T6,T7,T8> cb;
    cb.Assign(callback);
    m_callbackList.push_back(cb);
}

You are now in the belly of the beast. When the template is instantiated for the declaration above, the compiler will replace T1 with Ptr<const MobilityModel>.

void
TracedCallback<Ptr<const MobilityModel>>::ConnectWithoutContext(const CallbackBase & callback)
{
    Callback<void, Ptr<const MobilityModel>> cb;
    cb.Assign(callback);
    m_callbackList.push_back(cb);
}

You can now see the implementation of everything we've been talking about. The code creates a Callback of the right type and assigns your function to it. This is the equivalent of the pfi = MyFunction we discussed at the start of this section. The code then adds the Callback to the list of Callbacks for this source. The only thing left is to look at the definition of Callback. Using the same grep trick as we used to find TracedCallback, you will be able to find that the file ./core/callback.h is the one we need to look at.

If you look down through the file, you will see a lot of probably almost incomprehensible template code. You will eventually come to some API Documentation for the Callback template class, though. Fortunately, there is some English:

Callback template class.

This class template implements the Functor Design Pattern. It is used to declare the type of a Callback:
- the first non-optional template argument represents the return type of the callback.
- the remaining (optional) template arguments represent the type of the subsequent arguments to the callback.
- up to nine arguments are supported.

We are trying to figure out what the

Callback<void, Ptr<const MobilityModel>> cb;

declaration means. Now we are in a position to understand that the first (non-optional) template argument, void, represents the return type of the Callback. The second (optional) template argument, Ptr<const MobilityModel> represents the type of the first argument to the callback.

The Callback in question is your function to receive the trace events. From this you can infer that you need a function that returns void and takes a Ptr<const MobilityModel>. For example,

void
CourseChangeCallback(Ptr<const MobilityModel> model)
{
    ...
}

That's all you need if you want to Config::ConnectWithoutContext. If you want a context, you need to Config::Connect and use a Callback function that takes a string context. This is because the Connect function will provide the context for you. You'll need:

void
CourseChangeCallback(std::string context, Ptr<const MobilityModel> model)
{
    ...

If you want to ensure that your CourseChangeCallback is only visible in your local file, you can add the keyword static and come up with:

static void
CourseChangeCallback(std::string path, Ptr<const MobilityModel> model)
{
    ...
}

which is exactly what we used in the third.cc example. Perhaps you should now go back and reread the previous section (Take My Word for It).

If you are interested in more details regarding the implementation of Callbacks, feel free to take a look at the ns-3 manual. They are one of the most frequently used constructs in the low-level parts of ns-3. It is, in my opinion, a quite elegant thing.

\subsection*{8.2.7 TracedValues}

Earlier in this section, we presented a simple piece of code that used a TracedValue<int32_t> to demonstrate the basics of the tracing code. We just glossed over what a TracedValue really is and how to find the return type and formal arguments for the callback.

As we mentioned, the file traced-value.h brings in the required declarations for tracing of data that obeys value semantics. In general, value semantics just means that you can pass the object itself around, rather than passing the address of the object. We extend that requirement to include the full set of assignment-style operators that are pre-defined for plain-old-data (POD) types:

\begin{tabular}{|l|l|}
\hline \multicolumn{2}{|l|}{ operator= (assignment) } \\
\hline operator*= & operator/= \\
\hline operator+= & operator-= \\
\hline \multicolumn{2}{|l|}{ operator++ (both prefix and postfix) } \\
\hline \multicolumn{2}{|l|}{ operator-- (both prefix and postfix) } \\
\hline operator<<= & operator>>= \\
\hline operator\&= & operator|= \\
\hline operator\%= & operator\textasciicircum= \\
\hline
\end{tabular}

What this all really means is that you will be able to trace all changes made using those operators to a C++ object which has value semantics.

The TracedValue<> declaration we saw above provides the infrastructure that overloads the operators mentioned above and drives the callback process. When any of those operators is used on a TracedValue, it will provide both the old and the new value of that variable, in this case an int32_t value. By inspection of the TracedValue declaration, we know the trace sink function will have arguments (int32_t oldValue, int32_t newValue). The return type for a TracedValue callback function is always void, so the expected callback signature for the sink function traceSink will be:

void (* traceSink)(int32_t oldValue, int32_t newValue);

The .AddTraceSource in the GetTypeId method provides the "hooks" used for connecting the trace source to the outside world through the Config system. We already discussed the first three arguments to AddTraceSource: the Attribute name for the Config system, a help string, and the address of the TracedValue class data member.

The final string argument, "ns3::TracedValueCallback::Int32" in the example, is the name of a typedef for the callback function signature. We require these signatures to be defined, and give the fully qualified type name to AddTraceSource, so the API documentation can link a trace source to the function signature. For TracedValue the signature is straightforward; for TracedCallbacks, as we've already seen, the API docs really help.

\subsection*{8.3 Real Example}

Let's do an example taken from one of the best-known books on TCP around. "TCP/IP Illustrated, Volume 1: The Protocols," by W. Richard Stevens is a classic. I just flipped the book open and ran across a nice plot of both the congestion window and sequence numbers versus time on page 366. Stevens calls this, "Figure 21.10. Value of cwnd and send sequence number while data is being transmitted." Let's just recreate the cwnd part of that plot in ns-3 using the tracing system and gnuplot.

\subsection*{8.3.1 Available Sources}

The first thing to think about is how we want to get the data out. What is it that we need to trace? So let's consult the "All Trace Sources" list to see what we have to work with. Recall that this is found in the ns-3 API Documentation. If you scroll through the list, you will eventually find:

\section*{ns3::TcpSocketBase}
- CongestionWindow: The TCP connection's congestion window
- SlowStartThreshold: TCP slow start threshold (bytes)

It turns out that the ns-3 TCP implementation lives (mostly) in the file src/internet/model/tcp-socket-base.cc while congestion control variants are in files such as src/internet/model/tcp-bic.cc. If you don't know this a priori, you can use the recursive grep trick:

\$ find . -name '*.cc' | xargs grep -i tcp

You will find page after page of instances of tcp pointing you to that file.

Bringing up the class documentation for TcpSocketBase and skipping to the list of TraceSources you will find

\section*{TraceSources}
- CongestionWindow: The TCP connection's congestion window

Callback signature: ns3::TracedValueCallback::Uint32

Clicking on the callback typedef link we see the signature you now know to expect:

typedef void(* ns3::TracedValueCallback::Uint32)(uint32_t oldValue, uint32_t newValue)

You should now understand this code completely. If we have a pointer to the TcpSocketBase object, we can TraceConnect to the "CongestionWindow" trace source if we provide an appropriate callback target. This is the same kind of trace source that we saw in the simple example at the start of this section, except that we are talking about uint32_t instead of int32_t. And we know that we have to provide a callback function with that signature.

\subsection*{8.3.2 Finding Examples}

It's always best to try and find working code laying around that you can modify, rather than starting from scratch. So the first order of business now is to find some code that already hooks the "CongestionWindow" trace source and see if we can modify it. As usual, grep is your friend:

\$ find . -name '*.cc' | xargs grep CongestionWindow

This will point out a couple of promising candidates: examples/tcp/tcp-large-transfer.cc and src/test/ns3tcp/ns3tcp-cwnd-test-suite.cc.

We haven't visited any of the test code yet, so let's take a look there. You will typically find that test code is fairly minimal, so this is probably a very good bet. Open src/test/ns3tcp/ns3tcp-cwnd-test-suite.cc in your favorite editor and search for "CongestionWindow". You will find,

ns3TcpSocket->TraceConnectWithoutContext("CongestionWindow",
    MakeCallback(&Ns3TcpCwndTestCase1::CwndChange, this));

This should look very familiar to you. We mentioned above that if we had a pointer to the TcpSocketBase, we could TraceConnect to the "CongestionWindow" trace source. That's exactly what we have here; so it turns out that this line of code does exactly what we want. Let's go ahead and extract the code we need from this function (Ns3TcpCwndTestCase1::DoRun()). If you look at this function, you will find that it looks just like an ns-3 script. It turns out that is exactly what it is. It is a script run by the test framework, so we can just pull it out and wrap it in main instead of in DoRun. Rather than walk through this step by step, we have provided the file that results from porting this test back to a native ns-3 script - examples/tutorial/fifth.cc.

\subsection*{8.3.3 Dynamic Trace Sources}

The fifth.cc example demonstrates an extremely important rule that you must understand before using any kind of trace source: you must ensure that the target of a Config::Connect command exists before trying to use it. This is no different than saying an object must be instantiated before trying to call it. Although this may seem obvious when stated this way, it does trip up many people trying to use the system for the first time.

Let's return to basics for a moment. There are three basic execution phases that exist in any ns-3 script. The first phase is sometimes called "Configuration Time" or "Setup Time," and exists during the period when the main function of your script is running, but before Simulator::Run is called. The second phase is sometimes called "Simulation Time" and exists during the time period when Simulator::Run is actively executing its events. After it completes executing the simulation, Simulator::Run will return control back to the main function. When this happens, the script enters what can be called the "Teardown Phase," which is when the structures and objects created during setup are taken apart and released.

Perhaps the most common mistake made in trying to use the tracing system is assuming that entities constructed dynamically during simulation time are available during configuration time. In particular, an ns-3 Socket is a dynamic object often created by Applications to communicate between Nodes. An ns-3 Application always has a "Start Time" and a "Stop Time" associated with it. In the vast majority of cases, an Application will not attempt to create a dynamic object until its StartApplication method is called at some "Start Time". This is to ensure that the simulation is completely configured before the app tries to do anything (what would happen if it tried to connect to a Node that didn't exist yet during configuration time?). As a result, during the configuration phase you can't connect a trace source to a trace sink if one of them is created dynamically during the simulation.

The two solutions to this conundrum are

1. Create a simulator event that is run after the dynamic object is created and hook the trace when that event is executed; or

2. Create the dynamic object at configuration time, hook it then, and give the object to the system to use during simulation time.

We took the second approach in the fifth.cc example. This decision required us to create the TutorialApp Application, the entire purpose of which is to take a Socket as a parameter.

\subsection*{8.3.4 Walkthrough: fifth.cc}

Now, let's take a look at the example program we constructed by dissecting the congestion window test. Open examples/tutorial/fifth.cc in your favorite editor. You should see some familiar looking code:

/*
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation;
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 */

#include "tutorial-app.h"

#include "ns3/applications-module.h"
#include "ns3/core-module.h"
#include "ns3/internet-module.h"
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"

#include <fstream>

using namespace ns3;

NS_LOG_COMPONENT_DEFINE("FifthScriptExample");

The next lines of source are the network illustration and a comment addressing the problem described above with Socket.

![](https://cdn.mathpix.com/cropped/2024_05_10_73d2d172242addade300g-121.jpg?height=623&width=1485&top_left_y=1827&top_left_x=233)


// to crank up a flow and hook the CongestionWindow attribute on the socket
// of the sender. Normally one would use an on-off application to generate a
// flow, but this has a couple of problems. First, the socket of the on-off
// application is not created until Application Start time, so we wouldn't be
// able to hook the socket (now) at configuration time. Second, even if we
// could arrange a call after start time, the socket is not public so we
// couldn't get at it.
//
// So, we can cook up a simple version of the on-off application that does what
// we want. On the plus side we don't need all of the complexity of the on-off
// application. On the minus side, we don't have a helper, so we have to get
// a little more involved in the details, but this is trivial.
//
// So first, we create a socket and do the trace connect on it; then we pass
// this socket into the constructor of our simple application which we then
// install in the source node.
// ============================================================================
//

This should also be self-explanatory.

Previous versions of ns-3 declared a custom application called MyApp for use in this program. Current versions of ns-3 have moved this to a separate header file (tutorial-app.h) and implementation file (tutorial-app.cc). This simple application allows the socket to be created at configuration time.

/**
 * Tutorial - a simple Application sending packets.
 */
class TutorialApp : public Application
{
  public:
    TutorialApp();
    ~TutorialApp() override;

    /**
     * Register this type.
     * \return The TypeId.
     */
    static TypeId GetTypeId();

    /**
     * Setup the socket.
     * \param socket The socket.
     * \param address The destination address.
     * \param packetSize The packet size to transmit.
     * \param nPackets The number of packets to transmit.
     * \param dataRate the data rate to use.
     */
    void Setup(Ptr<Socket> socket,
               Address address,
               uint32_t packetSize,
               uint32_t nPackets,
               DataRate dataRate);

  private:
    void StartApplication() override;
    void StopApplication() override;

    /// Schedule a new transmission.
    void ScheduleTx();
    /// Send a packet.
    void SendPacket();

    Ptr<Socket> m_socket;   //!< The transmission socket.
    Address m_peer;         //!< The destination address.
    uint32_t m_packetSize;  //!< The packet size.
    uint32_t m_nPackets;    //!< The number of packets to send.
    DataRate m_dataRate;    //!< The data rate to use.
    EventId m_sendEvent;    //!< Send event.
    bool m_running;         //!< True if the application is running.
    uint32_t m_packetsSent; //!< The number of packets sent.
};

You can see that this class inherits from the ns-3 Application class. Take a look at src/network/model/application.h if you are interested in what is inherited. The TutorialApp class is obligated to override the StartApplication and StopApplication methods. These methods are automatically called when TutorialApp is required to start and stop sending data during the simulation.

\section*{Starting/Stopping Applications}

It is worthwhile to spend a bit of time explaining how events actually get started in the system. This is another fairly deep explanation, and can be ignored if you aren't planning on venturing down into the guts of the system. It is useful, however, in that the discussion touches on how some very important parts of ns-3 work and exposes some important idioms. If you are planning on implementing new models, you probably want to understand this section.

The most common way to start pumping events is to start an Application. This is done as the result of the following (hopefully) familiar lines of an ns-3 script:

ApplicationContainer apps = ...;
apps.Start(Seconds(1.0));
apps.Stop(Seconds(10.0));

The application container code (see src/network/helper/application-container.h if you are interested) loops through its contained applications and calls,

app->SetStartTime(startTime);

as a result of the apps.Start call and

app->SetStopTime(stopTime);

as a result of the apps.Stop call.

The ultimate result of these calls is that we want to have the simulator automatically make calls into our Applications to tell them when to start and stop. In the case of TutorialApp, it inherits from class Application and overrides StartApplication and StopApplication. These are the functions that will be called by the simulator at the appropriate time. You will find that TutorialApp::StartApplication does the initial Bind and Connect on the socket, and then starts data flowing by calling TutorialApp::SendPacket. TutorialApp::StopApplication stops generating packets by cancelling any pending send events, then closes the socket.

One of the nice things about ns-3 is that you can completely ignore the implementation details of how your Application is "automagically" called by the simulator at the correct time. But since we have already ventured deep into ns-3, let's go for it.

If you look at src/network/model/application.cc you will find that the SetStartTime method of an Application just sets the member variable m_startTime and the SetStopTime method just sets m_stopTime. From there, without some hints, the trail will probably end.

The key to picking up the trail again is to know that there is a global list of all of the nodes in the system. Whenever you create a node in a simulation, a pointer to that Node is added to the global NodeList.

Take a look at src/network/model/node-list.cc and search for NodeList::Add. The public static implementation calls into a private implementation called NodeListPriv::Add. This is a relatively common idiom in ns-3. So, take a look at NodeListPriv::Add. There you will find,

Simulator::ScheduleWithContext(index, TimeStep(0), &Node::Initialize, node);

This tells you that whenever a Node is created in a simulation, as a side-effect, a call to that node's Initialize method is scheduled for you that happens at time zero. Don't read too much into that name, yet. It doesn't mean that the Node is going to start doing anything, it can be interpreted as an informational call into the Node telling it that the simulation has started, not a call for action telling the Node to start doing something.

So, NodeList::Add indirectly schedules a call to Node::Initialize at time zero to advise a new Node that the simulation has started. If you look in src/network/model/node.h you will, however, not find a method called Node::Initialize. It turns out that the Initialize method is inherited from class Object. All objects in the system can be notified when the simulation starts, and objects of class Node are just one kind of those objects.

Take a look at src/core/model/object.cc next and search for Object::Initialize. This code is not as straightforward as you might have expected since ns-3 objects support aggregation. The code in Object::Initialize loops through all of the objects that have been aggregated together and calls their DoInitialize method. This is another idiom that is very common in ns-3, sometimes called the "template design pattern": a public non-virtual API method, which stays constant across implementations, calls a private virtual implementation method that is inherited and implemented by subclasses. The names are typically something like MethodName for the public API and DoMethodName for the private API.

This tells us that we should look for a Node::DoInitialize method in src/network/model/node.cc for the method that will continue our trail. If you locate the code, you will find a method that loops through all of the devices in the Node and then all of the applications in the Node calling device->Initialize and application->Initialize respectively.

You may already know that classes Device and Application both inherit from class Object and so the next step will be to look at what happens when Application::DoInitialize is called. Take a look at src/network/model/application.cc and you will find:

void
Application::DoInitialize()
{
    NS_LOG_FUNCTION(this);
    m_startEvent = Simulator::Schedule(m_startTime, &Application::StartApplication, this);
    if (m_stopTime != TimeStep(0))
    {
        m_stopEvent = Simulator::Schedule(m_stopTime, &Application::StopApplication, this);
    }
    Object::DoInitialize();
}

Here, we finally come to the end of the trail. If you have kept it all straight, when you implement an \(n s-3\) Application, your new application inherits from class Application. You override the StartApplication and StopApplication methods and provide mechanisms for starting and stopping the flow of data out of your new

Application. When a Node is created in the simulation, it is added to a global NodeList. The act of adding a Node to this NodeList causes a simulator event to be scheduled for time zero, which calls the Node::Initialize method of the newly added Node when the simulation starts. Since a Node inherits from Object, this calls the Object::Initialize method on the Node which, in turn, calls the DoInitialize methods on all of the objects aggregated to the Node (think mobility models). Since the Node object has overridden DoInitialize, that method is called when the simulation starts. The Node::DoInitialize method calls the Initialize methods of all of the Applications on the node. Since Applications are also Objects, this causes Application::DoInitialize to be called. When Application::DoInitialize is called, it schedules events for the StartApplication and StopApplication calls on the Application. These calls are designed to start and stop the flow of data from the Application.

This has been another fairly long journey, but it only has to be made once, and you now understand another very deep piece of ns-3.

\section*{The TutorialApp Application}

The TutorialApp Application needs a constructor and a destructor, of course:

TutorialApp::TutorialApp()
    : m_socket(nullptr),
      m_peer(),
      m_packetSize(0),
      m_nPackets(0),
      m_dataRate(0),
      m_sendEvent(),
      m_running(false),
      m_packetsSent(0)
{
}

TutorialApp::~TutorialApp()
{
    m_socket = nullptr;
}

The existence of the next bit of code is the whole reason why we wrote this Application in the first place.

void
TutorialApp::Setup(Ptr<Socket> socket,
                   Address address,
                   uint32_t packetSize,
                   uint32_t nPackets,
                   DataRate dataRate)
{
    m_socket = socket;
    m_peer = address;
    m_packetSize = packetSize;
    m_nPackets = nPackets;
    m_dataRate = dataRate;
}

This code should be pretty self-explanatory. We are just initializing member variables. The important one from the perspective of tracing is the Ptr<Socket> socket which we needed to provide to the application during configuration time. Recall that we are going to create the Socket as a TcpSocket (which is implemented by TcpSocketBase) and hook its "CongestionWindow" trace source before passing it to the Setup method.

void
TutorialApp::StartApplication()
{
    m_running = true;
    m_packetsSent = 0;
    m_socket->Bind();
    m_socket->Connect(m_peer);
    SendPacket();
}

The above code is the overridden implementation of Application::StartApplication that will be automatically called by the simulator to start our Application running at the appropriate time. You can see that it does a Socket Bind operation. If you are familiar with Berkeley Sockets this shouldn't be a surprise; it performs the required work on the local side of the connection just as you might expect. The following Connect will do what is required to establish a connection with the TCP at Address m_peer. It should now be clear why we need to defer a lot of this to simulation time, since the Connect is going to need a fully functioning network to complete. After the Connect, the Application then starts creating simulation events by calling SendPacket.

The next bit of code explains to the Application how to stop creating simulation events.

void
TutorialApp::StopApplication()
{
    m_running = false;

    if (m_sendEvent.IsRunning())
    {
        Simulator::Cancel(m_sendEvent);
    }

    if (m_socket)
    {
        m_socket->Close();
    }
}

Every time a simulation event is scheduled, an Event is created. If the Event is pending execution or executing, its method IsRunning will return true. In this code, if IsRunning() returns true, we cancel the event, which removes it from the simulator event queue. By doing this, we break the chain of events that the Application is using to keep sending its Packets, and the Application goes quiet. After we quiet the Application we close the socket, which tears down the TCP connection.

The socket is actually deleted in the destructor when m_socket = nullptr is executed. This removes the last reference to the underlying Ptr<Socket>, which causes the destructor of that Object to be called.

Recall that StartApplication called SendPacket to start the chain of events that describes the Application behavior.

void
TutorialApp::SendPacket()
{
    Ptr<Packet> packet = Create<Packet>(m_packetSize);
    m_socket->Send(packet);

    if (++m_packetsSent < m_nPackets)
    {
        ScheduleTx();
    }
}
Here, you see that SendPacket does just that. It creates a Packet and then does a Send which, if you know Berkeley Sockets, is probably just what you expected to see.

It is the responsibility of the Application to keep scheduling the chain of events, so the next lines call ScheduleTx to schedule another transmit event (a SendPacket) until the Application decides it has sent enough.

void
TutorialApp::ScheduleTx()
{
    if (m_running)
    {
        Time tNext(Seconds(m_packetSize * 8 / static_cast<double>(m_dataRate.GetBitRate())));
        m_sendEvent = Simulator::Schedule(tNext, &TutorialApp::SendPacket, this);
    }
}

Here, you see that ScheduleTx does exactly that. If the Application is running (if StopApplication has not been called) it will schedule a new event, which calls SendPacket again. The alert reader will spot something that also trips up new users: the data rate of an Application is just that. It has nothing to do with the data rate of an underlying Channel. It is the rate at which the Application produces bits, and it does not take into account any overhead for the various protocols or channels that it uses to transport the data. If you set the data rate of an Application to the same data rate as your underlying Channel you will eventually get a buffer overflow.

\section*{Trace Sinks}

The whole point of this exercise is to get trace callbacks from TCP indicating the congestion window has been updated. The next piece of code implements the corresponding trace sink:

static void
CwndChange(uint32_t oldCwnd, uint32_t newCwnd)
{
    NS_LOG_UNCOND(Simulator::Now().GetSeconds() << "\t" << newCwnd);
}

This should be very familiar to you now, so we won't dwell on the details. This function just logs the current simulation time and the new value of the congestion window every time it is changed. You can probably imagine that you could load the resulting output into a graphics program (gnuplot or Excel) and immediately see a nice graph of the congestion window behavior over time.

We added a new trace sink to show where packets are dropped. We are going to add an error model to this code also, so we wanted to demonstrate this working.

static void
RxDrop(Ptr<const Packet> p)
{
    NS_LOG_UNCOND("RxDrop at " << Simulator::Now().GetSeconds());
}

This trace sink will be connected to the "PhyRxDrop" trace source of the point-to-point NetDevice. This trace source fires when a packet is dropped by the physical layer of a NetDevice. If you take a small detour to the source (src/point-to-point/model/point-to-point-net-device.cc) you will see that this trace source refers to PointToPointNetDevice::m_phyRxDropTrace. If you then look in src/point-to-point/model/point-to-point-net-device.h for this member variable, you will find that it is declared as a TracedCallback<Ptr<const Packet>>. This should tell you that the callback target should be a function that returns void and takes a single parameter which is a Ptr<const Packet> (assuming we use ConnectWithoutContext) - just what we have above.

\section*{Main Program}

The main function starts off by configuring the TCP type to use a legacy NewReno congestion control variant, with what is called the classic TCP loss recovery mechanism. When this tutorial program was originally written, these were the default TCP configurations, but over time, ns-3 TCP has evolved to use the current Linux TCP defaults of CUBIC congestion control and PRR loss recovery. The first statements also configure the command-line argument processing.

int
main(int argc, char* argv[])
{
    CommandLine cmd(__FILE__);
    cmd.Parse(argc, argv);

    // In the following three lines, TCP NewReno is used as the congestion
    // control algorithm, the initial congestion window of a TCP connection is
    // set to 1 packet, and the classic fast recovery algorithm is used. Note
    // that this configuration is used only to demonstrate how TCP parameters
    // can be configured in ns-3. Otherwise, it is recommended to use the default
    // settings of TCP in ns-3.
    Config::SetDefault("ns3::TcpL4Protocol::SocketType", StringValue("ns3::TcpNewReno"));
    Config::SetDefault("ns3::TcpSocket::InitialCwnd", UintegerValue(1));
    Config::SetDefault("ns3::TcpL4Protocol::RecoveryType",
                       TypeIdValue(TypeId::LookupByName("ns3::TcpClassicRecovery")));

The following code should be very familiar to you by now:

NodeContainer nodes;
nodes.Create(2);

PointToPointHelper pointToPoint;
pointToPoint.SetDeviceAttribute("DataRate", StringValue("5Mbps"));
pointToPoint.SetChannelAttribute("Delay", StringValue("2ms"));

NetDeviceContainer devices;
devices = pointToPoint.Install(nodes);

This creates two nodes with a point-to-point channel between them, just as shown in the illustration at the start of the file.

The next few lines of code show something new. If we trace a connection that behaves perfectly, we will end up with a monotonically increasing congestion window. To see any interesting behavior, we really want to introduce link errors which will drop packets, cause duplicate ACKs and trigger the more interesting behaviors of the congestion window.

ns-3 provides ErrorModel objects which can be attached to Channels. We are using the RateErrorModel which allows us to introduce errors into a Channel at a given rate.

Ptr<RateErrorModel> em = CreateObject<RateErrorModel>();
em->SetAttribute("ErrorRate", DoubleValue(0.00001));
devices.Get(1)->SetAttribute("ReceiveErrorModel", PointerValue(em));

The above code instantiates a RateErrorModel Object, and we set the "ErrorRate" Attribute to the desired value. We then set the resulting instantiated RateErrorModel as the error model used by the point-to-point NetDevice. This will give us some retransmissions and make our plot a little more interesting.

InternetStackHelper stack;
stack.Install(nodes);

Ipv4AddressHelper address;
address.SetBase("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer interfaces = address.Assign(devices);

The above code should be familiar. It installs internet stacks on our two nodes and creates interfaces and assigns IP addresses for the point-to-point devices.

Since we are using TCP, we need something on the destination Node to receive TCP connections and data. The PacketSink Application is commonly used in ns-3 for that purpose.

uint16_t sinkPort = 8080;
Address sinkAddress(InetSocketAddress(interfaces.GetAddress(1), sinkPort));
PacketSinkHelper packetSinkHelper("ns3::TcpSocketFactory",
                                  InetSocketAddress(Ipv4Address::GetAny(), sinkPort));
ApplicationContainer sinkApps = packetSinkHelper.Install(nodes.Get(1));
sinkApps.Start(Seconds(0.));
sinkApps.Stop(Seconds(20.));

This should all be familiar, with the exception of,

PacketSinkHelper packetSinkHelper("ns3::TcpSocketFactory", InetSocketAddress(Ipv4Address::GetAny(), sinkPort));

This code instantiates a PacketSinkHelper and tells it to create sockets using the class ns3::TcpSocketFactory. This class implements a design pattern called "object factory", a commonly used mechanism for specifying, in an abstract way, the class used to create objects. Here, instead of creating the objects yourself, you provide the PacketSinkHelper a string that specifies a TypeId used to create an object, which can then be used, in turn, to create instances of the Objects created by the factory.

The remaining parameter tells the Application which address and port it should Bind to.

The next two lines of code will create the socket and connect the trace source.

Ptr<Socket> ns3TcpSocket = Socket::CreateSocket(nodes.Get(0), TcpSocketFactory::GetTypeId());
ns3TcpSocket->TraceConnectWithoutContext("CongestionWindow", MakeCallback(&CwndChange));

The first statement calls the static member function Socket::CreateSocket and provides a Node and an explicit TypeId for the object factory used to create the socket. This is a slightly lower level call than the PacketSinkHelper call above, and uses an explicit C++ type instead of one referred to by a string. Otherwise, it is conceptually the same thing.

Once the TcpSocket is created and attached to the Node, we can use TraceConnectWithoutContext to connect the CongestionWindow trace source to our trace sink.

Recall that we coded an Application so we could take that Socket we just made (during configuration time) and use it in simulation time. We now have to instantiate that Application. We didn't go to any trouble to create a helper to manage the Application so we are going to have to create and install it "manually". This is actually quite easy:

Ptr<TutorialApp> app = CreateObject<TutorialApp>();
app->Setup(ns3TcpSocket, sinkAddress, 1040, 1000, DataRate("1Mbps"));
nodes.Get(0)->AddApplication(app);
app->SetStartTime(Seconds(1.));
app->SetStopTime(Seconds(20.));

The first line creates an object of type TutorialApp - our Application. The second line tells the Application what Socket to use, what address to connect to, how much data to send at each send event, how many send events to generate and the rate at which to produce data from those events.

Next, we manually add the TutorialApp Application to the source Node and explicitly call the SetStartTime and SetStopTime methods on the Application to tell it when to start and stop doing its thing.

We need to actually do the connect from the receiver point-to-point NetDevice drop event to our RxDrop callback now.

devices.Get(1)->TraceConnectWithoutContext("PhyRxDrop", MakeCallback(&RxDrop));

It should now be obvious that we are getting a reference to the receiving Node NetDevice from its container and connecting the trace source defined by the attribute "PhyRxDrop" on that device to the trace sink RxDrop.

Finally, we tell the simulator to override any Applications and just stop processing events at 20 seconds into the simulation.

Simulator::Stop(Seconds(20));
Simulator::Run();
Simulator::Destroy();

return 0;
}

Recall that as soon as Simulator::Run is called, configuration time ends, and simulation time begins. All of the work we orchestrated by creating the Application and teaching it how to connect and send data actually happens during this function call.

As soon as Simulator::Run returns, the simulation is complete and we enter the teardown phase. In this case, Simulator::Destroy takes care of the gory details and we just return a success code after it completes.

\subsection*{8.3.5 Running fifth.cc}

Since we have provided the file fifth.cc for you, if you have built your distribution (in debug or default mode since it uses NS_LOG - recall that optimized builds optimize out NS_LOG) it will be waiting for you to run.

$ ./ns3 run fifth
1.00419 536
1.0093  1072
1.01528 1608
1.02167 2144
...
1.11319 8040
1.12151 8576
1.12983 9112
RxDrop at 1.13696
...

You can probably see immediately a downside of using prints of any kind in your traces. We get those extraneous ns3 messages printed all over our interesting information along with those RxDrop messages. We will remedy that soon, but I'm sure you can't wait to see the results of all of this work. Let's redirect that output to a file called cwnd.dat:

$ ./ns3 run fifth > cwnd.dat 2>&1

Now edit up "cwnd.dat" in your favorite editor and remove the ns3 build status and drop lines, leaving only the traced data. (You could also comment out the TraceConnectWithoutContext("PhyRxDrop", MakeCallback(&RxDrop)); line in the script to get rid of the drop prints just as easily.)
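If you would rather not hand-edit the file, a shell filter can do the same cleanup, on the assumption that only the traced data lines consist of exactly two numeric columns (build chatter and the "RxDrop at ..." lines do not):

```shell
# Keep only "time<TAB>cwnd" data lines; drop build chatter and RxDrop lines.
awk 'NF == 2 && $1 ~ /^[0-9.]+$/ && $2 ~ /^[0-9]+$/' cwnd.dat > cwnd-clean.dat
```

The cleaned file can then be fed straight to gnuplot.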

You can now run gnuplot (if you have it installed) and tell it to generate some pretty pictures:

$ gnuplot
gnuplot> set terminal png size 640,480
gnuplot> set output "cwnd.png"
gnuplot> plot "cwnd.dat" using 1:2 title 'Congestion Window' with linespoints
gnuplot> exit

You should now have a graph of the congestion window versus time sitting in the file "cwnd.png" that looks like:

![](https://cdn.mathpix.com/cropped/2024_05_10_73d2d172242addade300g-131.jpg?height=1192&width=1548&top_left_y=949&top_left_x=294)

\subsection*{8.3.6 Using Mid-Level Helpers}

In the previous section, we showed how to hook a trace source and get hopefully interesting information out of a simulation. Perhaps you will recall that we called logging to the standard output using std::cout a "blunt instrument" much earlier in this chapter. We also wrote about how it was a problem having to parse the log output in order to isolate interesting information. It may have occurred to you that we just spent a lot of time implementing an example that exhibits all of the problems we purport to fix with the ns-3 tracing system! You would be correct. But, bear with us. We're not done yet.

One of the most important things we want to do is to have the ability to easily control the amount of output coming out of the simulation; and we also want to save those data to a file so we can refer back to it later. We can use the mid-level trace helpers provided in ns-3 to do just that and complete the picture.

We provide a script that writes the cwnd change and drop events developed in the example fifth.cc to disk in separate files. The cwnd changes are stored as a tab-separated ASCII file and the drop events are stored in a PCAP file. The changes to make this happen are quite small.

\section*{Walkthrough: sixth.cc}

Let's take a look at the changes required to go from fifth.cc to sixth.cc. Open examples/tutorial/sixth.cc in your favorite editor. You can see the first change by searching for CwndChange. You will find that we have changed the signatures for the trace sinks and have added a single line to each sink that writes the traced information to a stream representing a file.

static void
CwndChange(Ptr<OutputStreamWrapper> stream, uint32_t oldCwnd, uint32_t newCwnd)
{
    NS_LOG_UNCOND(Simulator::Now().GetSeconds() << "\t" << newCwnd);
    *stream->GetStream() << Simulator::Now().GetSeconds() << "\t" << oldCwnd << "\t" << newCwnd << std::endl;
}

static void
RxDrop(Ptr<PcapFileWrapper> file, Ptr<const Packet> p)
{
    NS_LOG_UNCOND("RxDrop at " << Simulator::Now().GetSeconds());
    file->Write(Simulator::Now(), p);
}

We have added a "stream" parameter to the CwndChange trace sink. This is an object that holds (keeps safely alive) a C++ output stream. It turns out that this is a very simple object, but one that manages lifetime issues for the stream and solves a problem that even experienced C++ users run into. It turns out that the copy constructor for std::ostream is marked private. This means that std::ostreams do not obey value semantics and cannot be used in any mechanism that requires the stream to be copied. This includes the ns-3 callback system, which as you may recall, requires objects that obey value semantics. Further notice that we have added the following line in the CwndChange trace sink implementation:

*stream->GetStream() << Simulator::Now().GetSeconds() << "\t" << oldCwnd << "\t" << newCwnd << std::endl;

This would be very familiar code if you replaced *stream->GetStream() with std::cout, as in:

std::cout << Simulator::Now().GetSeconds() << "\t" << oldCwnd << "\t" << newCwnd << std::endl;

This illustrates that the Ptr<OutputStreamWrapper> is really just carrying around a std::ofstream for you, and you can use it here like any other output stream.
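The lifetime-and-copy problem the wrapper solves can be sketched in plain C++, with no ns-3 required. The class and function names below are illustrative stand-ins, assuming only that the wrapper holds its stream behind a shared pointer:

```cpp
#include <cassert>
#include <memory>
#include <sstream>
#include <string>

// Illustrative stand-in for ns-3's OutputStreamWrapper: the wrapper itself is
// copyable (copying it just copies a shared_ptr), even though the std::ostream
// it manages is not copyable. A copyable handle is what lets the stream travel
// through a value-semantics callback system and stay alive as long as any
// copy of the wrapper exists.
class StreamWrapper
{
  public:
    explicit StreamWrapper(std::shared_ptr<std::ostream> os)
        : m_os(std::move(os))
    {
    }

    std::ostream* GetStream() const
    {
        return m_os.get();
    }

  private:
    std::shared_ptr<std::ostream> m_os; // keeps the stream alive across copies
};

std::string TraceToString(double now, unsigned oldCwnd, unsigned newCwnd)
{
    auto buffer = std::make_shared<std::ostringstream>();
    StreamWrapper wrapper(buffer);
    StreamWrapper copy = wrapper; // legal; copying a raw std::ostream is not
    *copy.GetStream() << now << "\t" << oldCwnd << "\t" << newCwnd;
    return buffer->str();
}
```

Copying the wrapper into a callback is cheap and safe; the stream is closed when the last copy goes away.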

A similar situation happens in RxDrop except that the object being passed around (a Ptr<PcapFileWrapper>) represents a PCAP file. There is a one-liner in the trace sink to write a timestamp and the contents of the packet being dropped to the PCAP file:

file->Write(Simulator::Now(), p);

Of course, if we have objects representing the two files, we need to create them somewhere and also cause them to be passed to the trace sinks. If you look in the main function, you will find new code to do just that:

AsciiTraceHelper asciiTraceHelper;
Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream("sixth.cwnd");
ns3TcpSocket->TraceConnectWithoutContext("CongestionWindow", MakeBoundCallback(&CwndChange, stream));
...
PcapHelper pcapHelper;
Ptr<PcapFileWrapper> file = pcapHelper.CreateFile("sixth.pcap", std::ios::out, PcapHelper::DLT_PPP);
devices.Get(1)->TraceConnectWithoutContext("PhyRxDrop", MakeBoundCallback(&RxDrop, file));

In the first section of the code snippet above, we are creating the ASCII trace file, creating an object responsible for managing it and using a variant of the callback creation function to arrange for the object to be passed to the sink. Our ASCII trace helpers provide a rich set of functions to make using text (ASCII) files easy. We are just going to illustrate the use of the file stream creation function here.

The CreateFileStream function is basically going to instantiate a std::ofstream object and create a new file (or truncate an existing file). This std::ofstream is packaged up in an ns-3 object for lifetime management and copy constructor issue resolution.

We then take this ns-3 object representing the file and pass it to MakeBoundCallback(). This function creates a callback just like MakeCallback(), but it "binds" a new value to the callback. This value is added as the first argument to the callback before it is called.

Essentially, MakeBoundCallback(&CwndChange, stream) causes the trace source to add the additional "stream" parameter to the front of the formal parameter list before invoking the callback. This changes the required signature of the CwndChange sink to match the one shown above, which includes the "extra" parameter Ptr<OutputStreamWrapper> stream.
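The binding behavior just described can be imitated with a lambda in standard C++. Everything here (the sink name, the log format, the stand-in trace source) is illustrative, not the ns-3 API:

```cpp
#include <cassert>
#include <functional>
#include <string>

// Sketch of what MakeBoundCallback arranges: the trace source only knows how
// to invoke a one-argument callback, so the extra "stream-like" argument is
// bound in front before the sink is connected. Names are illustrative.
static std::string g_log;

void CwndSink(std::string* log, unsigned newCwnd)
{
    *log += "cwnd=" + std::to_string(newCwnd) + ";";
}

// A stand-in trace source: it can only supply the traced value.
void FireTrace(const std::function<void(unsigned)>& cb, unsigned value)
{
    cb(value);
}

void ConnectAndFire()
{
    // Bind the extra first argument up front, as MakeBoundCallback binds the
    // stream before CwndChange is invoked.
    auto bound = [](unsigned v) { CwndSink(&g_log, v); };
    FireTrace(bound, 536);
    FireTrace(bound, 1072);
}
```

The trace source never learns about the bound argument; only the sink's signature grows by one leading parameter.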

In the second section of code in the snippet above, we instantiate a PcapHelper to do the same thing for our PCAP trace file that we did with the AsciiTraceHelper. The line of code,

Ptr<PcapFileWrapper> file = pcapHelper.CreateFile("sixth.pcap", "w", PcapHelper::DLT_PPP);

creates a PCAP file named "sixth.pcap" with file mode "w". This means that the new file is truncated (contents deleted) if an existing file with that name is found. The final parameter is the "data link type" of the new PCAP file. These are the same as the PCAP library data link types defined in bpf.h if you are familiar with PCAP. In this case, DLT_PPP indicates that the PCAP file is going to contain packets prefixed with point-to-point headers. This is true since the packets are coming from our point-to-point device driver. Other common data link types are DLT_EN10MB (10 MB Ethernet) appropriate for csma devices and DLT_IEEE802_11 (IEEE 802.11) appropriate for wifi devices. These are defined in src/network/helper/trace-helper.h if you are interested in seeing the list. The entries in the list match those in bpf.h but we duplicate them to avoid a PCAP source dependence.
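For the curious, the "data link type" ends up as the last field of the 24-byte PCAP global header. A minimal sketch of that header in plain C++ (an illustration of the file format using the standard libpcap DLT numbers, not ns-3's PcapFileWrapper):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Standard libpcap data link type values mentioned in the text above.
constexpr uint32_t DLT_EN10MB = 1;       // 10MB Ethernet (csma devices)
constexpr uint32_t DLT_PPP = 9;          // point-to-point headers
constexpr uint32_t DLT_IEEE802_11 = 105; // IEEE 802.11 (wifi devices)

// Build the 24-byte PCAP global header; the final 32-bit field is the data
// link type of the capture.
std::vector<uint8_t> MakePcapHeader(uint32_t dataLinkType)
{
    std::vector<uint8_t> h(24, 0);
    const uint32_t magic = 0xa1b2c3d4; // native-byte-order magic number
    const uint16_t versionMajor = 2;
    const uint16_t versionMinor = 4;
    const uint32_t snapLen = 65535;    // max bytes captured per packet
    std::memcpy(&h[0], &magic, 4);
    std::memcpy(&h[4], &versionMajor, 2);
    std::memcpy(&h[6], &versionMinor, 2);
    // bytes 8..15 (thiszone, sigfigs) stay zero
    std::memcpy(&h[16], &snapLen, 4);
    std::memcpy(&h[20], &dataLinkType, 4);
    return h;
}

// Read the link type back out of a header (native byte order).
uint32_t LinkTypeOf(const std::vector<uint8_t>& header)
{
    uint32_t linkType = 0;
    std::memcpy(&linkType, &header[20], 4);
    return linkType;
}
```

Tools like tcpdump read this field to decide how to decode each packet, which is why tcpdump reports "link-type PPP (PPP)" for sixth.pcap below.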

An ns-3 object representing the PCAP file is returned from CreateFile and used in a bound callback exactly as it was in the ASCII case.

An important detour: It is important to notice that even though both of these objects are declared in very similar ways,

Ptr<PcapFileWrapper> file ...

Ptr<OutputStreamWrapper> stream ...

the underlying objects are entirely different. For example, the Ptr<PcapFileWrapper> is a smart pointer to an ns-3 Object that is a fairly heavyweight thing that supports Attributes and is integrated into the Config system. The Ptr<OutputStreamWrapper>, on the other hand, is a smart pointer to a reference counted object that is a very lightweight thing. Remember to look at the object you are referencing before making any assumptions about the "powers" that object may have.

For example, take a look at src/network/utils/pcap-file-wrapper.h in the distribution and notice,

class PcapFileWrapper : public Object

that class PcapFileWrapper is an ns-3 Object by virtue of its inheritance. Then look at src/network/model/output-stream-wrapper.h and notice,

class OutputStreamWrapper : public SimpleRefCount<OutputStreamWrapper>

that this object is not an ns-3 Object at all; it is "merely" a C++ object that happens to support intrusive reference counting.

The point here is that just because you read Ptr<something> it does not necessarily mean that something is an ns-3 Object on which you can hang ns-3 Attributes, for example.

Now, back to the example. If you build and run this example,

$ ./ns3 run sixth

you will see the same messages appear as when you ran "fifth", but two new files will appear in the top-level directory of your ns-3 distribution.

sixth.cwnd
sixth.pcap

Since "sixth.cwnd" is an ASCII text file, you can view it with cat or your favorite file viewer.

1        0     536
1.0093   536   1072
1.01528  1072  1608
1.02167  1608  2144
...
9.69256  5149  5204
9.89311  5204  5259

You have a tab separated file with a timestamp, an old congestion window and a new congestion window suitable for directly importing into your plot program. There are no extraneous prints in the file, no parsing or editing is required.
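Because the file is plain tab-separated text, post-processing it takes only a few lines in any language. A hypothetical parser sketch in C++ (the struct and function names are ours, not part of ns-3):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical parser for the tab-separated cwnd trace: each line carries a
// timestamp, the old congestion window, and the new congestion window.
struct CwndSample
{
    double time;
    unsigned oldCwnd;
    unsigned newCwnd;
};

std::vector<CwndSample> ParseCwnd(const std::string& text)
{
    std::istringstream in(text);
    std::vector<CwndSample> samples;
    CwndSample s;
    // operator>> skips tabs and newlines, so no per-line splitting is needed.
    while (in >> s.time >> s.oldCwnd >> s.newCwnd)
    {
        samples.push_back(s);
    }
    return samples;
}
```

The same three-column structure is what the gnuplot command earlier consumed via "using 1:2".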

Since "sixth.pcap" is a PCAP file, you can view it with tcpdump.

reading from file sixth.pcap, link-type PPP (PPP)
1.136956 IP 10.1.1.1.49153 > 10.1.1.2.8080: Flags [.], seq 17177:17681, ack 1, win 32768, options [TS val 1133 ecr 1127,eol], length 504
1.403196 IP 10.1.1.1.49153 > 10.1.1.2.8080: Flags [.], seq 33280:33784, ack 1, win 32768, options [TS val 1399 ecr 1394,eol], length 504
...
7.426220 IP 10.1.1.1.49153 > 10.1.1.2.8080: Flags [.], seq 785704:786240, ack 1, win 32768, options [TS val 7423 ecr 7421,eol], length 536
9.630693 IP 10.1.1.1.49153 > 10.1.1.2.8080: Flags [.], seq 882688:883224, ack 1, win 32768, options [TS val 9620 ecr 9618,eol], length 536

You have a PCAP file with the packets that were dropped in the simulation. There are no other packets present in the file and there is nothing else present to make life difficult.

It's been a long journey, but we are now at a point where we can appreciate the ns-3 tracing system. We have pulled important events out of the middle of a TCP implementation and a device driver. We stored those events directly in files usable with commonly known tools. We did this without modifying any of the core code involved, and we did this in only 18 lines of code:

static void
CwndChange(Ptr<OutputStreamWrapper> stream, uint32_t oldCwnd, uint32_t newCwnd)
{
    NS_LOG_UNCOND(Simulator::Now().GetSeconds() << "\t" << newCwnd);
    *stream->GetStream() << Simulator::Now().GetSeconds() << "\t" << oldCwnd << "\t" << newCwnd << std::endl;
}
...
AsciiTraceHelper asciiTraceHelper;
Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream("sixth.cwnd");
ns3TcpSocket->TraceConnectWithoutContext("CongestionWindow", MakeBoundCallback(&CwndChange, stream));
...
static void
RxDrop(Ptr<PcapFileWrapper> file, Ptr<const Packet> p)
{
    NS_LOG_UNCOND("RxDrop at " << Simulator::Now().GetSeconds());
    file->Write(Simulator::Now(), p);
}
...
PcapHelper pcapHelper;
Ptr<PcapFileWrapper> file = pcapHelper.CreateFile("sixth.pcap", "w", PcapHelper::DLT_PPP);
devices.Get(1)->TraceConnectWithoutContext("PhyRxDrop", MakeBoundCallback(&RxDrop, file));

\subsection*{8.4 Trace Helpers}

The ns-3 trace helpers provide a rich environment for configuring and selecting different trace events and writing them to files. In previous sections, primarily Building Topologies, we have seen several varieties of the trace helper methods designed for use inside other (device) helpers.

Perhaps you will recall seeing some of these variations:

pointToPoint.EnablePcapAll("second");
pointToPoint.EnablePcap("second", p2pNodes.Get(0)->GetId(), 0);
csma.EnablePcap("third", csmaDevices.Get(0), true);
pointToPoint.EnableAsciiAll(ascii.CreateFileStream("myfirst.tr"));

What may not be obvious, though, is that there is a consistent model for all of the trace-related methods found in the system. We will now take a little time to look at the "big picture".

There are currently two primary use cases of the tracing helpers in ns-3: device helpers and protocol helpers. Device helpers look at the problem of specifying which traces should be enabled through a (node, device) pair. For example, you may want to specify that PCAP tracing should be enabled on a particular device on a specific node. This follows from the ns-3 device conceptual model, and also the conceptual models of the various device helpers. Following naturally from this, the files created follow a <prefix>-<node>-<device> naming convention.

Protocol helpers look at the problem of specifying which traces should be enabled through a protocol and interface pair. This follows from the ns-3 protocol stack conceptual model, and also the conceptual models of internet stack helpers. Naturally, the trace files should follow a <prefix>-<protocol>-<interface> naming convention.

The trace helpers therefore fall naturally into a two-dimensional taxonomy. There are subtleties that prevent all four classes from behaving identically, but we do strive to make them all work as similarly as possible; and whenever possible there are analogs for all methods in all classes.

\begin{tabular}{|l|l|l|}
\hline & PCAP & ASCII \\
\hline Device Helper & \(\checkmark\) & \(\checkmark\) \\
\hline Protocol Helper & \(\checkmark\) & \(\checkmark\) \\
\hline
\end{tabular}

We use an approach called a mixin to add tracing functionality to our helper classes. A mixin is a class that provides functionality when it is inherited by a subclass. Inheriting from a mixin is not considered a form of specialization but is really a way to collect functionality.

Let's take a quick look at all four of these cases and their respective mixins.

\subsection*{8.4.1 Device Helpers}

PCAP

The goal of these helpers is to make it easy to add a consistent PCAP trace facility to an ns-3 device. We want all of the various flavors of PCAP tracing to work the same across all devices, so the methods of these helpers are inherited by device helpers. Take a look at src/network/helper/trace-helper.h if you want to follow the discussion while looking at real code.

The class PcapHelperForDevice is a mixin that provides the high level functionality for using PCAP tracing in an ns-3 device. Every device must implement a single virtual method inherited from this class.

virtual void EnablePcapInternal(std::string prefix, Ptr<NetDevice> nd, bool promiscuous, bool explicitFilename) = 0;

The signature of this method reflects the device-centric view of the situation at this level. All of the public methods inherited from class PcapHelperForDevice reduce to calling this single device-dependent implementation method. For example, the lowest level PCAP method,

void EnablePcap(std::string prefix, Ptr<NetDevice> nd, bool promiscuous = false, bool explicitFilename = false);

will call the device implementation of EnablePcapInternal directly. All other public PCAP tracing methods build on this implementation to provide additional user-level functionality. What this means to the user is that all device helpers in the system will have all of the PCAP trace methods available; and these methods will all work in the same way across devices if the device implements EnablePcapInternal correctly.
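The pattern can be sketched in a few lines of standalone C++: a mixin base whose public convenience methods funnel into one pure virtual internal hook that each device helper implements. Class and member names below are simplified stand-ins for the ns-3 originals, not the real API:

```cpp
#include <cassert>
#include <string>

// Sketch of the trace-helper mixin pattern: the public EnablePcap overloads
// all reduce to one pure virtual EnablePcapInternal that each device helper
// must implement.
class PcapHelperForDeviceSketch
{
  public:
    virtual ~PcapHelperForDeviceSketch() = default;

    // Public front end: user-level conveniences reduce to the internal hook.
    void EnablePcap(const std::string& prefix, int deviceId, bool promiscuous = false)
    {
        EnablePcapInternal(prefix, deviceId, promiscuous);
    }

  protected:
    virtual void EnablePcapInternal(const std::string& prefix,
                                    int deviceId,
                                    bool promiscuous) = 0;
};

// A "device helper" mixes in the tracing API by inheriting and implementing
// the single internal method; it records the call here so we can observe it.
class PointToPointHelperSketch : public PcapHelperForDeviceSketch
{
  public:
    std::string lastCall;

  protected:
    void EnablePcapInternal(const std::string& prefix,
                            int deviceId,
                            bool promiscuous) override
    {
        lastCall = prefix + "-" + std::to_string(deviceId) + (promiscuous ? "-promisc" : "");
    }
};
```

Because only the internal hook is device-specific, every helper that inherits the mixin automatically exposes the same public tracing surface.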

\section*{Methods}

void EnablePcap(std::string prefix, Ptr<NetDevice> nd, bool promiscuous = false, bool explicitFilename = false);
void EnablePcap(std::string prefix, std::string ndName, bool promiscuous = false, bool explicitFilename = false);
void EnablePcap(std::string prefix, NetDeviceContainer d, bool promiscuous = false);
void EnablePcap(std::string prefix, NodeContainer n, bool promiscuous = false);
void EnablePcap(std::string prefix, uint32_t nodeid, uint32_t deviceid, bool promiscuous = false);
void EnablePcapAll(std::string prefix, bool promiscuous = false);

In each of the methods shown above, there is a default parameter called promiscuous that defaults to false. This parameter indicates that the trace should not be gathered in promiscuous mode. If you do want your traces to include all traffic seen by the device (and if the device supports a promiscuous mode) simply add a true parameter to any of the calls above. For example,

Ptr<NetDevice> nd;
...
helper.EnablePcap("prefix", nd, true);

will enable promiscuous mode captures on the NetDevice specified by nd.

The first two methods also include a default parameter called explicitFilename that will be discussed below.

You are encouraged to peruse the API Documentation for class PcapHelperForDevice to find the details of these methods; but to summarize ...
- You can enable PCAP tracing on a particular node/net-device pair by providing a Ptr<NetDevice> to an EnablePcap method. The Ptr<Node> is implicit since the net device must belong to exactly one Node. For example,

Ptr<NetDevice> nd;
...
helper.EnablePcap("prefix", nd);
- You can enable PCAP tracing on a particular node/net-device pair by providing a std::string representing an object name service string to an EnablePcap method. The Ptr<NetDevice> is looked up from the name string. Again, the <Node> is implicit since the named net device must belong to exactly one Node. For example,

Names::Add("server" ...);
Names::Add("server/ath0" ...);
...
helper.EnablePcap("prefix", "server/ath0");
- You can enable PCAP tracing on a collection of node/net-device pairs by providing a NetDeviceContainer. For each NetDevice in the container the type is checked. For each device of the proper type (the same type as is managed by the device helper), tracing is enabled. Again, the <Node> is implicit since the found net device must belong to exactly one Node. For example,

NetDeviceContainer d = ...;
...
helper.EnablePcap("prefix", d);
- You can enable PCAP tracing on a collection of node/net-device pairs by providing a NodeContainer. For each Node in the NodeContainer its attached NetDevices are iterated. For each NetDevice attached to each Node in the container, the type of that device is checked. For each device of the proper type (the same type as is managed by the device helper), tracing is enabled.

NodeContainer n;
...
helper.EnablePcap("prefix", n);
- You can enable PCAP tracing on the basis of Node ID and device ID as well as with explicit Ptr. Each Node in the system has an integer Node ID and each device connected to a Node has an integer device ID.

helper.EnablePcap("prefix", 21, 1);
- Finally, you can enable PCAP tracing for all devices in the system, with the same type as that managed by the device helper.

helper.EnablePcapAll("prefix");

\section*{Filenames}

Implicit in the method descriptions above is the construction of a complete filename by the implementation method. By convention, PCAP traces in the ns-3 system are of the form <prefix>-<node id>-<device id>.pcap

As previously mentioned, every Node in the system will have a system-assigned Node id; and every device will have an interface index (also called a device id) relative to its node. By default, then, a PCAP trace file created as a result of enabling tracing on the first device of Node 21 using the prefix "prefix" would be prefix-21-1.pcap.

You can always use the ns-3 object name service to make this more clear. For example, if you use the object name service to assign the name "server" to Node 21, the resulting PCAP trace file name will automatically become prefix-server-1.pcap, and if you also assign the name "eth0" to the device, your PCAP file name will automatically pick this up and be called prefix-server-eth0.pcap.
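The naming rule is simple enough to sketch as a standalone function. The name-lookup arguments below stand in for the ns-3 object name service and are purely illustrative:

```cpp
#include <cassert>
#include <string>

// Sketch of the <prefix>-<node id>-<device id>.pcap naming convention
// described above, including the name-service override: when a node or
// device has been given a name, that name replaces the numeric id.
std::string PcapFileName(const std::string& prefix,
                         unsigned nodeId,
                         unsigned deviceId,
                         const std::string& nodeName = "",
                         const std::string& deviceName = "")
{
    // Prefer the assigned name; fall back to the system-assigned id.
    std::string node = nodeName.empty() ? std::to_string(nodeId) : nodeName;
    std::string device = deviceName.empty() ? std::to_string(deviceId) : deviceName;
    return prefix + "-" + node + "-" + device + ".pcap";
}
```

The ASCII helpers described later follow the same rule with a .tr suffix instead of .pcap.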

Finally, two of the methods shown above,

void EnablePcap(std::string prefix, Ptr<NetDevice> nd, bool promiscuous = false, bool explicitFilename = false);
void EnablePcap(std::string prefix, std::string ndName, bool promiscuous = false, bool explicitFilename = false);

have a default parameter called explicitFilename. When set to true, this parameter disables the automatic filename completion mechanism and allows you to create an explicit filename. This option is only available in the methods which enable PCAP tracing on a single device.

For example, in order to arrange for a device helper to create a single promiscuous PCAP capture file of a specific name my-pcap-file.pcap on a given device, one could:

Ptr<NetDevice> nd;
...
helper.EnablePcap("my-pcap-file.pcap", nd, true, true);

The first true parameter enables promiscuous mode traces and the second tells the helper to interpret the prefix parameter as a complete filename.

\section*{ASCII}

The behavior of the ASCII trace helper mixin is substantially similar to the PCAP version. Take a look at src/network/helper/trace-helper.h if you want to follow the discussion while looking at real code.

The class AsciiTraceHelperForDevice adds the high level functionality for using ASCII tracing to a device helper class. As in the PCAP case, every device must implement a single virtual method inherited from the ASCII trace mixin.

virtual void EnableAsciiInternal(Ptr<OutputStreamWrapper> stream, std::string prefix, Ptr<NetDevice> nd, bool explicitFilename) = 0;

The signature of this method reflects the device-centric view of the situation at this level; and also the fact that the helper may be writing to a shared output stream. All of the public ASCII-trace-related methods inherited from class AsciiTraceHelperForDevice reduce to calling this single device-dependent implementation method. For example, the lowest level ASCII trace methods,

void EnableAscii(std::string prefix, Ptr<NetDevice> nd, bool explicitFilename = false);
void EnableAscii(Ptr<OutputStreamWrapper> stream, Ptr<NetDevice> nd);

will call the device implementation of EnableAsciiInternal directly, providing either a valid prefix or stream. All other public ASCII tracing methods will build on these low-level functions to provide additional user-level functionality. What this means to the user is that all device helpers in the system will have all of the ASCII trace methods available; and these methods will all work in the same way across devices if the devices implement EnableAsciiInternal correctly.

\section*{Methods}

void EnableAscii(std::string prefix, Ptr<NetDevice> nd, bool explicitFilename = false);
void EnableAscii(Ptr<OutputStreamWrapper> stream, Ptr<NetDevice> nd);
void EnableAscii(std::string prefix, std::string ndName, bool explicitFilename = false);
void EnableAscii(Ptr<OutputStreamWrapper> stream, std::string ndName);
void EnableAscii(std::string prefix, NetDeviceContainer d);
void EnableAscii(Ptr<OutputStreamWrapper> stream, NetDeviceContainer d);
void EnableAscii(std::string prefix, NodeContainer n);
void EnableAscii(Ptr<OutputStreamWrapper> stream, NodeContainer n);
void EnableAsciiAll(std::string prefix);
void EnableAsciiAll(Ptr<OutputStreamWrapper> stream);
void EnableAscii(std::string prefix, uint32_t nodeid, uint32_t deviceid, bool explicitFilename);
void EnableAscii(Ptr<OutputStreamWrapper> stream, uint32_t nodeid, uint32_t deviceid);

You are encouraged to peruse the API Documentation for class AsciiTraceHelperForDevice to find the details of these methods; but to summarize ...
- There are twice as many methods available for ASCII tracing as there were for PCAP tracing. This is because, in addition to the PCAP-style model where traces from each unique node/device pair are written to a unique file, we support a model in which trace information for many node/device pairs is written to a common file. This means that the <prefix>-<node>-<device> file name generation mechanism is replaced by a mechanism to refer to a common file; and the number of API methods is doubled to allow all combinations.
- Just as in PCAP tracing, you can enable ASCII tracing on a particular (node, net-device) pair by providing a Ptr<NetDevice> to an EnableAscii method. The Ptr<Node> is implicit since the net device must belong to exactly one Node. For example,

Ptr<NetDevice> nd;
...
helper.EnableAscii("prefix", nd);
- The first four methods also include a default parameter called explicitFilename that operates similarly to the equivalent parameter in the PCAP case.

In this case, no trace contexts are written to the ASCII trace file since they would be redundant. The system will pick the file name to be created using the same rules as described in the PCAP section, except that the file will have the suffix .tr instead of .pcap.
- If you want to enable ASCII tracing on more than one net device and have all traces sent to a single file, you can do that as well by using an object to refer to a single file. We have already seen this in the "cwnd" example above:

Ptr<NetDevice> nd1;
Ptr<NetDevice> nd2;
...
Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream("trace-file-name.tr");
...
helper.EnableAscii(stream, nd1);
helper.EnableAscii(stream, nd2);

In this case, trace contexts are written to the ASCII trace file since they are required to disambiguate traces from the two devices. Note that since the user is completely specifying the file name, the string should include the .tr suffix for consistency.
- You can enable ASCII tracing on a particular (node, net-device) pair by providing a std::string representing an object name service string to an EnableAscii method. The Ptr<NetDevice> is looked up from the name string. Again, the <Node> is implicit since the named net device must belong to exactly one Node. For example,

Names::Add("client" ...);
Names::Add("client/eth0" ...);
Names::Add("server" ...);
Names::Add("server/eth0" ...);
...
helper.EnableAscii("prefix", "client/eth0");
helper.EnableAscii("prefix", "server/eth0");
This would result in two files named prefix-client-eth0.tr and prefix-server-eth0.tr with traces for each device in the respective trace file. Since all of the EnableAscii functions are overloaded to take a stream wrapper, you can use that form as well:
Names::Add("client/eth0" ...);
Names::Add("server/eth0" ...);
...
Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream("trace-file-name.tr");
...
helper.EnableAscii(stream, "client/eth0");
helper.EnableAscii(stream, "server/eth0");

This would result in a single trace file called trace-file-name.tr that contains all of the trace events for both devices. The events would be disambiguated by trace context strings.
- You can enable ASCII tracing on a collection of (node, net-device) pairs by providing a NetDeviceContainer. For each NetDevice in the container the type is checked. For each device of the proper type (the same type as is managed by the device helper), tracing is enabled. Again, the <Node> is implicit since the found net device must belong to exactly one Node. For example,

NetDeviceContainer d = ...;
...
helper.EnableAscii("prefix", d);

This would result in a number of ASCII trace files being created, each of which follows the <prefix>-<node id>-<device id>.tr convention.

Combining all of the traces into a single file is accomplished similarly to the examples above:

NetDeviceContainer d = ...;
...
Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream("trace-file-name.tr");
...
helper.EnableAscii(stream, d);
- You can enable ASCII tracing on a collection of (node, net-device) pairs by providing a NodeContainer. For each Node in the NodeContainer its attached NetDevices are iterated. For each NetDevice attached to each Node in the container, the type of that device is checked. For each device of the proper type (the same type as is managed by the device helper), tracing is enabled.

NodeContainer n;
...
helper.EnableAscii("prefix", n);

This would result in a number of ASCII trace files being created, each of which follows the <prefix>-<node id>-<device id>.tr convention. Combining all of the traces into a single file is accomplished similarly to the examples above.
- You can enable ASCII tracing on the basis of Node ID and device ID as well as with explicit Ptr. Each Node in the system has an integer Node ID and each device connected to a Node has an integer device ID.

helper.EnableAscii("prefix", 21, 1);

Of course, the traces can be combined into a single file as shown above.
- Finally, you can enable ASCII tracing for all devices in the system, with the same type as that managed by the device helper.

helper.EnableAsciiAll("prefix");

This would result in a number of ASCII trace files being created, one for every device in the system of the type managed by the helper. All of these files will follow the <prefix>-<node id>-<device id>.tr convention. Combining all of the traces into a single file is accomplished similarly to the examples above.
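The role of the context string mentioned throughout the shared-stream cases can be sketched in plain C++: two devices write to one stream, and only the per-device context tells the resulting lines apart. The context path format shown is illustrative:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Each write to the shared stream is prefixed with a context identifying the
// originating (node, device) pair; without it, interleaved lines from
// different devices would be indistinguishable.
void WriteTraceLine(std::ostream& os,
                    const std::string& context,
                    double t,
                    const std::string& event)
{
    os << context << " " << t << " " << event << "\n";
}

std::string DemoSharedStream()
{
    std::ostringstream shared; // stands in for the single shared trace file
    WriteTraceLine(shared, "/NodeList/0/DeviceList/0", 1.0, "tx");
    WriteTraceLine(shared, "/NodeList/1/DeviceList/0", 1.1, "rx");
    return shared.str();
}
```

With one file per device, the file name itself carries this information, which is why the per-device prefix forms omit the context.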

\section*{Filenames}

Implicit in the prefix-style method descriptions above is the construction of the complete filenames by the implementation method. By convention, ASCII traces in the ns-3 system are of the form <prefix>-<node id>-<device id>.tr

As previously mentioned, every Node in the system will have a system-assigned Node id; and every device will have an interface index (also called a device id) relative to its node. By default, then, an ASCII trace file created as a result of enabling tracing on the first device of Node 21, using the prefix "prefix", would be prefix-21-1.tr.

You can always use the ns-3 object name service to make this more clear. For example, if you use the object name service to assign the name "server" to Node 21, the resulting ASCII trace file name will automatically become prefix-server-1.tr, and if you also assign the name "eth0" to the device, your ASCII trace file name will automatically pick this up and be called prefix-server-eth0.tr.

Several of the methods have a default parameter called explicitFilename. When set to true, this parameter disables the automatic filename completion mechanism and allows you to create an explicit filename. This option is only available in the methods which take a prefix and enable tracing on a single device.

\subsection*{8.4.2 Protocol Helpers}

\section*{PCAP}

The goal of these mixins is to make it easy to add a consistent PCAP trace facility to protocols. We want all of the various flavors of PCAP tracing to work the same across all protocols, so the methods of these helpers are inherited by stack helpers. Take a look at src/network/helper/trace-helper.h if you want to follow the discussion while looking at real code.

In this section we will be illustrating the methods as applied to the protocol Ipv4. To specify traces in similar protocols, just substitute the appropriate type. For example, use a Ptr<Ipv6> instead of a Ptr<Ipv4> and call EnablePcapIpv6 instead of EnablePcapIpv4.

The class PcapHelperForIpv4 provides the high level functionality for using PCAP tracing in the Ipv4 protocol. Each protocol helper enabling these methods must implement a single virtual method inherited from this class. There will be a separate implementation for Ipv6, for example, but the only difference will be in the method names and signatures. Different method names are required to disambiguate class Ipv4 from Ipv6, which are both derived from class Object, and methods that share the same signature.

virtual void EnablePcapIpv4Internal(std::string prefix,
                                    Ptr<Ipv4> ipv4,
                                    uint32_t interface,
                                    bool explicitFilename) = 0;

The signature of this method reflects the protocol and interface-centric view of the situation at this level. All of the public methods inherited from class PcapHelperForIpv4 reduce to calling this single device-dependent implementation method. For example, the lowest level PCAP method,

void EnablePcapIpv4(std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename = false);

will call the device implementation of EnablePcapIpv4Internal directly. All other public PCAP tracing methods build on this implementation to provide additional user-level functionality. What this means to the user is that all protocol helpers in the system will have all of the PCAP trace methods available; and these methods will all work in the same way across protocols if the helper implements EnablePcapIpv4Internal correctly.
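The funneling pattern described here, where every public overload reduces to a single pure virtual Internal method, can be sketched outside of ns-3 as follows. The class and method names below are simplified stand-ins for illustration only, not the real ns-3 declarations.

```cpp
#include <string>

// Sketch of the dispatch pattern: every public EnablePcap-style overload
// reduces to one pure virtual, protocol-dependent Internal method that
// the concrete helper supplies.
class PcapHelperForProto
{
  public:
    virtual ~PcapHelperForProto() = default;

    // Lowest-level public method: forwards straight to the implementation.
    void EnablePcap(const std::string& prefix, int protoId, unsigned interface)
    {
        EnablePcapInternal(prefix, protoId, interface);
    }

    // A higher-level overload: resolves a name, then funnels to the same spot.
    void EnablePcap(const std::string& prefix, const std::string& name, unsigned interface)
    {
        int protoId = LookupByName(name); // stand-in for the object name service
        EnablePcapInternal(prefix, protoId, interface);
    }

  protected:
    virtual void EnablePcapInternal(const std::string& prefix, int protoId, unsigned interface) = 0;
    virtual int LookupByName(const std::string& name) = 0;
};

// A concrete helper only has to implement the single Internal hook.
class RecordingHelper : public PcapHelperForProto
{
  public:
    std::string lastCall;

  protected:
    void EnablePcapInternal(const std::string& prefix, int protoId, unsigned interface) override
    {
        lastCall = prefix + "-n" + std::to_string(protoId) + "-i" + std::to_string(interface);
    }

    int LookupByName(const std::string&) override
    {
        return 21; // pretend the name resolved to Node 21's protocol
    }
};
```

The design payoff is exactly the one the text describes: a helper author writes one Internal method and inherits the entire public API surface for free.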

\section*{Methods}

These methods are designed to be in one-to-one correspondence with the Node- and NetDevice-centric device methods. Instead of Node and NetDevice pair constraints, we use protocol and interface constraints.

Note that just like in the device version, there are six methods:

void EnablePcapIpv4(std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename = false);
void EnablePcapIpv4(std::string prefix, std::string ipv4Name, uint32_t interface, bool explicitFilename = false);
void EnablePcapIpv4(std::string prefix, Ipv4InterfaceContainer c);
void EnablePcapIpv4(std::string prefix, NodeContainer n);
void EnablePcapIpv4(std::string prefix, uint32_t nodeid, uint32_t interface, bool explicitFilename);
void EnablePcapIpv4All(std::string prefix);

You are encouraged to peruse the API Documentation for class PcapHelperForIpv4 to find the details of these methods; but to summarize ...
- You can enable PCAP tracing on a particular protocol/interface pair by providing a Ptr<Ipv4> and interface to an EnablePcap method. For example,

Ptr<Ipv4> ipv4 = node->GetObject<Ipv4>();
...
helper.EnablePcapIpv4("prefix", ipv4, 0);
- You can enable PCAP tracing on a particular node/net-device pair by providing a std::string representing an object name service string to an EnablePcap method. The Ptr<Ipv4> is looked up from the name string. For example,

Names::Add("serverIpv4" ...);
...
helper.EnablePcapIpv4("prefix", "serverIpv4", 1);
  • You can enable PCAP tracing on a collection of protocol/interface pairs by providing an Ipv4InterfaceContainer. For each Ipv4 / interface pair in the container the protocol type is checked. For each protocol of the proper type (the same type as is managed by the device helper), tracing is enabled for the corresponding interface. For example,
NodeContainer nodes;
...
NetDeviceContainer devices = deviceHelper.Install(nodes);

Ipv4AddressHelper ipv4;
ipv4.SetBase("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer interfaces = ipv4.Assign(devices);
...
helper.EnablePcapIpv4("prefix", interfaces);
  • You can enable PCAP tracing on a collection of protocol/interface pairs by providing a NodeContainer. For each Node in the NodeContainer the appropriate protocol is found. For each protocol, its interfaces are enumerated and tracing is enabled on the resulting pairs. For example,
NodeContainer n;
...
helper.EnablePcapIpv4("prefix", n);
  • You can enable PCAP tracing on the basis of Node ID and interface as well. In this case, the node-id is translated to a Ptr<Node> and the appropriate protocol is looked up in the node. The resulting protocol and interface are used to specify the resulting trace source.
helper.EnablePcapIpv4("prefix", 21, 1);
  • Finally, you can enable PCAP tracing for all interfaces in the system, with associated protocol being the same type as that managed by the device helper.
helper.EnablePcapIpv4All("prefix");

\section*{Filenames}

Implicit in all of the method descriptions above is the construction of the complete filenames by the implementation method. By convention, PCAP traces taken for devices in the system are of the form "<prefix>-<node id>-<device id>.pcap". In the case of protocol traces, there is a one-to-one correspondence between protocols and Nodes. This is because protocol objects are aggregated to Node objects. Since there is no global protocol id in the system, we use the corresponding Node id in file naming. Therefore there is a possibility for file name collisions in automatically chosen trace file names. For this reason, the file name convention is changed for protocol traces.
As previously mentioned, every Node in the system will have a system-assigned Node id. Since there is a one-to-one correspondence between protocol instances and Node instances we use the Node id. Each interface has an interface id relative to its protocol. We use the convention "<prefix>-n<node id>-i<interface id>.pcap" for trace file naming in protocol helpers.
Therefore, by default, a PCAP trace file created as a result of enabling tracing on interface 1 of the Ipv4 protocol of Node 21 using the prefix "prefix" would be "prefix-n21-i1.pcap".
You can always use the object name service to make this more clear. For example, if you use the object name service to assign the name "serverIpv4" to the Ptr<Ipv4> on Node 21, the resulting PCAP trace file name will automatically become "prefix-nserverIpv4-i1.pcap".
Several of the methods have a default parameter called explicitFilename. When set to true, this parameter disables the automatic filename completion mechanism and allows you to create an explicit filename. This option is only available in the methods which take a prefix and enable tracing on a single device.

\section*{ASCII}

The behavior of the ASCII trace helpers is substantially similar to the PCAP case. Take a look at src/network/helper/trace-helper.h if you want to follow the discussion while looking at real code.
In this section we will be illustrating the methods as applied to the protocol Ipv4. To specify traces in similar protocols, just substitute the appropriate type. For example, use a Ptr<Ipv6> instead of a Ptr<Ipv4> and call EnableAsciiIpv6 instead of EnableAsciiIpv4.
The class AsciiTraceHelperForIpv4 adds the high level functionality for using ASCII tracing to a protocol helper. Each protocol that enables these methods must implement a single virtual method inherited from this class.
virtual void EnableAsciiIpv4Internal(Ptr<OutputStreamWrapper> stream,
    std::string prefix,
    Ptr<Ipv4> ipv4,
    uint32_t interface,
    bool explicitFilename) = 0;
The signature of this method reflects the protocol- and interface-centric view of the situation at this level; and also the fact that the helper may be writing to a shared output stream. All of the public methods inherited from class PcapAndAsciiTraceHelperForIpv4 reduce to calling this single device-dependent implementation method. For example, the lowest level ASCII trace methods,
void EnableAsciiIpv4(std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename = false);
void EnableAsciiIpv4(Ptr<OutputStreamWrapper> stream, Ptr<Ipv4> ipv4, uint32_t interface);
will call the device implementation of EnableAsciiIpv4Internal directly, providing either the prefix or the stream. All other public ASCII tracing methods will build on these low-level functions to provide additional user-level functionality. What this means to the user is that all protocol helpers in the system will have all of the ASCII trace methods available; and these methods will all work in the same way across protocols if the protocols implement EnableAsciiIpv4Internal correctly.

\section*{Methods}

void EnableAsciiIpv4(std::string prefix, Ptr<Ipv4> ipv4, uint32_t interface, bool explicitFilename = false);
void EnableAsciiIpv4(Ptr<OutputStreamWrapper> stream, Ptr<Ipv4> ipv4, uint32_t interface);
void EnableAsciiIpv4(std::string prefix, std::string ipv4Name, uint32_t interface, bool explicitFilename = false);
void EnableAsciiIpv4(Ptr<OutputStreamWrapper> stream, std::string ipv4Name, uint32_t interface);
void EnableAsciiIpv4(std::string prefix, Ipv4InterfaceContainer c);
void EnableAsciiIpv4(Ptr<OutputStreamWrapper> stream, Ipv4InterfaceContainer c);
void EnableAsciiIpv4(std::string prefix, NodeContainer n);
void EnableAsciiIpv4(Ptr<OutputStreamWrapper> stream, NodeContainer n);
void EnableAsciiIpv4All(std::string prefix);
void EnableAsciiIpv4All(Ptr<OutputStreamWrapper> stream);
void EnableAsciiIpv4(std::string prefix, uint32_t nodeid, uint32_t interface, bool explicitFilename);
void EnableAsciiIpv4(Ptr<OutputStreamWrapper> stream, uint32_t nodeid, uint32_t interface);
You are encouraged to peruse the API Documentation for class PcapAndAsciiHelperForIpv4 to find the details of these methods; but to summarize ...
  • There are twice as many methods available for ASCII tracing as there were for PCAP tracing. This is because, in addition to the PCAP-style model where traces from each unique protocol/interface pair are written to a unique file, we support a model in which trace information for many protocol/interface pairs is written to a common file. This means that the <prefix>-n<node id>-i<interface> file name generation mechanism is replaced by a mechanism to refer to a common file; and the number of API methods is doubled to allow all combinations.
  • Just as in PCAP tracing, you can enable ASCII tracing on a particular protocol/interface pair by providing a Ptr<Ipv4> and an interface to an EnableAscii method. For example,
Ptr<Ipv4> ipv4;
...
helper.EnableAsciiIpv4("prefix", ipv4, 1);
In this case, no trace contexts are written to the ASCII trace file since they would be redundant. The system will pick the file name to be created using the same rules as described in the PCAP section, except that the file will have the suffix ".tr" instead of ".pcap".
  • If you want to enable ASCII tracing on more than one interface and have all traces sent to a single file, you can do that as well by using an object to refer to a single file. We have already something similar to this in the "cwnd" example above:
Ptr<Ipv4> protocol1 = node1->GetObject<Ipv4>();
Ptr<Ipv4> protocol2 = node2->GetObject<Ipv4>();
...
Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream("trace-file-name.tr");
...
helper.EnableAsciiIpv4(stream, protocol1, 1);
helper.EnableAsciiIpv4(stream, protocol2, 1);
In this case, trace contexts are written to the ASCII trace file since they are required to disambiguate traces from the two interfaces. Note that since the user is completely specifying the file name, the string should include the ".tr" suffix for consistency.
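To see why the contexts matter, here is a minimal stand-alone sketch (not ns-3 code; AsciiSink is a hypothetical stand-in for the real trace sinks) of two trace sources sharing one output stream. Without the per-source context string, the lines from the two interfaces would be indistinguishable.

```cpp
#include <sstream>
#include <string>

// Two sinks share one stream; each prefixes its lines with a context
// string (analogous to an ns-3 config path) so the reader can tell
// which source produced which line.
struct AsciiSink
{
    std::ostringstream& stream; // stands in for ns-3's OutputStreamWrapper
    std::string context;        // identifies the trace source

    void Trace(const std::string& event)
    {
        stream << context << " " << event << "\n";
    }
};
```

With a unique file per source, the prefix would carry no information, which is exactly why the single-file case writes contexts and the per-file case does not.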
  • You can enable ASCII tracing on a particular protocol by providing a std::string representing an object name service string to an EnableAscii method. The Ptr<Ipv4> is looked up from the name string. The <Node> in the resulting filenames is implicit since there is a one-to-one correspondence between protocol instances and nodes. For example,

Names::Add("node1Ipv4" ...);
Names::Add("node2Ipv4" ...);
...
helper.EnableAsciiIpv4("prefix", "node1Ipv4", 1);
helper.EnableAsciiIpv4("prefix", "node2Ipv4", 1);

This would result in two files named "prefix-nnode1Ipv4-i1.tr" and "prefix-nnode2Ipv4-i1.tr" with traces for each interface in the respective trace file. Since all of the EnableAscii functions are overloaded to take a stream wrapper, you can use that form as well:

Names::Add("node1Ipv4" ...);
Names::Add("node2Ipv4" ...);
...
Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream("trace-file-name.tr");
...
helper.EnableAsciiIpv4(stream, "node1Ipv4", 1);
helper.EnableAsciiIpv4(stream, "node2Ipv4", 1);
This would result in a single trace file called "trace-file-name.tr" that contains all of the trace events for both interfaces. The events would be disambiguated by trace context strings.
  • You can enable ASCII tracing on a collection of protocol/interface pairs by providing an Ipv4InterfaceContainer. For each protocol of the proper type (the same type as is managed by the device helper), tracing is enabled for the corresponding interface. Again, the <Node> is implicit since there is a one-to-one correspondence between each protocol and its node. For example,
NodeContainer nodes;

NetDeviceContainer devices = deviceHelper.Install(nodes);

Ipv4AddressHelper ipv4;
ipv4.SetBase("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer interfaces = ipv4.Assign(devices);
...
helper.EnableAsciiIpv4("prefix", interfaces);
This would result in a number of ASCII trace files being created, each of which follows the <prefix>-n<node id>-i<interface>.tr convention. Combining all of the traces into a single file is accomplished similarly to the examples above:
NodeContainer nodes;
...
NetDeviceContainer devices = deviceHelper.Install(nodes);
...
Ipv4AddressHelper ipv4;
ipv4.SetBase("10.1.1.0", "255.255.255.0");
Ipv4InterfaceContainer interfaces = ipv4.Assign(devices);
...
Ptr<OutputStreamWrapper> stream = asciiTraceHelper.CreateFileStream("trace-file-name.tr");
...
helper.EnableAsciiIpv4(stream, interfaces);
  • You can enable ASCII tracing on a collection of protocol/interface pairs by providing a NodeContainer. For each Node in the NodeContainer the appropriate protocol is found. For each protocol, its interfaces are enumerated and tracing is enabled on the resulting pairs. For example,
NodeContainer n;
...
helper.EnableAsciiIpv4("prefix", n);
This would result in a number of ASCII trace files being created, each of which follows the <prefix>-n<node id>-i<interface>.tr convention. Combining all of the traces into a single file is accomplished similarly to the examples above.
  • You can enable ASCII tracing on the basis of Node ID and interface as well. In this case, the node-id is translated to a Ptr<Node> and the appropriate protocol is looked up in the node. The resulting protocol and interface are used to specify the resulting trace source.
helper.EnableAsciiIpv4("prefix", 21, 1);
Of course, the traces can be combined into a single file as shown above.
  • Finally, you can enable ASCII tracing for all interfaces in the system, with associated protocol being the same type as that managed by the device helper.
helper.EnableAsciiIpv4All("prefix");
This would result in a number of ASCII trace files being created, one for every interface in the system related to a protocol of the type managed by the helper. All of these files will follow the <prefix>-n<node id>-i<interface>.tr convention. Combining all of the traces into a single file is accomplished similarly to the examples above.

\section*{Filenames}

Implicit in the prefix-style method descriptions above is the construction of the complete filenames by the implementation method. By convention, ASCII traces in the system are of the form "<prefix>-n<node id>-i<interface>.tr".
As previously mentioned, every Node in the system will have a system-assigned Node id. Since there is a one-to-one correspondence between protocols and nodes, we use the node-id to identify the protocol identity. Every interface on a given protocol will have an interface index (also called simply an interface) relative to its protocol. By default, then, an ASCII trace file created as a result of enabling tracing on interface 1 of the protocol on Node 21, using the prefix "prefix", would be "prefix-n21-i1.tr". Use the prefix to disambiguate multiple protocols per node.
You can always use the object name service to make this more clear. For example, if you use the object name service to assign the name "serverIpv4" to the protocol on Node 21, and also specify interface one, the resulting ASCII trace file name will automatically become "prefix-nserverIpv4-i1.tr".
Several of the methods have a default parameter called explicitFilename. When set to true, this parameter disables the automatic filename completion mechanism and allows you to create an explicit filename. This option is only available in the methods which take a prefix and enable tracing on a single device.

\section*{8.5 Summary}

ns-3 includes an extremely rich environment allowing users at several levels to customize the kinds of information that can be extracted from simulations.
There are high-level helper functions that allow users to simply control the collection of pre-defined outputs to a fine granularity. There are mid-level helper functions to allow more sophisticated users to customize how information is extracted and saved; and there are low-level core functions to allow expert users to alter the system to present new and previously unexported information in a way that will be immediately accessible to users at higher levels.
This is a very comprehensive system, and we realize that it is a lot to digest, especially for new users or those not intimately familiar with C++ and its idioms. We do consider the tracing system a very important part of ns-3, and so recommend becoming as familiar as possible with it. It is probably the case that understanding the rest of the ns-3 system will be quite simple once you have mastered the tracing system.

DATA COLLECTION

Our final tutorial chapter introduces some components that were added to ns-3 in version 3.18, and that are still under development. This tutorial section is also a work-in-progress.

\section*{9.1 Motivation}

One of the main points of running simulations is to generate output data, either for research purposes or simply to learn about the system. In the previous chapter, we introduced the tracing subsystem and the example sixth.cc, from which PCAP or ASCII trace files are generated. These traces are valuable for data analysis using a variety of external tools, and for many users, such output data is a preferred means of gathering data (for analysis by external tools).
However, there are also use cases for more than trace file generation, including the following:
  • generation of data that does not map well to PCAP or ASCII traces, such as non-packet data (e.g. protocol state machine transitions),
  • large simulations for which the disk I/O requirements for generating trace files is prohibitive or cumbersome, and
  • the need for online data reduction or computation, during the course of the simulation. A good example of this is to define a termination condition for the simulation, to tell it when to stop when it has received enough data to form a narrow-enough confidence interval around the estimate of some parameter.
The data collection framework is designed to provide these additional capabilities beyond trace-based output. We recommend that the reader interested in this topic consult the ns-3 Manual for a more detailed treatment of this framework; here, we summarize with an example program some of the developing capabilities.
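As a concrete illustration of the online data reduction use case, the stand-alone sketch below (not part of the ns-3 framework) keeps only running sums instead of logging every sample, and reports when an approximate 95% confidence interval around the mean has become narrow enough to justify stopping a simulation. The 1.96 normal quantile and the stopping bound are illustrative choices.

```cpp
#include <cmath>

// Running statistics: O(1) memory regardless of how many samples arrive,
// which is the point of reducing data online instead of writing trace files.
class RunningStats
{
  public:
    void Add(double x)
    {
        ++m_n;
        m_sum += x;
        m_sumSq += x * x;
    }

    double Mean() const
    {
        return m_sum / m_n;
    }

    // Half-width of an approximate 95% confidence interval for the mean.
    double HalfWidth95() const
    {
        double mean = Mean();
        double var = (m_sumSq - m_n * mean * mean) / (m_n - 1); // sample variance
        return 1.96 * std::sqrt(var / m_n);
    }

    // A termination condition: stop once the interval is narrower than bound.
    bool NarrowEnough(double bound) const
    {
        return m_n >= 2 && HalfWidth95() < bound;
    }

  private:
    long m_n = 0;
    double m_sum = 0.0;
    double m_sumSq = 0.0;
};
```

In a real experiment such a condition would be checked inside a scheduled event and used to call the simulator's stop method early.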

\section*{9.2 Example Code}

The tutorial example examples/tutorial/seventh.cc resembles the sixth.cc example we previously reviewed, except for a few changes. First, it has been enabled for IPv6 support with a command-line option:
CommandLine cmd;
cmd.AddValue("useIpv6", "Use Ipv6", useV6);
cmd.Parse(argc, argv);
If the user specifies the useIpv6 option, the program will be run using IPv6 instead of IPv4. The help option, available on all ns-3 programs that support the CommandLine object as shown above, can be invoked as follows (please note the use of double quotes):

./ns3 run "seventh --help"
which produces:
ns3-dev-seventh-debug [Program Arguments] [General Arguments]
Program Arguments:
    --useIpv6: Use Ipv6 [false]
General Arguments:
    --PrintGlobals: Print the list of globals.
    --PrintGroups: Print the list of groups.
    --PrintGroup=[group]: Print all TypeIds of group.
    --PrintTypeIds: Print all TypeIds.
    --PrintAttributes=[typeid]: Print all attributes of typeid.
    --PrintHelp: Print this help message.
This default (use of IPv4, since useIpv6 is false) can be changed by toggling the boolean value as follows:
./ns3 run "seventh --useIpv6=1"
and have a look at the pcap generated, such as with tcpdump:
tcpdump -r seventh.pcap -nn -tt
This has been a short digression into IPv6 support and the command line, which was also introduced earlier in this tutorial. For a dedicated example of command line usage, please see src/core/examples/command-line-example.cc.
Now back to data collection. In the examples/tutorial/ directory, type the following command: diff -u sixth.cc seventh.cc, and examine some of the new lines of this diff:
std::string probeType;
std::string tracePath;
if (useV6 == false)
{
    ...
    probeType = "ns3::Ipv4PacketProbe";
    tracePath = "/NodeList/*/$ns3::Ipv4L3Protocol/Tx";
}
else
{
    ...
    probeType = "ns3::Ipv6PacketProbe";
    tracePath = "/NodeList/*/$ns3::Ipv6L3Protocol/Tx";
}
// Use GnuplotHelper to plot the packet byte count over time
GnuplotHelper plotHelper;
// Configure the plot. The first argument is the file name prefix
// for the output files generated. The second, third, and fourth
// arguments are, respectively, the plot title, x-axis, and y-axis labels
plotHelper.ConfigurePlot("seventh-packet-byte-count",
                         "Packet Byte Count vs. Time",
                         "Time (Seconds)",
                         "Packet Byte Count");
// Specify the probe type, trace source path (in configuration namespace), and
// probe output trace source ("OutputBytes") to plot. The fourth argument
// specifies the name of the data series label on the plot. The last
// argument formats the plot by specifying where the key should be placed.
plotHelper.PlotProbe(probeType,
    tracePath,
    "OutputBytes",
    "Packet Byte Count",
    GnuplotAggregator::KEY_BELOW);
// Use FileHelper to write out the packet byte count over time
FileHelper fileHelper;
// Configure the file to be written, and the formatting of output data.
fileHelper.ConfigureFile("seventh-packet-byte-count",
    FileAggregator::FORMATTED);
// Set the labels for this formatted output file.
fileHelper.Set2dFormat("Time (Seconds) = %.3e\tPacket Byte Count = %.0f");
// Specify the probe type, probe path (in configuration namespace), and
// probe output trace source ("OutputBytes") to write.
fileHelper.WriteProbe(probeType,
    tracePath,
    "OutputBytes");
Simulator::Stop(Seconds(20));
Simulator::Run();
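The Set2dFormat string in the listing above is a printf-style format applied to each (time, value) pair before it is written to the formatted text file. The stand-alone sketch below (Format2d is a hypothetical helper, not an ns-3 API) shows what one line of that output looks like.

```cpp
#include <cstdio>
#include <string>

// Apply a printf-style 2D format, as FileAggregator::FORMATTED output does
// conceptually, to a single (time, byte count) sample and return the line.
std::string
Format2d(double timeSeconds, double byteCount)
{
    char buf[128];
    std::snprintf(buf, sizeof(buf),
                  "Time (Seconds) = %.3e\tPacket Byte Count = %.0f",
                  timeSeconds, byteCount);
    return std::string(buf);
}
```

Each line of seventh-packet-byte-count-0.txt follows this shape: a time in scientific notation with three fractional digits, a tab, and the byte count with no fractional digits.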

The careful reader will have noticed, when testing the IPv6 command line attribute above, that seventh.cc had created a number of new output files:
seventh-packet-byte-count-0.txt
seventh-packet-byte-count-1.txt
seventh-packet-byte-count.dat
seventh-packet-byte-count.plt
seventh-packet-byte-count.png
seventh-packet-byte-count.sh
These were created by the additional statements introduced above; in particular, by a GnuplotHelper and a FileHelper. This data was produced by hooking the data collection components to ns-3 trace sources, and marshaling the data into a formatted gnuplot file and into a formatted text file. In the next sections, we'll review each of these.

\section*{9.3 GnuplotHelper}

The GnuplotHelper is an ns-3 helper object aimed at the production of gnuplot plots with as few statements as possible, for common cases. It hooks ns-3 trace sources with data types supported by the data collection system. Not all trace source data types are supported, but many of the common trace types are, including TracedValues with plain old data (POD) types.
Let's look at the output produced by this helper:
让我们来看看这个助手生成的输出:

seventh-packet-byte-count.dat
seventh-packet-byte-count.plt
The first is a gnuplot data file with a series of space-delimited timestamps and packet byte counts. We'll cover how this particular data output was configured below, but let's continue with the output files. The file seventh-packet-byte-count.plt is a gnuplot plot file, that can be opened from within gnuplot. Readers who understand gnuplot syntax can see that this will produce a formatted output PNG file named seventh-packet-byte-count.png. Finally, a small shell script seventh-packet-byte-count.sh runs this plot file through gnuplot to produce the desired PNG (which can be viewed in an image editor); that is, the command:
sh seventh-packet-byte-count.sh
will yield seventh-packet-byte-count.png. Why wasn't this PNG produced in the first place? The answer is that by providing the plt file, the user can hand-configure the result if desired, before producing the PNG.
The PNG image title states that this plot is a plot of "Packet Byte Count vs. Time", and that it is plotting the probed data corresponding to the trace source path:
/NodeList/*/$ns3::Ipv6L3Protocol/Tx
Note the wild-card in the trace path. In summary, what this plot is capturing is the plot of packet bytes observed at the transmit trace source of the Ipv6L3Protocol object; largely 596-byte TCP segments in one direction, and 60-byte TCP acks in the other (two node trace sources were matched by this trace source).
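The wild-card matching itself can be illustrated with a small stand-alone sketch (this is not the real ns-3 Config path-matching code, and PathMatches is a hypothetical name): a "*" segment in the pattern matches the corresponding segment of any concrete path, which is why the single path string above hooked the Tx trace source on more than one node.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split a "/"-delimited config path into its segments.
static std::vector<std::string>
Split(const std::string& path)
{
    std::vector<std::string> out;
    std::stringstream ss(path);
    std::string seg;
    while (std::getline(ss, seg, '/'))
    {
        if (!seg.empty())
        {
            out.push_back(seg);
        }
    }
    return out;
}

// Segment-wise comparison: "*" in the pattern matches any one segment.
bool
PathMatches(const std::string& pattern, const std::string& concrete)
{
    std::vector<std::string> p = Split(pattern);
    std::vector<std::string> c = Split(concrete);
    if (p.size() != c.size())
    {
        return false;
    }
    for (std::size_t i = 0; i < p.size(); ++i)
    {
        if (p[i] != "*" && p[i] != c[i])
        {
            return false;
        }
    }
    return true;
}
```

The real Config system is richer than this sketch (it also resolves "$" type casts and attribute names), but the segment-wise wild-card behavior is the part at work here.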
How was this configured? A few statements need to be provided. First, the GnuplotHelper object must be declared and configured:
// Use GnuplotHelper to plot the packet byte count over time
GnuplotHelper plotHelper;
// Configure the plot. The first argument is the file name prefix
// for the output files generated. The second, third, and fourth
// arguments are, respectively, the plot title, x-axis, and y-axis labels
plotHelper.ConfigurePlot("seventh-packet-byte-count",
    "Packet Byte Count vs. Time",
    "Time (Seconds)",
    "Packet Byte Count");
To this point, an empty plot has been configured. The filename prefix is the first argument, the plot title is the second, the x-axis label the third, and the y-axis label the fourth argument.
The next step is to configure the data, and here is where the trace source is hooked. First, note above in the program we declared a few variables for later use:

std::string probeType;
std::string tracePath;
probeType = "ns3::Ipv6PacketProbe";
tracePath = "/NodeList/*/$ns3::Ipv6L3Protocol/Tx";

We use them here:

// Specify the probe type, trace source path (in configuration namespace), and
// probe output trace source ("OutputBytes") to plot. The fourth argument
// specifies the name of the data series label on the plot. The last
// argument formats the plot by specifying where the key should be placed.
plotHelper.PlotProbe(probeType,
                     tracePath,
                     "OutputBytes",
                     "Packet Byte Count",
                     GnuplotAggregator::KEY_BELOW);

The first two arguments are the name of the probe type and the trace source path. These two are probably the hardest to determine when you try to use this framework to plot other traces. The probe trace here is the Tx trace source of class Ipv6L3Protocol. When we examine this class implementation (src/internet/model/ipv6-l3-protocol.cc) we can observe:

.AddTraceSource("Tx",
                "Send IPv6 packet to outgoing interface.",
                MakeTraceSourceAccessor(&Ipv6L3Protocol::m_txTrace))
This says that Tx is a name for variable m_txTrace, which has a declaration of:

/**
 * \brief Callback to trace TX (transmission) packets.
 */
TracedCallback<Ptr<const Packet>, Ptr<Ipv6>, uint32_t> m_txTrace;

It turns out that this specific trace source signature is supported by a Probe class (what we need here): Ipv6PacketProbe. See the files src/internet/model/ipv6-packet-probe.{h,cc}.

So, in the PlotProbe statement above, we see that the statement is hooking the trace source (identified by its path string) with a matching ns-3 Probe type of Ipv6PacketProbe. If this probe type (matching the trace source signature) were not supported, we could not have used this statement (although some more complicated lower-level statements could have been used, as described in the manual).

The Ipv6PacketProbe itself exports some trace sources that extract the data out of the probed Packet object:

TypeId
Ipv6PacketProbe::GetTypeId()
{
    static TypeId tid = TypeId("ns3::Ipv6PacketProbe")
        .SetParent<Probe>()
        .SetGroupName("Stats")
        .AddConstructor<Ipv6PacketProbe>()
        .AddTraceSource("Output",
                        "The packet plus its IPv6 object and interface "
                        "that serve as the output for this probe",
                        MakeTraceSourceAccessor(&Ipv6PacketProbe::m_output))
        .AddTraceSource("OutputBytes",
                        "The number of bytes in the packet",
                        MakeTraceSourceAccessor(&Ipv6PacketProbe::m_outputBytes));
    return tid;
}

The third argument of our PlotProbe statement specifies that we are interested in the number of bytes in this packet; specifically, the "OutputBytes" trace source of Ipv6PacketProbe. Finally, the last two arguments of the statement provide the plot legend for this data series ("Packet Byte Count") and an optional gnuplot formatting statement (GnuplotAggregator::KEY_BELOW) requesting that the plot key be placed below the plot. Other options include NO_KEY, KEY_INSIDE, and KEY_ABOVE.

\subsection*{9.4 Supported Trace Types}

The following traced values are supported with Probes as of this writing:

\begin{tabular}{|l|l|l|}
\hline TracedValue type & Probe type & File \\
\hline double & DoubleProbe & stats/model/double-probe.h \\
\hline uint8_t & Uinteger8Probe & stats/model/uinteger-8-probe.h \\
\hline uint16_t & Uinteger16Probe & stats/model/uinteger-16-probe.h \\
\hline uint32_t & Uinteger32Probe & stats/model/uinteger-32-probe.h \\
\hline bool & BooleanProbe & stats/model/boolean-probe.h \\
\hline ns3::Time & TimeProbe & stats/model/time-probe.h \\
\hline
\end{tabular}

The following TraceSource types are supported by Probes as of this writing:

\begin{tabular}{|l|l|l|l|}
\hline TracedSource type & Probe type & Probe outputs & File \\
\hline Ptr<const Packet> & PacketProbe & OutputBytes & network/utils/packet-probe.h \\
\hline Ptr<const Packet>, Ptr<Ipv4>, uint32_t & Ipv4PacketProbe & OutputBytes & internet/model/ipv4-packet-probe.h \\
\hline Ptr<const Packet>, Ptr<Ipv6>, uint32_t & Ipv6PacketProbe & OutputBytes & internet/model/ipv6-packet-probe.h \\
\hline Ptr<const Packet>, const Address\& & ApplicationPacketProbe & OutputBytes & applications/model/application-packet-probe.h \\
\hline
\end{tabular}

As can be seen, only a few trace sources are supported, and they are all oriented towards outputting the Packet size (in bytes). However, most of the fundamental data types available as TracedValues can be supported with these helpers.

\subsection*{9.5 FileHelper}

The FileHelper class is just a variation of the previous GnuplotHelper example. The example program provides formatted output of the same timestamped data, such as:

Time (Seconds) = 9.312e+00    Packet Byte Count = 596
Time (Seconds) = 9.312e+00    Packet Byte Count = 564

Two files are provided, one for node "0" and one for node "1", as can be seen in the filenames. Let's look at the code piece-by-piece:

// Use FileHelper to write out the packet byte count over time
FileHelper fileHelper;

// Configure the file to be written, and the formatting of output data.
fileHelper.ConfigureFile("seventh-packet-byte-count",
                         FileAggregator::FORMATTED);

The file helper file prefix is the first argument, and a format specifier is next. Some other options for formatting include SPACE_SEPARATED, COMMA_SEPARATED, and TAB_SEPARATED. Users are able to change the formatting (if FORMATTED is specified) with a format string such as follows:

// Set the labels for this formatted output file.
fileHelper.Set2dFormat("Time (Seconds) = %.3e\tPacket Byte Count = %.0f");

Finally, the trace source of interest must be hooked. Again, the probeType and tracePath variables in this example are used, and the probe's output trace source "OutputBytes" is hooked:

// Specify the probe type, trace source path (in configuration namespace), and
// probe output trace source ("OutputBytes") to write.
fileHelper.WriteProbe(probeType,
                      tracePath,
                      "OutputBytes");

The wildcard fields in this trace source specifier match two trace sources. Unlike the GnuplotHelper example, in which two data series were overlaid on the same plot, here, two separate files are written to disk.

\subsection*{9.6 Summary}

Data collection support is new as of ns-3.18, and basic support for providing time series output has been added. The basic pattern described above may be replicated within the scope of support of the existing probes and trace sources. More capabilities including statistics processing will be added in future releases.

\section*{CONCLUSION}

\subsection*{10.1 Futures}

This document is intended as a living document. We hope and expect it to grow over time to cover more and more of the nuts and bolts of ns-3.

Writing manual and tutorial chapters is not something we all get excited about, but it is very important to the project. If you are an expert in one of these areas, please consider contributing to ns-3 by providing one of these chapters; or any other chapter you may think is important.

\subsection*{10.2 Closing}

ns-3 is a large and complicated system. It is impossible to cover all of the things you will need to know in one small tutorial. Readers who want to learn more are encouraged to read the following additional documentation:
- The ns-3 manual
- The ns-3 model library documentation
- The ns-3 Doxygen (API documentation)
- The ns-3 wiki

The ns-3 development team.