HWAccelIntro

Many platforms offer access to dedicated hardware to perform a range of video-related tasks. Using such hardware allows some operations like decoding, encoding or filtering to be completed faster or using less of other resources (particularly CPU), but may give different or inferior results, or impose additional restrictions which are not present when using software only. On PC-like platforms, video hardware is typically integrated into a GPU (from AMD, Intel or NVIDIA), while on mobile SoC-type platforms it is generally an independent IP core (many different vendors).

Hardware decoders will generate equivalent output to software decoders, but may use less power and CPU to do so. Feature support varies – for more complex codecs with many different profiles, hardware decoders rarely implement all of them (for example, hardware decoders tend not to implement anything beyond YUV 4:2:0 at 8-bit depth for H.264). A common feature of many hardware decoders is the ability to generate output in hardware surfaces suitable for use by other components (with discrete graphics cards, this means surfaces in the memory on the card rather than in system memory) – this is often useful for playback, as no further copying is required before rendering the output, and in some cases it can also be used with encoders supporting hardware surface input to avoid any copying at all in transcode cases.

Hardware encoders typically generate output of significantly lower quality than good software encoders like x264, but are generally faster and do not use much CPU resource. (That is, they require a higher bitrate to make output with the same perceptual quality, or they make output with a lower perceptual quality at the same bitrate.)

Systems with decode and/or encode capability may also offer access to other related filtering features. Things like scaling and deinterlacing are common; other postprocessing may be available depending on the system. Where hardware surfaces are usable, these filters will generally act on them rather than on normal frames in system memory.

There are a lot of different APIs of varying standardisation status available. FFmpeg offers access to many of these, with varying support.

Platform API Availability

                   |      Linux       |     Windows      | Android |   Apple   |    Other
                   | AMD Intel NVIDIA | AMD Intel NVIDIA |         | macOS iOS | Raspberry Pi
Direct3D 11        |  N    N     N    |  Y    Y     Y    |    N    |   N    N  |      N
Direct3D 9 (DXVA2) |  N    N     N    |  Y    Y     Y    |    N    |   N    N  |      N
libmfx             |  N    Y     N    |  N    Y     N    |    N    |   N    N  |      N
MediaCodec         |  N    N     N    |  N    N     N    |    Y    |   N    N  |      N
Media Foundation   |  N    N     N    |  Y    Y     Y    |    N    |   N    N  |      N
OpenCL             |  Y    Y     Y    |  Y    Y     Y    |    P    |   Y    N  |      N
OpenMAX            |  P    N     N    |  N    N     N    |    P    |   N    N  |      Y
V4L2 M2M           |  N    N     N    |  N    N     N    |    P    |   N    N  |      N
VideoToolbox       |  N    N     N    |  N    N     N    |    N    |   Y    Y  |      N
Vulkan             |  Y    Y     Y    |  Y    Y     Y    |    N    |   N    N  |      N

Key:

  • Y Fully usable.
  • P Partial support (some devices / some features).
  • N Not possible.

FFmpeg API Implementation Status

                   |          Decoder           |       Encoder       |        Other support
                   | Internal Standalone HW out | Standalone HW input | Filtering HW context ffmpeg CLI
Direct3D 11        |    Y        -         Y    |     -         -     |     F         Y          Y
Direct3D 9 / DXVA2 |    Y        -         Y    |     -         -     |     N         Y          Y
libmfx             |    -        Y         Y    |     Y         Y     |     Y         Y          Y
MediaCodec         |    -        Y         Y    |     Y         Y     |     -         N          N
Media Foundation   |    -        N         N    |     N         N     |     N         N          N
MMAL               |    -        Y         Y    |     N         N     |     -         N          N
OpenCL             |    -        -         -    |     -         -     |     Y         Y          Y
OpenMAX            |    -        N         N    |     Y         N     |     N         N          Y
RockChip MPP       |    -        Y         Y    |     N         N     |     -         Y          Y
V4L2 M2M           |    -        Y         N    |     Y         N     |     N         N          Y
VDPAU              |    Y        -         Y    |     -         -     |     N         Y          Y
VideoToolbox       |    Y        N         Y    |     Y         Y     |     -         Y          Y
Vulkan             |    Y        -         Y    |     N         N     |     Y         Y          Y

Key:

  • - Not applicable to this API.
  • Y Working.
  • N Possible but not implemented.
  • F Not yet integrated, but work is being done in this area.

Use with the ffmpeg command-line tool

Internal hwaccel decoders are enabled via the -hwaccel option (not supported in ffplay). The software decoder starts normally, but if it detects a stream which is decodable in hardware then it will attempt to delegate all significant processing to that hardware. If the stream is not decodable in hardware (for example, it is an unsupported codec or profile) then it will still be decoded in software automatically. If the hardware requires a particular device to function (or needs to distinguish between multiple devices, say if several graphics cards are available) then one can be selected using -hwaccel_device.

External wrapper decoders are used by setting a specific decoder with the -codec:v (-c:v) option. Typically they are named codec_api (for example: h264_cuvid). These decoders require the codec to be known in advance, and do not support any fallback to software or other HW decoder if the stream is not supported.

Encoder wrappers are also selected by -codec:v. Encoders generally have lots of options – look at the documentation for the particular encoder for details.

Hardware filters can be used in a filter graph like any other filter. Note, however, that they may not support any formats in common with software filters – in such cases it may be necessary to make use of hwupload and hwdownload filter instances to move frame data between hardware surfaces and normal memory.
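
A sketch of the hwupload/hwdownload pattern, here using VAAPI (the device path and filenames are assumptions; adjust for your system):

```shell
# Upload software frames to a VAAPI device, scale on the GPU with
# scale_vaapi, then download the result for a software encoder.
ffmpeg -init_hw_device vaapi=va:/dev/dri/renderD128 -filter_hw_device va \
  -i input.mp4 \
  -vf 'format=nv12,hwupload,scale_vaapi=w=1280:h=720,hwdownload,format=nv12' \
  -c:v libx264 output.mp4
```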


VDPAU

Video Decode and Presentation API for Unix. Developed by NVIDIA for Unix/Linux systems. To enable this you typically need the libvdpau development package in your distribution, and a compatible graphics card.

Note that VDPAU cannot be used to decode frames in memory: the compressed frames are sent by libavcodec to the GPU device supported by VDPAU, and the decoded image can then be accessed using the VDPAU API. This is not done automatically by FFmpeg; it must be done at the application level (see for example the ffmpeg_vdpau.c file used by ffmpeg.c). Also note that with this API it is not possible to move the decoded frame back to RAM, for example when you need to re-encode the decoded frame (e.g. when transcoding on a server).

Several decoders are currently supported through VDPAU in libavcodec, in particular MPEG-1/2/4, H.264, VC-1 and AV1.
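
Per the implementation status table, the VDPAU hwaccel is usable from the ffmpeg CLI; a decode test along the lines of the DXVA2 one might look like this (INPUT is a placeholder, and a VDPAU-capable driver is assumed):

```shell
# Decode INPUT with the VDPAU hwaccel, discard the output, and print
# benchmark timings.
ffmpeg -hwaccel vdpau -i INPUT -f null - -benchmark
```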


VAAPI

Video Acceleration API (VAAPI) is a non-proprietary and royalty-free open source software library ("libva") and API specification, initially developed by Intel but usable with devices from other vendors.

It can be used to access the Quick Sync hardware in Intel GPUs and the UVD/VCE hardware in AMD GPUs. See VAAPI.
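
As a sketch of a full-hardware VAAPI transcode (the device path and filenames are assumptions; see the VAAPI page for details):

```shell
# Decode via VAAPI, keep frames in GPU memory
# (-hwaccel_output_format vaapi), and encode with h264_vaapi.
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
  -hwaccel_output_format vaapi -i input.mp4 -c:v h264_vaapi output.mp4
```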


DXVA2

DirectX Video Acceleration API, developed by Microsoft (supports Windows and Xbox 360).

Several decoders are currently supported, in particular MPEG-2, VC-1, WMV3, H.264, HEVC and AV1.

DXVA2 hardware acceleration only works on Windows. In order to build FFmpeg with DXVA2 support, you need to install the dxva2api.h header. For MinGW this can be done by downloading the header maintained by VLC and installing it in the include path (for example in /usr/include/).

For MinGW64, dxva2api.h is provided by default. One way to install mingw-w64 is through a pacman repository, and can be installed using one of the two following commands, depending on the architecture:

pacman -S mingw-w64-i686-gcc
pacman -S mingw-w64-x86_64-gcc

To enable DXVA2, use the --enable-dxva2 ffmpeg configure switch.

To test decoding, use the following command:

ffmpeg -hwaccel dxva2 -threads 1 -i INPUT -f null - -benchmark

VideoToolbox

VideoToolbox is the macOS framework for video decoding and encoding.

The following codecs are supported:

  • Decoding: H.263, H.264, HEVC, MPEG-1, MPEG-2, MPEG-4 Part 2, ProRes
  • Encoding: H.264, HEVC, ProRes

To use H.264/HEVC hardware encoding in macOS, just use the encoder -c:v h264_videotoolbox or -c:v hevc_videotoolbox for H.264 or HEVC respectively.

Check ffmpeg -h encoder=... to see encoder options.

VideoToolbox supports two types of rate control:

  • Bitrate-based using -b:v
  • Constant quality with -q:v. Note that the scale is 1-100, with 1 being the lowest and 100 the highest. Constant quality mode is only available for Apple Silicon and from ffmpeg 4.4 and higher.
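
A minimal sketch of both rate-control modes (the filenames and values are assumptions):

```shell
# Bitrate-based HEVC encode via VideoToolbox.
ffmpeg -i input.mp4 -c:v hevc_videotoolbox -b:v 4M output_br.mp4

# Constant-quality encode on the 1-100 scale
# (Apple Silicon, ffmpeg 4.4 and higher only).
ffmpeg -i input.mp4 -c:v hevc_videotoolbox -q:v 65 output_cq.mp4
```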

Vulkan

Vulkan video decoding is a new specification for vendor-generic hardware accelerated video decoding. Currently, the following codecs are supported:

  • Decoding: H.264, HEVC, AV1

The AV1 specification is currently an experimental specification developed in collaboration with the Mesa project. As such, it should not be expected to be implemented on any other drivers currently, but once an official specification is available, the decoder will be ported to use it.

To test decoding, use the following command:

ffmpeg -init_hw_device "vulkan=vk:0" -hwaccel vulkan -hwaccel_output_format vulkan -i INPUT -f null - -benchmark

Documentation on how to initialize the device, as well as filtering, is available on our documentation page.


NVENC and NVDEC

NVENC and NVDEC are NVIDIA's hardware-accelerated encoding and decoding APIs. They used to be called CUVID. They can be used for encoding and decoding on Windows and Linux. FFmpeg refers to the NVENC/NVDEC interconnect as CUDA.


NVENC

NVENC can be used for H.264 and HEVC encoding. FFmpeg supports NVENC through the h264_nvenc and hevc_nvenc encoders. In order to enable it in FFmpeg you need:

  • A supported GPU
  • Supported drivers for your operating system
  • The NVIDIA Codec SDK or compiling FFmpeg with --enable-cuda-llvm
  • ffmpeg configured with --enable-ffnvcodec (default if the nv-codec-headers are detected while configuring)

Note: FFmpeg uses its own slightly modified runtime-loader for NVIDIA's CUDA/NVENC/NVDEC-related libraries. If you get an error from configure complaining about missing ffnvcodec, this project is what you need. It has a working Makefile with an install target: make install PREFIX=/usr. FFmpeg will look for its pkg-config file, called ffnvcodec.pc. Make sure it is in your PKG_CONFIG_PATH.

This means that running the following before compiling ffmpeg should suffice:

git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
cd nv-codec-headers
sudo make install

After compilation, you can use NVENC.

Usage example:

ffmpeg -i input -c:v h264_nvenc -profile high444p -pixel_format yuv444p -preset default output.mp4

You can see available presets (including lossless for both hevc and h264), other options, and encoder info with ffmpeg -h encoder=h264_nvenc or ffmpeg -h encoder=hevc_nvenc.

Note: If you get the No NVENC capable devices found error make sure you're encoding to a supported pixel format. See encoder info as shown above.

NVENC can accept a D3D11 frames context directly:

ffmpeg -y -hwaccel_output_format d3d11 -hwaccel d3d11va -i input.mp4 -c:v hevc_nvenc out.mp4


NVDEC

NVDEC offers decoders for H.264, HEVC, MJPEG, MPEG-1/2/4, VP8/VP9, VC-1 and AV1. Codec support varies by hardware (see the GPU compatibility table).

Note that FFmpeg offers both NVDEC and CUVID hwaccels. They differ in how frames are decoded and forwarded in memory.

The full set of codecs is only available on Pascal hardware and newer, which adds VP9 and 10-bit support. The note about missing ffnvcodec in the NVENC section applies to NVDEC as well.

Sample decode using CUDA:

ffmpeg -hwaccel cuda -i input output

Sample decode using CUVID:

ffmpeg -c:v h264_cuvid -i input output

FFplay only supports the older option -vcodec (not -c:v), and only CUVID.

ffplay -vcodec hevc_cuvid file.mp4

Full hardware transcode with NVDEC and NVENC:

ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input -c:v h264_nvenc -preset slow output

AV1 NVDEC HW decoding requires using -c:v av1:

ffmpeg -hwaccel nvdec -c:v av1 -i input_av1.mp4 output.ts

An example using scale_cuda and encoding in hardware. scale_cuda is available if FFmpeg was compiled with ffnvcodec and --enable-cuda-llvm (on by default; requires NVIDIA LLVM support to be present at runtime):

ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i file.mkv -noautoscale -filter_complex [0:0]scale_cuda=1280:-2[out] -map [out] -c:v hevc_nvenc -cq 28 output.mp4

Another example:

ffmpeg -hwaccel_device 0 -hwaccel cuda -i input -vf scale_cuda=-1:720 -c:v h264_nvenc -preset slow output.mkv

The -hwaccel_device option can be used to specify the GPU to be used by the hwaccel in ffmpeg.

cuda-nvcc and libnpp

   --enable-libnpp          enable Nvidia Performance Primitives-based code [no]
   --enable-cuda-nvcc       enable Nvidia CUDA compiler [no]

Both of these are essentially "older" CUVID-era options that require the NVIDIA SDK to be present when FFmpeg is compiled and run. libnpp provides scale_npp (and a few other _npp filters). They may offer different options or flexibility than their _cuda equivalents, with similar performance, but come with more restrictive licensing (nonfree). cuda-nvcc has basically been replaced by ffnvcodec + cuda-llvm, and scale_npp by scale_cuda.

Example:

ffmpeg -hwaccel cuda -i input -vf scale_npp=-1:720 -c:v h264_nvenc -preset slow output.mkv

libmfx (Intel Media SDK)

libmfx is a proprietary library from Intel for using Quick Sync hardware on both Linux and Windows. On Windows it is the primary way to access decoding, video processing and encoding features beyond those available via DXVA2/D3D11VA. On Linux it provides a different and mostly wider range of features than VAAPI, particularly for encoding, and often better performance.

See QuickSync.
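
As a sketch of a Quick Sync transcode through the libmfx wrappers (filenames and bitrate are assumptions; see the QuickSync page for full details):

```shell
# Decode with the h264_qsv wrapper decoder and re-encode with the
# h264_qsv encoder, keeping the work on the Quick Sync hardware.
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mp4 -c:v h264_qsv -b:v 2M output.mp4
```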

OpenCL

OpenCL can be used for a number of filters. To build, OpenCL 1.2 or later headers are required, along with an ICD or ICD loader to link to - it is recommended (but not required) to link with the ICD loader, so that the implementation can be chosen at run-time rather than build-time. At run-time, an OpenCL 1.2 driver is required - most GPU manufacturers will provide one as part of their standard drivers. CPU implementations are also usable, but may be slower than using native filters in ffmpeg directly.

OpenCL can interoperate with other GPU APIs to avoid redundant copies between GPU and CPU memory. The supported methods are:

  • DXVA2: NV12 surfaces only, all platforms.
  • D3D11: NV12 textures on Intel only.
  • VAAPI: all surface types.
  • ARM Mali: all surface types, via DRM object sharing.
  • libmfx: NV12 surfaces only, via VAAPI or DXVA2.
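
A sketch of running an OpenCL filter on software frames via hwupload/hwdownload (the device name and filenames are assumptions):

```shell
# Create an OpenCL device, upload frames to it, blur them with
# avgblur_opencl, then download the result for the software encoder.
ffmpeg -init_hw_device opencl=ocl -filter_hw_device ocl -i input.mp4 \
  -vf 'format=yuv420p,hwupload,avgblur_opencl,hwdownload,format=yuv420p' \
  output.mp4
```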


AMD

AMD UVD is usable for decode via VDPAU and VAAPI in Mesa on Linux. VCE also has some initial support for encode via VAAPI, but should be considered experimental.

On Windows, UVD is accessible via standard DXVA2/D3D11VA APIs, while VCE is supported via AMF. The Advanced Media Framework (AMF) SDK provides developers with easy access to AMD GPUs for multimedia processing.

FFmpeg supports AMF to significantly speed up video encoding, decoding and transcoding on AMD GPUs.

Decoding

AMD supports hardware decoding via DirectX in FFmpeg. DX9 and DX11 are currently supported.

Hardware decoding via DX9

ffmpeg -hwaccel dxva2 -i input.mkv output.yuv

Note: AMD hardware does not currently support AV1 elementary stream decoding via DX9, so this command line is not applicable when the input is an AV1 bitstream.

Hardware decoding via DX11

ffmpeg -hwaccel d3d11va -i input.mkv output.yuv

In the above command line, “input.mkv” is only an example. The AMD hardware accelerated decoder supports most widely used containers and video elementary stream types. The following table lists detailed information about the widely used containers and video elementary streams which the AMD hardware accelerated decoder supports.

Table: Containers and video elementary streams supported by the AMD hardware accelerated decoder

Format                         | Filename Extension | H.264/AVC | H.265/HEVC | AV1
Matroska                       | .mkv               |     Y     |     Y      |  Y
MPEG-4 Part 14 (MP4)           | .mp4               |     Y     |     Y      |  Y
Audio Video Interleave (AVI)   | .avi               |     Y     |     N      |  Y
Material Exchange Format (MXF) | .mxf               |     Y     |    n/a     | n/a
MPEG transport stream (TS)     | .ts                |     Y     |     Y      |  N
3GPP (3GP)                     | .3gp               |     Y     |    n/a     | n/a
Flash Video (FLV)              | .flv               |     Y     |    n/a     | n/a
WebM                           | .webm              |    n/a    |    n/a     |  Y
Advanced Systems Format (ASF)  | .asf .wmv          |     Y     |     Y      |  Y
QuickTime File Format (QTFF)   | .mov               |     Y     |     Y      | n/a

Key:

  • 'Y': Hardware accelerated decoder supports this input
  • 'N': Hardware accelerated decoder doesn’t support this input
  • 'n/a': This input is not applicable in specification

Encoding

Currently the AMF encoder supports H.264/AVC, H.265/HEVC and AV1. FFmpeg uses _amf as the suffix for AMF encoder names. The command lines shown below use h264_amf; replace it with hevc_amf for the H.265/HEVC encoder or av1_amf for the AV1 encoder.

ffmpeg -s 1920x1080 -pix_fmt yuv420p -i input.yuv -c:v h264_amf output.mp4

ffmpeg -s 1920x1080 -pix_fmt yuv420p -i input.yuv -c:v hevc_amf output.mp4

ffmpeg -s 1920x1080 -pix_fmt yuv420p -i input.yuv -c:v av1_amf output.mp4

Transcode

There are two possible methods for transcoding: hardware decoding and hardware encoding, or software decoding and hardware encoding.

Hardware Decode and Hardware Encode

Use DX9 hardware decoder

ffmpeg -hwaccel dxva2 -hwaccel_output_format dxva2_vld -i input.mkv -c:v av1_amf output.mp4

Use DX11 hardware decoder

ffmpeg -hwaccel d3d11va -hwaccel_output_format d3d11 -i input.mkv -c:v hevc_amf output.mp4

The parameter hwaccel_output_format will specify the raw data (YUV) format after decoding.

To avoid raw data copy between GPU memory and system memory, use -hwaccel_output_format dxva2_vld when using DX9 and use -hwaccel_output_format d3d11 when using DX11. This will improve transcoding speed greatly. This is the best setting we recommend for transcoding.

Software Decode and Hardware Encode

Use the CPU to decode the input bitstream, and the GPU to encode the output stream.

ffmpeg -i input.mkv -c:v av1_amf output.mp4

The default software decoder corresponding to the elementary video stream will be used as the decoder.

Transcode with Scaling

Scaling is a very common operation in transcoding. It is done through video filter in FFmpeg.

Hardware decode and hardware encode with scaling

ffmpeg -hwaccel d3d11va -i input.mkv  -vf scale=1280x720 -c:v h264_amf output.mp4

If filter parameters are used in transcoding, the hwaccel_output_format parameter cannot be set. In fact, in the above example the filter processing is done on the CPU.

Software decode and hardware encode with scaling

In the following command line, both decoding and scaling are done via the CPU, and encoding is done via the GPU.

ffmpeg -i input.mkv -vf scale=1280x720 -c:v h264_amf output.mp4

External resources

Last modified on Mar 18, 2024, 3:20:04 PM
Note: See TracWiki for help on using the wiki.