Suitable for AI beginners who only have a CPU and no GPU.

Question: what are AVX, AVX2, AVX_VNNI, and FMA?

AVX, AVX2, AVX_VNNI, and FMA are special instruction sets for Intel CPUs, used to accelerate numerical computation and vectorized operations. They provide a high degree of parallelism and vectorization support, and can speed up certain computational tasks on hardware that supports them.

AVX (Advanced Vector Extensions): an instruction-set extension introduced in Intel CPUs. It provides wider SIMD (Single Instruction, Multiple Data) vector registers, allowing more data elements to be processed at once. AVX can accelerate vectorized numerical operations such as addition, multiplication, and square root.

AVX2: an extension of the AVX instruction set that adds more SIMD instructions, including integer operations and more complex floating-point operations. AVX2 offers a higher level of vectorization support and can accelerate a wider range of computational tasks.

AVX_VNNI (AVX Vector Neural Network Instructions): an instruction-set extension for accelerating neural-network computation. It is optimized for operations common in neural networks, such as matrix multiplication and convolution, and can significantly speed up training and inference.

FMA (Fused Multiply-Add): an instruction that fuses a multiplication and an addition into a single operation. FMA can accelerate complex numerical computations that combine multiplication and addition, such as matrix multiplication and vector dot products.
These instruction sets are typically used for workloads with heavy numerical computation, such as high-performance computing and deep learning. When software such as TensorFlow is built to take advantage of them on supporting hardware, computation speed and performance improve significantly. So when using TensorFlow or other numerical libraries, if your CPU supports these instruction sets, it is worth enabling them for better performance and efficiency.
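On Linux you can check which of these instruction sets your CPU supports by reading the flags line of /proc/cpuinfo. A minimal sketch (the flag names such as avx_vnni are as reported by the kernel; the sample string below is a trimmed example, not real output):

```python
def supported_simd_flags(cpuinfo_text, wanted=("avx", "avx2", "avx_vnni", "fma")):
    """Return the subset of `wanted` CPU flags present in a /proc/cpuinfo dump."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return [f for f in wanted if f in flags]
    return []

# Example with a trimmed flags line:
sample = "flags\t\t: fpu sse sse2 avx avx2 fma avx_vnni"
print(supported_simd_flags(sample))  # ['avx', 'avx2', 'avx_vnni', 'fma']

# On a real Linux machine:
# with open("/proc/cpuinfo") as f:
#     print(supported_simd_flags(f.read()))
```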
Compiling TensorFlow in a Docker environment
1 Install Docker

sudo apt-get install docker.io

2 Pull the Docker image tensorflow/tensorflow:devel

docker pull tensorflow/tensorflow:devel

3 Run the TensorFlow development image
Here $PWD is the current path /home/wmx/software/tensorDocker, which gets mounted to the directory /mnt inside the container.
docker run -it -w /tensorflow_src -v $PWD:/mnt -e HOST_PERMS=$(id -u):$(id -g) tensorflow/tensorflow:devel bash

This command runs a TensorFlow development Docker container. Let's break it down:

docker run: the command that runs a Docker container.
-it: a combination of two options that runs the container with an interactive terminal, connecting the terminal to the container's stdin/stdout.
-w /tensorflow_src: sets the working directory inside the container to /tensorflow_src. In other words, when the container starts it enters /tensorflow_src, and subsequent commands execute in that directory.
-v $PWD:/mnt: mounts the host's current working directory ($PWD) to /mnt inside the container, so the host's files and directories are accessible from within the container under /mnt.
-e HOST_PERMS=$(id -u):$(id -g): sets an environment variable named HOST_PERMS to the current user's user ID (id -u) and group ID (id -g). This lets operations in the container use the same user permissions as the host, avoiding permission problems.
tensorflow/tensorflow:devel: the TensorFlow Docker image and its tag; devel denotes the development image.
bash: the command run inside the container; here the container starts directly into a bash shell.

In summary, this command starts an interactive terminal in a TensorFlow development Docker container, mounts the host's current working directory to /mnt inside the container, and records the host's user permissions. After startup the container enters /tensorflow_src and runs bash there, so TensorFlow development and testing can be done inside the container.
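The HOST_PERMS value is simply the host's numeric uid:gid pair. For reference, the Python equivalent of the shell's $(id -u):$(id -g) (useful for checking what the later chown step will receive) is:

```python
import os

# On Linux/macOS this produces the same value as the shell's "$(id -u):$(id -g)".
host_perms = f"{os.getuid()}:{os.getgid()}"
print(host_perms)  # e.g. "1000:1000" for the first regular user
```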
Update the TensorFlow source:

git pull

List all versions:

git branch -a

Check out the version you want; I use v2.12.0 here:

git checkout v2.12.0

4 Configure the bazel build parameters
1 Run ./configure
2 Python path: press Enter to use the default
3 Python library path: press Enter to use the default
4 Do you wish to build TensorFlow with ROCm support? [y/N]: n  # not an AMD GPU, so I choose n
5 Do you wish to build TensorFlow with CUDA support? [y/N]: n  # I have no GPU, only an i9-13900K, so I choose n
6 Do you wish to download a fresh release of clang? (Experimental) [y/N]: n  # clang is already installed on my Ubuntu 20.04, so I keep the default and choose n
7 Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: --copt=-march=native  # the bazel build flags; this is pure CPU acceleration, so I enter --copt=-march=native
That completes the configuration. The settings above are for pure CPU acceleration; if you have a GPU, configure according to your own card. Below is the shell transcript of the configuration process:
(base) wmx@wmx-ubuntu:/media/wmx/ws1/ai/tensorflow_src$ ./configure
You have bazel 5.3.0 installed.
Please specify the location of python. [Default is /bin/python3]:

Found possible Python library paths:
  /home/wmx/software/tensorBuild/lib/python3.11/site-packages
  /opt/ros/noetic/lib/python3/dist-packages
Please input the desired Python library path to use.  Default is [/usr/lib/python3.8.10/site-packages]

Do you wish to build TensorFlow with ROCm support? [y/N]: n
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: n
No CUDA support will be enabled for TensorFlow.

Do you wish to download a fresh release of clang? (Experimental) [y/N]: n
Clang will not be downloaded.

Please specify optimization flags to use during compilation when bazel option "--config=opt" is specified [Default is -Wno-sign-compare]: --copt=-march=native

Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: n
Not configuring the WORKSPACE for Android builds.

Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
	--config=mkl         	# Build with MKL support.
	--config=mkl_aarch64 	# Build with oneDNN and Compute Library for the Arm Architecture (ACL).
	--config=monolithic  	# Config for mostly static monolithic build.
	--config=numa        	# Build with NUMA support.
	--config=dynamic_kernels	# (Experimental) Build kernels into separate shared objects.
	--config=v1          	# Build with TensorFlow 1 API instead of TF 2 API.

Preconfigured Bazel build configs to DISABLE default on features:
	--config=nogcp       	# Disable GCP support.
	--config=nonccl      	# Disable NVIDIA NCCL support.

Configuration finished
5 Build the tool that creates the TensorFlow pip package

You need a working proxy beforehand, otherwise the build fails because some dependencies cannot be downloaded from mainland China:

bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package

6 Run the tool to create the pip package, with /mnt as the output directory:

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /mnt

7 Adjust file ownership outside the container
In my case the package is tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl:

chown $HOST_PERMS /mnt/tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl

Here the host directory /home/wmx/software/tensorDocker is mounted to /mnt inside the container.
8 Install the generated tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl in the host environment

Note: the Python version must match. Because the default Python inside my Docker image is 3.8.10, the host environment also needs Python 3.8.10 to install tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl.

Create a conda virtual environment with the matching Python version:

conda create --prefix ./tensorBuild python=3.8.10
conda activate ./tensorBuild
# inside the tensorBuild environment, install the wheel
python -m pip install /home/wmx/software/tensorDocker/tensorflow-2.12.0-cp38-cp38-linux_x86_64.whl

9 Test with main.py
import tensorflow as tf

# Load the dataset
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

# Build the model
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# Compile the model
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Train the model
model.fit(x_train, y_train, epochs=10, validation_data=(x_test, y_test))

# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)

# Save the model
model.save('model.h5')
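As a sanity check on the architecture above, the layer output sizes and parameter counts can be worked out by hand; these should match what model.summary() prints, assuming the Keras defaults of 'valid' padding and stride 1:

```python
# Parameter counts for the Sequential CIFAR-10 model above.
def conv2d_params(kh, kw, in_ch, out_ch):
    # kh*kw*in_ch weights per filter, plus one bias per output channel
    return (kh * kw * in_ch + 1) * out_ch

def dense_params(in_features, out_features):
    return (in_features + 1) * out_features

# Spatial sizes: 32 -> conv 30 -> pool 15 -> conv 13 -> pool 6 -> conv 4
p = [
    conv2d_params(3, 3, 3, 32),    # 896
    conv2d_params(3, 3, 32, 64),   # 18496
    conv2d_params(3, 3, 64, 64),   # 36928
    dense_params(4 * 4 * 64, 64),  # 65600  (Flatten gives 4*4*64 = 1024 features)
    dense_params(64, 10),          # 650
]
print(sum(p))  # 122570 trainable parameters
```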
Shell output:
2023-07-30 12:31:03.740540: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-30 12:31:03.795251: I tensorflow/core/util/port.cc:104] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2023-07-30 12:31:03.809482: E tensorflow/tsl/lib/monitoring/collection_registry.cc:81] Cannot register 2 metrics with the same name: /tensorflow/core/bfc_allocator_delay
2023-07-30 12:31:04.681085: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 AVX_VNNI FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
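As the log message says, the oneDNN custom ops can be turned off with TF_ENABLE_ONEDNN_OPTS=0. Note that the variable must be set before TensorFlow is imported, for example:

```python
import os

# Must be set before `import tensorflow` runs, otherwise it has no effect.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

# ... then:
# import tensorflow as tf
```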
Epoch 1/10
1563/1563 [==============================] - 5s 3ms/step - loss: 1.8800 - accuracy: 0.3663 - val_loss: 1.5088 - val_accuracy: 0.4526
Epoch 2/10
1563/1563 [==============================] - 4s 3ms/step - loss: 1.3361 - accuracy: 0.5216 - val_loss: 1.4776 - val_accuracy: 0.4846
Epoch 3/10
1563/1563 [==============================] - 4s 3ms/step - loss: 1.1816 - accuracy: 0.5824 - val_loss: 1.1282 - val_accuracy: 0.6061
Epoch 4/10
1563/1563 [==============================] - 4s 3ms/step - loss: 1.0730 - accuracy: 0.6242 - val_loss: 1.1308 - val_accuracy: 0.6108
Epoch 5/10
1563/1563 [==============================] - 5s 3ms/step - loss: 0.9949 - accuracy: 0.6540 - val_loss: 1.1160 - val_accuracy: 0.6223
Epoch 6/10
1563/1563 [==============================] - 4s 3ms/step - loss: 0.9268 - accuracy: 0.6784 - val_loss: 1.0251 - val_accuracy: 0.6576
Epoch 7/10
1563/1563 [==============================] - 4s 3ms/step - loss: 0.8666 - accuracy: 0.6949 - val_loss: 1.0190 - val_accuracy: 0.6523
Epoch 8/10
1563/1563 [==============================] - 4s 3ms/step - loss: 0.8127 - accuracy: 0.7170 - val_loss: 1.0383 - val_accuracy: 0.6534
Epoch 9/10
1563/1563 [==============================] - 5s 3ms/step - loss: 0.7579 - accuracy: 0.7362 - val_loss: 1.0633 - val_accuracy: 0.6542
Epoch 10/10
1563/1563 [==============================] - 5s 3ms/step - loss: 0.7169 - accuracy: 0.7483 - val_loss: 1.0449 - val_accuracy: 0.6719
313/313 [==============================] - 0s 1ms/step - loss: 1.0449 - accuracy: 0.6719
Test accuracy: 0.6718999743461609

You can see that CPU instruction acceleration is enabled; the run took about 44 seconds in total.
For comparison, here is the output without CPU instruction acceleration:
2023-07-30 19:40:50.104394: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: SSE4.1 SSE4.2 AVX AVX2 AVX_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-30 19:40:51.399995: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Epoch 1/10
1563/1563 [==============================] - 144s 92ms/step - loss: 1.6986 - accuracy: 0.4010 - val_loss: 1.3685 - val_accuracy: 0.5083
Epoch 2/10
1563/1563 [==============================] - 135s 87ms/step - loss: 1.2644 - accuracy: 0.5548 - val_loss: 1.3069 - val_accuracy: 0.5370
Epoch 3/10
1563/1563 [==============================] - 158s 101ms/step - loss: 1.1150 - accuracy: 0.6121 - val_loss: 1.1304 - val_accuracy: 0.5997
Epoch 4/10
1563/1563 [==============================] - 160s 103ms/step - loss: 1.0098 - accuracy: 0.6516 - val_loss: 1.0305 - val_accuracy: 0.6457
Epoch 5/10
1563/1563 [==============================] - 167s 107ms/step - loss: 0.9326 - accuracy: 0.6744 - val_loss: 1.0547 - val_accuracy: 0.6364
Epoch 6/10
1563/1563 [==============================] - 176s 113ms/step - loss: 0.8756 - accuracy: 0.6958 - val_loss: 1.0110 - val_accuracy: 0.6595
Epoch 7/10
1563/1563 [==============================] - 177s 113ms/step - loss: 0.8207 - accuracy: 0.7145 - val_loss: 1.0024 - val_accuracy: 0.6663
Epoch 8/10
1563/1563 [==============================] - 173s 111ms/step - loss: 0.7732 - accuracy: 0.7323 - val_loss: 1.0233 - val_accuracy: 0.6614
Epoch 9/10
1563/1563 [==============================] - 162s 103ms/step - loss: 0.7310 - accuracy: 0.7463 - val_loss: 0.9851 - val_accuracy: 0.6783
Epoch 10/10
1563/1563 [==============================] - 169s 108ms/step - loss: 0.6951 - accuracy: 0.7576 - val_loss: 1.0829 - val_accuracy: 0.6524
313/313 [==============================] - 2s 7ms/step - loss: 1.0829 - accuracy: 0.6524
Test accuracy: 0.652400016784668

Total time: (144 + 135 + 158 + 160 + 167 + 176 + 177 + 173 + 162 + 169) s = 1621 s ≈ 27 minutes
Performance

Without CPU acceleration the run takes 1621 ÷ 44 ≈ 36.8 times as long; in other words, enabling CPU instruction acceleration improves performance by about 36.8×. My configuration: i9-13900K CPU, ASUS Z790-A WiFi (Snow Edition) motherboard with auto overclocking enabled, Samsung 990 Pro 1 TB SSD, Corsair DDR5 32 GB × 2 = 64 GB 6000 MHz RAM.
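The timing arithmetic above can be reproduced directly from the two logs (the per-epoch times with acceleration are roughly 4-5 s each, about 44 s in total):

```python
# Per-epoch wall-clock times (seconds) from the non-accelerated log above
slow_epochs = [144, 135, 158, 160, 167, 176, 177, 173, 162, 169]
slow_total = sum(slow_epochs)  # 1621 s
fast_total = 44                # total seconds with CPU instruction acceleration

print(slow_total, "s =", round(slow_total / 60, 1), "min")  # 1621 s = 27.0 min
print("speedup:", round(slow_total / fast_total, 1), "x")   # speedup: 36.8 x
```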