DeepFaceLab history update log


Important notice for Windows 10 users!

You need to change this setting for DeepFaceLab to work correctly:

 

System – Display – Graphics settings

============ CHANGELOG ============

== 20.11.2021 ==

Fixed rct color transfer in merger

Fixed model export.

== 20.10.2021 ==

SAEHD, AMP: random scale increased to -0.15..+0.15. Improved the ability of lr_dropout to reach lower loss values.

SAEHD: changed the algorithm for bg_style_power. It can now stitch the face better, at the cost of some src-likeness.

Added option ‘Random hue/saturation/light intensity’, applied to the src face set only at the input of the neural network. It stabilizes color perturbations during face swapping, and reduces the quality of the color transfer by picking the closest match within the src faceset; thus the src faceset must be diverse enough. Typical fine value is 0.05.
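
For illustration, such a jitter can be sketched as follows (hypothetical code, not DFL’s actual implementation; OpenCV stores the hue of float images in degrees):

    import cv2
    import numpy as np

    def random_hsl_jitter(img_bgr, power=0.05, rnd=np.random):
        # img_bgr: float32 BGR image in [0,1]; power: the option value, e.g. 0.05
        h, s, v = cv2.split(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV))
        h = (h + rnd.uniform(-power, power) * 360.0) % 360.0  # hue in degrees
        s = np.clip(s + rnd.uniform(-power, power), 0.0, 1.0)
        v = np.clip(v + rnd.uniform(-power, power), 0.0, 1.0)
        return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)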

liae archi: when random_warp is off, the inter_AB network is no longer trained, which keeps the face more src-like.

== 09.10.2021 ==

SAEHD: added the -t archi option. Makes the face more src-like.

SAEHD, AMP:

removed the implicit mechanism that periodically retrained the last 16 “high-loss” samples

fixed export to .dfm format to work correctly in the DirectX12 DeepFaceLive build.

In the sample generator, random scaling was increased from -0.05..+0.05 to -0.125..+0.125, which improves the generalization of faces.
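
As an illustration, a uniform random scale in this range can be applied as a centered affine warp (a sketch, not the actual sample generator code):

    import cv2
    import numpy as np

    def apply_random_scale(img, rnd=np.random):
        h, w = img.shape[:2]
        scale = 1.0 + rnd.uniform(-0.125, 0.125)  # widened range
        M = cv2.getRotationMatrix2D((w * 0.5, h * 0.5), 0, scale)
        return cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_LANCZOS4)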

== 06.09.2021 ==

Fixed error in model saving.

AMP, SAEHD: added option ‘blur out mask’

Blurs the area just outside the applied face mask of the training samples.

As a result, the background near the face is smoothed and less noticeable on the swapped face.

Precise xseg masks are required for both the src and dst facesets.
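
Conceptually this composites a blurred copy of the sample over the original everywhere outside the feathered face mask; a minimal sketch, assuming float images and masks in [0,1] (not DFL’s actual implementation):

    import cv2

    def blur_out_mask(img, mask, kernel=17):
        # img: HxWx3 float32; mask: HxW float32, 1 inside the face
        blurred = cv2.GaussianBlur(img, (kernel, kernel), 0)
        soft = cv2.GaussianBlur(mask, (kernel, kernel), 0)[..., None]  # feathered edge
        return img * soft + blurred * (1.0 - soft)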

AMP, SAEHD: the sample processor count is no longer limited to 8; if you have an AMD processor with 16+ cores, increase your paging file size.

DirectX12 build: updated tensorflow-directml to version 1.15.5.

== 12.08.2021 ==

XSeg model: improved the pretrain option.

Generic XSeg: added more faces (the faceset is not publicly available) and retrained with pretrain option. The quality is now higher.

Updated RTM WF Dataset with the new Generic XSeg mask applied, also added 490 faces with closed eyes.

== 30.07.2021 ==

Export AMP/SAEHD: added “Export quantized” option. (It was always enabled before.)

Makes the exported model faster. If you have problems, disable this option.

AMP model:

changed the help for ct mode:

       Changes the color distribution of src samples to be closer to dst samples. If the src faceset is diverse enough, lct mode is fine in most cases.

Default inter dims are now 1024.

Returned the lr_dropout option.

Last-high-loss-samples behaviour is now the same as in SAEHD.

XSeg model: added pretrain option.

Generic XSeg: retrained with pretrain option. The quality is now higher.

Updated RTM WF Dataset with the new Generic XSeg mask applied.

== 17.07.2021 ==

SAE/AMP: the GAN model is reverted to the December version, which performed better in tests on high-res fakes.

AMP:   default morph factor is now 0.5

       Removed the eyes_mouth_prio option; it is now permanently enabled.

       Removed the masked training option; it is now permanently enabled.

Added script

6) train AMP SRC-SRC.bat

Stable approach to train AMP:

1)  Get a fairly diverse src faceset

2)  Set morph factor to 0.5

3)  train AMP SRC-SRC for 500k+ iters (more is better)

4)  delete inter_dst from model files

5)  train as usual

== 01.07.2021 ==

AMP model:   fixed preview history

added ‘Inter dimensions’ option. The model itself is unchanged. The value should be equal to or greater than the AutoEncoder dimensions.

More dims are better, but require more VRAM. You can fine-tune model size to fit your GPU.

Removed pretrain option.

Default morph factor is now 0.1

How to train AMP:

1)  Train as usual src-dst.

2)  Delete inters model files.

3)  Train src-src. This means placing the src aligned faces into data_dst.

4)  Delete inters model files.

5)  Train as usual src-dst.

Added scripts

6) export AMP as dfm.bat

6) export SAEHD as dfm.bat

Exports the model in .dfm format to work in DeepFaceLive.

== 02.06.2021 ==

AMP model: added ‘morph_factor’ option. [0.1 .. 0.5]

The smaller the value, the more src-like facial expressions will appear.

The larger the value, the less space there is to train a large dst faceset in the neural network.

Typical fine value is 0.33
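
Conceptually, a morph factor of this kind is a linear blend of the two internal face representations; a speculative sketch of the idea only, not AMP’s actual code:

    def morph_code(code_src, code_dst, morph_factor):
        # linear blend of the two latent codes; which end maps to src
        # vs dst is a detail of the real model, assumed here
        return code_src * morph_factor + code_dst * (1.0 - morph_factor)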

AMP model: added ‘pretrain’ mode as in SAEHD

Default pretrain dataset is updated with applied Generic XSeg mask

== 30.05.2021 ==

Added new experimental model ‘AMP’ (as in amplifier, because dst facial expressions are amplified onto src).

It has a controllable ‘morph factor’; you can specify the value (0.0 .. 1.0) in the console before the merging process.

If the shapes of the faces are different, you will get a different jaw line, which requires hard post-processing.

But you can pretrain a celeb on a large dst faceset with the Generic XSeg mask applied (included in the torrent), then continue training with the dst of the fake.

In this case you will get a more ‘sewn’ face.

And merged face looks fine:

A large dst WF faceset with the Generic XSeg mask applied is now included in the torrent file.

If your src faceset is diverse and large enough, then ‘lct’ color transfer mode should be used during pretraining.

XSegEditor: the delete button now moves the face to the _trash directory; the button has been moved to the right border of the window.

Faceset packer now asks whether to delete the original files

Trainer now saves every 25 min instead of 15

== 12.05.2021 ==

FacesetResizer now supports changing face type

XSegEditor: added delete button

Improved training sample augmentation for XSeg trainer.

The XSeg model has been changed to work better with a large number of various faces, so you should retrain your existing xseg model.

Added a Generic XSeg model pretrained on various faces. It is most suitable for the src faceset because it contains clean faces, but it can also be applied to dst footage without complex face obstructions.

5.XSeg Generic) data_dst whole_face mask – apply.bat

5.XSeg Generic) data_src whole_face mask – apply.bat

== 22.04.2021 ==

Added new build DeepFaceLab_DirectX12, works on all devices that support DirectX12 in Windows 10:

AMD Radeon R5/R7/R9 2xx series or newer

Intel HD Graphics 5xx or newer

NVIDIA GeForce GTX 9xx series GPU or newer

DirectX12 is 20-80% slower on NVIDIA cards compared to the ‘NVIDIA’ build.

Improved XSeg sample generator in the training process.

== 23.03.2021 ==

SAEHD: random_flip option is replaced with

random_src_flip (default OFF)

Randomly flips the SRC faceset horizontally. Covers more angles, but the face may look less natural.

random_dst_flip (default ON)

Randomly flips the DST faceset horizontally. Improves generalization of src->dst if src random flip is not enabled.

Added faceset resize tool via

4.2) data_src util faceset resize.bat

5.2) data_dst util faceset resize.bat

Resizes the faceset to match the model resolution, reducing CPU load during training.

Don’t forget to keep original faceset.

== 04.01.2021 ==

SAEHD: GAN is improved. It now produces fewer artifacts and a cleaner preview.

All GAN options:

GAN power

Forces the neural network to learn small details of the face.

Enable it only when the face is trained enough with lr_dropout(on) and random_warp(off), and don’t disable it afterwards.

The higher the value, the higher the chance of artifacts. Typical fine value is 0.1

GAN patch size (3-640)

The higher the patch size, the higher the quality and the more VRAM required.

You can get sharper edges even at the lowest setting.

Typical fine value is resolution / 8.

GAN dimensions (4-64)

The dimensions of the GAN network.

The higher the dimensions, the more VRAM is required.

You can get sharper edges even at the lowest setting.

Typical fine value is 16.

Comparison of different settings:

== 01.01.2021 ==

Build for “2080TI and earlier” now exists again.

== 22.12.2020 ==

The load time of training data has been reduced significantly.

== 20.12.2020 ==

SAEHD:

lr_dropout now can be used with AdaBelief

Eyes priority is replaced with Eyes and mouth priority

Helps to fix eye problems during training, like “alien eyes” and wrong eye direction.

Also improves the detail of the teeth.

New default values with new model:

Archi : ‘liae-ud’

AdaBelief : enabled

== 18.12.2020 ==

There is now a single build for all video cards.

Upgraded to Tensorflow 2.4.0, CUDA 11.2, CuDNN 8.0.5.

You don’t need to install anything.

== 11.12.2020 ==

Upgraded to Tensorflow 2.4.0rc4

RTX 3000 series cards are now supported.

Videocards with Compute Capability 3.0 are no longer supported.

CPUs without AVX are no longer supported.

SAEHD: added new option

Use AdaBelief optimizer?

Experimental AdaBelief optimizer. It requires more VRAM, but the accuracy of the model is higher, and lr_dropout is not needed.

== 02.08.2020 ==

SAEHD: now random_warp is disabled for pretraining mode by default

Merger: fixed the load time of xseg when it has no model files

== 18.07.2020 ==

Fixes

SAEHD: write_preview_history now works faster

The frequency at which the preview is saved now depends on the resolution.

For example 64×64 – every 10 iters. 448×448 – every 70 iters.
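
Both examples fit a simple linear rule; an inferred formula, not taken from the source:

    def preview_save_interval(resolution):
        # 64 -> every 10 iters, 448 -> every 70 iters (linear in resolution)
        return max(10, resolution * 10 // 64)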

Merger: added option “Number of workers?”

Specify the number of threads to process.

A low value may affect performance.

A high value may result in memory errors.

The value may not be greater than the number of CPU cores.

== 17.07.2020 ==

SAEHD:

Pretrain dataset is replaced with high quality FFHQ dataset.

Changed help for “Learning rate dropout” option:

When the face is trained enough, you can enable this option to get extra sharpness and reduce subpixel shake in fewer iterations.

Enable it before “disable random warp” and before GAN. n – disabled. y – enabled.

cpu – enabled on CPU. This avoids using extra VRAM, sacrificing 20% of iteration time.

Changed help for GAN option:

Train the network in Generative Adversarial manner.

Forces the neural network to learn small details of the face.

Enable it only when the face is trained enough, and don’t disable it afterwards.

Typical value is 0.1

Improved GAN. It now produces better skin detail and fewer patterned aggressive artifacts, and works faster.

== 04.07.2020 ==

Fix bugs.

Renamed some 5.XSeg) scripts.

Changed help for GAN_power.

== 27.06.2020 ==

Extractor:

       Extraction now can be continued, but you must specify the same options again.

       added ‘Max number of faces from image’ option.

If you extract a src faceset that has frames with a large number of faces,

it is advisable to set max faces to 3 to speed up extraction.

0 – unlimited

added ‘Image size’ option.

The higher the image size, the worse the face enhancer works.

Use higher than 512 value only if the source image is sharp enough and the face does not need to be enhanced.

added ‘Jpeg quality’ option in range 1-100. The higher the jpeg quality, the larger the output file size.

Sorter: improved sort by blur and by best faces.

== 22.06.2020 ==

XSegEditor:

changed hotkey for xseg overlay mask

“overlay xseg mask” now works in polygon mode

== 21.06.2020 ==

SAEHD:

Resolution for -d archi is now automatically adjusted to be divisible by 32.
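
A minimal sketch of such an adjustment, assuming rounding down to the nearest multiple (the rounding direction is an assumption):

    def snap_resolution(res, multiple=32):
        return max(multiple, (res // multiple) * multiple)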

‘uniform_yaw’ now always enabled in pretrain mode.

Subprocessor now writes an error if it does not start.

XSegEditor: fixed incorrect count of labeled images.

XNViewMP: dark theme is enabled by default

== 19.06.2020 ==

SAEHD:

Maximum resolution is increased to 640.

‘hd’ archi is removed. ‘hd’ was an experimental archi created to remove subpixel shake, but ‘lr_dropout’ and ‘disable random warping’ do that better.

‘uhd’ is renamed to ‘-u’

dfuhd and liaeuhd will be automatically renamed to df-u and liae-u in existing models.

Added new experimental archi (key -d) which doubles the resolution using the same computation cost.

This means the same config will be x2 faster; or, for example, you can set 448 resolution and it will train at the cost of 224.

It is strongly recommended not to train from scratch, but to use pretrained models.

New archi naming:

‘df’ keeps more identity-preserved face.

‘liae’ can fix overly different face shapes.

‘-u’ increases likeness of the face.

‘-d’ (experimental) doubles the resolution at the same computation cost

Opts can be mixed (-ud)

Examples: df, liae, df-d, df-ud, liae-ud, …

Not the best example of 448 df-ud trained on 11GB:

Improved GAN training (GAN_power option). It was previously also applied to the dst model, but we don’t actually need it for dst.

Instead, a second src GAN model with x2 smaller patch size was added, so the overall quality for hi-res models should be higher.

Added option ‘Uniform yaw distribution of samples (y/n)’:

       Helps to fix blurry side faces due to small amount of them in the faceset.
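
A common way to implement this is to bucket samples by yaw and draw buckets uniformly, so rare side angles are sampled as often as frontal ones; a hypothetical sketch, not DFL’s actual generator code:

    import numpy as np

    def pick_uniform_yaw(samples, yaws, n_buckets=32, rnd=np.random):
        edges = np.linspace(min(yaws), max(yaws), n_buckets + 1)
        buckets = [[] for _ in range(n_buckets)]
        for s, y in zip(samples, yaws):
            i = min(max(int(np.searchsorted(edges, y)) - 1, 0), n_buckets - 1)
            buckets[i].append(s)
        non_empty = [b for b in buckets if b]        # draw a bucket uniformly,
        bucket = non_empty[rnd.randint(len(non_empty))]
        return bucket[rnd.randint(len(bucket))]      # then a sample within it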

Quick96:

       Now based on df-ud archi and 20% faster.

XSeg trainer:

       Improved sample generator.

Now it randomly adds the background from other samples.

The result is a reduced chance of random mask noise in the area outside the face.

Now you can specify ‘batch_size’ in range 2-16.

Reduced size of samples with applied XSeg mask. Thus size of packed samples with applied xseg mask is also reduced.

== 11.06.2020 ==

Trainer: fixed “Choose image for the preview history”. Now you can switch between subpreviews using ‘space’ key.
Fixed “Write preview history”. Now it writes all subpreviews in separate folders.

Also, the latest preview is saved as _last.jpg before the first file,

so you can easily compare it with the first file in a photo viewer.

XSegEditor: added text label of total labeled images

Changed frame line design

Changed loading frame design

== 08.06.2020 ==

SAEHD: resolution >= 256 now has a second dssim loss function

SAEHD: lr_dropout now can be ‘n’, ‘y’, ‘cpu’. ‘n’ and ’y’ are the same as before.

‘cpu’ means enabled on CPU. This avoids using extra VRAM, sacrificing 20% of iteration time.

fixed errors

reduced chance of the error “The paging file is too small for this operation to complete.”

updated XNViewMP to 0.96.2

== 04.06.2020 ==

Manual extractor: now you can specify the face rectangle manually using ‘R Mouse button’.

It is useful for small, blurry, undetectable faces, and animal faces.

Warning:

Landmarks cannot be placed on the face precisely, and they are actually used for positioning the red frame.

Therefore, such frames must be used only with XSeg workflow !

Try to keep the red frame the same as the adjacent frames.

added script

10.misc) make CPU only.bat

This script will convert your DeepFaceLab folder to work on CPU without any problems. An internet connection is required.

It is useful if you train on Colab and merge interactively on your computer without a GPU.

== 31.05.2020 ==

XSegEditor: added button “view XSeg mask overlay face”

== 06.05.2020 ==

Some fixes

SAEHD: changed the UHD archis. You have to retrain uhd models from scratch.

== 20.04.2020 ==

XSegEditor: fix bug

Merger: fix bug

== 15.04.2020 ==

XSegEditor: added view lock at the center by holding shift in drawing mode.

Merger: color transfer “sot-m”: optimized speed by 5-10%

Fix minor bug in sample loader

== 14.04.2020 ==

Merger: optimizations

        color transfer ‘sot-m’ : reduced color flickering, but consumes x5 more processing time

        added mask mode ‘learned-prd + learned-dst’ – produces largest area of both dst and predicted masks
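
Both combined modes (‘learned-prd + learned-dst’ here, and the default ‘learned-prd*learned-dst’ introduced in the 05.04.2020 entry below) amount to a simple pixelwise union or intersection of the two learned masks; a minimal sketch, assuming float masks in [0,1]:

    import numpy as np

    def combine_masks(prd, dst, mode):
        if mode == 'learned-prd*learned-dst':    # intersection: smallest area
            return prd * dst
        if mode == 'learned-prd + learned-dst':  # union: largest area
            return np.clip(prd + dst, 0.0, 1.0)
        raise ValueError(mode)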

XSegEditor : polygon is now transparent while editing

New example data_dst.mp4 video

New official mini tutorial https://www.youtube.com/watch?v=1smpMsfC3ls

== 06.04.2020 ==

Fixes for 16+ cpu cores and large facesets.

added 5.XSeg) data_dst/data_src mask for XSeg trainer – remove.bat

       removes labeled xseg polygons from the extracted frames

== 05.04.2020 ==

Decreased amount of RAM used by Sample Generator.

Fixed bug with input dialog in Windows 10

Fixed running XSegEditor when directory path contains spaces

SAEHD: ‘Face style power’ and ‘Background style power’  are now available for whole_face

 New help messages for these options.

XSegEditor: added button ‘view trained XSeg mask’, so you can see which frames should be masked to improve mask quality.

Merger:

added ‘raw-predict’ mode. Outputs raw predicted square image from the neural network.

mask-mode ‘learned’ replaced with 3 new modes:

       ‘learned-prd’ – smooth learned mask of the predicted face

       ‘learned-dst’ – smooth learned mask of DST face

       ‘learned-prd*learned-dst’ – smallest area of both (default)

Added new face type : head

Now you can replace the head.

Example: https://www.youtube.com/watch?v=xr5FHd0AdlQ

Requirements:

       Post processing skill in Adobe After Effects or Davinci Resolve.

Usage:

1)  Find suitable dst footage with a monotonous background behind the head

2)  Use “extract head” script

3)  Gather rich src headset from only one scene (same color and haircut)

4)  Mask whole head for src and dst using XSeg editor

5)  Train XSeg

6)  Apply trained XSeg mask for src and dst headsets

7)  Train SAEHD using ‘head’ face_type as regular deepfake model with DF archi. You can use pretrained model for head. Minimum recommended resolution for head is 224.

8)  Extract multiple tracks, using Merger:

  1. Raw-rgb
  2. XSeg-prd mask
  3. XSeg-dst mask

9)  Using AAE or DavinciResolve, do:

  1. Hide source head using XSeg-prd mask: content-aware-fill, clone-stamp, background retraction, or other technique
  2. Overlay new head using XSeg-dst mask

Warning: a head faceset can be used for whole_face or smaller face types of training only with XSeg masking.

== 30.03.2020 ==

New script:

       5.XSeg) data_dst/src mask for XSeg trainer – fetch.bat

Copies faces containing XSeg polygons to aligned_xseg\ dir.

Useful only if you want to collect labeled faces and reuse them in other fakes.

Now you can use trained XSeg mask in the SAEHD training process.

This means the default ‘full_face’ mask obtained from landmarks will be replaced with the mask obtained from the trained XSeg model.

use

5.XSeg.optional) trained mask for data_dst/data_src – apply.bat

5.XSeg.optional) trained mask for data_dst/data_src – remove.bat

Normally you don’t need it. You can use it, if you want to use ‘face_style’ and ‘bg_style’ with obstructions.

XSeg trainer : now you can choose type of face

XSeg trainer : now you can restart training in “override settings”

Merger: XSeg-* modes now can be used with all types of faces.

Therefore the old MaskEditor, FANSEG models, and FAN-x modes have been removed,

because the new XSeg solution is better, simpler and more convenient, and costs only about an hour of manual masking for a regular deepfake.

== 27.03.2020 ==

XSegEditor: fix bugs, changed layout, added current filename label

SAEHD: fixed the use of pretrained liae model, now it produces less face morphing

== 25.03.2020 ==

SAEHD: added ‘dfuhd’ and ‘liaeuhd’ archi

The uhd version is lighter than ‘HD’, but heavier than the regular version.

liaeuhd provides more “src-like” result

comparison:

       liae:    https://i.imgur.com/JEICFwI.jpg

       liaeuhd: https://i.imgur.com/ymU7t5E.jpg

added new XSegEditor !

Here is the new whole_face + XSeg workflow:

With the XSeg model you can train your own mask segmentator for dst (and/or src) faces,

which will be used by the merger for whole_face.

Instead of using a pretrained segmentator model (which does not exist),

you control which parts of the faces should be masked.

new scripts:

       5.XSeg) data_dst edit masks.bat

       5.XSeg) data_src edit masks.bat

       5.XSeg) train.bat

Usage:

       unpack dst faceset if packed

       run 5.XSeg) data_dst edit masks.bat

       Read tooltips on the buttons (en/ru/zn languages are supported)

       mask the face using include or exclude polygon mode.

       repeat for 50/100 faces,

             !!! you don’t need to mask every frame of dst

             only frames where the face differs significantly,

             for example:

                    closed eyes

                    changed head direction

                    changed light

             the more varied faces you mask, the better quality you will get

             Start masking from the upper left area and follow the clockwise direction.

             Keep the same logic of masking for all frames, for example:

                    the same approximated jaw line of the side faces, where the jaw is not visible

                    the same hair line

             Mask the obstructions using exclude polygon mode.

       run XSeg) train.bat

             train the model

             Check the faces of ‘XSeg dst faces’ preview.

             if some faces have wrong or glitchy mask, then repeat steps:

                    run edit

                    find these glitchy faces and mask them

                    train further or restart training from scratch

Restarting XSeg model training is only possible by deleting all ‘model\XSeg_*’ files.

If you want to get the mask of the predicted face (XSeg-prd mode) in merger,

you should repeat the same steps for src faceset.

New mask modes available in merger for whole_face:

XSeg-prd       – XSeg mask of predicted face  -> faces from src faceset should be labeled

XSeg-dst       – XSeg mask of dst face               -> faces from dst faceset should be labeled

XSeg-prd*XSeg-dst – the smallest area of both

If the workspace\model folder contains a trained XSeg model, then the merger will use it;

otherwise you will get a transparent mask when using XSeg-* modes.

Some screenshots:

XSegEditor: https://i.imgur.com/7Bk4RRV.jpg

trainer   : https://i.imgur.com/NM1Kn3s.jpg

merger    : https://i.imgur.com/glUzFQ8.jpg

example of the fake using 13 segmented dst faces

          : https://i.imgur.com/wmvyizU.gifv

== 18.03.2020 ==

Merger: fixed face jitter

== 15.03.2020 ==

global fixes

SAEHD: removed option learn_mask, it is now enabled by default

removed the liaech archi

removed support of extracted(aligned) PNG faces. Use old builds to convert from PNG to JPG.

== 07.03.2020 ==

returned back

3.optional) denoise data_dst images.bat

       Apply it if dst video is very sharp.

       Denoise dst images before face extraction.

       This technique helps the neural network not to learn the noise.

       The result is less pixel shake of the predicted face.

SAEHD:

added new experimental archi

‘liaech’ – made by @chervonij. Based on liae, but produces more src-like face.

lr_dropout is now disabled in pretraining mode.

Sorter:

added sort by “face rect size in source image”

small faces from source image will be placed at the end

added sort by “best faces faster”

same as sort by “best faces”

but faces will be sorted by source-rect-area instead of blur.

== 28.02.2020 ==

Extractor:

image size for all faces is now 512

fix RuntimeWarning during the extraction process

SAEHD:

max resolution is now 512

fixed hd architectures. Some of the decoder’s weights were not being trained before.

new optimized training:

for every <batch_size*16> samples,

model collects <batch_size> samples with the highest error and learns them again

therefore hard samples will be trained more often
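
A rough sketch of this sampling scheme (illustrative, not the actual trainer code):

    hard_pool = []  # (loss, sample) pairs accumulated since the last replay

    def collect_hard_samples(samples, losses, batch_size):
        hard_pool.extend(zip(losses, samples))
        if len(hard_pool) < batch_size * 16:
            return None
        hard_pool.sort(key=lambda t: t[0], reverse=True)  # highest error first
        replay = [s for _, s in hard_pool[:batch_size]]   # re-train on these
        hard_pool.clear()
        return replay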

‘models_opt_on_gpu’ option is now available for multigpus (before only for 1 gpu)

fix ‘autobackup_hour’

== 23.02.2020 ==

SAEHD: pretrain option is now available for whole_face type

fix sort by abs difference

fix sort by yaw/pitch/best for whole_face’s

== 21.02.2020 ==

Trainer: decreased time of initialization

Merger: fixed some color flickering in overlay+rct mode

SAEHD:

added option Eyes priority (y/n)

       Helps to fix eye problems during training like “alien eyes”

       and wrong eyes direction ( especially on HD architectures )

       by forcing the neural network to train eyes with higher priority.

       before/after https://i.imgur.com/YQHOuSR.jpg

added experimental face type ‘whole_face’

       Basic usage instruction: https://i.imgur.com/w7LkId2.jpg

       ‘whole_face’ requires skill in Adobe After Effects.

       For using whole_face you have to extract whole_face’s by using

       4) data_src extract whole_face

       and

       5) data_dst extract whole_face

       Images will be extracted in 512 resolution, so they can be used for regular full_face’s and half_face’s.

       ‘whole_face’ covers the whole area of the face, including the forehead, in the training square,

       but the training mask is still ‘full_face’,

       therefore it requires manual final masking and composing in Adobe After Effects.

added option ‘masked_training’

       This option is available only for ‘whole_face’ type.

       Default is ON.

       Masked training clips the training area to the full_face mask,

       so the network trains the faces properly.

       When the face is trained enough, disable this option to train the whole area of the frame.

       Merge with ‘raw-rgb’ mode, then use Adobe After Effects to manually mask, tune color, and compose the whole face, including the forehead.

== 03.02.2020 ==

“Enable autobackup” option is replaced by

“Autobackup every N hour” 0..24 (default 0 disabled), Autobackup model files with preview every N hour

Merger:

‘show alpha mask’ now on ‘V’ button

‘super resolution mode’ is replaced by

‘super resolution power’ (0..100) which can be modified via ‘T’ ‘G’ buttons

default erode/blur values are 0.

new multiple faces detection log: https://i.imgur.com/0XObjsB.jpg

now uses all available CPU cores (before, max 6),

so the more cores, the faster the process.

== 01.02.2020 ==

Merger:

increased speed

improved quality

SAEHD: default archi is now ‘df’

== 30.01.2020 ==

removed use_float16 option

fix MultiGPU training

== 29.01.2020 ==

MultiGPU training:

fixed CUDNN_STREAM errors.

speed is significantly increased.

Trainer: added key ‘b’ : creates a backup even if the autobackup is disabled.

== 28.01.2020 ==

optimized face sample generator, CPU load is significantly reduced

fixed updating of the preview history after disabling pretrain mode

SAEHD:

added new option

GAN power 0.0 .. 10.0

       Train the network in Generative Adversarial manner.

       Forces the neural network to learn small details of the face.

       You can enable/disable this option at any time,

       but better to enable it when the network is trained enough.

       Typical value is 1.0

       GAN power does not work in pretrain mode.

Example of enabling GAN on 81k iters +5k iters

https://i.imgur.com/OdXHLhU.jpg

https://i.imgur.com/CYAJmJx.jpg

dfhd: default Decoder dimensions are now 48

the preview for 256 res is now correctly displayed

fixed model naming/renaming/removing

Improvements for those involved in post-processing in AfterEffects:

Codec is reverted back to x264 in order to work properly in AfterEffects and video players.

Merger now always outputs the mask to workspace\data_dst\merged_mask

removed raw modes except raw-rgb

raw-rgb mode now outputs selected face mask_mode (before square mask)

‘export alpha mask’ button is replaced by ‘show alpha mask’.

You can view the alpha mask without recomputing the frames.

8) ‘merged *.bat’ now also outputs a ‘result_mask’ video file.

8) ‘merged lossless’ now uses x264 lossless codec (before PNG codec)

result_mask video file is always lossless.

Thus you can use the result_mask video file as a mask layer in AfterEffects.

== 25.01.2020 ==

Upgraded to TF version 1.13.2

Removed the wait at first launch for most graphics cards.

Increased speed of training by 10-20%, but you have to retrain all models from scratch.

SAEHD:

added option ‘use float16’

       Experimental option. Reduces the model size by half.

       Increases the speed of training.

       Decreases the accuracy of the model.

       The model may collapse or not train.

       Model may not learn the mask in large resolutions.

       You can enable/disable this option at any time.

true_face_training option is replaced by

“True face power”. 0.0000 .. 1.0

Experimental option. Discriminates the result face to be more like the src face. Higher value – stronger discrimination.

Comparison – https://i.imgur.com/czScS9q.png

== 23.01.2020 ==

SAEHD: fixed clipgrad option

== 22.01.2020 == BREAKING CHANGES !!!

Getting rid of the weakest link – AMD cards support.

All neural network codebase transferred to pure low-level TensorFlow backend, therefore

removed AMD/Intel cards support, now DFL works only on NVIDIA cards or CPU.

The old DFL, marked as 1.0, is still available for download, but it will no longer be supported.

global code refactoring, fixes and optimizations

Extractor:

now you can choose on which GPUs (or CPU) to process

improved stability for < 4GB GPUs

increased speed of multi gpu initializing

now works in one pass (except manual mode)

so you won’t lose the processed data if something goes wrong before the old 3rd pass

Faceset enhancer:

now you can choose on which GPUs (or CPU) to process

Trainer:

now you can choose on which GPUs (or CPU) to train the model.

Multi-gpu training is now supported.

Select identical cards, otherwise the fast GPU will wait for the slow GPU every iteration.

now remembers the previous option input as default with the current workspace/model/ folder.

the number of sample generators now matches the available number of processors

saved models now have names instead of GPU indexes.

Therefore you can switch GPUs for every saved model.

Trainer offers to choose latest saved model by default.

You can rename or delete any model using the dialog.

models now save the optimizer weights in the model folder to continue training properly

removed all models except SAEHD, Quick96

trained model files from DFL 1.0 cannot be reused

AVATAR model is also removed.

How to create AVATAR like in this video? https://www.youtube.com/watch?v=4GdWD0yxvqw

1) capture yourself with your own speech repeating same head direction as celeb in target video

2) train regular deepfake model with celeb faces from target video as src, and your face as dst

3) merge celeb face onto your face with raw-predict mode

4) compose masked mouth with target video in AfterEffects

SAEHD:

now has 3 options: Encoder dimensions, Decoder dimensions, Decoder mask dimensions

now has 4 archis: dfhd (default), liaehd, df, liae

df and liae are from SAE model, but use features from SAEHD model (such as combined loss and disable random warp)

dfhd/liaehd – changed encoder/decoder architectures

decoder model is combined with mask decoder model

mask training is combined with face training,

result is reduced time per iteration and decreased vram usage by optimizer

“Initialize CA weights” now works faster and integrated to “Initialize models” progress bar

removed optimizer_mode option

added option ‘Place models and optimizer on GPU?’

  When you train on one GPU, by default model and optimizer weights are placed on GPU to accelerate the process.

  You can place them on the CPU to free up extra VRAM, and thus set larger model parameters.

  This option is unavailable in MultiGPU mode.

pretraining now does not use rgb channel shuffling

pretraining now can be continued

when pre-training is disabled:

1) iters and loss history are reset to 1

2) in df/dfhd archis, only the inter part of the encoder is reset (before encoder+inter)

   thus the fake will train faster with a pretrained df model

Merger ( renamed from Converter ):

now you can choose on which GPUs (or CPU) to process

new hot key combinations to navigate and override frame’s configs

super resolution upscaler “RankSRGAN” is replaced by “FaceEnhancer”

FAN-x mask mode now works on GPU while merging (before on CPU),

therefore all models (Main face model + FAN-x + FaceEnhancer)

now work on GPU while merging, and work properly even on 2GB GPU.

Quick96:

now automatically uses pretrained model

Sorter:

removed all sort by *.bat files except one sort.bat

now you have to choose sort method in the dialog

Other:

all console dialogs are now more convenient

XnViewMP is updated to 0.94.1 version

ffmpeg is updated to 4.2.1 version

ffmpeg: video codec is changed to x265

_internal/vscode.bat starts VSCode IDE where you can view and edit DeepFaceLab source code.

Removed the Russian/English manual. Read community manuals and tutorials here:

https://mrdeepfakes.com/forums/forum-guides-and-tutorials

new github page design

== 11.01.2020 ==

fix freeze on sample loading

== 08.01.2020 ==

fixes and optimizations in sample generators

fixed Quick96 and removed lr_dropout from SAEHD for OpenCL build.

CUDA build now works on lower-end GPU with 2GB VRAM:

GTX 880M GTX 870M GTX 860M GTX 780M GTX 770M

GTX 765M GTX 760M GTX 680MX GTX 680M GTX 675MX GTX 670MX

GTX 660M GT 755M GT 750M GT 650M GT 745M GT 645M GT 740M

GT 730M GT 640M GT 735M GT 730M GTX 770 GTX 760 GTX 750 Ti

GTX 750 GTX 690 GTX 680 GTX 670 GTX 660 Ti GTX 660 GTX 650 Ti GTX 650 GT 740

== 29.12.2019 ==

fix faceset enhancer for faces that contain edited mask

fix long load when using various gpus in the same DFL folder

fix extract unaligned faces

avatar: avatar_type is now only head by default

== 28.12.2019 ==

FacesetEnhancer now asks to merge aligned_enhanced/ to aligned/

fix 0 faces detected in manual extractor

Quick96, SAEHD: optimized architecture. You have to restart training.

Now there are only two builds: CUDA (based on 9.2) and OpenCL.

== 26.12.2019 ==

fixed mask editor

added FacesetEnhancer

4.2.other) data_src util faceset enhance best GPU.bat

4.2.other) data_src util faceset enhance multi GPU.bat

FacesetEnhancer greatly increases details in your source face set,

same as Gigapixel enhancer, but in fully automatic mode.

In the OpenCL build it works on CPU only.

before/after https://i.imgur.com/TAMoVs6.png

== 23.12.2019 ==

Extractor: 2nd pass now faster on frames where faces are not found

all models: removed options ‘src_scale_mod’, and ‘sort samples by yaw as target’

If you want, you can manually remove unnecessary angles from src faceset after sort by yaw.

Optimized sample generators (CPU workers). Now they consume less amount of RAM and work faster.

added

4.2.other) data_src/dst util faceset pack.bat

       Packs /aligned/ samples into one /aligned/samples.pak file.

       After that, all faces will be deleted.

4.2.other) data_src/dst util faceset unpack.bat

       unpacks faces from /aligned/samples.pak to /aligned/ dir.

       After that, samples.pak will be deleted.

Packed facesets load and work faster.

== 20.12.2019 ==

fix 3rd pass of extractor for some systems

More stable and precise version of the face transformation matrix

SAEHD: lr_dropout now as an option, and disabled by default

When the face is trained enough, you can enable this option to get extra sharpness for less amount of iterations

added

4.2.other) data_src util faceset metadata save.bat

       saves metadata of data_src\aligned\ faces into data_src\aligned\meta.dat

4.2.other) data_src util faceset metadata restore.bat

       restore metadata from ‘meta.dat’ to images

       if the image size differs from the original, it will be automatically resized

You can greatly enhance face details of src faceset by using Topaz Gigapixel software.

example before/after https://i.imgur.com/Gwee99L.jpg

Download it from torrent https://rutracker.org/forum/viewtopic.php?t=5757118

Example of workflow:

1) run ‘data_src util faceset metadata save.bat’

2) launch Topaz Gigapixel

3) open ‘data_src\aligned\’ and select all images

4) set output folder to ‘data_src\aligned_topaz’ (create folder in save dialog)

5) set settings as on screenshot https://i.imgur.com/kAVWMQG.jpg

       you can choose 2x, 4x, or 6x upscale rate

6) start processing the images and wait for the full process to finish

7) rename folders:

       data_src\aligned        ->  data_src\aligned_original

       data_src\aligned_topaz  ->  data_src\aligned

8) copy ‘data_src\aligned_original\meta.dat’ to ‘data_src\aligned\’

9) run ‘data_src util faceset metadata restore.bat’

       images will be downscaled back to original size (256×256) preserving details

       metadata will be restored

10) now your new enhanced faceset is ready to use !

== 15.12.2019 ==

SAEHD,Quick96:

improved model generalization, overall accuracy and sharpness

by using the new ‘Learning rate dropout’ technique from the paper https://arxiv.org/abs/1912.00144

An example of a loss histogram where this function is enabled after the red arrow:

https://i.imgur.com/3olskOd.jpg
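
The core idea of the paper is to randomly drop each parameter’s update at every step; a minimal sketch on top of plain SGD (the keep probability is illustrative):

    import numpy as np

    def sgd_step_lr_dropout(w, grad, lr=1e-4, keep_prob=0.7, rnd=np.random):
        keep = (rnd.random_sample(w.shape) < keep_prob).astype(w.dtype)
        return w - lr * keep * grad  # dropped coordinates skip this update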

== 12.12.2019 ==

removed FacesetRelighter due to low quality of the result

added sort by absdiff

This sort method uses the absolute per-pixel difference between all faces.

options:

Sort by similar? ( y/n ?:help skip:y ) :

if you choose ‘n’, then most dissimilar faces will be placed first.

‘sort by final’ renamed to ‘sort by best’

OpenCL: fix extractor for some amd cards

== 14.11.2019 ==

Converter: added new color transfer mode: mix-m

== 13.11.2019 ==

SAE,SAEHD,Converter:

added sot-m color transfer

Converter:

removed seamless2 mode

FacesetRelighter:

Added intensity parameter to the manual picker.

‘One random direction’ and ‘predefined 7 directions’ use random intensity from 0.3 to 0.6.

== 12.11.2019 ==

FacesetRelighter fixes and improvements:

now you have 3 ways:

1) define light directions manually (not for google colab)

   watch demo https://youtu.be/79xz7yEO5Jw

2) relight faceset with one random direction

3) relight faceset with predefined 7 directions

== 11.11.2019 ==

added FacesetRelighter:

Synthesize new faces from existing ones by relighting them using DeepPortraitRelighter network.

With the relighted faces, the neural network will better reproduce face shadows.

Therefore you can synthesize shadowed faces from a fully lit faceset.

https://i.imgur.com/wxcmQoi.jpg

as a result, better fakes on dark faces:

https://i.imgur.com/5xXIbz5.jpg

operate via

data_x add relighted faces.bat

data_x delete relighted faces.bat

in OpenCL build Relighter runs on CPU

== 09.11.2019 ==

extractor: removed “increased speed of S3FD” for compatibility reasons

converter:

fixed crashes

removed useless ‘ebs’ color transfer

changed keys for color degrade

added image degrade via denoise – same as denoise extracted data_dst.bat ,

but you can control this option directly in the interactive converter

added image degrade via bicubic downscale/upscale

SAEHD:

default ae_dims for df now 256. It is safe to train SAEHD on 256 ae_dims and higher resolution.

Example of recent fake: https://youtu.be/_lxOGLj-MC8

added Quick96 model.

This is the fastest model for low-end 2GB+ NVidia and 4GB+ AMD cards.

Model has zero options and trains a 96pix fullface.

It is good for quick deepfake demo.

Example of the preview trained in 15 minutes on RTX2080Ti:

https://i.imgur.com/oRMvZFP.jpg

== 27.10.2019 ==

Extractor: fix for AMD cards

== 26.10.2019 ==

The red face-alignment square now contains an arrow that shows the up direction of the image

fix alignment of side faces

Before https://i.imgur.com/pEoZ6Mu.mp4

after https://i.imgur.com/wO2Guo7.mp4

fix message when no training data provided

== 23.10.2019 ==

enhanced sort by final: now faces are evenly distributed not only in the direction of yaw,

but also in pitch

added ‘sort by vggface’: sorting by face similarity using VGGFace model.

Requires 4GB+ VRAM and internet connection for the first run.

== 19.10.2019 ==

fix extractor bug for 11GB+ cards

== 15.10.2019 ==

reverted the fix “fixed bug when the same face could be detected twice”

SAE/SAEHD:

removed option ‘apply random ct’

added option

   Color transfer mode apply to src faceset. ( none/rct/lct/mkl/idt, ?:help skip: none )

   Change color distribution of src samples close to dst samples. Try all modes to find the best.

Before, lct mode was used, but sometimes it does not work properly for some facesets.

== 14.10.2019 ==

fixed bug when the same face could be detected twice

The Extractor now produces a less shaky face, but the second pass is now 25% slower.

before/after: https://imgur.com/L77puLH

SAE, SAEHD: ‘random flip’ and ‘learn mask’ options now can be overridden.

It is recommended to always start training with ‘learn_mask’ for the first 20k iters.

SAEHD: added option Enable random warp of samples, default is on

Random warp is required to generalize facial expressions of both faces.

When the face is trained enough, you can disable it to get extra sharpness for less amount of iterations.

== 10.10.2019 ==

fixed wrong NVIDIA GPU detection in extraction and training processes

increased speed of S3FD 1st pass extraction for GPU with >= 11GB vram.

== 09.10.2019 ==

fixed wrong NVIDIA GPU indexes in systems with two or more GPUs

fixed wrong NVIDIA GPU detection on laptops

removed TrueFace model.

added SAEHD model ( High Definition Styled AutoEncoder )

Compare with SAE: https://i.imgur.com/3QJAHj7.jpg

This is a new heavyweight model for high-end cards to achieve maximum possible deepfake quality in 2020.

Differences from SAE:

+ new encoder produces more stable face and less scale jitter

+ new decoder produces subpixel clear result

+ pixel loss and dssim loss are merged together to achieve both training speed and pixel trueness (see the sketch after this list)

+ by default networks will be initialized with CA weights, but only after first successful iteration

  therefore you can test network size and batch size before weights initialization process

+ new neural network optimizer consumes less VRAM than before

+ added option <Enable ‘true face’ training>

  The result face will be more like src and will get extra sharpness.

  Enable it for last 30k iterations before conversion.

+ encoder and decoder dims are merged to one parameter encoder/decoder dims

+ added mid-full face, which covers 30% more area than half face.
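
For the merged pixel + dssim loss mentioned above, a hedged sketch of how such a combined loss is commonly implemented (the weights are illustrative, not SAEHD’s exact values):

    import tensorflow as tf

    def combined_loss(y_true, y_pred):
        # y_true / y_pred: NHWC float images in [0,1]
        dssim = (1.0 - tf.image.ssim(y_true, y_pred, max_val=1.0)) / 2.0
        pixel = tf.reduce_mean(tf.square(y_true - y_pred), axis=[1, 2, 3])
        return 10.0 * dssim + 10.0 * pixel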

example of the preview trained on RTX2080TI, 128 resolution, 512-21 dims, 8 batch size, 700ms per iteration:

without trueface            : https://i.imgur.com/MPPKWil.jpg

with trueface    +23k iters : https://i.imgur.com/dV5Ofo9.jpg

== 24.09.2019 ==

fixed TrueFace model; retraining is required

== 21.09.2019 ==

fix avatar model

== 19.09.2019 ==

SAE : WARNING, RETRAIN IS REQUIRED !

fixed model sizes from previous update.

avoided a bug in the ML framework (Keras) that forced the model to train on random noise.

Converter: added blur on the same keys as sharpness

Added new model ‘TrueFace’. Only for NVIDIA cards.

This is a GAN model ported from https://github.com/NVlabs/FUNIT

The model produces near-zero morphing and a highly detailed face.

Model has higher failure rate than other models.

It does not learn the mask, so fan-x mask modes should be used in the converter.

Keep src and dst faceset in same lighting conditions.

== 13.09.2019 ==

Converter: added new color transfer modes: mkl, mkl-m, idt, idt-m

SAE: removed multiscale decoder, because it’s not effective

== 07.09.2019 ==

Extractor: fixed bug with grayscale images.

Converter:

Session is now saved to the model folder.

blur and erode ranges are increased to -400+400

hist-match-bw is now replaced with seamless2 mode.

Added ‘ebs’ color transfer mode (works only on Windows).

FANSEG model (used in FAN-x mask modes) is retrained with new model configuration

and now produces better precision and less jitter

== 30.08.2019 ==

interactive converter now saves the session.

if input frames are changed (amount or filenames)

then interactive converter automatically starts a new session.

if model is more trained then all frames will be recomputed again with their saved configs.

== 28.08.2019 ==

removed landmarks of lips which are used in face aligning

result is less scale jittering

before  https://i.imgur.com/gJaW5Y4.gifv 

after   https://i.imgur.com/Vq7gvhY.gifv

converter: fixed merged\ filenames, now they are 100% same as input from data_dst\

converted to X.bat : now properly eats any filenames from merged\ dir as input

== 27.08.2019 ==

fixed converter navigation logic and output filenames in merge folder

added EbSynth program. It is located in _internal\EbSynth\ folder

Start it via 10) EbSynth.bat

It starts with sample project loaded from _internal\EbSynth\SampleProject

EbSynth is mainly used to create painted video, but with EbSynth you can fix some weird frames produced by deepfake process.

before: https://i.imgur.com/9xnLAL4.gifv 

after:  https://i.imgur.com/f0Lbiwf.gifv

official tutorial for EbSynth : https://www.youtube.com/watch?v=0RLtHuu5jV4

== 26.08.2019 ==

updated pdf manuals for AVATAR model.

Avatar converter: added super resolution option.

All converters:

fixes and optimizations

super resolution DCSCN network is now replaced by RankSRGAN

added new option sharpen_mode and sharpen_amount

== 25.08.2019 ==

Converter: FAN-dst mask mode now works for half face models.

AVATAR Model: default avatar_type option on first startup is now HEAD.

Head produces much more stable result than source.

updated usage of AVATAR model:

Usage:

1) place data_src.mp4 10-20min square resolution video of news reporter sitting at the table with static background,

   other faces should not appear in frames.

2) process “extract images from video data_src.bat” with FULL fps

3) place data_dst.mp4 square resolution video of face who will control the src face

4) process “extract images from video data_dst FULL FPS.bat”

5) process “data_src mark faces S3FD best GPU.bat”

6) process “data_dst extract unaligned faces S3FD best GPU.bat”

7) train AVATAR.bat stage 1, tune batch size to maximum for your card (32 for 6GB), train to 50k+ iters.

8) train AVATAR.bat stage 2, tune batch size to maximum for your card (4 for 6GB), train to decent sharpness.

9) convert AVATAR.bat

10) converted to mp4.bat

== 24.08.2019 ==

Added interactive converter.

With interactive converter you can change any parameter of any frame and see the result in real time.

Converter: added motion_blur_power param.

Motion blur is applied by precomputed motion vectors.

So the moving face will look more realistic.

RecycleGAN model is removed.

Added experimental AVATAR model. Minimum required VRAM is 6GB for NVIDIA and 12GB for AMD.

== 16.08.2019 ==

fixed error “Failed to get convolution algorithm” on some systems

fixed error “dll load failed” on some systems

model summary is now better formatted

Expanded eyebrows line of face masks. It does not affect mask of FAN-x converter mode.

ConverterMasked: added mask gradient of bottom area, same as side gradient

== 23.07.2019 ==

OpenCL : update versions of internal libraries

== 20.06.2019 ==

Trainer: added option for all models

Enable autobackup? (y/n ?:help skip:%s) :

Autobackup model files with preview every hour for last 15 hours. Latest backup located in model/<>_autobackups/01

SAE: added option only for CUDA builds:

Enable gradient clipping? (y/n, ?:help skip:%s) :

Gradient clipping reduces chance of model collapse, sacrificing speed of training.

== 02.06.2019 ==

fix error on typing uppercase values

== 24.05.2019 ==

OpenCL : fix FAN-x converter

== 20.05.2019 ==

OpenCL : fixed bug when analysing ops was repeated after each save of the model

== 10.05.2019 ==

fixed work of model pretraining

== 08.05.2019 ==

SAE: added new option

Apply random color transfer to src faceset? (y/n, ?:help skip:%s) :

Increases the variety of src samples by applying LCT color transfer from random dst samples.

It is like ‘face_style’ learning, but with more precise color transfer and without the risk of model collapse;

it also does not require additional GPU resources, but the training time may be longer, because the src faceset becomes more diverse.

== 05.05.2019 ==

OpenCL: SAE model now works properly

== 05.03.2019 ==

fixes

SAE: additional info in help for options:

Use pixel loss – Enabling this option too early increases the chance of model collapse.

Face style power – Enabling this option increases the chance of model collapse.

Background style power – Enabling this option increases the chance of model collapse.

== 05.01.2019 ==

SAE: added option ‘Pretrain the model?’

Pretrain the model with large amount of various faces.

This technique may help to train the fake with overly different face shapes and light conditions of src/dst data.

The face will look more morphed. To reduce the morph effect,

some model files will be initialized but not updated after pretraining: LIAE: inter_AB.h5, DF: encoder.h5.

The longer you pretrain the model, the more morphed the face will look. After that, save and run the training again.

== 04.28.2019 ==

fix 3rd pass extractor hang on AMD 8+ core processors

Converter: fixed error with degrade color after applying ‘lct’ color transfer

added option at first run for all models: Choose image for the preview history? (y/n skip:n)

Controls: [p] – next, [enter] – confirm.

fixed error with option sort by yaw. Remember, do not use sort by yaw if the dst face has hair that covers the jaw.

== 04.24.2019 ==

SAE: finally the collapses were fixed

added option ‘Use CA weights? (y/n, ?:help skip: %s ) :

Initialize network with ‘Convolution Aware’ weights from paper https://arxiv.org/abs/1702.06295.

This may help to achieve a higher accuracy model, but consumes time at first run.

== 04.23.2019 ==

SAE: training should be restarted

remove option ‘Remove gray border’ because it makes the model very resource intensive.

== 04.21.2019 ==

SAE:

fix multiscale decoder.

training with liae archi should be restarted

changed help for ‘sort by yaw’ option:

NN will not learn src face directions that don’t match dst face directions. Do not use if the dst face has hair that covers the jaw.

== 04.20.2019 ==

fixed work with NVIDIA cards in TCC mode

Converter: improved FAN-x masking mode.

Now it excludes face obstructions such as hair, fingers, glasses, microphones, etc.

example https://i.imgur.com/x4qroPp.gifv

It works only for full face models, because there were glitches in half face version.

Fanseg is trained on >3000 various faces with obstructions, manually refined with the MaskEditor.

Accuracy of fanseg in handling complex obstructions can be improved by adding more samples to the dataset, but I have no time for that 🙁

Dataset is located in the official mega.nz folder.

If your fake has some complex obstructions that are incorrectly recognized by fanseg,

you can add manually masked samples from your fake to the dataset

and retrain it by using the --model DEV_FANSEG argument in the bat file. Read more info in the dataset archive.

Minimum recommended VRAM is 6GB and batch size 24 to train fanseg.

Result model\FANSeg_256_full_face.h5 should be placed to DeepFacelab\facelib\ folder

Google Colab now works on Tesla T4 16GB.

With Google Colaboratory you can freely train your model for 12 hours per session, then reset session and continue with last save.

more info how to work with Colab: https://github.com/chervonij/DFL-Colab

== 04.07.2019 ==

Extractor: added warning if aligned folder contains files that will be deleted.

Converter subprocesses limited to maximum 6

== 04.06.2019 ==

added experimental mask editor.

It is created to improve FANSeg model, but you can try to use it in fakes.

But remember: it does not guarantee quality improvement.

usage:

run 5.4) data_dst mask editor.bat

edit the mask of dst faces with obstructions

train SAE either with ‘learn mask’ or with ‘style values’

Screenshot of mask editor: https://i.imgur.com/SaVpxVn.jpg

result of training and merging using edited mask: https://i.imgur.com/QJi9Myd.jpg

Complex masks are harder to train.

SAE:

previous SAE model will not work with this update.

Greatly decreased chance of model collapse.

Increased model accuracy.

Residual blocks now default and this option has been removed.

Improved ‘learn mask’.

Added masked preview (switch by space key)

Converter:

fixed rct/lct in seamless mode

added mask mode (6) learned*FAN-prd*FAN-dst

changed help message for pixel loss:

Pixel loss may help to enhance fine details and stabilize face color. Use it only if quality does not improve over time.

fixed ctrl-c exit in no-preview mode

== 03.31.2019 ==

Converter: fix blur region of seamless.

== 03.30.2019 ==

fixed seamless face jitter

removed options Suppress seamless jitter, seamless erode mask modifier.

seamlessed face now properly uses blur modifier

added option ‘FAN-prd&dst’ – using multiplied FAN prd and dst mask,

== 03.29.2019 ==

Converter: refactorings and optimizations

added new option

Apply super resolution? (y/n skip:n) : Enhance details by applying DCSCN network.

before/after gif – https://i.imgur.com/jJA71Vy.gif

== 03.26.2019 ==

SAE: removed lightweight encoder.

optimizer mode now can be overridden each run

Trainer: the loss line now shows the average loss values after saving

Converter: fixed bug with copying files without faces.

XNViewMP : updated version

fixed cut video.bat for paths with spaces

== 03.24.2019 ==

old SAE model will not work with this update.

Fixed a bug where SAE could collapse over time.

SAE: removed CA weights and encoder/decoder dims.

added new options:

Encoder dims per channel (21-85 ?:help skip:%d)

More encoder dims help to recognize more facial features, but require more VRAM. You can fine-tune model size to fit your GPU.

Decoder dims per channel (11-85 ?:help skip:%d)

More decoder dims help to get better details, but require more VRAM. You can fine-tune model size to fit your GPU.

Add residual blocks to decoder? (y/n, ?:help skip:n) :

These blocks help to get better details, but require more computing time.

Remove gray border? (y/n, ?:help skip:n) :

Removes gray border of predicted face, but requires more computing resources.

Extract images from video: added option

Output image format? ( jpg png ?:help skip:png ) :

PNG is lossless, but produces images with size x10 larger than JPG.

JPG extraction is faster, especially on HDD instead of SSD.

== 03.21.2019 ==

OpenCL build: fixed, now works on most video cards again.

old SAE model will not work with this update.

Fixed a bug where SAE could collapse over time

Added option

Use CA weights? (y/n, ?:help skip: n ) :

Initialize network with ‘Convolution Aware’ weights.

This may help to achieve a higher accuracy model, but consumes time at first run.

Extractor:

removed DLIB extractor

greatly increased accuracy of landmark extraction, especially with the S3FD detector, but the 2nd pass is now slower.

From this point on, it is recommended to use only the S3FD detector.

before https://i.imgur.com/SPGeJCm.gif

after https://i.imgur.com/VmmAm8p.gif

Converter: added new option to choose type of mask for full-face models.

Mask mode: (1) learned, (2) dst, (3) FAN-prd, (4) FAN-dst (?) help. Default – 1 :

Learned – Learned mask, if you choose option ‘Learn mask’ in model. The contours are fairly smooth, but can be wobbly.

Dst – raw mask from dst face, wobbly contours.

FAN-prd – mask from the pretrained FAN model, from the predicted face. Very smooth, non-shaky contours.

FAN-dst – mask from the pretrained FAN model, from the dst face. Very smooth, non-shaky contours.

Advantage of the FAN mask: you can get a non-wobbly, non-shaky mask without the model having to learn it.

Disadvantage of FAN mask: may produce artifacts on the contours if the face is obstructed.

== 03.13.2019 ==

SAE: added new option

Optimizer mode? ( 1,2,3 ?:help skip:1) :

This option is only for NVIDIA cards. It sets the optimizer mode of the neural network.

1 – default.

2 – allows you to train a x2 bigger network, uses a lot of RAM.

3 – allows you to train a x3 bigger network, uses a huge amount of RAM and is 30% slower.

The term ‘epoch’ is renamed to ‘iteration’.

added a timestamp to the training status line in the console

== 03.11.2019 ==

CUDA10.1AVX users – update your video drivers from geforce.com site

face extractor:

added new extractor S3FD – more precise, produces fewer false-positive faces, accelerated by AMD/IntelHD GPU (while MT is not)

speed of 1st pass with DLIB significantly increased

decreased amount of false-positive faces for all extractors

manual extractor: added ‘h’ button to hide the help information

fix DFL conflict with system python installation

removed unwanted tensorflow info from console log

updated manual_ru

== 03.07.2019 ==

fixes

upgrade to python 3.6.8

Reorganized structure of DFL folder. Removed unnecessary files and other trash.

Current available builds now:

DeepFaceLabCUDA9.2SSE – for NVIDIA cards up to GTX10x0 series and any 64-bit CPU

DeepFaceLabCUDA10.1AVX – for NVIDIA cards up to RTX and CPU with AVX instructions support

DeepFaceLabOpenCLSSE – for AMD/IntelHD cards and any 64-bit CPU

== 03.04.2019 ==

added

4.2.other) data_src util recover original filename.bat

5.3.other) data_dst util recover original filename.bat

== 03.03.2019 ==

Converter: fix seamless
