A [User Guide](https://github.com/dan64/vs-deoldify/blob/main/documentation/HAVC%20User%20Guide.pdf) is available for this filter; it provides useful tips and detailed explanations of the filter's functions and usage. Reading it before using the filter is strongly recommended.
The filter (_HAVC_ for short) can be considered the _Swiss Army knife_ of video colorization: it offers a wide range of options, coloring models and filters. It can combine the results of _DeOldify_ and _DDColor_ (_Colorization_), two of the best models available for coloring pictures, often producing a final colorized image that is better than the image obtained from either model alone. But the main strength of this filter lies in its specialized filters, which improve the quality of videos colored with these models, and in the possibility to further improve stability by using these models as input to the [Deep Exemplar based Video Colorization](https://github.com/zhangmozhe/Deep-Exemplar-based-Video-Colorization) model (_DeepEx_ for short), [DeepRemaster](https://github.com/satoshiiizuka/siggraphasia2019_remastering) and [ColorMNet](https://github.com/yyang181/colormnet).
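Conceptually, combining the output of two colorization models can be pictured as a per-pixel weighted merge of their colorized frames. The sketch below is purely illustrative (the function name is hypothetical and HAVC's actual combination logic is more sophisticated); it only shows the basic idea of blending two candidate colorizations:

```python
def blend_frames(frame_a, frame_b, weight=0.5):
    """Blend two colorized frames pixel by pixel.

    frame_a, frame_b: lists of (r, g, b) tuples, values in 0-255.
    weight: contribution of frame_a (0.0-1.0); frame_b gets 1 - weight.
    Illustrative sketch only, not HAVC's actual merge strategy.
    """
    blended = []
    for (ra, ga, ba), (rb, gb, bb) in zip(frame_a, frame_b):
        blended.append((
            round(ra * weight + rb * (1 - weight)),
            round(ga * weight + gb * (1 - weight)),
            round(ba * weight + bb * (1 - weight)),
        ))
    return blended
```

In practice a merge like this lets the strengths of one model compensate for the weaknesses of the other on a given scene.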
_DeepEx_, _DeepRemaster_ and _ColorMNet_ are exemplar-based video colorization models, which can colorize a movie starting from an external colored reference image. They colorize a video in sequence based on the colorization history, enforcing coherency through a temporal consistency loss. _ColorMNet_ is more recent and more advanced than _DeepEx_, and it is suggested as the default exemplar-based model.
_DeepRemaster_ has the interesting ability to store the reference images, so it can handle situations where the reference images are not synchronized with the movie to colorize. Conversely, _ColorMNet_ does not store the full reference frame (as _DeepRemaster_ does) but only its key points (i.e., representative pixels in each frame). This implies that the colored frames may contain colors that differ significantly from the reference image; _DeepRemaster_ does not have this problem, since it stores the full reference image. Unfortunately, the number of reference images that _DeepRemaster_ can use depends on GPU memory and power, because the inference time increases with the number of reference images provided. _ColorMNet_, on the other hand, has some interpolation capability, while _DeepRemaster_ is very basic: it cannot properly colorize a frame when a closely matching reference image is missing, and it needs many reference images to colorize a movie properly (the time resolution of _DeepRemaster_ is 15 frames). So the choice of exemplar-based video colorization model depends on the source to colorize and on the number of reference images available.
## Quick Start
This filter is distributed with the torch package provided with the **Hybrid Windows Addons**. To use it on a Windows desktop it is necessary to install [Hybrid](https://www.selur.de/downloads) and the related [Addons](https://drive.google.com/drive/folders/1vC_pxwxL0o8fjmg8Okn0RA5rsodTcv9G?usp=drive_link). **Hybrid** is a Qt-based frontend for many video filters (including this one) that can convert most input formats to common audio and video formats and containers. It represents one of the most comprehensive solutions for applying A.I. video filters and offers the most user-friendly way to colorize images with the HAVC filter via [VapourSynth](https://www.vapoursynth.com/). A [User Guide](https://github.com/dan64/vs-deoldify/blob/main/documentation/HAVC%20User%20Guide.pdf) is available in the _documentation_ folder; it provides detailed information on how to install Hybrid and use it to colorize videos, along with tips on how to improve the final quality of colored movies.
## Dependencies
**DDColor Tweaks**: This filter is available only for DDColor. It was added because DDColor's _inference_ has been observed to be quite poor on dark/bright scenes, depending on the luma value. The filter forces the luma of the input image to stay above the threshold defined by the parameter _luma_min_. It also allows applying a dynamic gamma correction: the gamma adjustment is applied when the average luma is below the parameter _gamma_luma_min_. A _gamma_ value > 2.0 improves DDColor's stability on bright scenes, while a _gamma_ < 1 improves its stability on dark scenes.
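The luma clamp and dynamic gamma adjustment described above can be sketched in a few lines. The parameter names _luma_min_ and _gamma_luma_min_ follow the text, but the function itself is an illustrative assumption, not the filter's actual implementation:

```python
def ddcolor_tweaks(luma_plane, luma_min=0.1, gamma=2.2, gamma_luma_min=0.3):
    """Illustrative sketch of the DDColor Tweaks luma handling.

    luma_plane: list of normalized luma values in [0.0, 1.0].
    Values below luma_min are raised to luma_min; if the average luma of
    the clamped frame falls below gamma_luma_min, a gamma correction is
    applied (v ** (1/gamma) brightens mid-tones when gamma > 1).
    """
    clamped = [max(v, luma_min) for v in luma_plane]
    avg_luma = sum(clamped) / len(clamped)
    if avg_luma < gamma_luma_min:
        clamped = [v ** (1.0 / gamma) for v in clamped]
    return clamped
```

On a mostly dark frame the average luma falls below _gamma_luma_min_ and the correction fires; on a normally exposed frame the luma plane passes through with only the _luma_min_ floor applied.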
**B&W tune**: Starting with HAVC version 5.5.0, a new post-processing filter called _B&W Tune_ was introduced, which can automatically correct most color-allocation errors. Unfortunately, forcing color stability has the side effect of producing washed-out colors with a slight pink cast (similar to skin tone). This post-processing filter can automatically correct this problem and restore the image to more natural colors. With _B&W Tune_, HAVC has evolved beyond basic colorization: it now delivers vivid, natural colors while ensuring consistent color stability throughout a film.
### Chroma Adjustment
Unfortunately, when applied to movies, the color models tend to assign unstable colors to the frames, especially in the red/violet chroma range. This problem is more visible with DDColor than with DeOldify.
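One common way to tame unstable colors in a problematic hue band is to desaturate only the pixels whose hue falls in that band. The sketch below illustrates this idea for the red/violet range; the function name, the 300-360° hue window and the fixed desaturation strength are assumptions for illustration, not HAVC's actual chroma-adjustment code:

```python
import colorsys

def desaturate_hue_range(pixels, hue_lo=300, hue_hi=360, strength=0.5):
    """Reduce saturation of pixels whose hue lies in [hue_lo, hue_hi] degrees.

    pixels: list of (r, g, b) tuples with values in 0-255.
    strength: fraction of saturation removed inside the range (0.0-1.0).
    Illustrative sketch of hue-range desaturation, not HAVC's actual code.
    """
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if hue_lo <= h * 360 <= hue_hi:
            s *= (1.0 - strength)
        r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)
        out.append((round(r2 * 255), round(g2 * 255), round(b2 * 255)))
    return out
```

Restricting the correction to a hue window leaves stable colors untouched while softening the band where the models tend to oscillate.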