So for the GitHub project kijai/ComfyUI-LivePortraitKJ to work, you need Visual Studio 2017 or later with the x86/x64 build tools, Python 3.10, pip, ComfyUI, and all the requirements therein — the readme reads as if you're expected to get it and build it locally. In short, it makes facial motion capture onto a 2D image easier when all you have is a PNG. What it still leaves out is all the extras I see streamers use, and the same goes for the extras games get out of fully rigged models. Still, with AI-generated txt2video, img2video, and video2video, this will be a better way to do it, since it's closest to img2video even though it doesn't work the same way under the hood. The image below shows how it works. In short, you have an image and a video as inputs and get a video as output, though you can also drive it from a webcam. Webcam input just takes more processing, because it's more like handling streaming data than reading the motion from one file and merging it with the color from another.
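To make the image-plus-video idea concrete, here's a minimal sketch of that kind of pipeline. The function names (`extract_motion`, `apply_motion`, `animate`) are hypothetical placeholders I made up for illustration — they are not the actual ComfyUI-LivePortraitKJ API — but the shape of the loop is the same: one source image, many driving frames, one output frame per driving frame.

```python
# Sketch of a LivePortrait-style pipeline: one still source image plus a
# driving video in, an animated video out. All names here are hypothetical
# placeholders, NOT the real ComfyUI-LivePortraitKJ API.

def extract_motion(driving_frame):
    # Placeholder: a real node would estimate facial keypoints /
    # expression parameters from this driving frame.
    return {"frame_id": driving_frame["id"]}

def apply_motion(source_image, motion):
    # Placeholder: a real node would warp the source image so its face
    # matches the extracted motion, keeping the source's color/identity.
    return {"source": source_image, "motion": motion}

def animate(source_image, driving_frames):
    # File-based workflow: iterate over pre-recorded driving frames.
    # A webcam workflow would run this same loop on a live stream instead,
    # which is why it costs more processing per frame.
    return [apply_motion(source_image, extract_motion(f))
            for f in driving_frames]

# Dummy stand-ins for a PNG and a 3-frame driving video.
driving = [{"id": i} for i in range(3)]
result = animate("portrait.png", driving)
print(len(result))  # one output frame per driving frame
```

The point of the split into `extract_motion` and `apply_motion` is that the motion comes entirely from the video while the appearance comes entirely from the PNG, which is why a single still image is enough.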