Hey, it's me again. I posted a week or two ago about non-obvious applications of Seedance 2.0. You can view the original thread here: https://www.reddit.com/r/artificial/comments/1szkpjb/seedance_20_whats_the_most_interesting_nonobvious/
The reason I'm so interested in this scenario is that both my parents are teachers, and I have watched them sink countless hours into building slide decks for their students. More often than not, they have supplementary material to show the class, so they do a lot of switching back and forth between slides, videos, and other sources.
When I first saw the use case of embedding a Seedance video in a presentation, my first thought was: this could greatly reduce the attention students lose when switching between teaching materials. So I did some searching and gave the web app a test. If anyone is interested in trying it out, here is the link: pi.inc
Conclusion: the end product is a 9/10. The workflow, however, is about a 7/10.
The problem is that you have to generate your video and your deck in two different interfaces, and you have to download your video first and then upload it back into your deck. Pi does give you workspaces, one for your decks and another for your videos, but the deck builder can't pull video directly from that workspace. So it takes a minimum of two prompts, plus a download/upload round trip, to get everything done:
- generate video and download it
- generate slide and upload video
What I think would be better:
- generate slide
- generate video and embed
Pi also has GPT-image2, and with that you can create images directly in the slide deck interface. So why can't I do the same with Seedance 2.0?
I'm not a tech person, so genuine question: is there an underlying difference in how a generated video vs. a generated image gets processed afterward that makes direct embedding harder?
I'm going to try out some other AI presentation tools soon. If I find anything interesting, maybe I'll post again!