Of course! I think this is a new field with lots of interesting possibilities to explore.
There are a few options for doing the work:
- Extract the original video's frames to individual png image files using a program like ffmpeg, upscale all the frames with an AI resizing program like Gigapixel AI, and then use a video editor or ffmpeg to build the new video file from the resulting upscaled frames.
- Use an AI video upscaling program (Topaz Video Enhance AI in this case) to process the original video file directly into a new video file.
- An intermediate method between the previous two: use the AI video upscaling program to process the original video file, but output individual frames this time, and then create the final video file with a video editor or ffmpeg.
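For the first option, the ffmpeg steps can be sketched roughly like this (file names and quality settings are just placeholders, and 24000/1001 is the exact form of 23.976 fps):

```shell
# Extract every frame of the source video to numbered PNG files
ffmpeg -i input.mp4 frames/frame_%06d.png

# ...upscale the PNGs with Gigapixel AI (or similar)...

# Rebuild the video from the upscaled frames at the original frame
# rate, copying the untouched audio track from the source file
ffmpeg -framerate 24000/1001 -i upscaled/frame_%06d.png -i input.mp4 \
       -map 0:v -map 1:a -c:v libx264 -crf 16 -c:a copy output.mp4
```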
Each option has its pros and cons; some work better for cartoons and others for live-action video. The first option is the best for live-action, but it takes more time and effort, and the frame image files will take up a lot of space on your hard drive (for example, 22 minutes of video at 23.976 fps and 1080p comes to around 70 GB). The last two options are better with cartoons/anime, and the second one doesn't take up huge amounts of disk space like the other two; it's also faster and easier. The bad thing is that, for now, Topaz Video Enhance AI doesn't let you configure the output video, so you can't control the codec used or the video's quality. It only writes H.264 mp4 at around 15-20 Mbps, which is good for some things but limited if you want the best image quality. Luckily, you can export the results as individual frames (png or tiff files) and that way (the 3rd option) get the best possible quality.
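The ~70 GB figure is easy to sanity-check: at 23.976 fps, 22 minutes is roughly 31,600 frames, so each 1080p PNG averages a bit over 2 MB, which is about right for that resolution:

```shell
# Rough check of the disk-space estimate (numbers from the example above)
frames=$(awk 'BEGIN { printf "%d", 22 * 60 * 23.976 }')
mb_per_frame=$(awk -v f="$frames" 'BEGIN { printf "%.1f", 70 * 1024 / f }')
echo "$frames frames, about $mb_per_frame MB per PNG"
```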
I used the last two options, and the third one for my latest versions (all the 4:3 versions and Space Ace 16:9). It has to be said that single-video-file versions can't be made with any of these methods, so you have to make multi-video-file versions instead. I tried the single-file type first, but no matter what I did, the video and audio always ended up out of sync when running the resulting file on Daphne (the desync only happens in the emulator; apparently, if you mux the video and audio files with a video editor, everything is fine). With multi-video versions everything works flawlessly, but there's a lot of heavy work: you have to select each sequence of frames and process each mini-video one by one (237 videos for Space Ace, for example). As for the alternative audio tracks (Spanish, in this case), you have to cut the original Spanish PC version's video file (which has the video sequences in a different order) into all the individual sequences, split off the audio, and adjust it for every mini-video one by one (the original PC video file runs at 25 fps instead of 23.976, so every one of the audio chunks has to be retimed and synchronized).
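One way to do that 25 to 23.976 fps retiming of an audio chunk is ffmpeg's atempo filter, which changes the speed without shifting the pitch (file names here are placeholders; the exact ratio is 23.976/25 ≈ 0.95904):

```shell
# Slow a 25 fps PC-version audio chunk down to match 23.976 fps video
ffmpeg -i spanish_chunk.wav -filter:a "atempo=0.95904" spanish_chunk_23976.wav
```

If you'd rather reproduce the pitch shift of a plain speed change instead of preserving pitch, resampling the audio is another option, but for dialogue the atempo approach is usually what you want.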
Another thing to keep in mind is that you first need to convert the m2v video files to mp4 before the AI programs can process them (MPEG-2 is totally outdated and isn't supported by a lot of video editors or by Video Enhance AI), and do the color correction (if necessary) first too. Then, if you're using the second method, you have to convert the resulting mp4s back to m2v. I used XMedia Recode for this. With the first and third methods, you can encode the resulting png's straight to m2v using Vegas Pro, for example.
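If you prefer the command line, ffmpeg can handle both conversions as well (a sketch; the quality settings are just examples, pick your own):

```shell
# m2v (MPEG-2 elementary stream) -> mp4, so the AI tools will accept it
ffmpeg -i input.m2v -c:v libx264 -crf 12 input_for_ai.mp4

# ...and the upscaled mp4 back to m2v for Daphne (second method)
ffmpeg -i upscaled.mp4 -c:v mpeg2video -q:v 2 output.m2v
```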
This is roughly how I've done it. Other uses include upscaling and adding detail to game backgrounds and textures, using Gigapixel AI, for example. You can use the Dolphin emulator to extract the textures of a GameCube/Wii game, upscale them with Gigapixel AI, and then load the new texture pack back into Dolphin itself. You could make a lot of hi-res texture packs for first-person shooters like Duke Nukem, Doom, Quake, Hexen, etc., hi-res backgrounds for survival horror games that use pre-rendered images like Resident Evil 2, for games like Final Fantasy VII, and so on.
Finally (I almost forgot to mention this), it's important to know that all these AI programs take a lot of time to do the work, and it's highly recommended to use a good nVidia graphics card from the RTX or GTX 10 series to get it done in a reasonable amount of time. You can use the CPU too, but expect around 10x longer (and that's with the most powerful i7/i9 processors)... In my case, with an Intel Core i7 6700K and an nVidia GTX 1080, it takes about 1 sec/frame, so processing all the video material from Dragon's Lair, for example, takes about 10 and a half hours.
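Those figures line up: at ~1 second per frame, 10.5 hours corresponds to about 37,800 frames, which is roughly 26 minutes of footage at 23.976 fps:

```shell
# Back-of-the-envelope check of the processing-time estimate
total_frames=$(awk 'BEGIN { printf "%d", 10.5 * 3600 }')
minutes=$(awk -v f="$total_frames" 'BEGIN { printf "%.0f", f / 23.976 / 60 }')
echo "$total_frames frames, about $minutes minutes of footage"
```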