The double exposure (blended) photos displayed above are 100% machine generated, with no human involvement. This current version does not use AI: the program picks a random human photo and combines it with a random background photo. We did try using a machine learning model to display only "good" blends, but we were never able to get it to work better than random blending. There are two problems with using AI for this.

First, what is considered a good photo blend is very subjective. We classified demo blends into good and bad categories to train the neural network model, but other people might sort the same photos into totally different groups; in the end, all art is subjective. There is also a difference between measuring the quality of the double exposure blend itself and the overall quality of the final image. If one of the original photos was amazing, it is much more likely the blend will still produce a very good image. But if you can hardly see the second photo in the blend, and the first photo looks great by itself, is that still a good blend?

The second, and much bigger, problem is that when we trained the machine learning model, it did not know it was looking at double exposure images. It looks at the photos in each category and tries to learn the difference, treating the task the same way it would treat learning to tell dogs from cats. It is not really looking at the blending, because it has no idea what blending is. As an example, if the "good" training set happened to have more photos with oceans or lakes in it, the model would consider almost any photo with an ocean or lake in it to be a good photo, even though that has nothing to do with photo blending or even the quality of the photo.

A note on exporting transparent animations from Blender: the default output container is Matroska, which doesn't seem to work in most applications. I usually use MPEG-4, but it doesn't support an alpha channel, so depending on how universally playable you want the animation to be, it may not be the option for you. One container I've found to work in some applications (but not all) is QuickTime. After choosing QuickTime, we still have to go down a little further, expand the Video tab, and look for the "Codec" setting. The H.264 codec is what I use for most things, but it does not give us an RGBA (alpha channel) option. If we select PNG from the drop-down menu, it does give us an alpha channel, and a MOV file will be created with one. I've had this method work in some applications and not in others. For a while I used Blender exclusively and made several transparent animations this way, although to be honest I now import all of my image sequences into Adobe Premiere and am not using Blender for video editing any longer.
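For reference, the QuickTime/PNG export described above can also be set from Blender's Python console instead of clicking through the UI. This is only a configuration sketch, not something runnable outside Blender; the property names follow the bpy API of Blender 2.8+ and may differ in other versions:

```python
# Run inside Blender's Python console (requires Blender's bundled bpy).
# Mirrors the UI steps above: QuickTime container, PNG codec, RGBA output.
import bpy

render = bpy.context.scene.render
render.image_settings.file_format = 'FFMPEG'
render.ffmpeg.format = 'QUICKTIME'          # .mov container
render.ffmpeg.codec = 'PNG'                 # PNG codec preserves alpha
render.image_settings.color_mode = 'RGBA'   # keep the alpha channel
render.film_transparent = True              # render the background as transparent
render.filepath = '//transparent_animation.mov'
```

Setting `film_transparent` is the piece people most often forget: without it the world background is baked in as opaque color, and the alpha channel you asked for is all 255s.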
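The ocean/lake failure mode from the classifier discussion earlier can be reproduced with a toy experiment. Everything below is invented for illustration (the real training data and model are not shown here): a "good" set that happens to skew blue, a three-number color-mean feature, and a tiny logistic regression. The classifier ends up scoring any sufficiently blue image as "good", regardless of how well it is blended.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_photo(blue_bias):
    """Random 32x32 RGB 'photo'; blue_bias pushes the blue channel up."""
    img = rng.random((32, 32, 3))
    img[..., 2] = np.clip(img[..., 2] + blue_bias, 0.0, 1.0)
    return img

# Hypothetical training data: the "good" blends happen to skew blue
# (lots of oceans and lakes); the "bad" blends do not.
good = [make_photo(0.4) for _ in range(50)]
bad  = [make_photo(0.0) for _ in range(50)]

def features(img):
    return img.mean(axis=(0, 1))      # mean R, G, B per image

X = np.array([features(im) for im in good + bad])
y = np.array([1.0] * 50 + [0.0] * 50)

# Tiny logistic regression trained by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def score(img):
    return 1.0 / (1.0 + np.exp(-(features(img) @ w + b)))

# Any sufficiently blue image now scores as "good", no matter how badly
# blended it is; a neutral image scores as "bad".
score_ocean = score(make_photo(0.4))
score_plain = score(make_photo(0.0))
```

The model has learned "blue means good", which separates the training sets perfectly, yet says nothing about double exposure blending; that is exactly the shortcut a real classifier takes when the labeled categories differ in some incidental way.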