gaomeilan – Image Processor

What will a text-to-image generator do with abstract language depicting love and pain?

I.
i remember when carmine rivers seeped from my
shins, tracing a route back to the bathtub. once,

i dreamt of bathing in my own bruises. my legs
still bleed whenever i miss you.

 

II.

who cares about subclavian
and carotid arteries. blood

is blood. everyone knows that
model hearts don’t look

like the real thing, they’re
all too red and stiff; genuine

myocardium is pink, fragile,
too fragile, disgusting, raw.

 

III.

It was dark. You were faceless. The air was stagnant. I was silent.
I didn’t touch you, but I knew you were warm.

We lay our bodies by the marsh, staring up at the sky,
silence slitting our throats. The darkness shrouds our bodies

like a pall. I wondered if two cadavers could kiss.

bookooBread – ImageProcessor

 

First of all, I spent way too much time on the Situated Eye part of this deliverable, so I didn’t have much time for this. I plan on playing with Runway a lot more in the future when I have time.

My idea was to use the text-to-image generator to create some weird-ass visuals… and that sort of worked?

But then I wanted to connect these images to other models and use them to style other images/video, or mix them with a face. None of these worked, and I think that’s because I just haven’t spent enough time working with Runway yet, so I’m not quite sure how all the tools work. But I will keep at it, since it is so, so goddamn cool!

YoungLee – Image Processor

 

When I was part of my newspaper class in high school, I came across my high school’s yearbook from 1978. I remember being fascinated by it, and I took pictures of its black-and-white photos. At the time, the only photos printed in color were the senior portraits, so, using the Colorize neural filter in Photoshop, I colorized the other images. It was really easy to use, and I will definitely use it in the future as well.
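Photoshop’s Colorize neural filter keeps its internals hidden, but the general idea behind these colorizers can be sketched in a few lines of OpenCV using the publicly released Zhang et al. “Colorful Image Colorization” model. This is just a rough sketch of that approach, not what Photoshop actually does; the file paths are placeholders for the released model files and a scanned yearbook page:

```python
import cv2
import numpy as np

# Placeholder paths: the prototxt, caffemodel, and ab cluster centers
# come from the public Zhang et al. colorization release.
net = cv2.dnn.readNetFromCaffe("colorization_deploy_v2.prototxt",
                               "colorization_release_v2.caffemodel")
pts = np.load("pts_in_hull.npy")  # 313 ab cluster centers

# Load the cluster centers into the network as 1x1 convolution weights.
pts = pts.transpose().reshape(2, 313, 1, 1).astype("float32")
net.getLayer(net.getLayerId("class8_ab")).blobs = [pts]
net.getLayer(net.getLayerId("conv8_313_rh")).blobs = [
    np.full([1, 313], 2.606, dtype="float32")]

# Read the black-and-white scan and pull out the L (lightness) channel.
bw = cv2.imread("yearbook_scan.jpg")
lab = cv2.cvtColor(bw.astype("float32") / 255.0, cv2.COLOR_BGR2LAB)
L = cv2.resize(lab[:, :, 0], (224, 224)) - 50  # network input size, mean-centered

# Predict the ab color channels and resize them back to the original size.
net.setInput(cv2.dnn.blobFromImage(L))
ab = net.forward()[0].transpose((1, 2, 0))
ab = cv2.resize(ab, (bw.shape[1], bw.shape[0]))

# Recombine with the original L channel and convert back to BGR.
colorized = np.concatenate((lab[:, :, 0:1], ab), axis=2)
colorized = np.clip(cv2.cvtColor(colorized, cv2.COLOR_LAB2BGR), 0, 1)
cv2.imwrite("yearbook_colorized.jpg", (colorized * 255).astype("uint8"))
```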

Images:

rathesungod – ImageProcessor

This was a really fun, new process that I hadn’t experienced before. I found that inserting just a single figure would have been simple and nice, but just not my style. Instead, I used the two versions of my past project from this semester and combined their features into an ongoing pattern. Cutting out the inputs and outputs in specific parts got really difficult and time-consuming. But overall, this was super fun, and I will definitely be using this tool again in future video projects.

fr0g.fartz – ImageProcessor

I made a project of me trying to ollie on a skateboard, but put it in outer space! I couldn’t find an option to input my own video; I think you had to pay. It’s fun because I’m actually doing the ollie on grass (much easier; I’ve never attempted it on concrete), but you can’t tell that from the video! It’s crazy how easy this technology is to use. Although it’s less precise than what you could get with Photoshop, it does a pretty great job of detecting which parts you want to keep in the image. I was happy with the result of my project, and I will likely use this tool again!
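Conceptually, the tool is estimating a matte for the subject, and the step that drops the new background in behind it is simple enough to sketch. Here is roughly what that compositing step looks like in OpenCV for a single frame, assuming you have the frame, a grayscale subject matte, and a space photo on disk (all filenames are made up):

```python
import cv2
import numpy as np

# Hypothetical filenames: one video frame, its subject matte from the
# segmentation tool, and a space background image.
frame = cv2.imread("ollie_frame.png").astype(np.float32)
matte = cv2.imread("ollie_matte.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
space = cv2.imread("space_background.jpg").astype(np.float32)

# Resize the background to match the frame and expand the matte to 3 channels.
space = cv2.resize(space, (frame.shape[1], frame.shape[0]))
alpha = matte[:, :, None]

# Standard alpha composite: subject where the matte is white, space elsewhere.
composite = alpha * frame + (1.0 - alpha) * space
cv2.imwrite("ollie_in_space.png", composite.astype(np.uint8))
```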

minniebzrg – ImageProcessor

In an alternate life, I would have grown up in Mongolia, spending my summers in the grassland in our traditional summer home (a ger). I would wake up to the beautiful clear blue skies of Mongolia and smell the mixture of livestock and fresh grass in the air. The wind would blow through my hair, and I would thank God for creating me in this land. If I hadn’t come to America, this would have been a possible reality.

Process:

First, I recorded several videos. My original idea was to make a short film of me doing “daily tasks” that would be a reality in Mongolia. However, I didn’t anticipate the amount of time it would take me to draw the background and foreground, the limits of the video quality and my drawing skills, and all the steps it takes to make one EbSynth production. I stuck to one keyframe and one short clip; with some more time (and more keyframes) I could complete my original idea. The final product is very rough because there are sections of the video where the style I drew couldn’t register.

“Don’t spend more than 2 hours on this” … a h a h a. I had trouble getting EbSynth to work. I ran into trouble because I made the keyframe separately in Procreate, and this made the keyframe’s resolution different from the rest of the frames. As a result, EbSynth wouldn’t render the output. I fixed this by redrawing the keyframe using the PNG from Procreate; luckily, I had separated the layers, so this was possible.
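For anyone hitting the same wall: EbSynth expects the keyframe to have exactly the same pixel dimensions as the extracted video frames, so another fix (instead of redrawing) would be to resample the Procreate export to match one of the frames. A quick sketch with Pillow, with hypothetical file paths:

```python
from PIL import Image

# Hypothetical paths: one extracted video frame and the Procreate keyframe.
reference = Image.open("frames/frame_0001.png")
keyframe = Image.open("keyframe_from_procreate.png")

# EbSynth wants the keyframe at exactly the same resolution as the frames,
# so resample the painted keyframe to match the reference frame.
if keyframe.size != reference.size:
    keyframe = keyframe.resize(reference.size, Image.LANCZOS)
    keyframe.save("keys/frame_0001.png")
```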

bumble_b – ImageProcessor

I decided to use RunwayML’s Green Screen tool to pick out one person in a video and retain their color against a black-and-white version of that video. I chose a cute scene from Friends (my favorite show ever), and I keyed out Rachel using the Green Screen feature.

So, I tried to get it perfect, and when I thought it was perfect, I exported it. Unfortunately, once I clicked export, it looked like it got rid of all the work I did? Maybe I don’t know the website well enough, but I couldn’t find any saved file or anything. When I rewatched the video, I noticed there were some parts I would’ve really liked to tweak, but I wasn’t willing to go and do everything all over again because I was definitely approaching the 2-hour time cap.

Anyway, I took the now green-screened video into Premiere, used Ultra Key to key out the green screen, and layered it on top of the same video with a black-and-white effect applied.
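For what it’s worth, that Ultra Key plus black-and-white stack boils down to a simple per-pixel composite. Here’s a rough sketch of one frame of it in OpenCV; the green threshold and filenames are made-up placeholders, not what Premiere actually uses:

```python
import cv2
import numpy as np

# Hypothetical filenames: one frame of the Runway green-screen export
# (Rachel in color, everything else filled with green) and the matching
# frame from the original clip.
keyed = cv2.imread("rachel_greenscreen_frame.png")
original = cv2.imread("friends_original_frame.png")

# Rough stand-in for Ultra Key: strongly green pixels become transparent.
hsv = cv2.cvtColor(keyed, cv2.COLOR_BGR2HSV)
green = cv2.inRange(hsv, (40, 80, 80), (80, 255, 255))  # placeholder bounds
alpha = (255 - green).astype(np.float32)[:, :, None] / 255.0

# The lower layer: the original frame with a black-and-white effect.
gray = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
gray = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR).astype(np.float32)

# Composite: Rachel stays in color, the rest of the scene goes grayscale.
result = alpha * keyed.astype(np.float32) + (1.0 - alpha) * gray
cv2.imwrite("rachel_selective_color.png", result.astype(np.uint8))
```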

And this is the result:

shrugbread – ImageProcessor

I decided to focus on the selfie2anime GAN photo-manipulation model on RunwayML. I found the results quite hilarious because in most cases it failed entirely to deliver on the promise of realistic anime-stylized images. I found some of my results closer to a Picasso rendition than an anime style. The resulting images mostly just blocked out certain colors and bumped up the saturation; while fun-looking, I can’t quite say they recognized the facial features as readily as in the reference images. A version of this filter also exists on Snapchat and works much more consistently, and live, but after using it for a bit I can tell that it’s pulling from a database of hairstyles and face shapes and tracking them onto the face, whereas this one uses a GAN that was trained to translate the whole image until it passes as “anime style.”
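To spell out that distinction: at inference time a model like selfie2anime isn’t looking up hairstyles or tracking landmarks; it’s a single forward pass through a generator that was trained adversarially to map photos into the anime domain. Here’s a schematic sketch of that pass in PyTorch; the tiny generator below is a stand-in so the example runs, while the real model is a much larger network loaded with trained weights:

```python
import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image

# A tiny stand-in generator so the sketch runs end to end; the actual
# selfie2anime model is a much bigger encoder-decoder with trained weights.
class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

generator = TinyGenerator().eval()
# With a real model you would load its trained checkpoint here instead.

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),  # scale to [-1, 1]
])
selfie = preprocess(Image.open("selfie.jpg").convert("RGB")).unsqueeze(0)

# One forward pass: no face landmarks, no hairstyle database, just the
# learned image-to-image mapping.
with torch.no_grad():
    anime = generator(selfie)

# Undo the normalization and save the stylized result.
anime = (anime.squeeze(0).clamp(-1, 1) + 1) / 2
transforms.ToPILImage()(anime).save("selfie_anime.png")
```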