
Nvidia’s Canvas AI painting tool instantly turns blobs into realistic landscapes

AI has been filling in the gaps for illustrators and photographers for years now, quite literally: these tools intelligently fill missing areas with plausible visual content. But the latest tools are aimed at letting an AI give artists a hand from the earliest, blank-canvas stages of a piece. Nvidia's new Canvas tool lets the creator rough in a landscape as paint-by-numbers blobs, then fills it in with convincingly photorealistic (if not quite gallery-ready) content.

Each distinct color represents a different type of feature: mountains, water, grass, ruins and so on. When the colors are blobbed onto the canvas, the crude sketch is passed to a generative adversarial network. A GAN pairs a generator, which tries to produce (in this case) a realistic image, with a discriminator, which judges how realistic that image looks; trained against each other, the two converge on a fairly convincing depiction of what's been suggested.
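For readers curious what that generator-versus-discriminator loop looks like in code, here is a minimal, purely illustrative PyTorch sketch of a conditional GAN that maps a segmentation map (the "color blobs") to an image. The class labels, layer sizes and training step are invented for the example; this is not Nvidia's actual Canvas/GauGAN model, just the general technique.

# Illustrative sketch only: a toy conditional GAN, not Nvidia's Canvas model.
# Label set, layer sizes and data are made up for demonstration.
import torch
import torch.nn as nn

NUM_CLASSES = 8   # hypothetical labels: sky, water, grass, rock, ruins, ...
IMG_SIZE = 64

# Generator: turns a one-hot segmentation map (the "blobs") into an RGB image.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, seg_map):
        return self.net(seg_map)

# Discriminator: scores how realistic an image looks, given the same map.
class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(NUM_CLASSES + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * (IMG_SIZE // 4) ** 2, 1),
        )

    def forward(self, img, seg_map):
        return self.net(torch.cat([img, seg_map], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One adversarial step with random stand-in data; real training would use
# photo/segmentation pairs of landscapes.
seg = torch.zeros(1, NUM_CLASSES, IMG_SIZE, IMG_SIZE)
seg[:, 0] = 1.0                                  # one "sky" blob covering the canvas
real = torch.rand(1, 3, IMG_SIZE, IMG_SIZE) * 2 - 1

# Discriminator step: push real pairs toward 1, generated pairs toward 0.
fake = G(seg).detach()
loss_d = bce(D(real, seg), torch.ones(1, 1)) + bce(D(fake, seg), torch.zeros(1, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator call its output real.
loss_g = bce(D(G(seg), seg), torch.ones(1, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

After enough of these alternating steps on real landscape data, the generator learns to render each label region as plausible imagery, which is the basic mechanism behind turning blobs into scenery.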

It’s pretty much a more user-friendly version of the prototype GauGAN (get it?) shown at CVPR in 2019. This one is much smoother around the edges, produces better imagery, and can run on any Windows computer with a decent Nvidia graphics card.

This method has been used to create very realistic faces, animals and landscapes, though there’s usually some kind of “tell” that a human can spot. But the Canvas app isn’t trying to make something indistinguishable from reality — as concept artist Jama Jurabaev explains in the video below, it’s more about being able to experiment freely with imagery more detailed than a doodle.

For instance, if you want to have a moldering ruin in a field with a river off to one side, a quick pencil sketch can only tell you so much about what the final piece might look like. What if you have it one way in your head, and then two hours of painting and coloring later you realize that, because the sun is setting on the left side of the painting, the shadows fall awkwardly in the foreground?

If instead you just scribbled these features into Canvas, you might see this right away and move on to the next idea. There are even controls to change the time of day, palette and other high-level parameters, so those options can be evaluated quickly.

Animation of an artist sketching while an AI interprets his strokes as photorealistic features.

Image Credits: Nvidia

“I’m not afraid of blank canvas any more,” said Jurabaev. “I’m not afraid to make very big changes, because I know there’s always AI helping me out with details… I can put all my effort into the creative side of things, and I’ll let Canvas handle the rest.”

It's much like Google's Chimera Painter, if you remember that particular nightmare fuel, in which an almost identical process was used to create fantastical animals. Instead of snow, rock and bushes, its labels were hind leg, fur, teeth and so on, which made it rather more complicated to use and easier to get wrong.

Image Credits: Devin Coldewey / Google

Still, it may be better than the alternative, since an amateur like me could certainly never draw even the weird tube-like animals that resulted from basic blob painting.

Unlike Chimera Painter, however, this app runs locally, and it requires a beefy Nvidia video card to do so. GPUs have long been the hardware of choice for machine learning applications, and something like a real-time GAN definitely needs a chunky one. You can download the app for free here.
