Shadow Catcher V1

Recently, I made a shadow catcher in Eevee.

An advantage Blender Internal had, and that a particular modified Blender build has with Cycles, was Light Groups. A light group lets you restrict a light to affecting only certain objects, which is convenient for NPR purposes. The other useful, and sorely missed, feature Internal had was the ability to define per material whether it received shadows. That’s incredibly useful, because it means you can use it to isolate shadows, not just regular shading, and choose which areas should receive shadows.

For example, an anime character’s eyes and brows, if shaded normally, would likely have strong, ugly shadows cast on them by the polygon hair. If you could isolate those shadows, you could mask them out. That isn’t possible by default in Eevee or Cycles.

But, using Eevee, I’ve found a way. I wanted to be able to isolate the shading from any given light, especially since that was one of the techniques used in Guilty Gear Xrd to achieve such convincing false 2D.

They used a light on each individual character to give them specific control, rather than relying on scene-wide lighting alone. I’d like to explore scene-based shading, lights in the world, and so on, but it’s good to have this as an option. That made me consider isolating lighting results again. I did some research, and came across this Polycount thread.

I already knew shading could be faked by using the dot product of a light direction and the surface normal, but that can’t capture shadows. I’d thought beforehand that I could use the RGB node to get the difference between fake shading and real shading if I could just make them match; what I lacked was a way to work out what direction the light is coming from, and that’s what that thread taught me.

Node setup by Polycount user Jekyll
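The fake-shading half of this is simple enough to sketch outside Blender. As a minimal illustration in plain Python (the function names are my own, just for the sketch), the false shading is the clamped dot product of the surface normal and the direction towards the light:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

def fake_diffuse(normal, light_dir):
    """Lambert term: the clamped dot product of the surface normal
    and the direction towards the light. Captures shading, never shadows."""
    return max(0.0, dot(normalize(normal), normalize(light_dir)))

# A face pointing at the light is fully lit; one facing away is black.
print(fake_diffuse([0.0, 0.0, 1.0], [0.0, 0.0, 1.0]))   # → 1.0
print(fake_diffuse([0.0, 0.0, -1.0], [0.0, 0.0, 1.0]))  # → 0.0
```

This is the part the Polycount setup makes possible in nodes: once you have the light direction, the rest is one Dot Product away.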

By using this technique, I was able to get the light direction. Then, by simply using a Shader to RGB node with a diffuse shader and subtracting the false shading from it, I was able to isolate the shadows.
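In other words, wherever the render comes out darker than the matching false shading, something must be casting a shadow there. A rough sketch of the subtraction, with made-up per-pixel values standing in for the Shader to RGB output:

```python
def shadow_mask(real_shading, fake_shading):
    """Where the rendered diffuse (Shader to RGB) is darker than the
    analytic Lambert term, the difference is taken to be cast shadow."""
    return [max(0.0, fake - real)
            for real, fake in zip(real_shading, fake_shading)]

# Hypothetical per-pixel values: the third pixel sits in a cast shadow,
# so its rendered value is far below its Lambert value.
fake = [1.0, 0.8, 0.8, 0.2]
real = [1.0, 0.8, 0.1, 0.2]
print(shadow_mask(real, fake))  # only the third pixel is non-zero
```

The mask that falls out is exactly the isolated shadow, ready to be remapped or discarded however you like.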

With this, I can catch shadows and then use whatever technique I’d like to mask them out.

The advantages of this technique are:

  • Able to identify shadows.
  • By using multiple lights, you can identify specific shadow types, like using two to distinguish contact shadows from regular cast shadows.
  • Can use multiple lights to separate hard and soft shadows, then mix between them to preference.

The disadvantages:

  • It’s dependent on drivers. You have to manually select the light, and set up drivers for its location.
  • I’ve discovered that if the light is hidden, the driver breaks, meaning you have to reassign the object.
  • If you move it into a different scene and don’t bring the light, you’ll have to replace it and redo the drivers again.
  • It’s inconvenient to have to use multiple lights to get multiple types of shadow.
  • Using multiple lights makes the whole scene brighter. Objects not using the shadow catcher can’t account for this, and may be undesirably bright.
  • It’s inconvenient if the lights have to be changed significantly, like the number.
  • I believe it only works correctly with directional lights.

Having to set it up in such a time-consuming way is the big problem for me. It’s not as if there’s a single checkbox to enable it. It’s inconvenient, especially when it breaks just from the light being hidden. That might be fine if we use the Guilty Gear Xrd method and dedicate a specific light or set of lights to it, but if we want to change things a lot, it’ll be a pain.

So, I’ve been doing some more experimenting, trying to recover that information using the Texture Coordinate node, which can reference an object. If I can, it will be much easier and quicker to modify. I’ve only had mixed results so far, nothing worth showing yet. Still, I’ve learned from it, so it’s not wasted effort. I’m hoping to get a convenient form of shadow catching working this way. The ability to modify how things are shaded and shadowed in real time, rather than in compositing, would be extremely useful.

Frustration

Recently, I’ve been frustrated with my shaders.

None of them look the way I want them to. I’d like to achieve a painterly appearance, or watercolour, but they’re quite elaborate. I sometimes think it would be better if I could settle for cel shading, but I just can’t accept that. Or rather, I want to be able to do more than just shade everything so simply. I’m not one of those people who’s interested in NPR primarily, or purely, to replicate the style of anime. It’s not that I look down on that style; it’s just not my preference.

So, I’ve been doing more research, as usual. The big problem with using Fresnel or Layer Weight to pick out the rough shape of the object, or an outline, is that it can, and does, give unwanted lines. I dislike having to manually correct these, so I’ve been trying to think of ways to fix it. I’ve spent the past day or so experimenting with blurring the normals of the model to remove details.

How did I do this? Well, I’m not blurring the normals, exactly. I came across this video a while back. One of the features demonstrated is creating a false blur by distorting a texture.

23:58 for the blur technique.

I’d hoped I could apply the same technique straight to the normals, but it didn’t work. Instead, I baked the object’s original normals out to an image, and used that method to distort it.
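My understanding of the trick, sketched in plain Python on a single-channel image rather than in nodes (the real version distorts the baked normal map’s RGB channels with a noise texture): averaging samples taken at randomly offset coordinates fakes a blur, and the random offsets are also why the result comes out grainy.

```python
import random

def sample(img, x, y):
    """Nearest-neighbour sample with clamped coordinates
    (img is a list of rows of greyscale values)."""
    h, w = len(img), len(img[0])
    xi = min(max(int(round(x)), 0), w - 1)
    yi = min(max(int(round(y)), 0), h - 1)
    return img[yi][xi]

def jitter_blur(img, x, y, radius, samples=8, seed=0):
    """Fake a blur by averaging samples taken at randomly offset
    coordinates, the way a noise-distorted UV smears a texture."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        dx = rng.uniform(-radius, radius)
        dy = rng.uniform(-radius, radius)
        total += sample(img, x + dx, y + dy)
    return total / samples

# A hard 0 -> 1 edge smears near the boundary, while areas further
# away than the blur radius are untouched.
img = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
print(jitter_blur(img, 0, 4, radius=0.4))    # far from the edge: stays 0.0
print(jitter_blur(img, 3.5, 4, radius=2.0))  # near the edge: typically a mix
```

More samples smooth the result at the cost of speed, which fits what I’m seeing with the grain in the node version.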

The result was actually effective. I haven’t removed anything from the default Suzanne head, but blurring the baked normals and using them as the Normal input has removed the detail. It also darkened the result for some reason, which I was able to compensate for by remapping the value by an amount equal to the distortion. I’d hoped to use this to get a blurred version of Fresnel I could use to make a better silhouette. However, when I tried it…

It only works from the front. I really don’t know why. My best guess is that it’s something to do with the normals texture: the colours don’t appear the same way when I look at it as the original normals do, so perhaps that’s why it doesn’t display correctly. Yet when I used it as the normals without blurring, it displayed fine.

It’s quite frustrating. But at least I learned I can blur out detail with this method. That’ll likely come in useful.

The advantage of this method is that the original normals are a texture that isn’t being permanently modified, so it can be controlled as I please.

The disadvantage is that it’s grainy. I haven’t found a way to smooth it yet; though, for what I’m doing, a bit of grain may help it appear natural. Also, as it’s just using the original normals from a texture, it can’t be modified the way the real normals of the geometry can.

On another note, I came across an interesting method for modifying shading.

This technique comes up around 1 hour, 15 minutes in.

This one uses an image texture, painted on directly, to control shading. The trick is to create a UV map slot on the objects you want to modify, then use the UV Project modifier to project the UVs from the camera, like the Project From View UV mapping option, but in real time. Then, have an image texture using that UV map control the shading by adding or subtracting. As the objects all reference the same image, you don’t need to select individual objects to control them, and can modify everything in the shot at once.
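Assuming a simple pinhole camera, here’s my rough sketch of what the UV Project modifier is recomputing each frame, and how the sampled paint then adjusts the shading (the function names and focal value are made up for illustration):

```python
def project_to_uv(point_cam, focal=2.0):
    """Pinhole projection from camera space (camera at the origin,
    looking down -Z) to a [0, 1] UV, roughly what the UV Project
    modifier recomputes every frame."""
    x, y, z = point_cam
    ndc_x = focal * x / -z   # perspective divide onto the image plane
    ndc_y = focal * y / -z
    # map from [-1, 1] device coordinates to [0, 1] UV space
    return (ndc_x * 0.5 + 0.5, ndc_y * 0.5 + 0.5)

def painted_shading(base, paint_value, mode="add"):
    """Offset the base shading with the painted correction, clamped to [0, 1]."""
    v = base + paint_value if mode == "add" else base - paint_value
    return min(max(v, 0.0), 1.0)

# A point straight ahead of the camera lands in the centre of the image,
# so it samples the middle of the painted texture.
print(project_to_uv((0.0, 0.0, -5.0)))  # → (0.5, 0.5)
print(painted_shading(0.5, 0.3))        # brightened towards 0.8
```

Because the projection depends on the camera, this also makes the repaint-on-camera-move disadvantage below obvious: move the camera and every point lands on a different part of the painted image.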

The advantage of this method is that you can control them all at once as long as they’re referencing the texture. Additionally, if you make it the same resolution as your render – and you probably should – it shouldn’t appear pixelated. And since it’s an image, you could also edit it externally if you really needed to or wanted to. I find this method far more responsive as I paint than modifying vertex paint, and it can be far more precise than vertex paint would be on most sensible meshes. It’s extremely effective for cel shading.

The disadvantages are several:

  • It’s an image, so it’ll bloat your file size, potentially a lot if you’re rendering at high resolution.
  • Because it’s an image, you can’t animate it the way you could the vertex paint method; you’d have to use an image sequence, increasing the file size further and manually repainting each frame. Although, if replicating anime, that may be best, as it’s essentially what colouring in anime is like, and leaves room for the human errors that make it appear more genuine.
  • If you change anything in the shot, you’ll have to repaint. Depending on the change, this can be small or large: if you move the camera, you’d have to redo the lot, though if you were just panning, you could perhaps compensate using the Mapping node or a translation method. That makes it tricky if you want to change the camera angle or re-pose a character, though if you modify the shading last, it shouldn’t be as much of an issue.
  • It’s more difficult to make look natural on something with smooth shading.

Overall, I find there are advantages and disadvantages to this method, but it’s certainly something I’ll keep in mind. I might well use it next time I’m doing a render; I prefer the flexibility of vertex paint, being able to modify things after the fact without the shading changes all being rendered worthless… but the speed and precision of this texture method isn’t something I can ignore. Custom shading is very likely the key to convincing 2D, or at least essential to it.

Anyway, that’s what my research and testing has turned up for the moment. I want to experiment more. Reading a reference from an old SIGGRAPH paper, Painting With Polygons, Video Watercolorization Using Bidirectional Texture Advection, gave me some ideas; that’s actually where my idea to blur the normals came from. It also seems several steps of their process are ones I’d used in my earlier compositing technique, though applied differently. I’ll try experimenting more and see what I can turn up.