Losing Sight

This year, I really want to make more art, and better quality art. To that end, I spent a lot of January trying to learn GLSL. I’ve started using the new renderer, Malt, to give me a lot more flexibility in how I render. It took me several weeks of learning and experimenting, but I was finally able to more or less replicate my existing watercolour shader in Malt.

It’s quite convenient in some ways. It can do a lot, and in real time. Some of its features, like the linework, were available in Toonkit, but much slower there. I suspect that if Malt catches on and adds more basic shaders to use, it’ll kill Toonkit, by virtue of being able to do everything Toonkit can and more, hundreds of times more quickly.

Unfortunately, though, I realised I’ve had a big blind spot. I tried making a Ryou model, but…It’s just shit.

The only good thing about it at all is the hair. I like the hair. I learned how to use curve-based hair better this time. But that’s all. The model itself is very flawed, the topology isn’t great, the normal editing is choppy, the clothing is simple, the colours aren’t good, the shader didn’t come out right….It’s just bad.

I realised, while trying to finish this for the sake of having it done, rather than dropped, that I’ve had serious tunnel vision for a long time, sigh. I’ve been so fixated on the quality and flexibility of the shaders that I didn’t focus at all on the fundamentals.

The topology is inadequate. It’s dense where it doesn’t necessarily need to be, it doesn’t flow as cleanly as I’d like, and it isn’t convenient for normal editing. I’d been leaving it up to the Zremesher, but I think that may actually have been a mistake. I did some practice, and…There really is nothing quite as clean as hand-placed topology.

I tried doing some manual retopology on a head I’d made. I like the result a lot better. It looks cleaner, and because it’s a subdivision surface, I can spend less time on the topology and still have it match the dynamesh properly. It seems I can also edit the topology of an existing mesh, as long as it’s under 15,000 points. I’m thinking that from now on, I might just use Zremesher for the body, and then do the head, and maybe the hands, myself. It would make it much easier to get good topology while accounting for the shapes needed for good normal editing, I think. But it does take a bit more time, unfortunately.

Then my other main problem is clothes. I’ve focused a lot on trying to learn anatomy, but not enough on fabrics: how to model them properly, how to make them look right. I’m nearly clueless at that.

Then there’s rigging, which I’m incredibly weak at; because the rigs aren’t good and the weights aren’t optimal, the posing is bad too.

I need to do so much learning in these areas….The best shader in the world won’t make a badly modeled, stiffly rigged mesh look good. I don’t know how I overlooked that….I’m really angry with myself for fixating so much and forgetting my fundamentals. I need to change…Basically everything.

So for now, I’m planning to just focus on my models. I’ll do some renders, I think, but only as plain 3D renders. Not worrying about NPR for now, just the quality of the models and the rigging themselves. I need them to be better for the renders to be better. I’m so stupid…Losing sight of something so obvious.

I’ll work hard on revisiting the fundamentals for now, and make as many good models as I can.

Serph, Perfectionism, 2021

I haven’t posted in quite a while. But actually, I’ve been sculpting a lot recently.

Around the end of this last year, I think I reached a breaking point with my perfectionism. It got to be so frustrating. Back when I started taking sculpting seriously, I was always wanting to get better. To make what I could sculpt match what I could imagine. But, because of that, I got frustrated time and time again. I was constantly scrapping things, and redoing them, and then scrapping that, too. And then I would hate myself for not finishing it, but I’d hate myself for what it was and think it wasn’t worthy of being finished anyway. Thinking back…It was a very toxic cycle. I don’t know if I can say I’ve escaped that. But…I am trying.

I felt like making some Digital Devil Saga fan art a while ago. I made a model of Serph, and some sketches of others. Serph, being the leader and player character, I decided to focus on first and use for practice. I’ve actually learned a lot from this.

For starters, I used one of my base meshes again. I think I improved a bit on the legs, but they’re still not good.

I had a lot of trouble making his armour. Clothes in general I’m not very experienced with, let alone armour that doesn’t exist in real life. I researched quite a bit into the ways ZBrush artists make them, and only found different, potentially better ways I could have done it after the fact, gah. I ended up having to do some elements in Blender, mainly those round elements. I don’t know what they are, so I couldn’t reference them well. The jacket was also a challenge; it’s symmetrical in overall shape, but the zip goes across it diagonally rather than down the middle. Fortunately, ZBrush had just updated with a slice tool for the Zmodeler, so I was able to use that to cut the zip in. I used ZBrush’s default stuff for the zips, but I don’t like to do that, so afterwards I looked it up and learned how to make my own. Next time, I’ll use those instead.

The armour isn’t well done, honestly. The polygon density is all over the place; just compare the jacket to the sleeves, or the boots. It feels tacked together, which it kind of was. I felt clumsy trying to model it. I need to gain experience and skill with that, and make a better go of it next time I make clothes and armour.

I did try some different things this time, though. I found the Zremeshed model was too dense and slow, so I needed to keep detail, but on a lower poly mesh. I found out I could use a tool to draw lines on the model and convert them into polygroups, and use those to help guide the remesh, so I tried that.

It was actually quite effective. I’ll need to play with it more next time and work out how to get the most out of it. I think a careful combination of that and Zremesher guide lines should give a good result. I need it to be relatively low poly; Blender, since 2.8, doesn’t have great performance.

On another note, I tried something different with the shading, too. It seems, in the NPR community, people either use Abnormal or the data transfer modifier right now to get good shading. But, Abnormal is too slow for me to use on anything like this. And the data transfer…It is handy, but I can’t get to like it. It’s so…Uncontrolled. You just have to hope it gives you what you need. And, because it’s going off of geometry positions, you can’t really do anything that’s too far away from it, or super wonky. Limits.

So, this time, I tried painting vertex groups and using the Normal Edit modifier.

Each group was related to a vector. It’s a bit similar to what anime-style NPR artists have done, but they can get away with more, because they’ve been using simpler, generally more moe, styles. But building it straight into your topology has problems; if a vertex is simply in or out of the vertex group, you get a hard edge, so the light will suddenly snap. And if you use any gradients with it, you’ll have an area that gradually lightens as the light is rotated, but where the lit area stays exactly the same shape, which looks unnatural.

Those styles can also get away with fewer vectors, like using one vector for one side of the face and one for the other; in a cel-shaded style, a sharp crease in the middle of the face isn’t necessarily so bad, but it isn’t good here, so I had to make several: a front, a front-side, and a side. I could make even more if I wanted to, but I think blending between those is enough.

By using weight paints, I could give it smooth shading at the end. While I was working on it, I used a hard-edge weight paint brush; no need to make it smooth by blurring until I knew it was going to be good. Although, next time I could build the shapes into the topology a bit, too, and just smooth out their edges. Although I’m not too fond of that idea; it seems like it could mess with the deformation in animation.
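In rough GLSL terms, the idea looks something like the sketch below. This isn’t the actual Normal Edit modifier stack, just the maths it ends up doing; the vectors and weight names are mine, standing in for the painted vertex groups.

```glsl
// Sketch only: blending fixed directions by painted weights, the way the
// front / front-side / side vertex groups drive the normal editing.
vec3 blend_stylised_normal(vec3 original_normal,
                           float w_front, float w_front_side, float w_side)
{
    // One fixed direction per vertex group, in object space (placeholder values).
    vec3 FRONT      = vec3(0.0, -1.0, 0.0);
    vec3 FRONT_SIDE = normalize(vec3(1.0, -1.0, 0.0));
    vec3 SIDE       = vec3(1.0, 0.0, 0.0);

    // Smooth (blurred) weights give smooth shading; hard 0/1 weights give
    // the snapping edge described above.
    vec3 edited = FRONT * w_front + FRONT_SIDE * w_front_side + SIDE * w_side;
    float total = clamp(w_front + w_front_side + w_side, 0.0, 1.0);

    // Where nothing is painted, fall back to the real normal.
    return normalize(mix(original_normal, edited, total));
}
```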

It took a while, since it’s my first time trying this method, but I was satisfied with the results.

I really liked the face shading here. Although, it’s difficult to get it right from all angles.

I also did some normal editing on his armour; this is an incomplete version, which you can tell from the face being non-smooth. On the armour, I tried for a bit of a simpler version. Just a front, back, left and right group. I think it made it more prone to being dark, though. I tried to follow the shape of the armour. I’ll have to practice this more. But, I liked how it came out, mostly.

I’m also on V10 of my shader. Right now, I’m using Toonkit for Cycles. It’s convenient in some ways. It lets you isolate the shading of a specific light, and it lets me use an outline, which I used to make a softer edge that won’t mess up the way fresnel does.

The trouble is, as it’s Cycles, it’s a lot slower. It also doesn’t give me any way to account for reflections, unlike Eevee. I think I could replicate it in Malt/BEER using GLSL, given a bit of time, but I tried it and the performance was shockingly bad. It seems my computer’s VRAM is way lower than it should be.

Ultimately, I was very disappointed, though. I had trouble rigging it, and just got frustrated and finished it so I could be finished.

It just looks bad, sigh. I couldn’t control the linework well, and after baking the normals to textures, they had some artifacts. The face doesn’t look good now – maybe in part because of the colours – and it’s stiff as fuck. And because it was Cycles, it was difficult to see these things before rendering. Plus, some of them probably would’ve gone away if I’d used more samples, but it would take hours to render. I need to investigate more and find a solution.

Lightning Boy Studio released their shader for Eevee. It’s quite good.

I don’t intend to buy it, though. I can already do pretty much everything it can. But what catches my eye is how it can isolate different lights. I’d known for a while that it’s possible using drivers, but I don’t like to use them, because it means you either have to keep one light with one character no matter what you do, or go to the hassle of resetting it every time. Being able to do that in real time would be very convenient. The trouble is, Eevee still has no way to give me the outlines I need. I don’t really want to drop that feature, because it’s important for getting an uneven edge and making things look less 3d without compromising the linework with actual displacement. But it’s worth looking into. I want to isolate lights, but in a way that minimises the work I have to do; one or two clicks would be best. Although, my shader in general seems to choke Eevee, so even if I could do that, I’m not sure how much good it would do.

All in all, I’m frustrated, but I still have goals. I want to make another model. And then another one. And another one after that. Each time, I can learn. I can make the next one better. I want to make the next one better. I don’t want to be bogged down hating myself and what I’ve made anymore. I’m not really one for new year’s resolutions, but I do want to make a lot of art this year.

I’ll make another model of Serph when I’m a bit better. A better one. But not now. Not scrapping this one to do it.

Progress

I’ve had a bit of time off work recently. Unfortunately, I’ve been having to move my sleeping pattern around quite severely, so I haven’t been able to use it all as much as I’d have liked to; it’s hard to concentrate when tired.

Still, I’ve been progressing, I think. I’ve almost finished my model of TKB.

I spent a while testing out different methods to see if I could capture shading information separately from the silhouette’s vectors. Object-based normals seemed to work, but don’t hold up from every angle. I don’t know how to fix that yet, urgh.

Modeling clothes and such was difficult…I’m not skilled with them. A lot of the time, they’d be sculpted with the creases and such already on them, but this is a model I want to pose in various ways, and it wouldn’t look natural if the creases were identical in every shot. So I’m keeping the basic mesh simple, and I’ll add those as and when I need them.

I’m using colour masks to separate different colours. Using the R, G and B channels separately, I can fit three colours into one image. I’m not good at painting on patterns and such, though. I’m very inexperienced with that.
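For reference, the idea is just three mixes driven by one channel each; a minimal GLSL sketch of it, with placeholder colour names:

```glsl
// One RGB mask texture drives three colour swaps (names are placeholders).
vec3 apply_colour_masks(vec3 mask, vec3 base,
                        vec3 colour_r, vec3 colour_g, vec3 colour_b)
{
    vec3 result = base;
    result = mix(result, colour_r, mask.r); // e.g. skin
    result = mix(result, colour_g, mask.g); // e.g. jacket
    result = mix(result, colour_b, mask.b); // e.g. trim
    return result;
}
```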

As it stands, I’ve applied most of the shaders to him, but…It doesn’t look good to me, or like watercolour. Obviously the shading needs correction, as I haven’t edited his normals yet, but it’s more than that. The silhouette is crap; they always need normal editing to become decent, and even then, it’s not as reliable as I’d like. What I really need is a two dimensional outline…Something that just considers the model as a flat shape, not something in 3d space. I could displace it using the camera, but I’d have to change that modifier every time I changed the camera. I need…Some way to filter out those details, to simplify the normals data it’s using. Normal editing works, but I can’t have one real set of normals for the silhouette and another real set for the shading. And without that, the edge is so perfect it looks very 3d.

I also did a bit of sketching. I’ve been testing out the Flatten brush. I usually use the Trim Dynamic brush, but the Flatten one seems more effective. I like to sculpt in a fairly planar way, so it’s good at that. I tried sculpting Heat, and retooled his sketch-head into Serph for fun.

I also practiced mouths. I think I have a better grasp now on how I’d like to model a mouth interior. I’ll try applying it to my next character models.

Watercolour V8

I’ve upgraded my watercolour shader again. I found it was performing badly, so I rewrote it entirely. It worked better for a while, but seems to have become slow again. I’m starting to think it’s just a Blender issue. I’ve also noticed that it’s become so complex that even recompiling it just for the material preview is very slow. It’s quite terrible.

I modified it to retain colour better on dark colours. Basically, it does the standard interpolation, then uses that result for a second interpolation with a set border, only falling off from the retention amount to white past that point.
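Put as a GLSL sketch, it’s roughly this; it’s my reading of my own node setup, with made-up parameter names, so take it as the idea rather than the exact maths:

```glsl
// Dark-colour retention: do the normal fade toward the canvas, then only let
// the colour fall off toward white past a set border.
vec3 retain_dark_colour(vec3 pigment, vec3 canvas, float fade,
                        float border, float retention)
{
    // Standard interpolation toward the canvas colour.
    vec3 standard = mix(pigment, canvas, fade);

    // Hold the colour (at the retention amount) until the border, then fall
    // off from there to the canvas.
    float falloff = smoothstep(border, 1.0, fade);
    vec3 held = mix(pigment, standard, retention);
    return mix(held, canvas, falloff);
}
```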

I also added ambient occlusion. It wasn’t difficult, but it’s harder to get right than normal shading. I may just end up faking it instead.

Lastly, I changed my interpolation function. It used to just switch between them, but now I’ve given it the ability to interpolate between different types of blending, for more varied results and more control. It’s not performing well, though; I should interpolate between their respective factors rather than doing all of those interpolations and then blending the result.
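As a sketch (the node version is just maths nodes doing the same thing), the difference between the two approaches is something like this; for simple mixes they give the same answer, the second just does far less work:

```glsl
// Current, slower version: evaluate every interpolation, then blend the results.
vec3 blend_results(vec3 a, vec3 b, float fac, float curve_mix)
{
    vec3 linear_result = mix(a, b, fac);
    vec3 smooth_result = mix(a, b, smoothstep(0.0, 1.0, fac));
    return mix(linear_result, smooth_result, curve_mix);
}

// Planned, cheaper version: blend the factors first, then interpolate once.
vec3 blend_factors(vec3 a, vec3 b, float fac, float curve_mix)
{
    float f = mix(fac, smoothstep(0.0, 1.0, fac), curve_mix);
    return mix(a, b, f);
}
```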

Somehow, this shader is a lot slower than V6 and V7. It might be the AO, or something else; I need to get to the bottom of it, because I need it to be fast and responsive.

I’m also finding I need to work on the transparency. Things like sleeves are still an issue. I can bypass it by manually masking out bits that would be covered by worn clothing, but it’s inconvenient. I’d prefer not to have to deal with such things. I need a way to determine which things are occluded from view by others and not render them, but that seems like it would require ray tracing, which Eevee isn’t capable of at the moment. The edge soak is also rather weak on the skin-coloured monkey head.

A way to determine if it’s being viewed in perspective or orthographic would also be nice. It doesn’t display right in ortho anymore, because I used my depth-detecting function to scale the edge soak and edge noise textures so that they’ll stay relatively consistent whether they’re up close or far away.
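For the record, the two pieces involved look roughly like this in GLSL; the depth scaling is only a sketch of what my function does, and the orthographic check is one possible way to detect it, not something I’ve wired in yet:

```glsl
// Scale texture coordinates by view depth so the edge soak and edge noise
// stay roughly the same size on screen whether near or far. In orthographic
// views depth no longer relates to on-screen size, which is why it misbehaves.
vec3 scale_coords_by_depth(vec3 coords, float view_depth, float reference_depth)
{
    // Further away: fewer repetitions, so the pattern keeps its screen size.
    return coords * (reference_depth / max(view_depth, 0.001));
}

// One way to tell orthographic from perspective: the bottom-right element of
// the projection matrix is 1.0 for orthographic and 0.0 for perspective.
bool is_orthographic(mat4 projection)
{
    return projection[3][3] == 1.0;
}
```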

I still have a very long way to go before this is what I want it to be. Still, this is progress. I also finished a character model of Maya, from Persona 2. By doing that, I worked out things I need to change. Next, I’ll make a post about that, then next I want to make Mariku, Ryou and the others.

Watercolour V6

It’s been a while since my last post. I mean to update more often, but I don’t end up doing it enough.

Since my last one, I revised my shader a few times. I’ve finally got one that I’m finding to be a decent mix of stability and functionality. I haven’t changed too much from before; I’ve swapped the procedurals out for textures baked from them, though it doesn’t seem to make much difference. I was also annoyed that the previous version applied texture all in one go at the end. It makes more sense to texture the main and shadow colours before they’re ever seen. Other than that, it’s mostly the same.

I recently worked on a model of the Kaibacorp building for Millie. I decided I’d practice with it.

I applied the shader to it, and I’m quite satisfied with the results. It’s…..Not perfect, but better. It seems to work better on static objects than on organic things like character models. It’s less tricky….Characters have a lot more that can trip you up, I find.

I rendered two versions. One in standard perspective. The second is using a fake perspective on top of the usual one, made with the Lattice modifier. Faking perspective is very important to give it a 2d touch, so I wanted to experiment with it a bit. I should use a Mesh Deform in future, I think, for greater control, but the Lattice is quite useful. It’s quick and easy, whereas a Mesh Deform takes time to calculate. For this kind of perspective, I think it’s quite helpful. I could probably have exaggerated it more.

I also tried applying it for an experiment to a character mesh I’ve been sculpting. I’m still struggling, unfortunately, with getting satisfactory, crisp character models. It’s the same problems….The eyes, the mouths, nails….Those small details. I could model them onto the retopo mesh and just not use a normal map, but it seems…..Reductive. I want to be able to capture those details, without having to have a huge polycount. There are some times when I might want to throw out the normal mapped detail, but I’d rather have it conveniently saved to a texture map the rest of the time. Lower poly meshes are easier to work with and perform better.

In this case, that’s a work in progress mesh of one of my D&D characters, Nagi, a half-orc wizard. I’ve had difficulty with the paper texture on grey colours; with my current settings, I found it was too dark, so I’ve recalibrated it. The way my shader applies paper texture and edge soak according to what colour is used couldn’t be called physically correct, I think. It’s certainly no simulation of real watercolour. I’m just going by what appears right, what looks and feels right based on what I’ve seen. I think that’s probably a better way to go about it, as far as art goes.

I want to work on the edge soak, though. It’s not very prominent there, and I’ve found at greater sizes it seems to just….Dissolve.

At any rate, I feel like I’m making progress. I want to be able to update more soon.

Effie

So, I finally finished a model.

I wanted to make my Mariku model, but I decided to do a test model of another character first for practice and to see how my workflow would go. I decided to make one of Effie from Descender, before she was Queen Between.

I did two different lighting angles to test it. I’m fairly satisfied with my shader at this point….But not with how I’m using it. I need to get more used to it, and to the adjustments I need to make to account for it. For example, the legs show through in several places on the two images, where I forgot to turn on the clipping mask. I’d like a way to render them all as if they were 2d, if possible, all on one alpha layer, but I don’t know how I’d go about that with them as separate objects.

I also see I should change my linework. I was quite happy with the sketchy effect I was getting, but in practice, I don’t have the control I want; the nostrils, for example, get lines when I wouldn’t want them to, and it’s just…Inconvenient having to wait to see how it’ll look. I want as close to real time feedback as I can get.

I also need to work out the rigging. I used a proper armature this time, and then used the Data Transfer modifier to transfer the weights onto the clothes. She lacks her jacket in the render because I couldn’t get it to copy them and didn’t want to have to stop and rig that manually. I wanted to use the Mesh Deform modifier, but it was finicky and unreliable on this model, and having to unbind and rebind everything every time I changed the cage would be a pain. I also haven’t found a solution for unexpected behaviour triggered by having mesh deform and armatures at the same time.

I also just need to get better. The mesh quality is just not good enough. It doesn’t hold up to close ups and the anatomy isn’t good enough. Especially the legs. I’m not good at them.

Plus, I need to work out a better way to do the colours. I mixed them individually where needed, but because each colour has a set of values to go with it, it’s inconvenient. It’s just awkward. I could use two masks, one for total main colour and one for total shadow, but I worry it would damage the watercolour effect I’m going for, being too perfect and clean, whereas with multiple colours, the masks can be modified, such as adding edge soak at the borders, etc.

Lastly, I need to change how I’ll alter shading. The shader just won’t perform well enough to modify it in realtime with vertex paint as I’d wanted to. My experience of Blender’s texture painting makes me think it probably won’t handle using the texture-based method in a real scene, either. Normal editing is a nuisance, though. I’m uncertain what to do about it.

Still, I want to work something out. I’m making progress. I just wish I didn’t feel like I was constantly at the “If it was just a bit better…” stage.

Pseudowatercolour V2 And Effie

I’ve been working hard recently. Real life has been a bit tricky, with the coronavirus making people panic. Work has been more hectic because of it, so I’ve had more overtime, and I’ve been trying to make the most of my free time. With that, I’ve made a fair few changes to my pseudowatercolour shader, and I’m liking the results.

A problem I had before was that it didn’t work well for dark colours. I tried applying it to my Mariku model, but it looked so wrong. The problem was that it was also blending to the background colour, white. But in real paint, it probably wouldn’t do that; darker pigments seem to stain more, so a dark brown would probably just fade at the edges to a less dark brown. So, I added a new input for it, Maximum Blend, that defines how much the colour can blend with the canvas colour at most. This doesn’t affect the transparency, allowing me to keep my uneven transparent outer edge, while avoiding it looking too unnatural. I tried some more tests on my test models, including multiple colours this time.
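In GLSL terms, the new input amounts to something like this sketch; the names are mine and the real thing is a node group, but the clamp is the whole trick:

```glsl
// Maximum Blend: limit how far the pigment may fade toward the canvas colour,
// so dark colours fade to a lighter version of themselves rather than to white.
// Transparency is handled separately, so the uneven outer edge is kept.
vec3 edge_fade_with_max_blend(vec3 pigment, vec3 canvas,
                              float fade, float maximum_blend)
{
    return mix(pigment, canvas, min(fade, maximum_blend));
}
```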

I think it’s more effective than before, and I’m quite satisfied, mostly, with how the dark colours look here. The lighter ones look better, though.

In any case, there are still problems. The main one being performance. Frankly, it’s insufficient. I might be using a laptop, but even so, it’s far too slow for my liking when I alter values or use multiple colours. In this instance, I used two copies of the shader mixed with a texture. So it’s running the entire thing twice, and I found it slow.

A more efficient way to do it would be to mix the values the shader uses beforehand, like the colour ramp inputs, mix the colours, etc. I tried making a node group for it.

Unfortunately, as you can see, it’s obscenely long. It’s ridiculous. Technically, it works, but it’s just not sufficient. It’s clunky and hard to use, and takes a long time to plug in. So, my next move will be to make it more efficient. I’m going to pack as many of those values into single inputs and outputs as possible. For example, I could combine the values for Silhouette Ramp Low, Silhouette Ramp High, and Maximum Blend into a colour. Colours are clamped at 0, though, so I think I’ll use vectors instead. I’ll need to make one function to convert the full value set into the packed version, another to unpack them, and a version that mixes two sets using the packed versions as inputs. Then I should have less to plug in to each thing.
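Written out as GLSL (in the node tree it would just be Combine/Separate XYZ and a vector mix), the pack/unpack/mix trio I have in mind would look roughly like this:

```glsl
// Pack three related settings into one vector so node groups need one socket
// instead of three. Names match the inputs mentioned above.
vec3 pack_silhouette_settings(float ramp_low, float ramp_high, float max_blend)
{
    return vec3(ramp_low, ramp_high, max_blend);
}

void unpack_silhouette_settings(vec3 settings,
                                out float ramp_low, out float ramp_high,
                                out float max_blend)
{
    ramp_low  = settings.x;
    ramp_high = settings.y;
    max_blend = settings.z;
}

// Mixing two materials' settings becomes a single vector mix.
vec3 mix_silhouette_settings(vec3 a, vec3 b, float fac)
{
    return mix(a, b, fac);
}
```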

I’m also going to add back in some features I took out. I thought that modifying things by viewing angle was making the node too large, but after thinking about it and testing it, it would be useful; I just need to use it more carefully. I’m also not currently seeing as much use for the depth as I’d thought before, so I’ll probably remove it. Since I can get the depth from the node group I made, I could stick it in as an input anyway, using it as the override mask, rather than it needing to be internal.

I have more to write soon, but this is it for now. I finally, mostly, solved my sculpting problem with smoothing, so I’ll have proper models to show soon. I’m currently working on one of Effie from Descender, for fun and practice. If I can make this work, I can definitely make Mariku work.

Pseudowatercolour V1, Continued

Since my previous post, I’ve been working more on my pseudowatercolour shader.

For starters, in the end, I abandoned the screen door transparency. I still have the nodes I made for it, and might use them again in the future, but I couldn’t get them to look close enough for my satisfaction right now. I think the problem is mainly with the patterns I used in each section. They were made mathematically, to avoid wasting texture slots, but because of that, they’re not easy to work with. I need to set aside some time to make better patterns, and probably modify the node groups. It currently has six steps, but I’d like to bump that up to eight smoother ones for a better transition. I found that the Alpha Blend option gave sufficient results, with less of a performance hit than I’d had previously.

I also started applying it to a person-shaped mesh, rather than the Suzanne heads I’ve been using up till now. I found that what looks good on those doesn’t necessarily apply to an overall model. A big problem, for example, was how the depth shading I was using changed dramatically from a front angle to a top angle. On a tall but narrow character like a human, what gave a smooth gradient at the front gave a bad, very harsh result from the top.

To fix that, I made a function that outputs a value from black to white according to the angle the model is being viewed from. It uses the Object coordinate and Camera coordinate to tell where it’s being viewed from relative to the object’s orientation. Currently, I just have it check the top, bottom, left and right, but I could easily add options for the front and back too, though I’ve not really needed to, since those are generally fairly easy. Using that, I can modify the use of the depth based on the view. Initially, I wanted to use it to modify how much depth is used, but I decided instead to use it to modify how much the depth is blended with the fresnel, since from some views the depth isn’t needed to smooth things and just gets in the way. I also remade the function that adds noise to edges, such as the outline or shading.
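Roughly, the view-check function does something like the sketch below; the axis conventions and names are mine, and the real version is built from Object and Camera coordinate nodes rather than GLSL:

```glsl
// Output a 0-1 mask per viewing side, from the view direction expressed in
// the object's local space.
void view_side_masks(vec3 view_dir_object_space,
                     out float from_top, out float from_bottom,
                     out float from_left, out float from_right)
{
    vec3 v = normalize(view_dir_object_space);
    // Dot against the object's local axes; clamp so opposite views read 0.
    from_top    = max(dot(v, vec3(0.0, 0.0, -1.0)), 0.0);
    from_bottom = max(dot(v, vec3(0.0, 0.0,  1.0)), 0.0);
    from_left   = max(dot(v, vec3( 1.0, 0.0, 0.0)), 0.0);
    from_right  = max(dot(v, vec3(-1.0, 0.0, 0.0)), 0.0);
}
```

The top mask is then what drives how much the depth gets blended with the fresnel.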

I also modified the fake paper texture, and generally tidied the shader up, putting it into a node group. The main reason I’d kept it outside of one before was to use colour ramps. Linear interpolation lets you control things from outside a node group using numbers, and drive them with other values, but it doesn’t look organic, because it’s linear rather than smooth. To solve that, I read up on different types of interpolation and implemented them. Using those, I got smoother, adequate results.
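The curves themselves are nothing exotic; in GLSL they’d be roughly these, built from maths nodes inside the group:

```glsl
// Linear interpolation between two thresholds (what a colour ramp in linear
// mode gives you), plus two smoother alternatives.
float interp_linear(float low, float high, float x)
{
    return clamp((x - low) / max(high - low, 0.0001), 0.0, 1.0);
}

float interp_smooth(float low, float high, float x)
{
    return smoothstep(low, high, x); // 3t^2 - 2t^3
}

float interp_smoother(float low, float high, float x)
{
    float t = interp_linear(low, high, x);
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0); // Perlin's smootherstep
}
```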

I also modified the edge soak to be driven by maths, rather than a colour ramp, so the strength and size are determined that way, too. The end result is a very long nodegroup, but I think it has a good amount of control.

The last thing I did was revise my linework. I hadn’t given it much time until now, but it really can be a big help to make something look more 2d and natural. I used two layers, basing it on Dustin Nguyen’s linework in Ascender.

I noticed that quite often, lighter sketch lines can be seen in his art, with stronger, more final lines over the top. It adds texture to them, and I thought applying a similar effect could help make my work look more 2d and hand-made. I used varying amounts of Sinus Displacement in Freestyle on them, to make sure the lines don’t perfectly match the shapes. On the test background piece, though, I used one with very little; I think it looks okay for characters’ lines to be uneven, but environmental pieces need straighter lines.

I think the linework really helps it look more natural. Without them, it’s unsatisfying. It looks too 3d. I think it’s the white edges. I used transparency with a threshold and noise so they don’t perfectly match the 3d shape, but I don’t think it’s enough. I’ll have to work on that; if just taking the lineart away makes it look 3d, it isn’t good enough.

Still, it is progress, and for once I’m feeling relatively satisfied with the results.

Next, I need to improve the outline of the models themselves, and start to add more options. As it stands, it’s only diffuse shading; I want to be able to account for things like reflections and subsurface scattering. Not all of the options PBR shaders have will work in Eevee, since I’m using the Shader to RGB node, but it should be possible to fake some of them. I also want the shader to account for the colour of lights, and for multiple lights. Blender NPR recently did a video showcasing a method that allows up to three lights, but I think I know a different way to get the results from different lights and their colours.

I’ll be adding that in later, and saving my progress on the shader as I go. But for now, I want to focus on sculpting and applying the shader I do have to them. I can learn and implement more as I go. I want more to show for myself than test files. I’m going to apply it to my Mariku model, and the others, too.

Pseudowatercolour V1

I’ve nearly finished a Ryou model recently. Although, having done that, I had the problem that I didn’t have a good shader to apply to it.

I’ve spent quite a lot of time in research and experimentation for various aspects of my shaders, for fun and to use in my art, but I’ve never really got something I would say is complete and reliable. Perhaps because I was too perfectionist about it, and nitpicked a lot over 3d elements. Making 3d that’s a convincing facsimile of 2d has been a goal of mine for a long time. I always wanted to be able to draw well, but the amount of work it needed was daunting, and I got put off. It’s ironic that with the amount of effort I’ve put into NPR to imitate it, I could’ve just learned the real stuff by now.

Still, I’ve been trying to progress. I want to achieve what I set out to do this year, making more art and models, so I made a Ryou base mesh. From that, I want to derive several, like Yami Bakura, fem Ryou, au versions. With that, I needed a shader, so I went back to my previous efforts.

My previous work in steps; it may look similar, but a lot of them derive their appearance differently.

I was dissatisfied with what I had previously, though, so I reconsidered my approach. The biggest problem was the silhouette. It was too 3D: fresnel, the easiest real time method I knew of to get something resembling an outline, captured too much detail. I considered editing the normals, but then I’d also need a correct set of original normals for the shading, and having multiple sets baked seemed like too much of a pain. I also experimented with using nodes to blur a texture that defines the normals, to try to get a blurry, and thus less detail-accurate, fresnel, but it didn’t work when viewing it from different angles. Then I came across this, a discussion on how to, essentially, replicate a depth pass with a shader.

I replicated the effect, so now I have that available to me. But more importantly, it made me remember a previous technique I had, using depth to get a shade.

I’d written this off at the time because the results weren’t accurate enough, but I did remember it. The depth pass made me consider that I could use it as part of the solution, if not the whole thing itself. The most problematic details from fresnel were at the front. So, I could use the model’s depth as a mix factor to get the nice smoothness it has at the front, while keeping the overall shape detail the fresnel captures.

Doing it that way, I was able to get a much more pleasing result.

This new shader uses the depth to get smoothness at the front, and to be controllable, while keeping most of the silhouette. I also made some more node groups to add a false paper texture according to how coloured parts are, small variation, and, again, noise at the edges. I also added more features to control the shading, like the ability to set the minimum amount of colour an area must always have, for tricky areas or ones needing constant detail. Although, this depth+fresnel method does need tweaking when the view is changed, and it relies on the object’s origin being its centre of mass. That does give it some flexibility too, though, if I wanted to move the origin to centre the effect on a specific point instead of the actual centre.
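As a GLSL sketch, the core of the silhouette is roughly this; it’s one reading of the depth-plus-fresnel idea with made-up names, not the exact node tree:

```glsl
// Fresnel keeps the overall shape but picks up too much frontal detail;
// a 0-1 depth measured from the object's origin smooths the front out.
float watercolour_silhouette(vec3 normal, vec3 view_dir,
                             float depth_from_origin, float depth_influence)
{
    float facing  = clamp(dot(normalize(normal), normalize(view_dir)), 0.0, 1.0);
    float fresnel = 1.0 - facing;
    return mix(fresnel, depth_from_origin, depth_influence);
}
```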

I’m quite happy with how this looks, for once. It’s not perfect, but it is a step up from previous attempts. There is something else I did differently here, too: the transparency. Ideally, I’d like to use Blender’s hashed transparency. It provides the most natural looking results, and looks smooth.

However, I find it’s not as responsive in the viewport. It also takes longer to see what’s going on, since it’s grainy for a moment, and is harder to judge. The difference isn’t too bad, but I’m on a laptop; I want all the performance I can get. Especially since the performance loss will likely multiply significantly in a scene with many objects.

So, I considered using blended transparency. It’s quick, it’s smooth. But, it doesn’t seem to work quite right. I have to make sure to set backfaces to be rendered, and then it doesn’t show correctly. Switching that off, however, seems to work.

But it hadn’t worked quite right for me before, and I want to keep performance as good as I can get it, so I thought about the simplest option, alpha clipping. For that to work, it needs a game-style transparency, called screen door transparency. It’s not really transparent, but by hiding more pixels, you give the illusion that it is.

Blender doesn’t have anything like that by default, so I made a group node myself that does it in six steps, converting a Value input to work with it. I think the results are decent, and it’s more responsive in the viewport than hashed. It’s more digital and 3d looking, though; I may end up using Blend, if it works sufficiently. Disappointingly, the performance at render time is about the same as hashed, which is strange. It would be frustrating if Blend works just fine after all the time I spent working out how to do this. Although I do think this way gives it some distinction.
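For reference, a minimal GLSL version of this kind of screen door transparency looks something like the sketch below; my node group uses its own mathematical patterns and a Value input rather than this exact Bayer matrix:

```glsl
// Ordered dithering: quantise alpha into six steps, compare against a 4x4
// pattern, and hide the fragments that fail the test to fake transparency.
const float BAYER_4X4[16] = float[16](
     0.0,  8.0,  2.0, 10.0,
    12.0,  4.0, 14.0,  6.0,
     3.0, 11.0,  1.0,  9.0,
    15.0,  7.0, 13.0,  5.0
);

bool screen_door_visible(float alpha, vec2 frag_coord)
{
    int x = int(mod(frag_coord.x, 4.0));
    int y = int(mod(frag_coord.y, 4.0));
    float threshold = (BAYER_4X4[y * 4 + x] + 0.5) / 16.0;
    float stepped = floor(alpha * 6.0) / 6.0; // six visibility steps
    return stepped >= threshold; // false means the pixel gets clipped
}
```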

The overall size of the node set is massive, though. I could put it in a group by replacing the colour ramps with Remap Value nodes I made, but the control would be linear and insufficient.

In any case, I’m fairly satisfied with it for now. I can choose the canvas colour, main colour and shadow colour, control the falloff from the centre to emulate watercolour spreading, control transparency to emulate it being thinner the further from that point it is, and control the way the HSV is modified by the fake paper texture.

Next, I want to add more. I want to be able to add fake brush strokes by using a texture as a vector input, which I believe I know how to do but just haven’t yet; it works in a similar way to a normal map. I also want the colours to be dynamic, responding to coloured light. I already know how to do that, but need to check more how it would appear in Eevee.

Those can wait, though. My next task is to apply it to a model and make some art. But for now, I’m going to go to bed because my brain is slowly packing it in. It is 1AM, after all.

Shadow Catcher V1

Recently, I made a shadow catcher in Eevee.

An advantage Blender Internal had, and that a particular modified Blender build has with Cycles, was Light Groups. A light group allowed you to make a light affect only certain objects. It was convenient for NPR purposes. The other useful, and sorely missed, feature Internal had was the ability to define, in the material, whether it received shadows or not. That’s incredibly useful, because it means you can isolate shadows, not just regular shading, and define which areas should or shouldn’t take shadows.

For example, an anime character’s eyes and brows, if shaded normally, would likely have a strong, ugly shadow cast on them by polygon hair, which doesn’t look good. If you could isolate it that way, you could mask it out. That’s not possible by default in Eevee or Cycles.

But, using Eevee, I’ve found a way. I wanted to be able to isolate the shading from any given light. Especially since it was one of the techniques used in Guilty Gear Xrd to achieve such convincing false 2d.

They used a light on each individual character to give them specific control, rather than scene-only lighting. I’d like to explore scene-based shading, with lights in the world and so on, but it’s good to have this as an option. That made me consider isolating lighting results again. I did some research, and came across this Polycount thread.

I already knew shading could be faked by using the dot product of a direction and the surface normal, but that can’t capture shadows. I’d thought beforehand that I could use the RGB node to get the difference between fake shading and real shading if I could just make them match; what I lacked was a way to work out what direction the light was coming from, and that’s what that thread taught me.

Node setup by Polycount user Jekyll

By using this technique, I was able to get the light direction. Then, by simply using a Shader to RGB node with a diffuse shader and subtracting the false shading from it, I was able to isolate the shadows.
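In GLSL terms, the maths boils down to the sketch below, assuming a single directional light whose direction comes from those drivers; in Blender it’s the Shader to RGB output of a diffuse shader minus the dot-product fake of the same shading:

```glsl
// Fake shading is a pure lambert term with no shadows; the real, shadowed
// shading comes from the renderer. Where they differ, something is blocking
// the light, and that difference is the isolated shadow.
float isolate_shadow(vec3 surface_normal, vec3 light_direction, float lit_diffuse)
{
    float fake = max(dot(normalize(surface_normal), normalize(-light_direction)), 0.0);
    return clamp(fake - lit_diffuse, 0.0, 1.0);
}
```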

With this, I can catch shadows and then use whatever technique I’d like to mask them out.

The advantages of this technique are:

  • Able to identify shadows.
  • By using multiple lights, you can identify specific shadow types, like using two to distinguish contact shadows from regular shadows.
  • Can use multiple to distinguish hard and soft lights, then mix between them to preference.

The disadvantages:

  • It’s dependent on drivers. You have to manually select the light, and set up drivers for its location.
  • If the light is hidden, I’ve discovered the driver will break, meaning you have to reassign the object.
  • If you move it into a different scene and don’t bring the light, you’ll have to replace it and redo the drivers again.
  • It’s inconvenient to have to use multiple lights to get multiple types of shadow.
  • Using multiple lights makes the whole scene brighter. Objects not using the shadow catcher can’t account for this, and may be undesirably bright.
  • It’s inconvenient if the lights have to be changed significantly, like the number.
  • I believe it only works correctly with directional lights.

Having to set it up in such a time consuming way is the big problem for me. It’s not as if there’s just one box to select it in. It’s inconvenient, and especially when it breaks just from the light being hidden. That might be fine if we use the Guilty Gear Xrd method and only use a specific light or set of lights for it, but if we want to change things a lot, it’ll be a pain.

So, I’ve been doing some more experimenting, trying to recover that information somehow using the Texture Coordinates node, which can select the object. If I can, it will be much easier and quicker to modify. I’ve only had mixed results so far, nothing worth showing yet. Still, I’ve learned from this, so it’s not wasted. I’m hoping to get a convenient way of shadow catching working this way. The ability to modify how it’s shaded and shadowed in real time, not in compositing, would be extremely useful.