Serph, Perfectionism, 2021

I haven’t posted in quite a while. But actually, I’ve been sculpting a lot recently.

Around the end of this last year, I think I reached a breaking point with my perfectionism. It got to be so frustrating. Back when I started taking sculpting seriously, I was always wanting to get better. To make what I could sculpt match what I could imagine. But, because of that, I got frustrated time and time again. I was constantly scrapping things, and redoing them, and then scrapping that, too. And then I would hate myself for not finishing it, but I’d hate myself for what it was and think it wasn’t worthy of being finished anyway. Thinking back…It was a very toxic cycle. I don’t know if I can say I’ve escaped that. But…I am trying.

I felt like making some Digital Devil Saga fan art a while ago. I made a model of Serph, and some sketches of others. Since Serph is the leader and the player character, I decided to focus on him first and use him for practice. I've actually learned a lot from this.

For starters, I used one of my base meshes again. I think I improved a bit on the legs, but they’re still not good.

I had a lot of trouble making his armour. Clothes in general, I'm not very experienced with, let alone armour that doesn't exist in real life. I researched quite a bit on the ways ZBrush artists make them; I found different, potentially better ways I could have done them after the fact, gah. I ended up having to do some elements in Blender, mainly those round elements. I don't know what they are, so I couldn't reference them well. The jacket was also a challenge; it's symmetrical in overall shape, but the zip goes across it diagonally rather than down the middle. Fortunately, ZBrush had just updated with a slice tool for ZModeler, so I was able to use that to cut the zip in. I used ZBrush's default stuff for the zips, but I don't like to do that, so afterwards I looked it up and learned how to make my own. Next time, I'll use those instead.

The armour isn’t well done, honestly. The polygon density is all over the place; just compare the jacket to the sleeves, or the boots. It feels tacked together, which it kind of was. I felt clumsy trying to model it. I need to gain experience and skill with that, and make a better go of it next time I make clothes and armour.

I did try some different things this time, though. I found the ZRemeshed model was too dense and slow to work with, so I needed to keep the detail but on a lower-poly mesh. I found out I could draw lines on the model, convert them into polygroups, and use those to help guide the remesh, so I tried that.

It was actually quite effective. I'll need to play with it more next time and see if I can work out how to use it most effectively. I think a careful combination of polygroups and ZRemesher guide lines should give a good result. I need it to be relatively low poly; Blender's performance hasn't been great since 2.8.

On another note, I tried something different with the shading, too. It seems, in the NPR community, people right now use either Abnormal or the Data Transfer modifier to get good shading. But, Abnormal is too slow for me to use on anything like this. And the data transfer…It is handy, but I can't bring myself to like it. It's so…Uncontrolled. You just have to hope it gives you what you need. And, because it's going off of geometry positions, you can't really do anything that's too far away from the geometry, or super wonky. Limits.

So, this time, I tried painting vertex groups and using the Normal Edit modifier.

Each group was related to a vector. It's a bit similar to what anime-style NPR artists have done, but they can get away with a bit more, because they've been working in simpler, generally more moe, styles. Building the shading straight into your topology has problems, though: if a vertex is simply in or out of the vertex group, there's a hard edge, so the light will suddenly snap. And if you use any gradients with it, you'll have an area that gradually lightens as the light rotates while the fully lit area stays the same, which looks unnatural.

Those styles can also get away with things like using one vector for one side of the face and one for the other; in a cel-shaded style, a sharp crease down the middle of the face isn't necessarily so bad, but it's not good here, so I had to make several groups: a front, a front-side, and a side. I could make even more if I wanted to, but I think blending between those is enough.
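The modifier setup, as I understand it, boils down to steering each vertex's normal toward the direction assigned to its group, weighted by the painted value. Here's a rough sketch of that blend in plain Python; the group names and direction vectors are just illustrative, and this isn't Blender's exact Normal Edit math:

```python
import math

def blend_normal(weights, directions, original):
    """Blend a vertex normal toward several target directions.

    weights: dict of group name -> weight painted on this vertex
    directions: dict of group name -> target direction vector
    original: the vertex's original normal
    Any leftover weight falls back to the original normal.
    """
    total = min(sum(weights.values()), 1.0)
    x = original[0] * (1.0 - total)
    y = original[1] * (1.0 - total)
    z = original[2] * (1.0 - total)
    for name, w in weights.items():
        dx, dy, dz = directions[name]
        x += w * dx
        y += w * dy
        z += w * dz
    # Renormalise so the result is a unit normal again
    length = math.sqrt(x * x + y * y + z * z) or 1.0
    return (x / length, y / length, z / length)

# Three hypothetical face groups: front, front-side, side
dirs = {
    "front": (0.0, -1.0, 0.0),
    "front_side": (-0.7071, -0.7071, 0.0),
    "side": (-1.0, 0.0, 0.0),
}

# A vertex painted halfway between the front and front-side zones
n = blend_normal({"front": 0.5, "front_side": 0.5}, dirs, (0.0, 0.0, 1.0))
```

The appeal of doing it this way is that any leftover weight falls back to the original normal, which is what gives the smooth transition between groups once the weights are blurred.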

By using weight paints, I could give it smooth shading at the end. While I was working on it, I used a hard-edged weight paint brush; there was no need to smooth things by blurring until I knew it was going to be good. Next time, I could also build the shapes into the topology a bit and just smooth out their edges, although I'm not too fond of that idea; it seems like it could mess with the deformation in animation.

It took a while, since it’s my first time trying this method, but I was satisfied with the results.

I really liked the face shading here. Although, it’s difficult to get it right from all angles.

I also did some normal editing on his armour; this is an incomplete version, which you can tell from the face shading not being smoothed yet. On the armour, I tried a somewhat simpler setup: just front, back, left and right groups. I think that made it more prone to going dark, though. I tried to follow the shape of the armour. I'll have to practice this more, but I liked how it came out, mostly.

I'm also on V10 of my shader. Right now, I'm using ToonKit for Cycles. It's convenient in some ways: it lets you isolate the shading of a specific light, and it gives me an outline, which I used to make a softer edge that won't break the way Fresnel does.

The trouble is, as it's Cycles, it's a lot slower. It also doesn't give me any way to account for reflections, unlike Eevee. I think I could replicate it in Malt/BEER using GLSL, given a bit of time, but when I tried it, the performance was shockingly bad. It seems my computer's VRAM is far lower than it should be.

Ultimately, I was very disappointed, though. I had trouble rigging it, and just got frustrated and finished it so I could be finished.

It just looks bad, sigh. I couldn’t control the linework well, and after baking the normals to textures, they had some artifacts. The face doesn’t look good now – maybe in part because of the colours – and it’s stiff as fuck. And because it was Cycles, it was difficult to see these things before rendering. Plus, some of them probably would’ve gone away if I’d used more samples, but it would take hours to render. I need to investigate more and find a solution.

Lightning Boy Studio released their shader for Eevee. It’s quite good.

I don't intend to buy it, though. I can already do pretty much everything it can. But, what catches my eye is how it can isolate different lights. I'd known for a while that it's possible using drivers, but I don't like to use them, because it means you have to either keep one light with one character no matter what you do, or go to the hassle of resetting it every time. Being able to do that in real time would be very convenient. The trouble is, Eevee still has no way to give me the outlines I need. I don't really want to drop that feature, because it's important for getting an uneven edge and making things look less 3D without compromising the linework with actual displacement. But, it's worth looking into. I want to isolate lights, but in a way that minimises the work I have to do; one or two clicks would be best. Although, my shader in general seems to choke Eevee, so even if I could do that, I'm not sure how much good it would do.

All in all, I’m frustrated, but I still have goals. I want to make another model. And then another one. And another one after that. Each time, I can learn. I can make the next one better. I want to make the next one better. I don’t want to be bogged down hating myself and what I’ve made anymore. I’m not really one for new year’s resolutions, but I do want to make a lot of art this year.

I’ll make another model of Serph when I’m a bit better. A better one. But not now. Not scrapping this one to do it.

Frustration

Recently, I’ve been frustrated with my shaders.

None of them look the way I want them to. I'd like to achieve a painterly appearance, or watercolour, but those are quite elaborate styles. I sometimes think it would be better if I could settle for cel shading, but I just can't accept that. Or rather, I want to be able to do more than shade everything so simply. I'm not one of those people who are interested in NPR primarily, or purely, to replicate the style of anime. It's not that I look down on that style; it's just not my preference.

So, I've been doing more research, as usual. The big problem with using Fresnel or Layer Weight to pick out the rough shape of the object, or an outline, is that it can, and does, give unwanted lines. I dislike having to manually correct these, so I've been trying to think of ways to fix this. I've spent the last day or so experimenting with blurring the normals of the model to remove details.

How did I do this? Well, I’m not blurring the normals, exactly. I came across this video a while back. One of the features demonstrated is creating a false blur by distorting a texture.

23:58 for the blur technique.

I’d hoped I could apply the same technique straight to the normals, but it didn’t work. Instead, I baked the object’s original normals out to an image, and used that method to distort it.
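As I understand it, the distortion trick amounts to sampling the texture at many slightly jittered UVs and averaging the results, which softens detail without ever modifying the image itself. Here's a toy version in plain Python; the nearest-neighbour sampler, the jitter radius, and the sample count are all my own placeholders, not what the video uses:

```python
import random

def sample(image, u, v):
    """Nearest-neighbour lookup with clamped UVs; image is a 2D list of floats."""
    h, w = len(image), len(image[0])
    x = min(max(int(u * w), 0), w - 1)
    y = min(max(int(v * h), 0), h - 1)
    return image[y][x]

def fake_blur(image, u, v, radius=0.05, samples=32, seed=0):
    """Average many jittered lookups to imitate a blur, leaving the image untouched."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        du = rng.uniform(-radius, radius)
        dv = rng.uniform(-radius, radius)
        total += sample(image, u + du, v + dv)
    return total / samples

# A hard black/white edge down the middle of a 16x16 "texture"
img = [[0.0] * 8 + [1.0] * 8 for _ in range(16)]

sharp = sample(img, 0.5, 0.5)    # lands on the bright side of the edge
soft = fake_blur(img, 0.5, 0.5)  # jittered average straddles the edge
```

Because the source image is never changed, the amount of "blur" stays a live parameter you can dial up or down, which is what makes the method controllable.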

The result was effective, actually. I haven't removed anything from the default Suzanne head, but by blurring its normals and using them as the Normal input, the detail has been removed. It also darkened it for some reason, which I was able to compensate for by remapping the value by an amount equal to the distortion. I'd hoped to use this to get a blurred version of Fresnel I could use to make a better silhouette. However, when I tried it…

It only works from the front. I really don't know why this is. My best guess is that it's something to do with the normals texture. The colours don't appear the same way when I look at them as the original normals do, so perhaps that's why it doesn't display right. However, when I used the texture as the normals without blurring, it displayed correctly.

It’s quite frustrating. But at least I learned I can blur out detail with this method. That’ll likely come in useful.

The advantage of this method is that the original normals are a texture that isn’t being permanently modified, so it can be controlled as I please.

The disadvantage is, it's grainy. I haven't found a way to smooth it yet; though, for what I'm doing, a bit of grain may help it appear natural. Also, as it's just using the original normals from a texture, it can't be modified the way the real normals of the geometry can.

On another note, I came across an interesting method for modifying shading.

This technique comes up ~ 1 hour, 15 minutes in.

This one uses an image texture you paint on to control shading. The trick is to create a UV slot on the objects you want to modify, and then use the UV Project modifier to project UVs, like the Project From View UV mapping option, but in real time. Then, have an image texture using that UV to control the shading by adding or subtracting. As the objects will all reference the same image, you don't need to select individual objects to control them, and can modify everything in the shot at once.
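What UV Project is doing there, roughly, is a live Project From View: each point is run through the camera and its screen position becomes its UV. A minimal perspective version in plain Python, with a hypothetical camera sitting at the origin looking down -Z and a made-up focal length:

```python
def project_from_view(point, focal=2.0):
    """Map a 3D point in camera space to a 0..1 UV, Project From View style.

    point: (x, y, z) in camera space, with the camera looking down -Z.
    focal: an illustrative focal length; Blender derives this from the camera.
    """
    x, y, z = point
    if z >= 0:
        raise ValueError("point is behind the camera")
    # Perspective divide, then remap from [-1, 1] screen space to [0, 1] UV
    sx = focal * x / -z
    sy = focal * y / -z
    return ((sx + 1.0) / 2.0, (sy + 1.0) / 2.0)

# A point straight ahead of the camera lands in the middle of the image
uv = project_from_view((0.0, 0.0, -5.0))  # -> (0.5, 0.5)
```

Every pixel of the painted image then corresponds to a fixed spot on screen, which is why one texture can drive the shading of everything in the shot at once.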

The advantage of this method is that you can control them all at once as long as they’re referencing the texture. Additionally, if you make it the same resolution as your render – and you probably should – it shouldn’t appear pixelated. And since it’s an image, you could also edit it externally if you really needed to or wanted to. I find this method far more responsive as I paint than modifying vertex paint, and it can be far more precise than vertex paint would be on most sensible meshes. It’s extremely effective for cel shading.

The disadvantages are several. For one, it’s an image, so it’ll bloat your file size, potentially a lot if you’re rendering at high resolution. Also, because it’s an image, you can’t animate it the same way you could the vertex paint method; you’d have to use an image sequence, increasing the file size further and manually repainting each frame. Although, if replicating anime, that may be best, as it’s essentially what colouring in anime is like, and leaves room for human errors that make it appear more genuine. Lastly, if you change anything in the shot, you’ll have to repaint. Depending on the change, this can be small or large; if you move the camera, you’d have to redo the lot, or, if you were just panning, perhaps you could compensate using the Mapping Node or translation method. That makes it tricky if you want to change the camera angle or re-pose a character or such, though if you modify the shading last, it shouldn’t be as much of an issue. It’s also more difficult to make look natural on something with smooth shading.
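For the panning case, the compensation I have in mind via the Mapping node would just be shifting the painted texture's UVs by however far the frame moved, in render-resolution pixels. A sketch in plain Python; the pan amount and resolution here are placeholder numbers:

```python
def pan_offset_uv(uv, pan_pixels, resolution):
    """Shift a UV to follow a camera pan so the painted shading stays put.

    pan_pixels: (dx, dy), how far the frame moved, in pixels.
    resolution: (width, height) of the render / painted image.
    """
    u, v = uv
    dx, dy = pan_pixels
    w, h = resolution
    return (u + dx / w, v + dy / h)

# The camera panned 96 px right on a hypothetical 1920x1080 render:
# a spot painted at the centre is now looked up 96/1920 = 0.05 further along U
uv = pan_offset_uv((0.5, 0.5), (96, 0), (1920, 1080))
```

This only holds for a pure pan; any rotation or dolly changes the projection itself, which is why those cases would still need a repaint.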

Overall, I find there are advantages and disadvantages to this method, but it's certainly something I'll keep in mind. I might well use it next time I'm doing a render; I prefer the flexibility of vertex paint, being able to modify things after the fact without the shading work being rendered worthless…But the speed and precision of this texture method isn't something I can ignore. Custom shading is very likely the key to convincing 2D, or at least essential to it.

Anyway, that's what my research and testing has turned up for the moment. I want to experiment more. Reading "Video Watercolorization using Bidirectional Texture Advection", an old SIGGRAPH paper I found referenced in "Painting With Polygons", gave me some ideas; that's actually where my idea to blur the normals came from. It also seems several steps of their process are ones I'd used in my earlier compositing technique, though applied differently. I'll try experimenting more and see what I can turn up.