The scope of pidgeon generation is as follows:
For those not familiar with breeding sim games (Flight Rising, Wajas, etc.) - these games use generated images for creatures that you own and can breed to mix colours. This is achieved by drawing markings (and other parts) as individual images, then colourising and layering them to produce the final image. Breeding works by averaging the parents' colours (either taking a literal average of RGB values or finding a midpoint on a colour wheel) and randomly choosing markings from those the parents have. Markings can also have lowered opacity, which matters for breeding: a marking loses opacity when it isn't present in both parents. In this way, it's possible to run a game where players can produce new creatures indefinitely, and even with nice results.
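For a concrete picture, the usual maths looks roughly like this - the coin-flip odds and the 0.5 decay factor are made up for illustration, not taken from any particular game:

```php
<?php
// Hypothetical sketch of the usual breeding maths. Colours are [r, g, b].
function averageColour(array $a, array $b): array
{
    // Literal average of RGB values (the colour-wheel variant would
    // interpolate hue instead).
    return [
        intdiv($a[0] + $b[0], 2),
        intdiv($a[1] + $b[1], 2),
        intdiv($a[2] + $b[2], 2),
    ];
}

// Markings are inherited at random; a marking carried by only one parent
// is passed on at reduced opacity (0.5 is an invented decay factor).
function inheritMarkings(array $mum, array $dad): array
{
    $child = [];
    foreach ($mum + $dad as $name => $opacity) { // union of both parents' markings
        if (mt_rand(0, 1) === 0) {
            continue; // coin flip: this marking isn't inherited
        }
        $inBoth = isset($mum[$name], $dad[$name]);
        $child[$name] = $inBoth ? $opacity : $opacity * 0.5;
    }
    return $child;
}
```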
The one thing about this system of marking generation that I find restrictive, though, is that the markings have preset shapes and cannot change procedurally. To add a marking with slightly more or less coverage, you have to add a completely new drawing. (This can be seen in games like Wolvden, where a marking sometimes has 1-2 near-identical variants, making the marking catalogue extremely large.) What if you could use one image to define a marking and its variations on a sliding scale?
To do this, the marking images are created with blurred edges; I call these coverage maps (ok, I actually call them heatmaps in the code, but they're not really heatmaps). The opacity of each pixel determines how likely that part of the map is to be visible. The image generation rolls a threshold value for opacity, subtracts the threshold from each pixel's opacity, then normalises the result so that the most opaque part is fully opaque. The result is a subsection of the coverage map that can be coloured as usual, and regenerated exactly if the threshold value is known. The best part (I think) is that these can be designed by an artist (me), which takes the ugly out of pure random generation (I hope).
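A rough sketch of the thresholding step (this is a reconstruction, not the actual generator code - and note that older ImageMagick versions store alpha inverted as opacity, so the sign may need flipping):

```php
<?php
// Roll a threshold, cut the coverage map's alpha down by it, then
// re-stretch so the most opaque surviving pixel is fully opaque again.
$map = new Imagick('coverage_map.png');
$quantum = $map->getQuantumRange()['quantumRangeLong'];

$threshold = mt_rand(0, 80) / 100; // placeholder range: up to 80% opacity

// Subtract the threshold from every pixel's opacity; anything below it
// bottoms out at fully transparent.
$map->evaluateImage(Imagick::EVALUATE_SUBTRACT, $threshold * $quantum, Imagick::CHANNEL_ALPHA);

// Level the remaining alpha back up: the old maximum (1 - threshold)
// becomes the new white point, i.e. full opacity.
$map->levelImage(0, 1.0, (1 - $threshold) * $quantum, Imagick::CHANNEL_ALPHA);
```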
Honestly, this was done out of curiosity. I was looking at Turing patterns and wondering if I could generate them with Imagick, as you do. As it turns out, you can approximate the reaction-diffusion process with some Photoshop filters, which is probably a lot faster than actually trying to simulate the process itself with Imagick. The main things this needed were 1. Perlin noise, 2. a high pass filter, and 3. a blur filter. Only #3 comes with Imagick.
I did go over the details of how Perlin noise is generated (video playlist, technical writeup), but also found a PHP implementation of the code in the linked writeup, so I'm using that.
For the high pass filter - I didn't find anything in Imagick that could do this out of the box (although maybe I should look at ImageMagick itself?), so I wrote this one, which simply blurs the image and subtracts the blurred copy from the original. Running the parts in sequence produces...something remarkably like the Turing pattern in the video. I say "like" because it isn't quite there - it's kind of angular, additional loops don't really do much, and it has some artifacts here and there - but honestly I'm surprised it worked at all, and it's good enough for my purposes.
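Roughly, assuming the Perlin noise has already been rendered to an image (the loop count and sigmas here are placeholders, not the tuned values):

```php
<?php
// High pass: blur a copy, then subtract the blurred (low-frequency)
// copy from the original, leaving only the fine detail.
function highPass(Imagick $img, float $sigma): void
{
    $low = clone $img;
    $low->blurImage(0, $sigma);
    $img->compositeImage($low, Imagick::COMPOSITE_MINUS, 0, 0); // img - low
}

// The rough sequence: Perlin noise -> (high pass -> blur) a few times.
$img = new Imagick('perlin_noise.png');
for ($i = 0; $i < 3; $i++) {
    highPass($img, 6.0);
    $img->blurImage(0, 2.0);
}
```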
Further experimentation with the values produced some pretty funky results, which will hopefully show up on a pidge or two in the future :)
In order to give these guys the LQ image treatment, I initially tried using the quantize function, but it didn't pick up on the lineart well, so I ended up using remap with the palette + black. Unfortunately, remap doesn't work with transparent pixels...??? So I filled the background, remapped, then used composite to pull the image back out. Using a white fill for the background occasionally messed with the lineart pixels...using a black fill fixed this, but I feel like the glitched-looking pixels were pretty charming.
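In code, the workaround is roughly this (simplified - the palette here is assumed to be an Imagick whose pixels are the seven palette colours plus black):

```php
<?php
// Fill the background, remap, then restore the original transparency.
function lqRemap(Imagick $img, Imagick $palette): Imagick
{
    // Flatten onto a black fill (white occasionally disturbed the lineart).
    $flat = new Imagick();
    $flat->newImage($img->getImageWidth(), $img->getImageHeight(), 'black');
    $flat->compositeImage($img, Imagick::COMPOSITE_OVER, 0, 0);

    // Remap every pixel to the nearest palette colour, with dithering.
    $flat->remapImage($palette, Imagick::DITHERMETHOD_RIEMERSMA);

    // Pull the image back out by copying the original alpha channel over.
    $flat->compositeImage($img, Imagick::COMPOSITE_COPYOPACITY, 0, 0);
    return $flat;
}
```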
So, if I were to tell the generator to just randomly generate something, it'd most likely turn out butt ugly. With the implementation of some basic rules and hacks, I've managed to get them down to just ugly. I'll describe these in greater detail as I come to those steps.
To be honest, I'm not sure if all of these can be called Turing patterns...I'm not using a true reaction-diffusion simulation here, because I do not want to kill my server haha. Details are as mentioned above. This generates a unique, natural-looking pattern. I generate one of these per week, and the same pattern image is used for all pidges generated in that week.
First, a palette is generated for a group of pidges. Details on palette generation can be seen on the palette generation tab - I've also added some generation restrictions there that are technically unnecessary, but make the results look more pleasant. Why a group, and not one palette per pidge? Due to the random nature of marking assignment, it's rare that every single colour in a palette gets used in a pidge...so even if a nice palette is generated, maybe the marking generation doesn't do it justice the first time round. Maybe the pidge turns out mostly blank. Switching the palette for every pidge never gives a palette the chance to be explored. So, in testing, I found it was a lot more enjoyable to generate multiples based on the same palette before moving on.
During this process, I also generate indices for the palette. These are points on the palette (e.g. the first colour, the last colour, etc.) that mark a significant colour I can pull from later to coordinate colours. For example, on monochrome palettes, I may add an accent black and/or white - the indices for these accents go into the indices list. Later, when I pick colours for markings, the generator is certain to consider these accents (though there is still a chance they won't be picked). Out of the seven-colour palette, four indices are assigned.
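As a made-up illustration of the shape of this data:

```php
<?php
// A hypothetical monochrome palette: seven colours, four of which are
// flagged as significant via their indices (two greys plus the accent
// black and white mentioned above). Values invented for the sketch.
$palette = ['#1a1a1a', '#3d3d3d', '#636363', '#8c8c8c', '#b5b5b5', '#000000', '#ffffff'];
$indices = [1, 4, 5, 6]; // the marking step is guaranteed to consider these
```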
When I design marking-centric feral characters, I typically like to make sure the design is cohesive in some way, so parts of the body will share the same sort of colouration - for example, making the front/underside of the neck lighter than the body colour, then having this flow into a lighter underbelly, a lighter underside of the tail...etc. Translating this into marking generation means that parts of the body should be coloured in the same way. I call this theming: defining how to colour a part by formula. I can define multiple different ways of composing a theme, and the palette is then applied to those parts.
As a setup step, I draw a bunch of markings for pidges...these are used as the coverage maps described above. Each can be a "cover" or an "accent". There isn't a real difference here; "cover" just means a simple, large marking like a gradient over the part, and "accent" means a more detailed one, such as the edges of feathers, tail rings, etc. Generally speaking, I wouldn't want to use too many accents, as they clutter up the design, but multiple covers can be put on, even over the same accent. There is a third marking type, "pattern", which is where the Turing patterns come in. This is used even more sparingly, as patterns tend to be busy.
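The catalogue boils down to something like this (all field names here are invented for the sketch):

```php
<?php
// Invented catalogue entries; each coverage map is tied to a body part
// and tagged with its type.
$markings = [
    ['file' => 'neck_gradient.png', 'part' => 'neck', 'type' => 'cover'],
    ['file' => 'feather_edges.png', 'part' => 'wing', 'type' => 'accent'],
    ['file' => 'tail_rings.png',    'part' => 'tail', 'type' => 'accent'],
    ['file' => 'turing_weekly.png', 'part' => 'any',  'type' => 'pattern'],
];
```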
Then, at generation time, body parts are randomly selected and put into groups. A config defines which body parts are most likely to match up with one another to look pleasing (the parts aren't necessarily next to each other), and as parts are rolled from the list, the generator crawls through those connections to form a few theme groups. Not all parts have to be assigned to a group - if a part isn't picked, it simply shows the body's base colour. If a pattern is allowed to be generated, one of those groups is assigned a pattern; no more than one theme group at a time can have a pattern (this avoids clutter). After the theme groups are generated, a theme is picked from a list of themes, based on whether a pattern is allowed.
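Roughly, the crawl could look like this - the connection list, the group size cap, and all the odds here are invented for the sketch:

```php
<?php
// Parts that tend to look pleasing when coloured alike - not physical
// adjacency. All of this is invented for illustration.
$connections = [
    'neck'       => ['belly', 'tail_under'],
    'belly'      => ['neck', 'tail_under'],
    'tail_under' => ['belly'],
    'wing'       => ['back'],
    'back'       => ['wing'],
    'head'       => ['neck'],
];

function buildThemeGroups(array $connections, array $parts, int $maxGroups): array
{
    shuffle($parts);
    $assigned = [];
    $groups = [];
    foreach ($parts as $seed) {
        if (count($groups) >= $maxGroups || isset($assigned[$seed])) {
            continue;
        }
        if (mt_rand(0, 2) === 0) {
            continue; // unpicked parts just keep the base body colour
        }
        // Crawl outward from the seed through its connections.
        $group = [$seed];
        $assigned[$seed] = true;
        $queue = $connections[$seed] ?? [];
        while ($queue && count($group) < 3) { // group size cap is a guess
            $next = array_shift($queue);
            if (isset($assigned[$next]) || mt_rand(0, 1) === 0) {
                continue;
            }
            $group[] = $next;
            $assigned[$next] = true;
            $queue = array_merge($queue, $connections[$next] ?? []);
        }
        $groups[] = $group;
    }
    return $groups;
}

$groups = buildThemeGroups($connections, array_keys($connections), 3);

// At most one group ever gets a pattern.
$patternAllowed = mt_rand(0, 3) === 0; // placeholder odds
$patternGroup = ($patternAllowed && $groups) ? array_rand($groups) : null;
```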
Themes just define colour assignments and how the markings are layered - for example, a basic one would be just base (use "base" colour) + cover (use "cover" colour) + accent (use "accent" colour), and a more complex one would be base (use "base" colour) + cover (use "cover" colour) + pattern (use "pattern" colour, but clip another cover layer onto it with "pattern_1" colour for a gradiented pattern). Colours are also assigned to each of: base, cover, accent, pattern. These correspond neatly to the four colour indices defined in the palette. Each of the four also has two more colours selected to form a gradient - the colours next to those indices. (For example, if index 4 was picked for cover, then cover_1 and cover_2 can be 5 and 6 OR 3 and 2 respectively.) By now, all themes and colours are assigned to every selected part of the body.
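The neighbour-picking, as a sketch - how it behaves at the ends of the palette is a guess (I clamp here rather than wrap):

```php
<?php
// From the palette sketch above (repeated so this runs standalone).
$palette = ['#1a1a1a', '#3d3d3d', '#636363', '#8c8c8c', '#b5b5b5', '#000000', '#ffffff'];
$indices = [1, 4, 5, 6];

// Pair the four theme slots with the four significant palette indices.
$slots = array_combine(['base', 'cover', 'accent', 'pattern'], $indices);

// Pick a slot's two gradient neighbours from one side, chosen at random.
function gradientFor(array $palette, int $index): array
{
    $dir = mt_rand(0, 1) === 0 ? 1 : -1; // walk up OR down the palette
    $n = count($palette);
    $first  = min(max($index + $dir, 0), $n - 1);     // e.g. cover_1
    $second = min(max($index + 2 * $dir, 0), $n - 1); // e.g. cover_2
    return [$palette[$first], $palette[$second]];
}

// e.g. if cover landed on index 4, this yields indices [5, 6] or [3, 2].
[$cover1, $cover2] = gradientFor($palette, $slots['cover']);
```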
At this point, the hard part is done, and the body parts are finally coloured. Everything has already been defined, so it's just a matter of generating the layers, colouring them, and clipping them to each other. Then, the parts are combined. As a final graphical touch, any soft blending is removed by enforcing the 7-colour palette + black for the lineart (blended colours are dithered). At the moment, there's a random bug in local testing where the lineart black is somehow not included in the palette, but this only seems to happen with certain palettes (???) and very rarely. It has a cool effect, so I think it's a feature now.
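The colour-and-clip step, roughly - sizes, colours, and filenames are placeholders; COPYOPACITY gives a flat colour the marking's shape, and ATOP clips one layer onto another:

```php
<?php
// Give a flat colour the shape of a coverage map, then clip an accent
// layer onto it.
$w = 400; $h = 400; // placeholder canvas size

$cover = new Imagick();
$cover->newImage($w, $h, '#8c8c8c');
// COPYOPACITY: the cover takes its alpha (shape) from the coverage map.
$cover->compositeImage(new Imagick('neck_gradient.png'), Imagick::COMPOSITE_COPYOPACITY, 0, 0);

$accent = new Imagick();
$accent->newImage($w, $h, '#000000');
$accent->compositeImage(new Imagick('feather_edges.png'), Imagick::COMPOSITE_COPYOPACITY, 0, 0);

// ATOP: the accent is drawn only where the cover is already opaque.
$cover->compositeImage($accent, Imagick::COMPOSITE_ATOP, 0, 0);
```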
I think what I want to say is that I spent a lot of time thinking about how I come up with marking designs on characters, and how to distil even a fraction of the design decisions I make into an algorithm for random generation. These pidges are a lot more designed than they initially appear to be...which is understandable, honestly, with how ugly most of them look. They would look a lot uglier if I didn't curate my palette generation parameters, reduce colours, define layer combinations, etc. At a time when randomly generated NFT trash exists, when soulless AI image generators built on libraries of stolen art exist, all for the sake of turning a quick profit instead of creating art...I wanted to make a random generator that is neither boring nor deeply unethical, and that is full of my best attempts at designing the output. It is, of course, specialised for making garish pidgeons, but I hope you find these strange birds(?) as fun as I do. (Which is a lot - I've generated over 2000 of these in testing already.)