This tool is designed to streamline building and populating structures in a game. It was built over a few days of free time and has gone through roughly one major iteration so far. Based on a set of rules, room types are created and appropriate furniture for each room is placed. First, a random floorplan is generated based on the square footage the user requests. Walls and roofing are then placed around and over the floorplan. Rooms are assigned a type based on size, a hierarchy, and the presence of other rooms. Furnishings only appear in a valid room type, i.e. no toilets in the main living area and no beds in the washroom. Furniture generally lives along walls, and placement areas become invalid where doorways, low windows, or other objects claim the space. When there are multiple valid locations for a piece of furniture, the user can manually cycle through the options to find a more pleasing one. One of the more interesting parts of the tool is the probability matrix that lets the user control the mixture of assets, in this case the mixture of plain walls and two kinds of window treatments.
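The asset-mixture idea can be sketched as weighted random selection. This is a minimal illustration, not the tool's actual implementation: the asset names and weights below are made up for the example.

```python
import random

# Hypothetical mixture weights for wall segments; the real tool exposes
# these as a user-editable probability matrix.
wall_mixture = {
    "plain_wall": 0.5,
    "window_type_a": 0.3,
    "window_type_b": 0.2,
}

def pick_wall_asset(mixture, rng=random):
    """Pick one wall asset according to its mixture weight."""
    assets = list(mixture)
    weights = [mixture[a] for a in assets]
    return rng.choices(assets, weights=weights, k=1)[0]
```

Sampling one asset per wall segment this way reproduces the requested mixture on average while keeping each building unique.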
Lastly, the architecture can be stylized for art-direction reasons, so walls and corners are not confined to perfect 90-degree angles and roofs can dip, all while remaining in a fully procedural, non-destructive system. This is still a work in progress: some minor roofing scenarios could generate better results, and the connection piece for roof corners is still missing.
This is a tool developed to lay out trees efficiently and at scale while remaining highly art-directable. The idea is to place trees within a non-destructive closed curve. The user specifies a maximum tree count, but if the curve's area is not large enough, or there is not enough valid terrain under the curve, a tree will not be placed. The trees, rocks, and grass each have their own rules for what slope is valid for growth to occur. For trees, the user can also control how much of the terrain's normal is inherited, tilting the tree up to fully perpendicular with the uneven terrain. This assumes the slope of the terrain is valid for tree placement in the first place, which is itself a controllable setting.
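The two per-tree checks described above, a valid-slope test and a controllable amount of normal inheritance, can be sketched roughly as follows. This assumes a Y-up coordinate system and is only an illustration of the math, not the tool's code.

```python
import math

def slope_degrees(normal):
    """Angle between the terrain normal and world up (Y), in degrees."""
    nx, ny, nz = normal
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    cos_a = max(-1.0, min(1.0, ny / length))  # clamp for numerical safety
    return math.degrees(math.acos(cos_a))

def tree_up_vector(normal, inherit):
    """Blend world up with the terrain normal by `inherit` in [0, 1].

    inherit = 0 keeps the tree perfectly vertical; inherit = 1 makes it
    perpendicular to the (possibly uneven) terrain.
    """
    nx, ny, nz = normal
    ux = nx * inherit
    uy = ny * inherit + (1.0 - inherit)
    uz = nz * inherit
    length = math.sqrt(ux * ux + uy * uy + uz * uz)
    return (ux / length, uy / length, uz / length)
```

A candidate point would first be rejected if `slope_degrees` exceeds the growth rule for that asset type, and only then oriented using `tree_up_vector`.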
Multiple tree types, rocks, and grasses are fed into a probability matrix that lets the user intimately control the composition of each growth type. This could be used for changing seasons over time, or for performance gains by increasing the occurrence of cheaper trees. Trees are treated as bundles, so each brings its own small biome of rocks and grass. They also claim their own space, so you will not see grass growing through rocks or trees growing on rocks. Beyond that, obstruction objects can be designated which the trees, rocks, and grass will grow around and accommodate.
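The space-claiming behavior can be sketched as a simple rejection test: each placed tree or rock reserves a disc on the ground plane, and later candidates inside any claimed disc are discarded. The data shapes here are assumptions for illustration, not the tool's API.

```python
def filter_claimed(candidates, claims):
    """Drop candidate (x, z) points that fall inside any claimed disc.

    `claims` is a list of (x, z, radius) tuples, e.g. the footprint a
    tree or rock reserves for itself. Obstruction objects could be fed
    in as extra claims so growth accommodates them the same way.
    """
    kept = []
    for (px, pz) in candidates:
        blocked = any(
            (px - cx) ** 2 + (pz - cz) ** 2 < r * r
            for cx, cz, r in claims
        )
        if not blocked:
            kept.append((px, pz))
    return kept
```

Running grass candidates through the claims of already-placed rocks and trees is what prevents grass poking through rocks.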
This tool is not meant to populate an entire level with one large curve; it is far more powerful when used to create many smaller tree clusters, where the artist can direct the location of the trees, the shape of the cluster, the composition of the trees, and the coverage.
A cheap and responsive, yet visually rich preview mode allows an artist to work at very interactive speeds.
I have been wanting to play with OpenCV for a while. While I have used it before, I had only done very basic things like augmenting images (basically things you could already do with Photoshop) and grabbing depth maps or segmenting images with my old 360 Kinect, not the more interesting things like facial recognition, object tracking, or extracting features and keypoints.
A pet project of mine for a while has been the idea of a 3D DCC having some sense of the data it is working on. If the program knew you were working with a car, a quadruped, or, in this case, a human head, it could make smarter, context-valid automations. This is a very early iteration of the tool.
First, using OpenCV, I identify the face, eyes, nose, and mouth of the 3D model. This allows more landmarks to be placed and validated on the mesh. A skull mesh is then placed, aligned, and scaled to fit within the volume of the head. Various anatomical skin depths are then taken into account across the head so that the skull sits a reasonable distance from the surface at all locations.
The registration could still be better and will be refined further. The next phase is building veins and placing facial muscles and tendons between the skull and the skin surface, which again adapt to the scan the tool is fed.
After a major release of the game I work on, some downtime was awarded, which meant more time to get back into the sandbox. There was some housecleaning: moving from Keras 1.0 to Keras 2.x, trying TensorFlow as my backend instead of Theano, which I had used exclusively before, and updating some other Python libraries. Here are some new images with lots of improvements over the tests from 12+ months ago. Convergence time is much quicker: instead of potentially a few dozen iterations, under 20 is usually more than enough, with quickly diminishing returns by about the tenth iteration.
Here is another batch, with the old pink image reworked for comparison. Not much difference, but a far quicker result. In general, providing more style input images seems to produce satisfactory results more often, meaning a higher success rate.