For my FMP I have to create 2 art tools that can be demonstrated in an Unreal Engine level. My plan is to create a sci-fi corridor generator loosely inspired by the bases on uncharted worlds in the original Mass Effect game. For my second tool I think it would be cool to create a room generator to work alongside the corridor tool but I appreciate that there would be a lot of logic overlap between the two, so I might instead create a custom Python script for UE that I can use to help with optimisation. I am not sure exactly what the Unreal Python API is capable of yet so I will have to do some initial research before planning what I can do with that tool.
For my corridor generator I plan to make it spline-based, similar to my pipe tool. This also gives me the chance to further develop my skills in Houdini, although this time I will be working with less procedural geometry and more modular prefabs. I could potentially use the Unreal PCG tools as well but I am not familiar with them. I will look into multiple options and decide which path will be best, balancing practising as many new things as possible against choosing something I know I will be able to complete within the bounds of the deadline, which is the 9th of May. This gives me 14 weeks to complete the project if we don't include the 3-week Easter break. I will most likely work through part of the holiday to give myself some extra buffer time towards the end of the project for final bug fixes and tweaks.
As I mentioned, this tool is inspired by the original Mass Effect game. There are several side-missions with bases or ships for the player to explore, however I found myself repeatedly exploring the same base or ship over and over again. This didn't take away from the experience at all, but it did get to the point where I didn't even have to open the map to figure out where I needed to go because I had memorised the layout. Part of me wonders whether this was a choice made to conserve disk space by reusing the same level over and over, or if it was a result of time constraints that left artists and level designers without the time to create bespoke locations for all the side missions. Interestingly, they found a clever way to disguise this in some cases by changing which doors were unlocked or the entrance the player used to access the base, which did give the layout some versatility.
Now, in stark contrast to that game, there is Starfield, which I stopped playing after exploring several completely different bases. I have a suspicion that all those bases were procedurally generated. How could I tell? I'm not sure. Can I say simply that the vibes were off? I expect it was something to do with how hollow they felt. There were an awful lot of empty corridors, and the layout didn't make instinctive sense. There are still plenty of awesome environments in Starfield, but they tend to be the ones that are clearly bespoke and not the heavily procedural ones.
Two examples of modular sci-fi corridors from Mass Effect 1 (Legendary Edition). Both are fairly plain and simple.
Two examples of Starfield environments, one very simple and modular and one still modular but with much more character. Both images from Maxim Chenel's ArtStation. I would like to try and make my environment feel more like the second image, at least in some key areas.
So how have both these games inspired my plan for this tool? Well, I would like to make something in between the very singular focused design of ME1 and the very broad voluminous design of Starfield. I don't want to make a tool so broad that it's used completely in place of bespoke design, but I also want to save artists time on the less interesting parts of level design (i.e. corridors) so that they can put more attention into the fun stuff like explorable rooms and overall layout. So my plan for this tool is to make something that can create a hollow hull, placing down the core structure that an artist needs to be able to play around in; I want to demonstrate this by creating a level using my tool to connect two interesting rooms for the player to explore. As a stretch goal I could procedurally place some basic assets like cables or crates, or maybe have some interesting modular variations so that the corridors don't stand out as too boring, but my focus is on creating a time-saving tool.
I learned the basics of procedural geometry in Houdini from my pipe tool, but I haven't yet used it with external prefabs already set up in engine, which is what I would like to learn for this project. My initial thoughts are that there will either be a way to import the geometry to Houdini (which seems inefficient but good for visualisation) or perhaps some way to link to an Unreal asset reference and then visualise in Houdini with placeholder assets. I will work with placeholder assets to get the function of the tool working, and then I will know the exact scale my modular pieces will need to fit to in order to work within the system. I haven't decided on the intended vibe of the sci-fi corridor, but I will need to decide whether it will feel open and spacey or scary and claustrophobic before I start modelling. I will also need to decide whether this is intended for a first person or third person experience because I expect that will have an impact on the height and width of the corridor. My aim is to have a basic Houdini test done and in engine by the middle of next week so that I can iron out my exact plans and write up an accurate brief for myself. However, because a lot of this project relies on skills I have yet to learn, I need to do some research before I commit to anything too strongly.
For my initial research I watched two tutorials:
Ana Opara's Lakehouse Tutorial (Volume 1) - This was really helpful for giving me more knowledge about VEX coding as well as assigning custom attributes
Simon Verstraete's Sci-Fi Level Generator Tutorial (Part 1) - This series is designed to work with Unity, not Unreal, but the basic logic is still transferable. I especially appreciated the section about automatically measuring convexity/concavity as this would have been something I tried to do manually and much less efficiently.
Initially I tried to emulate a system similar to that of the Lakehouse tutorial: taking an input, generating a point cloud, and snapping it to a grid. I then filled in geometry, extruded and resampled it, and then snapped it back to the grid. This was fun but didn't strike me as particularly efficient. I was creating an awful lot of points that I wasn't doing anything with, so I needed to find a different approach. This method was also causing me issues when it came to calculating intersections so I decided to try something else.
For my second attempt I tried something more similar to the Level Builder tutorial, where I take the spline input and snap that to a grid straight away. I then copy a grid at regular intervals along the spline to create a floorplan and then dissolve the flat edges so I am left with just the corners. This felt much cleaner and more efficient to work with, and I am glad I decided to switch early before I created any more systems within the tool.
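The grid-snapping step can be sketched in Python (the actual tool does this with Houdini nodes; the cell size here is a hypothetical value for illustration):

```python
def snap_to_grid(point, cell=2.0):
    """Snap a 3D point to the nearest grid intersection by rounding
    each component to the nearest multiple of the cell size."""
    return tuple(round(c / cell) * cell for c in point)

# Spline sample points land exactly on the grid:
snap_to_grid((1.2, 0.0, 2.9))  # -> (2.0, 0.0, 2.0)
```

Snapping the spline first means every later stage (floorplan grids, module placement) can assume clean, regularly spaced points.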
The basis of this tool is that it needs to figure out which modular pieces to assign where. From my perspective there are 4 main components to a corridor:
Walls - will spawn when the normal is 1 on the X OR Z axis, and 0 on the Y (Houdini is Y-up)
Corners - will spawn when the normal is 0 on the Y and not following the X or Z axis - I may be able to group based on curvature to sort corners into concave/convex
Floor - will spawn at the bottom of the walls, following the shape of the floorplan
Ceiling - will spawn at the top of the walls, following the shape of the floorplan
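The normal-based sorting above can be sketched in Python. This is illustrative only - in the tool this is done with Houdini groups, and the real floor/ceiling placement follows the floorplan rather than point normals; the group names and epsilon are my own:

```python
def classify_piece(normal, eps=1e-4):
    """Sort a point into a module group by its normal (Houdini is Y-up)."""
    nx, ny, nz = normal
    if abs(ny) > eps:
        # vertical component: a floor/ceiling point
        # (the real tool places these from the floorplan instead)
        return "floor_or_ceiling"
    if abs(abs(nx) - 1.0) < eps or abs(abs(nz) - 1.0) < eps:
        return "wall"    # horizontal and axis-aligned
    return "corner"      # horizontal but off-axis (diagonal at corners)
```

For example, a diagonal normal like (0.7071, 0, 0.7071) lands in the corner group, which is exactly the case the curvature grouping later refines into concave/convex.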
There are also doors, which I want to include in this tool, but I think doors are something that should be manually placed as they will have a significant impact on gameplay/level design. So I'm going to add a system similar to the one that allowed valves to be placed on the pipe tool, but for placing doors along the corridor.
After extruding the walls I can group the edges by curvature. This then allows me to sort the corners into two groups based on how convex/concave they are so that they will be assigned the correct modular piece in engine.
I used the Copy to Points node with some basic boxes in Houdini to test out how the modular piece assignment was working and at this point it looked like it was working perfectly. I did notice that there was an issue with clipping with the large walls so I had to adapt to assigning smaller walls when there wasn't enough space to fit a final large wall into the corridor. Alternatively I could have just swapped to all smaller walls but this would have involved changing my edge sampling and I think the larger walls will allow for more customisable modules and reduce repetition in the long run.
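The large-wall fallback amounts to a greedy fill: use large modules while they fit, then top up the remainder with small ones. A minimal sketch, where the module lengths (4 and 2 units) are assumptions for illustration:

```python
def fill_wall_run(length, large=4.0, small=2.0):
    """Greedily fill a straight wall run with large modules,
    switching to small ones when a large piece won't fit."""
    pieces = []
    while length >= large:
        pieces.append("wall_large")
        length -= large
    while length >= small:
        pieces.append("wall_small")
        length -= small
    return pieces

fill_wall_run(10.0)  # -> ['wall_large', 'wall_large', 'wall_small']
```

Using large modules wherever possible keeps the door open for more customisable wall variants later, as mentioned above.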
I made some quick modular test pieces in Plasticity as at this point topology and edge flow weren't my key concerns; I just needed something simple to test that I could get everything to align nicely and also to check the scale from a player perspective. At first I had intended to make this tool for a first person game because I assumed that would be easier. However, I really like third person games and I simply prefer that perspective for my own designs, so after testing out both options in engine using the Unreal default game modes I decided to pivot to third person. I knew this might mean I need to make the ceilings higher to allow the over-the-shoulder camera to sit at a comfortable angle, but that is something I can easily fix.
Initial in-engine test. At this point the modular pieces are being imported from the file into Houdini, and then the corridor is built directly in Houdini and exported to Unreal, which isn't how I want the final tool to work. I would prefer to have the prefabs in engine and have Houdini instance them from there.
The Houdini Engine documentation has this page about instancing in Unreal which was helpful but I couldn't quite get it working. It wasn't until I found this blog post by Simon Verstraete that I finally figured out how the instancing worked. For some reason the reference I get from assets in Unreal isn't what Houdini needs, I have to chop off the first couple directories for the path to read correctly, which is only mildly annoying.
Once I had this figured out I disconnected all the copy to points nodes and replaced them with attribute create nodes where I assigned the relevant static mesh paths to the unreal_instance attribute on each point. This meant I could no longer visualise the corridor in Houdini as all the geometry is kept on the engine side, but now the tool was working as I intended from the outset.
Once I got the attribute instancing to work I decided to implement a basic test of the manual door replacement system. The level designer can place a box over where they want a door to be and add it to the door override input and then a door will spawn instead of a wall. All this box does is create a bounding box in Houdini that groups any points inside and then overwrites whatever is currently in the unreal_instance attribute to be the path to the door mesh instead. However, my doors won't be static meshes, they'll be blueprints so that I can animate them opening and closing. Thankfully, Houdini can also instance blueprints. This was when I noticed something odd.
For every override box, two door blueprints were being added to the scene. The blueprints are created as children of the original HDA, unlike the static mesh instances which are Houdini components within the HDA until baking. This means I could see two child blueprints in the hierarchy even when I only had one box over one single wall. This was very odd and I couldn't figure out why it was happening just for the blueprints and nothing else.
Then I decided to take a break and work on my Python tool, the main function of which would be to count static mesh instances. As it turns out, all of the large walls were actually duplicating; there were two walls for each point. I still have no idea why! The geometry spreadsheet in Houdini would show 116 points, and if I used copy to points 116 objects would be created, but when I used attribute instancing it would create 232 walls! This is truly the strangest issue I have ever encountered in Houdini Engine. I reached out to forums and even the Tech Art Aid discord but nobody had any ideas. Eventually, I sat down and went through node by node and managed to isolate the issue to a fuse node earlier in the graph. I think that for some reason, despite the geometry spreadsheet showing the points had successfully fused from 232 down to 116, there was some kind of ghost in the machine that caused each fused point to be counted as two. What is really odd is that it works fine copying to points but not with attribute instancing. I am no Houdini expert but I think this might be a bug. If I get the chance I will report it to SideFX.
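The counting logic at the heart of that Python tool can be sketched like this. The real version gathers components through the Unreal Python API; here I just tally spawned mesh paths against the point counts Houdini reports, which is what caught the 232-vs-116 mismatch:

```python
from collections import Counter

def audit_instances(spawned_paths, expected):
    """Tally spawned instances per mesh path and return any meshes
    whose count doesn't match the expected point count."""
    counts = Counter(spawned_paths)
    return {path: counts[path]
            for path, want in expected.items()
            if counts[path] != want}

# 116 points should mean 116 walls; the duplication bug gives 232:
audit_instances(["wall"] * 232, {"wall": 116})  # -> {'wall': 232}
```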
Well, I can't fix broken nodes, so my only option was to find a workaround - I needed an alternative to the fuse node. So I asked myself: what is the fuse node doing in this case? The answer is it was taking the long edge points and merging them so that I could get the middle point, which is what I needed to spawn my walls at the right location. It was also maintaining the correct normal direction for each set of points so that the walls would face the right way. SO:
find middle point of edge
keep normals correct
I think I can do this with VEX.
Here you can see how I worked around this issue. I bypassed the original fuse and instead deleted every other point (this would delete the end point of every line, leaving a single point to work with). I then used VEX to shift the point location based on the direction the wall was facing. If the wall faces the X direction, it will shift in the Z direction and vice versa. Because the walls snap to a grid when the normal is in the X direction (1,0,0) it's inherently 0 on the Z direction, so it will automatically select which direction it will be shifted because one of the lines of code is just adding 0 which will change nothing. This also works nicely because the size of my modular pieces fits along a 2x2 grid meaning I can add the unit normal (which is 1) and not have to worry about scaling it. This would be easy enough to alter if needed I would just have to add a multiplier to the normal before adding/subtracting it to the point location.
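The shift trick can be sketched in Python (the sign convention is an assumption - depending on which endpoint survives the delete, the normal components might need subtracting instead):

```python
def shift_to_edge_middle(p, n):
    """Move a surviving endpoint to the middle of its 2-unit edge.
    A wall facing X (N = (1,0,0)) shifts along Z and vice versa;
    the 'wrong' axis just adds 0, so no branching is needed."""
    x, y, z = p
    nx, ny, nz = n
    return (x + nz, y, z + nx)

# Wall facing +X: only the Z component moves.
shift_to_edge_middle((0.0, 0.0, 0.0), (1.0, 0.0, 0.0))  # -> (0.0, 0.0, 1.0)
```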
Is there a better more logical mathematical way to calculate the middle of two points? Yes and I probably learned it in further maths but for now this works fine and is a nice simple and efficient solution.
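For the record, the direct way is just the component-wise average of the two endpoints:

```python
def midpoint(a, b):
    """Midpoint of two points: the component-wise average."""
    return tuple((ai + bi) / 2 for ai, bi in zip(a, b))

midpoint((0.0, 0.0, 0.0), (0.0, 0.0, 2.0))  # -> (0.0, 0.0, 1.0)
```

The shift-by-normal version gets away without this because the module size is fixed, but the average generalises to any edge length.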
The VEX snippet and nodes I used to work around the fuse node.
At some point during the week I mentioned that I wasn't able to customise the width as it was too complex. Admitting this annoyed me so I decided to add customisable width as a feature to spite myself. The corridors currently have a fixed height but this is something I can easily change in the future to allow for more flexibility if someone wanted to change to a different modular kit or adapt the tool to work with multiple floors or something.
Examples of a very narrow and very wide corridor setup using the same modular pieces
I have decided to use Blender for part of this project as it is another useful software for me to add to my repertoire. I hadn't used Blender before this, so I got my good friend Fennel to give me a quick crash course in keyboard shortcuts and modifiers and off I went to create a corridor. I wanted to try and emulate the mid-poly workflow I had learned about from Ashley during his visit to our class, so I started off working on one half of one section and then used the mirror tool to show me what the final piece would look like.
In addition to the mirror modifier I also used the array modifier to get an impression of what the whole corridor would feel like.
My first modular piece was a single large wall.
When initially importing to engine I noticed the pivots were incorrect and I figured this was due to the test floor and ceiling I had also exported so I removed these and reimported to fix the issue.
When trying to make the corners I ended up with some distorted normals due to the transforms I had performed on one side of the corner piece.
This was an easy fix, by pressing Shift+N I could easily recalculate the normals and get the bevel behaving normally.
I double checked everything lined up nicely in engine, especially the raised floor panels which I wanted to make sure had a nice flow.
When importing these assets Unreal generated a default box collision which wasn't ideal as this shape is concave so I decided to test out creating some custom collision just using scaled simple boxes. Thankfully because the modules are so simple this was very straightforward.
At this point I actually altered the pivots of all the pieces slightly to allow for more breathing room on the corners. Originally the pillars were almost kissing and it made the shapes difficult to read so I spaced them out a bit more.
Preview of the currently very plain and pink corridor.
In my head I have a vision for the floor where it has glowing tracks following the path for imaginary robots to follow. I think this is inspired by the ship in WALL-E where all the chairs and robots follow glowing paths. While this will be done in the texturing phase I will initially model in some gutter like lines into the floor pieces so that I can ensure they align correctly in engine.
These are the glowing floor lines in WALL-E
I tried a few different methods for this. The first was just generating a flat plane for the floor in Houdini and then using the original spline to bool in the geometry for the path lines. However, the original spline can be diagonal, so that wasn't quite right and I moved on to a different method.
My next attempt involved subdividing the original floor plan and then isolating and connecting the midpoints of each original face.
This method worked quite well but as the complexity of the shape increased the procedural lines didn't always look great, for example on double width corridors it became one massive grid.
Another issue with this procedural method is that it doesn't match the modular nature of the rest of the corridor, so I decided that a modular approach would probably be best. This would require 4 pieces: the main piece, the corner, the ends and a crossroads. The crossroad piece may actually need to be 2 different pieces, one with 4 edges and one that is more of a 3-edge T-junction type. I will start with just the 4-edge crossroad and worry about adding the logic for T-junctions later on as this will require some more complex intersection checking.
I ran into a problem early on; you can see in this image that the two ends of the line have normals in equivalent directions even though instinctively the lines are "flowing" in different directions. These normals were being generated by extruding the line into a polygon and then calculating normals based on that which did seem convoluted, so I researched other methods for creating normals along curves where I came across this video. I used the polyframe/tangentu method and this worked perfectly.
Here you can see the before and after of fixing the normals. The end pieces of the floor now all align correctly regardless of the direction of the curve. The system now correctly assigns the main pieces, corners, ends and 4-way junctions. The points for the 4-way junctions are determined using the point cloud method I originally learned all the way back at the start of this project when I watched Ana Opara's Lakehouse tutorial. The corner points are assigned by checking for normals that don't follow an axis.
I knew this was going to be tricky. To detect the 4-way junction I simply wrote a bit of VEX that created a point cloud and checked for any point that had 5 neighbours (itself, plus 4 points for each outgoing path). Originally I had this node set up to delete any points that didn't meet this criterion in order to isolate the cross-points, but this meant I had to have 2 nodes for each sort point: one that deleted the matching points and one that deleted the non-matching points. I fixed this later by figuring out the VEX code for adding points to groups - much cleaner!
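The neighbour-count test can be sketched in Python. Houdini's point cloud lookup includes the point itself, hence 5 for a crossroad and 4 for a T; the radius and coordinates below are illustrative:

```python
import math

def label_junctions(points, radius):
    """Label points by how many pointcloud neighbours they have
    (self included): 5 -> 4-way crossroad, 4 -> T-junction."""
    labels = {}
    for i, p in enumerate(points):
        n = sum(1 for q in points if math.dist(p, q) <= radius)
        if n == 5:
            labels[i] = "cross"
        elif n == 4:
            labels[i] = "tee"
    return labels

# A centre point with four arms one unit away is a crossroad:
label_junctions([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)], 1.1)
# -> {0: 'cross'}
```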
For the T-junction I knew I could do something similar, so I duplicated this function and changed the criteria to look for points with 4 neighbours (itself, plus the 3 points that form the T). This isolated the points for me. That was easy! Now the hard part: figuring out how to get the T-junction mesh to orient correctly. This would involve figuring out a way to control the normal vector and get it to point in a consistent direction for each T-point. My initial plan was as follows:
isolate the point cloud for each individual T junction
remove the middle/inline point
calculate the average normal direction of the remaining 3 points
assign that normal to the original T-point
I tried this. It didn't work.
The issue was that the flow of the line for each T point could vary significantly, as illustrated below.
Here you can see in the left T the long edge points have a consistent normal vector whereas on the right T it's the short edge that has the consistent normal vector. This would break my average normal system because the normals between the T's don't follow the same logic so I can't use it as the base for calculating the correct orientation.
I had to come up with another plan. I really wanted this to work and I wanted to be able to use one mesh that rotated automatically, not 4 different meshes of the same geometry pre-rotated because this would be significantly more expensive in terms of instancing. So I poked around in Houdini for a while, playing around to see what I could come up with, and this is what I got:
By deleting the original T-point and then reconnecting the remaining points within a set radius I was able to generate these primitives.
Aha! I had this problem cornered now! With these arrow-like primitives, I could use a method similar to the one I had used to isolate the T-points: isolate the midpoint of the arrow by looking for points with 2 connected edges. I did this with some simple VEX and was then able to use an attribute transfer to copy that point's normal to the original T-point. Now I had a consistent logic where the normal would always point in the direction of the outgoing short edge of the T.
I have no idea if this is the most elegant solution but I know that it works well! I have only tested this tool with a single input line so far. I am not sure how it will react if multiple line inputs create a T-junction so I will be checking that during the major testing phase towards the end of my project but for now all the core features of my corridor tool have been implemented!
GIF of T-point direction calculation process.
I thought I would break down how this system works step by step because I think it's pretty nifty.
This is how the tool calculates the direction of the T-junction step by step:
Isolate the T-point by number of pointcloud neighbors
Create a bounding box at those points and group the points within
Use PolyFrame on points with style primitive centroid and set the tangentu to N
Invert the normals
Connect adjacent pieces with a radius slightly larger than module width to get two edges
Set an attribute wrangle with this code to detect and group the points with 2 outgoing edges
i@group_Tdir = neighbourcount(0, @ptnum) == 2;
Blast away points not in the above group
Use an attribute transfer to move the normals from the Tdir group to the original T-point group
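The steps above boil down to something like this Python sketch of the geometry logic, with the removed T-point treated as the origin and the radius/coordinates as assumptions:

```python
import math

def t_facing(arm_points, radius):
    """Given the 3 arm points of a T (original T-point removed and at
    the origin), connect arms closer together than `radius`; the arm
    with 2 connections is the arrow midpoint, and its direction from
    the centre is the T's facing."""
    degree = [0] * len(arm_points)
    for i in range(len(arm_points)):
        for j in range(i + 1, len(arm_points)):
            if math.dist(arm_points[i], arm_points[j]) < radius:
                degree[i] += 1
                degree[j] += 1
    stem = arm_points[degree.index(2)]
    length = math.hypot(*stem)
    return tuple(c / length for c in stem)

# Crossbar arms at (-1,0) and (1,0), stem arm at (0,1): the stem is
# within radius of both crossbar arms, so it gets 2 connections.
t_facing([(-1, 0), (1, 0), (0, 1)], 1.5)  # -> (0.0, 1.0)
```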
I tend to keep my node graphs fairly neat out of habit. Creating frames or comments as I go just makes keeping track of everything much easier, so when I finish with a graph there isn't a massive amount of cleaning up to do but I still wanted to neaten up some specific areas such as the output. I wanted to have all the unreal_instance attribute creations in the same place so when it comes to possibly changing any references it's easy to find. This means splitting the graph into two sections, the sorting section (grey) and the assigning section (colour). I think this will make my life easier going forward.
I went back through my graph and optimised some things. For example, generating the floor line went from the chain on the left to the chain on the right.
Before cleaning up the graph.
After cleaning up the graph.
There are still some other small optimisations I could make such as removing certain bypassed nodes that are no longer needed, but for now this will do. I'll do a final clean up pass once the rest of the project is complete and fully tested.
So now that I've finished the main logic of the tool it's time to take a break from making meshes and node graphs and instead focus on something new - shaders! Shader creation is something that I have decided to prioritise in this project as it's a skill I really want to improve. I would like to create some more interesting and complex shaders but more importantly I want to optimise them. I got the chance to speak with Mike Pickton this week and he kindly gave me some really good ideas of what sort of things to work on. Some of the examples he gave include using parallax, iridescence, and distortion as well as designing shaders which would save an artist time, such as shaders which are highly reusable with variation. Here are my initial ideas:
Parallax - creating detailed sci-fi gubbins that can be implemented to break up surface repetition inspired by this video. I think it would be really cool to try and recreate this effect.
Distortion - my first thought is heat lines coming off of vents, although optimising this may be a challenge as I think the translucency would cause unnecessary overdraw.
Iridescence - this is a quality created by wave interference under specific circumstances. Bug chitin can be iridescent because it's built of lots of thin layers that each receive light and bounce it slightly differently, causing different colours to appear from different viewpoints. A material like this will look really cool in my level and sounds interesting to create. Perhaps some kind of oil puddle or a mineral like bismuth would work really well for this, but this one is definitely the biggest challenge to incorporate thematically.
Other than the parallax these will all be new effects for me so I will need to do some initial research to figure out the best approach, so in the meantime I am going to work on another shader idea - holograms!
An example of the kind of hologram effect I would like to recreate.
I had a specific vision for this shader. Rather than the clean holograms seen in modern movies, I wanted that classic retro-futuristic look. I wondered why a lot of old sci-fi movies used this kind of effect for their screens and holograms, and then I remembered that VHS tapes used to have this effect when played on CRT TVs. The scanlines will be key to this shader. It's kind of cool, when you think about it, that a visual artifact from a decades-old video playback mechanism is now so integrated into the aesthetic of recorded video in our culture that I instinctively want to use it in my digital effect.
I also want to incorporate some distortion into this effect; some kind of glitch or wobble would really add to the believability of the hologram. I think I will be able to achieve this by displacing some of the scanlines, but I will set up the initial hologram first and then experiment with the distortion effect later on.
Red - Texture parameter and base colour input
Green - Setup for panning/scrolling lines
Blue - Fresnel + noise effect
This was my first pass at the shader effect. I learned about the panner node which I think has basically the same effect as adding time to the texture coordinates only it makes it a bit easier to isolate directions and control speed all in one node. I set up three panners for three different line widths and speeds to get this effect.
In order to create the ripple effect I desired, I initially tried using the same panner method, but this created lines which were too regular. I wanted only one line to be visible at a time so I opted to create a simple mask texture for this effect. I then used an append node to create a float2 value which I could multiply by a scalar parameter to control how extreme the ripple appeared.
The final effect with an Earth heightmap texture. Any black and white input can be used so this same shader can be used to create multiple planets. The effect can also be applied to any mesh, however the mesh itself doesn't rotate for this effect, the textures simply pan so I will most likely stick to spheres and planes for this.
This shader was inspired by Ben Cloward's GDC talk Shaders 101. I decided to start off by attempting to mimic his distortion effect and then try and build on this by working it into a heat-haze shader. The talk was really informative - showing the foundational concepts clearly in a way I hadn't thought about directly. For example I knew that multiplying texture coordinates scaled them, but I hadn't thought about the fact that adding and subtracting translates them. This is really obvious when you think about it and realistically I did know that was the effect it had but being able to visualise it in terms of a transformation is really useful for my own understanding.
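The two operations can be sketched directly, and a panner is just the translation version driven by time, wrapped back into the 0-1 range:

```python
def scale_uv(uv, s):
    """Multiplying UVs scales (tiles) the texture."""
    return (uv[0] * s[0], uv[1] * s[1])

def pan_uv(uv, speed, time):
    """Adding to UVs translates the texture; driving the offset with
    time and wrapping gives a panner."""
    return ((uv[0] + speed[0] * time) % 1.0,
            (uv[1] + speed[1] * time) % 1.0)

scale_uv((0.5, 0.5), (2.0, 2.0))       # -> (1.0, 1.0)
pan_uv((0.25, 0.5), (0.5, 0.0), 1.0)   # -> (0.75, 0.5)
```

Seeing both as transformations of the coordinate space, rather than of the texture itself, is exactly the mental model the talk was getting at.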
My first very basic distortion attempt using the tiling rainbow clouds texture which allows for a bit more variety than a standard noise.
This is what the first shader looks like; if you look too closely the scrolling of the distortion becomes very obvious.
I fixed the repetition by mixing in some more variations from the same texture sample by pulling from different channel combinations. I am not sure if having 3 separate texture sample nodes of the same texture costs more in terms of memory vs having one node with multiple links - I plan to look into this to see whether this is an optimisation I could make later on.
This is what the three combined noises look like when added together into the distortion pattern - much less easy to catch the repetition I think (other than the fact this is a looping gif).
Initially I removed the debug texture from my heatwaves material and set the mode to transparent with opacity 0 so that the material could be see-through. Then I realised I needed to enable refraction, and plugged my scrolling distortion into that input rather than the world position offset. This created the effect on the left below, where at shallow viewing angles the image became far more distorted than intended. I had no idea why this was happening until I noticed the refraction method said index of refraction. It may have been a few years, but I do remember the optics module of my Physics A Level and I know what a refraction index is - I realised this input needed a fixed value (preferably one close to the true RI of hot air) and my distortion texture needed to go somewhere else. I knew that surface direction would be important in this case, so I tried the normal node as an educated guess, and this created the effect on the right where even at shallow angles the effect still looks good.
Initial distortion attempt with incorrect method for RI.
Second attempt with RI of 1.04
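Snell's law shows why a fixed IOR close to 1 keeps the bending subtle (real air is around 1.0003, so the 1.04 I settled on is already an exaggeration for visibility):

```python
import math

def refracted_angle(incident_deg, n1=1.0, n2=1.04):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2)."""
    s = n1 * math.sin(math.radians(incident_deg)) / n2
    return math.degrees(math.asin(s))

# A 30 degree incident ray bends only slightly, to about 28.7 degrees.
```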
The final shader graph with two texture samples as I think this is a good balance between visual fidelity and potential cost for such a subtle effect.
The final effect on a plane with very visible edges.
This is what the final effect looks like. It's cool, but it doesn't look like a mirage, as mirages in real life don't tend to have crisp planar borders. This is something I definitely did not consider when making this shader. Even if I make some more complex geometry to put this shader on, the bounds will always be very visible just by the nature of refraction. I think this would be significant enough to break immersion, so I am going to try and come up with either another use for this effect other than a mirage, or try and find a way to adapt it to work in a more subtle way, maybe through using a Niagara emitter or somewhere the edges are hidden such as a view through a small window. Maybe this will be an effect limited to the interior of ovens or decontamination suites. Or maybe I can use it as the effect for an invisible or "cloaked" object. Oh well! At least I learnt a lot about the process of distorting texture coordinates to create this kind of effect.
Before I create any more fun shaders I need to prioritise finalising the modular piece meshes and creating their basic textures. Once this is done I will be able to move on to adding a bunch of cool features and effects.
This was my initial floor mock-up mesh, designed to make sure everything aligned properly when spawning in the tiles. However, after some feedback I acknowledged that it was very visually busy and the triple tracks didn't make logistical sense.
My second iteration was similar to the first with the groove track but much simpler - now too simple. I spoke with some environment artists from Dambuster Studios who let me know that a bit more interest would be a good investment, as the floor takes up a lot of the screen space.
This was attempt #3 and while I do like it, I think the design makes no sense; what is the raised middle rail for? It also didn't really solve the repetitive shapes issue, as there was nothing to break up the long stretches of matching tiles.
Iteration #4 - this is the one I am sticking with! It's a good balance between interesting and flat. The large panel in the middle will allow me to create some interesting floor variations - maybe some transparent (or POM) panels which reveal some pipes below the surface. I also bevelled the outside edges of the panels where they connect to add to the modular feeling.
I noticed one very frustrating detail about my corridor tool while making these floor tiles: it doesn't generate symmetrical corridors. This makes sense based on how I built the tool - it compensates for the corners with shorter wall modules that allow for varying wall lengths, but these also cause an offset in symmetry. This means I can't use any ceiling bridge pieces that join opposite wall pillars because they likely won't align properly. Now, I could retool my entire system to create perfectly symmetrical segments, perhaps having whole prefabbed corridor sections and corner pieces where the floor, walls and ceiling are all combined. However, at this point in the project I don't think that would be a wise decision. This is a minor aesthetic gripe that I have, not a giant functional issue. I need to prioritise more important features right now; as much as it might pain me to ignore this, I need to stay on schedule - symmetry is not part of the brief (I might go back after submission and rework it though, when nobody can stop me!).
For the ceiling I wanted to incorporate some basic lights. Originally I thought I was going to have to use a blueprint for this, but my initial test which just used emissives looked very effective, so for now I am going to stick with them. In Lumen we trust. I would like to incorporate some prim data or vertex colour into the lighting setup to perhaps allow for some variation. I think the large flat light panels would look really cool with an LED effect and an image of blue sunny skies on them. I could even make some of them glitch slightly, which would really add to the uncanny nature of the corridors.
This image shows the ceiling with basic emissive lighting. You can also see some of the wall variations where I have added some empty planters which I am hoping to add some simple plants/foliage to in the coming weeks.
I knew from the beginning of this project I wanted the space station to have windows out into space. For now I have added a random wall variation that has a simple window mesh attached, and I have created a basic parallax shader to give the impression of space outside without the cost of translucency and a skysphere. I am still interested in this method, though I think it would require exclusively manual window placement to avoid being able to see the backs of other pieces of the corridor.
Prior to this point I haven't worked with much foliage, but I recognise it's an ever-increasing area of expertise within games and I would like to at least grasp the basics of foliage creation and then gain a deeper understanding of the best ways it can be optimised. Immediately I knew that I would be using a masked texture, which meant some unavoidable overdraw - however, I also knew I could minimise this by following the silhouette of the leaf with my geometry as closely as possible. This minimises the number of masked-out pixels, resulting in less overdraw and therefore better performance.
This is what my initial plant mesh looked like - the silhouette of the leaves is designed to minimise overdraw versus using a square plane.
Foliage with final material.
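The saving from fitting the silhouette can be sanity-checked with a quick sketch - the pixel counts here are made up for illustration, but the ratio is the point:

```python
def overdraw_waste(opaque_px: int, card_px: int) -> float:
    """Fraction of rasterised pixels that the alpha mask throws away."""
    return 1.0 - opaque_px / card_px

# Hypothetical numbers: a leaf covering 6000 opaque pixels on screen.
square_card = overdraw_waste(6000, 16000)   # simple square plane
fitted_card = overdraw_waste(6000, 8000)    # geometry hugging the silhouette
print(f"square: {square_card:.0%} wasted, fitted: {fitted_card:.0%} wasted")
```

Halving the card area halves the masked-out pixels the GPU still has to shade, which is exactly the overdraw win described above.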
However at this point strange things started happening. I wanted to merge the foliage with the wall mesh so that the whole thing could be packaged as an instance but this made the materials distort.
Notice how the test cube and window all display the material correctly where as the walls and planter are just grey - it seemed like something was wrong with the UVs.
The UVs for the whole wall module.
It didn't take me long to discover the issue - the UV channels had merged incorrectly, leading to the leaves being distorted and the rest of the wall ending up in a separate channel completely. I had to research what had caused this, and I believe it is because I had been working between multiple different programs to do my UVs (3ds Max for the foliage and Blender for the walls), which led to a naming conflict.
To fix this I simply separated the meshes, renamed the UV maps in Blender to match each other and then merged them back together; this fixed the problem and the textures were now being applied correctly.
I decided to use a trim sheet for the planter because I know how expensive textures are in terms of disk space, and additional separate materials mean additional draw calls. I think this method still maintains fidelity and is a bit more efficient, so is worth it overall. I also repurposed what would have been the metalness (B channel) in the AORM map as a mask for the emissive texture. I think this is an efficient use of what would otherwise have been an empty black map, and I made sure to update the naming of the texture to match.
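The channel packing itself is simple in principle - here's a pure-Python sketch of the idea using tiny made-up pixel lists rather than real texture data:

```python
def pack_aorm_e(ao, roughness, emissive_mask):
    """Pack three greyscale maps into one RGB texture:
    R = ambient occlusion, G = roughness, B = emissive mask
    (reusing the slot a metalness map would normally occupy)."""
    assert len(ao) == len(roughness) == len(emissive_mask)
    return list(zip(ao, roughness, emissive_mask))

# Tiny 2x2 example: flat AO and roughness, with one glowing pixel.
packed = pack_aorm_e([255] * 4, [128] * 4, [0, 0, 255, 0])
print(packed[2])  # (255, 128, 255) - this pixel's B channel drives emissive
```

In practice the packing happens in the texturing software, but the shader reads it the same way: one sampler, three masks.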
Here are the wall variations I added to break up the regularity of the initial corridor. The planter and seat variations are designed to have distinct silhouettes compared to the more basic window and TV variants and I think this has a nice refreshing impact on the scene.
At this point I wanted to give my level a quick once-over to make sure that the texel density was roughly consistent at 1024 px/m. I did this with the help of my material reassignment tool, quickly using it to bulk change everything to a checker material. I then edited the UVs of any meshes that stood out as incorrect using Rizom, which has an inbuilt texel density assignment tool.
The texel density when I initially checked - quite inconsistent. My main goal was to make sure all of the wall modules and floor/ceiling were consistent as these make up so much of the environment. I was less concerned about the props but still wanted to make sure they aligned roughly.
Texel density after I fixed a few assets. Some items such as the foliage and TV screens still stand out. I purposely kept the texel density of the leaves higher as they use a 512x512 material as I was concerned about pixelation of the alpha when scaling to larger leaves. The TVs also remain warped due to their need to fit the full UV sheet square for the panning texture.
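For reference, the texel density check boils down to a simple ratio of UV space to world-space surface area - a quick sketch (the panel numbers below are hypothetical):

```python
import math

def texel_density(texture_px: int, uv_area: float, world_area_m2: float) -> float:
    """Texels per metre: texture resolution scaled by the linear ratio of
    UV area to world-space area (square roots keep the units linear)."""
    return texture_px * math.sqrt(uv_area) / math.sqrt(world_area_m2)

# A 2x2 m panel using a quarter of a 2048 texture's UV space:
print(texel_density(2048, 0.25, 4.0))  # 512 texels/m - UVs would need
                                       # scaling up to hit the 1024 target
```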
As this project is coming to a close I decided to add some fun little effects to my scene. First is animating the hologram to pop up when the player is within proximity. I initially wanted the scale to vary with proximity, but it was pointed out to me that this would create a strange effect of growing and shrinking rather than the sort of pop-up I wanted. I decided to keep things simple for now and simply set the scale to 0 initially, then ramp it up to 1 using a timeline that is triggered once the player enters a bounding box. However, setting the scale to 0 means that the level designer can't see the size of the final mesh when laying out rooms, so I also added some placeholder geometry that is only visible in the editor to make their life a bit easier.
Hologram BP with placeholder geometry and bounding box.
BP logic for pop-up hologram.
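Outside of Blueprint, the timeline logic boils down to a clamped ramp. A minimal Python sketch of the same idea (the 0.5 s duration is just an example, not the actual timeline length):

```python
def popup_scale(elapsed: float, duration: float = 0.5) -> float:
    """Hologram scale driven by a timeline: 0 before the trigger fires,
    ramping linearly to 1 over `duration` seconds, then holding."""
    if elapsed <= 0.0:
        return 0.0
    return min(elapsed / duration, 1.0)

# Sampled each tick after the player enters the bounding box:
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, popup_scale(t))
```

The door animation works the same way, just feeding the ramp into location instead of scale.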
Animating the doors was achieved in a similar manner to the hologram pop-up, by using a timeline. Instead of affecting scale, this timeline affected location.
I also integrated some basic sound effects I found royalty-free here.
Parameters for the corridor tool set up in Houdini.
Choosing what to expose as a parameter comes down to what would be useful for an artist to have control over and what is going to cause unnecessary confusion or complication.
The two things I knew I wanted to expose were:
The location of doors
The modular pieces being used
In addition to this I also chose to expose the weight of the randomly assigned module variations so that if artists change a module they can also change the frequency with which it spawns. I also added a couple of technical overrides such as the floor/ceiling override and the end direction override.
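The exposed weights behave much like a weighted random draw. A Python sketch of the idea - the variation names and weights here are hypothetical, and the real logic lives in the Houdini graph:

```python
import random

# Hypothetical wall variations and artist-editable spawn weights.
variations = ["basic", "window", "planter", "seat"]
weights    = [6, 2, 1, 1]

def pick_wall_module(rng: random.Random) -> str:
    """Weighted draw, mirroring the exposed per-variation weight parameters."""
    return rng.choices(variations, weights=weights, k=1)[0]

rng = random.Random(42)  # seeded so the corridor regenerates deterministically
corridor = [pick_wall_module(rng) for _ in range(10)]
print(corridor)  # mostly "basic", with occasional variants mixed in
```

If an artist swaps a module in, bumping its weight is all that's needed to make it appear more often.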
The floor/ceiling ends being flipped was a recurrent issue; my logic accounted for about 90% of cases correctly, but for the last 10% where floors would still be backwards I needed a backup option - so I added a tick box that, if enabled, will simply switch the end directions. I judged this a better solution than trying to go back in and overhaul the whole system for a tiny minority of possible outcomes - it is much more time-efficient in the long run.
The override ceiling/floor button allows the artist to enable closed ceilings and floors in the event that they have created a non-linear path, i.e. a room. Using the linear pieces in this case causes gaps between the edges of pieces, as they aren't designed for this purpose. My initial plan was to create a whole separate tool for room creation, but I decided to prioritise improving my scripting skills instead and, as a compromise, added this option so that rooms can still be functional. Again my initial instinct was to go in and overhaul the logic to account for every possible variation, but prioritising finishing the tool was becoming more and more important, so for now I think this is a relatively elegant solution.
Without override - gaps are visible highlighted in yellow.
With override - no gaps visible.
The final result of my corridor does look very basic; I wasn't chasing premium aesthetics for this project but I do think I could have added some additional functions to the tool that would have boosted the visuals. For example, I think adding some kind of procedural cabling running along the floor or ceiling, or having some exposed pipework would have been really cool. I could have pushed my understanding of parallax materials further to create some cool open ceiling panels or recessed wall nooks. I also would have liked to create some parallax decals because although they would add to the render cost I think it would have been a cool visual effect and good practice for baking which I didn't use on this project. The positive for this corridor is that the tool does support module variations as seen in the walls with options for seats, windows and planters in addition to the basic wall. This has the biggest impact in terms of breaking up the visual repetition so I think this was the right call in terms of what to prioritise.
All the modular meshes make use of weighted normals rather than any baked information which was really useful to learn. I know that the mid-poly workflow is used in a lot of large-scale sci-fi games today, Star Citizen for example, so getting to learn the basics of this process has been really useful. I think it was appropriate not to use Nanite for this project however this is a system I want to gain some familiarity with as it is being used increasingly in upcoming games.
I think the incorporation of emissive textures into the light panels was a wise decision as, thanks to Lumen, it means that my scene can have artificial light without the need to procedurally generate light actors which would lead to me creating specific rules to restrict light spawning to help minimise light complexity. By making the roof tile materials one sided I can also cheat a bit and let the light from the main level directional light pass through which gives a basic light to the majority of the scene. I would have liked to incorporate some light functions or perhaps primitive data to allow the colours of the lights to be controlled beyond changing them all in the material instance, but I think as far as a base for lighting/environment artists to work on this goal is achieved.
I think the hologram materials are effective and really sell this as a sci-fi environment as opposed to just a modern corridor. Having the hologram and doors react to player proximity and use timelines was beneficial both in terms of player experience and for my own familiarity with blueprinting aspects of gameplay.
In terms of the corridor generation itself I am really pleased that I accounted for self-intersection this time, as this was something that bothered me in my pipe tool. Snapping the whole of the corridor to a grid, while I initially thought it would be limiting, was actually a really beneficial decision in terms of allowing both self-intersection and height/width scalability. I think the workaround to allow rooms to be built using this tool by overriding the floor and ceiling panels is effective, if a bit limited. However, I need to remember that the purpose of this tool is to generate a corridor; the rooms are just a nice bonus for visualisation purposes. In terms of optimisation the tool runs quite efficiently. It is fast and often takes less than a second to generate a new corridor section. The tool is also set up to create instances rather than individual meshes, which is a massive gain performance-wise. These instances also support LODing in the form of HierarchicalInstancedStaticMeshes, but given the nature of a corridor I am not sure this is a better option than standard ISMs. It would be useful if a corridor was long, linear and viewable from a far distance, but as it stands, when in a corridor segment the other segments will often be occluded or offscreen. This is something that would be worth testing for a bigger project where it could have a significant impact on performance, but for now there are no notable performance issues.
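The grid snapping that makes self-intersection manageable can be sketched like this - the cell size is an arbitrary example and the junction handling is simplified, with the real implementation living in Houdini:

```python
def snap_to_grid(point, cell_size=400.0):
    """Snap a world-space point (Unreal units) to an integer grid cell."""
    return tuple(round(c / cell_size) for c in point)

def occupied_cells(points, cell_size=400.0):
    """Map spline points to grid cells; a repeated cell signals
    self-intersection, which can be resolved with a junction piece
    instead of overlapping geometry."""
    cells, repeats = set(), []
    for p in points:
        cell = snap_to_grid(p, cell_size)
        if cell in cells:
            repeats.append(cell)
        cells.add(cell)
    return cells, repeats

# A path that loops back near its start:
path = [(0, 0, 0), (400, 0, 0), (400, 400, 0), (0, 400, 0), (10, 5, 0)]
_, crossings = occupied_cells(path)
print(crossings)  # the last point snaps back into the first cell
```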
There are a couple of residual bugs in the corridor tool. As with the pipe tool, Unreal sometimes bugs out if you try to use the native undo to remove a point added to the line, and the workaround remains the same - manually delete unwanted points, don't undo. The main bug that bugs me (haha) is one that seems to intermittently create duplicates of meshes. Somewhere in the Houdini graph invisible duplicate points are spawning; they're not in the geometry spreadsheet anywhere and they appear inconsistently. This inconsistent appearance is what led me to believe it isn't actually a bug of my own creation but rather something odd happening with the Houdini Engine integration. Thankfully, despite having no permanent solution, the workaround is relatively simple - delete the duplicates! This is easier said than done, as sometimes the duplicates are hard to spot. The ceiling panels appear to duplicate before they are flipped, so these are easy to spot from above the corridor, but the wall and floor duplicates are only noticeable when Z-fighting is spotted. It is also possible to catch the overlap using Unreal's shader complexity visualisation mode: any modules with overlap will appear dark red instead of green (or more complex than expected compared to other meshes with identical materials). Another way to spot this (and how I actually discovered it initially) is with the Instance Counter tool I made for my Python Script Project.
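At its core, spotting the duplicates comes down to looking for instances that share an identical transform - two meshes in exactly the same place are almost certainly the bug, not a design choice. A simplified sketch of that kind of check (locations only, hypothetical data, not the actual Instance Counter code):

```python
from collections import Counter

def find_duplicate_instances(transforms):
    """Return any transforms that appear more than once in the
    instance list - likely duplicates worth deleting manually."""
    counts = Counter(transforms)
    return [t for t, n in counts.items() if n > 1]

# Hypothetical instance locations for one corridor segment:
instances = [(0, 0, 0), (400, 0, 0), (800, 0, 0), (400, 0, 0)]
print(find_duplicate_instances(instances))  # [(400, 0, 0)]
```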
Overall, while this project isn't flawless and could definitely be pushed further in terms of both function and aesthetics, I do think it has been a success. I hit all the points on my brief including a couple of stretch goals, which is really positive. The main thing with this project is that I learned so much. I can't list everything here, but in terms of level optimisation I now have experience working with LODs, instances and lighting; I have learned loads about working with procedural geometry, both in Houdini specifically and in terms of the wider logic that can be used across any procedural software. I learned a lot of adjacent skills more typical of an environment artist, such as creating modular kits and working with weighted normals in the mid-poly workflow. Most importantly I learned resilience in the face of relentless troubleshooting and problem solving. There were a few times in this project where I felt like I had taken on more than I could handle, but every time I admitted defeat out loud it just spurred me on to come back the next week and figure it out. There were some issues that I couldn't fix, which actually helped me learn another important skill - letting things go. I had a conversation with a technical artist from Playground who told me that I'd be surprised by the number of "problems" in big games like Forza. Most of the time these small issues just aren't worth the time or money to fix, and so it is important to be able to let some things go.