Coming back to Uru, part 3
Six months ago, I decided to implement an asset viewer for Uru, a game in the Myst series. The goal was to be able to load and visualize levels from the game in real time, with rendering as close to the in-game appearance as possible.
Last time, we were left with a basic textured look, using the first listed texture for each object and rendering everything all the time. After that post, I kept working on the viewer for around a month.
Looking at the git history from the 17th of April to the 29th of May:
- I tried to write the whole "multi-layered rendering" in a single shader executed on the GPU, supporting up to 8 layers (the maximum encountered in Uru and Myst V), but there were so many edge cases and conditions to handle that performance degraded quickly. Optimizing it should be possible, but the current framerate is less than one frame per second, so the improvements would need to be drastic to be worthwhile.
- So I quickly went back to rendering one layer at a time, even if this means hammering the GPU with geometry. After all, that was more or less the original approach of the Plasma engine. When layers are rendered one on top of the other, some blending can be performed; I had to emulate some of those compositing operations with an additional two-layer shader.
- Culling is performed each frame, to only render objects that are visible at the current instant. To this end, each object's bounding box is checked against the visible region of the scene. As there are no animations in the scenes, the boxes can be computed once at load time.
- Some special objects are now supported: for instance, billboards (planes facing the viewer at all times, used for vegetation and lens-flare effects) are now oriented properly.
- I've added some helpers to the graphical user interface: listing all objects, and showing only selected parts, materials, or layers to help with debugging.
- I encountered a damning issue with vertex colors. Plasma can associate a color with each vertex of an object's mesh, used to store some pre-computed lighting. I kept getting an "almost-right-but-not-perfect" result for days, before realizing the colors should be sent to the GPU as unsigned bytes, not signed bytes.
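To give an idea of why a single do-everything shader gets expensive, here is a minimal CPU-side sketch of per-pixel layer compositing. The blend modes and their formulas are my own assumptions for illustration, not Plasma's actual list; in a real fragment shader, every `switch` below becomes divergent branching executed for every pixel.

```cpp
#include <array>

// Hypothetical per-layer blend modes, standing in for whatever cases a
// single "uber" shader would have to branch on.
enum class BlendMode { Opaque, Alpha, Add, Multiply };

struct Layer {
    bool      enabled;
    BlendMode mode;
    float     rgb[3]; // sampled texture color for this pixel
    float     alpha;
};

// Composite up to 8 layers for one pixel, bottom layer first.
void compositePixel(const std::array<Layer, 8>& layers, float out[3]) {
    out[0] = out[1] = out[2] = 0.0f;
    for (const Layer& l : layers) {
        if (!l.enabled) continue;
        for (int c = 0; c < 3; ++c) {
            switch (l.mode) {
            case BlendMode::Opaque:
                out[c] = l.rgb[c]; break;
            case BlendMode::Alpha: // classic src-over blending
                out[c] = l.rgb[c] * l.alpha + out[c] * (1.0f - l.alpha); break;
            case BlendMode::Add:
                out[c] += l.rgb[c]; break;
            case BlendMode::Multiply:
                out[c] *= l.rgb[c]; break;
            }
        }
    }
}
```

Rendering one layer per pass instead lets the GPU's fixed-function blending do this work, at the cost of re-submitting the geometry.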
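The per-frame culling step can be sketched with the classic "positive vertex" test of a bounding box against the six frustum planes. The types and names here are hypothetical, not the viewer's actual code:

```cpp
#include <array>

// Plane in the form nx*x + ny*y + nz*z + d >= 0 on the visible side.
struct Plane { float nx, ny, nz, d; };
struct AABB  { float minx, miny, minz, maxx, maxy, maxz; };

// For each frustum plane, pick the box corner furthest along the plane
// normal; if even that corner is behind the plane, the whole box is
// outside and the object can be culled this frame.
bool boxVisible(const AABB& b, const std::array<Plane, 6>& frustum) {
    for (const Plane& p : frustum) {
        float px = (p.nx >= 0.0f) ? b.maxx : b.minx;
        float py = (p.ny >= 0.0f) ? b.maxy : b.miny;
        float pz = (p.nz >= 0.0f) ? b.maxz : b.minz;
        if (p.nx * px + p.ny * py + p.nz * pz + p.d < 0.0f)
            return false; // fully behind this plane
    }
    return true; // intersects or is inside the frustum
}
```

Since the scenes are static, the AABBs are computed once at load time and only the six planes change per frame.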
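Orienting a billboard amounts to rebuilding the quad's basis every frame so its normal points at the camera. A minimal sketch of a "spherical" billboard (all names are assumptions for illustration):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

// Build the three axes of a billboard: the plane's normal points at the
// camera, and the other two axes span the quad.
void billboardAxes(Vec3 objPos, Vec3 camPos, Vec3 worldUp,
                   Vec3& right, Vec3& up, Vec3& toCam) {
    toCam = normalize(sub(camPos, objPos)); // normal, facing the viewer
    right = normalize(cross(worldUp, toCam));
    up    = cross(toCam, right);            // already unit length
}
```

Vegetation typically uses a "cylindrical" variant that only rotates around the world up axis, so plants don't tilt when viewed from above.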
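The vertex-color bug is easy to reproduce on the CPU side. A color channel byte of 0xFF should normalize to 1.0, but if the byte is reinterpreted as signed before normalization, every value above 127 wraps negative and bright vertices go dark — hence the "almost right" look. The helper names below are mine, for illustration only:

```cpp
#include <cstdint>

// A vertex color channel is one byte, 0..255 mapping to 0.0..1.0.
float channelAsUnsigned(uint8_t raw) {
    return raw / 255.0f;                     // correct: 0xFF -> 1.0
}

float channelAsSigned(uint8_t raw) {
    int8_t wrong = static_cast<int8_t>(raw); // the bug: 0xFF becomes -1
    return wrong / 255.0f;                   // -> -1/255, nearly black
}
```

On the GPU side, the same mistake corresponds to declaring the vertex attribute as a signed byte type instead of an unsigned one when uploading the buffer.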
The result I'm getting right now is pretty satisfying. Basic layered textures are properly handled, and objects are culled when not visible. Some issues remain, obviously. In levels that rely only on dynamic lighting, everything is completely dark; providing some default lighting should help explore those scenes, and a longer-term goal would be to fully support the original lights contained in the scene. The GUI also needs some cleanup before being usable. Dynamic textures (text generated on the fly, reflection maps) are not handled. Finally, transparent objects are not correctly ordered when rendered, preventing other objects from being visible behind them.
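The usual (if imperfect) fix for that last issue is a per-frame back-to-front sort of the transparent objects by distance to the camera, so blending composes them in the right order. A minimal sketch, with hypothetical names:

```cpp
#include <algorithm>
#include <vector>

struct Drawable {
    int   id;
    float distToCamera; // squared distance works too; the order is the same
};

// Sort transparent objects so the farthest one is drawn first; opaque
// geometry has already been drawn and depth-tested before this pass.
void sortBackToFront(std::vector<Drawable>& transparents) {
    std::sort(transparents.begin(), transparents.end(),
              [](const Drawable& a, const Drawable& b) {
                  return a.distToCamera > b.distToCamera; // farthest first
              });
}
```

This breaks down for large or intersecting transparent surfaces, which would need splitting or an order-independent technique, but it handles the common cases.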
Since I started writing this post, I have been fiddling with the project again, so expect another post in the coming weeks. I have uploaded the code to GitHub; it only compiles on macOS out of the box, but I managed to run it on Windows recently with a bit of tweaking in the CMake files. Until next time!
Link to part 1
Link to part 2