Coming back to Uru, part 4

2018/11/02 | Simon Rodriguez | opengl rendering reverse-engineering video games

Last week, I mentioned some improvements that could be made to my Uru/Myst V assets viewer. I'm glad to say that I've implemented most of them! I think this project is now mature enough to be usable. Even though the recent re-release of the complete Myst collection guarantees that we will still be able to access those worlds in the coming years, having the option to examine and understand how they were created is great!

Result 1

Result 2

First, transparent objects were not rendered properly, causing objects in front of them to be hidden and other depth-related artifacts. This is a classic issue with transparency in real-time rendering: the rasterization pipeline is built around the assumption that there is a unique depth[1] at each pixel. This allows for optimizations, as you can render the objects closest to the camera first and ignore everything behind them. With transparency this is not possible: objects have to be rendered back-to-front. So I'm now rendering opaque objects first, and sorting transparent objects from furthest to closest before rendering them. The sky is also handled separately, so that it is always rendered even if the culling algorithm considers it too far away.
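To make the two-pass idea concrete, here is a minimal C++ sketch, assuming a simple per-object record with a world-space center and glm for the math; the names and structures are illustrative only, not the viewer's actual code.

```cpp
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// Illustrative object record; the real viewer stores much more state per object.
struct Object {
    glm::vec3 center;   // world-space center of the object's bounding volume
    bool transparent;
    // ... geometry, material layers, etc.
};

void drawObject(const Object & obj); // defined elsewhere (hypothetical)

void renderScene(const std::vector<Object> & objects, const glm::vec3 & cameraPos) {
    // 1. Opaque objects first, in any order: the depth buffer resolves visibility.
    for (const Object & obj : objects) {
        if (!obj.transparent) { drawObject(obj); }
    }
    // 2. Transparent objects back-to-front, so that each one blends over
    //    everything already visible behind it.
    std::vector<const Object *> transparents;
    for (const Object & obj : objects) {
        if (obj.transparent) { transparents.push_back(&obj); }
    }
    std::sort(transparents.begin(), transparents.end(),
              [&](const Object * a, const Object * b) {
                  return glm::distance(a->center, cameraPos) >
                         glm::distance(b->center, cameraPos);
              });
    for (const Object * obj : transparents) { drawObject(*obj); }
}
```

Note that sorting by object center is only an approximation: large or intersecting transparent objects can still blend in the wrong order.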

While working on transparency, I also fixed some bugs in the multi-layered rendering of materials. Some objects would appear transparent or entirely black when they were not supposed to. Not all occurrences are solved, because of specific undocumented cases in the original Plasma engine rendering code, and because some things have to be handled differently with a fully-programmable graphics pipeline.

Result 3

Per-object lights are now supported in the viewer. Each object in the assets files has a series of lights associated with it (especially dynamic objects). Those are now loaded and contribute to the final rendering through a basic per-vertex ambient and diffuse lighting. There are still some issues with animated lights[2] and with global lights that are not associated with any object. I tried to add the global lights to all objects, but there can be as many as fifty of them in a level, while I'm currently capping the light count per object at 8 for performance reasons (and because this is what the Plasma engine was doing). This could be solved by adding only the closest ones to each object, but I think this would require extracting additional information (visibility volumes, ...) from the loaded assets.
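As a rough illustration of what "adding only the closest ones" could look like, here is a hedged C++ sketch that keeps the 8-light cap and picks global lights purely by distance; the types and names are hypothetical, and a proper solution would also need the visibility information mentioned above.

```cpp
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// Illustrative types; the viewer's real light and object records differ.
struct Light  { glm::vec3 position; /* color, type, ... */ };
struct Object { glm::vec3 center;   std::vector<const Light *> lights; };

constexpr size_t kMaxLightsPerObject = 8; // same cap as the original engine

// Attach the closest global lights to an object, on top of the lights
// already associated with it in the asset files.
void attachClosestLights(Object & obj, const std::vector<Light> & globalLights) {
    std::vector<const Light *> candidates;
    for (const Light & light : globalLights) { candidates.push_back(&light); }
    // Sort candidates by distance to the object; ignores visibility volumes.
    std::sort(candidates.begin(), candidates.end(),
              [&](const Light * a, const Light * b) {
                  return glm::distance(a->position, obj.center) <
                         glm::distance(b->position, obj.center);
              });
    for (const Light * light : candidates) {
        if (obj.lights.size() >= kMaxLightsPerObject) { break; }
        obj.lights.push_back(light);
    }
}
```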

Result 4

I spent quite some time tuning the user interface. It is now possible to search for objects in the level. Each object can be displayed alone, decomposed into its parts, and the information about each material layer is logged. Textures can also be displayed fullscreen. Things like the camera speed, field of view and far plane distance can now be adjusted, along with the internal rendering resolution. The background color can also be defined, as I was not able to find where the fog environment color is stored in the assets files.

Example of a single object renderer

Example of a texture rendered in fullscreen

I implemented some additional visualization modes: showing only the pre-baked lighting stored as vertex colors, applying a default white light to all objects in scenes where the lights are unsupported, and disabling the per-object lights. The baked lighting is especially interesting in my opinion: the amount of realism it brings to the final scene is huge, from soft lighting to ambient occlusion effects.
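For illustration, here is a small C++ sketch of how such viewing modes could plug into a basic per-vertex ambient and diffuse shading step, with the baked vertex color modulating the result; this is an assumption-laden sketch (single light, invented mode names), not the viewer's actual shading code.

```cpp
#include <algorithm>
#include <glm/glm.hpp>

// Illustrative viewing modes, mirroring the options described above.
enum class ViewMode { Full, BakedOnly, DefaultLight, NoObjectLights };

struct VertexInputs {
    glm::vec3 normal;      // vertex normal
    glm::vec3 bakedColor;  // pre-baked lighting stored as the vertex color
};

// Per-vertex shading sketch: combine the baked vertex color with a basic
// ambient + diffuse term (a single light direction stands in for the per-object
// light loop). Names and constants are illustrative only.
glm::vec3 shadeVertex(const VertexInputs & v, const glm::vec3 & lightDir,
                      const glm::vec3 & lightColor, ViewMode mode) {
    if (mode == ViewMode::BakedOnly) { return v.bakedColor; }

    const glm::vec3 ambient(0.1f);
    const glm::vec3 dir   = (mode == ViewMode::DefaultLight) ? glm::vec3(0.0f, 1.0f, 0.0f) : lightDir;
    const glm::vec3 color = (mode == ViewMode::DefaultLight) ? glm::vec3(1.0f) : lightColor;
    const float diffuse   = std::max(glm::dot(glm::normalize(v.normal), glm::normalize(dir)), 0.0f);

    const glm::vec3 lighting = (mode == ViewMode::NoObjectLights) ? glm::vec3(1.0f)
                                                                  : ambient + diffuse * color;
    return v.bakedColor * lighting;
}
```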

The different viewing modes for a given scene

Per-vertex baked lighting

Wireframe geometry rendering

Overall this has been a great learning experience: diving into the undocumented details of an existing (if a bit dated) game engine, trying to recreate its rendering results, understanding how the assets are represented internally, and replicating some DirectX 9 fixed-function pipeline features in modern OpenGL. I'm pretty proud of the final results and their closeness to the in-game renders, even if some limitations remain. It was quite striking to discover how those beautiful environments were created and the tricks and knowledge involved in making them appear as they do[3]. The code is available online on GitHub. Possible improvements would be to fix the issues mentioned in the paragraphs above, and to reduce memory consumption and improve rendering performance. Any help is welcome!

Finally, a huge thanks to the people at Cyan who created and engineered these games and decided to open source the Plasma engine, and to the people of H-uru for maintaining a clean fork of the engine.

Result 5

Result 6

Result 7

Result 8

Result 9

Result 10

Link to part 1
Link to part 2
Link to part 3


  1. distance from the surface of the scene to the camera. 

  2. for instance in Teledahn. 

  3. the importance of pre-baked lighting in the final result for instance.