The WEB3D_quantized_attributes extension to glTF offers reasonable compression with little to no overhead for decompression. This meets the needs of 3D Tiles in Cesium perfectly because the 3D Tiles engine frequently downloads new tiles based on the view. Quantized 3D model files mean smaller files, faster downloads, and less GPU memory usage with no performance degradation. The WEB3D_quantized_attributes extension is based on Mesh Geometry Compression for Mobile Graphics, by Jongseok Lee, et al.

Usually, model attributes such as positions and normals are stored as 32-bit floating-point numbers. Quantization takes those attributes and stores them as 16-bit integers represented on a scale between the minimum and maximum values. This means that the attribute data can be stored in half the amount of space! The decompression is done by a simple matrix multiplication, performed in parallel in the vertex shader on the GPU.

Quantization works best on models whose file size is composed mostly of quantizable attributes, such as positions and normals, so files that have complex geometry tend to benefit the most. To illustrate this, I've included a synopsis using two models from the glTF sample models repository.

2 Cylinder Engine
A geometry-heavy model donated to the glTF sample models repository by Okino Polytrans Software.

Statistics
As expected, the model with more geometry benefits more from this compression, but even the model whose file size is mostly impacted by texture benefits from what is effectively free compression. The reason the data doesn't shrink all the way down to 50% is that a portion of the binary data buffer is made up of indices, which cannot be quantized. There are also some slight changes in the size of the glTF JSON itself, due to the restructuring of the data accessors during quantization as well as the addition of a few properties enabling quantization.

Applying gzip to the models shrinks the gap between the quantized and non-quantized file sizes, but ultimately the quantized models still have a size advantage over their non-quantized counterparts and use less GPU memory.

These results come from the gltf-pipeline implementation. The gltf-pipeline project contains an open-source implementation of quantization that can be used to compress your glTF models.

We already use two other techniques for compressing normal and texture coordinate attributes in Cesium. Normal vectors are oct-encoded using the technique described in A Survey of Efficient Representations for Independent Unit Vectors, by Zina H. Cigolle, et al. This compresses three 32-bit floating-point numbers down to two bytes, a roughly 83% compression, so this approach is preferable to quantization for normals. The other technique stores the two 32-bit floating-point numbers representing texture coordinates in a single 32-bit floating-point number. Like quantization, this results in roughly 50% compression, so either technique can be used with similar results.

Unfortunately, quantization cannot be used to further compress these attributes. It can only be applied to floating-point attributes, so it cannot be used on the byte oct-encoded normals; and even though the texture coordinate compression does store its value in a float, attempting to apply quantization on top of it produces some interesting results. Quantization decodes to values close to the original, but not necessarily ones that are exactly the same. The trouble is that compressed texture coordinates that are numerically close are not necessarily close spatially once they are decompressed. This creates the distortions shown above.
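To make the quantization idea described earlier concrete, here is a minimal sketch in JavaScript (the function names are my own, not gltf-pipeline's): each 32-bit float in an attribute is mapped onto 65535 integer steps between the attribute's minimum and maximum values.

```javascript
// Hypothetical sketch of quantization: map each 32-bit float onto a
// 16-bit integer scale between the attribute's min and max values.
function quantize(values, min, max) {
  const range = max - min;
  return values.map(function (v) {
    // 65535 steps across [min, max]
    return Math.round(((v - min) / range) * 65535);
  });
}

// Decoding is the inverse affine map; in the real extension this is
// folded into a decode matrix applied in the vertex shader.
function dequantize(quantized, min, max) {
  const range = max - min;
  return quantized.map(function (q) {
    return min + (q / 65535) * range;
  });
}

const positions = [-1.25, 0.0, 3.5];
const encoded = quantize(positions, -1.25, 3.5);
const decoded = dequantize(encoded, -1.25, 3.5);
// decoded values land within half a quantization step of the originals
```

Each decoded value can differ from the original by up to half a step, (max - min) / 65535 / 2, which is the lossy part of the scheme.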
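The oct-encoding of normals mentioned in this article can be sketched as follows; this follows the mapping from the survey paper but is illustrative only, not Cesium's exact implementation:

```javascript
// Hypothetical sketch of oct-encoding: a unit vector stored in two bytes.
// The sphere is projected onto an octahedron, which is then unfolded
// onto the [-1, 1] square and quantized to 8 bits per component.
function signNotZero(v) {
  return v < 0 ? -1 : 1;
}

function octEncode(n) {
  const invL1 = 1 / (Math.abs(n[0]) + Math.abs(n[1]) + Math.abs(n[2]));
  let x = n[0] * invL1;
  let y = n[1] * invL1;
  if (n[2] < 0) {
    // fold the lower hemisphere over the diagonals
    const fx = (1 - Math.abs(y)) * signNotZero(x);
    const fy = (1 - Math.abs(x)) * signNotZero(y);
    x = fx;
    y = fy;
  }
  // map [-1, 1] to a byte in [0, 255]
  return [
    Math.round((x * 0.5 + 0.5) * 255),
    Math.round((y * 0.5 + 0.5) * 255),
  ];
}

function octDecode(e) {
  let x = (e[0] / 255) * 2 - 1;
  let y = (e[1] / 255) * 2 - 1;
  const z = 1 - Math.abs(x) - Math.abs(y);
  if (z < 0) {
    const fx = (1 - Math.abs(y)) * signNotZero(x);
    const fy = (1 - Math.abs(x)) * signNotZero(y);
    x = fx;
    y = fy;
  }
  // renormalize back onto the unit sphere
  const len = Math.sqrt(x * x + y * y + z * z);
  return [x / len, y / len, z / len];
}
```

Because the decoded vector is renormalized, the round trip preserves direction to within a fraction of a degree, which is accurate enough for shading.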
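The texture-coordinate technique, packing two floats into one float, can be sketched like this, assuming both coordinates lie in [0, 1] and each keeps 12 bits of precision (the constants are my assumption for illustration, not necessarily Cesium's):

```javascript
// Hypothetical sketch: pack two texture coordinates into one 32-bit float.
// 4096 * x + y stays below 2^24, so it fits exactly in a float's
// 24-bit significand and survives storage as a 32-bit float.
function compressTextureCoordinates(u, v) {
  const x = Math.min(Math.floor(u * 4096), 4095);
  const y = Math.min(Math.floor(v * 4096), 4095);
  return 4096 * x + y; // an exactly representable integer
}

function decompressTextureCoordinates(c) {
  const x = Math.floor(c / 4096);
  const y = c - 4096 * x;
  return [x / 4095, y / 4095];
}
```

This also shows why quantizing the packed value breaks down: a small numeric change to the packed float can flip the high 12 bits, moving the decoded coordinate far across the texture, which is the spatial-distortion problem described earlier.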