"An MSI consists of a series of concentric spherical shells, each with an associated RGBA texture map. Like the MPI, the multi-sphere image is a volumetric scene representation. MSI shells exist in three dimensional space, so their content appears at the appropriate positions relative to the viewer, and motion parallax when rendering novel viewpoints works as expected. As with an MPI, the MSI layers should be more closely spaced near the viewer to avoid depth-related aliasing. The familiar inverse depth spacing used for MPIs
yields almost the correct depth sampling for MSIs, assuming depth is measured radially from the rig center. The spacing we use is determined by the desired size of the interpolation volume and angular sampling density as described in Appendix A.
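The inverse depth spacing described above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact Appendix A derivation: the function names and the near/far depth parameters are my own, and the paper further adjusts spacing for the interpolation volume size and angular sampling density.

```python
import numpy as np

def msi_shell_radii(d_near, d_far, n_shells):
    """Shell radii spaced uniformly in inverse depth (disparity),
    measured radially from the rig center. Sampling uniformly in 1/d
    packs shells densely near the viewer and sparsely far away, which
    is the depth-aliasing argument quoted above."""
    inv_depths = np.linspace(1.0 / d_near, 1.0 / d_far, n_shells)
    return 1.0 / inv_depths  # increasing radii, near to far

# Example: 8 shells between 0.5 m and 100 m (hypothetical values)
radii = msi_shell_radii(d_near=0.5, d_far=100.0, n_shells=8)
```

Note how the gaps between consecutive radii grow with distance: most of the shells sit close to the rig, where parallax errors would be most visible.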
3.2.1 MSI Rendering. For efficient rendering, we represent each sphere in the MSI as a texture-mapped triangle mesh, and we form the output image by projecting these meshes to the novel viewpoint, and then compositing them in back-to-front order. Specifically, given a ray r corresponding to a pixel in the output view, we first find all ray-mesh intersections along the ray. We denote Cr = {c1, . . . , cn}
and Ar = {α1, . . . , αn} as the color and alpha components at each intersection, sorted by decreasing depth. We then compute the output color cr by repeatedly over-compositing these colors. [..] We parameterize the MSI texture maps using equi-angular sampling, although other parameterizations could be used if it were necessary to dedicate more samples to important parts of the scene."
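The back-to-front over-compositing step is the standard "over" operator applied along each ray. A minimal sketch, assuming the per-ray colors and alphas have already been gathered and sorted by decreasing depth (the function name and plain-tuple representation are mine, for illustration):

```python
def over_composite(colors, alphas):
    """Accumulate per-ray samples sorted farthest-first with the
    'over' operator: out <- c_i * a_i + out * (1 - a_i).
    `colors` is a sequence of RGB triples in [0, 1]; `alphas` holds
    the matching alpha values."""
    out = (0.0, 0.0, 0.0)
    for (r, g, b), a in zip(colors, alphas):
        out = (r * a + out[0] * (1.0 - a),
               g * a + out[1] * (1.0 - a),
               b * a + out[2] * (1.0 - a))
    return out

# A far opaque red sample half-covered by a near green one:
pixel = over_composite([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
                       [1.0, 0.5])
```

In practice a GPU does this per fragment via alpha blending rather than in a Python loop, but the arithmetic is the same.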
Presumably this means interpolating from data points recorded IRL to synthesize data points that are not in the captured dataset, but which do exist, at least approximately, IRL.