Before joining the platform team, I managed Wevr's content production pipeline. During that time I experimented with hundreds of examples of 360 video content: live action and CG, content at different frame rates and in different lighting conditions, and stitched footage from different rig configurations, including custom 3D-printed solutions. I went on many VR shoots and assisted directors and producers, helping them make better VR content.
Some projects were more challenging than others, and I have captured the process for a few of them below -
MEATWAD FULL DOME EXPERIENCE
The challenge with this content was a unique one - the source was not equirectangular but in a polar projection format, used at the Meatwad Full Dome Experience, part of Adult Swim's San Diego Comic-Con presence in 2014. This did not work in VR: projected onto a full sphere (like all 360 video content is), the content would get stretched, and projected onto a half sphere it would get cut off at eye level - which was not acceptable.
Example of a Polar projected frame and how it gets cut off at the eye level in VR
The first step towards a solution was to convert the content to the common projection format used in VR video - equirectangular. This was done by building a simple shader network using the domemaster plugin for Maya. After numerous experiments we decided that the best way to make the content feel immersive in VR would be to mirror the same content below to complete the sphere. We did this in Adobe Premiere, and the over-the-top psychedelic nature of the content lent itself well to this experiment.
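To make the conversion concrete, here is a minimal sketch of the per-pixel mapping that a polar-to-equirectangular conversion performs - the idea behind the Maya shader network, not the actual shader. The function name and the dome orientation (which direction "front" faces) are assumptions for illustration.

```python
import math

def equirect_to_dome(u, v):
    """Map a normalized equirectangular coordinate (u, v in [0, 1],
    v = 0 at the zenith, v = 0.5 at eye level) to the normalized
    polar/domemaster coordinate it should sample from.
    Returns None for points below the horizon, which the dome
    footage does not cover.  The dome's rotational orientation
    here is an assumption."""
    lon = (u - 0.5) * 2.0 * math.pi          # longitude, -pi .. pi
    lat = (0.5 - v) * math.pi                # latitude, pi/2 (zenith) .. -pi/2
    if lat < 0:
        return None                          # below eye level: no source pixels
    r = (math.pi / 2 - lat) / (math.pi / 2)  # 0 at zenith, 1 at the horizon
    x = 0.5 + 0.5 * r * math.sin(lon)
    y = 0.5 - 0.5 * r * math.cos(lon)
    return x, y
```

Note that everything below eye level maps to nothing - which is exactly why the bottom half of the sphere had to be filled some other way, as described next.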
Polar to Equirectangular with mirrored bottom resulting in a fully immersive experience
Although this gave us the result we needed visually, we ran into performance issues playing the content on phones - the devices would heat up, freeze and even crash. A lot of this had to do with the many different types of CG content with varying bitrates playing in sequence, and there was no time to encode each section separately and then concatenate the result. We realized that the mirrored content was baked into the frames, but the same result could be achieved by mirroring the UV map of the sphere the content was being projected onto. This would reduce the size of the video itself and solve a lot of the performance issues - and it did!
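The UV trick boils down to one remapping: the video file keeps only the top half of the frame, and the sphere's bottom hemisphere re-samples that same half in reverse. A sketch of the V-coordinate remap (the function name is illustrative, not the actual engine code):

```python
def mirrored_v(v):
    """Remap a full-sphere V coordinate (0 = top of sphere, 1 = bottom)
    into a half-height video texture (0 = top, 1 = eye level).
    The bottom hemisphere samples the top hemisphere in reverse,
    giving the same visual result as baking a mirrored copy into
    every frame - at half the decoded pixel count."""
    if v <= 0.5:
        return 2.0 * v          # top hemisphere: direct mapping
    return 2.0 - 2.0 * v        # bottom hemisphere: mirrored
```

Because the mirroring now happens in the sphere's UVs rather than in the pixels, the video the phone has to decode is half as tall, which is where the performance win comes from.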
The following command was used to crop the baked mirrored half out of the content frame and then scale the result to an appropriate size for Cardboard devices. We also had to cap the maximum bitrate of the video.
ffmpeg -r 30.0 -framerate 30.0 -i ASD_DOME_%5d.tif -i ASD_AudioOnly_3.wav -c:v libx264 -crf 18 -maxrate 18000k -bufsize 18000k -preset slow -vf "crop=3840:1360:0:0, scale=2880:1020" -pix_fmt yuv420p -x264opts keyint=infinite MWD_finaledit_2880x1020_crf18_max18_slow_h264_v01.mp4
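A quick sanity check on the crop and scale values in that command, using the numbers from the recipe above: the scale preserves the cropped aspect ratio exactly, and the output dimensions stay compatible with yuv420p chroma subsampling.

```python
crop_w, crop_h = 3840, 1360  # crop=3840:1360:0:0 keeps only the top half
out_w, out_h = 2880, 1020    # scale=2880:1020 for Cardboard-class devices

# Cross-multiplication shows the aspect ratio is preserved exactly
# (both frames are 48:17).
assert crop_w * out_h == out_w * crop_h

# yuv420p needs even dimensions; these are divisible by 4 as well.
assert out_w % 2 == 0 and out_h % 2 == 0
```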
Final 360 Video.
FINDING YOUR TRUE SELF
The challenge for this experience was that we needed to adhere to the file-size recommendations provided by Oculus in order to submit the title to the store, while also making sure its visual quality was the best possible.
The issue was mainly caused by the experience containing two videos - an introduction (7 minutes long) and the meditation itself (16 minutes long) - which was one reason the overall file size went over. The other reason was that a large part of the meditation had a dark galaxy background: when the whole video was encoded using our standard FFMPEG recipes, the dark parts showed a lot of compression artifacts, and when compressed with higher quality settings, the result was a bloated file size. The process used to overcome this challenge is described below -
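The arithmetic behind the problem is simple: file size is roughly average bitrate times duration, so two long videos multiply the budget quickly. The bitrates below are illustrative placeholders, not the actual values used, and the Oculus limit itself is not quoted here.

```python
def video_size_mb(bitrate_mbps, minutes):
    """Approximate file size in megabytes from an average bitrate
    (megabits per second) and a duration in minutes."""
    return bitrate_mbps * minutes * 60 / 8.0

# Illustrative numbers only -- not the bitrates actually used.
intro = video_size_mb(10, 7)        # 7-minute introduction
meditation = video_size_mb(10, 16)  # 16-minute meditation
total = intro + meditation          # both videos count against one budget
```

Even at a modest 10 Mbps, 23 minutes of combined footage lands well past a gigabyte, which is why every section had to earn its bitrate.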
Experience menu system
Compressing the Introduction:
The lecture was a simple live-action 360 video of Deepak in his office, sharing the benefits of meditation. It was well lit and had a fairly constant bitrate, so I used one of our standard FFMPEG recipes to encode it.
Compressing the Meditation:
The meditation is 16 minutes long, and a large part of it has a dark galaxy background. When encoded using a standard FFMPEG recipe, the result showed a lot of compression artifacts in the black galaxy sections, and when compressed with higher quality settings, the resulting file was much larger than the Oculus guidelines allow. For this reason we decided to divide the video into three separate parts and encode the part that needed better quality with a different set of parameters.
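A sketch of how such a split-and-rejoin can be driven: each part is encoded with its own quality settings, then joined losslessly with ffmpeg's concat demuxer (`ffmpeg -f concat -safe 0 -i parts.txt -c copy out.mp4`), which works as long as every part shares the same codec, resolution and pixel format. The segment boundaries and filenames below are hypothetical, not the actual edit points used.

```python
# Hypothetical segment boundaries (in seconds) for the 16-minute
# meditation; part 2 stands in for the dark galaxy section that
# would get a lower CRF (higher quality) than the others.
segments = [
    ("med_part1.mp4", 0, 240),
    ("med_part2.mp4", 240, 780),
    ("med_part3.mp4", 780, 960),
]

# Write the list file in the format the concat demuxer expects:
# one "file '<name>'" line per segment, in playback order.
with open("parts.txt", "w") as f:
    for name, _start, _end in segments:
        f.write(f"file '{name}'\n")
```

Because the final join is a stream copy rather than another encode, splitting costs no extra generation loss - only the per-part encodes touch the pixels.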