Before joining the platform team, I managed Wevr's content production pipeline. During that time I experimented with many hundreds of examples of 360 video content, including live action, CG, content at different frame rates and lighting conditions, and stitched footage from different rig configurations, including custom 3D-printed solutions. I went on many VR shoots and assisted directors and producers, helping them make better VR content. 
Some projects were more challenging than others, and I have captured the process behind a few of them below -
The challenge with this content was a unique one - the source was not equirectangular but in a polar projection format, which was used at the Meatwad Full Dome Experience, part of Adult Swim's San Diego Comic-Con presence in 2014. This did not work in VR: when projected onto a full sphere (as all 360 video content is), the content would get stretched, and if projected onto only a half sphere it would get cut off at eye level - which was not acceptable. 
Example of a Polar projected frame and how it gets cut off at the eye level in VR 
The first step towards a solution was to convert the content to the common projection used in VR video - equirectangular. This was done by creating a simple shader network using the DomeMaster plugin for Maya. After numerous experiments we decided that the best way to make the content feel immersive in VR would be to mirror the same content below the horizon to complete the sphere. We did this in Adobe Premiere, and the over-the-top psychedelic nature of the content lent itself well to the experiment. 
Polar to Equirectangular with mirrored bottom resulting in a fully immersive experience
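For reference, the projection change and mirroring described above can also be sketched as a single FFmpeg filtergraph. This is a minimal sketch, assuming a 180° angular-fisheye (domemaster) source and hypothetical file names; the production pipeline used the Maya shader network and Premiere as described:

```shell
# Hypothetical sketch: convert a 180-degree domemaster (angular fisheye)
# frame sequence to equirectangular with FFmpeg's v360 filter, then mirror
# the top hemisphere to fill the bottom of the sphere. The production
# pipeline used a Maya shader network and Premiere instead; file names
# here are illustrative.
FILTER="v360=input=fisheye:output=equirect:ih_fov=180:iv_fov=180,\
crop=iw:ih/2:0:0,split[t][b];[b]vflip[m];[t][m]vstack"
echo "$FILTER" > filtergraph.txt

# With real footage this would run as:
# ffmpeg -framerate 30 -i dome_%05d.tif -filter_complex "[0:v]$FILTER" \
#        -c:v libx264 -crf 18 -pix_fmt yuv420p mirrored_equirect.mp4
```

The crop keeps only the valid top hemisphere of the equirectangular frame; the vflip/vstack pair rebuilds the full 2:1 sphere with the mirrored bottom.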
Although this gave us the visual result we needed, we had performance issues running the content on phones: the devices would heat up, freeze and even crash. A lot of it had to do with the many different types of CG content, with varying bitrates, playing in sequence. There was no time to encode each section separately and then concatenate the result. We realized that the mirrored content was baked into the frames, but the same result could be achieved by mirroring the UV map of the sphere the content was projected on. This would reduce the size of the video itself and solve many of the performance issues - and it did!
The following command was used to crop the baked mirrored portion out of the content frame and then scale the result to an appropriate size for Cardboard devices. We also had to cap the maximum bitrate of the video. 
ffmpeg -r 30.0 -framerate 30.0 -i ASD_DOME_%5d.tif -i ASD_AudioOnly_3.wav -c:v libx264 -crf 18 -maxrate 18000k -bufsize 18000k -preset slow -vf "crop=3840:1360:0:0, scale=2880:1020" -pix_fmt yuv420p -x264opts keyint=infinite MWD_finaledit_2880x1020_crf18_max18_slow_h264_v01.mp4
Final 360 Video. 
The challenge for this experience was that we needed to adhere to the file-size recommendations provided by Oculus in order to submit the title to the store, while also making sure its visual quality was the best possible.
The issue was mainly caused by the fact that the experience contained two videos - an introduction (7 minutes long) and the meditation itself (16 minutes long) - which was one reason the overall file size went over the limit. The other reason was that a large part of the meditation had a dark galaxy background: when the whole video was encoded using standard FFmpeg recipes, the dark parts showed a lot of compression artifacts, and when compressed with higher quality settings, the result was a bloated file. Find below the process used to overcome this challenge -  
Experience menu system
Compressing the Introduction:
The introduction was a simple live-action 360 video of Deepak in his office, sharing the benefits of meditation. It was well lit and had a fairly constant bitrate, so I used one of our standard FFmpeg recipes to encode it.
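A representative recipe of that kind looks like the following; the file names and exact parameter values here are illustrative, not the production settings:

```shell
# Illustrative only: a generic single-pass H.264 recipe for well-lit,
# roughly constant-bitrate live-action 360 video. File names and exact
# parameter values are hypothetical.
RECIPE="-c:v libx264 -crf 18 -preset slow -pix_fmt yuv420p -c:a aac -b:a 192k"
echo "$RECIPE" > recipe.txt

# ffmpeg -i intro_4096x2048.mov $RECIPE intro_final.mp4
```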
Compressing the Meditation:
The meditation is 16 minutes long, and a large part of it has a dark galaxy background. When encoded using a standard FFmpeg recipe, the result showed a lot of compression artifacts in the galaxy sections, and when compressed with higher quality settings, the file came out much larger than the Oculus guidelines allow. For this reason we decided to divide the video into three separate parts and encode the part that needed better quality with a different set of parameters.

Part 1 - 0-10 mins
Part 2 - 10-15 mins
Part 3 - 15-16 mins
Once the three files were encoded, they were concatenated into a single file using the concat feature in FFmpeg. The audio was then added back, and the app met its memory, performance and visual benchmarks for submission.
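The workflow above can be sketched as follows. All file names and CRF values are illustrative, as is the assumption about which part got the higher-quality settings:

```shell
# Hypothetical sketch of the split / encode / concat workflow. Timestamps
# match the part list above; file names, CRF values, and which part gets
# the higher-quality settings are illustrative.

# 1. Encode each section separately (here part 2 is assumed to be the
#    dark galaxy section, so it gets a lower, higher-quality CRF):
# ffmpeg -i meditation_master.mov -ss 0   -to 600 -an -c:v libx264 -crf 20 part1.mp4
# ffmpeg -i meditation_master.mov -ss 600 -to 900 -an -c:v libx264 -crf 14 part2.mp4
# ffmpeg -i meditation_master.mov -ss 900 -to 960 -an -c:v libx264 -crf 20 part3.mp4

# 2. List the parts in the format FFmpeg's concat demuxer expects:
printf "file '%s'\n" part1.mp4 part2.mp4 part3.mp4 > concat_list.txt

# 3. Join the parts without re-encoding and mux the audio back in:
# ffmpeg -f concat -safe 0 -i concat_list.txt -i meditation_audio.wav \
#        -c:v copy -c:a aac -shortest meditation_final.mp4
```

Because the concat step stream-copies the video (`-c:v copy`), the per-part quality choices survive unchanged in the final file.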
YouTube link to 360 teaser
Among other studios, Wevr was invited to film the Endeavour space shuttle at the California Science Center in VR. The specific event we covered was the Spacehab module being lowered into Endeavour's payload bay. The challenge with this shoot was that we had to work around the tight schedule of the NASA and Science Center engineering teams. The team at the Science Center trusted us and allowed us to place our rigs wherever we thought would get the best shots - I assisted the director of production in installing different types of VR camera rigs at various locations in and around the shuttle. 
10-camera mono GoPro rig inside the Spacehab module.
14-camera stereo GoPro rig at the entrance of the shuttle and a 10-camera mono GoPro rig inside the cockpit.
10-camera mono GoPro rig installed inverted under the lift.
The most successful experiment was the one where we mounted a 10-camera rig under a crane and shot the entire shuttle from a top-down view. I stitched the footage from the different rigs using Kolor Autopano Pro. Find below some stills from this shoot - 
"Experience David Bowie’s Ziggy Stardust’s stratospheric rise to fame through Mick Rock’s legendary photo archive of the visually stunning stage persona that established Bowie as a major force in modern pop music." 
The goal of this prototype was to create a three-dimensional immersive VR experience using two-dimensional photographs of David Bowie taken from Mick Rock's photo archive. The challenge was obvious! I worked with creative director Travis Hutchson and associate creative director Monica Nascimento to build this prototype in VR.
The 2D images were first animated to create video files (like a texture atlas, but made out of video) which were then mapped onto 3D objects placed at different depths to create a three-dimensional multimedia scene. 
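As an illustration of the video-atlas idea, several animated clips can be packed into a single video texture with FFmpeg's xstack filter. This is a sketch only - the clip names and 2x2 layout are hypothetical, and the actual animations were authored by the creative team:

```shell
# Hypothetical sketch: pack four animated clips into a single 2x2 "video
# atlas" with FFmpeg's xstack filter, so the engine samples one texture
# and the scene geometry maps UVs into the appropriate quadrant. File
# names and the layout are illustrative.
LAYOUT="xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0"
echo "$LAYOUT" > atlas_filter.txt

# ffmpeg -i clip1.mp4 -i clip2.mp4 -i clip3.mp4 -i clip4.mp4 \
#        -filter_complex "$LAYOUT" -c:v libx264 -crf 18 bowie_atlas.mp4
```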
Photographs of David Bowie by Mick Rock
Concept layout of animation created using the photographs
Layout the animation in 3D
VR capture from the Wevr engine