It’s certainly an exciting time for Virtual Reality! The new technology is attracting a lot of attention from prominent media outlets as well as everyday consumers. Smartphones have put capable displays and motion sensors in everyone’s pocket, drawing more and more people to virtual reality. And as more people discover 3D content in virtual reality, it is important that they are able to find relevant content that appeals to them.
VR is growing fast, and one of the main factors in its development is 3D content. Tools like Unity3D and Unreal Engine make it easy to take 3D models and add them to your scene. I want to share with you what resources are available, where you can find free models, and how to convert your models, with animation, for import into a game engine.
You know how long I’ve been interested in Virtual Reality? A long time! Not just for video gaming, but for 3D content in general. I’ll never forget when a friend introduced me to the Oculus Rift Kickstarter campaign, and when I finally tried it at a VR meetup. It’s amazing how virtual reality has changed my life, and I want as many people as possible to experience this unforgettable moment for themselves.
3D Content for VR
We desperately need amazing content for virtual reality to go mainstream. How to create virtual reality content is probably one of the most important questions of this time, and there are several ways to do it. One is to build content from scratch in software such as Unity, Unreal, or CryEngine; this content is imaginary and is usually used to build VR games. But what if we want to capture the real world? In this blog post, we will look at some virtual reality capture methods that allow you to create virtual reality replicas of the real world.
How can we create virtual reality today?
Virtual reality capture methods: 360 video and photo capture
360-degree videos are created by filming all 360 degrees of a scene at the same time. Users can view the video from any angle: turn or move the device and the 360-degree video will follow, creating an immersive “virtual reality”-type experience. 360-degree video is typically recorded with either a special rig of cameras or a dedicated camera that contains multiple lenses. The resulting footage is then stitched into a single video, either by the camera itself or with specialized video editing software that can analyze common visuals and audio to synchronize and link the different camera feeds together. You can read about how to stitch 360 videos here.
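Stitched 360 footage is commonly stored as an equirectangular frame, where longitude and latitude of the viewing direction map to pixel coordinates; this is what lets a player follow the device as you turn. As a minimal sketch (assuming an equirectangular layout, which the post doesn’t specify for any particular camera), here is how a viewing direction maps to a pixel:

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D viewing direction to pixel coordinates in an
    equirectangular frame (longitude/latitude layout).
    +z is 'straight ahead', +y is 'up'."""
    norm = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)            # -pi..pi, 0 = straight ahead
    lat = math.asin(y / norm)         # -pi/2..pi/2, pi/2 = straight up
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return u, v

# Looking straight ahead lands in the center of the frame;
# looking straight up lands on the top row.
center = direction_to_equirect(0, 0, 1, 3840, 1920)
top = direction_to_equirect(0, 1, 0, 3840, 1920)
```

A 360 player simply runs this mapping in reverse for every screen pixel, which is why the same stitched file works on any headset or phone.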
Captured content can be either monoscopic or stereoscopic. Stereoscopic 360 content gives you a 360° view of the environment and creates a 3D effect for nearby objects. Monoscopic 360 content lacks the 3D effect but is much easier to produce and to distribute.
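The 3D effect in stereoscopic content comes from parallax: a nearby object appears at slightly different positions in the left- and right-eye views. The standard pinhole-stereo relationship (a general textbook model, not specific to any camera mentioned here) also explains why the effect fades with distance, since the disparity shrinks toward zero:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Estimate the distance to a point seen by both lenses/eyes.

    focal_px: focal length in pixels
    baseline_m: distance between the two lenses in metres
    disparity_px: horizontal shift of the point between the two views
    """
    if disparity_px <= 0:
        # No measurable parallax: the point is effectively at infinity,
        # which is why far-away objects look flat even in stereo 360.
        return float("inf")
    return focal_px * baseline_m / disparity_px

# With a 1000 px focal length and a 6.5 cm (eye-distance) baseline,
# a 10 px disparity corresponds to a point about 6.5 m away.
d = depth_from_disparity(1000, 0.065, 10)
```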
How can Viar360 help you turn your 360-degree images and videos into VR experiences?
How to create a virtual reality experience in 30 minutes with Viar360?
Use your 360 camera to record the environment. You’ll end up with a bunch of 360-degree videos and images which will serve as your underlying media files.
Upload the media files to Viar360. Create a new folder in your media library and upload 360-degree and regular media files.
Create a new project and add your 360 media files. Each file you add will represent one individual VR scene.
Add additional media files and interactions in each individual VR scene. Open each scene and use Viar360’s editor to add additional media files and interactions inside your 360 scenes.
Publish the experience and download the VR app to view it on your VR headset. Once you’re done editing, publish your “Story” and download the app for one of the supported VR headsets. Once you log in to the app, you will be able to play the VR experience inside your VR headset.
Virtual reality capture methods: Light field capture
Unlike standard 360º video, light field video captured with Lytro Immerge allows for motion with six degrees of freedom within the camera’s capture volume, which is about one meter across. This means that you can move around within the volume of the sphere and lean in toward objects. In addition to adding a degree of positionally tracked volume to the scene, a true light field recording like Lytro’s captures both horizontal and vertical parallax, giving the scene true depth and perspective regardless of your viewing angle.
Light fields are a relatively simple concept and are not that new, but their actual implementation is extremely hard to pull off. A ‘light field’ (a.k.a. ‘plenoptic function’) is really just all the light that passes through an area or volume. Physicists have been talking about light fields since at least 1846. Lytro really popularized the idea by developing the world’s first consumer light field camera back in 2012. A Light Field can be captured using an array of multiple cameras as well as a plenoptic device like the Lytro ILLUM, with an array of microlenses placed across its sensor. The core principle in both cases is that the Light Field capture system needs to be able to record the path of light rays from multiple viewpoints.
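One common way to make the plenoptic function concrete is the two-plane parameterization: a ray is identified by where it crosses a camera plane (u, v) and an image plane (s, t), and the light field stores a radiance value for each sampled ray. The toy sketch below (a dictionary of samples standing in for the dense capture a real rig produces) illustrates the idea; real systems interpolate between many nearby rays to synthesize novel viewpoints rather than doing a nearest-neighbour lookup:

```python
class LightField4D:
    """Minimal two-plane light field L(u, v, s, t): (u, v) indexes a
    camera position on one plane, (s, t) a pixel on the image plane.
    A toy stand-in for a dense multi-camera or microlens capture."""

    def __init__(self):
        self.samples = {}

    def record(self, u, v, s, t, radiance):
        # One captured ray: position on the camera plane, position on
        # the image plane, and the radiance carried along that ray.
        self.samples[(u, v, s, t)] = radiance

    def lookup(self, u, v, s, t):
        # Nearest-neighbour query; production systems blend many
        # neighbouring rays to render a new viewpoint smoothly.
        key = (round(u), round(v), round(s), round(t))
        return self.samples.get(key)

lf = LightField4D()
lf.record(0, 0, 1, 1, 0.8)
# A query from a slightly shifted viewpoint snaps to the nearest
# recorded ray -- the crude version of view synthesis.
value = lf.lookup(0.2, -0.1, 1.1, 0.9)
```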
With the Lytro Immerge Light Field camera, the light rays’ paths are captured via a densely packed spherical array of proprietary camera hardware and computational technology. In this spherical configuration, a sufficient set of Light Field data is captured from the light rays that intersect the camera’s surface. From that captured Light Field data, the Lytro Immerge system mathematically reconstructs a spherical Light Field Volume of roughly the same physical dimensions as the camera.
Virtual reality capture methods: Volumetric 3D capture
Volumetric VR gives you a feeling of reality. Users see scenes with three-dimensional humans who actually look like real people, and they can physically walk around these “characters” and watch them from any angle. Unlike film, there are no “takes” or “shots” in VR that are edited in post-production. It’s much more fluid, as the viewer is the one framing the scene and choosing their own perspective. In that sense, the viewer takes that role from the director, which opens up entirely new possibilities for storytelling and acting.
While traditional approaches to VR content turn cameras outward, 8i turns the cameras inward. 8i uses off-the-shelf high-definition cameras to record video of a real person from various viewpoints. Then it uses its own software to capture, analyze, compress, and recreate in real-time all the viewpoints of a fully volumetric 3D human. You can see an example of such capture below.
The captured volumetric video can also be used for AR. In the clip below you can see how Microsoft captured holograms that can then be viewed through their Hololens.
Virtual reality capture methods: Photogrammetry
The fundamental principle used by photogrammetry is triangulation. By taking photographs from at least two different locations, so-called “lines of sight” can be drawn from each camera to points on the object. These lines of sight (sometimes called rays owing to their optical nature) are mathematically intersected to produce the three-dimensional coordinates of the points of interest.
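The intersection step above can be sketched in a few lines. In practice the two rays almost never meet exactly (because of pixel noise and calibration error), so a standard approach is to take the midpoint of the shortest segment connecting the two lines of sight; this is a generic textbook method, not the algorithm of any specific photogrammetry package:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(p1, d1, p2, d2):
    """Triangulate a 3D point from two lines of sight.
    p1, p2: camera centres; d1, d2: ray directions toward the point.
    Returns the midpoint of the segment of closest approach."""
    r = [x - y for x, y in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; cannot triangulate")
    t1 = (b * e - c * d) / denom   # parameter along ray 1
    t2 = (a * e - b * d) / denom   # parameter along ray 2
    q1 = [p + t1 * u for p, u in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + t2 * u for p, u in zip(p2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two cameras 2 m apart both sighting the same point at (1, 1, 5):
point = triangulate([0, 0, 0], [1, 1, 5], [2, 0, 0], [-1, 1, 5])
```

Repeating this for thousands of matched image features is what produces the dense point cloud that photogrammetry software turns into a 3D model.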
A somewhat similar application is the scanning of objects to automatically make 3D models of them. Some programs, like Photoscan, allow people to quickly make 3D models using this photogrammetry method. You can check out this resource for the best photogrammetry (3D scanning) apps available at this time. It should be noted, though, that the produced model often still contains gaps, so additional cleanup in software is usually necessary. Microsoft is also making its own play in this area.
Photogrammetry is already used in different fields, such as topographic mapping, architecture, engineering, manufacturing, quality control, police investigation, and geology. It’s also used a lot by archaeologists to quickly produce plans of large or complex sites. Meteorologists use it as a way to determine the actual wind speed of a tornado where objective weather data cannot be obtained. It is also used to combine live action with computer-generated imagery in movie post-production, and it has been used extensively to create photorealistic environmental assets for video games.
What to do with the captured material?
Let’s assume you’re more interested in 360 video and photogrammetry. 360-degree video is already available to the public today and doesn’t require a steep learning curve. Once you’ve recorded and edited a 360 video, you can publish it on YouTube, or you can build an interactive VR story on a platform like Viar360. Interactive stories give viewers the ability to move from one 360 video to another, so the viewer controls how the story unfolds.
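Under the hood, an interactive story like this is just a graph: each 360 scene is a node, and each hotspot the viewer selects is an edge to another scene. The sketch below is a hypothetical illustration of that structure (the scene names, file names, and `next_scene` helper are all made up; this is not Viar360’s actual data model):

```python
# Hypothetical branching story: each scene has a 360 video and a set of
# hotspots mapping the viewer's choice to the next scene.
story = {
    "lobby":   {"video": "lobby.mp4",   "hotspots": {"door": "hallway"}},
    "hallway": {"video": "hallway.mp4", "hotspots": {"back": "lobby",
                                                     "office": "office"}},
    "office":  {"video": "office.mp4",  "hotspots": {}},  # ending scene
}

def next_scene(story, current, choice):
    """Return the scene the viewer's choice leads to; an unknown
    choice (or a scene with no hotspots) keeps the viewer in place."""
    return story[current]["hotspots"].get(choice, current)

# The viewer gazes at the "door" hotspot and moves to the hallway.
scene = next_scene(story, "lobby", "door")
```

Because the viewer walks this graph themselves, two people can watch the same published story and see it unfold in a completely different order.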
If you have used photogrammetry to capture a 3D model with one of the 3D scanning apps, then you can take that model and place it into a CGI environment with game engines like Unity or Unreal. But let’s leave this for another time.