Panasonic AG-HMX100 Manual

3D Production and Post
Barry Clark
03-26-10
Real World 3D
When a viewer's eyes focus on a real object, they automatically converge on the object.
From the separate perspectives seen by the two eyes, the viewer's brain fuses a coherent
3D image of the object. All of the objects that the viewer sees in 3D occupy a cone that is
bounded by the edges of the overlapping fields of focus and convergence of the viewer's
eyes. Everything outside of this cone is seen by the viewer in 2D. As the viewer's eyes
focus on progressively more distant objects, the zone of convergence shifts with the zone
of focus and the cone shrinks in width until an outer limit of distance is reached—a
distance of 100-200 yards in the average adult—beyond which the viewer can no longer
distinguish the perspectives seen by the left and right eyes. Everything that is located
further away from the viewer seems to lie on a flat, 2D plane. To judge the relative
position in space of objects that lie beyond this stereoscopic limit, a viewer must rely on
monoscopic depth cues, including motion cues (nearby objects seem to shift position more
rapidly than distant objects), atmospheric cues (the hue of objects shifts toward blue as
they move into the distance), and occlusion cues (near objects obscure the view of more
distant objects).
Fig.1 – Real World 3D
Simulated 3D
The experience of viewing a 3D film is significantly different from the way a viewer sees
3D in the real world. The most obvious differences between real world 3D and the
simulated 3D that is viewed on a screen are a consequence of the fixed depth-of-field and
the fixed point-of-view of the lenses that capture the images. As a result of these
constraints, viewers watching simulated 3D can no longer alter their point-of-view simply
by shifting the position of their heads, as they can in the real world. And when turning
their attention from one object of interest to another, they can no longer simply refocus
their eyes, as they can in the real world. In a 3D film, the point-of-view and the focus are
fixed on the set. In addition, when looking at a 3D object displayed on a
screen, a viewer's eyes must focus on the screen while, at the same time, they converge on
a point in space that may be located beyond the screen, on the screen, or in front of the
screen. As a result of this process—which differs from the way a viewer sees the world—
the viewer has the sensation that the 3D object is located either in the space beyond the
screen, on the screen plane, or in front of the screen. A 3D object that appears to be
located on the screen plane is relatively easy for a viewer to watch. But, over time, a
viewer may experience eyestrain from the effort involved in fusing coherent 3D images of
objects that reside far beyond or far in front of the screen.
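
The placement described here (beyond the screen, on it, or in front of it) follows directly
from where the two sight lines cross. The Python sketch below works out that geometry under
the usual similar-triangles model; the function name, the sign convention, and the 65 mm
default eye separation are illustrative assumptions, not figures from the article.

```python
def perceived_depth_m(parallax_m, viewing_distance_m, eye_separation_m=0.065):
    """Apparent distance (in meters) from the viewer to a fused 3D point,
    given the on-screen separation of its left- and right-eye images.

    Sign convention, matching the article: positive parallax places the point
    behind the screen, zero on the screen plane, negative in front of it.
    """
    if parallax_m >= eye_separation_m:
        # Parallax equal to the eye separation puts the point at infinity;
        # anything larger would force the eyes to diverge and cannot be fused.
        return float("inf")
    # Similar triangles: eyes (separation e) at distance D from the screen and
    # an image pair separated by p on it converge at z = e * D / (e - p).
    return eye_separation_m * viewing_distance_m / (eye_separation_m - parallax_m)


# Viewed from 3 m: +20 mm of parallax appears ~4.3 m away (behind the screen),
# -20 mm appears ~2.3 m away (in front of it), and 0 sits on the screen plane.
print(perceived_depth_m(0.020, 3.0), perceived_depth_m(-0.020, 3.0),
      perceived_depth_m(0.0, 3.0))
```

As the positive parallax approaches the viewer's eye separation, the fused point recedes
toward infinity; larger values cannot be fused at all, which is one source of the eyestrain
noted above.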

Summary of Contents for Panasonic AG-HMX100

  • Page 2 Depth and Scale Effects Along with the limitations noted above, simulated 3D offers some valuable tradeoffs. In addition to the opportunity to use most of the creative effects that are familiar to 2D filmmaking (among them: color effects, lens distortions, and a wide depth-of-field), 3D filmmakers gain the opportunity to exploit depth and scale effects that are unique to the 3D medium.
  • Page 3 Convergence, Parallax, and Depth When the parallax value of the pair of images of an object is negative, the left-eye image of the object is seen on the screen at a position that lies to the right of its right-eye image. When a viewer converges the image pair, the 3D object seems to be located in front of the screen plane.
  • Page 4 Preproduction Decisions The essential decisions regarding I/O and convergence are often made in preproduction. While errors in convergence can be relatively easily corrected in post, misjudgments in setting the interocular are not so easily fixed. The I/O and convergence decisions for a film may be incorporated into a depth script, a graph that tracks the filmmaker’s intentions for the emotional impact of the scenes (one way such a depth script might be recorded is sketched in code after this list).
  • Page 5 Production Decisions As illustrated in Fig.5, the cameras in the 3D rig may be configured to shoot (1) in parallel mode (unconverged); (2) converged beyond the key object of interest in the scene; (3) converged on the key object of interest in the scene; or (4) converged in front of the key object of interest in the scene.
  • Page 6: Post Processing – Check As You Go: The avoidance of unwanted artifacts during production is best achieved on location by evaluating shots with a 3D monitor and by screening 3D dailies on a screen that matches the size of the display on which the film will be seen in its target market. As further insurance, simple computer programs permit filmmakers to determine acceptable interocular settings by entering values for parallax, focal length, imager size, subject distance, audience interocular, and the width of the target screen (a sketch of this calculation appears after this list).
  • Page 7 Convergence Considerations As noted above, excessive convergence can produce unwanted warping effects (including keystoning effects, which result from vertical disparities at the edge of the right and left images), as well as scaling effects—a consequence of placing familiar objects far beyond or far in front of the screen plane.
  • Page 8 Maximizing Depth When viewing 3D films, audiences accustomed to scanning the real 3D world tend to explore the entire contents of the frame, believing they are free to focus and converge their eyes wherever they choose. As a result, viewers may experience eyestrain when they try, without success, to resolve objects on the screen which are out of focus—objects that are beyond the lenses’...
  • Page 9 Watching the Edges When composing films for smaller screens, filmmakers must take into account the prospect that objects with strong negative parallax (i.e. objects that intrude a significant distance into the viewer’s space) may be truncated at the right or left sides of the screen, confusing the signals that viewers need in order to accurately place the objects in space.
  • Page 10 …same time managing to avoid obstructing the views of the spectators. In addition to these considerations, 3D filmmakers who shoot live events are unlikely to want the depth and scale artifacts that may find a useful place in the palette of the dramatic filmmaker. In particular, by shooting with a very narrow I/O, live event filmmakers may inadvertently capture images that suffer from gigantism, while the choice of a wide I/O may result in images that suffer from miniaturization.
  • Page 11 Terminology parallax – separation on the screen plane between left and right images of an object. Determines the perceived depth of objects relative to the screen plane. negative parallax – objects are perceived to be positioned within the viewer’s space, i.e. in front of the screen plane.
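
The depth script mentioned in the Page 4 entry above is, at its simplest, a per-scene plan
of I/O and convergence. Purely as an illustration of one way such a plan might be kept in
code, the sketch below defines a small record type; every scene name, field, and number in
it is invented for the example, not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class DepthCue:
    scene: str
    interocular_mm: float     # planned I/O (camera lens separation) for the scene
    convergence_m: float      # planned convergence distance
    depth_budget_pct: float   # intended on-screen parallax range, as % of screen width
    note: str = ""            # the emotional effect the depth is meant to support

# Invented excerpt: shallow depth for quiet dialogue, a wider budget as tension builds.
depth_script = [
    DepthCue("1 - kitchen dialogue", 35.0, 2.5, 1.0, "calm, intimate"),
    DepthCue("2 - rooftop chase",    55.0, 4.0, 2.0, "rising tension"),
    DepthCue("3 - the reveal",       65.0, 6.0, 2.5, "maximum immersion"),
]

for cue in depth_script:
    print(f"{cue.scene}: I/O {cue.interocular_mm} mm, "
          f"converge at {cue.convergence_m} m ({cue.note})")
```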

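The "simple computer programs" mentioned in the Page 6 entry typically solve the same small
piece of geometry: given the lens, imager, screen, and subject distances, how wide can the
interocular be before the on-screen parallax becomes uncomfortable? The sketch below is one
possible version of that calculation, written for a parallel rig whose convergence is set by
shifting the images in post; the comfort limits (a 65 mm audience interocular and a 2%
negative-parallax budget), the formula, and all parameter names are assumptions made for
illustration, not a published Panasonic method.

```python
def max_interocular_m(focal_length_mm, sensor_width_mm, screen_width_m,
                      convergence_m, nearest_subject_m,
                      audience_interocular_m=0.065,
                      max_negative_fraction=0.02):
    """Largest camera interocular (I/O, in meters) that keeps screen parallax
    within two common comfort limits, for a parallel rig whose convergence is
    set at convergence_m by shifting the images horizontally in post:
      * objects at infinity must not exceed the audience interocular in
        positive parallax (more would force the eyes to diverge), and
      * the nearest subject must not exceed a chosen fraction of the screen
        width in negative parallax.
    """
    # Sensor-to-screen magnification (meters of screen per millimeter of sensor).
    magnification = screen_width_m / sensor_width_mm

    # Background limit: an object at infinity lands f*b/C away from its mate
    # on the sensor, so cap that disparity once it is magnified onto the screen.
    b_far = audience_interocular_m * convergence_m / (focal_length_mm * magnification)

    # Foreground limit: the nearest subject sits in front of the convergence
    # point and produces negative parallax proportional to (1/near - 1/C).
    near_term = 1.0 / nearest_subject_m - 1.0 / convergence_m
    if near_term <= 0:
        return b_far  # nothing closer than the convergence point
    b_near = (max_negative_fraction * screen_width_m
              / (focal_length_mm * magnification * near_term))

    return min(b_far, b_near)


# Example: 12 mm lens on a 6 mm-wide imager, 10 m target screen, convergence
# at 5 m, nearest subject at 3 m -> roughly a 16 mm interocular.
print(max_interocular_m(focal_length_mm=12.0, sensor_width_mm=6.0,
                        screen_width_m=10.0, convergence_m=5.0,
                        nearest_subject_m=3.0))
```

Because the screen width enters the formula directly, the same shot that is comfortable on a
television may exceed the budget on a cinema screen, which is why the entry above recommends
checking against the size of the display in the target market.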