issue #20 - Veritas et Visus

We are pleased to provide this sample of the 3rd Dimension newsletter from Veritas et Visus.
We encourage you to consider an annual subscription.
• For individuals, an annual subscription (10 issues) is only $47.99. Order information is available at http://www.veritasetvisus.com/order.htm.
• For corporations, an annual site license subscription is $299.99. The site license enables unlimited distribution within your company, including on an intranet. Order information is available at http://www.veritasetvisus.com/order_site_license.htm.
• A discount is available to subscribers who order all five of our newsletters. Our five newsletters cover the following topics:
  ο 3D
  ο Touch
  ο High Resolution
  ο Flexible Displays
  ο Display Standards
The goal of this newsletter is to bring subscribers the most comprehensive review of recent
news about the emerging markets and technologies related to 3D displays. This newsletter
combines news summaries, feature articles, tutorials, opinion & commentary columns,
summaries of recent technology papers, interviews and event information in a
straightforward, essentially ad-free format. The 3rd Dimension enables you to easily and
affordably stay on top of the myriad activities in this exciting market.
We look forward to adding you to our rapidly growing list of subscribers!
Best regards,
Mark Fihn
Publisher & Editor-in-Chief
Veritas et Visus
http://www.veritasetvisus.com
Veritas et Visus 3rd Dimension, Vol 2 No 10, September 2007
Cover images: Beowulf, p22; DAZ 3D, p39; AMD, p43; MED, p72
Letter from the publisher: A different perspective… by Mark Fihn ......... 2
News from around the world ......... 12
S3D Basics+ Conference, August 29-29, 2007, Berlin, Germany ......... 42
Society for Information Display 2007 Symposium, May 20-25, Long Beach, California ......... 45
International Workshop on 3D Information Technology, May 15, 2007, Seoul, Korea ......... 49
3DTV CON 2007, May 7-9, Kos Island, Greece ......... 52
Stereoscopic Displays and Applications 2007 Conference, January 29-31, San Jose ......... 60
Interview with Greg Truman from ForthDD ......... 69
Interview with Ian Underwood from MED ......... 72
Expert Commentary:
• Shoveling Data by Adrian Travis ......... 76
• 3D camera for medicine and more by Matthew Brennesholtz ......... 78
• 3D isn’t so easy by Chris Chinnock ......... 79
• Selling to the market for 3D LCD displays… by Jim Howard ......... 82
• 3D maps – a matter of perspective? by Alan Jones ......... 85
• PC vs. Console – Has the mark been missed? by Neil Schneider ......... 89
• 3D DLP HDTVs – all is revealed! by Andrew Woods ......... 91
• The Last Word: How things get invented by Lenny Lipton ......... 96
Calendar of events ......... 98
The 3rd Dimension is focused on bringing news and commentary about developments and trends related to the
use of 3D displays and supportive components and software. The 3rd Dimension is published electronically 10
times annually by Veritas et Visus, 3305 Chelsea Place, Temple, Texas, USA, 76502. Phone: +1 254 791 0603.
http://www.veritasetvisus.com
Publisher & Editor-in-Chief: Mark Fihn, [email protected]
Managing Editor: Phillip Hill, [email protected]
Associate Editor: Geoff Walker, [email protected]
Contributors: Matt Brennesholtz, Chris Chinnock, Jim Howard, Alan Jones, Lenny Lipton, Neil Schneider, Adrian Travis, Andrew Woods
Subscription rate: US$47.99 annually. Single issues are available for US$7.99 each. Hard copy subscriptions are
available upon request, at a rate based on location and mailing method. Copyright 2007 by Veritas et Visus. All
rights reserved. Veritas et Visus disclaims any proprietary interest in the marks or names of others.
A different perspective…
by Mark Fihn
Not long ago, one of our subscribers asked me why I include topics in this newsletter that are not directly related to 3D displays. He suggested that my coverage of 3D photography, 3D binoculars, 3D pointing devices, 3D scanners, stereo-lithography, lenticular printing, 3D art, etc., was a distraction from the topic of displays.
Fortunately, this gentleman seems to be in the minority, because most of the feedback I get is quite positive with
regard to these non-display topics. In any case, if you don’t care for the non-display coverage, we’re confident that
you can choose to scroll through it quickly and find those topics that are most interesting to you.
The reason we cover non-display topics in this newsletter is that most of them are closely related to displays, and even those things that are not electronic in nature provide us with some amazing clues about stereo vision and three-dimensional imaging.
Readers of our sister newsletter, High Resolution, will be aware of my fascination with optical illusions. How is it
that we “see” things that really aren’t there? Of course, this is exactly what stereographic 3D displays do – they
recreate an illusion, as we are not actually seeing three dimensions on the surface of the 2D display. Perhaps this is
one of the reasons that I find 3D so fascinating – it’s little more than a trick we play on our brains. But as with all
good optical illusions – the consequent image is really quite convincing.
In addition to optical illusions, I find myself fascinated with artwork that considers three dimensions. Sculpture is
an obvious three-dimensional exercise, but even more than sculpture, I am fascinated by the imaginative forms of
art that consider the third dimension from a different perspective. As such, in what will be a rather lengthy
introduction to the newsletter, I’m going to share several creations in 3D (or the appearance of 3D) that intrigue me.
Although not directly related to displays, only those of you straitjacketed by the practical and mundane will fail
to find the next pages interesting, or at least a little fun. Enjoy!
Jen Stark art work explores three-dimensional use of paper
Artist Jen Stark introduced an interesting collection that explores how two-dimensional objects like paper can be
transformed into stunning three-dimensional creations. The basic premise is simple – involving nothing more than
hand-cutting stacks of construction paper. http://www.jenstark.com
Robert Lang shows off incredible origami
The Japanese art of paper-folding known as origami is a well known process that takes two-dimensional sheets of
paper to create three-dimensional objects. An American, Robert Lang, has been an avid student of origami for over
thirty years and is recognized as one of the world’s leading masters of the art, with over 400 designs catalogued and
diagrammed. He is noted for designs of great detail and realism, and includes in his repertoire some of the most
complex origami designs ever created. His work combines aspects of the Western school of mathematical origami
design with the Eastern emphasis upon line and form to yield models that are distinctive, elegant, and challenging
to fold. http://www.langorigami.com
On the left is Robert Lang’s “Allosaurus Skeleton”, fashioned from 16 uncut squares of paper; in the center is “Tree Frog”,
made from a single uncut square of paper; and on the right is “The Sentinel”, crafted from 2 uncut sheets of paper
The Punch Bunch features 3D floral punch art
My wife runs a little business called The Punch Bunch in which she wholesales craft punches to scrapbook and
craft stores. One of the things that she has worked hard to popularize is an amazing form of art in which specialty
paper punches are used to cut out shapes that are then molded and colored to create some amazing three-dimensional floral bouquets. The business started almost 10 years ago out of our home (the first warehouse was some extra space in the master bathroom). She now ships all over the world and offers perhaps the largest
collection of punches in the world. A substantial portion of what she sells is used by crafters to make floral
arrangements that are often difficult to distinguish from real flowers. http://www.thepunchbunch.com
Floral arrangements crafted from paper punch outs. The image on the left is by Australian punch artist Leone Em;
the two images on the right are by Seattle-based artist Susan Tierney-Cockburn.
Guido Daniele shows off more Handimals
In issue #18 of the 3rd Dimension, we showed off some intriguing hand paintings by Italian artist Guido Daniele, whose “Handimals” offer an excellent lesson in creating 2D images on a 3D form. Daniele’s meticulous images use the human hand as a canvas, but the image is lost if the viewer’s perspective is altered or if the hand is moved. 3D displays face the reverse challenge, recreating a 3D image on a 2D form, but similar problems of perspective persist. http://www.guidodaniele.com
Note that body art has become quite popular, particularly when it
comes to painting “clothing” on the bodies of attractive young
female models. While some of this body art is truly creative, it
rarely relies so much on the shape of the body parts and the
perspective of the viewer to create the three-dimensional effect. In
addition to his popular Handimals, Daniele’s website features
numerous examples of full-body art, most of which has been created
for commercial purposes. The image to the right, although not a full
body-art image, is an example of Daniele’s commercial art using his
animal motif in relation to a Jaguar.
Julian Beever’s sidewalk chalk paintings continue to astound
In past editions of the 3rd Dimension, we’ve shown images of Julian Beever’s amazing chalk paintings (see issues
#12 and #13), which so clearly show us the importance of perspective. The bottom pair of images serves to identify
just how important the viewer’s position is to a successful rendering of a 3D image on a 2D surface. In all of these
images, the lines in the sidewalk serve to remind us that these really are 2D paintings.
http://users.skynet.be/J.Beever
John Pugh’s amazing trompe-l’œil artistry
Trompe-l’œil is an art technique involving extremely realistic imagery in order to create the optical illusion that the
depicted objects really exist, instead of being mere two-dimensional paintings. The name is derived from the French
for “trick the eye”. One of the current-day masters of the technique is John Pugh, whose stunning creations are so
lifelike they have caused traffic accidents. In his image “Art Imitating Life Imitating Art Imitating Life” (shown
below), which is featured at a café in San Jose, California, a customer complained he had received “the silent
treatment” when he tried to introduce himself to the woman reading a book. http://www.illusion-art.com
John Pugh’s “Art Imitating Life Imitating Art Imitating Life”. The lower left image is an early concept layout; the
lower right shows Pugh painting the statue.
Eric Grohe paints 3D murals
Eric Grohe is another fascinating artist who uses the trompe-l'œil style to create amazing murals that transform 2D
spaces into stunning 3D paintings. Grohe does most of the artwork by himself and researches, designs and paints
each project from scratch. These paintings, all on flat surfaces, completely change the perception of an otherwise
empty space and serve to create a truly novel way to attract interest and attention. http://www.ericgrohemurals.com
Grohe painted the above mural on the side of a shopping mall in Niagara, New York. The upper images are before and after
photos; the lower image shows detail while also providing a hint about how the image appears from differing perspectives.
On the left is the side of a store in Massillon, Ohio, previously nothing more than a brick wall. The front of the
building was later painted to match the architectural details of the mural on the side of the building. On the right is a
mural on a wall at the Mount Carmel College of Nursing in Columbus, Ohio.
Two of four murals painted by Grohe at the Washington State Corrections Center for Women in Gig Harbor, Washington.
RollAd competition showcases 3D illusions on sides of trucks
German ad agency RollAd rents out advertising space on the sides of trucks and for the past three years has
sponsored the Rolling Advertising Awards competition. The ads are printed on interchangeable canvas covers
which are placed over the container portions of the trucks. The winners of the competition have their mock-up
designs actually implemented and showcased at the annual awards ceremony (http://www.rhino-award.com). The
website shows many very clever examples. The 2005 competition winner was the Pepsi design shown lower left.
Interestingly, the lower right image is taken from a different perspective where the illusion is not as effective.
Devorah Sperber uses chenille stems to show examples of perspective
Devorah Sperber showcases on her website a couple of amazing pieces of art made from chenille stems that both
center on Holbein’s art and highlight the importance of perspective when viewing art. In her words:
“While many contemporary artists utilize digital technology to create high-tech works, I strive to ‘dumb-down’ technology by utilizing mundane materials and low-tech, labor-intensive assembly processes. I place
equal emphasis on the whole recognizable image and how the individual parts function as abstract elements,
selecting materials based on aesthetic and functional characteristics as well as for their capacity for a
compelling and often contrasting relationship with the subject matter.” http://www.devorahsperber.com
In this piece, called “After Holbein”, Sperber arranged chenille stems in such a way that, when reflected in a
polished steel cylinder, Holbein’s work becomes visible
In this piece, a skull becomes obvious only when viewed from an extreme viewing angle
Inakadate rice farming…
Each year since 1993, farmers in the town of Inakadate in Aomori prefecture create works of crop art by growing a
little purple and yellow-leafed kodaimai rice along with their local green-leafed tsugaru-roman variety. The images
start to appear in the spring and are visible until harvest time in September. While I suppose it’s arguable whether
these are 3D images, the notion of perspective is a big consideration, as shown in the close-up image at the bottom
right. http://www.am.askanet.ne.jp/~tugaru/z-inakadate.htm
The top left image is the 2007 crop art image creation by the farmers of Inakadate, Japan. On the top right is the
2006 image and the lower left image is from 2005. The lower right image is a close-up of the different rice plantings,
identifying the importance of perspective when viewing any image.
Stan Herd crop artistry
Stan Herd is an American crop artist known for creating advertisements that are strategically placed to coincide with airline flight paths. Pictured here are a couple of his more artistic and not-so-commercial efforts. On the left is his “Sunflower Field” and on the right is a “Portrait of Saginaw Grant”. http://www.stanherdart.com
Heather Jansch creates driftwood sculptures
Heather Jansch’s driftwood sculptures don’t give us any insights into 2D to 3D transformation or into perspective,
but they are such unique and beautiful creations that I couldn’t resist including them here anyway. Her specialty is
in assembling life-size works made from scrap driftwood, particularly of horses. Jansch lives and works in the West
Country of England. She is holding “Open Studio” 2007 from September 8th to 23rd, showing off work-in-process,
new life-size works in driftwood and in oak, a woodland walk, a children’s “treasure trail”, and some featured guest
artists. http://www.jansch.freeserve.co.uk
3D news from around the world
compiled by Mark Fihn and Phillip Hill
Acacia Research to soon release 3D Content Creation 2007 report
Market researchers Acacia Research announced their impending release of “3D Content Creation 2007”, which will
examine the market for 3D modeling and animation tools in the film and video, video game, advertising,
visualization, and other industries. In addition to software shipment and revenue forecasts, this market study will
provide details on revenues and budgets of the industries that use the tools and a look at spending on 3D content
creation within those industries. It will discuss the major trends in the industry including consolidation, new market
opportunities, the explosion of specialized and lightweight tools, and much more. The report is available at an
individual rate for $1,995.00 and at a site license rate of $2,992.50. http://www.acaciarg.com
ITRI forms 3D display alliance with panel makers
Taiwan’s Industrial Technology Research Institute (ITRI) recently formed the 3D Interaction & Display Alliance with Taiwan LCD panel makers, including AU Optronics (AUO), Chi Mei Optoelectronics (CMO), Chunghwa Picture Tubes (CPT) and HannStar Display, as well as digital TV content suppliers and several system makers. ITRI suggested that the 3D display market will grow from $300 million in 2007 to over $2 billion in 2010. Taiwan panel makers such as CMO and CPT have already developed 3D LCD panels, with CMO set to volume-produce 22-inch 3D panels in the third quarter, while CPT’s 20.1-inch wide-screen panel has gained attention from first-tier display vendors, according to sources. http://www.itri.org.tw
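ITRI's forecast of $300 million in 2007 growing to over $2 billion in 2010 implies a compound annual growth rate that can be checked directly; the short sketch below is illustrative arithmetic, not part of ITRI's announcement:

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by growing from `start` to `end`."""
    return (end / start) ** (1 / years) - 1

# ITRI's figures: $300 million in 2007 to $2 billion in 2010 (three growth years)
growth = cagr(300e6, 2e9, 3)   # roughly 0.88, i.e. about 88% per year
```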
Jon Peddie Research reports about Q2’07 graphics market
JPR released its Q2’07 quarterly MarketWatch report on PC graphics shipments. Traditionally, the second quarter
is slow for the computer industry. Nevertheless, Q2’07 saw nVidia make significant gains, while AMD and Intel
saw more typical results for the time period. VIA saw a slight rise, SiS slipped further, and Matrox dropped too.
Total shipments for the quarter were 81.3 million units, up 3% over last quarter. Compared to the same quarter
last year, shipments were up 10.6%. On the desktop, nVidia was the clear winner, claiming 35.0% against Intel’s
31.3%, while AMD had a modest gain to 18.8%. In the mobile market, Intel held its dominant position and grew
slightly to 51.5%, with nVidia number two at 27% and AMD at 21%. Mobile chips continued their growth to claim
31.2% of the market with 24.5 million units. http://www.jonpeddie.com
PC graphics chip shipments (millions of units), Q2’07 vs. a year earlier:

Vendor    Q2’07    Market share    Year ago    Market share    Growth
AMD       15.86    19.5%           19.67       26.7%           -19.4%
Intel     30.59    37.5%           29.68       40.4%            3.1%
nVidia    26.48    32.6%           14.48       19.7%            82.9%
Matrox     0.13     0.2%            0.10        0.1%            30.0%
SiS        2.00     2.5%            3.33        4.5%           -39.9%
VIA/S3     6.26     7.7%            6.27        8.5%            -0.2%
TOTAL     81.32   100.0%           73.53      100.0%            10.6%
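The share and growth columns follow directly from the unit-shipment figures; a quick sketch of the arithmetic (vendor names and numbers transcribed from JPR's table):

```python
def market_stats(current, year_ago):
    """Per-vendor (market share %, year-over-year growth %) from unit shipments."""
    total = sum(current.values())
    return {v: (100 * current[v] / total,
                100 * (current[v] - year_ago[v]) / year_ago[v])
            for v in current}

# Unit shipments in millions, from the table above
q2_07 = {"AMD": 15.86, "Intel": 30.59, "nVidia": 26.48,
         "Matrox": 0.13, "SiS": 2.00, "VIA/S3": 6.26}
q2_06 = {"AMD": 19.67, "Intel": 29.68, "nVidia": 14.48,
         "Matrox": 0.10, "SiS": 3.33, "VIA/S3": 6.27}

stats = market_stats(q2_07, q2_06)   # e.g. nVidia grew 82.9% year-over-year
```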
Ozaktas and Onural edit new book about “Three-Dimensional Television”
The scope of a new book entitled “Three-Dimensional Television: Capture,
Transmission, Display”, reflects the diverse needs of this emerging market. Different
chapters deal with different stages of an end-to-end 3DTV system such as capture,
representation, coding, transmission, and display. In addition to stereographic 3D
solutions, both autostereoscopic techniques and holographic approaches are also
covered. Some chapters discuss current research trends in 3DTV technology, while
others address underlying topics. In addition to questions about technology, the book
also addresses some of the consumer, social, and gender issues related to 3DTV. The
800-page book is expected to be available in early December. In hardcover, the book is
priced at $269.00/€199.95/£154.00. http://www.springer.com
Philips introduces WOWzone 132-inch 3D display wall
In late August at IFA in Berlin, Philips introduced the 3D WOWzone, a large 132-inch multi-screen 3D wall, designed to grab people’s attention with stunning 3D multimedia presentations. Philips claims that the out-of-screen 3D effects fascinate viewers and hold their attention for longer than standard 2D images, thereby making 3D a valuable marketing tool. No glasses are needed to view the Philips 3D WOWzone, and it gives marketeers an element of surprise that leaves their target audience with an entertaining 3D multimedia experience. The Philips WOWzone multi-screen 3D wall consists of nine 42-inch Philips 3D displays in a 3x3 set-up. A fully automated dual-mode feature allows the user to display 3D content as well as 2D high-definition content. The Philips WOWzone is a complete end-to-end solution including 3D displays, mounting rig, media streamer computers, control software and dedicated 3D content creation tools. The WOWzone is available today on a project basis and will be commercially available from Q1 2008 onwards.
Philips and eventIS demonstrate 3D video-on-demand feasibility
In early September, Philips and eventIS announced that they successfully completed testing of 3D video-on-demand (VoD) using an eventIS metadata system and Philips 3D displays. This proves that the new 3D video format, based on “2D-plus-depth”, can be integrated into existing media distribution and management systems such as video-on-demand via cable, satellite, Internet or terrestrial broadcasting. Earlier this year, Deutsche Telekom and Philips demonstrated interactive 3D applications like movies, home shopping and online games. Now eventIS takes this a step further, by demonstrating that 3D VoD capabilities can easily be implemented in its metadata media management system. According to the company, VoD will play an important role in the early distribution of high-quality 3D movies to the consumer. In the demo, eventIS makes use of a library that consists of 3D animated, stereoscopic and 2D-to-3D converted videos. http://www.philips.com/newscenter
On the left is the Philips 3D WOWzone 132-inch 3D display wall; the image on the right depicts 3D video-on-demand
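The “2D-plus-depth” format mentioned above pairs each video frame with a per-pixel depth map, from which a display can synthesize additional views by shifting pixels horizontally in proportion to depth. A minimal sketch of that view-synthesis idea follows; the disparity scale and the simple hole filling are illustrative assumptions, not Philips' actual algorithm:

```python
def synthesize_view(row, depth, max_disp=4):
    """Shift each pixel of one scanline by a disparity proportional to its
    depth value (0 = far, 255 = near); nearer pixels shift more. Gaps left
    by the shift (disocclusions) are filled from the left neighbor.
    Purely illustrative, not a production renderer."""
    out = [None] * len(row)
    for x, (pix, d) in enumerate(zip(row, depth)):
        disp = round(max_disp * d / 255)      # nearer pixels get larger disparity
        tx = x + disp
        if 0 <= tx < len(row):
            out[tx] = pix                     # simple last-write-wins compositing
    for x in range(len(out)):                 # fill disocclusion holes
        if out[x] is None:
            out[x] = out[x - 1] if x > 0 else row[0]
    return out
```

For a scanline at uniform zero depth the view is unchanged; at maximum depth every pixel shifts by `max_disp`.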
iZ3D ships 22.0-inch 3D gaming monitor
In late August, San Diego-based iZ3D announced it is selling its
iZ3D 22.0-inch widescreen 3D gaming monitor for $999 – and
the company is specifically targeting the gaming market. iZ3D
says the system works using custom software drivers, and the
user must wear passive polarized glasses. iZ3D is a newly formed partnership between 3D imaging developer Neurok
Optics and Taiwan’s Chi Mei Optoelectronics. The monitor
itself offers a 1680x1050 pixel format, 5 ms response time, 170°
viewing angle, 600:1 contrast, and dual DVI/VGA inputs
designed to connect to a dual-output video card. The display
ships with stereo drivers which are compatible with either the
nVidia GeForce 8 series or ATI’s FireGL V3600 workstation
graphics cards. The monitor incorporates two LCDs and can also
be used for standard 2D computing tasks. http://iz3d.com
Fraunhofer Research demonstrates autostereoscopic Free2C display
Fraunhofer Research showed off its Free2C 3D Display at
IFA in late August, claiming it to be “currently the most
advanced development in autostereoscopic (glasses-free
3D) display technology”. Free2C is based on a special
head-tracking lenticular-screen 3D display principle,
allowing free head movements in three dimensions at a
high level of image quality (the current resolution is
1600x1200 pixels). The particular design of the lens plate
ensures that the stereoscopic images are almost perfectly
separated (no ghosting). The Free2C-Desktop Display is
perfectly suited for virtual prototyping, archaeology and
oceanography, minimally invasive surgery and lifelike
simulations. The researchers claim that the viewer can be
freely positioned without degradation to resolution,
brightness, and color reproduction, all with “extremely low
crosstalk”. http://www.hhi.fraunhofer.de
SD&A 2007 “Discussion Forum” now on-line
At the 2007 SD&A event in San Jose, a panel discussion was conducted on the topic, “3D in the Home: How Close
are We?” The discussion was moderated by Lenny Lipton (far left) from REAL D, and from left to right, included
Brett Bryars from the USDC, Art Berman from Insight Media, Mark Fihn from Veritas et Visus, and Steven Smith
from VREX. Transcripts of the forum are now on line: http://www.stereoscopic.org/2007/forum.html
Hitachi shows off new stereoscopic vision display technology
In early August, Hitachi announced its development of a new “small-sized stereoscopic vision display technology”. Measuring in at 7.9 x 7.9 x 3.9 inches and weighing 2.2 pounds, the device utilizes an array of mirrors and projects a “synthetic image”. The device is reportedly similar in design to its larger “Transpost”. Hitachi hopes to implement the technology in locales such as schools, exhibitions, and museums. http://www.hitachi.co.jp
NTT develops tangible 3D technology
NTT Comware has developed “Tangible-3D” technology, a next-generation communication interface that captures motion in real time and reproduces the physical feel of three-dimensional video. The technology is an improved version of the glasses-free tangible 3D system NTT Comware originally developed in 2005. A pair of cameras captures and processes data about an object; the system displays the resulting image on a glasses-free 3D display and simultaneously translates it into a tactile impression that the user can feel through a dedicated tangible interaction device. Tactile impressions can thus be transmitted between users in real time, allowing viewers at a remote location to literally reach out and touch the person or object on the screen.
For instance, real-time capture of 3D images together with tactile feedback provides a virtual handshaking experience. A pair of cameras captures the image of one user’s hand; the image is processed into a 3D image and the tactile-impression data is extracted. Both are then transmitted to the receiving end in real time, where the hand appears on the glasses-free 3D display while the tactile information is delivered through the recipient’s tactile device, so the on-screen hand can actually be felt as it moves, as in a handshake.
While the demonstrated Tangible-3D system only works in one direction on a one-on-one basis for now, NTT Comware is developing a two-way system that allows tactile impressions to be transmitted back and forth between multiple users. The company is also working to improve the 3D screen, which currently appears three-dimensional only from a particular viewing angle, to support multiple-angle viewing. The technology could also allow museum visitors to handle 3D images of exhibit items such as fossils. Applied to a remote classroom, for example one teaching ceramics, students could obtain perceptible information about a work, such as its real shape, while the teacher shows the 3D image on the screen. The technology also enables interactive communication for video conferences. http://www.nttcom.co.jp
Novint Technologies brings out games titles for Falcon
Novint Technologies announced a diverse lineup of upcoming titles for the Novint Falcon game controller. The
company is adding its patented 3D touch technology to a variety of existing titles, including Feelin’ It: Arctic Stud
Poker Run. Novint is also creating original titles designed specifically for 3D touch and will begin releasing all
titles later this year. The Novint Falcon is a first-of-its-kind game controller that lets people feel weight, shape,
texture, dimension, dynamics and force feedback when playing enabled games. New releases will range in price
from $4.95 to $29.95 and be available to download through Novint’s N VeNT player. http://www.novint.com.
Mova and Gentle Giant Studios show first moving 3D sculpture of live performance
Performance capture studio Mova and Gentle Giant Studios unveiled at SIGGRAPH a 3D Zoetrope that uses
persistence of vision to bring to life a series of 3D models of an actor’s face captured live by Mova’s Contour
Reality Capture System. This 3D Zoetrope is the first to show a live-action, natural 3D surface in motion. The
resulting effect is a physical sculpture of a speaking human face that comes to life with perfect motion, faithful to
the original actor’s performance down to a fraction of a millimeter. The Zoetrope displayed at SIGGRAPH
consisted of thirty 3D models of a face in motion. The models spin on a wheel and a strobe light illuminates each as
it passes by a viewing window, much as still frames projected intermittently are perceived as a moving image. To
the viewer, it looks like one 3D face in continuous motion. Mova used the Contour Reality Capture System to
capture the live performance of an actor using an array of cameras with shutters synchronized to lights flashing
over 90 times per second, beyond the threshold of human perception. The glow from phosphorescent (“glow in the
dark”) makeup sponged onto the actor is captured by the camera array. Triangulation and frame-by-frame tracking
of the 3D geometry is then used to produce over 100,000 polygons to create a 3D face, to an accuracy of a fraction
of a millimeter. Gentle Giant Studios used the captured 3D surface geometry and formed 30 individual models with
the help of a 3D stereolithography printer, which creates the models using a plastic resin. http://www.mova.com
Mova’s 3D Zoetrope uses 30 3D models that result in a very lifelike full-motion facial representation
Image Metrics performance capture system to model Richard Burton
Image Metrics announced at the SIGGRAPH tradeshow in San Diego that its proprietary performance capture
solution will provide the modeling and animation for a photo-realistic 11-foot 3D hologram of the late Richard
Burton, for the Live on Stage! production of the multi-award winning, 15 million-selling album, Jeff Wayne’s
Musical Version of The War of The Worlds. Image Metrics’ technology analyzes the motion data captured in any
video recording of an actor. It removes the slow process of animation by hand required by other motion capture
programs and eliminates the need for expensive motion capture camera systems. Image Metrics is completing a
total of 23 minutes of photo-real facial animation perfectly synchronized to the original audio recording of the star.
Image Metrics contributed 72 shots developed by a team of five artists. http://www.image-metrics.com
DAVID-Laserscanner brings out freeware for 3D laser scanning
DAVID-Laserscanner is a freeware software for 3D laser range scanning to be used with a PC, a camera (e.g. a
webcam), a background corner, and a laser that projects a line onto the object to be scanned. The concept of
DAVID was developed by the computer scientists Dr. Simon Winkelbach, Sven Molkenstruck and Prof.
F. M. Wahl at the Institute for Robotics and Process Control, Technical University of Braunschweig, Germany, and
was published as a paper at the German Association for Pattern
Recognition. The object to be scanned has to be put in front of a known
background geometry (e.g. into the corner of a room or in front of two
planes with an exact angle of 90°) with the camera pointed towards the
object. The laser is held freely in the hand, “brushing” the laser line over
the object. Meanwhile the computer automatically calculates 3D
coordinates of the scanned object surface. To obtain a complete 360
degree model of the 3D object, the company has developed DAVID-Shapefusion, which automatically "puzzles" together the laser scans made
from different sides. http://www.david-laserscanner.com/
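The triangulation step can be sketched as a ray/plane intersection: the laser plane is recovered each frame from where the line strikes the calibrated background planes, and each lit camera pixel's viewing ray is intersected with that plane. The function and numbers below are illustrative assumptions, not code from the DAVID software:

```python
# Minimal sketch of the triangulation behind a DAVID-style scanner: once the
# laser plane is known for a frame, every lit pixel defines a camera ray, and
# the 3D surface point is the ray/plane intersection. The plane parameters
# here are hypothetical example values.
import numpy as np

def ray_plane_intersection(ray_dir, plane_point, plane_normal):
    """Intersect a ray from the camera origin (0, 0, 0) with a plane."""
    ray_dir = np.asarray(ray_dir, dtype=float)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the laser plane
    t = np.dot(plane_normal, plane_point) / denom
    return t * ray_dir                   # 3D point on the scanned surface

# Hypothetical laser plane 1 m in front of the camera, tilted 45 degrees
plane_point = np.array([0.0, 0.0, 1.0])
plane_normal = np.array([0.0, 1.0, 1.0]) / np.sqrt(2.0)
# Camera ray through the image centre (looking straight down +z)
point = ray_plane_intersection([0.0, 0.0, 1.0], plane_point, plane_normal)
print(point)  # the surface point 1 m straight ahead
```

Sweeping the hand-held laser over the object simply repeats this intersection for every lit pixel in every frame, accumulating a point cloud.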
Breuckmann launches 3D digital scanning technique
Breuckmann has launched smartSCAN, developed especially for high-performance digitization in technology, education, art, and cultural heritage – everywhere the emphasis is on creating reliable and accurate data. Thanks to its new, compact design, the smartSCAN is easy to handle. It is available as a mono or stereo system: the setup can be configured with either one or two color cameras and one projection unit, allowing the system to cover this wide range of applications.
http://www.breuckmann.com
Fraunhofer develops projector tiled wall system
Building a cluster-based VR application has traditionally been an extremely complex process, handling distributed rendering, device management and state synchronization. Fraunhofer of Germany has dramatically simplified this process by utilizing X3D as the VR/AR application description language
while hiding all the low level distribution mechanisms. It provides a new
cluster application deployment solution which allows the running of X3D
content on computer clusters without changes. The application developer
can build high-level interactive applications utilizing the full immersive
profile of the ISO standard including PointingSensors and Scripting. The
system has been demonstrated and deployed at the 48-projector/18-million-pixel tiled display called HEyeWall. The HEyeWall offers unmatched visual resolution compared to standard projection systems, as it enables the visualization of brilliant pictures and stereoscopic 3D models. The market-ready display system was developed by researchers of Fraunhofer IGD.
Until now, an alert eye could see single pixels, blurred shapes and colors when standing close to the projection wall. The HEyeWall allows
examination of the projected image from any position. Various possible
fields of application result from this: from efficient product development to the simulation of heavy traffic flows
and the visualization of highly structured 3D area and city models to the specific planning of rescue operations. A
free beta version of the cluster-client/server solution is available from http://www.instantreality.org/home/.
NRL scientists viewing STEREO images using EVL’s ImmersaDesk4 technology
Solar physicists at the Naval Research Laboratory (NRL) are viewing solar disturbances whose depth and violent
nature are now clearly visible in the first true stereoscopic images ever captured of the Sun. These views from the
STEREO program are providing scientists with unprecedented insight into solar physics and the violent solar
weather events that can bombard Earth’s magnetosphere with particles and affect systems ranging from weather to
our electrical grids. NRL scientists are viewing the high-resolution stereo pairs on an ImmersaDesk4 (I-Desk4)
display system specifically commissioned and installed at the laboratory last
summer in anticipation of the release of the data. The I-Desk4, invented at the
University of Illinois at Chicago’s (UIC) Electronic Visualization Laboratory
(EVL), is a tracked, 4-million-pixel display system driven by a 64-bit graphics
workstation. Its compact design comprises two 30-inch
Apple LCD monitors mounted with quarter-wave plates and bisected by a
half-silvered mirror enabling circular polarization. Multiple users can view the
head-tracked 3D scene using lightweight polarized glasses. The Solar Physics
Branch at NRL developed the SECCHI (Sun Earth Connection Coronal and
Heliospheric Investigation) suite of telescopes for the spacecraft. The high-resolution sensor suite includes coronagraphs, wide-angle cameras and an
Extreme Ultraviolet Imager. The sensors generate 10 synchronized video
feeds, each up to 2K by 2K pixels. In summer 2006, EVL student Cole
Krumbholz worked with NRL solar physicist Dr. Angelos Vourlidas to help
establish a solar imagery display environment at NRL. Krumbholz helped
build two EVL-developed display systems capable of viewing and managing
files on the scale of thousands of pixels per square inch: a nine-panel tiled
LCD wall ideal for viewing high-resolution 2D imagery, and an I-Desk4 for
viewing high-resolution 3D imagery. The tiled wall is capable of synchronously displaying multiple high-resolution video streams. NRL scientists can also composite the sensor data into a single video to conduct a multi-spectral analysis, and view multiple days of video. Krumbholz implemented a distributed video-rendering tool with interactive features such as pan, zoom and crop. http://www.evl.uic.edu

Dr. Angelos Vourlidas of NRL's Solar Physics Branch views stereoscopic solar images taken with the EUV telescopes of NRL's SECCHI instrument suite
Anteryon develops new display screen optics for Zecotek 2D-3D system
Anteryon of the Netherlands announced the launch of a new display screen optics product developed for the
Zecotek 2D–3D display system. Zecotek is a Vancouver, Canada, based company with facilities in Vancouver and
Singapore where the Anteryon display screen will be assembled into the Zecotek display system. The initial
application focus for this Zecotek product lies in the field of biomedical imaging. http://www.anteryon.com
Ramboll launches free floating video at Copenhagen Airport
Ramboll of Denmark launched the Cheoptics360 XL at
Copenhagen Airport where it is on display until October 4.
Cheoptics360 XL displays free floating 3D video, opening up a
whole new universe of possibilities to those seeking innovative
and persuasive methods to present their products. Cheoptics360
XL is suitable as a stand-alone installation to be viewed from all
angles, and it can also be integrated into all kinds of buildings,
structures or environments. Presentations can be viewed on
Cheoptics systems ranging in size from 1.5 meters wide up to 10 meters
wide, allowing displays of both small and large objects.
http://www.3dscreen.ramboll.dk
University of Tokyo researchers develop TWISTER
A research team from the University of Tokyo has developed a rotating panoramic display that immerses viewers in
a 3D video environment. The Telexistence Wide-angle Immersive STEReoscope, or TWISTER, is the world’s first
full-color 360-degree 3D display that does not require viewers to wear special glasses. The researchers have spent
over 10 years researching and developing the device. Inside the 4-foot by 6.5-foot cylindrical display are 50,000
LEDs arranged in columns. As the display rotates around the observer’s head at a speed of 1.6 revolutions per
second, these specially arranged LED columns show a slightly different image to each of the observer’s eyes, thus
creating the illusion of a 3D image. In other words, TWISTER tricks the eye by exploiting “binocular parallax”.
For now, TWISTER is capable of serving up pre-recorded 3D video from a computer, allowing viewers to
experience things like virtual amusement park rides or close-up views of molecular models. However, the
researchers are working to develop TWISTER’s 3D videophone capabilities by equipping it with a camera system
that can capture real-time three-dimensional images of the person inside, which can then be sent to another
TWISTER via fiber optics. In this way, two people separated by physical distance will be able to step into their
TWISTERs to enjoy real-time 3D virtual interaction. http://www.star.t.u-tokyo.ac.jp/projects/TWISTER/
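The mechanism above invites a quick sanity check on timing. A sketch, assuming a hypothetical number of evenly spaced LED column units (only the 1.6 revolutions-per-second figure comes from the article):

```python
# Back-of-the-envelope timing for a TWISTER-style rotating display.
# Only the rotation rate is from the article; the column count is an
# illustrative assumption.
REV_PER_SECOND = 1.6
N_COLUMNS = 36                      # assumed evenly spaced LED column units

# A fixed viewing direction is repainted every time any column sweeps past
# it, so the effective refresh rate is columns-per-revolution times the
# rotation rate.
refresh_hz = N_COLUMNS * REV_PER_SECOND
dwell_ms = 1000.0 / refresh_hz      # time between repaints of one direction

print(f"effective refresh: {refresh_hz:.1f} Hz")
print(f"repaint interval: {dwell_ms:.1f} ms")
```

Under these assumed numbers each viewing direction would be repainted roughly 58 times per second; TWISTER's actual column count is not stated in the article.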
MIT researchers develop 3D microscope that generates video images
MIT researchers designed a microscope for generating three-dimensional movies of live cells. The microscope,
which works like a cellular CT scanner, will let scientists watch how cells behave in real time at a greater level of
detail. This new device overcomes a trade-off between resolution and live action that has hindered researchers’
ability to examine cells and could lead to new methods for screening drugs. Cells can’t be examined under a
traditional microscope because they don't absorb very much visible light. So the MIT microscope relies on another
optical property of cells: how they refract light. As light passes through a cell, its direction and wavelength shift.
Different parts of the cell refract light in different ways, so the MIT microscope can show the parts in all their
detail. The microscope creates three-dimensional images by combining many pictures of a cell taken from several
different angles. It currently takes only a tenth of a second to generate each three-dimensional image, fast enough to
watch cells respond in real time. This processing technique, called tomography, is also used for medical imaging in
CT scans, which combine X-ray images taken from many different angles to create three-dimensional images of the
body. http://web.mit.edu/newsoffice/2007/cells-0812.html
This image of a live, one millimeter-long worm taken with a new 3D microscope clearly shows internal structures including
the digestive system. The worm’s mouth is at the left and the thick red band is the worm’s pharynx.
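The combine-many-angles principle named here (tomography) can be illustrated with a toy, unfiltered back-projection in pure NumPy. This is only a sketch of the idea: the MIT instrument measures refraction rather than absorption, and real CT reconstruction applies a filtering step omitted here.

```python
# Illustrative tomography sketch: take 1D projections of an object at many
# angles, then "smear" each projection back across the plane and accumulate.
import numpy as np

def rotate_nn(img, theta):
    """Nearest-neighbour rotation of a square image about its centre."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    # Inverse-map each output pixel back into the source image
    x = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
    y = -(xs - c) * np.sin(theta) + (ys - c) * np.cos(theta) + c
    xi = np.clip(np.round(x).astype(int), 0, n - 1)
    yi = np.clip(np.round(y).astype(int), 0, n - 1)
    return img[yi, xi]

def back_project(image, n_angles=60):
    """Project at n_angles angles, then accumulate the back-projections."""
    recon = np.zeros_like(image, dtype=float)
    for a in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        proj = rotate_nn(image, a).sum(axis=0)       # 1D projection at angle a
        smear = np.tile(proj, (image.shape[0], 1))   # smear it across the plane
        recon += rotate_nn(smear, -a)                # rotate the smear back
    return recon / n_angles

# Toy "cell": a bright square in a dark field
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
recon = back_project(phantom)
print(recon[32, 32] > recon[2, 2])   # reconstruction is brightest inside
```

The same accumulation over viewing angles is what lets the microscope build a 3D refractive-index map from many 2D views of the cell.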
DNP and Sony commence production of hologram technology to counter fake merchandise
Dai Nippon Printing and Sony PCL announced in July the start of made-to-order production of a new Lippmann
hologram for security uses, which is capable of storing dynamic picture images, including animation and live-action created with stereogram technology. The newly developed hologram has the capacity to store in excess of
100 image frames on a single hologram, and because it is extremely difficult to counterfeit, is effective in helping
to discriminate between genuine and counterfeit goods via uses including certification seals on genuine products.
Live-action film as viewed via the newly developed hologram. By changing the viewing angle, the images
appear to continually change.
Unlike existing mainstream embossed holograms, which record images in physical relief on the surface of the
material, the newly developed hologram is a Lippmann hologram, which stores images by recording interference
patterns in photo-sensitive layers produced by laser. Lippmann holograms are extremely difficult to counterfeit: the photo-sensitive materials they use are hard to obtain, they produce unique image expressions not possible with other hologram formats, and they require specialized manufacturing technology. DNP and Sony PCL have made it even more difficult to illicitly reproduce the holograms by providing
them with the capacity to record in excess of 100 image frames on a single hologram via the unique application of
line order recording technology, which has made it possible to record dynamic images, including flying logos and
animation. Each parallax image displayed on the LCD is horizontally compressed into a vertical slit by a cylindrical
lens. The beam of the slit and the reference beam form an interference pattern, which is then recorded on the photo-sensitive material on the glass substrate. The hundreds of vertical slits, placed sequentially side-by-side, form the
larger hologram. The new hologram has undergone approximately 18 months of field tests, and DNP and Sony
PCL have moved into full-scale operations after successfully confirming the effectiveness of the new hologram as
an anti-counterfeiting measure. http://www.dnp.co.jp/international/holo/index.html
New Carl Zeiss stereo microscope claims greatest FOV, zoom, resolution
Carl Zeiss MicroImaging Inc. introduced the SteREO Discovery.V20 stereo microscope, which claims the
industry’s largest field of view (23 mm at 10x), highest zoom range (20 to 1), and greatest resolution, all combined
in one stereo microscope to allow visualization of large samples and their fine details without changing objectives
or eyepieces. The new tool also promises a substantially greater depth of field than other stereo microscopes,
allowing users to view and measure well-resolved object details with greater ease and accuracy. Step motor
control enables continuous increases in magnification with precise zoom levels
to create a well-defined, high contrast image throughout the zoom range. The
System Control Panel (SyCoP) puts all major microscope functions at the
user's fingertips, allowing for fast changeover between zoom, focus, and
illumination functions and displays for total magnification, object field,
resolution, depth of field, and Z position. Carl Zeiss has designed the system to
meet the ergonomic demands of users working for hours at a time. The
SteREO Discovery.V20 is fully integrated into Zeiss’ modular SteREO
Discovery system and compatible with all SteREO components. The
microscope can be combined with the AxioCam digital camera and the
AxioVision image analysis and evaluation software for a powerful, complete
image recording and analysis system. http://www.zeiss.com.au
Thomas Jefferson University Hospital software creates 3D view of the brain
Researchers at Thomas Jefferson University Hospital in Philadelphia have developed
software that integrates data from multiple imaging technologies to create an interactive
3D map of the brain. The enhanced visualization gives neurosurgeons a much clearer
picture of the spatial relationship of a patient’s brain structures than is possible with any
single imaging method. In doing so, it could serve as an advanced guide for surgical
procedures, such as brain-tumor removal and epilepsy surgery. The new imaging
software collates data from different types of brain-imaging methods, including
conventional magnetic resonance imaging (MRI), functional MRI (fMRI), and
diffusion-tensor imaging (DTI). The MRI gives details on the anatomy, fMRI provides
information on the activated areas of the brain, and DTI provides images of the network
of nerve fibers connecting different brain areas. The fusion of these different images
produces a 3D display that surgeons can manipulate: they can navigate through the
images at different orientations, virtually slice the brain in different sections, and zoom
in on specific sections. With the new software, surgeons are able to see the depth of the
fibers going inside the tumor, shown as dashed lines, and the proximity of those on the
outside, shown as solid lines. The lines are color-coded based on their depth, ranging from dark red, which represents the deepest, to dark blue, which represents the shallowest; the scale on the left side of the accompanying images reflects this depth coding. http://www.jeffersonhospital.org
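The depth color-coding can be sketched as a linear two-color ramp; the actual palette used by the Jefferson software is not specified, so the endpoint colors below are assumptions:

```python
# Sketch of the depth color-coding described above: map normalized fiber
# depth onto a dark-red-to-dark-blue ramp. The exact endpoint RGB values
# are illustrative assumptions, not the Jefferson software's palette.
import numpy as np

DARK_RED = np.array([0.5, 0.0, 0.0])    # deepest fibers
DARK_BLUE = np.array([0.0, 0.0, 0.5])   # shallowest fibers

def depth_to_color(depth, d_min, d_max):
    """Linear blend: d_min (shallowest) gives dark blue, d_max dark red."""
    t = (depth - d_min) / (d_max - d_min)
    return (1.0 - t) * DARK_BLUE + t * DARK_RED

mid = depth_to_color(5.0, 0.0, 10.0)    # halfway: equal mix of the endpoints
print(mid)
```

Each rendered fiber segment would then be drawn with the color returned for its measured depth.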
AIST improves 3D projector
In 1926, Kenjiro Takayanagi, known as the “father of Japanese television,” transmitted the image of a katakana
character (イ) to a TV receiver built with a cathode ray tube, signaling the birth of the world’s first all-electronic
television. In early August, in a symbolic gesture over 80 years later, researchers from Japan’s National Institute of
Advanced Industrial Science and Technology (AIST), Burton Inc., and Hamamatsu Photonics K.K. displayed the
same katakana character using a 3D projector that generates moving images in mid-air. The 3D projector, which
was first unveiled in February 2006 but has seen some recent improvements, uses focused laser beams to create
flashpoint “pixels” in mid-air. The pixels are generated as the focused lasers heat the oxygen and nitrogen
molecules floating in the air, causing them to spark in a phenomenon known as plasma emission. By rapidly
moving these flashpoints in a controlled fashion, the projector creates a three-dimensional image that appears to
float in empty space. The projector’s recent upgrades include an improved 3D scanning system that boosts laser
accuracy, as well as a system of high-intensity solid-state femtosecond lasers recently developed by Hamamatsu Photonics. The new lasers, which unleash 100-billion-watt pulses (0.1-terawatt peak output) of light every 10-trillionths of a second (100 femtoseconds), improve image smoothness and boost the resolution to 1,000 pixels per second. In addition, image brightness and contrast can be controlled by regulating the number of pulses fired at each point in space. http://www.aist.go.jp
“Harry Potter and the Order of the Phoenix”: IMAX 3D shatters box office records
IMAX Corporation and Warner Bros. Pictures announced
that “Harry Potter and the Order of the Phoenix”
shattered virtually every opening box office record at
IMAX theatres during its debut, contributing $7.3 million
of the $140 million that the film grossed at the domestic
box office, from July 11 through July 15. The picture also
broke the record for IMAX’s largest single day
worldwide total at $1.9 million and posted a domestic
opening per screen average of $80,500. “Harry Potter and
the Order of the Phoenix” opened on 91 domestic IMAX
screens and 35 international IMAX screens, making it the
largest opening in IMAX's 40-year history, with a record-smashing worldwide estimated total of $9.4 million. The
film’s overall worldwide debut total was an estimated
$333 million. Through its 7th week, the film earned more
than $24 million on 91 IMAX screens domestically and more than $11 million on 52 IMAX screens internationally.
The worldwide IMAX total is now more than $35 million with an impressive per screen average of $243,000
making it the highest grossing live-action Hollywood IMAX release. http://www.imax.com
“Beowulf” to be available in Dolby 3D Digital Cinema…
Dolby Laboratories announced in early August that Paramount Pictures’ “Beowulf”, scheduled for release on
November 16, will be made available to select exhibitors who have installed Dolby 3D Digital Cinema technology
by the film’s release date. Dolby claims that their 3D
Digital Cinema provides exhibitors and distributors an
efficient and cost-effective 3D solution. The ability to
utilize a standard white screen gives exhibitors a cost
advantage, as no special “silver screen” is required. The
ease of shifting the Dolby 3D Digital Cinema system from
3D to 2D and back, as well as moving the 3D film between
auditoriums of different sizes, retains the flexibility
exhibitors have come to expect. Dolby 3D Digital Cinema
uses a unique color filter technology that provides very
realistic color reproduction with extremely sharp images
delivering a great 3D experience to every seat in the house.
http://www.dolby.com
…also in REAL D and IMAX 3D
In addition to the Dolby 3D Digital Cinema screens,
“Beowulf” will also be presented on both REAL D and
IMAX 3D platforms, which cater to audiences willing to
pay a premium price for a premium, or multi-dimensional,
viewing experience. Beowulf is a digitally enhanced liveaction film using the same motion-capture technology seen
in “The Polar Express”. Until now, IMAX and Paramount
hadn't released a film together since IMAX began
remastering commercial films into the large format in 2002.
In total, it’s expected that “Beowulf” will show in 3D on
well over 1000 screens worldwide at its release.
“Monsters vs. Aliens” in 3D to hit theatres on May 15, 2009
DreamWorks Animation’s “Monsters vs. Aliens” is slated for domestic release May 15, 2009, a week earlier than
previously announced. "Monsters vs. Aliens", now confirmed as the official title, will be the first DreamWorks Animation film produced in stereoscopic 3D. It is described as a reinvention of the classic 1950s monster movie into an irreverent modern-day action comedy. Directed by Conrad Vernon and Rob Letterman, the film is in production and will be distributed domestically by Paramount Pictures. May 2009 is shaping up to be a crowded month for 3D releases. James Cameron's 3D stereoscopic film "Avatar" is slated for May 22, which was the planned release date for Monsters vs. Aliens. With two anticipated stereoscopic films
set to debut during the frame, the digital-cinema community is watching this release window. Real D has advised
that it is on track to have 4,000 3D-ready digital-cinema screens installed in the US by May 2009, though that
number might increase. Jeffrey Katzenberg, head of DreamWorks, has suggested that 6,000 screens need to be available for "Monsters vs. Aliens" to be the success the studio is hoping for. http://www.dreamworksanimation.com
“Sea Monsters: A Prehistoric Adventure” to open in IMAX 3D
National Geographic’s new giant-screen film “Sea Monsters: A Prehistoric Adventure” premieres worldwide in
IMAX and other specialty theatres on October 5th. The movie brings to life the extraordinary marine reptiles of the
dinosaur age on the world’s biggest screens in both 3D and 2D. The film, narrated by Tony Award-winning actor
Liev Schreiber and with an original score by longtime musical collaborators Richard Evans, David Rhodes and
Peter Gabriel, takes audiences on a journey into the relatively unexplored world of the “other dinosaurs”, those
reptiles that lived beneath the water. Funded in part through a grant from the National Science Foundation, the film
delivers to the giant screen the fascinating science behind what we know, and a vision of history’s grandest ocean
creatures. http://www.imax.com
The film follows a family of Dolichorhynchops, also known informally as Dollies, as they traverse ancient waters
populated with saber-toothed fish, prehistoric sharks and giant squid. On their journey the Dollies encounter
other extraordinary sea creatures: lizard-like reptiles called Platecarpus that swallowed their prey whole like
snakes; Styxosaurus with necks nearly 20 feet long and paddle-like fins as large as an adult human; and at the top
of the food chain, the monstrous Tylosaurus, a predator with no enemies.
3D Entertainment completes photography on “Dolphins and Whales 3-D: Tribes of the Ocean”
3D Entertainment Ltd. announced the successful completion of principal photography on its upcoming feature,
“Dolphins and Whales 3-D: Tribes of the Ocean”. This breathtaking new documentary will make its US debut on
IMAX 3-D screens in February 2008 before expanding into Europe and
will be released in collaboration with the United Nations Environment
Program and its North American office, RONA, based in Washington
D.C. “Dolphins and Whales 3-D: Tribes of the Ocean” is currently in
post-production and will be completed by late November. Principal photography began in June 2004 in Polynesia, and three extensive years were required to capture the necessary footage. Filming consisted
of no fewer than 12 expeditions and 600 hours underwater at some of the
remotest locations on Earth, including off the Pacific Ocean atolls of
Moorea and Rurutu, Vava'u Island of the Kingdom of Tonga, Pico Island
in the Azores archipelago and the Bay of Islands in New Zealand.
Following “Ocean Wonderland” (2003) and “Sharks 3-D” (2005),
“Dolphins and Whales 3-D: Tribes of the Ocean” marks the final chapter
in a unique trilogy of ocean-themed documentaries that have proven
immensely popular with audiences, grossing a combined $52.5 million at
the box office. http://www.3defilms.com
Lightspeed Design and DeepSea Ventures announce completion of “DIVE!”
Lightspeed Design and DeepSea Ventures announced the completion of their digital 3D stereoscopic film, “DIVE! Manned Submersibles and The New Explorers”. Utilizing deep-ocean manned submersibles in the Pacific Ocean off the coast of Washington State, principal 3D photography for “Dive!” was realized in late 2006 by stereoscopic filmmaker Lightspeed Design of Bellevue, Washington. In order to fit into the small, three-person submersible,
Lightspeed custom-engineered an opto-mechanical dual
camera rig for two Panasonic HVX-200 high-definition
cameras. The advanced rig provides precise control of camera offsets, which are determined by 3D algorithms
and Lightspeed’s proprietary live HD video streaming
software. During the voyage two lost shipwrecks were
discovered 1000 feet below the surface. Both were
fishing vessels, one of Japanese origin and the other
most likely American. The ships were located by
DeepSea Ventures (DSV), a deep ocean exploration
company based in Spokane, Washington. The research
vessel Valero IV - Seattle, and submersible experts
Nuytco Research Ltd. of Vancouver, BC, supported the
mission. “DIVE!” is a 22-minute high-definition 3D
film that combines computer graphics and live-action to literally take the audience along for the ride as a unique expedition of “Citizen Explorers” voyages in submarines to the bottom of the ocean. “Dive!” opened June 21 at MOSI in Tampa, Florida. http://www.lightspeeddesign.com
Kinepolis chooses Dolby for cinema conversion
The Kinepolis Group has selected the new Dolby 3D Digital Cinema technology to outfit 17 screens throughout
Europe. Kinepolis recently opened its 23rd cinema multiplex, in Ostend, Belgium, and installed the first Dolby 3D
system in Europe. The Belgium-based exhibitor plans to convert one screen per complex using the Dolby 3D
system. http://www.dolby.com
Eclipse 3D Systems combines monochrome and color to produce 3D
Eclipse 3D Systems announced a new patent-pending technology for displaying 3D movies in theaters and
homes. The Eclipse 3D technology promises to be less expensive and brighter than polarized projection, which
some theaters have used to show 3D movies. The new technology is applicable to digital projectors and flat panel
displays, opening the possibility of distributing high-quality 3D through most of the major movie distribution channels, including movie theaters, DVD sales and rental, and digital TV. The Eclipse 3D technology combines a monochrome image with a full-color image to produce full-color 3D. The 3D images can be viewed with Eclipse colored filter glasses. The images can be projected on any white screen or surface. Since a silver screen is not needed, the Eclipse 3D format is less expensive and more portable than the polarized format. Due to the properties of the human visual system, the monochrome image is perceived with a brightness gain of about four times while not contributing significantly to color vision. This process is similar to night vision, although the full-color image is perceived with normal brightness and color. Color perception comes almost entirely from the full-color image. The gain in brightness for the monochrome image means that little brightness is used in adding 3D to a display. As such, Eclipse 3D images are about 4X brighter than polarized alternatives. http://www.eclipse-3d.com

One of the most surprising aspects of the Eclipse 3D format is that full color perception can be obtained from only one eye. This 3D pair contains a red monochrome image and a full-color image, in which case the observed color in the 3D image is full-color. For even better color, put a red filter from a pair of red/cyan glasses over your left eye.
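One way the monochrome-plus-color pairing might be assembled from a stereo pair can be sketched as follows; the Rec. 601 luma weights and the use of the red channel are illustrative assumptions, not Eclipse's actual filter design:

```python
# Sketch (under assumptions) of building an Eclipse-style frame pair from a
# stereo pair: one eye gets a red monochrome rendering of the left view, the
# other the untouched full-color right view.
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])  # Rec. 601 luminance weights (assumed)

def eclipse_pair(left_rgb, right_rgb):
    luma = left_rgb @ LUMA                        # monochrome left view
    mono = np.zeros_like(left_rgb)
    mono[..., 0] = luma                           # place it in the red channel
    return mono, right_rgb                        # (monochrome, full-color)

left = np.random.rand(4, 4, 3)
right = np.random.rand(4, 4, 3)
mono, color = eclipse_pair(left, right)
print(mono[..., 1].max() == 0.0)   # green/blue channels of the mono view are empty
```

Viewed through matched filter glasses, the brain would fuse the bright monochrome view with the full-color view, which is the perceptual effect the article describes.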
Pace and Quantel team on 3D post system
Vince Pace, who co-developed the Fusion 3D camera system with director James Cameron, has been working
closely with manufacturer Quantel on the design of a 3D stereoscopic postproduction system. Quantel and Pace
have presented private technology demonstrations of the developing system. Quantel’s Mark Horton estimated that
there were about 100 visitors to Pace’s Burbank office, including directors, visual effects supervisors,
postproduction execs and representatives from most of the major studios. Horton said the feedback was
encouraging and that as a result, Quantel intends to release the toolset as a product. A shipping date has not been
determined, but Horton said that it would be a new version release of Quantel’s Pablo digital intermediate/color
grading system. Current Pablo customers would have the option to upgrade. The goal is to increase speed, reduce
cost and add creative flexibility in 3D stereoscopic filmmaking. Quantel said the technology is being developed to
enable creative post decisions to be made and viewed in 3D in real time. http://www.quantel.com
3ality Digital uses SCRATCH software in U2 film
“U2 3D”, the 3D feature film, was one of the highlights at the recent Cannes Film
Festival, and 3ality Digital and ASSIMILATE teamed to bring the same experience
to the IBC 07 audience. A three-song segment of “U2 3D”, hosted by Steve
Schklair, founder and CEO of 3ality, was featured in IBC’s Big Screen Programme
venue on September 9. 3ality’s stereoscopic 3D technology, coupled with
ASSIMILATE’s SCRATCH real-time 3D data workflow and DI tool suite (from conform to finish) allows lead
singer Bono to reach out toward the 3D camera and appear to be stepping into the theater. “U2 3D” is scheduled for
release to theaters this year. http://www.3alityDigital.com
Kerner debuts 3D mobile cinemas
Kerner Mobile Technologies announced the debut of the first in its new line of Kerner 3D Mobile Cinemas at the
California Speedway in greater Los Angeles on Labor Day weekend. Kerner Mobile's 30-foot 3D movie screen is
set inside a 10,000 sq. ft. tented theater made by Tentnology. “Opportunity, California FanZone” provided car
racing fans with everything from music concerts to shopping. In the near future, Kerner says that its Mobile 3D
Cinemas will surpass movie theaters with the development of special lighting effects, fog and surround sound: 3D
sights, smells and a light breeze on the face - an immersive 3D experience. http://www.kernermobile.com
nWave releases first “true” 3D feature
nWave, based in Brussels and Los Angeles, has
released “Fly Me To The Moon”, loosely based on the Apollo 11 moon landing, featuring Buzz Aldrin as himself, and the crucial intervention of three flies, hence the play on words with the song title. It is the
first true 3D feature film to be released, according to
nWave CEO Ben Stassen. It will be going to around
700 digital 3D cinemas and over 200 IMAX theaters.
According to Stassen, the first 3D full-length film was
the Robert Zemeckis film, “Monster House”. But he
stresses that “Monster House” was not originally
created in 3D. Instead, it utilized software applied
after filming in 2D. Disney’s “Meet The Robinsons”
was the second 3D release film, which also used
software applied after the fact, he says.
“Recent advances in computer technology make it possible to convert 2D films to 3D. However, while
converted films like “Chicken Little” and “Monster House” will be crucial to spurring the development of
digital 3D theaters, to fully utilize the potential of 3D cinema, you must design and produce a film differently
than you would a 2D film,” Stassen says. “It’s a different medium. It involves more than just adding depth and
perspective to a 2D image. There’s a very strong physical component to authentic 3D.”
He points out that there are very encouraging signs that Hollywood is starting to pay attention to the 3D revival
spreading worldwide through the giant screen theater network. He pinpoints the importance of “The Polar Express”
that benefited from a great 3D IMAX version, generating over $40 million of the film’s $283 million worldwide
grosses on only 64 screens. nWave Pictures is known for being one of the most prolific producers of 3D films in the
world. Founded in 1994 by Ben Stassen and Brussels-based D&D Media Group, nWave Pictures quickly
established itself as the world’s leading producer and distributor of ride films for the motion simulator market. The
company’s current library of titles makes up an estimated 60-70% of all ride simulation films being shown
worldwide. Core to the nWave operation is the idea that a computer graphics workstation is a mini Hollywood on a
desktop. It can create a whole movie and, with high-speed Internet, even distribute it. He points out that he only
uses off-the-shelf software – Maya, Lightwave, and more
recently Pixar’s Renderman. The difference is that he does not need to use hundreds of animation artists; only
about 50 people are working on a production at any one time.
http://www.flymetothemoonthemovie.com
As an interesting sidenote, the movie’s website includes some
anaglyph images to help showcase the characters. Next to an
icon showing red/green glasses, the following warning has been
inserted:
ANDXOR and The Light Millennium put forward human rights proposal to the UN
ANDXOR Corporation and The Light Millennium have submitted a joint proposal to the Department of Public
Information/NGO Section & Planning Committee of the United Nations to create a stereoscopic movie on human
rights. The two companies are offering to produce 40 minutes of footage in ortho-stereoscopic format to “allow a true three
dimensional vision and an incredible immersive participation of the viewer”. They will create a movie regarding
human rights, “the basic rights and freedoms to which all humans are entitled (from liberty to children abuse, from
freedom of expression to education) and in particular could include Darfur and Karabakh/Azerbaijan”. The
companies say that the footage will be very useful to UN campaigns in terms of the viewers’ support and
awareness. The footage would be filmed with special stereoscopic cameras in digital full-HD. The movie would be
played at the UN/DPI-NGO 61st Annual Conference and afterwards in the new stereo-ready theaters. The companies
will also create a Blu-ray DVD to be distributed together with stereoscopic glasses, ready to be played
on standard televisions as well. http://www.andxor.com http://www.lightmillennium.org
Reallusion and DAZ 3D partner on real-time filmmaking 3D content
Reallusion, a software developer providing Hollywood-like 3D moviemaking tools for PC and embedded devices,
and DAZ 3D, a developer of 3D software and digital content creation, announced a strategic partnership to bring
real-time filmmaking and 3D content to the masses. Thanks to this partnership, users will be able to import content
created in DAZ Studio or purchased from DAZ 3D’s library of professional content into iClone, Reallusion’s real-time filmmaking engine, using Reallusion’s recently released 3DXchange object conversion tool. The result will be
a truly open filmmaking platform that will empower aspiring filmmakers of all stripes to, in the words of
Reallusion’s theme for SIGGRAPH 2007, “Go Real-Time” with “Movies, Models and Motion.” Reallusion’s
3DXchange supports most 3DS or OBJ files. It also loads existing props, accessories or 3D scenes from current
iClone content so users can customize an object’s position, orientation, size, specularity, shadow or other attribute
setting. Props, accessories and scenes can also be generated into massive libraries for both long and short-form
iClone film productions. DAZ Studio is a free software application that allows users to easily create digital art.
Users can use this software to load in people, animals, vehicles, buildings, props, and accessories to create digital
scenes. http://www.reallusion.com
Belgian cinema chain opens with Barco projectors
Barco announced that its latest range of 2K digital cinema projectors has been installed in Kinepolis’s newest
cinema multiplex at Ostend, Belgium. Opening exactly one year after Kinepolis Brugge, the new multiplex is the
23rd in Europe for Kinepolis, Belgium’s number one cinema chain, and houses eight state-of-the-art cinemas with a
total of 1,755 seats. Kinepolis Oostende has been fitted with Barco’s latest range of digital cinema projectors, the
DP-3000 and DP-1500, which were launched at ShoWest in
March this year. The DP-3000 is Barco’s new flagship, and
the brightest “large venue” digital cinema projector in the
industry. Using Texas Instruments’ 1.2-inch DLP Cinema
chip, the DP-3000 is designed for screens up to 30 m (98 ft)
wide and has a 2000:1 contrast ratio, new lenses, a new
optical design and high-efficiency 6.5 kW lamps. The DP-1500 is Barco’s new mid- and small-venue projector, designed
for screens up to 15 m (49 ft) wide. It incorporates Texas
Instruments’ new 0.98-inch DLP Cinema chip that offers the
same pixel resolution (2048x1080) as its larger 1.2-inch
counterpart, but its smaller size offers significant advantages.
http://www.barco.com
3D – lost in translation
We couldn’t resist including this
screen capture from a Korean
website devoted to stereo imaging.
Their online poll doesn’t translate
very well using Google’s translator
function. http://www.3dnshop.com
Hang Zhou 3D World now selling 120 Tri-lens stereo cameras
Hang Zhou 3D World Photographic Equipment Co., Ltd introduced their 120 Tri-lens manual reflex stereo camera –
the first one developed and made in China. Specifications include anti-reflection coated glass optics with seven
elements in six groups, f/2.8, 80 mm focal length, a lens separation of 63.5 mm, and light metering via two SPDs
(silicon photodiodes), with aperture and shutter speeds matched according to the LED display. The focusing screen
consists of a split-image microprism surrounded by a Fresnel screen, with three LEDs indicating five exposure
graduations. The camera uses one roll of 120 reversal film for a pair of 58 x 56 mm stereo images; six pairs per
roll. The company employs about 100 people focused on the development of devices that promote stereo imaging.
http://www.3dworld.cn
New IBM mainframe platform developed to support virtual worlds
The International Herald Tribune revealed that IBM is launching a new mainframe platform specifically designed
for next-generation virtual worlds and 3D virtual environments. In concert with Brazilian game developer Hoplon,
IBM will use the PlayStation 3’s ultra-high-powered Cell processor to create a mainframe architecture that will
provide the security, scalability and speed that are currently lacking in 3D environments – a lack that is one of the
factors keeping them from becoming widely adopted.
USGS posts 3D photos of national parks
The US Geological Survey (USGS) has posted hundreds of highly-detailed anaglyphic 3D photographs of national
parks on the Web. http://3dparks.wr.usgs.gov/index.html
Anaglyph images from Arches National Park and Saguaro National Monument released by USGS
StereoEye features huge collection of 3D photographic images
A Japanese stereo society has posted a large number of images in numerous formats to its website. Beware:
visiting this website could consume a couple of hours… http://www.stereoeye.jp
The StereoEye website showcases hundreds of 3D photos in several form factors including these anaglyph images of the
Tokyo Tower and a fireworks display in Tokyo Harbor.
3D image of the Moon captured by photographer from the Earth
It’s intuitive to think that getting a stereo image of the Moon from Earth is not possible, but all it requires is
two pictures taken from slightly different angles, plus a little patience. In this case, photographer Laurent
Laveder used two pictures taken months apart, one in November 2006 and one in January 2007. He relied on the
Moon’s continuous libration (or wobble) as it orbits to produce two shifted images of a full moon, resulting in a
compelling stereo view. http://www.pixheaven.net
The image on the left is an anaglyph of the Moon; the right image is a stereo pair intended for cross-eyed viewing
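For readers curious how such red/cyan images are composed, the basic anaglyph technique can be sketched in a few lines. This is a generic illustration, not Laveder’s actual workflow, and it assumes two registered stereo images as NumPy arrays: the red channel is taken from the left-eye view, and the green and blue (cyan) channels from the right-eye view.

```python
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Compose a red/cyan anaglyph from a stereo pair.

    left_rgb, right_rgb: HxWx3 uint8 arrays of the same shape.
    Red comes from the left eye's view; green and blue (cyan)
    come from the right eye's view.
    """
    assert left_rgb.shape == right_rgb.shape
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]  # red channel from the left image
    return out

# Tiny synthetic pair: a bright square shifted 2 px between the eyes,
# mimicking the parallax the libration provides.
left = np.zeros((8, 8, 3), dtype=np.uint8)
right = np.zeros((8, 8, 3), dtype=np.uint8)
left[2:6, 2:6] = 255
right[2:6, 4:8] = 255
ana = make_anaglyph(left, right)
```

Viewed through red/cyan glasses, the overlapping region appears white while the disparity between the red-only and cyan-only fringes is what the brain fuses into depth.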
NASA’s STEREO reveals solar prominences
In late August, STEREO observed a gathering of solar
prominences in profile as they twisted, stretched and floated
just above the solar surface. Over about two and a half days
(August 16-18, 2007), the prominences were seen in extreme
ultraviolet light by the Ahead spacecraft. Prominences are
clouds of cooler gases controlled by powerful magnetic
forces that extend above the Sun’s surface. In a video created
by NASA, the careful observer can sometimes see the gases
arcing out from one point and sliding above the surface to
another point. In the most interesting sequence near the end
of the clip, the upper prominence seems to arch away into
space. Such sequences serve to show the dynamic nature of
the Sun. STEREO (Solar TErrestrial RElations Observatory)
is a two-year mission, launched in October 2006, that provides a
unique view of the Sun-Earth system. The two nearly
identical observatories, one ahead of Earth’s orbit, the other
behind, trace the flow of energy and matter from Sun to
Earth. The image to the left is in anaglyph form.
http://www.nasa.gov/mission_pages/stereo/main/index.html
Globe4D shows off four-dimensional globe
Globe4D is an interactive, four-dimensional globe. It’s a projection of the Earth’s surface on a physical sphere that
shows the historical movement of the continents as its main feature, but is also capable of displaying all kinds of
other geographical data such as climate changes, plant growth, radiation, rainfall, forest fires, seasons, airplane
routes, and more. The user can interact with the globe in two ways. First: rotation of the sphere itself. Second:
turning a ring around the sphere. By rotating the sphere the projected image rotates along with the input movement.
Turning the ring controls time as the 4th dimension of the globe. Of course Globe4D is not limited to the Earth
alone. The Moon, the Sun, Mars and any other spherical object can be projected as well. Users can even go to the
middle of the Earth by zooming in on the crust and peeling the Earth like an onion. http://www.globe4d.com
The Globe4D lets viewers see video images on the movable sphere, while time and other functions are
controlled by turning the ring around the sphere.
Microsoft and NASA create several new Photosynths
Several new Photosynths were generated through a collaboration between NASA and Microsoft’s Live Labs. They
show different aspects of the Shuttle’s lifecycle related to the Orbiter, Endeavour, Launch Pad, and Vehicle
Assembly Building. The Photosynth process weaves hundreds of images together and allows viewers to pan and
zoom amongst the images in a three-dimensional layer of images. http://labs.live.com
The image on the left shows the Photosynth from a distance. Users can zoom on images to reveal high-resolution
shots such as the close-up of the Endeavour on the right.
Google Earth adds Sky
In late August, Google announced the launch of Sky, a new feature that enables users of Google Earth to view the
sky as seen from planet Earth. With Sky, users can now float through the skies via Google Earth. This easy-to-use
tool enables all Earth users to view and navigate through 100 million individual stars and 200 million galaxies.
High-resolution imagery and informative overlays create a unique playground for visualizing and learning about
space. To access Sky, users need only click “Switch
to Sky” from the “View” drop-down menu in
Google Earth, or click the Sky button on the
Google Earth toolbar. The interface and navigation
are similar to that of standard Google Earth
steering, including dragging, zooming, search, “My
Places”, and layer selection. As part of the new
feature, Google is introducing seven informative
layers that illustrate various celestial bodies and
events, including Constellations, Backyard
Astronomy, Hubble Space Telescope Imagery,
Moon, Planets, Users Guide to the Galaxies, and
Life of a Star. The announcement follows last
month’s inclusion of the NASA layer group in
Google Earth, showcasing NASA’s Earth
exploration. The group has three main components,
including Astronaut Photography of Earth, Satellite Imagery, and Earth City Lights. Astronaut Photography of
Earth showcases photographs of the Earth as seen from space from the early 1960s on, while Satellite Imagery
highlights Earth images taken by NASA satellites over the years and Earth City Lights traces well-lit cities across
the globe. The feature will be available on all Google Earth domains, in 13 languages. To access Sky in Google
Earth, users need to download the newest version of Google Earth, available at: http://earth.google.com.
Google Earth introduces flight simulator
In the new Google Earth 4.2 beta, there’s a flight simulator
mode which provides a fascinating 3D experience. To
access it, hit the keyboard shortcut CTRL-ALT-A to bring up
a dialog that lets you choose between two types of
aircraft (an F-16 or an SR-22) and one of several
airports. When you’re ready, select “Start Flight”. You’ll
find controls for flaps, landing gear, trim, and more. The
SR-22 is easier to fly for beginners. You get a head-up
display (HUD) just like in a fighter jet. And the indicators
tell you which direction you are moving, rate of climb,
altitude, and other useful information most flight simulator
aficionados will understand. Some useful tips for using the
new simulator are available at http://www.gearthblog.com.
Virtual Earth 3D adds new cities and greater detail
Digital Urban recently added a tutorial for creating very high-resolution cityscape panoramas with Virtual Earth.
Several new cities have recently been launched in 3D and along with the tutorial, the Digital Urban site includes
videos, some obscure tips, and some great insights into the building of a realistic virtual world such as the subtle
tweaks made in modeling low, dense cities like Toulouse, shown below. http://www.digitalurban.blogspot.com
Numerous new VE3D images were recently added, including those of Montreal and Toulouse
Niagara Falls – impressive in Virtual Earth
Good digital elevation models, super high-resolution aerial imagery and 3D modeling combine to create virtual
worlds of amazing realism. The first image below is a static Birds Eye and the second image is a snapshot of the
same part of the Horseshoe Falls at Niagara in interactive 3D. http://www.microsoft.com/virtualearth/
Georgia Institute of Technology and Microsoft Research develop 4D Cities
Computer scientists from the Georgia Institute of Technology and Microsoft Research have developed 4D Cities, a
software package that shows the evolution of a city over time, creating a virtual historical tour. The software can
automatically sort a collection of historical city snapshots into date order. It then constructs an animated 3D model
that shows how the city has changed over the years. The idea is to give architects, historians, town planners,
environmentalists and the curious a new way to look at cities, says Frank Dellaert at the Georgia Institute of
Technology in Atlanta, who built the system with his
colleague Grant Schindler and Sing Bing Kang of
Microsoft’s research lab in Redmond, Washington.
To create a model of Atlanta, the researchers scanned
in numerous historical photos of the city that had been
snapped from similar vantage points. The software is
designed to identify the 3D structures within the
image and break them down into a series of points. It
then compares the view in each one to work out why
some of these points are visible in some of the images
but not others. Was the building simply out of shot?
Or was the view of one building blocked by another?
The software continually rearranges the order of the
images taken from each vantage point until the
visibility patterns of all the buildings are consistent.
The result is that the images appear in time order,
allowing the researchers to construct and animate a
3D graphic of the city through which users can travel
backwards or forwards in time. The researchers plan
to extend the system to create models of other cities,
and to improve the software’s ability to recognize whether different photos are showing exactly the same scene.
This can be difficult as some cityscapes change so profoundly. Here is how they introduce the project on the 4D
Cities home page. “The research described here aims at building time-varying 3D models that can serve to pull
together large collections of images pertaining to the appearance, evolution, and events surrounding one place or
artifact over time, as exemplified by the 4D Cities project: the completely automatic construction of a 4D database
showing the evolution over time of a single city.” (www.cc.gatech.edu/~phlosoft).
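The ordering step described above can be illustrated with a toy sketch. This is my own minimal reconstruction of the idea, not the authors’ algorithm: given a binary point-visibility matrix, brute-force the image ordering in which each point’s visible span is contiguous, i.e. its visibility toggles the fewest times across the sequence.

```python
from itertools import permutations

def toggles(seq):
    """Count visible/hidden transitions along an ordering."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

def best_order(vis):
    """vis[p][i] = 1 if 3D point p is visible in image i.
    Return the image ordering that minimizes total visibility
    toggles; in a consistent (chronological) ordering each point
    appears and then disappears at most once. Brute force, so
    only feasible for a handful of images."""
    n = len(vis[0])
    def cost(order):
        return sum(toggles([row[i] for i in order]) for row in vis)
    return min(permutations(range(n)), key=cost)

# Four images given out of order; columns are the shuffled images.
vis = [
    [0, 1, 0, 1],  # point on a demolished building: early images only
    [1, 1, 1, 1],  # point on a landmark visible throughout
    [1, 0, 1, 0],  # point on a new building: late images only
]
order = best_order(vis)
```

In the recovered order the first row collapses to 1,1,0,0 and the last to 0,0,1,1 (one toggle each), which is exactly the "buildings appear and disappear once" consistency the researchers exploit.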
AMRADNET takes over MedView
American Radiologist Network (AMRADNET) has acquired ViewTec’s medical division. AMRADNET purchased
ViewTec’s medical business along with its MedView core product and technology. Through this acquisition,
existing MedView users worldwide will now be serviced by AMRADNET. Development of MedView, a DICOM
compatible high performance software product for digital medical imaging, took place at the University of Zurich
in cooperation with specialists from leading hospitals in Switzerland. http://www.viewtec.ch
GAF to distribute Intermap 3D data throughout Europe
Intermap Technologies Corp and GAF AG, an international geo-information technology company located in
Munich, Germany, have signed an agreement to allow GAF to immediately begin distributing Intermap’s high-resolution 3D digital elevation data and geometric images throughout Germany and the rest of Europe. GAF, a
private-sector enterprise, is part of the Telespazio group of companies. The company was founded in 1985 and
offers a broad range of geospatial applications, including geodata procurement, image processing, software
development, and consulting services. http://www.Intermap.com
Immersive Media continues expansion into commercial media
Immersive Media Corp. announced the first-ever use of 360-degree video for experiential marketing. In
collaboration with adidas and TAOW Productions, IMC captured the “premiere” sports event of this summer –
David Beckham’s first game with the Los Angeles Galaxy. This immersive video was launched by adidas on their
website. Soccer fans can experience the 360-degree, full-motion video and look around in every direction as if they
were behind the scenes. With regards to the city collection program, IMC is continuing to expand its GeoImmersive
database with additional cities being added in North America and Europe. The European expansion includes cities
in England, Germany, France, Spain and Italy. The GeoImmersive imagery is being licensed to commercial and
public organizations for promotional, planning and asset management purposes. To preview the GeoImmersive
Imagery visit http://demos.immersivemedia.com/onlinecities
Above are images from a drive down the famed 6th Street in Austin, Texas. The images were captured at a pause in the video
and show three different images from the 360° view available from the online demonstration.
ComputaMaps releases 3D urban models
ComputaMaps recently released 3D urban video models of several cities, including Toronto, Dubai, Baltimore,
Washington DC, Hong Kong (Aberdeen), and Durban, South Africa. The company manufactures multi-resolution
3D databases ranging in detail from the entire globe down to photo-realistic urban environments. These data can be
deployed in various applications such as interactive entertainment, broadcast weather and news graphics software.
Animations of these cities, derived from QuickBird satellite imagery, are available for download
at: http://www.computamaps.com/3d-visualization/3d-visualization.html
These images are captured from video animations of ComputaMaps flybys in Baltimore and Hong Kong
RabbitHoles acquires XYZ Imaging
After recently acquiring the hologram company XYZ Imaging, RabbitHoles announced that it is seeking
partnerships with “boundary-breaking artists to create limited edition RabbitHole 3D Motion Art and to be pioneers
of this new contemporary art medium”. The company advertises that “for 3D artists, working in RabbitHoles is the
first and only way for you to showcase in 3D on gallery walls and
in the homes and work-places of visionary collectors. For 2D
artists, RabbitHoles offers an invitation to experiment with what's
next.” The RabbitHole 3D Motion Art is a reflective technology
reliant on precisely placed halogen light to expose its full-color
3D artwork. This radical new art form springs from patented
digital technology that instructs red, green and blue pulse lasers to
expose a specially formulated film 300 times finer than ISO 300.
The company is currently focusing on gaining exposure in
selective, high-profile, artistic contexts that will yield highly-collectible limited edition series and affirm the medium as
revolutionary contemporary art. http://www.rabbitholes.com
Virtual Images Unlimited acquires Kodak lenticular technology and equipment
Virtual Images Unlimited, a division of IGH Solutions from Minnesota, announced that it has acquired the large
format lenticular manufacturing assets of Dynamic Images. The equipment purchased, which utilizes high-resolution photographic techniques to produce large-format lenticular prints in single-panel sizes up to 4 feet by 8 feet,
was originally developed by Kodak. The technology produces 3D and animated effects. The former Dynamic
Images purchased the technology from Kodak in 2001. The items purchased will allow VIU to continue selling
movie standees, posters, and bus shelters into the entertainment industry and other key markets. VIU will locate the
equipment at its parent company’s facility in Minnesota. http://www.viu.com
3D Systems brings out a hard plastic for rapid prototyping
3D Systems Corporation, a provider of 3D modeling, rapid prototyping and manufacturing solutions, announced
Accura Xtreme Plastic, a new material for stereolithography systems. This addition to the company’s family of
Accura materials facilitates the efficient design, development and manufacturing of products by enabling
production of early prototypes having improved durability and functionality. Accura Xtreme Plastic is now
available for beta testing by qualified customers. An extremely tough and versatile material, Accura Xtreme Plastic
is designed for functional assemblies that demand durability. Due to its high elongation and moderate modulus,
Accura Xtreme Plastic is ideally suited for many rapid prototyping and rapid manufacturing applications. Accura
Xtreme Plastic’s properties closely mimic those found in molded ABS and polypropylene, which are major
production plastics. Accura Xtreme Plastic also features lower viscosity and higher processing speeds than other
materials in the marketplace, resulting in easy operation, fast part creation, and quick cleaning and finishing with
less waste. http://www.3dsystems.com
HumanEyes Technologies demonstrates lenticular printing with UV printers
HumanEyes Technologies demonstrated its 3D and lenticular production on the newest UV flatbed inkjet printers at
its partners’ booths throughout Graph Expo, held in Chicago, from September 9-12. HumanEyes software allows
printers to take advantage of the newest technology to produce high quality lenticular and 3D effects on digital
presses from HP, Gandinnovations, Fujifilm Graphic Systems and Océ. The latest hardware developments are
making the production of lenticular easier, faster and of the highest quality. New processes and materials have also
considerably reduced the cost of specialty print production. The next generation of UV curable ink flatbed presses
offer very low drop volume, resulting in high resolution, closely comparable to that of photo quality desktop inkjets
and high-end plotters. Also, an impressive geometric accuracy of drop placement secures highly accurate printing.
These features produce very high quality lenticular printing. http://www.humaneyes.com
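As background on how lenticular printing produces its 3D effect in general (a generic sketch of column interlacing, not HumanEyes’ actual software): the N source views are sliced into columns, and column j of the print is taken from view j mod N, so each lenticule presents a different view toward each eye position.

```python
import numpy as np

def interlace(views):
    """Column-interlace N views for printing behind a lenticular sheet.
    views: list of HxW arrays of the same shape and dtype. Output
    column j is taken from view (j % N), so each lenticule, covering
    N adjacent print columns, refracts a different view to each eye."""
    n = len(views)
    h, w = views[0].shape
    out = np.empty((h, w), dtype=views[0].dtype)
    for j in range(w):
        out[:, j] = views[j % n][:, j]
    return out

# Three flat test "views" with distinct gray levels, 2x6 pixels each.
views = [np.full((2, 6), v, dtype=np.uint8) for v in (10, 20, 30)]
strip = interlace(views)
```

In a real workflow the interlacing pitch must be calibrated to the lens pitch of the sheet and the printer’s resolution, which is exactly where the drop-placement accuracy mentioned above matters.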
National Graphics partners with Sports Image International
Sports Image International announced the launch of a new line of three-dimensional sports lenticular images
featuring licensed classic images from Major League Baseball and The National Hockey League. By combining the
latest technology in photo enhancement and National Graphics’ lenticular imaging with memorable sports
moments, SII has created a new class of collectible that is unique to the sports memorabilia market. These
collectibles are now available online at http://www.sportsimageintl.com, where sports fans can experience and
purchase the three-dimensional lenticular images. They are also available at both Yankee and Shea Stadium gift
shops. The images are produced by National Graphics, a pioneer in lenticular imaging that claims to provide the
highest quality lithographic lenticular products in the world. http://www.extremevision.com
3D Center of Art and Photography to exhibit French graffiti art
The 3D Center of Art and Photography of Portland, Oregon will exhibit “Urban Spaces” from September 13
through October 28. “Urban Spaces” is an exhibition of 11 stereoscopic images from the series “Kunstfabrik”
(2000-2007) by Ekkehart Rautenstrauch of Nantes, France. Not far from the center of Nantes stands an old deserted
foundry, out of service since the 1980s. Since 2000 Rautenstrauch has been documenting the transformation of the
space by a host of taggers, graffiti artists and others wishing to leave their mark on the space. As the artist
describes, “I quite steadily followed the pictural transformation, the continuous fadedness and every new
expression testifying to an alive and constant creation. In my own artistic work the freedom of gesture, the rhythm
of bodies, the writing extended into space, have always been essential elements.” Rautenstrauch’s works are
presented in specially made folding viewers called “folioscopes” (designed by Sylvain Arnoux) which hang on the
wall at eye level, allowing the viewer to view the stereoscopic images. http://www.3dcenter.us
University of Weimar develops unsynchronized 4D barcodes
Researchers from the University of Weimar recently developed a novel technique for optical data transfer between
public displays and mobile devices based on unsynchronized 4D barcodes. In a project entitled PhoneGuide, the
researchers assumed that no direct (electromagnetic or other) connection between two devices can exist. Time-multiplexed 2D color barcodes are displayed on
screens and recorded with camera equipped mobile
phones. This allows for the transmission of
information optically between two devices. This
approach maximizes the data throughput and the
robustness of the barcode recognition, while no
immediate synchronization exists. Although the
transfer rate is much smaller than can be achieved
with electromagnetic techniques (e.g., Bluetooth or
WiFi), they envision applying such a technique
wherever no direct connection is available. 4D barcodes can, for instance, be integrated into public web-pages,
movie sequences, advertisement presentations or information displays, and they encode and transmit more
information than possible with single 2D or 3D barcodes. http://www.uni-weimar.de/medien/ar/research.php
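The scheme lends itself to a small sketch. What follows is a hypothetical minimal encoder of my own, not the PhoneGuide implementation: each frame is a grid of colored cells carrying 3 bits per cell (one bit per RGB channel), and a few cells hold a frame counter so an unsynchronized receiver can discard duplicate frames and detect missed ones.

```python
# Hypothetical sketch of a time-multiplexed ("4D") color barcode encoder.
# Each frame is a ROWS x COLS grid; every cell shows one of 8 saturated
# colors, i.e. 3 bits per cell (one bit per RGB channel).
ROWS, COLS = 4, 4
HEADER_CELLS = 2                              # carry a 6-bit frame counter
PAYLOAD_BITS = (ROWS * COLS - HEADER_CELLS) * 3

def to_bits(data: bytes):
    """Expand bytes into a flat list of bits, LSB first."""
    return [(b >> i) & 1 for b in data for i in range(8)]

def encode(data: bytes):
    """Split data into frames; each frame is a list of (r, g, b) cells
    with channel values 0/255. The first two cells encode the frame
    index so an unsynchronized camera can drop repeats and spot gaps."""
    bits = to_bits(data)
    frames = []
    for idx in range(0, len(bits), PAYLOAD_BITS):
        chunk = bits[idx:idx + PAYLOAD_BITS]
        chunk += [0] * (PAYLOAD_BITS - len(chunk))   # pad the last frame
        counter = len(frames) % 64                    # 6-bit wrap-around
        header = [(counter >> i) & 1 for i in range(6)]
        cells = header + chunk
        frame = [tuple(255 * b for b in cells[i:i + 3])
                 for i in range(0, len(cells), 3)]
        frames.append(frame)
    return frames

frames = encode(b"UN human rights")
```

A decoder on the phone would sample the grid from each camera frame, read the counter cells first, and only accept a frame whose counter advances, which is how the scheme survives the lack of synchronization between display refresh and camera capture.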
RTT updates software for high-speed development and visualization
RTT unveiled new versions of its core products – RTT DeltaGen 7.0 and RTT Portal 3.0. With a range of new
functions, both software solutions deliver not only a high degree of visualization quality but also several
possibilities for process acceleration and workflow efficiency. The RTT DeltaGen 7.0 software suite enables
extremely realistic, professional 3D real-time visualization. One of the highlights is the freshly designed graphical
user interface. The new color system includes smart icons and facilitates the creation of 3D models and scenes.
Another key element of the recent version is the novel functionality of assembly handling and the direct connection
to RTT Portal libraries. The latter feature enables users to directly access object and materials libraries, such as a
wheel rim database for cars, from RTT DeltaGen to RTT Portal 3.0. Assembly handling allows 3D scenes that have
been divided into assemblies to be loaded into RTT DeltaGen 7.0 to be linked to 3D models or unloaded. Single
work steps can subsequently be accomplished simultaneously and independently from each other by various users.
http://www.rtt.ag
nVidia demonstrates high-speed renderer
nVidia demonstrated its next-generation, near-real-time, high-quality rendering product with performance
improvements capable of re-lighting 60 frames of a complex scene in 60 seconds. Along with a new release of
nVidia Gelato GPU-accelerated software renderer and the announcement of the nVidia Quadro Plex Visual
Computing System (VCS) Model S4 1U graphics server, this technology demonstration shows nVidia’s expertise
in the field of high-quality rendering. High-quality frames, like those used in film and other applications
where visual quality is paramount, have been slow to be integrated into an interactive workflow because they take
too long to render. nVidia previewed the new technology at the SIGGRAPH 2007 conference that will be part of its
next-generation renderer, harnessing the full power of the nVidia GPU to bring a truly interactive workflow to
relighting high-quality scenes in about a second. It can also be used for high-speed final renders of broadcast-quality frames. Running on the latest nVidia GPU architecture, this technology can achieve rendering performance
improvements of more than 100 times that of traditional software rendering solutions, the company says. By using
the GPU to augment CPU-based rendering performance, professional-quality interactive final-frame rendering and
interactive relighting are now possible, accelerating production workflows, improving review and approval cycles,
and reducing overall production schedules. http://www.nVidia.com
e frontier launches Poser Pro and teams up with N-Sided
e frontier, Inc. announced Poser Pro, a high end addition to its Poser product line. Geared toward a multitude of
production environments in both the 2D and 3D realms, Poser Pro offers the features and functionality of Poser 7
plus professional-level application integration, a 64-bit render engine, and network rendering support. Poser Pro
now supports the COLLADA exchange format for content production, pre-visualization, gaming and film
production, and offers the ability to fully host Poser scenes in professional applications such as Maxon’s CINEMA
4D, Autodesk’s 3ds Max and Maya, and Newtek’s Lightwave. Other features include increased support for Adobe
Photoshop CS3 Extended (via COLLADA) and export of HDR imagery. In addition, N-Sided will provide
“QUIDAM for Poser”, based on its QUIDAM character creation software, which will be bundled exclusively with Poser Pro. QUIDAM features the ability to import and export Poser character files, bringing Poser content and animations into professionals’ workflows. http://www.e-frontier.com/go/poserpro
TI incorporates DDD software in 3D HDTV
DDD announced that Texas Instruments demonstrated high definition 3D video using DDD’s TriDef 3D
Experience software in conjunction with TI’s DLP 3D HDTV at the IFA consumer electronics conference and trade
show in Berlin between August 31st and September 5th. TI recently announced the world’s first 3D DLP HDTV based on TI’s all-digital DLP imaging device used in the latest HDTVs. The 3D DLP HDTV uses active 3D glasses
to bring games and movies to life, jumping off the high definition screen into the viewer’s home theater. The 3D
enabled feature will be offered by DLP HDTV manufacturers including Samsung and Mitsubishi. The TriDef 3D
Experience is the latest consumer 3D content solution from DDD that enables a full range of popular entertainment
from PC games to the latest high definition 3D movies. http://www.DDD.com
http://www.veritasetvisus.com
37
Veritas et Visus
3rd Dimension
September 2007
Samsung incorporates DDD software into latest mobile phone
DDD Group, the 3D software and content company, announced that Samsung Electronics has launched a 3D
mobile telephone in Korea incorporating the DDD Mobile software library under license from DDD. The Samsung
SCH-B710 3D handset is already available in selected SK Telecom retail stores in South Korea. The SCH-B710 is
a CDMA handset capable of receiving both the satellite (S-DMB) and the terrestrial (T-DMB) mobile television
channels that are presently available in South Korea. Included in the handset is a 3D LCD display that can be
switched between normal 2D display mode and “glasses-free” stereo 3D mode. Using DDD’s solution, standard 2D
mobile TV channels can be automatically converted to stereo 3D as they are received by the handset. The license
agreement with Samsung follows the completion of the £500,000 development agreement that was announced in
mid 2005. DDD has also granted Samsung exclusive rights to the real time 2D to 3D conversion feature of DDD
Mobile for use on mobile telephones made for sale in the Korean market until June 2009. http://www.DDD.com
DDD launches TriDef 3D Experience
DDD has brought out the TriDef 3D Experience - a comprehensive package of software to support a wide range of
stereoscopic 3D display systems, including Samsung’s DLP 3D HDTVs. It can play a wide range of 2D and 3D movies and photos, including open format files (.avi, .mpg, .jpg, etc.); explore Google Earth in 3D; play 3D games; enable third-party applications to work on 3D displays; and play current 2D DVDs in 3D. The TriDef Experience
includes DDD’s 2D-to-3D conversion software enabling existing 2D photos, movies and DVDs to be enjoyed in
dynamic 3D. A free version is available at http://www.tridef.com/download/latest.html.
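DDD does not disclose how TriDef's 2D-to-3D conversion works. The general idea behind depth-image-based rendering, though, can be sketched: estimate a per-pixel depth map, then synthesize left and right views by shifting pixels horizontally in proportion to depth. The shift amount and array shapes below are illustrative, not DDD values:

```python
import numpy as np

def render_stereo_pair(image, depth, max_disparity=8):
    """Shift each pixel horizontally by a disparity proportional to depth.

    image: (H, W, 3) uint8; depth: (H, W) floats in [0, 1], 1.0 = nearest.
    max_disparity is an illustrative parameter, not a DDD value.
    """
    h, w, _ = image.shape
    left = np.zeros_like(image)
    right = np.zeros_like(image)
    disparity = (depth * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            # nearer pixels are displaced in opposite directions per eye
            if x + d < w:
                left[y, x + d] = image[y, x]
            if x - d >= 0:
                right[y, x - d] = image[y, x]
    return left, right

# Toy frame at uniform mid-depth: both views are uniform shifts of the input
img = np.full((4, 16, 3), 128, dtype=np.uint8)
flat_depth = np.full((4, 16), 0.5)
left, right = render_stereo_pair(img, flat_depth, max_disparity=4)
```

Real converters also have to fill the disocclusion holes that the shifts leave behind; the zeros in the toy output are exactly those holes.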
Barco and Medicsight team up on colon imaging software
Barco and Medicsight, a developer of computer-aided detection (CAD) technologies, have signed a partnership
agreement to incorporate Medicsight’s “ColonCAD” image analysis software tools within Barco’s “Voxar 3D
ColonMetrix” virtual colonography application. By integrating Medicsight’s CAD function, Barco further expands
the functionality of its ColonMetrix software solution, enabling
faster and more efficient recognition of suspect lesions during
virtual colonography. Medicsight's ColonCAD is an image
analysis software tool designed to be used with CT
colonography (virtual colonoscopy) scans. It has been
specifically designed to support the detection and segmentation
of abnormalities within the colon that may potentially be
adenomatous polyps. ColonCAD can be seamlessly integrated
within advanced 3D visualization and PACS platforms of
industry leading imaging equipment partners. Barco's Voxar
3D ColonMetrix is a complete virtual colonoscopy workflow
and reporting solution that allows radiologists to interpret a CT
colonography study and generate a report typically within 10
minutes. http://www.medicsight.com
Intuitive Surgical selects Christie for non-invasive surgery
Christie was selected by Intuitive Surgical, a pioneer in surgical robotics, to help display high definition 3D video
images generated by the company’s da Vinci surgical system, which is designed to enable surgeons to perform
complex surgery using a minimally invasive approach. In demonstrations at trade shows and professional
conferences around the country, Intuitive Surgical successfully harnessed the power of a pair of Christie DS+5K 3-Chip DLP digital projectors to render highly accurate 3D images of surgical procedures in passive stereo, as seen by surgeons operating the da Vinci surgical system. According to Intuitive Surgical, prior to the da Vinci system, only highly skilled surgeons could routinely attempt complex minimally invasive surgery. The Christie DS+5K projector offers 6,500 ANSI lumens, native 1400x1050 resolution, and a 1600-2000:1 contrast ratio. It features 3-chip DLP technology and the ability to display standard and high-definition video. http://www.christiedigital.com
US hospital first to use Viking’s 3Di visualization system in pediatric urological surgery
Viking Systems, a designer and manufacturer of laparoscopic vision systems for use in minimally invasive surgical
(MIS) procedures, announced that Dr. Rama Jayanthi and his surgical team at the Columbus Children’s Hospital in Columbus, Ohio, successfully performed the first intravesical minimally invasive ureteral reimplantation using Viking
Systems’ 3Di Vision System. Dr. Jayanthi used a 5 mm 3D laparoscope that enabled him to see inside the bladder
and perform this complex procedure. “This procedure is especially delicate and requires surgical precision and
close attention to detail,” said Dr. Jayanthi. “The 3D high definition view we had during the procedure was
incredibly precise and we look forward to working with this technology more and more. The 3D view certainly
makes fine suturing easier and more accurate.” The 3Di Vision System manufactured by Viking Systems delivers a
magnified, high-resolution 3D image that allows the surgeon to visualize depth in the underlying anatomical
structures and tissue during complex MIS. The 3D images are viewed by the surgeon and surgical team via
Viking’s Personal Head Display (PHD). The PHD places the 3D image directly before the surgeon’s eyes,
providing a high definition immersive view of the surgical field. The Viking 3Di Vision System also delivers an
information management solution known as Infomatix, which provides immediate, picture-in-picture access to
additional surgical information through voice activation. This critical information can be provided simultaneously
with the surgical image on the surgeon’s PHD. http://www.vikingsystems.com
DAZ 3D announces Carrara Version 6 – “The Next Dimension in 3D Art”
DAZ 3D recently announced the upcoming release of the latest version of the popular 3D software, Carrara. This
new version will allow users to choose from a large array of tools while exploring new dimensions in 3D creation.
Carrara 6 provides 3D figure posing and animation, modeling,
environment creation, and rendering tools within a single application. The
extensive support for DAZ 3D content includes handling of morph
targets, the conversion of Surface Materials and complete Rigging, and
Enhanced Remote Control, which allows users control over multiple
translation and transform dials simultaneously. Notable upgrades include
Non-linear Animation, giving users the ability to create clips of animation
that can be reused and combined on multiple tracks of animation;
Dynamic Hair that allows artists to style, cut, brush, and drape the hair;
Displacement Modeling where the user can paint detail on a model using
free-form brush tools; and Symmetrical Modeling that allows content
creators to edit both sides of a symmetrical object at the same time using
a variety of editing tools. Carrara 6 was released for sale in late August at $249 for the standard edition, while Carrara 6 Pro will have an MSRP of
$549. http://www.DAZ3D.com
New 3D format approved by Ecma International
On June 28, 2007, at its General Assembly meeting in Prien am Chiemsee, Germany, the new 4th Edition of the
Universal 3D (U3D) File Format (ECMA-363) was approved. In the new edition, the overall consistency of the
format has been improved, and the free-form curve and surface specification, including the specification of
NURBS, has been added. In addition, the non-normative reference source code, available at SourceForge.net, has
been updated accordingly. “The Universal 3D (U3D) File Format Standard (ECMA-363) is a unique 3D
visualization format being an open standard and having an unsurpassed installed 3D reader base due to the massive
deployment of Adobe Reader,” said Lutz Kettner, Director Geometry Product Development, mental images GmbH,
and Co-Editor of Ecma TC43. “3D visualization is finally becoming available to everyone. The U3D File Format
specification and standardization is an ongoing process in which features such as mesh compression, hierarchical
surface descriptions, and generalized shading will be addressed in the near future to satisfy even the most
demanding visualization needs.” http://www.ecma-international.org
Dassault Systèmes and Seemage announce strategic partnership
Dassault Systèmes and Seemage announced their intention to become strategic partners. The partnership will
leverage the companies’ respective strengths to grow their presence in the 3D product documentation market. The
partnership will provide a seamless link between product documentation and PLM product-related data. For
companies, this eliminates all disparities between product-related IP and any required product documentation, such
as animations, graphics and illustrations for training, maintenance manuals and service procedures. Working
together, the companies will permit the exploitation of 3D as a universal media. Seemage users can exploit 3D data
from any 3D CAD or enterprise system and create content from this for any desired output in formats including
Microsoft Office documents, PDF and HTML. Seemage’s XML-based architecture integrates seamlessly with
enterprise systems. http://www.seemage.com
NaturalMotion tackles football video games with “Backbreaker”
NaturalMotion, the company behind the euphoria
animation technology featured in “Grand Theft Auto
IV” and “Star Wars: The Force Unleashed”, announced
Backbreaker, an American football game developed
exclusively for next-generation consoles. The title is
slated for a 2008 release. “Backbreaker is the first football game with truly interactive tackles. By utilizing our motion synthesis engine euphoria, players will never make the same tackle twice, giving them an intensely unique experience every time they play the game,” said NaturalMotion CEO Torsten Reil.
http://www.backbreakergame.com
Sony and mental images join up on visualization workflows
Sony and mental images announced a joint project that will allow the Academy Award-winning mental ray high-end rendering software to operate with Sony’s new prototype Cell Computing Board in a range of visualization
workflows that feature Cell Broadband Engine (Cell/B.E.) technology. The Cell/B.E. is a high-performance
microprocessor jointly developed by Sony Corporation, Sony Computer Entertainment Inc., Toshiba Corporation,
and IBM Corporation. According to the companies, the technology’s innovative architecture is particularly well-suited for highly parallelized, compute-intensive tasks. The “Cell Computing Board”, developed by Sony
Corporation’s B2B Solutions Business Group, incorporates the high-performance Cell/B.E. microprocessor and
RSX graphics processor to deliver high computational performance capable of handling large amounts of data at
high speed while also achieving reductions in size and energy consumption. An essential element of the project will
be the support of mental images’ new universal MetaSL shading language on the Cell Computing Board platform.
A large library of essential shaders will be provided. In addition, MetaSL shaders can easily be created with
“mental mill”, the graphical shader creation and development technology from mental images. The companies
expect to demonstrate their results in the second half of 2008. http://www.mentalimages.com
Autodesk takes over Skymatter
Autodesk announced that it has signed a definitive agreement to acquire substantially all the assets of Skymatter
Limited, the developer of Mudbox 3D modeling software. This acquisition will augment Autodesk’s offering for
the film, television and game market segments, while providing additional growth opportunities for other design
disciplines. Skymatter is a privately held New Zealand-based company. Skymatter’s Mudbox software offers a new
paradigm of 3D brush-based modeling, allowing users to sculpt organic shapes in 3D space with brush-like tools.
Appealing to both traditional sculptors and digital artists, Mudbox provides a simple and fast toolset for creative
modeling, prototyping and detailing. 3D assets created in Mudbox are often imported into Autodesk 3ds Max and
Autodesk Maya software for texturing, rigging, animation and final rendering. http://www.autodesk.com/mudbox.
Luxology launches on-line hub for 3D community
Luxology announced Luxology TV, a new online hub that allows the 3D community to exchange and view high-resolution video clips on Luxology’s website. Luxology TV enables anyone to enhance their 3D learning
experience by searching, selecting and immediately watching videos on a variety of subjects such as modeling,
rendering, painting and sculpting. Luxology TV is structured to quickly grow into a repository of training and
presentation material on modo and other topics pertaining to 3D content creation. The majority of videos are free.
Commercial professional training materials from Luxology and third-party vendors will also be available for
purchase. Luxology TV is now live and can be experienced by visiting http://www.luxology.com/training/.
Lockheed Martin acquires 3Dsolve
Lockheed Martin Corporation announced it has acquired 3Dsolve, Inc. 3Dsolve is a privately held company that
creates simulation-based learning solutions for government, military and corporate applications. The company’s
software tools assist clients with collaborative training utilizing interactive 3D graphics. 3Dsolve’s core
competencies include multi-media, software engineering, digital artwork, instructional design and project
management for use in state-of-the-art simulation learning solutions. http://www.lockheedmartin.com
Dan Lejerskar presents his vision of the future of 3D
Real life and digital simulation will merge by 2011, producing a mixed-reality environment that will change the
way consumers communicate, interact and conduct commerce, according to futurist Dan Lejerskar, chairman of
EON Reality, the interactive 3D software provider. “What once was imagined soon will be experienced,” Lejerskar
explained. “The technology convergence of virtual reality, artificial intelligence, Web and search, and digital
content means that people can experience more in their daily lives by blurring the distinction between their physical
existence and digital reality.” As evidence of this trend, he points to the realization of commercially viable
applications for 3D interactive virtual reality technology – as well as the position of industry thought leaders
championing the advancement of such experiences. Heavyweights Google and Microsoft are pushing this trend
toward the manifestation of the 3D Internet, while computer and video game developers are whetting consumers'
appetites for 3D experiences with new technologies, such as Nintendo’s Wii. Hollywood studios and amusement
parks also are incorporating 3D interactive virtual reality elements into their offerings. “We're witnessing the
creation of an environment in which visualization companies, industry, academia and the public sector can meet
and exchange knowledge, experiences and ideas,” Lejerskar said. “Within three to four years, we’ll see radical
changes in how we shop, learn and communicate with business associates, friends and family. Consumers crave
user-generated experiences that combine virtual reality technology with physical location-based events to produce
totally immersive 3D interactive experiences.” http://www.eonreality.com.
EON Reality brings out Visualizer for idiot-proof 3D content creation
EON Reality unveiled its EON Visualizer at the SIGGRAPH Technology Conference. EON Visualizer is a 3D
interactive authoring tool that allows non-technical business users to generate 3D worlds for Web, print, video and
real-time formats. An off-the-shelf tool, EON Visualizer makes it easy for anyone with a computer and Internet
access to create realistic, interactive, 3D content. According to Gartner Research, by 2011, 1.6 billion out of a total
2 billion Internet users will actively participate in virtual worlds. However, the knowledge necessary to create these
worlds today is limited to only the most technically sophisticated. EON Visualizer is founded on EON Reality’s
new kernel, Dali, which improves scalability and flexibility. Even users without programming knowledge of 3D software can use EON Visualizer, the company says, to create 3D content for role-playing games, social network communities, business marketing and sales presentations, and education and training. A user with programming skills will be able to add advanced features to EON Visualizer. The following
are some of EON Visualizer’s key features: an intuitive interface with drag-and-drop tools; a Web-based 3D object library with 12,000 3D objects and products, more than 12 showrooms, and 360-degree landscape settings and image backdrops to create augmented realities; and visual (non-text) search for objects and components through EON I-Search functionality (a Google-supported search engine), which allows users to search additional EON Reality-supported content available on the Internet. EON Visualizer will ship in November 2007. http://www.EONReality.com
S3D-Basics+ Conference
August 28-29, 2007, Berlin, Germany
Phillip Hill reports on presentations from Blue Frames Media, Advanced Micro Devices,
BrainLAB, Infitec, and Spatial View
Florian Maier of Blue Frames Media of Germany presented on “3D recording devices for two-channel or multi-channel applications”. The talk covered research, one-camera 3D recording devices, and multi-camera 3D recording devices. The motivation for the company is the new 3D wave due to digital possibilities and huge demand for 3D content. But Maier said that there was a lack of efficient and exact
recording devices and a lack of knowledge about 3D recording
parameters. The aims of the research work are to analyze the best 3D
parameters depending on the set-up; development of a PC program; and
the development of photographic recording devices. The company has
carried out deep studies of 3D basics: physiological limits, physiological
problems (3D sickness), existing recording and display techniques. The
program for the calculation of 3D parameters gives the best interaxial
distance between cameras to avoid 3D sickness and the best adaptation
to different display techniques.
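The report does not give the formulas behind Blue Frames' parameter program. As a rough point of reference, practitioners often use generic heuristics such as the "1/30 rule" for the stereo base and a small parallax budget relative to screen width; the sketch below uses those common rules of thumb, not Maier's method, and all numbers are illustrative:

```python
def stereo_base(nearest_object_m):
    """Generic '1/30 rule' of thumb: the interaxial distance is roughly
    1/30 of the camera-to-nearest-object distance. This is a common
    heuristic, not Blue Frames Media's published method."""
    return nearest_object_m / 30.0

def max_screen_parallax_m(screen_width_m, budget=0.03):
    """Generic comfort budget: keep on-screen parallax within a few
    percent of screen width (the 3% figure here is illustrative)."""
    return screen_width_m * budget

base_m = stereo_base(3.0)                    # nearest object at 3 m
parallax_cap_m = max_screen_parallax_m(2.0)  # 2 m wide screen
```

A production tool such as the one described would refine this with lens focal length, sensor size, convergence distance, and the target display technique, which is precisely the knowledge gap Maier identified.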
The 3D photographic recording devices that the company has developed
fall into two categories: a one-camera system for static objects, and a
multi-camera system for dynamic objects. The one-camera recording
system gives exact and reproducible results, it is very efficient due to
automation, and is designed for special purposes such as macro
photography or lifetime exposure. A small version will be available
soon. The multi-camera system minimizes interaxial distance and gives
a larger field of depth. Close-ups of dynamic objects become feasible
with no loss of picture quality, and normal digital cameras (both photos
and video) can be used.
One-camera 3D recording system
Multi-camera system minimizes interaxial distance
>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<
Advanced Micro Devices (AMD) presented its ATI FireGL workstation graphics with unified shaders. AMD
claims that its current ATI FireGL products represent world leadership: first 90 nm graphics processor unit; first
1GB frame buffer; first 10-bit display pipeline architecture; first two dual-link outputs in mid-range; first dual-link
output in entry-level; first 256MB memory configuration in entry level. The product announcement was made at
SIGGRAPH 2007 – a top-to-bottom product line based on scalable, unified shader architecture. New to the range is
the first 65 nm graphics processor, the first 2GB frame buffer, and the first 512MB card in the mid-range level.
It targets professional users of CAD, digital content creation, medical
imaging and visual simulation applications. It maximizes graphics
throughput by dynamically allocating resources as needed. It
instinctively configures hardware and software for optimal
performance for certified applications. It enables real-time interaction
with larger datasets and more complex models and scenes – 2GB
graphics memory. It supports multiple 3D accelerators in a single
system for up to quad display output. Finally, it delivers hardware
acceleration of DirectX 10 and OpenGL 2.1 without impacting CPU
performance.
>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<
Jens Witte of BrainLAB of Munich, Germany, gave a brief talk on
“Application of stereoscopic 3D technology for surgical treatment planning”. He outlined the background to BrainLAB: 2,500 hospitals use BrainLAB software, with customers in more than 70 countries. It has 940 employees worldwide, with 210 R&D engineers and 160 service engineers. BrainLAB group revenue last year was €154
million. Activities include image guided surgery solutions;
radiotherapy solutions; integrated operating room solutions; treatment planning; vascular surgery planning;
vascular surgery intraoperative; cranio-maxillofacial reconstruction; and education and training. Witte pointed out that medical end-user expectations for stereo displays were: low glare; a single-user solution using maximum resolution for display of diagnostic data; multi-viewer solutions for education and teaching; seamless transition of stereo zones; certification for medical use (IEC 60601); certification for diagnostic use (color calibration); and switchable 2D/3D displays for intraoperative use. For graphics cards/drivers, the criteria are plug and display
support of 3D display hardware; certification for medical use (display of full gray scale); and graphic memory of
more than 512 MB.
>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<
Infitec presented on its new product line – Stereodigital. Current Infitec stereo projection works only with digital projectors, as color shifts induced by the filters must be compensated by elaborate real-time signal-processing electronics for general-purpose applications. The high quality of Infitec stereo imaging is accessible only via relatively costly stereo units. With the new Stereodigital, Infitec stereo projection works in full quality without elaborate real-time signal-processing electronics, as images generated by digital photography are color corrected by a software tool, which also makes all necessary geometry corrections of the stereo image pair to achieve almost perfect stereo imaging. Stereo imaging becomes much less costly as there is no need for a real-time processing unit.
Modules offered in the Stereodigital product line are: Infitec stereo projection unit without built-in color correction;
Digital stereo camera; software for stereo image correction; laptop for mobile presentation plus dual VGA splitter;
and Infitec color correction for real-time signal processing for general purpose applications.
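Infitec's filters pass different wavelength triplets to each eye, which is what shifts the colors. The offline correction described above can be pictured as a per-pixel color transform; the 3x3 matrix below is purely illustrative (Infitec's actual calibration data is not given in the summary):

```python
import numpy as np

# Illustrative 3x3 color-correction matrix, not Infitec's calibration.
# Each output channel mixes the input channels to compensate the color
# shift introduced by the wavelength-multiplexing filters.
CORRECTION = np.array([
    [1.10, -0.05, -0.05],
    [-0.05, 1.10, -0.05],
    [-0.05, -0.05, 1.10],
])

def color_correct(image):
    """Apply the matrix to an (H, W, 3) float image and clip to [0, 1]."""
    return np.clip(image @ CORRECTION.T, 0.0, 1.0)

# A neutral gray is left unchanged because each matrix row sums to 1.0
gray = np.full((2, 2, 3), 0.5)
out = color_correct(gray)
```

Doing this (plus geometry warping) as a one-time preprocessing step on the stereo pair is what removes the need for the real-time processing unit.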
>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<
Thomas Oehmichen gave a brief description of some of the developments of Spatial View. He gave a business
update and outlook before discussing digital content creation. He then talked about Spatial View’s Cinema 4D +
SVI Plug-in 2.0 for stereoscopic rendering, live editing in (S)3D, network rendering, and external render engines.
He went on to give an example of a gaming bundle with partner VisuMotion using Spatial View’s 19-inch multi-user display for supported games including “World of WarCraft”, “Counterstrike: Source” and “Need for Speed:
Most Wanted”. He finished by pointing out that the company’s SVI Flash 3D Enhancer opens the 3rd dimension to
vector-based 2D content - a new way to configure flat art into dynamic and full 3D.
Society for Information Display 2007 Symposium
May 20-25, Long Beach, California
In this second report from the principal event of the year, Phillip Hill covers presentations from
Samsung SDI, Communications Research Centre Canada, Philips Research Laboratories,
and SeeReal Technologies
28.3: Dense Disparity Map Calculation from Color Stereo Images using Edge Information
Ja Seung Ku, Hui Nam, Chan Young Park, Beom Shik Kim, Yeon-Gon Mo, Hyoung Wook Jang, Hye-Dong Kim,
and Ho Kyoon Chung
Samsung SDI, Korea
Samsung has developed a stereo-corresponding algorithm using edge information. The conventional stereo-corresponding algorithm, SAD (Sum of Absolute Difference), gives good quality in textured regions, but produces false matching in non-textured regions. In order to reduce the false matching in non-textured regions, a new cost function, defined as SED (Sum of Edge Difference) based on global optimization, is added to the cost function of SAD. They evaluated the algorithm against the Middlebury benchmark database. The experimental results show that the algorithm successfully produces piecewise-smooth disparity maps while reducing the false matching in non-textured regions. Moreover, the algorithm reaches the best quality faster than SAD.
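The combined-cost idea can be sketched in miniature. Below, the SED term is approximated by a horizontal gradient-magnitude difference and the match is a simple per-pixel winner-takes-all; Samsung's exact cost function, window, and global optimization are not detailed in the summary, so this is only an illustration of combining intensity and edge costs:

```python
import numpy as np

def disparity_map(left, right, max_disp=4, lam=0.5):
    """Matching with a combined cost: SAD on intensity plus an
    edge-difference term (a stand-in for the paper's SED). The
    weight lam is an illustrative parameter."""
    h, w = left.shape
    # horizontal gradient magnitude as a crude edge map
    edge_l = np.abs(np.diff(left.astype(float), axis=1, prepend=left[:, :1]))
    edge_r = np.abs(np.diff(right.astype(float), axis=1, prepend=right[:, :1]))
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            best_cost, best_d = float("inf"), 0
            for d in range(min(max_disp, x) + 1):
                sad = abs(float(left[y, x]) - float(right[y, x - d]))
                sed = abs(edge_l[y, x] - edge_r[y, x - d])
                cost = sad + lam * sed
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

# Toy pair: the right view is the left view's intensity ramp offset by 2,
# so the true disparity is 2 everywhere away from the left border.
left_img = np.tile(np.arange(16.0), (4, 1))
right_img = left_img + 2.0
disp = disparity_map(left_img, right_img)
```

In the paper's setting the edge term is what stabilizes the match where intensity alone is ambiguous (non-textured regions).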
32.1: Invited Paper: Human Stereoscopic Vision: Research Applications for 3D-TV
Wa James Tam
Communications Research Centre Canada, Ottawa, Canada
The Communications Research Centre (CRC)
Canada has been conducting research on 3D-TV
and related stereoscopic technologies since 1995.
Three areas of CRC’s research on human
stereoscopic vision and its application to 3D-TV
are highlighted. The author presents work on the
use of inter-ocular masking to reduce bandwidth
requirements, without sacrificing high image
quality. Secondly, he presents experimental
results that show the effect of stereoscopic objects
in motion on visual comfort. Thirdly, he presents
studies to illustrate how the tendency of the
human visuo-cognitive system to correct or fill in
missing visual information can be used to
generate effective stereoscopic images from
sparse depth maps.
Figure 1: An example of a surrogate depth map is shown at the
bottom right. The original source image and its typical depth
map are shown at the top and on the bottom left, respectively.
In the search for ways to generate depth maps, the researcher investigated the possibility of creating depth maps
from the pictorial depth information contained in standard 2D images, such as from blur information arising from
the limited depth of field of a camera lens. For this example, blur information is useful if one assumes that blurred
objects are at a farther distance than sharp objects and, therefore, a depth map can be created from the blur
information contained in the original 2D images. Along the way, CRC discovered that depth maps do not have to
contain dense information to be effective. CRC found that depth maps containing sparse depth information, that is,
depth information concentrated mainly at edges and object boundaries in the original 2D images, are sufficient to
yield an enhanced sensation of depth, compared to a corresponding monoscopic reference. CRC named these maps “surrogate depth maps”. An example is shown in Figure 1.
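CRC's construction of surrogate depth maps is not spelled out in the summary, but the core idea (depth information concentrated at edges and object boundaries) can be sketched: detect edges in the 2D image and keep a depth cue only there. The gradient-based cue and threshold below are placeholders for whatever cue (e.g. blur) CRC actually derived:

```python
import numpy as np

def surrogate_depth(image, threshold=10.0):
    """Sketch of a 'surrogate depth map': keep a depth cue only at
    strong horizontal luminance edges and leave everything else flat.
    The value stored at an edge (here the gradient magnitude) stands
    in for whatever cue CRC derived, which is not specified."""
    img = image.astype(float)
    grad = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    return np.where(grad > threshold, grad, 0.0)

# A step edge: depth information survives only at the object boundary
step = np.zeros((4, 8))
step[:, 4:] = 100.0
depth = surrogate_depth(step)
```

Feeding such a sparse map into a DIBR renderer is what the studies below evaluate against dense depth maps.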
In conclusion, the studies show that the approach combining surrogate depth maps and DIBR can be used to
generate rendered stereoscopic images with good perceived depth. The effectiveness of surrogate depth maps can
be explained if it is assumed that the human visual system combines the depth information available at the
boundary regions together with pictorial depth cues to compensate for the missing/erroneous areas and arrive at an
overall perception of depth of a visual scene. The results of the studies also provide useful indications with respect
to the minimum depth information required to produce an enhanced sensation of depth in a stereo image, i.e., depth
at object boundaries. This minimum depth information can be used as a backup method when no other depth
information is available to a 3D-TV broadcast system.
32.2: Effect of Crosstalk in Multi-View Autostereoscopic 3D Displays on
Perceived Image Quality
Ronald Kaptein and Ingrid Heynderickx
Philips Research Laboratories, Eindhoven, The Netherlands
The effect of crosstalk in multi-view autostereoscopic 3D displays on perceived image
quality was assessed in two experiments. The first experiment shows that preference decreases with increasing crosstalk, though not as strongly as expected. The second
experiment shows that the crosstalk visibility threshold is higher than found in earlier
studies.
Gaining insight into the ambivalent effects of crosstalk is essential when it comes to
improving the quality of multi-view autostereoscopic 3D displays. Therefore, this
study investigated the visibility of crosstalk in still images, and its effect on image
quality (preference), taking into account the properties of multiview autostereoscopic
3D displays. To do this, it was necessary to vary the amount of crosstalk over a
considerable range. This was not feasible using a multi-view lenticular 3D display
because of the fixed lens. However, since binocular image distortion was found to be
the average of the monocular image distortions of both eyes, depth was not a
necessary feature. This meant that the researchers could investigate the perceptual
effects of crosstalk by simulating it on a 2D panel. To include the typical pixel
structure of a multi-view lenticular 3D display, they used a high-resolution panel and
simulated a single 3D pixel using multiple pixels from the high-resolution display. In
the present study, only crosstalk and the pixel structure were taken into account. Two
different experiments were performed. The first experiment assessed the preference
for different crosstalk levels. In this experiment, the trade-off between visibility of
crosstalk (i.e. blurring and ghosting) and visibility of pixel structure artifacts was
investigated. The visibility threshold of crosstalk was determined in a second
perception experiment.
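At its simplest, simulating a crosstalk level on a 2D panel amounts to blending each view with light leaking from the adjacent view. The sketch below shows only that blend; it omits the 3D pixel structure that the Philips simulation also reproduced:

```python
import numpy as np

def add_crosstalk(intended, neighbor, c):
    """Blend a view with light leaking from the adjacent view at
    crosstalk level c in [0, 1]. A simplified model of the effect;
    the paper's simulation additionally reproduced the lenticular
    3D pixel structure on the high-resolution 2D panel."""
    return (1.0 - c) * intended + c * neighbor

# 20% crosstalk between a black view and a white view
view_a = np.zeros((2, 2))
view_b = np.ones((2, 2))
seen = add_crosstalk(view_a, view_b, 0.2)
```

Varying c over a wide range is exactly what a fixed-lens lenticular display cannot do, which is why the 2D simulation was needed.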
Figure 1: (A) shows the LCD panel and the position of the lenses. Oblique lines indicate lens edges. (B) shows a real 3D pixel structure, (C) the simulation.
46
Veritas et Visus
3rd Dimension
September 2007
The 3D TV taken as a reference was a Philips lenticular 3D TV; the high-resolution 2D display used for the
simulations was a 22.2-inch IBM T221 display, with a resolution of 3840x2400 pixels and a pixel pitch of 0.1245
mm. The researchers wanted to simulate the 3D pixel structure using as few 2D pixels as possible, while preserving
the essential characteristics (slope and aspect ratio of the sides). The solution can be seen in Figure 1C (previous
page). The width of this structure is 9 pixels, i.e. 1.12 mm. However, a real 3D pixel has a width and height of
about 1.45 mm. To compensate for this, the simulated image should be viewed from a slightly different distance,
namely a factor of 0.77 closer than for the 3D TV (i.e. 2.3 m). How crosstalk manifests itself at the pixel level can
be derived from Figure 1A.
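The distance compensation described above is simple proportional scaling; as a quick check of the quoted numbers, a short Python sketch (the constant names are ours):

```python
# Viewing-distance compensation for the simulated 3D pixel structure,
# following the numbers quoted in the article.

PIXEL_PITCH_MM = 0.1245        # IBM T221 pixel pitch
SIM_WIDTH_PX = 9               # width of the simulated 3D pixel structure
REAL_3D_PIXEL_MM = 1.45        # width/height of a real 3D pixel
DESIGN_DISTANCE_M = 3.0        # design viewing distance of the 3D TV

sim_width_mm = SIM_WIDTH_PX * PIXEL_PITCH_MM     # 1.12 mm
scale = sim_width_mm / REAL_3D_PIXEL_MM          # 0.77
viewing_distance_m = DESIGN_DISTANCE_M * scale   # 2.3 m

print(f"simulated pixel width: {sim_width_mm:.2f} mm")
print(f"distance factor:       {scale:.2f}")
print(f"viewing distance:      {viewing_distance_m:.1f} m")
```

The simulated structure is 0.77 times the size of a real 3D pixel, so viewing it at 0.77 times the design distance preserves the same angular size at the eye.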
The results suggest that crosstalk is less visible in multi-view autostereoscopic 3D displays than expected. The
results also suggest that pixel-structure artifacts in lenticular-based 3D displays, although visible, play a minor role
in determining image quality, compared to crosstalk, at least at the viewing distance for which the 3D display was
designed (3 m). These results are of importance considering the optimal design of multi-view autostereoscopic 3D
displays. They give a first indication of the decrease in image quality that is to be expected when crosstalk is
increased. Increasing crosstalk can help to obtain more uniform display intensity and smoother view transitions
during head movements. All in all, the results can help in finding a better balance between the various factors that
play a role, Philips says.
32.3: A New Approach to Electro-Holography for TV and Projection Displays
A. Schwerdtner, N. Leister, and R. Häussler
SeeReal Technologies, Dresden, Germany
Among 3D displays, only electro-holographic displays are in principle capable of completely matching natural
viewing. SeeReal’s new approach to electro-holography facilitates large object reconstructions with moderate
resolution of the spatial light modulator. They verified the approach with a standard 20-inch LCD as a spatial light
modulator. A key factor limiting the universal application of stereoscopic displays is vision problems due to the
inherent mismatch between eye focusing and convergence. Only holographic displays are in principle capable of
completely matching natural viewing. The most severe problem in creating large-size video holograms is the
so-called space-bandwidth product, which is directly related to the number of display pixels. This is why
electro-holographic displays have been restricted to displaying very small scenes with very small viewing angles and low
image quality. Therefore, the object of the project
was to develop a new approach to electro-holography
that will enable the observer(s) to see large
holographic reconstructions of 3D objects from
electro-holographic displays having moderate pixel
resolution. Figure 1 illustrates the concept. The light
source LS illuminates the spatial light modulator
SLM and is imaged by the lens F into the observer
plane OP. The hologram is encoded in the spatial
light modulator. The observer window OW is located
at or close to the observer eye OE (Figure 1: schematic drawing of the holographic display). The size of the
OW is limited to one diffraction order of the
hologram. The observer sees a holographically
reconstructed three-dimensional object 3D-S in a reconstruction frustum RF that is defined by the observer window
and the hologram. An overlap of higher diffraction orders in the observer window is avoided by encoding the
holographic information of each single point P of the object 3D-S in an associated limited area A1 in the hologram.
The correct size and position of A1 are obtained by projecting the OW through the point P onto the light modulator
SLM, as indicated by the lines from the OW through P to the area A1. Light emanating from higher diffraction
orders of the reconstructed point P will not reach the OW and is therefore not visible. A 3D-object comprising
many object points results in overlapping associated areas with holographic information that are superimposed to
the total hologram. The researchers summarize by saying that the approach significantly reduces the requirements
on optical components and software compared to conventional electro-holographic displays. They achieve this by
generating the visible object information only at positions where it is actually needed, i.e. at the eye positions.
Large holographic object reconstructions are possible as the pixel pitch of the spatial light modulator does not limit
the size of the reconstructed object. The fundamental idea in the concept is to give highest priority to reconstruction
of the wave field at the observer’s eyes and not the three-dimensional object itself. They say that they have been
working to extend the approach to projection displays. Again, there are one or several observer windows through
which one or several observers see a holographically reconstructed object. The hologram is encoded on a small
spatial light modulator and the holographic reconstruction is optically enlarged by magnification optics.
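The projection of the observer window through an object point onto the SLM is a similar-triangles construction. The Python sketch below illustrates the geometry in a simplified one-dimensional setup of our own (the function name, coordinates, and example numbers are not SeeReal's):

```python
# Hedged sketch (our own geometry, not SeeReal's code): the region A1 on the
# SLM associated with an object point P is found by projecting the observer
# window OW through P onto the hologram plane, using similar triangles.
# Positions are measured along the optical axis; the SLM sits at z = 0
# and the observer plane at z = d.

def project_area(ow_center_x, ow_width, p_x, p_z, d):
    """Project the observer window through point P onto the SLM (z = 0).

    Returns (center_x, width) of the associated hologram area A1.
    """
    # A ray from an OW point (x_ow, d) through P (p_x, p_z) reaches z = 0 at
    #   x = x_ow + (p_x - x_ow) * d / (d - p_z)
    t = d / (d - p_z)                      # similar-triangle ratio
    edges = [x_ow + (p_x - x_ow) * t
             for x_ow in (ow_center_x - ow_width / 2,
                          ow_center_x + ow_width / 2)]
    center = (edges[0] + edges[1]) / 2
    width = abs(edges[1] - edges[0])
    return center, width

# Example: a 10 mm observer window 500 mm from the SLM; object point P lies
# on the axis, 100 mm in front of the SLM.
center, width = project_area(0.0, 10.0, 0.0, 100.0, 500.0)
print(center, width)  # A1: center 0.0 mm, width 2.5 mm
```

Because A1 shrinks as P approaches the hologram plane, only a small patch of the SLM carries the information for each point, which is what keeps light from higher diffraction orders out of the observer window.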
>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<
Title | Newsletter | Date
A personal view of the UK’s position in flexible electronics | Flexible Substrate | Jan 31, 2005
Ground floor opportunity -- flexible electronics explosion | Flexible Substrate | Mar 16, 2005
LCD manufacturing in the Far East: numbered days… | Flexible Substrate | Apr 26, 2005
Flexible displays industry can’t over-hype expectations | Flexible Substrate | May 17, 2005
Conclusions from SID… | Flexible Substrate | Jul 6, 2005
3D - who needs it? | 3rd Dimension | Jul 29, 2005
Disposables – killer application for flexible displays? | Flexible Substrate | Aug 8, 2005
New standards for FPDs: White knight or white elephant? | Display Standard | Sep 22, 2005
The virtues of the tutorial… | Flexible Substrate | Oct 3, 2005
Exploring urban myths | Flexible Substrate | Oct 22, 2005
Flexibility on the move | Flexible Substrate | Nov 16, 2005
The 3D niche – unless there’s a miracle… | 3rd Dimension | Jan 16, 2006
Needs for display standardization in the UK rail system | Display Standard | Jan 29, 2006
The 3D community – beyond redemption… | 3rd Dimension | Feb 26, 2006
The real world in microscopic form… | Flexible Substrate | Mar 1, 2006
Is this the end of the beginning, or the beginning of the end? | High Resolution | Mar 24, 2006
New network heralds a good future for the UK | Flexible Substrate | Apr 15, 2006
Enjoy the view while you can… | 3rd Dimension | May 8, 2006
Plastic electronics will become ubiquitous | Flexible Substrate | May 16, 2006
It’s all about appearances… | Display Standard | May 20, 2006
Traveling tales | Display Standard | Jun 30, 2006
My eyes! My eyes! | High Resolution | Jul 16, 2006
Thinking too much hurts… | 3rd Dimension | Aug 23, 2006
No to laptops in the hold | Touch Panel | Aug 30, 2006
Fitness for purpose | Display Standard | Sep 6, 2006
How do we be green? | Flexible Substrate | Sep 19, 2006
One man’s meat is another man’s poison | Display Standard | Oct 15, 2006
The Fordists and the Churchillians… | Flexible Substrate | Oct 23, 2006
Thoughts from the Far East | Flexible Substrate | Nov 29, 2006
Thoughts from the Far East II | Flexible Substrate | Feb 18, 2007
The importance of local production | Flexible Substrate | Apr 11, 2007
Give up the day job and do something else? | Flexible Substrate | Aug 12, 2007
Introducing the UKDL News… | UKDL News | Aug 30, 2007
International Workshop on 3D Information Technology
May 15, 2007, Seoul, Korea
by Andrew Woods
The 3D Display Research Centre (3DRC) based at Kwangwoon University (Seoul, South Korea) recently organized
the fourth in its series of international workshops. The workshop was held at the compound of Cheong Wa Dae
(literal translation “the blue house”, which is the office of the South Korean president) and featured presentations
from eight international invited speakers, four local speakers, and also a poster session.
The presentations were a mixture of a summary of 3D research from each presenter’s home country and a summary
of the particular research of each presenter.
The first presentation was by Professor George Barbastathis (MIT, USA), whose paper was titled “3D Optics”. His
presentation focused on holographic imaging and the process of capturing 3D images and datasets using holographic
methods. The presentation of Andrew Woods (Curtin University, Australia) was titled “R&D Activities on 3D
Information Technologies in Australia” and summarized the work of a number of stereoscopic R&D organizations
in Australia (including DDD, iVEC, and Jumbo Vision), plus his own work on underwater stereoscopic video
cameras and the compatibility of consumer displays with stereoscopic methods.
3DIT 2007 invited speakers and organizers (left to right): Sang-Hyun Kim (student, Waseda Univ., Japan), Takashi
Kawai (Waseda Univ., Japan), unknown (Presidential Security Service), student (3DRC), Jin Fushou (Jilin Univ.,
China), student (3DRC), Vladimir Petrov (Saratov State Univ., Russia), Hiroshi Yoshikawa (Nihon Univ., Japan),
Eun-Soo Kim (3DRC), Zsuzsa Dobranyi (Holografika, Hungary), Dae-Jun Joo (Presidential Security Service),
Tibor Balogh (Holografika, Hungary), George Barbastathis (MIT, USA), student (3DRC), Jack Yamamoto (3D
Consortium, Japan), Andrew Woods (Curtin Univ., Australia), Nam-Young Kim (3DRC).
Professor Fushou Jin’s (Jilin University, China) presentation titled “3D display activities in
China” discussed the stereoscopic human factors work of Fang et al. (Zhejiang Univ., 2004), autostereoscopic
video transforms and novel autostereoscopic backlights by Zou et al. (Hefei Univ. of Tech., 2004 and 2005), head
mounted displays by Sun et al. (National Key Lab of Applied Optics, 2005), volumetric displays by Lin et al.
(Zhejiang Univ., 2005), as well as his own work on integral 3D imaging.
The second session contained two papers. Professor Vladimir Petrov (Saratov State University, Russia) discussed
“Recent R&D activities on 3D Information Systems in Russia” which included coverage of his own work on
classification of stereoscopic methods, formats & technologies, optical correction of depth plane curvature, and
electronically controlled optical holograms, plus a description of the autostereoscopic “SmartON” display by
Putilin et al. (FIAN, Moscow), volumetric displays by Shipitsyn (Moscow, Russia) and Golobov et al. (LETI,
Russia), a stack of holograms by Golobov et al. (LETI, Russia), a stack of light scattering shutters by Kimpanets et
al. (Lebedev Physical Institute, Russia), waveguide holographic display by Putilin et al. (FIAN, Moscow), and
others. The presentation by Tibor Balogh (Holografika, Hungary) was titled “HoloVizio, The Light Field Display
System”. Various aspects of the HoloVizio autostereoscopic display system were described, including fundamentals,
principles of operation, implementations, hardware and software systems, and applications.
The third session of the day included three papers. Professor Takashi Kawai’s (Waseda University, Japan) paper on
“Recent R&D Activities on 3D Information Technologies in Japan” discussed his own work on stereoscopic
ergonomic evaluation, display hardware and stereoscopic video software development, content creation, time-series
analysis of stereoscopic video, and scalable conversion of stereoscopic content for different screen sizes. He also
summarized industry activities including the Digital Content Association of Japan (DCAJ), the Ultra Realistic
Communications Forum (URCF), and 3D Fair 2006 (November 2006, Akihabara, Japan). Jack Yamamoto (3D
Consortium, Japan) provided a “3D Market Trend Overview” and also summarized the recent activities of the 3D
Consortium. The presentation of Professor Hiroshi Yoshikawa (Nihon University, Japan) was titled “Recent
activities on 3-D imaging and display in Japan” and discussed government- and industry-supported activities along with
a summary of his university’s research in optical holograms, digital holography, a fringe printing system, fast
computer generated holograms, and holo-video.
The final formal session of the day included four papers. Professor Eun-Soo Kim (3DRC, Kwangwoon University,
Korea) presented a paper titled “3D R&D Activities in 3DRC” which provided a brief overview of commercial 3D
R&D activities in Korea (including Samsung, LG, Pavonine, Zalman, Innertech, Sevendata, and KDC Group), an
introduction to the 3DRC, and a summary of the 3D display prototypes and R&D activities of the 3DRC. Dr.
Jinwoong Kim from ETRI (Electronics and Telecommunications Research Institute) (Daejeon, Korea) presented
“R&D Activities on 3D Broadcasting Systems in ETRI”. As well as providing an overview of 3D in the
broadcasting industry, detail was provided on ETRI’s activities in 3D DMB (Digital Multimedia Broadcast) to
handheld devices and multi-view 3DTV systems. Dr Sung-Kyu Kim from KIST (Korean Institute of Science and
Technology) (Seoul, Korea) presented “R&D Activities on 3D Displays in KIST”. His presentation discussed
KIST’s work on multi-focus 3D display systems (using either a laser-scanned DMD-modulated display or a multi-light source (LED) method), and camera-related issues for autostereoscopic mobile displays. Dr. Jae-Moon Jo
(Samsung Electronics, Korea) presented “Status of 3D Display Development”. His paper was divided into three
parts: technology and market trends, 3D display technologies, and technology of Samsung. The latter part of his
presentation discussed Samsung’s 3D DLP HDTVs, Samsung’s 3D LCD DMB phone (SCH-B710), Samsung
autostereoscopic 2D/3D monitor using time-sequential LCD, and the Samsung SDI autostereoscopic OLED 2D/3D
demo.
Selected papers in the poster session included:
• Effective generation of digital holograms of 3-D objects with a novel look-up table method
• Three-dimensional reconstruction using II technique of captured images by holographic method
• Efficient generation of CGH for frames of video images
• Holographic 3D display of captured by II technique
• Enhanced IVR-based computational construction method in three-dimensional integral imaging with non-uniform lens array
• Three-dimensional image correlator using computationally reconstructed integral images
• Using quantum optics in 3D display
• Extraction of rat hippocampus using stereoscopic microscope system
• Efficient 3D reconstruction method using stereo matching robust to noise
• A compact rectification algorithm for trinocular stereo images
• The effect of saccadic eye movements on motion sensitivity in 3D depth
A proceedings volume was published by the 3DRC containing all of the slides of the speakers and poster authors.
http://www.3drc.org/
3DTV CON 2007
May 7-9, Kos Island, Greece
In this second report on the IEEE conference on capture, transmission and display of 3D video, Phillip
Hill covers presentations from Monash University, Middle East Technical University/STM Savunma
Teknolojileri Muhendislik ve Ticaret, ATR/University of Tsukuba, Tampere University of Technology,
Momentum, Yonsei University, University of Rome, University of Oulu, and two from Tel-Aviv University
Large scale 3D environmental modeling for stereoscopic walk-through visualization
Nghia Ho, Ray Jarvis,
Intelligent Robotics Research Centre, Monash University, Australia
The availability of high resolution and long-range laser range finders with color image registration facilities opens
up the possibility of large scale, accurate and dense 3D environment modeling. This paper addresses the problem of
integration and analysis of multiple scans collected over extended regions for stereoscopic walk-through
visualization. Large-scale 3D environment modeling has gained popularity due to the availability of high resolution
and long-range 3D laser scanners. These devices can capture a dense point cloud of the environment, collecting on
the order of tens or hundreds of millions of points. Laser scanners provide very rich data but, on the other hand,
present a technical challenge in processing such large volumes of data, which can easily grow to a couple of
gigabytes. Two common tasks that are essential for large-scale environment modeling are scan registration and
visualization. Scans need to be taken at various locations and registered together to build a complete model. The
ability to visualize the model via a stereoscopic display is very attractive because the data is well suited for a
walk-through application such as a virtual tour. The paper reports a campus modeling project undertaken at Monash
University.
Figure 1 (left): Buildings and vegetation rendered entirely by planes and texture. Figure 2 (right): Bird’s-eye
view of the campus model.
The researchers used the Riegl LMS-Z420i laser range scanner to scan the Monash University campus. The scanner
is capable of scanning 360 degrees horizontally and 80 degrees vertically. The average sampling rate for a
high-resolution scan is about 8000 points per second. Color information is obtained separately via a Nikon D100
mounted on the scanner. Approximately 30 scans were taken around the Monash campus over the duration of a
couple of weeks, with each scan fixed to capture 5-6 million points. One problem the researchers faced was people walking
about during the scanning. This introduced some unwanted noise, which appears as a thin line of points. To
alleviate this problem they did two scans at the same location. The two scans were then compared side by side and
the points with the furthest distance away from the scanner taken as the true range.
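The dual-scan cleanup step can be sketched in a few lines: transient obstructions such as pedestrians always appear closer than the static scene behind them, so taking the farther of the two readings per direction removes them. A minimal Python sketch with made-up sample values (not the authors' code):

```python
# The dual-scan noise-removal step: two range scans are taken from the same
# location, and the farther reading in each direction is kept as the true
# range, discarding transient objects such as passers-by.

def merge_scans(scan_a, scan_b):
    """Merge two range scans (same angular ordering) taken from one location.

    Returns, per direction, the farther of the two range readings.
    """
    return [max(a, b) for a, b in zip(scan_a, scan_b)]

# A pedestrian crossed the beam in scan_a (the 2.1 m reading); scan_b sees
# the wall behind at 14.8 m, so the merge recovers the static scene.
scan_a = [15.0, 2.1, 14.9, 15.2]
scan_b = [15.0, 14.8, 14.9, 15.1]
print(merge_scans(scan_a, scan_b))  # [15.0, 14.8, 14.9, 15.2]
```

This works only when the transient object is absent from at least one of the two scans in each direction, which is why the scans were taken at different times from the same tripod position.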
Figure 2 (previous page) shows a bird’s-eye view of a section of the campus model. Some of the tree leaves come out
blue rather than green, a result of incorrect registration with the sky color. One possible explanation is that the
leaves were being blown by the wind, causing a misregistration between the laser range data and the color
images. Point clouds obtained from a laser range scanner are dense for distances near to the scanner but sparse
further away. This greatly affects the visual quality and is noticeable in some parts of the scenes where there is
inadequate sampling. This can be improved by performing a longer scan and collecting more points. This also has
the extra benefit of collecting less noise from people moving across the scans.
Shape from unstructured light
Anner Kushnir and Nahum Kiryati
Tel-Aviv University, Israel
A structured light method for depth reconstruction using unstructured, essentially arbitrary projection patterns is
presented. Unlike previous methods, the suggested approach allows the user to select the projection patterns from a
given slide show or from movie frames, or to simply project noise, thus extending the range of possible
applications. The system includes a projector and a single camera. Two progressive algorithms were developed for
obtaining projector-camera correspondence, with each additional projection pattern improving the reliability of the
final result. The method was experimentally demonstrated using two projection pattern sets – a vacation photo
album (similar to frames extracted from a video sequence) and a set of random noise patterns.
Figure 1: Depth map and textured reconstruction of a mannequin head: (a,b) 50 patterns, DP method. (c,d) 50
patterns, PPC method. (e,f) 10 patterns, DP method. (g,h) 100 patterns, PPC method.
Figure 1 (previous page) presents the reconstruction results obtained using the vacation photo album pattern set, in
two formats: depth map and visualization of the 3D surface with its texture. Results are shown for the two
correspondence-establishment methods considered, Pixel-to-Pixel Correspondence (PPC) and Dynamic
Programming (DP), using several pattern-set sizes. It can be seen that the DP method performs well even when the
number of projection patterns used for reconstruction is small. The PPC method yields a reasonable result when 50
patterns or more are used, and is superior to the DP method in terms of accuracy and robustness when 100 patterns or
more are used.
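One way to establish per-pixel projector-camera correspondence from arbitrary patterns is to match each camera pixel's intensity sequence across the patterns against the sequences emitted by the projector columns. The sketch below is our own illustration of that idea; it is not the authors' DP or PPC implementation, and all names and numbers are ours:

```python
# Hedged sketch of correspondence under unstructured light: each projector
# column is identified by the sequence of intensities it emits across the
# N patterns; a camera pixel is matched to the projector column whose
# intensity sequence best correlates with what the pixel observed.

import numpy as np

def match_columns(projected, observed):
    """projected: (N, P) intensities per pattern per projector column.
    observed:  (N, C) intensities per pattern per camera pixel.
    Returns, for each camera pixel, the index of the best-matching column."""
    # Normalize each temporal sequence to zero mean, unit norm, so matching
    # is by correlation rather than absolute brightness.
    def norm(a):
        a = a - a.mean(axis=0, keepdims=True)
        return a / (np.linalg.norm(a, axis=0, keepdims=True) + 1e-12)
    corr = norm(projected).T @ norm(observed)   # (P, C) correlation matrix
    return corr.argmax(axis=0)                  # best column per pixel

# Toy example: 50 random "noise" patterns over 8 projector columns; each of
# 4 camera pixels observes one column, attenuated and slightly noisy.
rng = np.random.default_rng(0)
proj = rng.random((50, 8))
truth = np.array([3, 0, 5, 7])
obs = 0.8 * proj[:, truth] + 0.02 * rng.random((50, 4))
print(match_columns(proj, obs))  # recovers [3 0 5 7]
```

Each additional pattern lengthens the sequences being correlated, which is consistent with the paper's observation that every extra projection improves the reliability of the final result.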
Effects of color-multiplex stereoscopic view on memory and navigation
Yalın Baştanlar, Hacer Karacan, Middle East Technical University, Ankara, Turkey
Deniz Cantürk, STM Savunma Teknolojileri Muhendislik ve Ticaret, Ankara, Turkey
In this work, effects of stereoscopic view on object recognition and navigation performance of the participants are
examined in an indoor Desktop Virtual Reality Environment, which is a two-floor virtual museum having different
floor plans and 3D object models inside. This environment is used in two different experimental settings: 1) color-multiplex stereoscopic 3D viewing provided by colored eye-wear, and 2) regular 2D viewing. After the experiment,
participants filled in a questionnaire that inquired into their
feeling of presence, their tendency to be immersed and their
performance on object recognition and navigation in the
environment. Two groups (3D and 2D), each having five
participants with equal tendency, were formed according to
the answers to the “tendency” part, and the rest were evaluated to
examine the effects of stereoscopic view. Contrary to
expectations, results show no significant difference between
3D and 2D groups both on feeling of presence and object
recognition/navigation performance.
A museum consisting of two floors was created. Several 3D
object models like cars, airplanes, animals, buildings were
placed in the museum. Also a few 2D pictures were put on
the walls in order to ease navigation. The two floors have
different floor plans and wall textures. A screen-shot from
the first floor of the virtual environment is shown in
Figure 1. Results did not indicate a significant
difference between the two groups. Although increasing the number of participants and preparing different kinds of
navigation and recognition questions may change the result, the current result may be explained by two ideas. First,
both groups were able to examine the objects and environment closely from different points of view; although
3D-group participants sometimes spent a little more time on the objects to test stereoscopic viewing, it seems this
did not help them recognize the objects better. Second, despite the stereoscopic viewing, 3D-group participants did
not feel much more involved than the 2D participants; the control mechanism and speed of the test environment
were not realistic enough, which may have bored participants and caused them to lose attention while observing
objects.
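The color-multiplex (anaglyph) presentation used in the 3D condition amounts to a per-pixel channel mix. A minimal NumPy sketch, assuming a red-cyan encoding (the paper does not specify the exact filter colors, so the channel assignment here is an assumption):

```python
# Assumed red-cyan anaglyph composition: the left view supplies the red
# channel and the right view supplies the green and blue channels, so each
# eye's filter passes only its own view.

import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine left/right views ((H, W, 3) float arrays) into an anaglyph."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]      # red channel from the left view
    out[..., 1:] = right_rgb[..., 1:]   # green/blue channels from the right
    return out

left = np.zeros((2, 2, 3)); left[..., 0] = 1.0    # pure red left view
right = np.zeros((2, 2, 3)); right[..., 2] = 1.0  # pure blue right view
ana = make_anaglyph(left, right)
print(ana[0, 0])  # [1. 0. 1.]: red from the left view, blue from the right
```

The same channel split explains the main cost of the technique: color fidelity is sacrificed, since neither eye sees a full-color image.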
Depth map quantization – how much is sufficient?
Ianir Ideses, Leonid Yaroslavsky, Itai Amit, Barak Fishbain
Department of Interdisciplinary Studies, Tel-Aviv University, Israel
With the recent advancement of visualization devices, synthesizing 3D content requires either a stereo pair or an
image and a depth map. Computing depth maps for images is a highly
computationally intensive and time-consuming process. In this paper, the researchers describe the results of an
experimental evaluation of depth map data redundancy in stereoscopic images. In the experiments with computer
generated images, several observers visually tested the number of quantization levels required for comfortable
stereoscopic vision unaffected by quantization. The experiments show that the number of depth quantization levels can
be as low as a couple of tens. This may have profound implications for the process of depth map estimation and
3D synthesis, the researchers say.
In each experiment, the viewer had to indicate which image had more quantization levels; if the choice was correct,
the program would increase the number of quantization levels until the viewer could not distinguish between the
images. If the choice was incorrect, the program would reduce the number of quantization levels until the viewer
could again distinguish between the images. Each experiment is composed of several tens of rounds, until the
quantization levels converge. To increase the reliability of the selection, the viewer was prompted to verify
his answer to each round. A screenshot of the program is shown below.
The results of these tests show that, for depth map quantization, a relatively low number of about 20 quantization
levels is sufficient for 3D synthesis. This number was acquired for shapes with high height gradients
and is lower for other shapes. The obtained results can be utilized in different applications, and especially in
iterative algorithms of depth map computation and in the process of generating artificial stereo pairs from an image
and a depth map.
An example of images that were shown to the viewer. The viewer had to indicate which image has a smoother
(quantized with more quantization levels) depth map (viewed with anaglyph glasses, blue filter for right eye).
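The adaptive procedure described above behaves like a classic up/down staircase. The Python sketch below is our own simplification with a simulated viewer, not the authors' program; the viewer model and all numbers are assumptions:

```python
# Simplified 1-up/1-down staircase: a correct discrimination raises the
# number of quantization levels, an incorrect one lowers it, and the run
# converges near the viewer's discrimination threshold.

import random

def staircase(can_distinguish, start=4, rounds=60, seed=1):
    """can_distinguish(levels) -> probability the viewer tells the pair apart.

    Returns the sequence of quantization levels visited over the run."""
    rng = random.Random(seed)
    levels = start
    history = []
    for _ in range(rounds):
        correct = rng.random() < can_distinguish(levels)
        levels = levels + 1 if correct else max(2, levels - 1)
        history.append(levels)
    return history

# Assumed viewer model: discrimination fades as levels approach ~20.
viewer = lambda n: max(0.0, min(1.0, (20 - n) / 10 + 0.5))
run = staircase(viewer)
print(sum(run[-20:]) / 20)  # hovers near the ~20-level threshold
```

The run's tail oscillates around the level where the viewer is right half the time, which is exactly the convergence behavior the paper relies on.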
Virtual camera control system for cinematographic 3D video rendering
Hansung Kim, Ryuuki Sakamoto, Tomoji Toriyama, and Kiyoshi Kogure
Knowledge Science Lab, ATR, Kyoto, Japan
Itaru Kitahara, Department of Intelligent Interaction Technologies, University of Tsukuba, Japan
The researchers propose a virtual camera control system that creates attractive videos from 3D models generated
with a virtualized reality system. The proposed camera control system helps the user to generate final videos from
the 3D model by referring to the grammar of film language. Many kinds of camera shots and principal camera
actions are stored in the system as expertise. Therefore, even non-experts can easily convert the 3D model to
attractive movies that look as if they were edited by expert film producers with the help of the system’s expertise.
The user can update the system by creating a new set of camera shots and storing it in the shots’ knowledge
database.
In one application, the system generates footage by piecing together all of the generated videos. Figure 1(a) shows example
footage of a cinematographic video with camera controls using two annotations to 3D regions and five annotations
to time codes. Varied shots with different angles and framing are set for the 3D video to capture a man
shadow-boxing dynamically. Figure 1(b) shows other footage that applies the same shots, with a region annotation
added at the man’s foot. Despite the fact that these pieces of footage are made from the videos of the same
scenes, the impressions they give are rather different.
The goal of this study is to develop a virtual camera controlling system for creating attractive videos from 3D
models. The proposed system helps users to apply expert knowledge to generate desirable and interesting film
footage by using a sequence of shots taken with a virtual camera. As future work, the researchers are going to
devise a method to use sensors to automatically determine annotation information.
Figure 1: Outcome of shadow-boxing scene
Mid-air display for physical exercise and gaming
Ismo Rakkolainen, Tampere University of Technology, Finland
Tanju Erdem, Bora Utku, Çiğdem Eroğlu Erdem, Mehmet Özkan, Momentum, Turkey
The researchers presented some possibilities and experiments with the “immaterial” walk-through FogScreen for
gaming and physical exercise. They used real-time 3D graphics and interactivity for creating visually and
physically compelling games with the immaterial screens. An immaterial projection screen has many advantages
for physical exercise, games and other activities. It is visually intriguing and can also be made two-sided so that the
opposing gamers on each side see both their side of the screen and each other through it, and can even walk through
it. The immaterial nature of the screen also helps with maintenance, as the screen is unbreakable and always stays
clean. The initial results show that the audience stayed with the game over extended periods of time.
The FogScreen is currently available in a 2-meter-wide size and in a
modular 1-meter-wide size, which enables several units to be linked
seamlessly together. The FogScreen device is rigged above the heads
of the players so that they can freely walk through the screen, which
forms under the device. The continuous flow recovers the flat screen
plane automatically and immediately when penetrated. The
resolution is not quite as high as with traditional screens, but it works
well for most applications like games. The presented context could
also be used in physical rehabilitation, edutainment in science
museums, and many kinds of sports games like boxing, karate or
other martial arts, for example.
Comparison of phoneme and viseme based acoustic units for speech driven realistic lip animation
Elif Bozkurt, Çiğdem Eroğlu Erdem, Engin Erzin, Tanju Erdem, Mehmet Özkan
Momentum, Turkey
Natural looking lip animation, synchronized with incoming speech, is essential for realistic character animation. In
this work, the Momentum company evaluates the performance of phone- and viseme-based acoustic units, with and
without context information, for generating realistic lip synchronization using HMM based recognition systems.
They conclude via objective evaluations that utilization
of viseme-based units with context information
outperforms the other methods. Humans are very
sensitive to the slightest glitch in the animation of the
human face. Therefore, it is necessary to achieve
realistic lip animation, which is synchronous with a
given speech utterance. There are methods in the
literature for achieving lip synchronization based on
audio-visual systems that correlate video frames with
acoustic features of speech. A major drawback of such
systems is the scarce source of audiovisual data for
training. Other methods use text-to-speech synthesis,
which utilize a phonetic context to generate both speech
and the corresponding lip animation. However, current
speech synthesis systems sound slightly robotic, and
adding natural intonation requires more research. If the
lip synchronization is generated using speech uttered by a real person, the animation will be perceived to be more
natural. In such systems, a phonetic sequence can be estimated directly from the input speech signal using speech
recognition techniques. This paper focuses on the limited problem of automatically generating phonetic sequences
from prerecorded speech for lip animation. The generated phonetic sequence is then mapped to a viseme sequence
before animating the lips of a 3D head model, which is built from photographs of a person. Note that a viseme is
the corresponding lip posture for a phoneme, i.e. a visual phoneme.
Figure 1: The 3D graphical user interface used for viewing the lip synchronization results
In this work, the researchers experimentally compare four different acoustic units within HMM structures for
generating the viseme sequence to be used for synchronized lip animation. These acoustic units are phone-,
tri-phone-, viseme- and tri-viseme-based units. The lip animation method is based on 16 distinct viseme classes. After
the generation of the 3D head model, a graphic artist defines the mouth shapes for the 16 visemes using a graphical
user interface. The results of the lip synchronization can be viewed using a user interface shown in Figure 1.
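As a rough illustration of the final mapping step, a phoneme-to-viseme conversion with duplicate merging might look like the sketch below. The grouping table and names are invented for illustration; the paper's actual 16-class table is not reproduced here.

```python
# Illustrative sketch: collapsing a recognized phoneme sequence into a
# viseme sequence for lip animation. The grouping below is a common
# MPEG-4-style convention, NOT the paper's actual 16-class table.
PHONEME_TO_VISEME = {
    "p": "PBM", "b": "PBM", "m": "PBM",   # bilabials share one lip shape
    "f": "FV",  "v": "FV",                # labiodentals
    "t": "TD",  "d": "TD",  "n": "TD",
    "k": "KG",  "g": "KG",
    "a": "A",   "e": "E",   "i": "I",
    "o": "O",   "u": "U",
    "sil": "REST",                        # silence -> neutral mouth
}

def to_visemes(phonemes):
    """Map phonemes to visemes, merging consecutive duplicates so the
    animation holds one mouth pose instead of retriggering it."""
    out = []
    for ph in phonemes:
        v = PHONEME_TO_VISEME.get(ph, "REST")
        if not out or out[-1] != v:
            out.append(v)
    return out

print(to_visemes(["sil", "b", "a", "m", "a", "sil"]))
# ['REST', 'PBM', 'A', 'PBM', 'A', 'REST']
```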
http://www.veritasetvisus.com
57
Veritas et Visus
3rd Dimension
September 2007
Stereoscopic video generation method using motion analysis
Donghyun Kim, Dongbo Min, and Kwanghoon Sohn
Yonsei University, Seoul, Korea
Stereoscopic video generation methods can produce stereoscopic content from conventional video filmed from
monoscopic cameras. The researchers propose a stereoscopic video generation method using motion-to-disparity
conversion, considering a multi-user condition and the characteristics of the display device. The field of view and the maximum and minimum values of disparity are calculated through an initialization process in order to support various types of 3D
display. After motion estimation, they propose three cues to decide the scale factor of motion-to-disparity
conversion, which are magnitude of motion, camera movement and scene complexity. Subjective evaluation is
performed by comparing videos captured from a stereoscopic camera and generated from one view of the
stereoscopic video. In order to evaluate the proposed algorithm, several sequences were used. They used two
stereoscopic sequences and two multi-view sequences. The test platform is a 17-inch polarized stereoscopic display
device which offers a resolution of 1280x512 in stereoscopic mode. Figure 1 shows the results of stereoscopic conversion for four test sequences. In this figure, the shapes of objects are represented well enough to assign a convincing sense of depth to the moving objects. Note that in the second sequence, with the large flowerpot container captured by a panning camera, the result shows reversed depth between background and foreground, which confirms that camera-movement recognition is essential. Errors may occur when the original images are roughly segmented
or illumination variation occurs.
Figure 1: Results of stereoscopic conversion
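The scale-factor selection described in the summary might be sketched as follows; the cue weights, clamping range, and function name are invented for illustration, not the authors' formulas.

```python
# Hedged sketch of motion-to-disparity conversion: a scale factor is
# chosen from three cues (motion magnitude, camera movement, scene
# complexity) and the resulting disparity is clamped to the display's
# calibrated range from the initialization step. All weights assumed.
def motion_to_disparity(motion_mag, camera_moving, complexity,
                        d_min=-10.0, d_max=10.0):
    scale = 1.0
    if camera_moving:
        scale *= 0.5                     # panning shots: damp motion-derived depth
    scale *= 1.0 / (1.0 + complexity)    # busy scenes get gentler conversion
    disparity = scale * motion_mag
    return max(d_min, min(d_max, disparity))

print(motion_to_disparity(8.0, camera_moving=True, complexity=1.0))  # 2.0
```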
A non-invasive approach for driving virtual talking heads from real facial movements
Gabriele Fanelli, Marco Fratarcangeli
University of Rome, Italy
In this paper, the University of Rome researchers depict a system to accurately control the facial animation of
synthetic virtual heads from the movements of a real person. Such movements are tracked using “active appearance
models” from videos acquired using a cheap webcam. Tracked motion is then encoded by employing the widely
used MPEG-4 Facial and Body Animation standard. Each animation frame is thus expressed by a compact subset of
“Facial Animation Parameters” (FAPs) defined by the standard. They precompute, for each FAP, the corresponding
facial configuration of the virtual head to animate through an accurate anatomical simulation. By linearly
interpolating, frame by frame, the facial configurations corresponding to the FAPs, they obtain the animation of the
virtual head in an easy and straightforward way.
The paper addresses the problem of realistically animating a virtual talking head at interactive rates by re-synthesizing facial movements tracked from a real person using cheap and non-invasive equipment, namely a standard webcam (Figure 1). Using appropriately trained active appearance models, the system is able to track the facial movements of a real person from a video stream and then parameterize such movements in the scripting language defined by the MPEG-4 FBA standard. Each of these parameters corresponds to a key pose of a virtual face, namely a Morph Target, a concept widely known in the computer graphics artists’ community. These initial key poses are automatically precomputed through an accurate anatomical model of the face, composed of the underlying bony structure, the upper skull and the jaw, the muscle map, and the soft skin tissue. The morph targets are blended together through a linear interpolation weighted by the parameter’s magnitude, achieving a wide range of facial configurations.

Figure 1: From each input video frame (left), the facial movements are tracked (center), and then used to control a virtual talking head
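The morph-target blending step can be sketched in a few lines; the toy mesh, target names, and weights below are invented, and a real system would blend thousands of vertices per FAP.

```python
# Minimal sketch of morph-target blending: each FAP has a precomputed
# key pose (per-vertex offsets from the neutral face), and a frame is the
# neutral mesh plus those offsets weighted by the frame's FAP magnitudes.
NEUTRAL = [(0.0, 0.0, 0.0)] * 4                      # toy 4-vertex mouth mesh
MORPH_TARGETS = {
    "open_jaw":    [(0, 0, 0), (0, -1, 0), (0, -1, 0), (0, 0, 0)],
    "stretch_lip": [(1, 0, 0), (0, 0, 0), (0, 0, 0), (-1, 0, 0)],
}

def animate_frame(fap_values):
    """Linear blend of morph targets weighted by FAP magnitude."""
    mesh = [list(v) for v in NEUTRAL]
    for name, weight in fap_values.items():
        for i, offset in enumerate(MORPH_TARGETS[name]):
            for axis in range(3):
                mesh[i][axis] += weight * offset[axis]
    return mesh

frame = animate_frame({"open_jaw": 0.5, "stretch_lip": 0.2})
print(frame[1])  # [0.0, -0.5, 0.0]: jaw vertex pulled halfway down
```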
Future developments will focus on extending the system to support 3D rigid transformations of the real head (i.e.,
translations and out-of-plane rotations), and the iris movements. Fields of possible application include the
entertainment industry or human-computer interaction software, where cartoon-like characters could reproduce the
expressions of real actors without the aid of expensive and invasive devices, or visual communication systems,
where video conferences could be established even on very low bandwidth links.
Stereoscopic viewing of digital holograms of real-world objects
Taina M. Lehtimäki and Thomas J. Naughton
University of Oulu, Finland
The researchers have studied the use of conventional stereoscopic displays for the viewing of digital holograms of
real-world 3D objects captured using phase-shift interferometry. Although digital propagation of holograms can be
performed efficiently, only one depth-plane of the scene is in focus in each reconstruction. Reconstruction at every
depth to create an extended-focus image is a time-consuming process. They investigated the human visual system’s
ability to perceive 3D objects in the presence of blurring when different depth reconstructions are presented to each
eye. Their digital holograms are sufficiently large that sub-regions can be digitally propagated to generate the
necessary stereo disparity. The holograms also encode sufficient depth information to produce parallax. They found
that their approach allows 3D perception of objects encoded in digital holograms with significantly reduced
reconstruction computation time compared to extended focus image creation.
Stereoscopic Displays and Applications 2007 Conference
January 16-18, 2007, San Jose, California
In this third installment, Mark Fihn summarizes presentations made by Ocuity, NEC, Dynamic Digital
Depth, Philips Research (x2), Boston University, Eindhoven University of Technology, LG.Philips LCD,
Hitachi, Ltd., and the Tokyo University of Agriculture and Technology. The full papers are available in the
2007 SD&A conference proceedings at http://www.stereoscopic.org/proc
Autostereoscopic display technology for mobile 3DTV applications
Jonathan Harrold and Graham J. Woodgate, Ocuity Limited, Oxford
This presentation discussed the advent of 3DTV products based on cell phone platforms with switchable 2D/3D
autostereoscopic displays. Compared to conventional cell phones, TV phones need to operate for extended periods
of time with the display running at full brightness, so the efficiency of the 3D optical system is key. The desire for
increased viewing freedom to provide greater viewing comfort can be met by increasing the number of views
presented. A four view lenticular display will have a brightness five times greater than the equivalent parallax
barrier display. Therefore, lenticular displays are very strong candidates for cell phone 3DTV. Specifically, Ocuity
discussed the selection of Polarization Activated Microlens architectures for LCD, OLED and reflective display
applications, providing advantages associated with high pixel density, device ruggedness, and display brightness. Ocuity described a new manufacturing breakthrough that enables switchable microlenses to be fabricated using a simple coating process, which is also readily scalable to large TV panels. Ocuity has demonstrated that Polarization Activated Microlens technology is a strong candidate to meet the stringent demands of 3D mobile TV, especially when combined with recent advances in an LC coating technology which does not require sealing or vacuum processing.

Photos of Polarization Activated Microlens structures for some simulated configurations for 2.2-inch 320x240 and 640x470 panels. Image from Ocuity
A Prototype 3D Mobile Phone Equipped with a Next Generation Autostereoscopic Display
Julien Flack, Dynamic Digital Depth (DDD) Research, Bentley, Western Australia
Jonathan Harrold and Graham J. Woodgate, Ocuity Limited, Oxford
According to the authors, the most challenging technical issues for commercializing a 3D phone are a stereoscopic
display technology which is suitable for mobile applications as well as a means for driving the display using the
limited capabilities of a mobile handset. This paper describes a prototype 3D mobile phone developed on a commercially available mobile hardware platform, retrofitted with a 2D/3D-switchable Polarization Activated Microlens array that provides class-leading low crosstalk together with brightness characteristics and viewing zones suitable for operation without compromising battery running time. DDD and Ocuity
collaborated to produce this next generation autostereoscopic display, which is deployed on a 2.2-inch TFT-LCD at
320x240 pixels. They also describe how a range of stereoscopic software solutions have been developed on the
phone’s existing application processor without the need for custom hardware. The objective in developing a prototype 3D mobile phone was to demonstrate the effectiveness of integrating an advanced autostereoscopic display into a smartphone, supported by a range of 3D content demonstrations, in order to stimulate the development of the next generation of stereoscopic 3D mobiles. Through the integration of efficient conversion and rendering software with the handset’s main application processor it was possible to play back 24 frames/sec 320x240 video content rendered in real time using optimized depth-based rendering techniques. This means that provision of content is no longer an issue for 3D handsets. The phone’s primary purpose was to provide a benchmark for handset manufacturers and telecoms carriers to assess the commercial viability of a stereoscopic 3D phone using technologies that are available and ready for mass production as of 2006.

As part of the presentation, Flack gave an overview of depth-based rendering: virtual left and right eye images are rendered from a depth map and the original 2D image at 320x240 pixels. Image from DDD
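The depth-based rendering idea summarized in the figure caption can be sketched as below; hole filling and the real depth-to-disparity law are simplified away, and all names and numbers are invented for illustration.

```python
# Hedged sketch of depth-based rendering: virtual left/right views are
# synthesized by shifting each pixel of the original 2D image horizontally
# by a disparity derived from its depth value (depth assumed in [0, 1]).
def render_view(image, depth, eye, max_disp=4):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]      # unfilled pixels stay as holes
    for y in range(h):
        for x in range(w):
            xs = x + int(eye * max_disp * depth[y][x])
            if 0 <= xs < w:
                out[y][xs] = image[y][x]
    return out

img = [[float(4 * y + x) for x in range(4)] for y in range(4)]
depth = [[0.25] * 4 for _ in range(4)]
left = render_view(img, depth, eye=-1)   # content shifts one pixel left
right = render_view(img, depth, eye=+1)  # content shifts one pixel right
```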
Multiple Footprint Stereo Algorithms for 3D Display Content Generation
Faysal Boughorbel, Philips Research Europe, Eindhoven
This research focuses on the conversion of stereoscopic video material into an image + depth format which is
suitable for rendering on Philips’ multiview auto-stereoscopic displays. The movie industry’s recent interest in 3D has significantly increased the availability of stereo material. In this context the conversion from
stereo to the input formats of 3D displays becomes an important task. The presentation discusses a stereo algorithm
that uses multiple footprints generating several depth candidates for each image pixel. The proposed algorithm is
based on a surface filtering method that employs simultaneously the available depth estimates in a small local
neighborhood while ensuring correct depth discontinuities by the inclusion of image constraints. The resulting
high-quality, image-aligned depth maps proved an excellent match with Philips’ 3D displays. The researchers
showed that using a robust surface estimation approach built on top of basic window-based matching techniques
leads to impressive results. Several efficient implementations are being pursued towards embedding the presented
algorithm in future commercial sets.
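The multiple-footprint idea can be illustrated with a toy 1-D block matcher: costs are computed with several window sizes, yielding several disparity candidates per pixel for a later surface-filtering stage to choose from. Only the candidate-generation step is sketched, over invented data; this is not the authors' algorithm.

```python
# Sketch of multiple-footprint matching: for each footprint (window
# half-width), find the disparity minimizing a 1-D SAD cost, producing
# one depth/disparity candidate per footprint for every pixel.
def candidates(left, right, x, footprints=(1, 2), max_disp=3):
    cands = []
    for w in footprints:
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp + 1):
            lo, hi = x - w, x + w + 1
            if lo < 0 or hi > len(left) or lo - d < 0:
                continue                      # window falls off the image
            cost = sum(abs(left[i] - right[i - d]) for i in range(lo, hi))
            if cost < best_cost:
                best_d, best_cost = d, cost
        cands.append(best_d)                  # one candidate per footprint
    return cands

left = [0, 0, 9, 9, 0, 0, 0, 0]
right = [9, 9, 0, 0, 0, 0, 0, 0]   # the bright patch shifted left by 2
print(candidates(left, right, x=4))  # [2, 2]
```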
A 470x235 ppi LCD for high-resolution 2D and 3D autostereoscopic display
Nobuaki Takanashi and Shin-ichi Uehara, System Devices Research Laboratories, NEC Corp., Sagamihara
Hideki Asada,, NEC LCD Technologies, Ltd., Kawasaki
The NEC researchers suggested that 3D display developers face many challenges, particularly with regard to
autostereoscopic 3D and 3D/2D convertibility. They suggested a solution that utilizes a novel pixel arrangement,
called Horizontally Double-Density Pixels (HDDP). In this structure, two pictures (one for the left and one for the right eye) on two adjacent pixels form one square 3D pixel. This doubles the 3D resolution, making it as high as that of the 2D display, and allows 3D images to be shown anywhere within 2D images at the same resolution. NEC’s prototype polysilicon TFT LCD is lenticular lens-based, 2.5 inches diagonal, with 320x2 (RL) x 480x3 (RGB) resolution. As a 3D display, the horizontal and vertical resolutions are equal (235 ppi each). NEC verified the efficacy of the display with a broad user survey, which demonstrated a high acceptance of and interest in this mobile 3D display. The researchers reported that the display enables 3D images to be displayed anywhere, and 2D characters can be made to appear at different depths with perfect legibility. No switching of 2D/3D modes is necessary, and a thin, uncomplicated structure and high brightness make the design especially suitable for mobile terminals.

HDDP Arrangement: right and left-eye pixels combine to form a square. Image from NEC
Compression of still multiview images for 3D auto-multiscopic spatially-multiplexed displays
Ryan Lau, Serdar Ince and Janusz Konrad, Boston University, Boston
Auto-multiscopic displays are becoming a viable alternative to 3-D displays with glasses. However, since these
displays require multiple views, the needed transmission bit rate as well as the storage space are of concern. This paper
describes results of research at Boston University on the
compression of still multiview images for display on
lenticular or parallax-barrier screens. Instead of using full-resolution views, the researchers applied compression to band-limited and down-sampled views in the so-called “N-tile format” (proposed by StereoGraphics). Using lower-resolution images is acceptable since multiplexing at the receiver involves down-sampling from full view resolution anyway. They studied three standard compression techniques: JPEG, JPEG-2000 and H.264. While both JPEG standards work with still images and can be applied directly to an N-tile image, H.264, a video compression standard, requires the N images of the N-tile format to be treated as a short video sequence. They presented numerous experimental results indicating that the H.264 approach achieves significantly better performance than the other approaches studied.

Close-up of a rectangular region from a multiplexed image before compression (on the left) and JPEG-compressed with a compression ratio of 40:1 (on the right). Note the numerous artifacts in the compressed image; displayed on a SynthaGram SG222 screen, these artifacts result in objectionable texture and depth distortions. Image from Boston University

The researchers examined all three compression standards on 9-tile images, and based on their studies,
proposed a “mirrored N-tile format” where individual tiles are transposed so as to assure maximum continuity in
the N-tile image and thus improve compression performance. Results of the testing indicate that compressing 9-tile
images gives better results than compressing multiplexed images, and that H.264 applied to 9-tile images gives the
best performance.
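The step of treating a still 9-tile image as a short video for H.264 can be sketched as follows. The optional flip stands in for the paper's "mirrored N-tile" idea of reorienting tiles for continuity; the exact mirroring rule used there is not reproduced, and the data are invented.

```python
# Sketch: slice a 3x3 grid of down-sampled views into nine "frames" of a
# short sequence suitable for a video codec such as H.264. Flipping
# alternate tiles (an assumed rule) keeps edges continuous between
# consecutive frames, which helps inter-frame prediction.
def tiles_to_sequence(ntile, rows=3, cols=3, mirror=False):
    th, tw = len(ntile) // rows, len(ntile[0]) // cols
    frames = []
    for r in range(rows):
        for c in range(cols):
            tile = [row[c * tw:(c + 1) * tw] for row in ntile[r * th:(r + 1) * th]]
            if mirror and (r * cols + c) % 2 == 1:
                tile = [row[::-1] for row in tile]   # flip odd tiles
            frames.append(tile)
    return frames   # hand this list to an H.264 encoder as a 9-frame clip

ntile = [[6 * y + x for x in range(6)] for y in range(6)]
seq = tiles_to_sequence(ntile)
print(len(seq), seq[0])  # 9 [[0, 1], [6, 7]]
```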
Predictive Coding of Depth Images across Multiple Views
Yannick Morvan, Dirk Farin and Peter H. N. de With, Eindhoven University of Technology, Eindhoven
A 3D video stream is typically obtained from a set of synchronized cameras, which are simultaneously capturing
the same scene (multiview video). This technology enables applications such as free-viewpoint video which allows
the viewer to select his preferred viewpoint, or 3DTV where the depth of the scene can be perceived using a special
display. Because the user-selected view does not always correspond to a camera position, it may be necessary to
synthesize a virtual camera view. To synthesize such a virtual view, the researchers from Eindhoven University of Technology adopted a depth image-based rendering technique that employs one depth map for each camera.
Consequently, a remote rendering of the 3D video requires a compression technique for texture and depth data.
This paper presents a predictive coding algorithm for the compression of depth images across multiple views. The
presented algorithm provides:
• an improved coding efficiency for depth images over block-based motion-compensation encoders (H.264)
• random access to different views for fast rendering.
The proposed depth-prediction technique works by synthesizing/computing the depth of 3D points based on the
reference depth image. The attractiveness of the depth-prediction algorithm is that the prediction of depth data
avoids an independent transmission of depth for each view, while simplifying the view interpolation by
synthesizing depth images for arbitrary view points. The researchers presented experimental results for several
multiview depth sequences that result in a quality improvement of up to 1.8dB as compared to H.264 compression.
Therefore, the presented technique demonstrates that predictive coding of depth images can provide a
substantial compression improvement of multiple depth-images while providing random access to individual
frames for real-time rendering.
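For the simple case of rectified, horizontally displaced cameras, the depth-prediction idea can be sketched as below: each reference-view depth value is shifted by its own disparity into the target view, and the encoder then codes only the residual against this prediction. The paper's general multiview geometry is not reproduced, and all numbers are invented.

```python
# Hedged sketch of depth prediction across views (rectified cameras):
# shift each reference depth value by disparity = focal * baseline / depth
# into the target view, keeping the nearest surface when pixels collide.
def predict_depth(ref_depth, focal=100.0, baseline=1.0):
    h, w = len(ref_depth), len(ref_depth[0])
    pred = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            z = ref_depth[y][x]
            if z <= 0:
                continue                                 # no depth measured
            xs = x - int(round(focal * baseline / z))    # disparity shift
            if 0 <= xs < w and (pred[y][xs] == 0 or z < pred[y][xs]):
                pred[y][xs] = z
    return pred   # encode only the residual vs. this prediction

ref = [[50.0] * 8 for _ in range(2)]   # flat surface at depth 50
pred = predict_depth(ref)              # shifted by 100/50 = 2 columns
```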
Application of Pi-cells in Time-Multiplexed Stereoscopic and Autostereoscopic LCDs
Sergey Shestak and Daesik Kim, Samsung Electronics, Suwon
The Samsung researchers investigated Pi-cell based polarization switches regarding their applications in both glass
type and autostereoscopic LCD 3D displays. (Pi-cells are nematic liquid crystal optical modulators capable of
electrically controllable birefringence). They found that the Pi-cell should be divided into a number of individually addressable segments to be capable of switching synchronously with line-by-line image updates in order to reduce
time-mismatch crosstalk. They discovered that the displayed stereoscopic image has unequal brightness and
crosstalk in the right and left channels. The asymmetry of stereoscopic image parameters is probably caused by the
asymmetry of rise/fall time, inherent in Pi-cells. Finally, they proposed an improved driving method capable of
making the crosstalk and brightness symmetrical. Further, they demonstrated that a response time acceleration
technique (RTA) developed for the reduction of motion blur, is capable of canceling the dynamic crosstalk caused
by slow response of LCD pixels.
The research revealed that if an LCD monitor is used in a stereoscopic system with conventionally driven shutter
glasses, severe crosstalk (mixed left and right images) is seen across almost the entire screen. Even simple systems using passive polarizing glasses demonstrate the same high crosstalk. The crosstalk appears even if the monitor has very short response times.
The researchers further studied application of polarization switches based on Pi-cell in two different stereoscopic
systems employing LCD image panels. Both stereoscopic systems are capable of displaying low crosstalk
stereoscopic images at frame rates of 60 and 75Hz. The main problems of the studied systems are the time-mismatch crosstalk caused by the low number of individually switchable segments in the Pi-cell, the dynamic crosstalk
caused by slow switching of liquid crystal cells and asymmetry of crosstalk and image brightness caused by the
asymmetry in Pi-cell switching time. The researchers also found that the response time acceleration technique
(RTA), developed for the reduction of motion blur, is capable of canceling the dynamic crosstalk, caused by slow
response of LCD pixels. Experimentally, they discovered that the displayed stereoscopic image has unequal
brightness and crosstalk in right and left channels -- probably caused by the asymmetry of rise/fall time, inherent in
Pi-cells. They proposed to compensate the brightness and the crosstalk asymmetry in the left and right images by
the adjustment of the duty cycle of the control signal. They further concluded that it is not adequate to simply extend the supported vertical frequency of LCD monitors to 100-120Hz. To provide low-crosstalk, flicker-free stereo, the overdrive levels should also be optimized to cancel crosstalk at the higher operational frequency.
The images on the left show a severe crosstalk problem inherent with autostereoscopic displays. After compensating for the
crosstalk using RTA, the images on the right are markedly improved. The images also show a noticeable difference in
brightness between the left and right images, probably due to the asymmetry of switching a Pi-cell.
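The duty-cycle compensation can be illustrated with a toy model (all numbers and the light-loss model are invented, not Samsung's): with asymmetric rise/fall times each eye loses a different amount of light during the slow transition, so equal duty cycles give unequal brightness, and skewing the duty cycle equalizes the integrated open time.

```python
# Toy model of duty-cycle compensation for asymmetric Pi-cell switching.
def effective_open(frame_ms, duty, transition_ms):
    """Approximate open time, charging half the transition as lost light."""
    return frame_ms * duty - transition_ms / 2.0

def balanced_duty(frame_ms, rise_ms, fall_ms):
    """Duty cycle for the rise-limited sub-frame so both eyes match."""
    return 0.5 + (rise_ms - fall_ms) / (4.0 * frame_ms)

frame, rise, fall = 16.7, 3.0, 1.0        # ms; assumed switching times
unbalanced = (effective_open(frame, 0.5, rise),
              effective_open(frame, 0.5, fall))    # left eye dimmer
d = balanced_duty(frame, rise, fall)
balanced = (effective_open(frame, d, rise),
            effective_open(frame, 1.0 - d, fall))  # equal brightness
```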
Switchable lenticular based 2D/3D displays
Dick K.G. de Boer, Martin G.H. Hiddink, Maarten Sluijter, Oscar H. Willemsen and Siebe T. de Zwart, Philips
Research Europe, Eindhoven
The use of an LCD equipped with lenticular lenses is an attractive route to achieve an autostereoscopic multi-view
3D display without losing brightness. However, such a display suffers from a low spatial resolution since the pixels
are divided over various views. To overcome this problem Philips developed switchable displays, using LC-filled
switchable lenticulars. In this way it is possible to have a high-brightness 3D display capable of showing the native
2D resolution of the underlying LCD. Moreover, for applications in which it is advantageous to be able to display
3D and 2D on the same screen, they made a prototype having a matrix electrode structure.
A drawback of multi-view systems is that, since a number of pixels are used to generate the views, there will be a loss of
resolution that is particularly clear if two-dimensional (2D) content is displayed. The left picture shows the image of a normal
display, the picture at the right shows the same image with a lenticular placed in front of the display. Resolution loss is a
common drawback of both lenticular and barrier technology.
To compensate for the loss of spatial resolution, there is a desire for a concept that can switch from 3D mode to 2D
mode. This can be achieved by using switchable barriers or birefringent lenticulars that are either switchable or
polarization activated. The paper discusses the principles of lenticular-based 3D displays, as well as problems
related to their usage. Philips has taken the approach to slant the lenses at a small angle to the vertical axis of the
display to distribute the resolution loss into two directions. The resulting resolution loss is equal to a factor that is
the square root of the number of views.
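As a worked example of that trade-off (the panel numbers are illustrative, not from the paper): slanting spreads the loss over both axes, so an N-view design costs a factor of sqrt(N) per direction rather than the full N horizontally.

```python
import math

# Per-axis resolution of a slanted-lenticular N-view display: each
# direction loses a factor of sqrt(N).
def per_axis_resolution(width, height, n_views):
    loss = math.sqrt(n_views)
    return round(width / loss), round(height / loss)

# e.g. a hypothetical 9-view design on a 1600x1200 panel:
print(per_axis_resolution(1600, 1200, 9))  # (533, 400)
```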
Multi-view autostereoscopic display of 36 views using an ultra-high resolution LCD
Byungjoo Lee, Hyungki Hong, Juun Park, HyungJu Park, Hyunho Shin, InJae Jung, LG.Philips LCD, Anyang
LG.Philips reported on their development of an autostereoscopic multi-view display with 36 views using a 15.1-inch ultra-high resolution LCD. The resolution of the LCD used for the experiment is QUXGA (3200x2400). RGB sub-pixels are aligned as vertical lines, and the size of each sub-pixel is 0.032mm by 0.096mm. Parallax barriers are slanted at an angle of tan⁻¹(1/6) ≈ 9.46 degrees and placed in front of the LCD panel to generate viewing zones. Barrier patterns repeat approximately every 6 pixels, so the number of pixels decreases by a factor of six in both the horizontal and vertical directions. The nominal 3D resolution becomes (3200/6) x (2400/6) = 533x400.
In slanted barrier configuration, the angular luminance
profile for each zone overlaps each other. For the case of a
2-view 3D system, cross-talk between the left-eye and right-eye zones deteriorates 3D image quality. However, for multi-view 3D, cross-talk between adjacent zones does not always bring about negative effects. The LG.Philips
researchers changed the barrier conditions so that
horizontal angles between each zone are different and 3D
image qualities were compared. For each barrier condition
of different horizontal angles between viewing zones, they
found an acceptable range of 3D object depth and camera
displacement between each zone. The researchers
concluded that the smaller the viewing interval, the narrower the 3D viewing width becomes, but the better the perceived resolution and naturalness of the 3D image. The optimum design therefore depends on the target performance.
The image on the left has a wider 3D viewing width than the image on the right, resulting in a better perceived resolution of the 3D image. But there is a trade-off between 3D viewing width and 3D perceived resolution, as the image on the right provides a higher level of depth.
Image from LG.Philips LCD
Autostereoscopic Display with 60 Ray Directions using LCD with Optimized Color Filter Layout
Takafumi Koike, Michio Oikawa, Kei Utsugi, Miho Kobayashi, and Masami Yamasaki, Hitachi, Ltd., Kawasaki
Researchers at Hitachi reported on their development of a mobile-size integral videography (IV) display that
reproduces 60 ray directions. IV is an autostereoscopic video image technique based on integral photography (IP).
The IV display consists of a 2D display and a microlens array. The maximal spatial frequency (MSF) and the
number of rays appear to be the most important factors in producing realistic autostereoscopic images. Lens pitch
usually determines the MSF of IV displays. The lens pitch and pixel density of the 2D display determine the
number of rays it reproduces. There is a trade-off between the lens pitch and the pixel density. The shape of an
elemental image determines the shape of the viewing area. Based on these relationships, Hitachi developed the IV
display, which consists of a 5-inch 900-ppi LCD and a microlens array. The IV display has 60 ray directions with 4
vertical rays and a maximum of 18 horizontal rays. They optimized the color filter on the LCD to reproduce 60
rays, resulting in a resolution of 256x192 pixels and a viewing angle of 30 degrees. These parameters are sufficient
for mobile game use.
The design method optimizes the parameters of the IV display and consists of three parts: increasing the 2D image frequency, increasing the number of rays without decreasing the image frequency, and optimizing the viewing area. The proposed color filter layout increases the number of rays without decreasing the image frequency. They claimed: “The prototype is suitable for displaying realistic autostereoscopic images.”

Layout of color filters and lenses for the prototype IV display. Image from Hitachi
Development of SVGA resolution 128-directional display
Kengo Kikuta and Yasuhiro Takaki, Tokyo University of Agriculture and Technology, Tokyo
Researchers from the Tokyo University of Agriculture and Technology reported on their development of a 128-directional display at 800x600 pixels, called a high-density directional (HDD) display. They previously had
constructed 64-directional, 72-directional, and 128-directional displays in order to explore natural 3D display
conditions to solve the visual fatigue problems caused by the accommodation-vergence conflict, and learned that
spatial resolution was too low for comfortable viewing. The newly developed display consists of 128 small projectors, each at 800x600 pixels with a separate LCoS device; the field-sequential technique is used to display color images. All 128 projectors are aligned in a modified 2D arrangement; i.e., all projectors are aligned
two-dimensionally and their horizontal positions are made different from one another. All images are displayed in
different horizontal directions with a horizontal angle pitch of 0.28º. The horizontal viewing angle is 35.7º, and
screen size is 12.8 inches. The display is controlled by a PC cluster consisting of 16 PCs. In order to correct image distortion caused by the aberration of the imaging systems, images displayed on the LCoS devices are pre-distorted by reference to correction tables.

The researchers noted that image intensity was low, because the light power of the illumination LED was not sufficiently high, and that non-uniform intensity was observed in the 3D images. The 3D image intensity will be increased by employing higher-power LEDs, and crosstalk will be reduced by adjusting the width of the apertures in the projector units. Interactive 3D image manipulation programs had been developed for the previous HDD display systems. The research was supported by the “Strategic Information and Communications R&D Promotion Program” from the Ministry of Internal Affairs and Communications, Japan.

Photographs of 3D images generated by the 128-directional display, captured from different horizontal view points. Image from Tokyo University of Agriculture and Technology
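The reported figures can be sanity-checked with a line of arithmetic: 128 directions spaced at a 0.28-degree horizontal pitch span roughly the stated 35.7-degree horizontal viewing angle.

```python
# Consistency check on the reported numbers for the 128-directional display.
n_directions = 128
pitch_deg = 0.28
span = n_directions * pitch_deg
print(round(span, 2))  # 35.84, close to the reported 35.7 degrees
```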
Stereoscopic Displays and Applications XIX
Collaborate with industry leaders, researchers and developers
in these fields:
Autostereoscopic Displays
Stereoscopic Cinema
3D TV and Video
Applications of Stereoscopy
Volumetric Displays
Stereoscopic Imaging
Integral 3D Imaging
2D to 3D Conversion
Human Factors
Stereoscopic Image Quality
and much more at the principal venue in the world
to see stereoscopic displays!
Technical Presentations • Keynote Address • Discussion Forum • Demonstration
Session • 3D Theater • Poster Session • Educational Short Course
Conference Chairs:
Andrew J. Woods, Curtin Univ. of Technology (Australia)
Nicolas S. Holliman, Univ. of Durham (United Kingdom)
John O. Merritt, The Merritt Group
Stereoscopic Displays and Applications Conference
and Demonstration: 28–30 January 2008
Stereoscopic Displays and Applications Short Course: 27 January 2008
See the Advance Program in November 2007.
IS&T/SPIE 20th Annual Symposium
Sponsored by
IS&T
27–31 January 2008
San Jose Marriott and San Jose Convention Center
San Jose, California USA
electronicimaging.org
www.stereoscopic.org
Stereoscopic Displays and Applications XIX
28–30 January 2008
San Jose McEnery Convention Center, San Jose, California, USA
2008 Highlights
Keynote Address
Stereoscopic and Volumetric 3-D Displays Based on DLP®
Technology
Dr. Larry Hornbeck, Texas Instruments
Texas Instruments’ DLP® technology enables both stereoscopic and volumetric 3-D
imaging for a variety of markets including entertainment, medical imaging and
scientific visualization. For the first time in history, stereoscopic 3-D entertainment is
commercially viable and being implemented on a large scale. DLP Cinema® projectors, equipped with enhanced stereoscopic functions, support a variety of 3-D digital
cinema implementations. Today, approximately 20 percent of the more than 5,000
DLP Cinema systems currently installed take advantage of this 3-D functionality. In
the consumer HDTV market, DLP technology now enables 3-D display modes in DLP
HDTVs, with more than 16 models entering the market in 2007. Innovators in the
display industry are using DLP technology to advance displays from 2-D image
planes to 3-D volumetric space. Interactive, volumetric DLP displays provide real-time 3-D information
needed to perform complicated tasks, such as targeting cancer tumors in medical radiation therapy. This
informative talk is designed to further the understanding of the role of DLP technology in the 3-D world.
Topics include an introduction to DLP technology; the status of DLP technology in the 3-D home entertainment and theatrical markets; the primary attributes of DLP technology that uniquely enable single-projector
solutions for stereoscopic 3-D entertainment and volumetric imaging applications; how systems designers
are leveraging these attributes to optimize for key application-specific requirements; and some thoughts on
the future of stereoscopic 3-D entertainment.
Technical Presentations
Hear presentations from Sony Pictures Imageworks, REAL D, Disney, In-Three, SeeReal, NEC, JVC,
Actuality Systems, Hitachi, Philips Research, Nokia, Toshiba, Namco Bandai Games, and many more.
3D Theatre
A two-hour showcase of stereoscopic video and stereoscopic cinema work from around the world, shown on the conference’s polarized stereoscopic projection systems.
Demonstration Session
See with your own two eyes a wide collection of
different stereoscopic display systems. There
were over 30 stereoscopic displays on show at
SD&A 2007—imagine how long it would take to
see that many stereoscopic displays if it weren’t
for this one session!
Discussion Forum
Hear industry leaders discuss a topic of interest to the whole stereoscopic imaging community.
All that and more at Stereoscopic Displays and Applications 2008.
Veritas et Visus
3rd Dimension
September 2007
Interview with Greg Truman from ForthDD
Greg Truman is managing director of Forth Dimension Displays. He has served in that
position since the formation of CRLO Displays Ltd. in September 2004, and of its
predecessor, CRL Opto, and led the successful fund raising that formed Forth Dimension
Displays. He has also participated in the formation of new displays companies Opsys and
AccuScene. Prior roles have included corporate development manager of Scipher plc, where
he was part of the core team working on VC fund-raising (£5 million) and, subsequently, the
IPO of the Company (raising £30 million) in February 2000. Earlier, Greg Truman held roles
in sales, marketing, R&D project management and integrated circuit design within Thorn
EMI, GEC and in a joint venture in Malaysia. Greg Truman has a BSc in Computer Science
from the University of Hertfordshire.
Please give us some background information about Forth Dimension Displays. Forth
Dimension Displays develops, manufactures and supplies the world’s most advanced microdisplays using a
proprietary, fast-switching liquid crystal technology. The company - previously named CRLO Displays Ltd - was
formed in September 2004, funded by an “A” series round from Amadeus Capital Partners and Doughty Hanson
Technology Ventures. The company is located in Dalgety Bay, Scotland across the River Forth from Edinburgh,
with offices in California. In 2006, 82% of ForthDD’s rapidly-growing revenues were from products shipped to
international (non-UK) customers, mostly to the US, Germany, and Japan. ForthDD’s proprietary, high-speed
liquid crystal display and driver technology has major advantages in performance and cost. A portfolio of more
than seventy patents protects ForthDD’s Time Domain Imaging (TDI) technology.
What advantages do your ferroelectric devices have over competitive devices? The biggest advantage is that the technology is all digital. It processes images in the time domain (TDI) on a single chip, without RGB subpixels, separate RGB beams and optics, and without tilting mirrors. This combination allows both amplitude and phase modulated imaging. It provides high native resolution and full 24-bit color for showing high-speed motion. The very fast switching characteristics of the ferroelectric LCD material (100 times faster than nematic LC) offer benefits in a number of applications. The most relevant of these to Forth Dimension Displays is the ability to produce high performance, color sequential displays where it has major advantages in performance and cost. The technology is well-matched to the new LED and laser diode light sources. In addition, there are cost advantages: the single chip has no moving parts, so it is built using standard CMOS wafer processes. The absence of separate RGB light paths enables customers to use simpler, lower cost optics in their system integration.

On the left is a cross-section of a liquid crystal-based microdisplay in operation. On the right is one of ForthDD’s microdisplay solutions. The company is focused on producing high-performance displays for near-to-eye applications such as Head Mounted Displays (HMDs), which are often used to simulate scenarios that may be too dangerous or expensive to replicate in the real world. ForthDD is the world’s leading supplier of microdisplays into high-end immersive training and simulation HMDs.
http://www.veritasetvisus.com
69
You recently made some sizable staff reductions as a result of a strategic decision to shift the focus of your
business. Tell us more. We decided that the prospects of success in the rear projection TV market were being
determined more by the price reductions in LCD TV than by the ability of Forth Dimension Displays to meet the
product specifications. Price decreases in LCD TVs have been far greater than any analyst forecast and this made it
very difficult to compete with a “high performance, value” RPTV product proposition. Forth Dimension Displays
already had an established reputation as the leading supplier of premium, high native resolution microdisplays in
training and simulation systems for military and aerospace customers. The company’s business is expanding with
products to customers in areas such as:
• Confocal microscopy and image injection for medical diagnostic and surgical systems
• Digital printing and imaging systems
• High-resolution industrial metrology and process systems
• Advanced 3D and holographic imaging systems
So the decision was made to drop the RPTV market and focus on those markets where the prospects were better.
Given your decision to withdraw from the rear projection TV market, can you share your thoughts about
the future of RPTVs? A quick review of the news and forecasts from the RPTV market, since we withdrew,
quickly shows that the pressure from LCD TV has continued to drive forecasts down and cause problems for those
companies continuing to focus on that market. It is going to be very difficult for RPTVs to compete in anything
other than the largest sizes (55 inch+) and emergent areas (e.g. 3D TV). Without some radical breakthrough, there
seems little future for RPTV in the mainstream 36-42-inch diagonal TV market.
Please share your opinions about the new class of “pico-projector” products. The pico-projection business has the prospect of being a large market in terms of unit volumes; the challenge will be achieving profitable manufacture of microdisplays/microdisplay chipsets at the low prices at which they will be sold.
So you’re now focusing all of your efforts on high-resolution near-to-eye devices. How big do you see this
market? It is very difficult to know, as there is little good market data and it depends largely on whether you
perceive that high-resolution near-to-eye (NTE) devices will ever penetrate the consumer market in high volumes.
You are a fabless company, but still have semiconductor integration capabilities. Please tell us how your
supply chain works. Actually, we are not really “fabless” but “partially fabless”; we receive silicon wafers
manufactured on our behalf by a silicon foundry (the fabless bit) but do all subsequent processing (coating,
laminate assembly, cell filling, mounting etc.) within our own Dalgety Bay manufacturing facility. This gives us a
lot more flexibility and control versus trying to use a totally fabless approach and is one of our core strengths.
Although ForthDD does not produce its own silicon wafers, its facility in Dalgety Bay, Scotland does all the processing
(coating, laminate assembly, cell filling, mounting, etc), providing advantages related to quality and scheduling
What is your current production capacity? Currently around 20,000 microdisplays per annum but we can
increase capacity in Dalgety Bay to over 100,000 per annum should the market demand be there.
In terms of improving performance, is there one area in which you are focusing your development efforts?
The technology already performs extremely well in our key applications, so we are focused on making small
improvements across the board (while trying not to introduce negative side effects) and reducing cost of ownership
to allow our customers to expand their markets.
Your current solutions are at 1280x1024 pixels. Do you see a need to move to higher resolutions? Yes, we
expect to move from the current 1.3M pixel displays to 2M pixels and beyond.
What are the pitfalls in moving to higher resolutions? Is it more than just a larger die size? The key
challenges include the larger die size (or reduced pixel size) and the high data rates required. A high refresh rate
(120Hz), 2M pixel display requires around 10 Gbits/second to be delivered to the display.
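As a sanity check on that figure, the raw video payload can be computed directly. This is a rough sketch: the exact link rate depends on blanking, interface encoding, and how the color-sequential bitplanes are scheduled, for which an overhead factor of roughly 1.6 is assumed here.

```python
def video_data_rate(pixels, refresh_hz, bits_per_pixel, overhead=1.0):
    """Uncompressed video data rate in bits per second."""
    return pixels * refresh_hz * bits_per_pixel * overhead

# Raw payload for a 2M-pixel, 24-bit display refreshed at 120 Hz
raw = video_data_rate(2e6, 120, 24)            # 5.76 Gbit/s
# With an assumed ~1.6x allowance for blanking and link encoding,
# the required link rate approaches the ~10 Gbit/s quoted above.
link = video_data_rate(2e6, 120, 24, overhead=1.6)
print(f"raw: {raw / 1e9:.2f} Gbit/s, link: {link / 1e9:.2f} Gbit/s")
```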
What are the most promising applications for high-resolution near-to-eye devices? Forth Dimension Displays
is the clear global market leader supplying high-resolution microdisplays for near-to-eye (NTE) devices in the
training and simulation market and, right now, this is the best market for us.
Do you see 3D as a big opportunity for Forth Dimension? It already is. We supply a lot of our systems for use
in binocular, stereoscopic head mounted displays.
Tell us one of your favorite customer satisfaction stories. I
would prefer not to put words in our customers’ mouths – and
suggest you contact Marc Foglia of NVis. We contacted Mr.
Foglia, who provided these insights about ForthDD:
“ForthDD has been our supplier for microdisplays since our
company was founded in 2002, enabling NVIS to build an
entire product line of high-resolution head-mounted and
hand-held displays. While most microdisplay suppliers turn
away low volume manufacturers, ForthDD (then CRL
Opto) welcomed the opportunity to work with us. Over
time, great suppliers start to feel more like partners, and
ForthDD always treated NVIS as a partner. They made it
clear to us that our success was an important part of their
business. This was evident in their responsiveness to our
requests for technical information, documentation, and at
times, demanding delivery schedules. As a small
manufacturer, our ability to support our customers is often
tied to our suppliers’ support for us, and in this capacity our
relationship with ForthDD has been vital to our success.
We see a bright future together with ForthDD as both our
businesses grow.” http://www.nvisinc.com.
The NVis nVisor ST uses ForthDD’s high-resolution ferroelectric liquid crystal on silicon.
The illumination scheme includes an RGB LED
mounted on the top-face of a polarizing beam
splitter prism. The microdisplay is illuminated by
the light reflected off the polarizing beam splitter
surface. Color is generated by the LED using an
advanced color sequential algorithm that rapidly
switches between red, green, and blue light which
is synchronized with the pixels on the LCoS device
to generate a 24-bit color image.
Given your earlier financial troubles, when do you expect to reach profitability? We have not had any
financial issues since the formation of CRLO Displays (later Forth Dimension Displays) in September 2004. We
have always had positive cash in the bank and have very supportive investors/owners. We expect to achieve break
even in late 2007 and move into sustained profitability in 2008.
Please describe what you think Forth Dimension will look like three years from now. I would expect that we
have grown substantially, are consistently profitable and cash generative and have a value that justifies our
investors’ belief and investment in us.
Interview with Ian Underwood from MED
Ian Underwood is CTO and a co-founder of MED as well as co-inventor of its P-OLED
microdisplay technology. Prior to 1999 he was at The University of Edinburgh where he
carried out pioneering research and development in the field of liquid crystal microdisplays
between 1983 and 1999. He is a Fulbright Fellow (1991), Photonics Spectra Circle of
Excellence designer (1994), British Telecom Fellow (1997), Ben Sturgeon Award winner
(1999), Ernst & Young Entrepreneur of the Year (2003), Fellow of the Royal Society of
Edinburgh (2004), and Gannochy Medal winner (2004). He is recognized worldwide as an
authority on microdisplay technology, systems and applications. In 2005, Ian was named
Professor of Electronic Displays at The University of Edinburgh. In addition to his full-time post at MED, he sits on the Council of the Scottish Optoelectronics Association and the Steering Committee of ADRIA (Europe’s Network in Advanced Displays). He is co-author of a recently released book entitled Introduction to Microdisplays.
Please give us some background information about MED. MicroEmissive Displays
(MED) is a leader in polymer organic light emitting diode (P-OLED) microdisplay technology. The company was
founded in 1999 and has developed a unique emissive microdisplay technology by using a P-OLED layer on a
CMOS substrate. In late 2004, MED floated on the Alternative Investment Market of the London Stock Exchange
(AIM) following a fourth successful funding round, which raised £15.7M. Funding has been used for proof of
principle, technology development and establishing pre-production facilities in Edinburgh, culminating in the first
product release and commercial shipments of MED’s “eyescreen” microdisplays in December 2005. MED has been
awarded ISO 9001:2000 registration for the research, design, development and marketing of digital microdisplay
solutions and is working towards full accreditation in 2007. MED is headquartered at the Scottish Microelectronics
Centre, Edinburgh, Scotland and its manufacturing site is in Dresden, Germany. The company employs 62 people
and also has sales representatives and applications support located in Asia, the USA and Europe.
Do you regard yourselves primarily as a display company or as a semiconductor company that happens to be
making displays? MED is a displays company whose displays happen to use a CMOS active matrix backplane.
So, like all microdisplay companies, our manufacturing and cost base is very semiconductor-like.
Please provide an overview about your technology. MED’s eyescreen products are the world’s only polymer
organic light emitting diode (P-OLED) microdisplays. The full color eyescreen combines superb TV quality
moving video images that are free from flicker, with ultra-low
power consumption, enabling greatly extended battery life for the
consumer. This enhancement in battery usage time made possible
by the eyescreen will play a vital role in the widespread adoption of
portable head-sets for personal TV and video viewing in the
consumer marketplace. The design of the eyescreen, with its
integrated driver ICs and its digital interface, offers product design
engineers a robust design-in solution for smaller, lighter weight,
stylish products of the future, all for a size comparable with the
pupil of the human eye.
You are currently very close to offering a complete “display on
a chip” in a CMOS process. What remains to achieve this goal,
and what advantages are derived from offering a complete
solution?
Display-System-on-Chip (DSoC) means that the microdisplay component is the only high-value or active component required. MED’s eyescreen microdisplays offer emissive operation, which is equivalent to having an “integrated” backlight. The use of a CMOS backplane allows the functionality of the display driver IC to be integrated. The display has a high level of integrated configurability, such as brightness control, image orientation, frame rate, switching between digital data formats, down-scaling of the incoming data stream, etc.

MED builds its display devices on a CMOS active matrix device. This photo of a wafer shows how more than a hundred devices can be manufactured on a single wafer.
More generally, what are the primary advantages of OLED microdisplays as compared to LC
microdisplays? The primary advantages are:
•
•
•
Lower power equating to longer battery life
Higher contrast equating to a more vivid image
Higher pixel fill factor equating to higher perceived image quality
In 2004, your display was listed in the Guinness Book of World Records as the world’s smallest color TV
screen. Is this still true? Do you have plans to make even smaller displays? MED’s original display was the
ME3201 (320x240 monochrome). The backplane of the ME3201 was used to create a color microdisplay –
ME1602 (160x120 color) by applying color filters over a 2x2 array of monochrome pixels to create a single color
pixel. ME1602 made it into the Guinness Book of World Records in 2004 and 2005. But Guinness has more
records than they are able to put into the book each year so MED has not appeared in the book since 2005.
As a CDT licensee, does CDT’s polymer OLED development work in large area displays translate without
problem to your microdisplays? CDT is a developer and licensor of generic IP in polymer OLED technology.
MED and CDT have worked very closely together to ensure that MED achieves the best possible implementation
of that IP in its field of application.
OLEDs generally face problems related to barrier layers to protect from moisture and oxygen. Do you face
these same problems, or is it actually much simpler to adequately protect a microdisplay versus a larger
display? All OLEDs must be encapsulated in order to ensure reliable performance by protecting the OLED layer
from the detrimental effects of atmospheric oxygen and moisture. MED has developed an encapsulation strategy
that is appropriate for, and compatible with, P-OLED microdisplays.
In terms of definition, you currently show off 320x240 pixels. Since this is less than even standard TV, are
you under any pressure to increase the resolution? 320xRGBx240 (QVGA color) is a typical definition for low-cost, low-power consumer video glasses. Viewing TV or video content from a personal DV player or iPod, users
are normally satisfied with that. Even those who would prefer
say VGA may not readily accept the additional cost, bulk and
power consumption.
What size do you typically achieve with regard to the
“virtual” image? Does magnifying to larger sizes diminish
the image quality? In other words is there some “sweet
spot” related to device size and virtual image size? The
virtual image is best described in terms of “Field of View” –
the angle subtended at the eye by the diagonal of the image.
The norm is to make the FoV as large as possible without the
individual pixels becoming resolvable. (If individual pixels
can be resolved, this reduces the perceived quality of the
image). The appropriate FoV depends on a number of factors
relating to the display, the system and the application; these
include display definition and pixel fill factor. For eyescreen
ME3204 the appropriate FoV is typically around 20 degrees.
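As a back-of-the-envelope illustration, assuming the ME3204’s QVGA pixel array (roughly 400 pixels corner to corner; the 1 arcmin figure for eye acuity is a textbook value, not from the interview):

```python
import math

diag_px = math.hypot(320, 240)      # QVGA diagonal: exactly 400 pixels
fov_deg = 20                        # typical FoV quoted above
arcmin_per_px = fov_deg * 60 / diag_px
print(f"{arcmin_per_px:.1f} arcmin per pixel")
# A 20/20 eye resolves about 1 arcmin, so whether individual pixels
# are actually visible at this FoV also depends on pixel fill factor
# and the blur of the magnifying optics, as noted above.
```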
MED’s tiny eyescreen microdisplay with a 6 mm (0.24-inch) diagonal pixel array can be combined with magnifying optics to produce a large virtual image that appears to the eye to be equivalent in dimensions to the picture on a TV screen or computer display.
Your devices are entirely digital with no analog interface. Tell us about what this means in terms of
cost/performance. The future is digital. MED has implemented two interface possibilities into eyescreen ME3204
– CCIR 656 and serial RGB. An all-digital signal path maintains flexibility, reduces power and maintains best
possible image quality. In the case of an application where the data source is analog, e.g. composite video, a low
cost/power video pixel decoder can be used to convert incoming data to CCIR 656.
Your eyescreen devices have recently been showcased in systems that utilize Qualcomm’s MDDI protocol.
Why is this important and do other mobile standards (such as the MIPI standard) provide similar results?
MDDI (Mobile Digital Display Interface) allows an all digital signal path and has provision for driving an external
display. This realizes all of the benefits of eyescreen from a cell phone. The MDDI/eyescreen demo runs from the
cell phone battery – it does not require an external battery box.
Tell us about the markets you intend to address. MED is aiming specifically at consumer markets with existing or potential high volume. Our first target is video glasses, and we also plan to target electronic viewfinders for
applications including digital cameras, video cameras and night vision systems.
Mobile TV is still something of a question mark. Please give us your thoughts about this market. 3G took off
in Korea and Japan, migrating to Europe then USA and onwards. Similarly, mobile TV is now taking off in Korea.
If mobile TV takes off, what will entice users to consider near-to-eye devices that incorporate MED
microdisplays? Considerations such as:
• Enhanced viewing experience in any environment (e.g. bright sunlight or fluorescent light); the NTE device can be configured to block ambient light
• Larger image than that available from a cell phone, iPod or other pocketable device
• Privacy (no one can look over your shoulder)
• Consideration for others (what you are watching does not disturb the person sitting next to you in a plane or train)
Tell us about your work related to developing 3D video glasses? MED worked with the EPICentre at the
University of Abertay and Thales Optics (now Qioptiq) to develop a stereoscopic 3D headset using eyescreen
microdisplays. That project, called EZ-Display, was sponsored by the UK Department of Trade and Industry and
finished in 2006.
One of the historical issues associated with near-to-eye devices is related to nausea. Add 3D to the equation and it seems like you’ll need to add “sanitary bags” to your bill of materials. What sorts of things
can you do to minimize these inner-ear problems? MED is not a developer and manufacturer of video glasses.
Optimization of the end product to provide a comfortable viewing experience rests with the system manufacturer.
If someone already wears glasses, do they need to wear prescription video glasses? MED is not a developer
and manufacturer of video glasses. Some video glasses can be worn over prescription spectacles and some cannot;
some have focus adjustment and some do not; some could incorporate custom prescription lenses and some cannot.
Your new manufacturing center in Dresden, Germany is a big step forward. When do you expect to have
commercial products ready to ship from the facility? On 24th July 2007, MED announced that it had made its
first production shipment from its Dresden manufacturing facility on schedule.
Why did you choose to build in Dresden rather than in Edinburgh or elsewhere? Dresden was an ideal
selection because it has the Fraunhofer IPMS at its heart and is at the very forefront of electronic innovation. We
are very proud to be part of Silicon Saxony and are looking forward to sharing our success in such a vibrant
technological and cultural center.
Twenty Interviews
Volume 2 just released!
Interviews from Veritas et Visus newsletters – Volume 2
+ 21st Century 3D, Jason Goodman, Founder and CEO
+ Add-Vision, Matt Wilkinson, President and CEO
+ Alienware, Darek Kaminski, Product Manager
+ CDT, David Fyfe, Founder and CTO
+ DisplayMasters, David Rodley, Academic Coordinator
+ HDMI Licensing, Les Chard, President
+ JazzMutant, Guillaume Largillier, CEO
+ Lumicure, Ifor Samuel, Founder and CTO
+ Luxtera, Eileen Robarge, Director of Marketing
+ QFT, Merv Rose, Founder and CTO
+ RPO, Ian Maxwell, Founder and Executive Director
+ SMART Technologies, David Martin, Executive Chairman
+ Sony, Kevin Kuroiwa, Product Planning Manager
+ STRIKE Technologies, David Tulbert, Founder
+ TelAztec, Jim Nole, Vice President – Business Development
+ TYZX, Ron Buck, President and CEO
+ UniPixel Display, Reed Killion, President
+ xRez, Greg Downing, Co-founder
+ Zebra Imaging, Mark Lucente, Program Manager
+ Zoomify, David Urbanic, Founder, President, and CEO
78 pages, only $12.99
http://www.veritasetvisus.com
Shovelling data
by Adrian Travis
Educated at Cambridge University, Adrian Travis completed his BA in Engineering in 1984,
followed by a PhD in fiber optic wave guides in 1987. With his extensive optics experience, he
is now an internationally recognized authority on flat panel displays. He is a Fellow of Clare
College and lectures at Cambridge University Engineering Department. He is the inventor of
the “Wedge” and is now working to commercialize the product through the Cambridge spin-out company Cambridge Flat Projection Displays, Ltd. (CamFPD). Adrian is also a co-founder and the chief scientist for Deep Light, a company that is planning to commercialize
a high-resolution 3D display system.
If we are serious about getting true 3D, then we need displays which modulate light in
both azimuth and elevation and this implies enormous data rates. Suppose we want 100
views in azimuth and elevation; then each pixel has to be in effect a miniature video
projector with 100 by 100 pixels so our data rates will be 10,000 times those of a
conventional flat panel display.
This produces a dilemma: users want displays to be big, but big displays struggle to handle such high data rates
because RC time constants tend to keep row addressing times at between two and four microseconds. Silicon
microdisplays such as TI’s DMD and the ferroelectric LCoS devices from Displaytech and Forth Dimension Displays easily manage binary frame rates in the range of 2-5 kHz, but then they are too small.
The obvious solution is to use projection but instead of one microdisplay, we will need 100 microdisplays running
at 5 kHz if we are to increase data rates by a factor of 10,000 and that sets out the starting conditions for the 3D
designer – how now are we to connect all these devices?
RISC (Reduced Instruction Set) processors brought important advances in computing when processor architects
realised that they did better by making their devices simple in order to increase clock speed, and perhaps the same
might help with microdisplays. After all, both micro-mirrors and ferroelectric liquid crystals can switch at 40 kHz
or so when addressed at several volts and the frame rate tends instead to be limited by the number of lines times the
line-address time. Suppose that we make a “Reduced Microdisplay” by lowering the number of lines from 500 to
100; then the frame rate might get up towards 30 kHz which equals the line rate of a conventional 2D display with
500 lines. The significance of this is that we could display 3D images line by line, and this might greatly simplify
the optics.
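The line-rate argument can be sketched numerically, assuming a 60 Hz refresh for the conventional panel (the article implies this but does not state it):

```python
lines_2d = 500
refresh_hz = 60
line_rate = lines_2d * refresh_hz      # 30,000 lines/s = 30 kHz

# A "reduced" 100-line microdisplay reaching a 30 kHz frame rate can
# therefore deliver one complete line of the 3D image during each
# line period of an equivalent 500-line 2D display.
reduced_frame_rate = 30_000
print(f"line rate of 2D panel: {line_rate / 1e3:.0f} kHz")
```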
Our ideal 3D display should have a wide field
of view and 100 views should suffice to span
100º, but lenses with that angular range tend to
be bulky and expensive. A ball lens, for
example, can collimate light from any angle
because of its spherical symmetry but a ball lens
the size of a television would be unthinkable.
However, if we are displaying images line by
line, then our optical system need be only one
pixel thick so the ball lens can be replaced by a
disc as shown:
Stephen Benton used spinning prisms to synthesize his famous holographic video line by line and is reported to
have been eager to get away from moving parts. But line-scanning displays have been demonstrated by several
independent teams as a strategy for simplifying the display of 2D images.
A particularly successful concept seems to have been
that in which the emission from a line projector was
passed into a slab light-guide, then piezo-electrics were
used to push gratings into the evanescent field, one line
at a time. The gratings eject the light from the light-guide and need only move a few microns to switch on
and off which sounds easy until one looks through a
right-angled prism and tries to push an object in and
out of the evanescent field at the hypotenuse. Except in
dust free conditions, most solid objects cannot get close
enough while fluids such as water work only too well
and are reluctant to release. This brings me to the
reason why I wrote this article: a few days ago, I tried a
pencil rubber and that works fine, even in my filthy
laboratory.
3D camera for medicine and more
by Matt Brennesholtz
Matthew Brennesholtz is a senior analyst at Insight Media. He has worked in the display
field since 1978 in manufacturing, engineering, design, development, research, application
and marketing positions at GTE/Sylvania, Philips and General Electric. In this time he has
worked on direct view and projection CRTs, oil film light valves and systems, TFT LCD
microdisplays and systems, DMD systems and LCoS imagers and systems. He has
concentrated on the optics of the display itself, the display system and display
instrumentation. He has also done extensive work on the human factors of displays and the
relationship between the broadcast video signal and the image of that signal as displayed.
More details on this promising market segment and many other conclusions are included in
Insight Media’s “3D Technology and Markets: A Study of All Aspects of Electronic 3D
Systems, Applications and Markets”. This article is reprinted with permission from Insight
Media’s Display Daily on August 8, 2007.
On August 8, Avi Yaron and Joe Rollero of Visionsense visited Insight Media headquarters in Norwalk,
Connecticut to demonstrate their 3D camera technology. This micro-miniature camera technology can be expected
to appear in medical equipment, primarily for minimally invasive surgery (MIS), in mid-2008.
3D MIS has been well received by doctors, at least in principle. According to Yaron, the Intuitive Surgical system
has been especially well received. Unfortunately this system is very large and very expensive, even by the
standards of medical equipment, which has limited its sales and penetration into the minimally invasive surgery
market. Currently, minimally invasive surgery represents only about 15% of all surgery. One of the limits on MIS
is based on the difficulty of doing surgery with only 2D images.
The basic Visionsense technology uses a single sensor, which can be a CMOS or CCD imager. Their “Punto”
detector, shown in the photo with a camera containing the detector, has a diagonal of 3.3 mm. The sensor has a
microlens and a color filter array applied to it in a manner much like an
autostereoscopic LCD panel. In an autostereoscopic display, this produces two
or more “sweet spots” for the pupils of the eyes to receive different images. In
the camera configuration, the system essentially runs backwards and the two
sweet spots become the interpupillary distance for the 3D camera. This single
sensor approach can produce very small cameras. The relatively large physical
size of other 3D camera offerings with two sensors for medical applications has
limited their use in MIS. Visionsense has five granted patents on the technology
plus numerous other patents pending, according to Yaron.
This arrangement provides an interpupillary distance of about 1/2 to 2/3 the imager diagonal. This provides stereo
images out to about 20 to 30 times the interpupillary distance, or in the case of the Punto chip, out to an inch or
two. While this is enough for many MIS applications, when it is not enough there are two approaches to increase
the stereo range. First, you can use a larger image sensor, and Visionsense is working on a high-resolution sensor
6.8 mm in diameter. If that doesn’t provide a large enough interpupillary distance for an application, prisms can be
used in the pupil plane to separate the two pupils as necessary.
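The geometry described above can be sketched as follows; the fractions and multiples are the rough ranges quoted in the text, not exact device parameters:

```python
def stereo_range_mm(diagonal_mm, ipd_fraction, range_multiple):
    """Rough stereo working range for a single-sensor 3D camera.

    The effective interpupillary distance (IPD) is a fraction of the
    imager diagonal, and useful stereo imaging extends to some
    multiple of that IPD.
    """
    return diagonal_mm * ipd_fraction * range_multiple

# Punto sensor, 3.3 mm diagonal:
lo = stereo_range_mm(3.3, 1/2, 20)   # ~33 mm
hi = stereo_range_mm(3.3, 2/3, 30)   # ~66 mm
print(f"stereo range ~{lo:.0f}-{hi:.0f} mm, i.e. an inch or two")
```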
Visionsense is developing the camera sensor, the camera module (including optics and electronics), and support software.
They demonstrated for me both pre-recorded 3D video of actual medical procedures and live 3D video coming
from a sample camera containing the Punto sensor. Yaron emphasized that Visionsense was display technology
neutral and the final display for any medical instrument using Visionsense technology would be chosen by the
Visionsense customers, not by Visionsense itself. Their technology has been used as an image source with
http://www.veritasetvisus.com
78
Veritas et Visus
3rd Dimension
September 2007
MacNaughton, Planar and Philips 3D displays and could be used with any other 3D display technology as well.
They have also used single and dual projector installations for demonstrations at medical trade shows, for example.
One interesting software tool Visionsense has developed is called Image Fusion. This tool takes an image from a
3D source such as an MRI or CAT scan and warps it to overlay the live 3D image from the camera. The system
shows the fused image on the surgeon’s 3D monitor. This would allow the surgeon to see, for example, how far
away his tools are from the spinal cord while doing disc surgery, even if the cord is not yet visible in the camera
image.
After leaving Insight Media, Yaron and Rollero were heading up to Boston for several scheduled meetings. While
Yaron would not disclose any customers or potential customers at this point, he said it was necessary to visit both
potential Visionsense customers in the medical equipment business and potential end users in hospitals. Medical
equipment manufacturers are unlikely to commit to designing and building a piece of equipment using new
technology until the concept and the design have been blessed by doctors. Therefore, it’s necessary for Visionsense
to visit end users as well as medical equipment manufacturers.
No medical equipment is currently on sale using Visionsense cameras. Yaron expects this to change in mid-2008,
however. At that point he expects FDA-approved medical equipment containing Visionsense technology to appear
on the market. While Visionsense is currently focused on medical systems, Yaron eventually expects there will be
non-medical applications for the technology as well.
>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<
3D isn’t so easy
by Chris Chinnock
Chris Chinnock is the founder and senior analyst at Insight Media. He serves as
managing director for Insight Media’s operations in newsletters, reports, consulting and
conferences as well as new business development. He combines a broad background in
display-related consulting with 15+ years in a variety of engineering, management, and
business development positions at MIT Lincoln Labs, Honeywell Electro-Optics, GE
AstroSpace and Barnes Engineering. Chinnock has a Bachelor of Science degree in
Electrical Engineering from the University of Colorado. This article is reprinted with
permission from Insight Media’s Display Daily on August 29, 2007.
I am here in Berlin to attend IFA, but also to attend a small 3D event focused on
stereoscopic 3D (S-3D). So far, I have learned some interesting points and seen some
interesting demos. I am becoming more and more bullish on the 3D markets, but the
show has also reminded me how difficult doing 3D can really be and the dangers the
industry faces if it does not do it right this time.
First the bullish part: big players are starting to realize that 3D technology is maturing and that viable applications
and market opportunities exist. Texas Instruments is using the upcoming CEDIA trade show to highlight its 3D
DLP rear projection TVs to the home theater crowd. That’s pretty mainstream. At our upcoming 3D BizEx event,
just look at who is speaking, sponsoring or supporting the event: Sony, Mitsubishi, Philips, Dolby, RealD,
Paramount Pictures and Dreamworks Animation. These are important and major players. 3D digital cinema is here,
TV is coming and a host of consumer-oriented 3D products are likely in the next few years. 3D may indeed be the
next big wave in consumer electronics.
That’s the bullish part, now for the bear’s view. First and foremost is the danger that we will see another 3D
“bubble” that will founder on poor implementations amid high expectations. This happened in the 1950s in cinema
as Hollywood rushed to embrace 3D only to see it abandoned very quickly due to shortcomings in the technology
and resultant image quality.
Today’s 3D cinema technology is vastly superior and image quality is quite good. But creating a good 3D movie
takes a lot of effort and special skills to add just the right “Dimensionalization” on a scene-by-scene basis. There
are skilled people to do this, but there are also more than a dozen studios gearing up to do 3D movies. Is there
enough talent and training available to ensure these projects produce high quality content? A bad 3D
implementation of a first run movie can have a disproportionate impact on the impression it creates with the
viewing public about 3D.
Another area of concern is content conversion from 2D to 3D. Games and other computer generated graphics
content with a 3D database behind them could fare well in conversion to stereoscopic 3D. But so far, gaming
content developers have merely converted 2D games to 3D games. What is needed are 3D games that take
advantage of the ability to hide or reveal objects that may not be visible in a 2D version. This will create a real
reason to own a 3D game.
Converting 2D still images to S-3D is not too hard, but converting 2D video is tough. Doing it offline where
professionals can adjust and tweak is the preferred, and expensive, approach but there is a huge pull to do it
automatically in real time. Most of the real-time demos I have seen of 2D to 3D conversion have not been very
compelling. In fact, some are not good at all. We need to be careful in how fast we roll out these solutions so as not
to create bad impressions of 3D that will take years to reverse.
And let’s not forget the hardware implementation of 3D display systems. Even at trade shows dedicated to 3D, I
have seen demos that are of poor quality or even set up incorrectly. If the people who are trying to sell 3D can’t
configure it properly or create compelling demos, that’s a problem. Case in point: autostereoscopic 3D displays are
ones that do not require glasses to see the 3D effect. To do this, the technology requires that you trade off image
resolution in order to enable multiple viewing zones across a fairly wide field of view. One clear lesson with using
such displays is to limit content to low-resolution images such as icons or larger graphic elements.
At the S-3D event, I saw one demo showing SD-resolution video on an autostereoscopic display. As expected, the
video was so compromised it looked like it was out of focus. Also, the viewing zones were so narrow that it was
difficult to find and keep the image in full stereo.
Other demos showed large rainbow patterns and similar difficulties in visually acquiring the image. Another
common mistake is reversing the left and right images when coupled to the polarization filtering glasses. This
creates a stereo image, but it looks funny and will create eye strain. How can manufacturers ensure this doesn’t
happen? There are no standards or methods that I know of.
And, 3D needs to recreate, as much as possible, the way we see the world. You cannot see stereo pairs when
looking at objects beyond 50 feet or so, so don’t try to add dimension to these long distance shots - it looks wrong.
And, when moving your head laterally around a stereo display, don’t maintain the same object orientation as you
move. That’s not how it works in the real world. Finally, making objects jump out at you may work in a theme park
3D experience, but not if you want to use the 3D display for extended periods.
The bottom line: while we have to solve the technology part, we can’t sell the technology to the consumer. It’s
about the application. Let’s stop being obsessed with the technology and focus on making the applications for the
technology work. Once it is easy to use and offers a clear benefit over 2D, 3D will be adopted. But let’s not be
too over-anxious to roll out 3D systems either. Bad implementations create a poor impression and a backlash that
could take years, maybe decades to reverse.
Tuesday, September 18, 2007

Session 1: 3D Public Displays
8:15 AM - Rob de Vogel, Sr. Director Business Creation, Philips 3D Solutions: Leveraging 3D Digital Signage into 3D Entertainment
8:40 AM - Jeremy Tear, Consultant/Partner, GGT Consultants Ltd.: Does 3D Advertising Increase Brand Retention?
9:05 AM - Keith Fredericks, CTO, Newsight: 3D Digital Signage Solutions
Panel Discussion - Moderator: Chris Yewdall, CEO, DDD

Session 2: Stereoscopic 3D for Gamers
10:10 AM - Neil Schneider, President & CEO, Meant To Be Seen (MTBS): Taking Stereoscopic 3D Gaming to the Next Level
10:35 AM - Tarek El Dokor, CTO, Edge 3 Technologies: New Ways in Game-Human Interface
11:00 AM - Richard Marks, Manager of Special Projects, Sony Computer Entertainment of America: TBA
11:25 AM - Panel Discussion - Moderator: Arthur Berman, Analyst, Insight Media

Session 3: 3D Digital Cinema
1:00 PM - Matthew Brennesholtz, Sr. Analyst, Insight Media: Prospects for 3D Digital Cinema
1:25 PM - Lenny Lipton, CTO, RealD: Next Steps in the 3D Cinema Revolution
1:50 PM - Dave Seigle, President/CEO, InThree, Inc.: Trade-Offs in 2D to 3D Conversion
2:15 PM - John Carey, Vice President of Marketing, Dolby Laboratories: Stereoscopic Technology Options for 3D Digital Cinema
3:00 PM - Aaron Parry, Executive Producer, Paramount Pictures: Challenges to 3D Filmmaking
3:25 PM - Jim Mainard, Head of Production Development, Dreamworks Animation: Authoring in Stereo: Rewriting the Rules of Visual Story Telling
Panel Discussion - Moderator: Chris Chinnock, President, Insight Media

Session 4: Novel 3D Technologies
4:10 PM - Thomas Ruge, Representative of the Americas, Holografika: Light Field Reconstruction Approach to 3D Displaying
4:35 PM - Alex Corbett, Sr. Engineer, Light Blue Optics: Steerable Holographic Backlight for 3D Autostereo Displays

Exhibits Open to Conference Attendees only 5:00-8:00; Networking Session 5:45-8:00
Wednesday, September 19, 2007

Session 5: 3D TV
8:15 AM - Arthur Berman, Analyst, Insight Media: 3D in the Home
8:40 AM - Chris Yewdall, CEO, DDD: 3D TV - Crossing the Chasm
9:05 AM - David Naranjo, Director, Product Development, Mitsubishi Digital Electronics America: Educating Retailers & Consumers on 3D TVs
9:30 AM - Nicholas Routhier, President, Sensio: Joining Forces: Is a 3DTV Consortium Needed?
Panel Discussion - Moderator: John Merritt, CTO, Merritt Group

Session 6: 3D Visualization
10:35 AM - Paul Singh, Albany Medical College: Minimally Invasive Surgery
11:00 AM - Ronald Enstrom, President & CEO, The Colfax Group: Geospatial Analysis
11:25 AM - James Oliver, Director VRAC, Iowa State University: Fully Immersive Ultra-High Resolution Virtual Reality
11:50 AM - Johnny Lawson, Director of Visualization, Louisiana Immersive Tech Enterprise: Bringing Immersive Visualization into the Light: The Challenges and Opportunities Creating the LITE™ Center
Panel Discussion - Moderator: André Floyd, Marketing Manager, Sony Electronics

Exhibits Open to Public 10:00 to 3:00 PM
Special Screening at Dolby (Buses leave for Dolby at 3:15 and 3:30 PM)
http://www.3dbizex.com
>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<
Selling to the market for 3D LCD displays…
by Jim Howard
Jim Howard has over 20 years OEM sales experience in the computer and consumer electronics industry. Having sold
to many of the major OEMs in the computer industry, Jim has a database in excess of 60,000 companies available for
customer work covering all the sales channels. Jim has worked for such companies as Canon and their computer
components division, Sharp Electronics, Panoram and others. Jim earned his MBA in Technology Management. He
has extensive sales experience in developing OEM accounts and sales channel strategies, sales implementation tactics
and Sales Plans. He and his family reside in south Orange County.
Selling 3D LCD display technology to the major OEMs and distributors, like any other computer product, requires
thorough, meticulous and thoughtful strategic and logistical planning, tactical execution and technical support. There
are several critical markets that are now seriously evaluating the implementation of 3D LCD displays:
• I recently had the opportunity to visit a lab where I saw a demonstration of 3D LCD displays, a 3D LCD TV and a 3D LCD gaming console. The images were astonishing. They appeared to jump out into mid-air, making for the most exciting video game realism I had ever seen.
• I know the director of a major medical facility here in Southern California who expressed definite interest in 3D LCD displays for their medical imaging applications. He said he thought this new state-of-the-art technology is important for 3D and 4D ultrasound and other applications.
• I know computer products distributors who are looking for the most advanced display systems available and who also represent the niche known as “specialty displays”. For them, 3D display technology is a no-brainer. It is not a matter of if, but when, they can launch new 3D products.
• In my travels and research I had the pleasure of meeting one of the VPs of a major multi-billion dollar aerospace company. He considers advanced 3D LCD and widescreen panoramic displays to be important for their government, military and aerospace applications.
• The gaming market has grown so fast that the computer industry is now taking it seriously, whereas previously it was regarded as just a small market for youth. It has become an established major market for consumers of all ages, and 3D is considered an essential part of it.
3D display technology is the next big thing: 3D is the next logical step in the technology cycle. Imagine the
excitement of a truly immersive experience in which you take your family to the theater and you watch an action
adventure movie with 3D images along with the great Dolby sound systems we have. Now you feel like you are
taking part in the action for the images are more real. Now imagine having one of these systems in your home – as
part of your home theater or home PC system. Soon you will be able to do just that.
Advice to OEMs planning on 3D technology: Now it is just a matter of time before OEMs take their technology and
form tactical plans to execute and build market momentum. It looks like the next CES show this January will be
more exciting than ever.
Æ Make sure you do the basics well. The basics are fundamental product fulfillment skills that deliver the
product to you quickly and in good working order. They are:
• A quality product line. The supplier should follow the products and trends and offer a range of quality products to meet the OEM customer’s particular needs.
• Responsiveness to inquiries and good communication. Nobody likes calling, leaving a message and then waiting a day or two for a call back, especially if they are thinking of spending money with you.
• Easy ordering. Your OEM customers don’t want to undertake an extensive treasure hunt to find the products they need. Also, how easy are you to do business with? A stack of required paperwork can be a disincentive, and OEMs prefer companies that are easy to work with, without costly delays.
• On-time order delivery. Slow product delivery can slowly usher an OEM or a supplier out of business. It is taken for granted as a no-brainer, yet it is one of the basics many competitors fail at and the leaders succeed at. OEM customers appreciate on-time delivery and show it with more orders.
• Fast, friendly and value-added service.
Æ Make sure you have the differentiators OEMs need. This is where the intangibles come in strongly.
OEM differentiators are those things that you do to make sure you present the best solution for the problem or
need for the OEM. Being good at these requires solution experience, technical knowledge as well as an
understanding of their specific needs and a strong relationship connection with the OEM – not just
understanding the needs of a similar customer. OEMs look for suppliers that differentiate themselves in terms of
business philosophy, experience and knowledge in their area of need, and post-sales practices and support.
3D maps – a matter of perspective?
by Alan Jones
In 1492, Columbus first set off to the west in an attempt to get to Asia. For centuries before and after there was a
debate about whether the Earth was round or flat. Columbus’ feats succeeded in adding fuel to the debate because
he did not circumnavigate the world and left open the possibility that the world was flat, if a little bigger. The Flat
Earth Society continued the debate until the late 20th century, when photographs taken from space provided a
rather uncomfortable truth for them that the world was indeed round.
But is it? I have been playing with Microsoft’s Virtual Earth and Google Earth. Both are relatively straight-forward
to use. I found the Google Earth interface a little easier to navigate while the Virtual Earth interface was what one
could describe as more lay-person friendly. From an image point of view both are rich in content although Google
Earth provides more support information which can become obtrusive if not turned off.
Spending what time I have with each of them has posed some thoughts and questions, which might generate some
comments from you, our readers. Take the Statue of Liberty as an example. A 2D image looks pretty convincing
that the Statue is indeed a 3D object. But is all we are seeing correct? The left-hand image is the “bird’s
eye” view option – a photograph taken with perspective. The center image is the map data, which gives the illusion that
the Statue is not there, although there is a shadow! If we rotate the image, as in the right image, then the Statue
comes into view, but where has it come from?
With the high profile that New York enjoys it is not really a surprise that a great deal of effort has gone into
maximizing the representation of the city. Not least of these efforts is the 3D modeling of the New York skyline,
including the Statue of Liberty. On the left is the Virtual Earth image which clearly shows the statue while on the
right is a similar Google Earth image – but oh, where is the Statue?!
If we go somewhere less glamorous, then what do we see? Blackpool was famous in the 20th century as a place for
“Sea, Sun and Fun”. It is positioned on the north-west coast of the UK and was a favorite summer haunt for the
mill-workers out of the Lancastrian cotton mills. In the hey-day of the mills, technology was defined by the
Spinning Jenny and Water Frame – much “beloved” by the Luddites! As a tourist attraction, Blackpool has a Tower
which is modeled on the Eiffel Tower in Paris. I have chosen this place because I suspect that there would be little
incentive to focus 3D imagery resources on such a small town.
In the image on the left, the Tower in Blackpool can just be made out. Rotate to the horizon and the Tower
has disappeared, but not the shadow!
A favorite place for both our publisher and me is Stonehenge. Trying the same test as before, in the Virtual Earth
images, we lose those magnificent Preseli Bluestones when we rotate to the horizon.
Virtual Earth views of Stonehenge are shown in these two images, but note the tilted view loses the sense of depth
The Google Earth team tried to preserve the depth image of the stones with a 3D overlay, but unfortunately they are
about 30 feet to the east of where they should be.
Google Earth images are shown in the above images, where height is maintained, but not the correct position.
Finally, let’s take a look at the highest place on earth, Mount Everest – depicted in Virtual Earth in the top two
images and in Google Earth in the bottom two images:
I guess there is so much depth involved in a photograph of Mount Everest that we cannot fail to be impressed by
the imposing images.
Summary: From my short time looking at these 3D maps, the conclusion is that we do not yet have the rich 3D
data source in the “real” world that we have in the virtual worlds created inside computers by games, medical
imaging, modeling, etc. However we should be confident that this need will be satisfied over time as the capability
to capture depth information with increasing detail is deployed in volume. What is not so clear is that we shall get
the compelling 3D displays that will allow us to exploit that rich information source as and when it becomes
available.
Postscript: Regular readers will know that I take pride in our old thatched cottage (built in 1766 – about the time
the Spinning Jenny was being invented!). The outside has been used many times in this and other sources, but what
does it look like in Virtual Earth and Google Earth?
But when we do a 3D rotation, there is no depth. Maybe I live on a Flat Earth after all!
Alan Jones retired from IBM in 2002 after 35 years of displays development, marketing and product management. He
was a frequent speaker at iSuppli and DisplaySearch FPD conferences.
PC vs. Console – Has the mark been missed?
by Neil Schneider
Neil Schneider is the president & CEO of Meant to be Seen (http://www.mtbs3D.com). He
runs the first and only stereoscopic 3D certification and advocacy group. MTBS is non-proprietary, and they test and certify video games for S-3D compatibility. They also
continually take innovative steps to move the S-3D industry forward through education,
community development, and member driven advocacy.
I’d like to point you all to this article I read online recently that has me feeling very
conflicted. It is an interview summary with Mr. Roy Taylor, nVidia’s VP of Content
Relations. (http://www.tgdaily.com/index2.php?option=com_content&do_pdf=1&id=33143).
There are three things happening here that don’t make a lot of sense to me:
1. He is claiming that players have switched to consoles over PCs for gaming.
2. He strongly believes that the PC innovation that will drive PC gaming market share
is high-resolution screens.
3. He thinks that it is acceptable for a good game to require a $20,000 (yes, twenty THOUSAND) dollar PC.
I found this article troubling for a number of reasons. First, and most importantly, it doesn’t make any business
sense to me.
Yes, console games are very successful, and they will continue to be very successful. Back in my day, computers
and consoles lived happily ever after with Coleco Vision, Atari, Commodore, Apple, Amiga, and so on. In fact, it
wasn’t the consoles catching up to computers, it was computers catching up to – and surpassing – the consoles! So,
like a fine wine connoisseur, there will always be a market for those who like to buy their white in a box, and their
red in a bottle. The PC market is the bottle market, and Mr. Taylor is at least correct in recognizing that.
Now, he is very much correct that to continue to reap the benefits of superior game developer attention, the PC
market has to differentiate itself from the console market. Unfortunately, his mindset is based on a myth that
believes a console can only plug in to the living room HDTV. I’m sorry, Roy, but monitors are not PC ONLY
equipment, and if nVidia’s marketing strategy is to say “Hey, we are after games that display on LCD panels”, they
are in for a shock!
In fact, I think this is a very dangerous strategy because it gambles the PC market’s success on a relatively boring
piece of equipment – the flat 2D monitor. It amazes me that the biggest idea the industry can think of is a higher
resolution. It just doesn’t strike me as a major breakthrough worth paying top dollar for now that HDTV is
commonplace.
The article hints at having PC games with extra levels and more artistic quality, but where is the innovation? Who
cares? Here’s the real problem - since when is $20,000 for a PC acceptable? Sure, if you want an Octo-SLI set-up
with a CPU farm rendering your video game in your garage while your wife is threatening to run you over as you
chant “serenity now, serenity now”, I guess that’s an option.
Suppose the hardware manufacturers do manage to sell a modest number of these $20K machines. Can you think of
a single game developer who would think to develop and market to such a small, boring market place?
Let’s face facts, the PC gaming market is the industry’s dirty little secret. While the average consumer may be
impressed by the words “Dual Core” or “Intel Inside”, it’s the video games that give customers the annual excuse to
upgrade their computer and feed the industry’s families. The PC dollar value has to be something that everyday
consumers can swallow, and still offer a competitive advantage over their console counterpart.
The good news is the solution is right under nVidia’s noses. For those of you reading this newsletter, it’s no
surprise that stereoscopic 3D (S-3D) is the thrilling technology used in 3D movie theaters like IMAX 3D, RealD,
and Dolby Labs. Everyone is jumping on board, including Dreamworks Animation, James Cameron, George Lucas,
and more. When millions of moviegoers see Star Wars in re-mastered S-3D, they won’t need to be educated on
what true 3D gaming is in video games.
With the exception of speed, nVidia’s only competitive advantage right now is their stereoscopic 3D support for
video games. While we are very excited to see that nVidia is continuing to develop these drivers, it’s time for
nVidia to put more public focus and private money into it. Their stereoscopic 3D development team is going to be
the lifeblood of that company much sooner than later, and they should have every resource needed to be successful.
It’s not just about nVidia. iZ3D has developed proprietary drivers that work on both nVidia and AMD/ATI graphics
cards, and they further support post-processing effects in 3D like never before seen.
If you spent $5,000 on a computer (which is high), and your neighbor spent $400 on his console, how are you going
to wow him when he visits your house? Give him a pair of 3D glasses, and he won’t be mowing the lawn for months. THAT’S
what the PC industry needs right now, and THAT’S what game developers want to hear. None of this rubbish about
high-resolution monitors that no one cares about.
Like oil riches being drained from the ground, the PC market understands that time is ticking for the next
defendable business breakthrough in gaming, but unlike the world’s energy crisis, the solution is right in front of
our eyes and in movie theaters across the country.
>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<
Veritas et Visus
Individual Subscriptions (10 editions)
Only $47.99
All five newsletters: (50 editions)
A bargain at $199.99
http://www.veritasetvisus.com
3D DLP HDTVs – all is revealed!
by Andrew Woods
Andrew Woods is a research engineer at Curtin University’s Centre for
Marine Science & Technology in Perth, Australia. He has nearly 20 years of
experience in the design, application, and evaluation of stereoscopic video
equipment for industrial and entertainment applications. He is also co-chair of
the annual Stereoscopic Displays and Applications conference – the world’s
largest and longest running technical stereoscopic imaging conference.
Samsung released their range of “3D Ready” DLP HDTVs in April, Mitsubishi released theirs in June, Texas
Instruments publicly revealed the 3D format that these HDTVs accept in late August, i-O Display Systems released
wireless 3D glasses suitable for these 3D HDTVs early this month, and PC software which supports the format of
these new 3D HDTVs is also becoming available. All the pieces are slotting together for high-quality stereoscopic
3D viewing, and the marketing push is beginning. (Photo: Mark Coddington)
Firstly, a short technology recap: these “3D Ready” HDTVs
are capable of displaying 120Hz time-sequential stereoscopic
3D images and video providing high-resolution flicker-free
3D viewing. The viewer wears a pair of liquid crystal shutter
(LCS) 3D glasses that switch in synchronization with the
sequence of left and right perspective images displayed on
the screen (at 120 images per second). All of the “3D ready”
displays are based on DLP technology from Texas
Instruments, offer either 1080p or 720p resolution, and all are
rear-projection TVs (of a new extremely slim design).
Equipment setup is quite simple – plug a pair of VESA 3-pin compatible LCS 3D glasses into the “3D Sync”
connector on the rear of the 3D HDTV. Connect the DVI output of an appropriate PC (running suitable 3D-capable
software) to “HDMI input 3” on the display (using a DVI-to-HDMI cable). Switch the display to “HDMI 3” (by
pressing the source button), and then enable 3D mode by pressing the “3D” button on the remote control.

Samsung “3D Ready” HDTV from http://product.samsung.com/dlp3d
For a full list of the “3D Ready” HDTV models
available from Samsung and Mitsubishi:
http://www.3dmovielist.com/3dhdtvs.html
There have been a number of 3D formats around for
some time (e.g. row-interleaved, column interleaved,
over-under, side-by-side, time-sequential), but the 3D
format that these 3D HDTVs accept is something
different – a checkerboard pattern. That is, the native
resolution image sent to the display consists of an
alternating checkerboard pattern of pixels from the
left perspective and right perspective images (as
indicated by letters “L” and “R” in Figure 1). The
display internally converts this checkerboard pattern into a 120Hz time-sequential image that is viewed using LCS
3D glasses.

Figure 1: 3D DLP checkerboard pattern for a 1920x1080 native resolution display
The reason for using the checkerboard pattern relates to the way that these displays work. They use a process TI
calls “Smoothpicture” or “Wobulation” to achieve a full-resolution image from a half-resolution DMD (Digital
Micromirror Display) panel. Each 60Hz full frame is displayed as two sub-frames (at 120 sub-frames per second).
This has similarities to the way that interlacing works but is also quite different in many ways. With reference to
Figure 1, in the first sub-frame all the “L” designated pixels are displayed, and in the second sub-frame all the “R”
designated pixels are displayed – at 120 sub-frames per second. The position of the image is wobbled between sub-frames so they are slightly spatially offset. This “wobulation” process was already being used in last year’s range of
DLP TVs, just for 2D display – as mentioned earlier, to achieve a full-resolution image from a half-resolution
panel. The DMD is perfect for this since it can switch between states extremely fast – it has no phosphor
persistence, it has an ultra-fast pixel response time, and can generate a black period. Someone in TI obviously
realized this feature set had possibilities for 3D display. Two discrete images, shown at 120 Hz, with a black period
– perfect for 3D! And this 3D function can be added at almost zero additional cost (over what was already being
done for 2D display in last year’s models). A match made in heaven!
TI recently published a paper (“Introducing DLP 3-D TV” by David Hutchison, available at
http://www.dlp.com/3d), which summarizes the 3D image format that these new displays accept. In that document
they have said that the checkerboard format “preserves the horizontal and vertical resolution of the left and right
views providing the viewer with the highest quality image
possible with the available bandwidth”. This is not entirely
true – the total resolution per eye is half that of the full native
display resolution. The “pixel” layout of each sub-frame is
shown in Figure 2. Each diamond represents a DMD mirror
– notice that they are rotated 45º relative to the normal
orientation for DMD mirrors. Also notice that the center of
each of these diamond mirrors corresponds with the pixels
for one eye in the checkerboard pattern (shown in light gray
in Figure 2). Now, due to the use of the checkerboard
pattern, it isn’t entirely straightforward how to describe the
reduced resolution. It is not half vertical resolution and it is
not half horizontal resolution. It is a bit of both. Perhaps it is 1/√2 in each direction. I’m sure someone will work this out eventually.

Figure 2: The diamond pixel layout, which produces each eye view – shown overlaid on top of the native input pixel layout (light gray)
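In fact the 1/√2 guess works out exactly: each eye receives half of the native pixel count, and if that loss is split evenly between the two axes, each axis scales by √(1/2) ≈ 0.707. A quick check (the "equivalent grid" framing here is mine, not the article's):

```python
import math

native_w, native_h = 1920, 1080
pixels_per_eye = native_w * native_h // 2   # each sub-frame lights half the mirrors
scale = math.sqrt(0.5)                      # split the loss evenly: 1/sqrt(2) per axis

eff_w, eff_h = native_w * scale, native_h * scale
print(pixels_per_eye)              # 1036800 samples per eye
print(round(eff_w), round(eff_h))  # about 1358 x 764

# Sanity check: the scaled grid holds exactly the per-eye sample count
assert abs(eff_w * eff_h - pixels_per_eye) < 1e-6
```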
An example of combining left and right perspective
images into a DLP 3D checkerboard image from the
TI DLP 3D white paper
In TI’s white paper they have also provided examples of how
to generate the checkerboard pattern using Adobe Photoshop
(for 3D still images) and AVIsynth (for 3D video).
However, the average user is unlikely to use either of these
techniques with their 3D HDTV (except for DIY users) and
it is highly unlikely that images or video will be distributed
natively in the checkerboard pattern (it simply doesn’t cope
with compression well). It is most likely that consumers will
use hardware or software that reformats 3D images on-the-fly into the checkerboard format. Three pieces of software
which currently support the DLP 3D checkerboard pattern
internally are Peter Wimmer’s Stereoscopic Player
(http://www.3dtv.at), DDD’s TriDef 3D Experience
(http://www.tridef.com), and Lightspeed Design’s DepthQ
Stereoscopic Media Server (http://www.depthq.com) - and I’m sure there will be more soon! These programs
accept 3D video (and stills) in a range of different conventional 3D image formats (e.g. over-under, side-by-side, or
field-sequential) and reformat to the DLP 3D checkerboard pattern in real-time.
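The on-the-fly reformatting these players perform can be sketched as follows. This hypothetical Python fragment (not the actual code of any product named above) converts a side-by-side frame, with each eye at half horizontal resolution, into the checkerboard pattern using crude nearest-neighbor widening:

```python
def sbs_to_checkerboard(frame):
    """Convert one side-by-side 3D frame (left eye in the left half,
    right eye in the right half, each at half horizontal resolution)
    into the DLP checkerboard pattern. Nearest-neighbor widening
    stands in for the better scaling real players would use."""
    rows, cols = len(frame), len(frame[0])
    half = cols // 2
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            src = c // 2                          # map full-width column back to half-width
            if (r + c) % 2 == 0:
                row.append(frame[r][src])         # sample from the left-eye half
            else:
                row.append(frame[r][half + src])  # sample from the right-eye half
        out.append(row)
    return out

# A 2x8 side-by-side frame: columns 0-3 are the left eye, 4-7 the right eye
sbs = [["l0", "l1", "l2", "l3", "r0", "r1", "r2", "r3"],
       ["l0", "l1", "l2", "l3", "r0", "r1", "r2", "r3"]]
print(sbs_to_checkerboard(sbs)[0])  # → ['l0', 'r0', 'l1', 'r1', 'l2', 'r2', 'l3', 'r3']
```

The other input formats (over-under, field-sequential) reduce to the same final interleaving step once the two eye views have been extracted.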
With regard to 3D glasses, i-O Display has recently announced an LCS 3D glasses pack specifically tailored for the
Samsung 3D DLP HDTVs (http://www.i-glassesstore.com/dlp3dsystems.html). Some of the packs also include
DDD’s new TriDef 3D Experience software. It is possible to use other LCS 3D glasses with these displays –
anything with a VESA 3-pin “3D Sync” connector should work. The venerable Crystaleyes and NuVision LCS
glasses can be made to work with these displays. There are also RF wireless LCS 3D glasses from Jumbo Vision
International: http://www.jumbovision.com.au. For those who are interested in knowing more about the VESA 3-pin “3D Sync” connector, more information is available here: http://www.stereoscopic.org/standard/connect.html.
The availability of 3D content to show on these displays is going to be the next question. There are over 30 field-sequential 3D DVDs available right now (see this list: http://www.3dmovielist.com/3ddvds.html). However, the
resolution of these DVDs is far below what these new 3D HDTVs are capable of. People will be yearning for high-definition 3D video content to show on these displays. Hopefully some of the newer digital 3D cinema releases, or
even some of the older 3D movies, will eventually be released on a 3D HD home-format – perhaps on
3D HD-DVD or 3D Blu-ray? For those wanting to show off their new 3D HDTV now with some 3D HD video
content, the “Dzignlight Stereoscopic Demo Reel 2007” is particularly good (available from:
http://www.dzignlight.com/stereo.html).
The largest source of 3D content in the short term will likely be stereoscopic games. No doubt nVidia is preparing
to add support for the DLP 3D checkerboard pattern to their “3D Stereo driver”, but DDD have beaten them to market
– the TriDef 3D Experience software mentioned earlier also allows a select number of consumer games to be
played in stereoscopic 3D on the Samsung 3D HDTVs.
It will be interesting to see how the availability of 3D content evolves over the coming months. Stay tuned!
Hopefully these displays will sell well – buy one yourself today!
>>>>>>>>>>>>>>>>>>>><<<<<<<<<<<<<<<<<<<<
97 pages of insights from industry expert Keith Baker about e-waste. Learn about regulations, activities, and
products related to the environmental impact of displays. http://www.veritasetvisus.com
Society of Motion Picture and Television Engineers
Pre-conference Symposium
STEREOSCOPIC PRODUCTION
Tuesday October 23rd, 2007
Stereoscopic 3D display technology has experienced dramatic advancements within the past few years. Installations of 3D
systems for digital cinema are fast approaching 1000 screens worldwide. In addition, 3D displays are being proposed for
consumer home theater, videogames and point-of-sale advertising. If stereoscopic displays begin appearing in consumer
homes for gaming and home theater, will 3D television follow?
This symposium will provide the broadcast, production, or cinema engineer with a roadmap for exploring the stereoscopic
production landscape, from acquisition to the latest projection systems. Leading industry experts will explain the core
technologies, applications and challenges of managing a 3D production pipeline. Stereoscopic projection will be liberally used
to illustrate the speakers’ presentations.
Symposium Committee
Pete Ludé, Sony Electronics, Inc., Editorial VP, SMPTE
Lenny Lipton, CTO, Real D, Conference Chairman
Tom Scott, OnStream Media, Program Director
Chris Chinnock, Insight Media, Featured Speaker
8:00 AM Continental Breakfast
Opening Session
8:30 AM
Bob Kisor, President, SMPTE: Welcome to the Conference
8:40 AM
Peter Ludé, Editorial VP, SMPTE: Conference Overview
8:50 AM
Lenny Lipton
CTO
Real D
The Stereoscopic Cinema Reborn: 3D films have been around since the invention of motion pictures, and have enjoyed passing waves of popularity. Is the recent activity in Hollywood just another fad or is there something fundamentally different this time?
9:10 AM
Chris Chinnock
President
Insight Media
Emerging 3D Display Technologies: This talk will enumerate the principal means of producing 3D images, for both projection and direct-view displays, and explain the operating principles of each. The advantages and disadvantages of each approach will be explored, and recently announced products and systems for consumer and cinema use will be explained.
Session 1A: Content Creation, Live Action
There are many challenges to real-world stereoscopic cinematography, both in terms of compositional considerations and camera design. The speakers are experts in both areas of this nascent art as applied to the stereoscopic digital cinema. 3D cameras require exquisite precision and coordination to produce quality images, and lately electronic correction has been part of the solution, as has rectification during post. Bleeding-edge concepts will be presented here.
9:40 AM
Chris Ward
President
Lightspeed Design
Group
An Advanced Beam-splitter Rig Using Meta-data: The beam-splitter rig has been adapted to a hi-def, meta-data-based system for maintaining image rectification during cinematography and through post-production. This talk will discuss the advances of such an approach to left/right image coordination.
10:00 AM
Jason Goodman
President
21st Century 3D
A New Concept in High Definition Camera Design: A compact hi-def camera-recorder is being designed to allow for image capture of uncompressed data. The camera has uses for both industrial and feature production.
10:20 AM
Vince Pace
President
Pace Technology
Live Action 3D Cinematography: The technology and methods for capturing, transmitting and projecting D-Cinema quality stereoscopic sporting events will be described.
10:40 AM Break
Session 1B: Content Creation, Synthesis and Computer Generation
Computer-generated imaging and conversion from planar to 3D have the promise of creating perfect stereoscopic images, unencumbered by the limitations of real-world cinematography. This is an art that can create perfectly controlled, beautiful stereo images. But it’s an art that must deal with movies that were conceived as 2D projects – at this moment. Our speakers include the 3D directors of several recent theatrical releases and have advanced the understanding of the creative elements of stereoscopic composition. Conversion from planar is also finding its place in the sun as techniques advance.
11:00 AM
David C. Seigle
President
In-Three
Dimensionalizing 2D Movies: This talk will provide a behind-the-scenes look at converting planar movies into 3D movies and dispel some of the myths about the process. It may well be that 2D as source material produces superior results to conventionally acquired two-view material.
11:20 AM
Rob Engle
Sony Pictures
Digital Effects
Supervisor
Imageworks
Creating Stereoscopic Movies from 3D Assets: CG animation and motion-captured movies have an intrinsic three-dimensional database, making production of a stereoscopic version of such material theoretically possible. But theory and practice depart. What are the practical considerations of making a 3D movie out of these assets?
11:40 AM
Phil McNally
Stereoscopic
supervisor
DreamWorks
Animation
Stereoscopic Compositional Concerns from Inception to Projection: Movies are usually not conceived from the get-go as being stereoscopic, and this leads to compromises when the stereo director handles the assets. What can be done to change this situation? An educational program has been instituted at DreamWorks Animation to teach everyone from writers to layout artists how to maximize the effectiveness of the 3D medium.
12:00 PM Panel Discussion Lenny Lipton, moderator
12:20 PM Lunch
Session 2: The Stereoscopic Production Pipeline
The stereoscopic production pipeline is being created at this moment. Visionary studio head Jeffrey Katzenberg has mandated
that all future DreamWorks Animation films will be in 3D. Disney has already taken the plunge, as has Sony Pictures
Imageworks. It is probable that future productions will use everything: CG animation, live action, and synthesis. But where and
what are the tools for the post production processes? That story will be told in this session.
1:30 PM
Mark Horton
Strategic Marketing
Manager
Quantel
A Solution for Stereoscopic Post-Production: Stereo acquisition and distribution are being widely discussed – but what issues must be addressed in post-production, and what are the possibilities? This tutorial focuses on the current workflows being used for stereoscopic post-production and then explores alternative methods. Practical examples are shown of multiple techniques, covering the most common issues faced and tools for handling them.
1:50 PM
Jim Mainard
Head of Prod. Dev.
DreamWorks
Animation
Pipeline – There Isn’t One for Stereo…: From Editorial to Post and most everything in between, we find ourselves creating a new pipeline and revising our toolset to author in stereo. These hazards and how to deal with them will be identified and explored.
2:10 PM
Buzz Hays
Senior VFX Producer
Sony Pictures
Imageworks
Stereoscopic Production Pipeline for VFX, Live Action, and Animation: A discussion of the Imageworks pipeline evolution from CG animation to live action and everything in between. The talk will include highlights of tools and processes with examples from SPI-produced films.
2:30 PM
Steve Schklair
Founder and CEO
3ality Digital Systems
Electronic Rectification Applied to 3D Camera Design: Shooting with the two camera heads that make up a single stereoscopic rig can be a daunting task in terms of producing images that are properly coordinated to a fine tolerance. This talk will concentrate not only on the means to achieve such coordination but also on the post-production tools that are required to follow through to create a quality stereoscopic image.
2:50 PM
Chuck Comisky
3D Visual Effect
Specialist
Lightstorm Productions
Motion Capture as a Stereoscopic Source on a $190M Budget: James Cameron’s new feature, Avatar, will be 60% motion capture, 20% stereoscopic cinematography, and 20% green-screen. How can all of these processes be coordinated to produce a unified look? This presentation will discuss that challenge.
3:10 PM Panel Discussion Buzz Hays, moderator
3:30 PM Break
Session 3: Stereoscopic Exhibition
A unique value-added of the digital cinema is the ability to project 3D images with a quality level not possible with 35mm
projection. In this session we'll review the different add-on technologies available to convert 2D digital projection systems into
3D projection systems.
4:00 PM
Michael Karagosian
President
MKPE Consulting LLC
The 3D Cinema: A high-level review of the various stereoscopic digital cinema systems.
4:20 PM
Dave Schnuelle
Senior Director, Image Technology
Dolby Laboratories
Dolby Stereoscopic Digital Projection: A tutorial on the Dolby stereoscopic projection system.
4:40 PM
Matt Cowan
CSO
Real D
Real D Stereoscopic Digital Projection: A tutorial on the Real D stereoscopic projection system.
5:00 PM Panel Discussion Michael Karagosian, moderator
5:30 PM Closing Remarks Chris Chinnock
Last Word: How things get invented
by Lenny Lipton
Lenny Lipton currently serves as chief technology officer at Real D. He founded
StereoGraphics Corporation in 1980, and created the electronic stereoscopic display
industry. He is the most prolific inventor in the field and has been granted 25 patents in the
area of stereoscopic displays. He is a member of the Society for Information Display and the
Society of Photo-Optical Instrumentation Engineers, and he was the chairman of the Society of
Motion Picture and Television Engineers working group that established standards for
the projection of stereoscopic theatrical films.
The history of motion pictures is an interesting one, and I am learning more about it in the
context of my present work inventing stereoscopic motion picture systems, and in
connection with the work I am doing with studios and filmmakers. I am taking working
with filmmakers seriously because the quality of the Real D system is judged by the
content projected on our screens. I was recently appointed as the co-chair (Peter Andersen
is the other co-chair) of the sub-committee of the ASC Technology Committee tasked to help figure out workflow,
production pipeline, and stereoscopic cinematographic issues. These subjects are tentative and need to be developed,
and we’re all learning together.
The stereoscopic cinema, in its present incarnation, as manufactured by Real D, is entirely dependent upon digital
and computer technology. Digital projection allows for a single projector, while other stereoscopic systems use two
projectors. Two projectors work well in IMAX theaters, based on my observations. I cannot say the same for theme
parks, whether they use film or digital technology, because there are occasions when the projected image is out of
adjustment.
Replacing multiple machines with a single machine – i.e. a projector – is the way to go, especially in today’s
projection booths; because typically there is no projectionist in the booth at the time the film is being projected.
There is a technician who will assemble the film reels and make sure everything is going to project well, but then
it is somebody else – maybe the kid at the candy counter – who actually works the projector and makes adjustments.
(Interestingly the kid at the candy counter may be well qualified to work the servers and projectors because of his
or her PC experience.)
The product that I invented, the projection ZScreen, has been used for years for the projection of CAD and similar
images for industrial applications. Real D turned the ZScreen into a product that had to work even better for
theatrical motion picture applications. It turns out that the film industry has very high standards when it comes to
image quality. This is easy to understand, because the industry lives or dies by image quality.
The stereoscopic cinema has had a long gestation. To date, this is the longest gestation of any technology advance
in the history of the cinema. For example, within about three decades of the invention of the cinema, sound was
added. There were numerous efforts to make sound a part of the cinema and make it a bona fide product. In the
three-year period from about 1927 to 1930, rapid advances were made both in sound technology and in aesthetics.
If you take a look at movies that were made in 1927, and then you see movies that were made in 1930 or 1931,
there’s a gigantic difference. Movies made in the early 1930s look a lot like, and sound like, modern movies. There
was a tremendous advance in the technology and in filmmaker know-how in a short period of time.
It is the creative professionals who will perfect the stereoscopic medium. That’s exactly what they did every time a
new technology came along, whether it was sound, color, wide-screen, or computer-generated images. In fact, those
are the major additions to the cinema, and they all took decades to become an ongoing part of the cinema. Ads for
movies never say, “This is a sound movie,” or “This is a color movie,” or “This movie is in the widescreen (or
’scope) aspect ratio.” It’s assumed. It’s a rare movie that is in black-and-white. It’s an even rarer movie that is
silent. And nobody is going back to shooting 4:3 Edison aspect ratio movies. (Curiously, that’s more or less the
aspect ratio used by IMAX for their cinema of immersion.)
An attempt was made in the early 1980s to use a single projector with the above-and-below format – essentially
two Techniscope frames that could be projected through mirrors or prisms or split lenses, optically superimposed
on the screen, and polarized. The audience used polarizing glasses to view the images in 3D. I was the chairman of
the SMPTE working group that established the standards for the above-and-below format. But as soon as the
standards were established, the above-and-below format was more or less abandoned. A few films like “Comin’ At
Ya!” or “Jaws 3-D”, and one I worked on, “Rottweiler: Dogs of Hell” were projected above-and-below, an
approach that was technically inadequate. For one thing it was hard to adjust properly and set up the projector to
achieve even illumination. I know; I set up a few, and it was tough to do a good job because of the design of the
lamp housings and the projectors.
Curiously it was the above-and-below format that led me to the first flicker-free stereoscopic field-sequential
computer and television systems. I noticed that the above-and-below format was applicable to video, because that
which is juxtaposed spatially can, with the injection of a synchronization pulse between the two frames, become
juxtaposed temporally when played back on a CRT monitor; so the first StereoGraphics systems used the above-and-below format.
The above-and-below video format, which is applicable to video or computer graphics, results in a field-sequential
image that can be viewed using shuttering or related polarizing selection techniques. I designed the first flicker-free
field-sequential system in 1980. It used early electro-optics that were clunky, but the flicker-free principle was
established. Using 60Hz video, for example, with the above-and-below format, one achieved a 120Hz result, that is
to say, 60 fields per second per eye. The field-sequential system is what is used for the Real D projection system.
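The spatial-to-temporal trick Lipton describes can be sketched in a few lines. This illustrative Python fragment (frames represented as lists of pixel rows; an assumption of mine, not StereoGraphics' actual hardware logic) splits each above-and-below frame into alternating left/right fields, doubling the field rate:

```python
def above_below_to_field_sequential(frames):
    """Turn above-and-below frames (left view in the top half,
    right view in the bottom half) into a temporal sequence of
    alternating left/right fields: 60 frames/s in -> 120 fields/s
    out, i.e. 60 fields per second per eye."""
    fields = []
    for frame in frames:
        half = len(frame) // 2
        fields.append(("L", frame[:half]))   # top half becomes a left-eye field
        fields.append(("R", frame[half:]))   # bottom half becomes a right-eye field
    return fields

# One 4-row frame: rows 0-1 hold the left view, rows 2-3 the right view
frame = [["l"] * 4, ["l"] * 4, ["r"] * 4, ["r"] * 4]
fields = above_below_to_field_sequential([frame] * 60)  # one second of 60Hz video
print(len(fields))  # → 120
```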
The electro-optics are different. There’s the ZScreen
modulator used in the optical path in front of the
projection lens, and audience members wear
polarizing eyewear. (The combination of ZScreen and
polarizing eyewear actually form a shutter. You can
classify the system as either shuttering for selection or
polarization, but in fact a proper classification is that
it uses both polarization and shuttering.) But the
principle is the same as that used for the early stereo
systems I developed. The right eye sees the right
image while the left sees nothing and vice versa, ad
infinitum, or as long as the machine is turned on.
The issue I had to solve in 1980 was this: How to
make an innately 60Hz device work twice as fast.
And the above-and-below format did just that. We
had to modify the monitors to run fast, but for a CRT
monitor it wasn’t that hard. There are two parts to
stereoscopic systems’ issues: the selection device
design and content creation. Today we are faced with
the same design issue I was faced with in 1980. In
addition, content creation has always been a major
issue and that’s why I am working with the film
industry to work out compositional and workflow
issues.
Engineer Jim Stewart (left) and I working on the first electronic stereoscopic field-sequential system that produced flicker-free images (circa 1980). We used two black-and-white NTSC TV cameras as shown, and combined the signals to play on a Conrac monitor, which, without modification, could run at 120 Hz. The images were half height, but we proved the principle. Stewart is wearing a pair of welder’s goggles in which we mounted PLZT (lead lanthanum zirconate titanate) electro-optical shutters we got from Motorola. The shutters had been designed for flash-blindness goggles for pilots who dropped atomic bombs. I kid you not.
Display Industry Calendar
A much more complete version of this calendar is located at: http://www.veritasetvisus.com/industry_calendar.htm.
Please notify [email protected] to have your future events included in the listing.
September 2007
September 8-12
GITEX 2007
Dubai, UAE
September 9-12
PLASA '07
London, England
September 10-11
Europe Workshop on Manufacturing LEDs for
Lighting and Displays
Berlin, Germany
September 10-11
Printed Electronics Asia
Tokyo, Japan
September 11
Workshop on Dynamic 3D Imaging
Heidelberg, Germany
September 12-14
Semicon Taiwan, 2007
Taipei, Taiwan
September 13
Printing Manufacturing for Reel-to-Reel Processes
Kettering, England
September 14-16
Taitronics India 2007
Chennai, India
September 16-20
Organic Materials and Devices for Displays and
Energy Conversion
San Francisco, California
September 17-20
EuroDisplay
Moscow, Russia
September 18-19
3D Workshop
San Francisco, California
September 18-19
Global Biometrics Summit
Brussels, Belgium
September 18-19
RFID Europe
Cambridge, England
September 21
FPD Components & Materials Seminar
Tokyo, Japan
September 24-26
Organic Electronics Conference
Frankfurt, Germany
October 2007
October 1-4
European Conference on Organic Electronics &
Related Phenomena
Varenna, Italy
October 1-5
International Topical Meeting on Optics of Liquid
Crystals
Puebla, Mexico
October 2-3
3D Insiders' Summit
Boulder, Colorado
October 2-3
Mobile Displays 2007
San Diego, California
October 2-6
CEATEC Japan 2007
Tokyo, Japan
October 2-7
CeBIT Bilisim EurAsia
Istanbul, Turkey
October 3-4
Displays Technology South
Reading, England
October 7-10
AIMCAL Fall Technical Conference
Scottsdale, Arizona
October 8-9
Printed RFID US
Chicago, Illinois
October 9-11
SEMICON Europa 2007
Stuttgart, Germany
October 9-13
Taipei Int'l Electronics Autumn Show
Taipei, Taiwan
October 9-13
Korea Electronics Show
Seoul, Korea
October 10
Novel Light Sources
Bletchley Park, England
October 10-11
International Symposium on Environmental
Standards for Electronic Products
Ottawa, Ontario
October 10-11
HDTV Conference 2007
Los Angeles, California
October 10-12
IEEE Tabletop Workshop
Newport, Rhode Island
October 10-13
CeBIT Asia
Shanghai, China
October 11-12
Vehicles and Photons 2007
Dearborn, Michigan
October 13-16
Hong Kong Electronics Fair Autumn
Hong Kong, China
October 13-16
ElectronicAsia 2007
Hong Kong, China
October 15-18
Showeast
Orlando, Florida
October 15-19
CEA Technology & Standards Forum
San Diego, California
October 16
Enabling Technologies with Atomic Layer
Deposition
Daresbury, England
October 17-18
Photonex 2007
Stoneleigh Park, England
October 17-19
Printable Electronics & Displays Conference &
Exhibition
San Francisco, California
October 17-20
SMAU 2007
Milan, Italy
October 18
Displaybank FPD Conference Taiwan
Taipei, Taiwan
October 22-25
CTIA Wireless IT & Entertainment
San Francisco, California
October 23
Stereoscopic Production
Brooklyn, New York
October 23-25
SATIS 2007
Paris, France
October 23-25
Display Applications Conference
San Francisco, California
October 24-26
Worship Facilities Conference & Expo
Atlanta, Georgia
October 24-26
LEDs 2007
San Diego, California
October 24-26
FPD International
Yokohama, Japan
October 24-27
SMPTE Technical Conference & Exhibition
Brooklyn, New York
October 29-30
Plastic Electronics
Frankfurt, Germany
October 29 - November 1
Digital Hollywood Fall
Los Angeles, California
November 2007
November 1-2
Digital Living Room
San Francisco, California
November 5-7
OLEDs World Summit
La Jolla, California
November 5-6
Challenges in Organic Electronics
Manchester, England
November 5-9
Color Imaging Conference 2007
Albuquerque, New Mexico
November 6-8
Crystal Valley Conference
Cheonan, Korea
November 6-9
EHX Fall 2007
Long Beach, California
November 6-11
SIMO 2007
Madrid, Spain
November 7-8
High Def Expo
Burbank, California
November 8
Taiwan TV Supply Chain Conference
Taipei, Taiwan
November 8-10
Viscom
Milan, Italy
November 8-11
Color Expo 2007
Seoul, Korea
November 9
2007 FPD Market Analysis & 2008 Market Outlook
Seoul, Korea
November 11-15
Photonics Asia 2007
Beijing, China
November 12-15
Printed Electronics USA
San Francisco, California
November 14-15
Nano 2007
Boston, Massachusetts
November 14-15
DisplayForum
Prague, Czech Republic
November 15-16
Future of Television
New York, New York
November 15-16
Future of Television Forum
New York, New York
November 19-20
International Conference on Enactive Interfaces
Grenoble, France
November 25-30
RSNA 2007
Chicago, Illinois
November 29
Displaybank Japan Conference
Tokyo, Japan