Joe Beirne, Technicolor Postworks NY

Transcription

So, What Comes After File-Based Workflows?
Joe Beirne, CTO
Technicolor PWNY, New York
My presentation asks the question, “As videotape followed film, and data followed videotape, what will come after file-based
workflows?”
And is this a question of changing technology, or a question of the changing nature of entertainment?
Evolution
We think about evolution in technology in classic Darwinian terms, but this process actually proceeds in a more erratic and
inflected fashion, as does evolution in nature.
In fact, both forms of evolution seem to follow what the late Stephen Jay Gould referred to as “punctuated equilibrium,” i.e. long
periods of relative stasis
followed by periods of rapid morphological and functional innovation.
?
File-Based Media
Videotape
Film
None of the technologies we have relied upon for motion image production was elegantly designed from the ground up...
rather, a collection of existing technologies was bolted together and refined into reliable systems only after years of use.
Considerable trade-offs were made at each stage of this development, and not every change was an improvement.
This progression from left to right represents the historical order of these technologies.
?
Film
Videotape
File-Based Media
This ordering represents long-term survivability...
?
File-Based Media
Film
Videotape
...And this represents an order of increasing maximum resolution.
?
File-Based Media
Videotape
Film
The historical order of development also maps two other important parameters.
Each stage has decreased significantly in inertial mass and volume per unit of program time.
(An average six-reel motion picture print weighed as much as an average projectionist.)
And each historical stage also represented an increase in “openness.” What that means is ambiguous, but immediately
comprehensible. One index of this quality is the set of conditions under which film, video and file-based dailies can be reviewed.
The candidate most often cited as an answer to our question of what comes next is a single rather ambiguous concept:
The Cloud
A word that itself connotes airy, diffuse and intangible qualities...
The Cloud
...and that has not really answered the question as much as obscured it.
I think it’s not that The Cloud isn’t the answer to *this* question, but that it is the answer to *every* question.
File-Based Media
Videotape
Film
The engine of punctuated equilibrium is cladogenesis: an evolutionary splitting event within a species. This event usually
occurs when a few organisms end up in new, often distant environments, or when environmental changes cause
several extinctions, opening up ecological niches for the survivors.
Two significant environmental changes opened the way from film and videotape to file-based workflows: the threat of a
SAG strike in 2008/2009 saw many television productions switch to digital cinematography cameras in order to shoot
under AFTRA jurisdiction, and the events surrounding the March 2011 earthquake and tsunami in Miyagi
prefecture disrupted the supply of SR videotape, which forced a rapid adoption of solid-state and disk-based acquisition
and distribution. Both events catalyzed unexpectedly rapid change.
What -is- the next evolutionary stage for media?
“X”
Computational Media
Pervasive Media
Collaborative Media
Trans-Media
Hyper-Media
There is probably no one answer, no single technological breakthrough; the next stage will probably entail an unexpected
combination or reuse of new and existing technologies.
(A great general once said that all major battles form at the folds of the ordnance maps.)
An explanation of each type of “new” media and its implications for both the physicality and purpose of entertainment.
“X”
Computational Media
Pervasive Media
Collaborative Media
Trans-Media
Hyper-Media
In historical order, hypermedia, or media that has embedded links to other content, was first.
One of the earliest, and most influential, examples was the Aspen Movie Map.
The Aspen Movie Map was developed at MIT by a team working with Andrew Lippman in 1978, with funding from
ARPA.
It allowed the user to take a virtual tour through the city of Aspen, Colorado.
A gyroscopic stabilizer with four 16mm stop-frame film cameras was mounted on top of a car with an encoder that
triggered the cameras every ten feet. The distance was measured from an optical sensor attached to the hub of a
bicycle wheel dragged behind the vehicle. The cameras captured front, back, and side views as the car made its way
through the city.
The film was assembled into a collection of discontinuous scenes and then transferred to laserdisc. Hyperlinks were
selectable by joystick, and interior views, 3D models and related data about each address could be accessed.
This was essentially an early form of Google’s Street View.
Hypermedia has advanced a bit since 1978.
Google Glass is the best funded and most watched initiative to build a complete augmented reality ecosystem.
http://www.youtube.com/watch?v=v1uyQZNg2vE
Google Glass
Researchers who have extensively tested augmented reality systems themselves report that they distinctly miss the additional “sense” after they remove the
apparatus.
“X”
Computational Media
Pervasive Media
Collaborative Media
Trans-Media
Hyper-Media
Transmedia is a novel and vital form of new media that has received much attention here already this week.
It is perhaps the first form of storytelling media wholly native to the internet.
Henry Jenkins
Transmedia Storytelling (2003)
“Transmedia storytelling represents a process where integral elements of a fiction get dispersed systematically
across multiple delivery channels for the purpose of creating a unified and coordinated entertainment experience.
Ideally, each medium makes its own unique contribution to the unfolding of the story.”
Consider this example, The Lizzie Bennet Diaries.
The Lizzie Bennet Diaries is an American drama web series adapted from Jane Austen's Pride and Prejudice and told in
the form of vlogs. It was created by Hank Green and Bernie Su.
Hank Green had earlier created the vlog series “Brotherhood 2.0” with his brother John Green, the best-selling author.
“X”
Computational Media
Pervasive Media
Collaborative Media
Trans-Media
Hyper-Media
Pervasive and Participatory/Collaborative Media is another vital area of innovation: the explosion of cameras embedded in the
built environment and in every mobile device allows both top-down, panoptic surveillance of the entire society...
...as well as bottom-up capture of world events by the “Citizen Reporter.”
Jehane Noujaim’s “The Square,” a meta-documentary of Egypt’s Tahrir Square movement as documented by its own
participants, won the grand jury prize for documentaries at Sundance this year. This is her Kickstarter campaign page (which now
has only 14 days to go).
http://apod.nasa.gov/apod/ap130218.html
Last Friday, a 10-ton meteorite fell in Western Siberia and this was documented by many Russians. Multiple perspectives of this
phenomenon give it a fantastic immediacy. Note particularly the effect the meteorite has as a light source in several of the clips,
and how it is captured in the reflection in the bus window.
Several of these shots were captured by the dashboard cameras that many Russians use to record the road in order to resolve
disputes over accidents and deter crime. Listening to these videos, there is no evidence that any of the drivers was aware of the
meteorite as it was being captured.
This structured documentation is in a sense more objective (and more likely to be comprehensive.)
“X”
Computational Media
Pervasive Media
Collaborative Media
Trans-Media
Hyper-Media
Computational Media: When I was a student at Cooper Union in the late ’70s, there was a class in the architectural program called
“The Architect in the Machine.” We studied the relationship between design and computing in the days before AutoCAD. (I was of
the last generation of Cooper students who were not given any training in computer drafting.)
A key text in that curriculum was Nicholas Negroponte’s “The Architecture Machine.” This was a couple of years before the Media
Lab.
For us, computers promised a high-level aid to appropriate and sensitive solutions using all available information on the design
problem, a form of analysis, not simply the production of drawings.
This dichotomy between “Computational Thinking” and the use of computers for specific tasks holds for the future of motion
picture media as well: we have before us the possibility of using the systematic concepts of computing to do very interesting
things with story-telling media.
Computational capabilities for media already come from techniques as wide-ranging as searchability; ranking; scalability;
pattern, object and speech recognition; encryption; and the ability to tag and to indelibly sign the content.
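One of those capabilities, indelible signing, can be sketched in a few lines. The key and frame payload below are purely illustrative stand-ins, and a production system would more likely use public-key signatures embedded in container metadata rather than this shared-secret HMAC:

```python
# A minimal sketch of indelibly signing media content, using only the
# Python standard library. The key and payload are illustrative; real
# systems would use public-key signatures and standardized containers.

import hmac
import hashlib

def sign(content: bytes, key: bytes) -> str:
    """Produce a tag that changes if even one byte of content changes."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, key: bytes, tag: str) -> bool:
    """Check a tag in constant time to resist timing attacks."""
    return hmac.compare_digest(sign(content, key), tag)

key = b"studio-secret"
frame_data = b"\x00\x01\x02\x03"        # stands in for an image payload
tag = sign(frame_data, key)

print(verify(frame_data, key, tag))          # intact content verifies
print(verify(frame_data + b"!", key, tag))   # any alteration is detected
```

The point of the sketch is that the signature travels as a short string of metadata while still binding to every byte of the media it describes.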
But I believe media may soon be able both to assemble and explain itself (carrying a broad range of data about its own production)
and to adapt to new data from its environment in order to scale or reorganize itself appropriately for the context in which it is
consumed. Motion picture media will be aware of its volumetric context, recording wavefront data along with the 2D image, and
will be aware of the logical relationships between objects in its field of view and their interactions.
Simon from The Foundry referred earlier today to plans to actively compile code into their applications on set
in order to fit the process closely to the production's challenges.
An early glimmer of this kind of possibility is already part of the Academy IIF/ACES specification, which includes a programming
language (CTL: Color Transform Language) that performs color management in a “concise and unambiguous way” rather than
merely employing multiple lookup tables.
SMPTE RDD 15:2007
Software Scripting Language for Pixel-Based Color Transformations
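The difference between a procedural transform, as in CTL, and a lookup-table approximation can be sketched outside CTL itself. This Python illustration uses an arbitrary gamma curve and LUT size, neither of which is drawn from the ACES specification:

```python
# A sketch (in Python, standing in for a CTL-style transform) of the
# difference between an analytic color transform and a lookup-table
# approximation. The gamma value and LUT size are illustrative only.

def transform(v, gamma=2.4):
    """Procedural transform: exact at every input value, like CTL code."""
    return v ** (1.0 / gamma)

def make_lut(size=16, gamma=2.4):
    """Sample the same transform at a fixed number of points."""
    return [transform(i / (size - 1), gamma) for i in range(size)]

def apply_lut(v, lut):
    """Linear interpolation between LUT entries, as a LUT box would do."""
    x = v * (len(lut) - 1)
    i = min(int(x), len(lut) - 2)
    frac = x - i
    return lut[i] * (1 - frac) + lut[i + 1] * frac

lut = make_lut()
v = 0.18  # mid-grey
exact = transform(v)
approx = apply_lut(v, lut)
print(round(exact, 4), round(approx, 4))  # the LUT introduces a small error
```

A coarse LUT only approximates the curve between its sample points, which is exactly the ambiguity a scripted transform avoids.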
http://www.youtube.com/watch?v=JVrXAsYd1Wk
THINK
Systematic visualization of information from the environment places media in the world of Big Data.
In celebration of IBM’s 100th anniversary, an ambitious immersive media environment was designed by Mirada/Motion Theory
(Guillermo del Toro’s media company in Venice), SY Partners (IBM’s brand consultants), Ralph Appelbaum Associates (architects
and exhibit designers) and SOSO Ltd (interactive design).
This image from Phillip and Phyllis Morrison’s “The Ring of Truth” shows the massive Hoover Dam, a monumental engineering
project that spans one end of the Black Canyon and holds back the Colorado River. At the time it was finished it was the largest
concrete structure in the world - more than 3 million cubic yards - and cost more than a hundred human lives in its
construction.
As the Morrisons say, the size of the dam looks commensurate with the mass of the river built up behind it...
...but in this satellite image of the 3 trillion gallon Lake Mead, the dam is barely visible.
It occupies the most efficient place to create the greatest consequence: it is as small as it can be.
On the principle that one should be wary of lop-sided thermodynamics, we always look for ways to use time
and resources more efficiently, including in the movement of large sets of data through multiple read-write cycles across a busy
network fabric.
This may point to a physical form that media may take in a “post-file-based” future: an entangled media.
One Value for
“X”
= Entangled Media
One answer that perhaps meets the challenge:
Quantum physics introduced a true paradox: a change in one component of a quantum system can instantaneously affect a
complementary component of that system, regardless of how far removed those components are from each other at the time. This
particle “entanglement,” first posited as a logical refutation of quantum physics in a 1935 thought experiment by Einstein, Podolsky
and Rosen, was shown to be physically real in experiments by Alain Aspect in 1982 and many times since.
Because the inevitable requirement for data survivability requires at least two identical copies to be made of any source file, the
opportunity arises for a physically separate set of media to be updated and processed at different locations with an extremely
modest flow of meta-information across the public internet: essentially we are leveraging the potential “entanglement” of
redundant media.
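A minimal sketch of this idea, with hypothetical frame labels standing in for real media and an illustrative edit format standing in for real edit decisions:

```python
# A sketch of "entangled" media, assuming identical replicas at two
# sites. Only a small edit description travels over the network; each
# site applies it locally, and content hashes confirm the replicas
# still agree. The operation format here is purely illustrative.

import hashlib
import json

def checksum(frames):
    """Fingerprint a replica so two sites can confirm they match."""
    return hashlib.sha256(json.dumps(frames).encode()).hexdigest()

def apply_edit(frames, edit):
    """Apply a lightweight edit decision (a trim, here) to local media."""
    if edit["op"] == "trim":
        return frames[edit["start"]:edit["end"]]
    raise ValueError("unknown op")

# Both sites ingested the same source footage (simulated as frame labels).
site_a = [f"frame{i}" for i in range(100)]
site_b = list(site_a)

# The "entangled" update: a few bytes of metadata, not the media itself.
edit = {"op": "trim", "start": 10, "end": 90}
wire_payload = json.dumps(edit)   # this is all that crosses the network

site_a = apply_edit(site_a, edit)
site_b = apply_edit(site_b, json.loads(wire_payload))

# Validation: the two replicas remain identical after the remote update.
assert checksum(site_a) == checksum(site_b)
print(len(wire_payload), "bytes of metadata kept", len(site_a), "frames in sync")
```

The payload that crosses the network is a few dozen bytes, while the media it updates can be arbitrarily large, which is the asymmetry the "entangled media" proposal exploits.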
Production
Local Replication
Lab Replication
+ Validation
Processing
Goes to the
Media
!
!
!
Lab Process
!
Studio Process
3rd Party
Process
Trusted Workspace
This satisfies one interpretation of the film/video/file/x progression: with each stage (35mm motion picture film and its
opto-mechanical ecosystem, videotape and its electro-mechanical ecosystem, file-based media and its data-processing ecosystem)
came a reduction in inertial mass per hour of delivered content.
“Entangled” media allows initially large data pools to be validated, managed, processed, refined and delivered using very
lightweight data payloads and reasonable bandwidth, even in some cases over the current state of mobile data networks.
Production
Local Replication
Lab Replication
+ Validation
Processing
Goes to the
Media
!
!
!
Lab Process
!
Studio Process
3rd Party
Process
Trusted Workspace
Here we are taking advantage of the need for redundant repositories of media, the low cost of commonly used CPU/GPU
hardware and post-production software packages, and effective run-time sandboxing.
The technique is analogous to “just in time manufacturing,” which often deploys sub-contractor processes within the main
manufacturer’s physical plant: here we are caching the select media files in the client’s physical premises and performing the last
stages of finishing (as well as other downstream processes such as packaging and encoding) on equipment that is supervised by
the lab but not necessarily even owned by us.
All stages of the process are performed in more than one place, allowing great flexibility.
Thank You