D.5.1.3 A prototype for online set, animation and lighting editing

Deliverable due date: 31.3.2016
Actual submission date: 31.03.16
Project no: FP7 - 61005
Project start date: 01.10.13
Lead contractor: The Foundry
Project ref. no.: FP7 - 61005
Project acronym: DREAMSPACE
Project full title: Dreamspace: A Platform and Tools for Collaborative Virtual Production
Security (distribution level): PU (COnfidential, REstricted or PUblic)
Contractual date of delivery: Month 30, March 2016
Actual date of delivery: Month 30, 31 March 2016
Deliverable number: D5.1.3
Deliverable name: A prototype for online set, animation and lighting editing
Type: Prototype
Status & version: V4
Number of pages: 37
WP / Task responsible: WP5T1
Author(s) / Other contributors: Farshad Einabadi, Carlos Fernandez de Tejada, Kai Götz, Oliver Grau, Volker Helzle, Harini Priyadarshini, Andreas Schuster, Simon Spielmann
Internal Reviewer: Thomas Knop
EC Project Officer: Alina Senn
DOCUMENT HISTORY

Version | Date (DD-MM-YYYY) | Reason of change
1.0 | 15-03-2016 | Initial version converted from google-docs
1.1 | 16-03-2016 | Version for internal review (14 days before submission date)
1.2 | 31-03-2016 | Revisions in response to review: final versions submitted to Commission
1.4 | 31-03-2016 | Version submitted to EC
Contents

1 Public Executive Summary
2 Introduction
2.1 Progress beyond the state-of-the-art
2.2 Overview and scope of this document and dependencies to other work packages
3 Prototype for on-set Editing
3.1 Production flow
3.1.1 Synchronization
3.1.2 Scene transfer
3.1.3 Benchmark scene
3.2 VPET
3.2.1 Related Work
3.2.2 Set editing
3.2.3 Animation editing
3.2.4 GUI
3.2.5 Documentation
3.3 Supporting additional sensing devices
3.3.1 Device tracking
3.3.2 Integrated on-set depth sensing
3.4 Light-Editing
3.4.1 Light capture and estimation
3.4.2 Lighting Server and integration into the LiveView system
3.4.3 GUI for Light-editing
4 Practical use of the on-set tools
4.1 Set and animation editing
4.2 Light capture and editing
Conclusions
5 References
1 Public Executive Summary

The aim of work package 5 is to research and develop methods of designing, directing, navigating, editing and modifying Virtual Productions in a collaborative environment. This report describes the prototype developed towards this goal by the Dreamspace project as part of task WP5T1.

Interacting with Virtual Production content in a collaborative environment means providing tools that allow a virtual environment to be created and modified in real time on set. More specifically, these tools allow the control of creative parameters including the positions of virtual assets, animations, lighting, keying, compositing, and editing on a timeline. The challenges and the production flow are outlined in report D2.1.1; a generic interface description of modules within a virtual production flow for visual effects in film and TV can be found in deliverable D5.1.1. The work described in this report is based on the specifications and definitions of these deliverables.
A specific emphasis of the work in WP5T1 is on intuitive methods of controlling the creative parameters, to allow creative crew members who are not trained in sophisticated 3D modelling and animation tools to work on set. For this purpose a suite of on-set virtual production tools has been developed that allows the visualisation and editing of virtual set elements, animations and the scene lighting.

A central element of these tools is a synchronization server. A version of the production scene is converted into a real-time representation that can be visualized by a rendering engine on a local tablet PC running the VPET (Virtual Production Editing Tool) software. The software is designed to be used to browse and edit the virtual set and animations on set. Scene changes are synchronized between any number of tablets and the Dreamspace production pipeline.
The second focus area of the work was lighting editing. The goal here was to build tools that allow real (on-set) lighting to be harmonized with virtual scene lighting. The approach is to capture the set lighting and transfer the extracted light models to a real-time renderer to provide live feedback of virtual scene components merged with real scene components or actors. For light capture a sequence of HDR light probes is used, captured at different positions on set. From this information point- and directional-light models are estimated. The system can also deal with dynamic changes of the lighting; this is handled by accessing the studio light set-up through a DMX controller.
2 Introduction

This report summarizes the project's work on WP5T1. The aim of this work task was to research and develop methods of designing, directing, navigating, editing and modifying Virtual Productions in a collaborative environment. To a large extent these methods consist of tools that allow a virtual environment to be created and modified in real time on set. More specifically, the tools allow controlling creative parameters including the positions of virtual assets, animations, lighting, keying, compositing, and editing on a timeline.

A specific emphasis was to explore intuitive methods of controlling the creative parameters, to allow creative crew members who are not trained in sophisticated 3D modelling and animation tools to work on set. This is achieved using state-of-the-art and emerging visualisation devices. The project has evaluated solutions provided by Virtual and Augmented Reality research for this purpose, including head-mounted devices, projectors, and fixed and mobile displays.
2.1 Progress beyond the state-of-the-art

The main objective of WP5 task 1 according to the DoW was:

To research and develop methods of designing, directing, navigating, editing and modifying Virtual Productions in a collaborative environment by designing and creating a virtual environment for real-time on-set content interaction, visualisation & modification, with intuitive methods of controlling creative parameters including the positions of virtual assets, animations, lighting, keying, compositing, and editing on a timeline; designing and creating an environment for exploring and presenting novel immersive experiences based on virtual production concepts.

Dreamspace made significant progress beyond the state-of-the-art in the following areas:

1. An intuitive tool suite to edit creative parameters on set was developed; a lot of emphasis has been put into an easy-to-use user interface so that any creative crew member is able to use the system.
2. A prototype of a system fully integrated into the production flow has been demonstrated. Other approaches for on-set work provide only partial island solutions (like Zoic, see section 3.2.1) that require additional offline data converter tools and do not provide a smooth, real-time capable integration into the live virtual production tools and the post-production phase.
3. A new automated and easy-to-operate capture of the set lighting has been developed and demonstrated; the state of the art in VFX productions currently uses labour- and time-consuming matching of virtual lights to reference images. The image-based light capture and editing developed in Dreamspace not only saves this manual work, it can also produce very exact light models that enable highly realistic and physically correct CGI.
2.2 Overview and scope of this document and dependencies to other work packages

This document describes the prototype system developed in Dreamspace for on-set work. The components developed in this context provide an interface to the Dreamspace production pipeline and interface with the central components of the project, in particular the live viewing system (LiveView [D6.1.2]). The core system provides a real-time capable production pipeline; it includes the captured and processed data (WP3) and provides a rendered and composited view based on tracked camera data (WP4).

The aim of WP5T1 is to provide tools that allow editing some of the creative parameters on set in a collaborative way. In particular it allows a team to edit objects (set editor) and animations in the virtual scene and to capture and edit the real lighting on set. The creative crew can make changes to the real and virtual scene and can judge these changes with the local on-set tools and also get a composited view of the integrated pipeline. In particular, to judge the real-virtual scene lighting it is important to have real-time or close to real-time feedback of the composited scene, including a high-quality rendered video using global illumination, as provided by the Dreamspace renderer (WP4).

In the remainder of this document we describe the components developed in this work package. This includes the on-set editing tools (VPET), the light capture and editing tools and the integrated additional sensors.

The document ends with a very brief overview of the practical use cases. The performance of the tools will be evaluated in the final production test and evaluation, WP2 D2.3.3 Final Virtual Productions and WP6 D6.2.2 Final Field Trial and User Evaluation Report.
3 Prototype for on-set Editing

The on-set prototype constitutes a generic framework to be integrated into existing workflows, allowing tasks of production and post-production to happen in one logical data space. Furthermore, it provides an architecture for using a variety of real-time input and output devices.

Figure 1 shows the architecture of the proposed system. Multiple tablets can be used in parallel to explore and edit the virtual scene simultaneously. All changes made to the scene are immediately sent to a synchronization server that communicates those changes to all other attached clients. In this sense, it works as a fan-out server for all incoming messages. The synchronization server also provides an interface to additionally receive real-time camera tracking data. The camera data is then transferred to every subscribed client and can be used to simulate the main camera in the virtual scenery.

Because communication happens through network sockets, a client can connect to any application that provides a formatted scene description and accepts messages carrying a transform update. This is a general approach and needs to be implemented through the plug-in interface of the main application. The plug-in shall identify assets to be streamed and convert the mesh topology into a format handled by the Unity 3D Engine [Unity] without further processing on the tablet side.

Currently a direct interface to LiveView exists that is capable of streaming assets to the clients as well as receiving changes. It prepares and holds the asset data and sends a binary package to a client on request; it also listens on a second port for incoming updates and applies them immediately.
Figure 1: Setup of Dreamspace Demonstrator
3.1 Production flow

LiveView is the central unit which holds and visualizes the scene, using a real-time or offline renderer, and receives updates from connected services like camera tracking as well as changes from set and light editing. Figure 2 shows the software components developed across the Dreamspace project and the information flow between them. LiveView loads assets prepared in a pre-production step (bottom left in Figure 2). This includes assets used in the on-set phase by the on-set tools, in particular models of (virtual) objects and pre-defined animations.

During the on-set phase some parameters of the virtual assets are refined and edited. This includes the lighting information (captured and edited lights) and object and animation attributes edited by the VPET tools.
Figure 2: Overview of the software components of the Dreamspace project
Updates are processed immediately and visualized in real time: modifications are applied to the virtual scene and propagated to the render engine, which then refreshes the visualization [D6.1.2]. At the end of the on-set phase the updated parameters are transferred out of LiveView into the post-production tools, which yields an updated working scene (bottom right of Figure 2). The post-production workflow then depends on the pipeline used in the production environment and is not required to change. LiveView is intended to preserve that workflow; therefore images can be generated using any renderer and be processed further in common compositing software.
3.1.1 Synchronization

The responsibility of the synchronization server is to pass editing updates between the connected parts. It receives changes from every subscribed client and communicates those changes immediately to all others. Multiple tablets can be used in parallel to explore and edit the virtual scene simultaneously, and the number of clients is only limited by the machine's resources. In this sense, it works as a fan-out server for all incoming messages. A mechanism is also in place to allow only one tablet to edit a certain object at a time, by distributing the object name and tablet id through the synchronization server. All clients mark that object as locked until a release notification is received. Communication is implemented through the ZeroMQ [ZMQ] distributed messaging library. A publish-and-subscribe pattern is used so that each individually configured message-receiving client only processes the messages relevant to it. The synchronization server also provides an interface to additionally receive real-time camera tracking data from an Ncam system. The camera data is forwarded to every subscribed client and can be used to look through the main camera on the tablet application. As the synchronization server receives all messages, it is also responsible for recording the update changes collected during the virtual production. Currently it dumps every incoming transform into a plain list and writes the messages to disk, marked with the timecode the synchronization server gets from the Ncam system. Later, in post-production, those records can be aligned with the data collected by the Ncam server. This happens, for example, on import in a DCC tool when all production data is combined to work on the final images.
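A minimal Python/ZeroMQ sketch of such a fan-out relay is shown below: every message published by any client is re-broadcast to all subscribers and appended to a plain log. The port numbers, topic layout and log format are illustrative assumptions and do not reflect the actual VPET network protocol.

import zmq

# Fan-out relay: clients publish edits to one socket, all clients subscribe
# to the other. Ports and the [topic, payload] framing are assumptions.
context = zmq.Context()

incoming = context.socket(zmq.SUB)            # clients PUB their edits here
incoming.bind("tcp://*:5556")
incoming.setsockopt_string(zmq.SUBSCRIBE, "") # accept every topic

outgoing = context.socket(zmq.PUB)            # clients SUB here for updates
outgoing.bind("tcp://*:5557")

record = open("updates.log", "ab")            # plain dump of all updates

while True:
    frames = incoming.recv_multipart()        # e.g. [b"transform/objName", b"<payload>"]
    outgoing.send_multipart(frames)           # fan out to every subscriber
    record.write(b" ".join(frames) + b"\n")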
3.1.2 Scene transfer

A scene that shall be used in a real-time environment must either be relatively lightweight in terms of textures and polygon count, or hold a reduced version of the geometry and textures. In LiveView the assets of the reduced versions are marked with a low level-of-detail (LOD) attribute, and only these are considered for the scene transfer. In the initial state the Scene Distribution plug-in walks the whole scene graph and processes only assets with the LOD attribute set to low. Geometries, lights and cameras are treated separately by delegating the scene handle, which points to the current asset, to the corresponding function. For all three types the plug-in reads the transform to define the placement in the 3D coordinate system and to position the object in the scene hierarchy.

Geometry is specified by its topology, defined by polygons, vertices, edges, normals and UVs, which gets converted to a representation Unity understands. Mesh topology mostly varies from one application to the other; it is usually polygon based but differs in the ordering and connection counts. Unity accepts one specific representation, described in the VPET documentation [VPET Documentation]. Essentially it allows only triangles, with one normal and one UV coordinate per vertex, and is limited to approximately 65,000 vertices per mesh. Object attributes like the editable flag and material properties, as well as light and camera settings, are simply transferred to Unity. Texture paths taken from the material properties are used to open the low-resolution textures, which are then sent without further processing. For the scene transfer the conversion of geometries is the most complicated part, as it is not simply passing values along the pipeline.
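As an illustration of the kind of conversion this implies, the following Python sketch splits an indexed triangle mesh (one normal and UV per vertex) into chunks that stay below the approximate per-mesh vertex limit. The function and data layout are assumptions for illustration; the real plug-in operates on the host application's scene graph instead.

import numpy as np

MAX_VERTICES = 65000   # approximate per-mesh vertex limit in Unity

def split_triangle_mesh(vertices, normals, uvs, triangles):
    """Split a triangle mesh into chunks below the per-mesh vertex limit.

    vertices: (N, 3), normals: (N, 3), uvs: (N, 2), triangles: (M, 3) indices.
    Returns a list of (vertices, normals, uvs, triangles) chunks with
    re-indexed triangles. Illustrative sketch only.
    """
    chunks = []
    remap, order, tri_out = {}, [], []   # old index -> new index, vertex order, chunk triangles

    def flush():
        if not tri_out:
            return
        idx = np.array(order)
        chunks.append((vertices[idx], normals[idx], uvs[idx],
                       np.array(tri_out, dtype=np.int32)))
        remap.clear(); order.clear(); tri_out.clear()

    for tri in triangles:
        needed = [v for v in tri if v not in remap]
        if len(order) + len(needed) > MAX_VERTICES:
            flush()                       # start a new chunk
            needed = list(tri)            # remap was cleared, re-add all vertices
        for v in needed:
            if v not in remap:            # guard against duplicate indices in a triangle
                remap[v] = len(order)
                order.append(v)
        tri_out.append([remap[v] for v in tri])
    flush()
    return chunks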
3.1.3 Benchmark scene

'San Miguel' is a complex, close-to-production scene provided by The Foundry for testing purposes, originally created by Guillermo M. Leal and based on a hacienda in San Miguel de Allende, Mexico. The updated scene contains geometry and textures at different levels of detail as well as lights in three different setups. The high level of detail has massive geometry, high-resolution textures and shaders set up for offline rendering to achieve final production quality. On the other hand the scene provides a reduced asset set with fewer polygons and low-resolution textures, matching the requirements for real-time graphics on mobile devices.
Specification of 'San Miguel Version 007':

● Polycount around 8M triangles (production)
● Texture count ca. 220, high-res (production)
● Node count ca. 162K (production)
● Object count including instances ca. 64K (production)
● Mesh count 770 (production)
● Polycount around 4M triangles (mobile)
● Texture count 100, resolution max 1K (mobile)
● Node count ca. 35K (mobile)
● Object count including instances ca. 6K (mobile)
● Mesh count 340 (mobile)
Currently the supported assets within LiveView are polygon geometry with materials and textures, as well as lights and cameras. Any scene with these assets can be used in LiveView for testing, and the system is only limited by the hardware restrictions. The provided test scene (San Miguel Version 007) has been optimized to match the maximum capabilities of the client tablet hardware used (Nvidia Tegra K1 chipset).
3.2 VPET

VPET is an application for tablets running Android, Windows or iOS that has been developed based on the Unity engine to explore possibilities for on-set editing of virtual content. Users can grab a tablet at the film set and start exploring and editing the virtual content. The idea is to enable VFX artists and untrained set staff to edit the virtual elements of the shot in a fast and intuitive way directly on set. At the Institute of Animation, VPET has already been applied under real production conditions.
Figure 3: The director using VPET (left), VPET menu (right)
3.2.1 Related Work

Already in 2012, the visual effects studio Zoic [Zoic] developed a proprietary application that allowed a physically correct view into the virtual world by enabling the user to adjust the parameters of a virtual camera in a way that mimicked the features of a real camera. This early system was only a basic framework for loading GUI elements and assets, and each of these components had to be loaded and compiled separately. Even if the workflow appeared sufficient for in-house productions in which the tools are only applied by the developers themselves, one could not reasonably expect to reach a broader group of customers. Thus Zoic set out to advance the application and released Zeus, the Zoic Environmental Unification System, which is now available as an iPad app via iTunes for $9.99.
The Scout version of Zeus [ZeusScout] constitutes a previs and scouting tool, as the name implies, and accesses only the standard components of the tablet device without requiring additional interfaces or sensors. In this way, objects can be modified or settings changed by tapping on the multitouch screen, while the position and orientation of the device is measured by retrieving the corresponding values from the gyroscopes and accelerometers inside the iPad. Zeus:Scout comes with seven different operation modes. The view setting enables the user to walk around freely in search of appropriate camera positions, which can be saved and stored for blocking purposes. In measurement mode, it is possible to define the scale of the virtual scene by gathering numeric values in the real world. Textured 2D figures can be added in character mode, while CG assets are positioned, scaled and rotated in prop mode. The previs service provides the required tools for previsualizing a shot, whereas tracking mode enables the user to apply the Zeus Scout as a virtual camera. Last but not least, live action footage captured by the iPad camera can be chroma-keyed and integrated into the virtual world when switching to video mode.
At first glance this set of tools seems to provide a most intuitive and useful solution for virtual scouting and set editing. However, after having thoroughly tested Zeus:Scout, one has to point to some serious shortcomings. First of all, Unity and special transcoding tools are needed to prepare and load custom scenes. Unfortunately the necessary website is not available anymore. Furthermore the app is not capable of importing single assets including their animation, while the provided characters can only be added as 2D cards onto which cut-out images are projected. In general it is not possible to add objects on the fly. The graphical user interface turns out to be far too overloaded and comes without intuitive tools or descriptive widgets. Objects cannot be selected directly by simply tapping on them on the touch surface but have to be addressed via a drop-down menu. Last but not least, the entire GUI offers much too small buttons and sliders in a metallic submarine look which might please some gamers but appears in fact counterproductive for a meaningful application in film production.
Both the scouting features and the menu breakup into distinct modes are features by which the current VPET version has been inspired. Also, the easy-to-use navigation knobs proved to be a convenient alternative in case no device tracking is available. However, VPET is heading for completely different approaches regarding not only the overall GUI and editing tools but also the server-client communication and synchronization features.

A previous approach within Dreamspace applied a gesture-recognition controller and an HMD to enable the user to access all three axes simultaneously while being able to examine the virtual world in a most immersive manner [D2.3.2].

Both Zeus:Scout and the HMD prototype already introduced a couple of promising advancements in the fields of asset modification and pipeline integration. However, the first is optimized merely for previsualization and thus cannot meet the demands of real on-set virtual productions, while the latter could offer only a hardly intuitive interface. Though well-intentioned, concluding surveys on the gesture-recognition approach revealed that untrained personnel are barely capable of performing even basic transform tasks. The HMD proved to be a promising solution at first but seemed inappropriate for on-set productions as long as hand-held or gesture-based input devices lack usability. Due to the less satisfying results with the 6-degrees-of-freedom gesture recognition, VPET now builds on a user-friendly tablet interface.
3.2.2 Set editing

VPET provides methods that allow finding, selecting and manipulating virtual assets and propagating those changes to other parts of the system. The editing needs to be presented to the user in a way that is intuitive and easy to use and contains all necessary functionality, but reduced to the use case of on-set editing in a virtual production.

According to the description in [D5.1.2] and to production experience, we implemented the tools mandatory for set editing. How the user can access the functions and how the tools are displayed on the device is described in section 3.2.4 (GUI).
Selection

All object interactions can be accessed by pointing at the scene object that shall be edited. For selecting in a 3D environment, methods are needed to pick and identify an editable object. To decide whether an object is editable and of which type it is, the described system holds and transfers flags indicating those properties. VPET reads the flags and prepares the scene accordingly by assigning classes to objects for additional interaction and to hold further properties. In Unity this is a collider and rigid body component to make the object visible to the physics engine. VPET also creates a scene object component which stores the type of object, its reaction to the physics and type-specific properties.

For picking assets in a 3D environment a common implementation is to convert the finger tap into a 3D ray and then test for intersections between this ray and scene objects. Basically, the user's tap in screen space is transformed into 3D world space coordinates through the camera projection and transform matrices. This point and the camera origin yield a direction vector which defines a line starting from the camera position and pointing into the 3D scene (see Figure 4). Intersections of this line with collider objects are then calculated, and assets connected with such a collider are returned as a hit object and stored in a selection list.
Figure 4: Building a ray from screen space coordinates
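A minimal numpy sketch of the unprojection step described above is given below, assuming standard 4x4 view and projection matrices and OpenGL-style normalized device coordinates; inside Unity the same result is obtained with the built-in Camera.ScreenPointToRay.

import numpy as np

def screen_tap_to_ray(tap_px, screen_size, view_matrix, proj_matrix):
    """Convert a screen-space tap into a world-space picking ray.

    tap_px: (x, y) pixel coordinates; screen_size: (width, height) in pixels;
    view_matrix, proj_matrix: 4x4 camera matrices (world->view, view->clip).
    Returns (origin, direction) of the ray in world space. Illustrative only.
    """
    w, h = screen_size
    # pixel -> normalized device coordinates in [-1, 1]; y is flipped because
    # the screen origin is assumed to be top-left
    ndc = np.array([2.0 * tap_px[0] / w - 1.0,
                    1.0 - 2.0 * tap_px[1] / h,
                    1.0, 1.0])                   # point on the far clip plane
    # clip space -> world space through the inverse view-projection matrix
    inv_vp = np.linalg.inv(proj_matrix @ view_matrix)
    far_point = inv_vp @ ndc
    far_point = far_point[:3] / far_point[3]     # perspective divide

    origin = np.linalg.inv(view_matrix)[:3, 3]   # camera position in world space
    direction = far_point - origin
    return origin, direction / np.linalg.norm(direction)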
Manipulation

The very basic and standard function is manipulating the transform of an object, i.e. changing the positional, rotational and scale parts of the transform matrix. One way is to measure how the user drags on screen and convert this length into a vector, which then gets converted into a three-dimensional representation for moving, rotating or scaling the transform.

To go beyond traditional editing tools we enhance the user experience by adding two additional functions for changing an object's position. Objects can be locked to the virtual camera and carried around when navigating through the scene until the user explicitly unlocks the object and places it at the current position. Furthermore there is a shoot-to-place mode which allows the user to specify the new position by simply tapping on the virtual ground. Both modes are more intuitive compared to the use of manipulators, and the view into the scene remains clear as no handles need to be shown.

Besides editing the object's transform, one can set parameters to define shape and behaviour. This could be any property which makes up a virtual object, like material properties, physical reaction, binding, name or visibility. Again, the aim of VPET is not to reflect the functionality of a DCC tool but rather to provide easy-to-use methods for virtual production. Currently there is an implementation for toggling the connection to the gravity force, which is available through the physics engine in Unity and which has significant impact on the manipulation of objects as it influences the position and rotation. After the user releases the 3D widget handle, the physics engine immediately simulates and sets the object's placement depending on gravity and collisions. Another common feature is to reset the changes made during a session: the user can restore object properties and transforms to the values initially received from the server.
3.2.3 Animation editing

The current version of VPET enables the user to create and modify keyframe-based animations of scene objects. Those animations are either imported together with the scene or created from scratch using the built-in tools provided by the VPET software itself. Some features are not yet fully implemented, but related concepts and preliminary considerations already exist and will therefore be briefly explained; such work in progress will be identified as such.

Before performing any animation editing task, the user has to switch to animation mode by pressing the related button in the mode submenu. This mode encompasses all functionality related to running and editing animation. When switching to animation mode a global timeline appears at the lower edge of the tablet screen, accompanied by play, pause and rewind buttons for navigating through the timeline.
Figure 5: Global timeline with buttons
By dragging the bar the user offsets the preview area, while a pinch gesture alters the scale. When clicking on an object, the user can choose between adding a new animation clip, removing an existing one or manipulating the animation curve by adding or translating the interpolation points. Asset-related animation clips hold a private internal timeline. Their keyframes are displayed inside the global timeline and can be offset as a whole. The synchronization server guarantees that the global timeline, including all keyframes from animation clips, is continuously synchronized between all participating VPET instances or other clients.

Single keyframes can be retimed by selecting them in the viewport and then dragging them on the timeline. Adding new keyframes to an object is possible as well. They are always added at the current global time and can be transformed using the 3D widget known from the edit mode. Two or more keyframes are connected by a Hermite curve and build up a trajectory. The implementation of tools for tangent transformation has been postponed because the animation editing GUI was already becoming too cumbersome. In fact VPET now pursues a new objective: rather than copying the range of features from established DCC tools, it aims at offering an exclusive toolset for more intuitive animation editing which goes beyond the extent of offline tools. Thus VPET formulates new animation editing paradigms.
Figure 6: Animation editing mode
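The keyframe trajectories mentioned above are built from cubic Hermite segments. As a purely illustrative Python sketch (not VPET code), the following function evaluates such segments between positional keyframes; the choice of finite-difference tangents is an assumption, since the actual tangent handling is not exposed in the GUI.

import numpy as np

def hermite(p0, p1, m0, m1, t):
    """Evaluate a cubic Hermite segment at t in [0, 1].
    p0, p1: endpoint positions; m0, m1: endpoint tangents."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

def sample_trajectory(times, positions, samples_per_segment=16):
    """Build a smooth trajectory through positional keyframes.
    times: (N,) keyframe times; positions: (N, 3) keyframe positions.
    Tangents are finite differences of neighbouring keys (an assumption)."""
    times = np.asarray(times, dtype=float)
    positions = np.asarray(positions, dtype=float)
    tangents = np.gradient(positions, times, axis=0)
    points = []
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        for t in np.linspace(0.0, 1.0, samples_per_segment, endpoint=False):
            points.append(hermite(positions[i], positions[i + 1],
                                  tangents[i] * dt, tangents[i + 1] * dt, t))
    points.append(positions[-1])
    return np.array(points)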
One of these novel toolsets describes a cue principle. First, a cue number has to be assigned to objects and characters that include animation clips. By pressing the related button, the user then fires the clips on command, one after the other. This approach seems most relevant, as recent use cases have shown that directors need to be able to trigger animations themselves or give commands which can easily be executed. Several animation clips may share an identical cue number and can therefore be fired simultaneously.

A different approach proposes an editing solution for character animation. Characters are thereby staged on the virtual set by falling back on either pre-recorded and processed motion capture footage or a library of movements and cycles. A new animation could for instance be derived by simply choosing a start and end position, while the actual movement is assembled from a set of animation clips automatically. The results should be saved and provided to the post-production departments to serve as reference material or a starting point for animation. Alternatively a subsequent real mocap session would also benefit from such a previsualized staging. Right now research is being done to sort out which resources and efforts would be necessary to implement this kind of animation editing in the final demonstrator.
3.2.4 GUI

The user interface builds on well-established icons and navigation principles. [REF missing] Menus and buttons are designed for a two-handed tablet resting position and occupy as little screen space as possible. A context-dependent circular menu encompasses the most important functionality for asset transformation and animation editing, while less frequently used or object-independent features remain hidden in a menu accessible through a button in the upper right corner.

VPET proposes a tablet GUI that tries to keep the screen space as unspoiled as possible. In idle mode, when no object is selected, the graphical elements are reduced to a single menu button, guaranteeing an image uncompromised by occlusion. When tapping on a virtual entity a circular menu appears, providing various buttons, where the number of options depends on the type of object. All menus and buttons work without any written information and show only descriptive icons. As the overall VPET functionality can be classified into three main tasks, namely scouting, animating and editing, the GUI also reflects this tripartition. The user can select one of these modes by accessing the mode submenu, which is in turn included in the general menu in the upper right corner of the screen. When selecting, for instance, the scout mode, the editing and animation tools are locked and hidden. The separation into different modes and sub-menus was a necessary step in order to manage the continuously increasing number of features offered by VPET.
Figure 7: Flowchart of GUI menus
Icon Design

The majority of icons created for the mockup are based upon commonly used graphics and pictograms, which can be found in most contemporary software packages and enable creative professionals to start working without any further familiarization. Thus the usage of these well-established visuals seemed advisable. However, the icons have been slightly modified to match the overall design of the toolsets which have already been created by the Institute of Animation within the scope of Project Dreamspace. The icons themselves are kept in black and framed by a circle coloured in a warmish yellow. Thin outlines add to the button form for more complex icons with several overlapping elements. All graphics turn white when selected. The current buttons constitute a variation of the RET-VP approach from 2015 and aim for a clearer and purposefully frugal design, guaranteeing legibility even on smaller screens.
Figure 8: Old RET-VP icons (top row), new VPET icons (lower row)
Navigation

Ideally the user is capable of manoeuvring through the scene just by walking and looking, exploring the virtual surroundings in an independent and natural manner. In this case no further graphical interface is needed to provide the user with the necessary functionality. However, VPET is also meant to run on standard mobile devices without positional tracking, such as normal tablets and smartphones, which only come with gyroscopes and thus only allow a detection of the orientation. The simplest way of altering the point of view is to switch between predefined camera positions, which are either specified using the snapshot feature in scout mode or included in the scene delivered by the synchronization server. The related buttons for working with these presets can also be accessed in the perspective sub-menu, which is part of the general menu. This sub-menu also encompasses options for orthographic views and allows the VPET client to call for camera position data from the Ncam system. The latter enables the user to view the virtual scene from the perspective of the principal film camera. In case no external data is provided, the free perspective mode offers additional touchscreen knobs for navigating through three-dimensional space manually.
Selection

A basic tap on the display definitely constitutes the most obvious and intuitive practice to select 3D models or GUI elements. Touching the screen will lock the selection on the proximate object. In case elements occlude each other, the object which is located closest to the camera will be favored. This procedure goes without any graphical elements.
Editing

As soon as a character, asset or light source has been picked with a finger tap, a pale overlay illustrates the altered status of the object. The concurrently appearing circular menu offers different buttons, depending on the type of object and the chosen mode.

When selecting a 3D model in edit mode, five buttons appear, namely translation, rotation, scale, gravity and reset. After selecting one of the transformation options, the menu disappears and a contextual 3D widget pops up which indicates the chosen axis by highlighting. As the design of these widgets is well known and follows conventions which have been approved and commonly applied across the range of DCC tools for years, there is no need for self-made alternatives. By sliding over the display, the virtual object can be repositioned, rotated and scaled. When deciding for translation, VPET proposes three more manipulation options besides the 3D widget, allowing the user to move objects and lights by attaching them to the camera, wiping over the touch area, or shifting them on the floor by clicking on the desired position. All four procedures can be selected through buttons at the lower edge of the screen. When an object has come to rest at the intended position, thus successfully transformed, a short tap on an empty area of the display resets the selection and induces the tablet to return to idle mode and a blank screen.

In edit mode, every light source is permanently previewed as a 2D gizmo, showing nothing but the bulb centre position. Since lights can be translated analogously to objects, while rotation can be used to realign the cone of spotlights, the translation and rotation buttons stay the same. Additionally, the menu now provides three new intrinsic light options: colour, intensity and cone angle. When picking those, context-dependent sliders appear at the left and right lower corners of the screen. The colour slider indicates the current light colour in RGB, whereas a grayscale slider delivers the necessary real-time feedback for intensity.

Imported 3D objects may contain more or less complex animation, which requires spatial and temporal fine-tuning in real time. While in animation mode, the application previews the trajectory of the selected object alongside a row of keyframes. Furthermore a timeline including play, pause and jump buttons appears, where the timing of animation data can be adjusted with frame-accurate precision. In order to manipulate a single keyframe, a submenu offers options for removal, positioning and retiming. Once the positioning tool has been selected, the keyframe is repositioned according to the object transformation method. In animation mode all kinds of objects can be supplied with a specific cue number, which allows the user to fire the object-related animation clip on command by tapping on the cue button. The particular animations are thereby triggered in numeric order, while several objects can share one single cue number.
Scout Mode

In scout mode the selection and manipulation of assets and lights is locked, whereas the intrinsic and extrinsic camera parameters can now be adjusted. Four buttons appear at the lower edge of the screen, offering the functionality for general camera specs, focal length, snapshot and trajectory recording. Additionally, scout mode is meant to provide tools for painting directly onto a captured frame for further communication and planning.

Figure 9: Scout mode icons
Annotations

Annotations may appear reasonable when different people work within one scene in succession and wish to document the changes and modifications they have made. In the final demonstrator VPET will also introduce an option for appending notes to single objects. When tapping on the corresponding button, a separate frame appears displaying the current annotation. A pencil icon guides the user to an editable version of the text field, which comes along with the natively supported display keyboard. Naturally, a save option stores the changes and registers the time the user last accessed the annotation.
3.2.5 Documentation

Documentation is essential to assist people using the tools and to support developers with their implementation in production environments. With the intention to release VPET under an open source license, the documentation is also essential to bring more people into the project and help them get started. Therefore an overview of the system is given, as well as an introduction on how to set up the tools. Developers can explore classes and methods through the given code reference. This is created from the source code using a common documentation generator which interprets class and method definitions and includes developer comments.

The documentation can be found in the repository, which will be available to the public when the tools have been released officially.
3.3 Supporting additional sensing devices

3.3.1 Device tracking

It is of high interest to register the editing tablets inside the real set so that the user always has the correct point of view in the virtual scene as he/she moves around in the studio. One way to achieve this goal is through motion capture (mocap) systems and a calibration rig attached to the tablets (see Figure 10). However, this requires an expensive mocap system, which not all studios are equipped with. In addition, there are time-consuming studio amendments and calibration processes involved.
Figure 10: James Cameron using a virtual viewpoint tracked with a mocap system
A promising approach is to use visual SLAM (Simultaneous Localization and Mapping) algorithms and the integrated rear camera of the tablets. SLAM deals with the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it. Over the years SLAM-based tracking has matured into variants such as PTAM [Klein07], RSLAM [Mei11], DTAM [Newcombe11], and ORB-SLAM [Mur15]. On-going work is to combine SLAM with depth sensors and other sensors. The Ncam system was the first commercially available system for professional camera systems, requiring a cabled system of multiple sensors and a performant PC/workstation.

For our use case, experiments were conducted with tablet PCs on the hardware side. We carried out experiments with the ORB-SLAM and PTAM algorithms on a Microsoft Surface tablet. ORB-SLAM is a comparatively stable and versatile SLAM method for monocular, stereo and RGB-D cameras. However, the system is in principle built for Linux-based devices, and it becomes a challenge to port it to the Windows operating system. On the other hand, PTAM is compatible with Windows and is suited for tracking in small workspaces. We therefore carried out our experiments with the PTAM system.
In terms of the integration into the developed tablet application, the camera translation and rotation values from PTAM are streamed, using ZeroMQ and the defined protocol, to the Unity application to guide touch-input navigation within the scene. Although PTAM in integration with the editing client works as expected, the major limitation arises when tracking wider workspaces. In such a scenario PTAM is unstable and loses track beyond the registered stretch. Moreover, the system does not automatically recover and requires a complete re-initialization.
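For illustration only, a tracker-side pose publisher of this kind could look like the following Python sketch; the endpoint, topic and JSON payload are assumptions and do not reflect the project's defined protocol.

import json
import time
import zmq

# Publish the current camera pose so that a tablet application can subscribe.
context = zmq.Context()
socket = context.socket(zmq.PUB)
socket.bind("tcp://*:5558")

def publish_pose(translation, rotation_quat):
    """Send one pose update (translation in metres, rotation as a quaternion)."""
    payload = json.dumps({"t": translation, "q": rotation_quat})
    socket.send_multipart([b"camera/pose", payload.encode("utf-8")])

# Example: stream a static pose at roughly 30 Hz.
while True:
    publish_pose([0.0, 1.6, 0.0], [0.0, 0.0, 0.0, 1.0])
    time.sleep(1 / 30)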
Recently, Google has introduced the Project Tango [Tango] prototype. The prototype includes an Android tablet tracked with visual and depth cues, fused with gyroscope and accelerometer readings in real time. There are two operating modes available: visual odometry, or tracking with area learning enabled. The latter mode includes localization in pre-learned areas as well as drift corrections (loop closing). As far as we tested the device, the tracking was stable and precise. The Tango tracking information is also integrated into the Unity-based tablet application using the Tango Unity API (Application Programming Interface) provided by Google.
3.3.2 Integrated on-set depth sensing

We integrated depth sensors into the on-set tools. The available sensors are not able to deliver a quality useful for final post-production footage. However, the use of a depth camera to obtain a 3D mesh of props or actors is useful for pre-visualization on set, to get a rough understanding of the framing and interaction of real props and actors with virtual components. This allows changes to camera angles, illumination and scene layout to be made as early as possible. Such a possibility allows changes to be performed while still in the filming stage, avoiding post-production (CGI) issues and the re-filming of certain parts.
Figure 11: Unfiltered RGB-depth image integrated into the VPET tools
We used Intel RealSense depth cameras for our tests, but any RGB+depth camera can be used in principle. The camera delivers a depth map that is converted to a 3D mesh of the objects and integrated into the Unity-based VPET tools. A basic system with a bounding box has been implemented to visualize a specific part of the whole depth image (see Figure 11). This allows background/foreground areas to be ignored, or the focus to be put on a small area. The bounding box's size can be modified by the user, and it can be hidden/unhidden for a better visualization.

Other features include the possibility to change the mesh's material, using either the texture from the RGB image or any other material (metal, wood, stone, etc.) that the user might find appropriate. Alternatively there is also the option to freeze the mesh, and there is full integration with the Unity editing tools (mesh scaling, rotation and translation).
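To illustrate the depth-to-geometry step described above, the following Python sketch back-projects a depth map into a 3D point set using pinhole intrinsics and keeps only points inside a user-defined bounding box. The intrinsic values and box limits in the example are placeholders; the actual VPET integration builds a Unity mesh from the sensor output instead.

import numpy as np

def depth_to_points(depth, fx, fy, cx, cy, box_min, box_max):
    """Back-project a depth map (metres) into 3D camera-space points and keep
    only points inside an axis-aligned bounding box.

    depth: (H, W) float array; fx, fy, cx, cy: pinhole intrinsics;
    box_min, box_max: (3,) bounding box corners in camera space.
    Returns an (N, 3) array of points. Illustrative sketch only.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    valid = points[:, 2] > 0                      # drop missing depth samples
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[valid & inside]

# Example call with placeholder intrinsics and a 1 m^3 region of interest:
# pts = depth_to_points(depth, fx=615.0, fy=615.0, cx=320.0, cy=240.0,
#                       box_min=np.array([-0.5, -0.5, 0.3]),
#                       box_max=np.array([ 0.5,  0.5, 1.3]))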
3.4 Light-Editing

The harmonization of the lighting between the real and the virtual set is an important task in post-production. This harmonization occurs in both directions. On one side, the real lighting setup of the studio is captured, modelled, estimated and passed to the LiveView server so that the augmented virtual assets are rendered with the same lighting as the real studio lamps. On the flip side, these estimated light sources can be edited in the virtual scene using the editing tools, e.g. the developed Unity-based tablet application; these changes are then sent back to the real lamps to update the studio lighting. In the meantime, real-time feedback is available through the global illumination renderer connected to the system. This gives the DOP and other creative professionals more control and the possibility to try out and edit the lights and make valuable creative decisions directly on set.

In the rest of this section, we cover these two aspects of the lighting harmonization.
3.4.1 Light capture and estimation

Applications like the rendering of images using computer graphics methods usually require sophisticated light models to give better control. Complex scenes in computer-generated images require very differentiated light models to give a realistic rendering of the scene. This usually includes a high number of (virtual) light sources to model a scene so as to reproduce accurate shadows and shading. In particular, in the production of visual effects for movies and TV the real scene lighting needs to be captured very accurately to give a realistic rendering of virtual objects into that scene. In this context the light modelling is usually done manually by skilled artists in a time-consuming process.

Image-based light capture, as pioneered by Debevec [Debevec98], was introduced as a fast method to capture the lighting situation of a real scene. It requires a rendering method capable of using the captured light probes, usually a ray tracing or global illumination renderer. Furthermore, the method is constrained to scenes of small extent; strictly speaking, the light probe illumination model is only valid for one point of the scene: the centre or nodal point of the light capture device. Having said that, these approaches tend to produce photorealistic results when the above conditions are met. These models basically contain all the lighting information of the scene for that certain point. There are a couple of efforts for sampling important areas of these light probes and modelling them with many point light sources [Debevec08, Agarwal03, Kollig03]. This idea, although it makes the rendering faster and produces visually comparable results to using all the information in the light probes, does not model the actual sources, and hence suffers from the same problem mentioned above.

The approach presented in Dreamspace aims to estimate discrete light source models from a series of image light probes, captured at different positions in the real scene. From this information we intend to estimate the geometric and radiometric properties of the spot and area light sources in the scene. In contrast to image-based lighting, once the sources are modelled separately, they can be used for augmenting assets in different positions of the scene flawlessly.
3.4.1.1 Related work

Estimating discrete light sources from images with the high precision needed for photorealistic rendering, although desirable, has only little prior work. In the most simplified case, light sources are only modelled with an incident direction (distant light source assumption) [Liu09, Wang03]. However, for a more accurate estimation of the light geometric properties, point [Hara05, Frahm05, Powell01, Weber01] or area light [Schnieders09] sources are taken into account. In these cases, the type of light source must be known beforehand. To cope with this limitation, Zhou and Kambhamettu [Zhou08] and Takai et al. [Takai09] assume more generic light source models and estimate their parameters accordingly. Here an interface to the renderer is required to transform parameters of the chosen general model to the light source types available in the renderer.

Regarding the capture setups, a common approach is to use reflective objects of known geometries, like mirror spheres, which can be used for finding geometric properties of the light sources, e.g. their directions and/or 3D positions [Kanbara04, Powell01]. Weber and Cipolla [Weber01] use a calibration cube with Lambertian reflectance. However, using a probing object generally implies knowing its geometry and exact material model in advance. Ackermann et al. have recently proposed to directly triangulate the visible light sources in captured images for the geometric calibration [Ackermann13], although, due to the limits of their setup, they had to use mirror spheres for registration of the probing camera itself.

Park et al. [Park14] investigated the problem of image-based geometric and radiometric calibration of non-isotropic near point light sources based on shading cues on a probing plane (here a whiteboard). In addition to the 3D position of the point source, its dominant direction and the intensity fall-off parameters are estimated. However, due to their setup and the inherently limited field of view of the normal lenses employed, the estimated fall-off parameters are valid only for a limited range.
3.4.1.2 Proposed method

To estimate discrete light sources in a studio, in Dreamspace we propose to capture the lighting environment with a sequentially captured array of wide-angle or spherical light probe images which are registered to the scene. Different setups were introduced and tested in the project for the capture and registration of the light probe images. Using these captured data, we estimate the detailed geometric and radiometric properties of the spot and rectangular area light sources using the proposed algorithms.

In contrast to the methods mentioned in the related work, there are a number of advantages to our proposed approach. First, the probing camera is registered using approaches which require minimal amendments to the studio. Second, we are not limited to any specific probing object, since the properties of each light are estimated based on processing the probe images. In addition, since the probing camera can move freely in the area of interest, there are no limits in terms of the covered space. The large field of view of the fisheye lens is also beneficial in this regard.

In this work, spot lights are modelled finely with their 3D positions, dominant directions (light axes), beam angles and their intensities. To find the model parameters, a two-step approach is proposed. In the first step, the 3D positions of the lamps are found using a triangulation process. Then an optimization scheme is employed to find the radiometric parameters. In addition, we are also able to model the area lights as isotropic planar rectangles. The geometry of the area lights is also estimated by modelling the problem as the minimization of a defined cost function.

The use of the image probes as input to our algorithms requires calibrating the probing camera-lens system geometrically and radiometrically. This involves finding the sensor response curve as well as the geometric and radiometric properties of the lens, such as the projection type, vignetting effect, etc.

Finally, we have tested our proposed method on virtually generated as well as real datasets and provide the experimental results.

In the rest of this section, first the different capture setups tested in the project are introduced. Then there is a subsection on the necessary camera-lens calibrations. The light source models and the proposed estimation methods are described in detail afterwards. In the end, the experiments' setups and results are presented.
Capture Setup

To capture the data needed for the light estimation, we use an array of light probes taken at different positions in the studio environment, until we have enough data to estimate the fine details of the discrete light sources in the scene. However, these probe images must be registered to the scene to be useful. We have tested three different capture setups in the project so far; the details of each capture setup are described in the following.

Mocap-like system. In the early months of the project, an in-house mocap-like system was developed for tracking the probing camera using an attached calibration rig (see Figure 12). The system is constructed from a number of GoPros as witness cameras and a consumer-grade Canon EOS-5D Mark III equipped with a Walimex Pro 8mm fisheye lens, which has a 164.5° horizontal field of view (fov).
We propose a two-step calibration and registration approach. In the first step, a planar asymmetric calibration pattern (Figure 13) is used for simultaneous calibration of the intrinsics and the pose of all the witness cameras and the principal camera, using the bundle adjustment module from R3DR [R3DR]. To minimize the calibration error, the pattern must be captured in different positions and orientations. The pattern is detected using OpenCV [OpenCV] and the found features are fed into the bundle adjustment module. In the next step, the parameters of the witness cameras are kept fixed and the pose (position and orientation) of the probing camera is registered in the same coordinate system by extraction of the four colour features of the attached calibration rig (Figure 13).
Figure 12: Mocap-style light probe localisation

Figure 13: An in-house mocap system for tracking the fisheye probing camera
The installation and calibration of the witness cameras are very time-consuming and not really suitable for on-set use. In addition, we had to manually download the captured images from the GoPros.
Ncam & Ricoh Theta S. Another capture setup suitable for the Dreamspace production pipeline is to use the Ncam principal camera tracking technology [ncam] and a fisheye camera mounted on the principal camera. We used the spherical Theta S camera from Ricoh as the light probing device in our experiments. The Ncam server reports the principal camera's pose in real time using the sensor bar attached to the camera rig. The technology is based on visual SLAM fused with the different sensor readings available on the Ncam bar. In this setup, we mount our Theta S on the same camera rig (Figure 14) and, after finding the offset between the two cameras, we have the pose of the light probing device.

Figure 14: Spherical Ricoh Theta S mounted on the principal camera rig, tracked with Ncam
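As a minimal illustration of that last step (not project code), composing the tracked principal camera pose with a fixed, pre-calibrated rig offset gives the probe camera pose; the 4x4 homogeneous matrices below are an assumption about the data layout.

import numpy as np

def probe_pose(principal_pose, rig_offset):
    """Compose the tracked principal camera pose with the fixed offset of the
    light probe camera mounted on the same rig.

    principal_pose: 4x4 camera-to-world matrix reported by the tracking system.
    rig_offset: 4x4 probe-to-principal matrix found in a one-off calibration.
    Returns the 4x4 probe-camera-to-world matrix.
    """
    return principal_pose @ rig_offset

# Example with a purely illustrative offset: probe mounted 12 cm above the
# principal camera, with identical orientation.
offset = np.eye(4)
offset[1, 3] = 0.12
# world_from_probe = probe_pose(world_from_principal, offset)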
Although the Ncam tracking information is stable and precise, the main drawback here is that not all studios are equipped with such an expensive tracking technology, and an Ncam-equipped camera is also not very easy to move around.
Tracked tablet. In the third setup, we use a tracked tablet and an attached industrial C-mount See3CAM_11CUG camera equipped with a Fujinon 1.4mm fisheye lens with a 185° horizontal field of view (Figure 15). As discussed in section 3.3.1, there are different visual SLAM libraries for camera tracking. Here we use the Google Tango Project prototype with the area learning mode enabled for our experiments. An Android app was developed to access the tracking information of Tango using the provided Java API, capture light probe images and save them to the external storage. A brief description of this app can be found in section 4.2.

Figure 15: C-mount camera attached to the Tango rig, used for capturing registered light probes
Radiometric and geometric calibration of the probing device

In the different setups mentioned above, we used three different wide-angle or spherical cameras. In order to be able to use the captured light probe images in our algorithms, one needs to calibrate the probing camera.

From the geometric point of view, the parameters of the lens projection function need to be estimated. There are a number of off-the-shelf libraries for this purpose. We employed the basic radially symmetric polynomial model of Kannala and Brandt [Kannala06] to calibrate the geometric projection of the fisheye camera-lens systems used. In the case of the Theta S, the geometric projection is already known and there is no need for calibration: the camera stitches the images of its two internal fisheye lenses and outputs them as one image with an equirectangular projection.

In addition to the geometric properties, each camera-lens system has some radiometric behaviour that needs to be known as well. In our work, the cameras are first calibrated using the pfscalibration tool [pfscalibration] to find their sensor response. This response function is later applied to all gamma-corrected images to produce radiance-linear probe images.

Once we have linear probe images, we can calibrate for other lens effects like vignetting, etc. This is a one-time procedure which has to be done offline for the chosen probing camera-lens system. The procedure is described in the following.
The observed intensity of a light source at each probe position is considered to be related to the brightness of its corresponding blob in the thresholded probe image. The brightness of each blob is intuitively defined as the sum over its pixel values.
A fixed spot lamp is captured along its dominant direction from different distances and, at every distance, with varying lens orientations so that the light blobs appear on different areas of the camera's sensor plane. Based on this data, we verify that the brightness of the light blobs at a fixed lens orientation attenuates quadratically with the probing distance. In addition, a brightness factor is introduced to model the change in blob brightness values w.r.t. lens orientation. This factor is modeled as C(θ), where θ is the zenith angle of the lamp, i.e. the angle between the camera optical axis and the camera-lamp vector (Figure 16). C(θ) can have different shapes depending on the camera-lens system, e.g. linear or quadratic. We estimate the parameters of C(θ) through a least squares fit to the measured samples.
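A minimal sketch of this one-time fit is given below, assuming the blob brightnesses, probing distances and zenith angles have already been extracted from the calibration shots; the sample values are placeholders and the quadratic parametrization of C(θ) is one of the shapes mentioned above.

# Sketch of the one-time radiometric fit described above: blob brightnesses B
# of a fixed spot lamp measured at distances d and zenith angles theta. After
# normalizing out the quadratic distance attenuation, a quadratic brightness
# factor C(theta) = c0 + c1*theta + c2*theta^2 is fitted by linear least squares.
# Data values are placeholders; the real samples come from the calibration shots.
import numpy as np

d = np.array([1.0, 1.0, 1.5, 1.5, 2.0, 2.0, 2.5, 2.5])        # metres (assumed)
theta = np.radians([5, 40, 10, 55, 15, 60, 20, 70])            # zenith angles
B = np.array([980., 610., 420., 205., 238., 98., 148., 52.])   # blob brightness

B_norm = B * d**2                       # remove the 1/d^2 attenuation
A = np.stack([np.ones_like(theta), theta, theta**2], axis=1)
coeffs, *_ = np.linalg.lstsq(A, B_norm, rcond=None)
c0, c1, c2 = coeffs

def C(th):
    """Brightness factor for zenith angle th (radians), relative to the axis."""
    return (c0 + c1 * th + c2 * th**2) / c0

print("fitted C(theta) coefficients:", coeffs)
print("predicted factor at 45 deg:", C(np.radians(45)))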
Figure 16 shows the calibration sampling process and the distance-normalized blob brightness values sampled on different areas of the sensor plane of the Canon EOS-5D Mark III equipped with the Walimex 8 mm fisheye lens (first setup).
Figure 16: Left: radiometric calibration of the camera-lens system; Right: Canon 5D / Walimex behaviour
Light source models and estimation of the model parameters
Spot lights. We model spot lights with an intensity value I, a 3D position vector L, a dominant direction (light axis) l, and a beam angle b. Spot lights have quadratic intensity attenuation w.r.t. distance to the light source and a quadratic angular intensity fall-off. The intensity fall-off is radially symmetric around the light axis.
To estimate the 3D position vectors of the light sources, one needs to shoot rays from every detected light blob in all probe images and triangulate the corresponding rays from at least two probe positions for each source. The following pseudocode shows the required steps:
Inputs:
    A set of registered light probe images
    Intrinsic calibration parameters of the probing camera
Output:
    3D position vectors of the light sources
Steps:
    Detect light blobs in all probe images
    Match light blobs to their corresponding light sources
    For each detected light source:
        Shoot rays from its corresponding light blobs
        Triangulate the computed rays
        Return the estimated 3D position of the light source
Since the light sources are by far brighter than the studio environment, the corresponding light blobs are easily detected by thresholding the luminance values in the probe images. The area of each detected blob is also checked to be larger than a threshold to avoid noisy pixels. For the matching step, considering the light blobs detected in all probe images together, the ray corresponding to the first blob is assigned to the first light source. Then, based on a distance threshold, each subsequent ray is either assigned to one of the existing light sources or creates a new light source. Finally, a linear system of equations is set up for each light source and the corresponding rays are triangulated with the least squares method. For a review of triangulation approaches, the interested reader can refer to [Hartley97].
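The least squares triangulation of the rays belonging to one light source can be written as a small 3x3 linear system; the sketch below shows one common closed-form solution, assuming the registered probe positions and the ray directions towards the detected blobs are already available.

# Sketch of the least-squares ray triangulation step: for a set of rays
# (probe position P_i, direction d_i towards the detected blob), find the
# 3D point x minimising the sum of squared distances to all rays by solving a
# 3x3 linear system. Probe poses and blob directions are assumed given.
import numpy as np

def triangulate_rays(origins, directions):
    """origins, directions: (n, 3) arrays; directions need not be normalised."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projector onto the plane normal to d
        A += M
        b += M @ o
    return np.linalg.solve(A, b)

# Toy example: three probes looking at a lamp placed at roughly (1, 2, 3).
P = np.array([[0., 0., 0.], [4., 0., 0.], [0., 5., 0.]])
L_true = np.array([1., 2., 3.])
D = L_true - P                            # ideal, noise-free blob directions
print(triangulate_rays(P, D))             # ~ [1. 2. 3.]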
To estimate the remaining parameters of the spot light sources, i.e. the intensity, the light axis and the beam angle, we formulate the problem as an optimization problem. The cost function to be minimized is the sum of squared distances between the observed light intensities in the probe images and the computed model intensities,

E_j = Σ_{i=1..m} ( B_ij · ||P_i − L_j||² / C(θ_ij) − I_j(ɸ_ij) )²,

with m the number of probes in which light source j is visible and P_i their corresponding 3D positions. B_ij is the brightness of the blob of light j in probe i, and the terms ||P_i − L_j||² and C(θ_ij) compensate for its quadratic attenuation w.r.t. probing distance and for its brightness factor based on its position on the camera's sensor plane, respectively. The angle ɸ_ij is defined as ɸ_ij = ∠(l_j, P_i − L_j). As mentioned in the model specification, in our formulation we assume a quadratic angular intensity fall-off for spot lights,

I_j(ɸ_ij) = I_j · (1 − (ɸ_ij / b_j)²).

This choice of I_j can model the coarse fall-off characteristics of a real studio spot lamp with a limited number of probe observations. We verify this choice later in the Experiments section.
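One possible way to set up this minimization, in the spirit of the SciPy-based implementation mentioned later in the Experiments section, is sketched below; the parametrization, bounds and placeholder data are assumptions, and C denotes the brightness factor obtained from the radiometric calibration.

# Sketch of the spot-light parameter fit: the residual compares the
# distance/C(theta)-compensated blob brightness with the quadratic fall-off
# model I_j * (1 - (phi/b)^2). Parametrisation and bounds are assumptions;
# C is the brightness-factor function from the radiometric calibration.
import numpy as np
from scipy.optimize import least_squares

def spot_residuals(x, P, B, L, C, theta):
    """x = [I, lx, ly, lz, b]; P: (m,3) probe positions; B: (m,) blob brightness;
    L: (3,) triangulated light position; theta: (m,) blob zenith angles."""
    I, b = x[0], x[4]
    l = x[1:4] / np.linalg.norm(x[1:4])           # light axis as a unit vector
    v = P - L
    dist2 = np.sum(v * v, axis=1)
    phi = np.arccos(np.clip((v / np.sqrt(dist2)[:, None]) @ l, -1.0, 1.0))
    model = I * (1.0 - (phi / b) ** 2)            # quadratic angular fall-off
    observed = B * dist2 / C(theta)               # compensated observations
    return observed - model

# Hypothetical inputs: replace with real probe data, the triangulated position
# and the calibrated C(theta) from the previous step.
P = np.random.rand(8, 3) * 3.0
L = np.array([1.0, 1.0, 2.0])
theta = np.random.rand(8) * 0.8
C = lambda th: 1.0 - 0.1 * th                     # placeholder brightness factor
B = np.abs(np.random.rand(8)) + 0.1               # placeholder brightnesses

x0 = np.array([1.0, 0.0, 0.0, -1.0, np.radians(45.0)])   # I, axis, beam angle
res = least_squares(spot_residuals, x0, args=(P, B, L, C, theta),
                    bounds=([0, -1, -1, -1, 0.01], [np.inf, 1, 1, 1, np.pi]))
print("estimated intensity, axis, beam angle:", res.x)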
Area lights. In our work, we consider isotropic rectangular area light models. The parameters of the model are the vertices of the planar illuminating geometry and the radiance value.
In order to estimate the geometry of the area light, we adopt the method proposed by Zhou and Kambhamettu [Zhou08] and adapt it to our needs. The original proposition assumes only one stereo view of the area light and then optimizes for the light surface parameters by maximizing the overlapping area between the projections of the two light highlights on the light surface. However, there are a couple of points to consider about their proposed method. First, in order to find the projection of the lamp highlight on the light surface, one ray is shot from every single pixel in the probe image, which makes the process time consuming. Second, the setup is based on a stereo camera and a metal sphere, which makes it difficult to use more than one light probe due to their registration approach. Third, there is no obvious information on the optimization or search algorithm used to find the optimum light plane parameters.
In our work, we use the idea of Zhou and Kambhamettu [Zhou08]; however, our setup is based on more than one stereo probe (many probes). In addition, we fit polygons to the extracted light surface highlights in the probe images and shoot rays only from the vertices of the fitted polygons. We provide our estimation results with a swarm-based optimizer as well as an exhaustive search method in the Experiments section.
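The overlap term of this cost can be evaluated by intersecting the projections of the fitted highlight polygons on a candidate light plane. The sketch below uses the shapely library for the 2D polygon intersection; this library choice and the example coordinates are illustrative assumptions, not necessarily what was used in the project.

# Sketch of the overlap term in the area-light cost: project the fitted highlight
# polygons from two probes onto a candidate light plane (projection omitted here)
# and measure their overlapping area. shapely is used for the 2D polygon
# intersection; it is an illustrative choice, not necessarily the project's.
from shapely.geometry import Polygon

# Hypothetical 2D projections (in light-plane coordinates) of the highlight
# polygons seen from two light probes.
proj_a = Polygon([(0.0, 0.0), (1.0, 0.0), (1.0, 0.6), (0.0, 0.6)])
proj_b = Polygon([(0.2, 0.1), (1.2, 0.1), (1.2, 0.7), (0.2, 0.7)])

overlap = proj_a.intersection(proj_b).area
union = proj_a.union(proj_b).area
print("overlap area:", overlap, "  overlap ratio:", overlap / union)
# A candidate light-plane pose that maximises this overlap (found e.g. with a
# swarm-based optimizer or exhaustive search, as described in the Experiments
# section) is taken as the estimated area-light geometry.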
3.4.1.3 Experiments
We have evaluated our proposed algorithms for estimating the parameters of spot and area light sources both on virtually generated datasets and in real scenarios. Both scenarios are covered in detail in the following subsections.
Virtually generated datasets
Spot lights. 1000 randomly generated datasets are created for each setup. Each dataset contains 10 randomly positioned light probes of a spot light with random intensity, position, dominant direction (light axis) and beam spread (beam angle) with quadratic intensity fall-off. The spot light is assumed to be visible in the generated probes, with random uniform noise (0 to 20%) added to its intensity values.
The evaluation results with two of the best performing gradient-based optimizers as well as a selected adaptive inertia weight PSO (Particle Swarm Optimizer) by Nickabadi et al. [Nickabadi11] are provided below. Table 1 shows the success rates for each algorithm over the 1000 datasets, one run per dataset. Table 2 shows the averaged estimation errors of the spot lights' unknown parameters, i.e. intensity, beam angle and dominant direction. Running times are measured on a workstation equipped with an Intel Core i7-3770K processor.
For the gradient-based optimizers, i.e. Levenberg–Marquardt and Interior Point, tests have been performed using the MATLAB implementation of the algorithms with a fixed starting point equal to intensity = 1.0, beam angle = 45°, and light axis = Z.
The selected variant of PSO is implemented in Python using NumPy with the following parameters: population size = 100, c1 = c2 = 2.05 (recommended common values), maximum number of iterations = 500, and stagnation threshold = 100 iterations.
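For illustration, a minimal particle swarm optimizer with the population size and acceleration coefficients quoted above is sketched below; note that the inertia weight here simply decays over time, which is a simplification and not the adaptive rule of [Nickabadi11].

# Minimal particle swarm optimizer with the population size and acceleration
# coefficients quoted above (N = 100, c1 = c2 = 2.05). The inertia weight here
# simply decays linearly; the actual adaptive rule of [Nickabadi11] (based on
# the swarm's success rate) is not reproduced. Illustrative sketch only.
import numpy as np

def pso(cost, lower, upper, n_particles=100, c1=2.05, c2=2.05, iters=500):
    dim = len(lower)
    rng = np.random.default_rng(0)
    x = rng.uniform(lower, upper, (n_particles, dim))       # positions
    v = np.zeros_like(x)                                     # velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                   # global best
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                            # simple inertia decay
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lower, upper)
        val = np.array([cost(p) for p in x])
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Toy usage: minimise a shifted sphere function in 5 dimensions.
best, best_val = pso(lambda p: np.sum((p - 0.3) ** 2),
                     lower=np.zeros(5), upper=np.ones(5))
print(best, best_val)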
Noise level | Constrained Least Square (Levenberg–Marquardt) | Interior Point | AIW-PSO
0%          | 98.7%                                          | 96.9%          | 99%
5%          | 99.2%                                          | 96.8%          | 100%
10%         | 97.3%                                          | 96.2%          | 99%
20%         | 91.7%                                          | 90.3%          | 92%
Table 1 Optimization success rates over 1000 random datasets, one run per dataset
Constrained Least Square (LM):
Noise | Intensity | Beam Angle | Light Axis | Elapsed Time (sec)
0%    | 0.2%      | 0.1°       | 0.2°       | 0.02
5%    | 2%        | 1°         | 1.3°       | 0.03
10%   | 3.5%      | 1.7°       | 2.3°       | 0.03
20%   | 6.8%      | 2.7°       | 3.7°       | 0.03

Interior Point:
Noise | Intensity | Beam Angle | Light Axis | Elapsed Time (sec)
0%    | 0.2%      | 0.1°       | 0.2°       | 0.06
5%    | 1.9%      | 1.0°       | 1.4°       | 0.06
10%   | 3.4%      | 1.5°       | 2.1°       | 0.06
20%   | 6.4%      | 2.5°       | 3.7°       | 0.06

AIW-PSO:
Noise | Intensity | Beam Angle | Light Axis | Elapsed Time (sec)
0%    | 0.3%      | 0.3°       | 0.3°       | 6.92
5%    | 2%        | 0.8°       | 1.2°       | 6.11
10%   | 3.5%      | 1.8°       | 2.5°       | 6.03
20%   | 6.3%      | 2.7°       | 3.4°       | 6.09

Table 2 Estimation errors averaged over the successful results of Table 1
Area lights. In a similar manner, 100 randomly generated datasets are created. Each dataset contains a rectangular area light with random width, height, position and normal orientation, as well as 5 randomly positioned light probes in which the area light is visible.
Here, finding the closed-form gradient of the cost function proved to be very difficult, and the constrained Levenberg–Marquardt and Interior Point algorithms were unable to estimate it numerically. Therefore, we employ the same PSO variant and parameter setting mentioned above for finding the optimal parameters of the area lights. Table 3 shows the success rate as well as the errors in parameter estimation averaged over 100 random datasets for the selected PSO variant.
                         | AIW-PSO (N = 100) | AIW-PSO (N = 150)
Success Rate             | 98%               | 98%
Normal Error             | 0.0°              | 0.0°
Distance-to-origin Error | 1.6%              | 2.4%
Elapsed Time             | 13.23 sec         | 20.22 sec
Table 3 Optimization success rates and the estimation errors averaged over 100 random datasets, one run per dataset
Finally, it is worth mentioning that since the selected cost function is based on maximizing the overlapping projected areas on the light plane, it is by nature largely invariant to errors introduced by the segmentation of the highlights and the precision of the corresponding polygon fitting.
Real dataset using in-house mocap system (first setup)
In our early experiments in Dreamspace, to verify the algorithms proposed for the estimation of spot light sources, we recorded a real dataset in our studio of a scene (approx. 3x3 m) illuminated with two 800 W 3200 K tungsten halogen lamps with different positions, directions, intensities and beam spreads. The intensities are set using a PC-connected DMX controller and a DMX-controlled dimmer kit. Our probing camera is a Canon EOS-5D Mark III with a Walimex Pro 8 mm fisheye lens, which has a 164.5° horizontal field of view (fov). This camera was also used to capture images of a reference object in a separate step with a rectilinear lens at the same color balance settings. Regarding the witness cameras used to register the light probes, 3 GoPro HERO3s at the medium field of view setting (80.5° horizontal fov) are fixed around the scene.
Light probes are taken from 18 positions at a fixed height using a moving tripod dolly. At every position, two horizontal probes are captured with opposite viewing directions in an attempt to cover the whole scene from that viewpoint. These probes are roughly captured in the area lit by the lamps. They are taken with the lowest exposure setting on the camera (ISO speed 50, shutter speed 1/8000 sec, aperture size f/22) so that the bright areas are not saturated on the sensor plane. High Dynamic Range (HDR) probes are not necessary in this test since only the bright areas of the scene are of interest. In our experiments, we achieved the same estimation results when using HDR probes as with single-exposure probes.
In our first experiment, to check the stability of the proposed non-constrained least squares minimization method and to verify our choice of the quadratic fall-off curve, we ran the optimization process 1000 times. If the elements of the found solution vector are outside the ranges of interest, it is considered an invalid solution to our problem, e.g. in the case of a negative maximum intensity. Table 4 shows the convergence results for both of the lamps in our dataset. Runtimes are measured using the SciPy implementation under Linux on an Intel Core i7-3770K machine. The results clearly show that although both linear and quadratic fall-off curves apply reasonably well to lamp 2, the linear fall-off curve assumption on lamp 1 suffers from instability and a larger sum of squared residuals (more than 5 times) in comparison to the quadratic curve. In this experiment, only one valid solution was found for each lamp with a given fall-off assumption over all runs.
# | Fall-off Curve | Residual | No. of Steps | Elapsed Time | Valid Solution Found
1 | Quadratic      | 1.77e-3  | 121.61       | 101.17 ms    | 99.3%
1 | Linear         | 9.44e-3  | 164.99       | 130.39 ms    | 75.3%
2 | Quadratic      | 9.50     | 108.90       | 91.43 ms     | 99.7%
2 | Linear         | 4.28     | 93.64        | 74.56 ms     | 99.9%
Table 4 Statistics of the least squares minimization process averaged over 1000 runs
Table 5 shows the estimation results for the two lamps assuming a quadratic fall-off function. Reference values were measured as precisely as possible with a laser distance meter and a light meter. In terms of the estimated positions, we can only verify the height of the lamps from the floor, where the errors (11 cm and 4 cm) seem acceptable relative to the scale of our scene. We believe that this triangulation error can be further reduced if more than 3 witness cameras are used, so that we get better registrations of the probes. Directions and beam angles are also estimated with a small error; however, the estimation results for lamp 2 are better. The reason is that lamp 2 has a smaller beam angle and a lower height, which causes the probes to cover its whole beam cone, which is not perfectly the case for lamp 1. We only determine relative intensities; for absolute intensities a photometric calibration of the camera is needed. For our lamps, the ratio measured by the light meter at 1 m distance is 4.0e3 / 1.59e4 = 0.25 and the estimated one is 2.75 / 8.98 = 0.31.
# | Quantity  | Position            | Direction             | Intensity  | Beam Angle
1 | Estimated | (2.76, 3.75, 1.95)  | (-0.8, -0.35, -0.46)  | 2.75       | 56.22°
1 | Measured  | (-, -, 1.84)        | (-0.80, -0.41, -0.43) | 4.0e3 lux  | 58.0°
1 | Error     | (-, -, 0.11)        | 3.86°                 | -          | 1.78°
2 | Estimated | (-1.11, 1.15, 1.78) | (0.47, 0.52, -0.71)   | 8.98       | 50.50°
2 | Measured  | (-, -, 1.73)        | (0.49, 0.51, -0.71)   | 1.59e4 lux | 50.30°
2 | Error     | (-, -, 0.04)        | 0.88°                 | -          | 0.2°
Table 5 Estimation results for the lamps in our dataset assuming a quadratic fall-off curve
Furthermore, we use the above quantitative estimations of the lamps to render a reference object in our scene from a principal camera's point of view. A plaster fish statue is scanned with a 3D scanner and its model is rendered into the image of an empty table with known 3D geometry, without the real fish, as the backplate. Shadows of the (virtual) fish on the top surface of the table are generated based on the estimated properties of the above two lamps used in the virtual scene. Intel's Embree, a collection of open-source high-performance ray tracing kernels optimized for Intel platforms, is employed for rendering. We use the path tracer included in the package. Figure 17 shows renders of the fish 3D model side by side with its real appearance at two different positions on the tabletop, about 1 m away from each other. One can visually verify the shape, direction and brightness of the rendered shadows. A fixed ambient light is added to the scene to roughly model the reflections from the uncovered studio walls and floor.
Figure 17: Visual comparison of renders of the reference object (right) to the real ones (left)
3.4.1.5 Future work
One interesting follow-up that we will be investigating in the remaining months of the project is to use the data captured in HDR light probe images to automatically judge the scene's ambient lighting situation and eliminate the need for a manual guess. This can be considered an attempt to combine the discrete light source estimation and image-based lighting paradigms.
Another interesting direction that can be pursued is to develop algorithms and a user interface that enable the film crew on-set to recreate a certain captured lighting situation in the studio with respect to the physical constraints, such as the studio size and the number and types of lamps. This kind of user interface can be very helpful given the time constraints on-set.
3.4.2 Lighting Server and integration into the LiveView system
Once the virtual scene is initialized with the light sources estimated in the real scene, the creative professionals have the opportunity to edit them and improvise the lighting on-set in order to make better decisions. Therefore, there is a need to track these changes and pass them to the real lamps in the scene, to keep the lighting in the real and virtual scenes harmonized. This is performed by a dedicated Lighting Server.
Updates of the virtual light sources from the editing clients are sent to a small daemon server written in Python using ZeroMQ. The server deserializes each update and extracts the relevant information for the 4 (or more) DMX channels of the (real) lamps: RGB color and intensity. The server is allowed to sleep between updates, since the lamps retain their last known configuration and do not require constant keep-alive signals.
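A minimal sketch of such a daemon loop is given below; the message layout, port number, channel mapping and the DMX output stub are assumptions for illustration, and only the Python/ZeroMQ structure follows the description above.

# Minimal sketch of the Lighting Server loop: receive light-update messages over
# ZeroMQ, map them onto DMX channels and hand the 512-byte DMX frame to the
# output interface. The message layout, channel mapping and send_dmx_frame stub
# are assumptions for illustration.
import zmq

DMX_FRAME = bytearray(512)            # one full DMX universe, all channels at 0

def send_dmx_frame(frame):
    # Placeholder: actual transmission goes through the Enttec Open DMX USB
    # interface via its serial/FTDI driver, which is not shown here.
    pass

def main():
    ctx = zmq.Context()
    sock = ctx.socket(zmq.PULL)       # editing clients push updates to this port
    sock.bind("tcp://*:5556")         # hypothetical port number
    while True:
        update = sock.recv_json()     # e.g. {"channel": 1, "rgb": [r, g, b], "intensity": i}
        base = int(update["channel"]) - 1
        r, g, b = (int(c) for c in update["rgb"])
        DMX_FRAME[base:base + 4] = bytes([r, g, b, int(update["intensity"])])
        send_dmx_frame(DMX_FRAME)     # lamps hold their last state between updates

if __name__ == "__main__":
    main()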
Hardware-wise, the connection between the synchronization server and the standard DMX-controllable lamps is established using an Enttec Open DMX USB interface module [Enttec] (Figure 18), which allows outgoing communication through a standard USB port. Drivers for this module are available for both Windows and Linux operating systems.
Figure 18: Enttec Open DMX USB interface module
3.4.3 GUI for light editing
The tablet application developed for set editing is used for the scene lights as well. However, for each light there are additional common radiometric attributes available, such as intensity and color. Furthermore, it includes attributes specific to the light source models, e.g. the beam angle in the case of spot lights (Figure 19).
Figure 19: Selected spot light being edited in the Unity-based tablet app
4 Practical use of the on-set tools
This section gives some typical use cases and an overview of the practical application of the developed on-set tools. It is not meant to describe the usage in depth or to give an evaluation of the tools. These aspects will be covered in the final production test and evaluation, WP2 D2.3.3 Final Virtual Productions and WP6 D6.2.2 Final Field Trial and User Evaluation Report.
4.1 Set and animation editing
The virtual shooting as part of the “Skywriter” production was used to evaluate LiveView and VPET. The shooting involved framing and directing a virtual camera to find the best picture. The shots show airplanes flying over a city and could not be produced in a real environment because of the close-ups and the wide, fast movements. Therefore, the decision was made to work purely virtually.
The stage consisted of the LiveView server as the central unit, Ncam to track the camera transformation, and VPET to explore and manipulate virtual assets. The central unit provided the real-time renderer and output the scene in full quality with high-resolution polygon models, textures, lights and shading. It received the camera tracking and airplane transform updates through a network connection and processed the data immediately. From the director's point of view, the production environment contained an operator with Ncam mounted on a camera rig, a projection displaying the render output, and the editing tools on the tablet. These parts were connected to the LiveView system. The workflow was split into three major parts: exporting data from the DCC tool to LiveView, running the LiveView tools on set, and converting the recorded data back to the DCC tool.
Figure 20: Director manipulating virtual objects using VPET
4.2 Light capture and editing
The light-editing component can be used in two ways in a virtual production: 1. driven by automatically captured light models using the light capture module, or 2. by editing the lights with the VPET tools.
Further, both approaches can be combined. The light capture allows estimating the real light sources on-set; these estimates can then be imported into the LiveView system and used to light the virtual scene with the real light parameters. Once the real lighting is captured and calibrated, the real lights can be further changed with the DMX light server and the light GUI. This light-editing part could also be used stand-alone, without the light capture, with a conventional light harmonization in post (case 2 as explained above).
For the light capture, a number of light probes are taken from different positions, i.e. the lit space is sampled by a number of light probes using the light-probing rig (see Figure 15). The required number of probes depends on the complexity of the scene on one side and the required accuracy on the other. For example, to capture just point lights it is theoretically possible to use only two light probes; in practice, since the underlying technical approach is a mathematical estimation, it is recommended to use at least 4-8 light probes from different places.
In order to obtain usable fall-off characteristics of a spot light, a sequence of light probes needs to be taken, including the inside (bright area) of the light cone and 2-5 samples in the fall-off areas.
A recommended practical approach is therefore to take light-probe samples on a regular grid in the studio space of interest at 50 cm to 1 m spacing. The spacing might be even finer if the active area is smaller.
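A small helper along the following lines could be used to lay out such a probing grid; the studio dimensions, probe height and spacing are placeholders.

# Small helper sketch: generate a regular grid of candidate light-probe positions
# for the studio space of interest at the recommended 0.5-1 m spacing. Dimensions
# and probe height are placeholders.
import numpy as np

def probe_grid(width, depth, spacing=0.75, height=1.6):
    xs = np.arange(0.0, width + 1e-9, spacing)
    ys = np.arange(0.0, depth + 1e-9, spacing)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel(), np.full(gx.size, height)], axis=1)

positions = probe_grid(3.0, 3.0)      # e.g. the 3 x 3 m scene used in Section 3.4.1.3
print(len(positions), "probe positions\n", positions[:4])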
The light probes are taken using an app running on the capture-rig tablet. Figure 21 shows screenshots of the different states of the app during a work session.
After opening the app, the user must first follow the instructions on the screen and hold the tablet steady for a while until the Tango API initializes its pose and its UI starts to disappear (Figure 21 a). The user can then press the camera on/off button (upper left corner) to find all connected USB cameras (Figure 21 b) and select the light-probe camera in the following dialogue (Figure 21 c). After granting the necessary permissions (Figure 21 d), the camera preview box, capture button (bottom left corner), bracketing button (bottom right corner), exposure seek bar (logarithmic scale, so-called stops) and its currently selected value appear on the screen (Figure 21 e). The user can change the exposure setting to the desired value and capture stills (Figure 21 f). The captured images, along with their tracked poses, are saved in Android's default picture folder (under LightProbes).
A bracketing option is implemented as well. While bracketing, the UI is disabled; however, the user can see the progress and the exposures used on the screen.
Figure 21: Screenshots (a)-(f) of the Light Capture Application
Conclusions
This report describes the prototype of the on-set tools developed in the Dreamspace project. The tools allow interfacing with virtual scene components in an intuitive way on set and are integrated into the Dreamspace live viewing pipeline.
The tools are modular, and set, lighting and animation editing can be used together or individually. Most aspects of the tools have already been tested in experimental productions. In the final phase of the project the tools are available for further production tests and will undergo a final evaluation.
5 References
[Ackermann13] Jens Ackermann, Simon Fuhrmann, and Michael Goesele. Geometric point light source calibration. In Proc. Vision, Modeling, and Visualization, pages 161-168, 2013.
[Agarwal03] Sameer Agarwal, Ravi Ramamoorthi, Serge Belongie, and Henrik Wann Jensen. Structured importance sampling of environment maps. In Proc. ACM SIGGRAPH, pages 605-612, 2003.
[D2.3.2] Deliverable 2.3.2: Virtual Production Experiment Results
[D5.1.2] Deliverable 5.1.2: A first prototype tool for virtual set and lighting
[D6.1.2] Deliverable 6.1.2: Prototype Dreamspace Live View System
[Debevec98] Paul Debevec. Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography. In Proc. ACM SIGGRAPH, page 32, 1998.
[Debevec08] Paul Debevec. A median cut algorithm for light probe sampling. In Proc. ACM SIGGRAPH, page 33, 2008.
[Enttec] Enttec Open DMX USB. URL enttec.com/opendmxusb.php.
[Frahm05] Jan-Michael Frahm, Kevin Koeser, Daniel Grest, and Reinhard Koch. Markerless augmented reality with light source estimation for direct illumination. In Proc. IEE CVMP, pages 211-220, 2005.
[Hara05] Kenji Hara, Ko Nishino, and Katsushi Ikeuchi. Light source position and reflectance estimation from a single view without the distant illumination assumption. IEEE Trans. PAMI, 27(4):493-505, 2005.
[Hartley97] Richard I Hartley and Peter Sturm. Triangulation. Computer Vision and Image Understanding, 68(2):146-157, 1997.
[Kanbara04] Masayuki Kanbara and Naokazu Yokoya. Real-time estimation of light source environment for photorealistic augmented reality. In Proc. ICPR, pages 911-914, 2004.
[Kannala06] Juho Kannala and Sami S Brandt. A generic camera model and calibration method for conventional, wide-angle, and fish-eye lenses. IEEE Trans. PAMI, 28(8):1335-1340, 2006.
[Klein07] Georg Klein and David W Murray. Parallel Tracking and Mapping for Small AR Workspaces. In Proc. ISMAR, pages 225-234, 2007.
[Kollig03] Thomas Kollig and Alexander Keller. Efficient illumination by high dynamic range images. In Proc. Eurographics Rendering, pages 45-51, 2003.
[Liu09] Yanli Liu, Xueying Qin, Songhua Xu, Eihachiro Nakamae, and Qunsheng Peng. Light source estimation of outdoor scenes for mixed reality. The Visual Computer, 25(5-7):637-646, 2009.
[Mei11] Christopher Mei, Gabe Sibley, Mark Cummins, Paul M Newman, and Ian D Reid. RSLAM: A System for Large-Scale Mapping in Constant-Time Using Stereo. Int. J. Comp. Vision, 94(2):198-214, 2011.
[Mur15] Raul Mur-Artal, J M M Montiel, and Juan D Tardós. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Trans. Robotics, 31(5):1147-1163, 2015.
[ncam] Ncam Technologies. URL ncam-tech.com.
[Newcombe11] Richard A Newcombe, Steven Lovegrove, and Andrew J Davison. DTAM: Dense tracking and mapping in real-time. In Proc. ICCV, pages 2320-2327, 2011.
[Nickabadi11] Ahmad Nickabadi, Mohammad M Ebadzadeh, and Reza Safabakhsh. A novel particle swarm optimization algorithm with adaptive inertia weight. Applied Soft Computing, 11:3658-3670, 2011.
[OpenCV] Open source computer vision (OpenCV). URL opencv.org.
[Park14] Jaesik Park, Sudipta N Sinha, Yasuyuki Matsushita, Yu-Wing Tai, and In So Kweon. Calibrating a non-isotropic near point light source using a plane. In Proc. IEEE CVPR, pages 2267-2274, 2014.
[pfscalibration] Photometric HDR and LDR camera calibration. URL pfstools.sourceforge.net/pfscalibration.html.
[Powell01] Mark W. Powell, Sudeep Sarkar, and Dmitry Goldgof. A simple strategy for calibrating the geometry of light sources. IEEE Trans. PAMI, 23(9):1022-1027, 2001.
[R3DR] Robust 3D reconstruction from visual data (R3DR). URL resources.mpi-inf.mpg.de/R3DR/dlm.
[Schnieders09] Dirk Schnieders, Kwan-Yee K Wong, and Zhenwen Dai. Polygonal light source estimation. In Proc. ACCV, pages 96-107, 2009.
[Takai09] Takeshi Takai, Atsuto Maki, Koichiro Niinuma, and Takashi Matsuyama. Difference sphere: an approach to near light source estimation. Computer Vision and Image Understanding, 113(9):966-978, 2009.
[Tango] Project Tango. URL google.com/atap/project-tango.
[Wang03] Yang Wang and Dimitris Samaras. Estimation of multiple directional light sources for synthesis of augmented reality images. Graphical Models, 65(4):185-205, 2003.
[Weber01] Martin Weber and Roberto Cipolla. A practical method for estimation of point light sources. In Proc. BMVC, pages 1-10, 2001.
[ZeusScout] Zoic Environmental Unification System (ZEUS). URL http://www.zoicstudios.com/zoic-environmental-unification-system-zeus/.
[Zhou08] Wei Zhou and Chandra Kambhamettu. A unified framework for scene illuminant estimation. Image and Vision Computing, 26(3):415-429, 2008.
[Zoic] Zoic Studios. URL http://www.zoicstudios.com.