
X3D Extension for (Mobile) AR Contents
International AR Standards Workshop
Seoul, Korea
Oct 11-12, 2010
Gerard J. Kim
(WG 6 AR Standards Study Group Coordinator)
Korea University
Approach
• Extensibility to existing frameworks
  – X3D (scene graph): because AR is implemented as VR!
  – KML, OpenGIS, …: we need location representation
• Generality/flexibility to accommodate
  – Different AR platforms (~platform independence): Mobile, Desktop, HMD, …
  – Sensors and devices: vision based, marker based, location based, …
• Focused on file format (scene graph based?) vs. contents representation
  – Machine consumption
  – Various display types and platforms
[Figure: AR display configurations, after R. Azuma (1997) — video see-through (camera, video combiner, display), optical see-through (optical combiner, display), and mobile and desktop variants (camera plus display).]
AR/MR Implementation
[Figure: A live camera captures the real world with pose transform T (from the tracking system), field of view fov, and focal distance f. The virtual camera is given the same parameters (T' = T, fov' = fov, f' = f), and the mixed reality world is synthesized as a virtual space with the live video placed at the same distance (f = f') from the virtual camera position but with infinite depth values. With depth sensing, background pixels are instead rendered at their sensed depths, so virtual objects can be correctly occluded. Various sensing drives both configurations.]
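As a rough illustration of this "AR as VR" setup using only standard X3D nodes (all file names and numeric values below are made up for the sketch), the captured video can be textured onto a quad placed at the focal distance in front of a virtual camera whose field of view matches the physical camera, with virtual objects placed between the two:

<Scene>
  <!-- Virtual camera with the same field of view as the physical camera (illustrative value) -->
  <Viewpoint position='0 0 0' fieldOfView='0.8'/>
  <!-- Video quad at distance f = 10 in front of the camera, sized to fill the view -->
  <Transform translation='0 0 -10'>
    <Shape>
      <Appearance>
        <MovieTexture loop='true' url='capture.mpg'/>
      </Appearance>
      <Rectangle2D size='11.3 8.5'/>
    </Shape>
  </Transform>
  <!-- Virtual object placed between the camera and the video quad -->
  <Transform translation='0 0 -5'>
    <Shape>
      <Appearance><Material/></Appearance>
      <Sphere radius='0.5'/>
    </Shape>
  </Transform>
</Scene>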
MR/AR Contents
• Context: condition or situation that triggers an augmentation and mixing of real and virtual objects
• Resource: raw data or information used for augmentation
• Content: one or more pairings of contexts and resources + behaviors (that use the resources)
Related work
• Jung et al. (InstantReality Suite)
  – Extension of Sensor nodes: physical contexts
  – Extension of Viewpoint nodes: specification of camera parameters
  – Layers: one layer serves as background video
  – Extension of X3DLightNode: lighting effects
SFImageSensor : X3DDirectSensorNode {
  SFImage  [in/out] value …
  SFBool   []       out   FALSE
  SFString []       label …
}

DEF frame SFImageSensor { label "Video Frames" }
ROUTE frame.value_changed TO surfaceTex.set_image
Major proposals
• Extend "View" node: resolution between the "live" camera and the virtual camera (LC, VC)
• Define "Live" camera node (G. Lee / ETRI) (LC)
  – Not necessarily for "AR" contents (e.g. video textures)
  – Parameters set by user
• More detailed parameter specification for "View" (VC)
  – Set by user
  – Routed from "Live" camera node, with possibility of behavioral manipulation
  – "Routed" from sensor: camera could be tracked separately
  – Default: same as the world (note that the view can be relative to anything)
Major proposal
• Extending the MovieTexture node (for AR background)
  – Also proposed by G. Lee / InstantReality
• Extending existing virtual "Sensor" nodes
  – Existing: e.g. Visibility, Proximity, Touch sensors, …
  – New: RangeSensor, UIClickSensor, …
• New X3DARNodes for target real object description
  – ImagePatch, 3DObject, GPSLocation, SingleValue, …
• Not included in this proposal:
  – Lighting and rendering issues
  – Depth sensing and occlusion effects
  – Extended points of interest (e.g. paths, hierarchical POIs)
  – Platform type specification (e.g. resolution differences)
[Figure: Overall structure. The X3D (virtual) world — View (virtual camera), MovieTexture*, ROUTE*, and other X3D nodes — is connected to the AR contents on the real/physical side — AR Node+ and Sensor, the Live Camera, and virtualized physical contexts.]
Abstraction of MR/AR contents as a collection of contexts and resources connected by "event in"s and "event out"s.
<Scene>
<Group>
<TouchSensor DEF='TOUCH' description='touch to activate'/>
<TimeSensor DEF='TIME' cycleInterval='3'/>
<PositionInterpolator DEF='INTERP_POS'
key='0 0.25 0.5 0.75 1' keyValue='0 0 0 1 0 0 0 0 0 -1 0 0 0 0 0'/>
<Transform DEF='BALL'>
<Shape>
<Appearance>
<Material/>
</Appearance>
<Sphere/>
</Shape>
</Transform>
</Group>
<ROUTE fromField='touchTime' fromNode='TOUCH'
toField='startTime' toNode='TIME'/>
<ROUTE fromField='fraction_changed' fromNode='TIME'
toField='set_fraction' toNode='INTERP_POS'/>
<ROUTE fromField='value_changed' fromNode='INTERP_POS'
toField='translation' toNode='BALL'/>
</Scene>
<Scene>
<Group>
<Marker DEF='HIRO' enabled='true' filename='C:\hiro.patt'/>
<VisibilitySensor DEF='Visibility' enabled='true'/>
<Transform DEF='BALL'>
<Shape>
<Appearance>
<Material/>
</Appearance>
<Sphere/>
</Shape>
</Transform>
</Group>
<ROUTE fromNode='Visibility' fromField='visible' toNode='BALL'
toField='visible'/>
</Scene>
Node hierarchy:
X3DNode
  X3DARNode
    ImagePatch
    3DObject
    GPSLocation
    SingleValue
    UIDevice
    …
  X3DChildNode
    X3DSensorNode
      X3DEnvironmentalSensorNode
        VisibilitySensor
        ProximitySensor
        RangeSensor
        …
      UIConfigNode
        UIClickSensor
        UIScrollSensor
        …
• Vision based feature recognition and tracking (e.g. fiducials, markers, 3D points)
• Non-vision based env. sensor events and values (e.g. RFID, GPS, distance)
• User interaction devices events and values (e.g. buttons, touch screen, jog dial)
• Context information (e.g. user age)
Real Object        | X3DARNode   | Main Attributes                  | Sensor used
Marker             | ImagePatch  | ID, Position, Orientation        | Visibility
3D point           | 3DObject    | ID, Type, Position, Orientation  | Visibility
GPS Location       | GPSLocation | ID, Coordinate                   | Range
RFID               | SingleValue | Value (Boolean)                  | Existence
Ultrasonic sensor  | SingleValue | Distance (Integer)               | Proximity
Button             | UIDevice    | Value (Boolean)                  | UIClickSensor
User Age           | SingleValue | Age (Integer)                    | Range
X3DARNode
Placeholders for physical objects within AR/MR world “implementation”
X3DARNode : X3DNode {
  SFNode   [in, out] metadata
  SFNode   [in, out] parent
  SFString [in, out] description
  SFBool   [in, out] enabled
}
X3DARNode is the base type for the Marker, Location and General Event, …
ImagePatch (Marker) & VisibilitySensor
ImagePatch : X3DARNode {
  SFNode     [in, out] metadata
  SFNode     [in, out] parent
  SFString   [in, out] description
  SFBool     [in, out] enabled
  SFString   [in, out] filename
  SFVec3f    [in, out] position
  SFRotation [in, out] orientation
}
VisibilitySensor : X3DEnvironmentalSensorNode {  <!-- Existing -->
  SFVec3f [in, out] center
  SFBool  [in, out] enabled
  SFNode  [in, out] metadata
  SFVec3f [in, out] size
  SFTime  [out]     enterTime
  SFTime  [out]     exitTime
  SFBool  [out]     isActive
}
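A minimal usage sketch combining the two nodes, modeled on the earlier marker example (the visible field on Transform and the pairing of the patch with the sensor follow that example; DEF and file names are illustrative):

<Scene>
  <ImagePatch DEF='HIRO' enabled='true' filename='hiro.patt'/>
  <VisibilitySensor DEF='PATCH_VIS' enabled='true'/>
  <Transform DEF='BALL'>
    <Shape>
      <Appearance><Material/></Appearance>
      <Sphere/>
    </Shape>
  </Transform>
  <!-- Show the ball only while the image patch is detected -->
  <ROUTE fromNode='PATCH_VIS' fromField='isActive' toNode='BALL' toField='visible'/>
  <!-- Align the ball with the tracked patch pose -->
  <ROUTE fromNode='HIRO' fromField='position' toNode='BALL' toField='translation'/>
  <ROUTE fromNode='HIRO' fromField='orientation' toNode='BALL' toField='rotation'/>
</Scene>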
Location & RangeSensor
GPSLocation : X3DSensorNode {
  SFNode   [in, out] metadata
  SFNode   [in, out] parent
  SFString [in, out] description
  SFBool   [in, out] enabled
  SFInt32  [in, out] device_description
  SFBool   [out]     status
  MFString [out]     values
}
RangeSensor : X3DEnvironmentalSensorNode {
  SFVec3f  [in, out] center
  SFBool   [in, out] enabled
  SFNode   [in, out] metadata
  SFVec3f  [in, out] size
  SFTime   [out]     enterTime
  SFTime   [out]     exitTime
  SFBool   [out]     isActive
  SFInt32  [in, out] sequence
  SFString [in, out] lBound
  SFString [in, out] uBound
  SFString [in, out] value
}
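A hedged sketch of a location-triggered augmentation with these nodes; how the GPSLocation output values feed the RangeSensor's value field is left open by the proposal, so the first ROUTE, the bounds, and all DEF names below are illustrative assumptions:

<Scene>
  <GPSLocation DEF='GPS' enabled='true'/>
  <!-- Active while the routed value lies between lBound and uBound (e.g. distance to a POI) -->
  <RangeSensor DEF='NEAR_POI' enabled='true' lBound='0' uBound='100'/>
  <Transform DEF='POI_LABEL'>
    <Shape>
      <Appearance><Material/></Appearance>
      <Box size='1 0.3 0.02'/>
    </Shape>
  </Transform>
  <!-- Assumed mapping of the GPS-derived value into the range test -->
  <ROUTE fromNode='GPS' fromField='values' toNode='NEAR_POI' toField='value'/>
  <!-- Show the label only while the user is within range -->
  <ROUTE fromNode='NEAR_POI' fromField='isActive' toNode='POI_LABEL' toField='visible'/>
</Scene>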
Live camera
• LiveCamera = MR/AR capture camera
• Declared within the Scene node
• The image field is the out value
• Camera internal parameters → projmat field
• Camera external parameters → set to World, but can be tracked
LiveCamera {
  SFString   [in, out] label        "default"
  SFString   [out]     parent
  SFImage    [out]     image
  SFMatrix4f [out]     projmat      "1 0 0 …"
  SFBool     [out]     on           FALSE
  SFBool     [out]     tracking     FALSE
  SFVec3f    [out]     position
  SFRotation [out]     orientation
}
Routing from LiveCam
• From: LiveCamera node "image" field
• To: Background (LiveURL field) or Shape (MovieTexture field)

Live video → Background
<Scene>
<Background groundAngle='1.309 1.571'
groundColor='0.1 0.1 0 0.4 0.25 0.2 0.6 0.6 0.6'
skyAngle='1.309 1.571'
skyColor='0 0.2 0.7 0 0.5 1 1 1 1'
backUrl='mountns.png'
frontUrl='mountns.png'
leftUrl='mountns.png'
rightUrl='mountns.png'/>
</Scene>
<Scene>
<LiveCamera DEF='USBCam1' source='dev#'/>
<Background liveSource='USBCam1'/>
</Scene>
<Scene>
<Background videoUrl='bgvideo.mpg'/>
</Scene>
MovieTexture Node
• Add MovieTexture to the X3DTextureNode hierarchy
• Used for TextureBackground
• Fix TextureBackground relative to the camera
• Allow connection to the live camera (not just through a streaming server)
MovieTexture Node
<Shape>
<Appearance>
<MovieTexture loop='true' url=' "wrlpool.mpg"
"http://www.web3d.org/x3d/content/examples/Vrml2.0Sourcebook/wrlpool.mpg" '/>
</Appearance>
<IndexedFaceSet ccw='false' coordIndex='0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16'>
<Coordinate point='2.00 0.6 0.00 1.85 0.6 0.67 1.41 0.6 1.41 0.67 0.6 1.85
0.00 0.6 2.00 -0.67 0.6 1.85 -1.41 0.6 1.41 -1.85 0.6 0.67 -2.00 0.6 0.00 -1.85 0.6
-0.67 -1.41 0.6 -1.41 -0.67 0.6 -1.85 0.00 0.6 -2.00 0.67 0.6 -1.85 1.41 0.6 -1.41
1.85 0.6 -0.67 2.00 0.6 0.00'/>
</IndexedFaceSet>
</Shape>
Live Camera → Movie Texture
<Scene>
<Shape>
<Appearance>
<MovieTexture loop='true' url='wrlpool.mpg'/>
</Appearance>
<IndexedFaceSet ccw='false' coordIndex='0 1 2 ... 15 16'>
<Coordinate point='2.00 0.6 0.00 ... 2.00 0.6 0.00'/>
</IndexedFaceSet>
</Shape>
</Scene>
<Scene>
<LiveCamera DEF='USBCam1' source='dev#'/>
<Shape>
<Appearance>
<MovieTexture liveSource='USBCam1' keyColor='0 0 1'/>
</Appearance>
<IndexedFaceSet ccw='false' coordIndex='0 1 2 ... 15 16'>
<Coordinate point='2.00 0.6 0.00 ... 2.00 0.6 0.00'/>
</IndexedFaceSet>
</Shape>
</Scene>
Live Camera and Virtual Camera
Calibrating the virtual camera according to the parameters of the live capture camera
• Internal parameter = projection matrix
• External parameter = camera pose
• Manual: direct specification
• Routing: from the LiveCamera node or from a sensor
Method 1
Viewpoint : X3DViewpointNode {
  SFMatrix4f [in]      projmat
  SFVec3f    [in, out] position
  SFRotation [in, out] orientation
  SFNode     [in, out] liveCamera
}
(Add distortion parameters here.)
<Scene>
<LiveCamera DEF='USBCam1' source='dev#'/>
<Viewpoint liveCamera='USBCam1'/>
<Shape> … </Shape>
</Scene>
Method 2
<Scene>
<LiveCamera DEF='USBCam1' source='dev#'/>
<Viewpoint DEF='MRView'/>
<Shape> … </Shape>
<ROUTE fromNode='USBCam1' fromField='projmat'
toNode='MRView' toField='projmat'/>
<!-- 'Tracker' refers to an external tracking sensor node (not shown) -->
<ROUTE fromNode='Tracker' fromField='position'
toNode='MRView' toField='position'/>
<ROUTE fromNode='Tracker' fromField='orientation'
toNode='MRView' toField='orientation'/>
</Scene>
Other Activities
• Draft document
• Teleconferences with Web3D
• Implementation: k-MART
• Domestic workshops
  – April: POSTECH, Pohang, Korea
  – June: KIST, Seoul, Korea
Future
• More extensions, examples, and implementations
• International consensus