From 9555742ab809af1f8f91f346368edc9eb463f711 Mon Sep 17 00:00:00 2001
From: Curtis Rueden

Behavior nodes provide the means for animating objects, processing keyboard and mouse inputs, reacting to movement, and enabling and processing pick events. Behavior nodes contain Java code and state variables. A Behavior node's Java code can interact with Java objects, change node values within a Java 3D scene graph, and change the behavior's internal state; in general, it can perform any computation it wishes.
Simple behaviors can add surprisingly interesting effects to a scene graph. For example, one can animate a rigid object by using a Behavior node to repetitively modify the TransformGroup node that points to the object one wishes to animate. Alternatively, a Behavior node can track the current position of a mouse and modify portions of the scene graph in response.

A Behavior leaf node object contains a scheduling region and two methods: an initialize method, called once when the behavior becomes live, and a processStimulus method, called whenever an armed wakeup condition is satisfied. The scheduling region defines a spatial volume that serves to enable the scheduling of Behavior nodes. A Behavior node is active (can receive stimuli) whenever an active ViewPlatform's activation volume intersects a Behavior object's scheduling region. Only active behaviors can receive stimuli.
The scheduling interval defines a partial order of execution for behaviors that wake up in response to the same wakeup condition (that is, those behaviors that are processed at the same "time"). Given a set of behaviors whose wakeup conditions are satisfied at the same time, the behavior scheduler will execute all behaviors in a lower scheduling interval before executing any behavior in a higher scheduling interval. Within a scheduling interval, behaviors can be executed in any order, or in parallel. Note that this partial ordering is only guaranteed for those behaviors that wake up at the same time in response to the same wakeup condition, for example, the set of behaviors that wake up every frame in response to a WakeupOnElapsedFrames(0) wakeup condition.
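The partial ordering described above can be modeled in plain Java: group the awakened behaviors by scheduling interval and run lower intervals first. This is only a sketch of the contract (the real scheduler lives inside the Java 3D implementation, and `SchedulableBehavior` is a hypothetical stand-in):

```java
import java.util.*;

// Hypothetical stand-in for a behavior awakened by the same wakeup condition.
class SchedulableBehavior {
    final String name;
    final int schedulingInterval; // lower intervals execute first
    SchedulableBehavior(String name, int interval) {
        this.name = name;
        this.schedulingInterval = interval;
    }
}

class BehaviorScheduler {
    /** Returns one valid execution order: every behavior in a lower scheduling
     *  interval runs before any behavior in a higher one. Order within an
     *  interval is unspecified, so a stable sort is merely one legal choice. */
    static List<String> executionOrder(List<SchedulableBehavior> awakened) {
        List<SchedulableBehavior> sorted = new ArrayList<>(awakened);
        sorted.sort(Comparator.comparingInt(b -> b.schedulingInterval));
        List<String> order = new ArrayList<>();
        for (SchedulableBehavior b : sorted) order.add(b.name);
        return order;
    }
}
```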
A typical behavior will modify one or more nodes or node components in the scene graph. These modifications can happen in parallel with rendering. In general, applications cannot count on behavior execution being synchronized with rendering, although there are two exceptions to this general rule. Note that modifications to geometry by-reference or texture by-reference are not guaranteed to show up in the same frame as other scene graph changes.
When the Java 3D behavior scheduler invokes a Behavior object's processStimulus method, that method may perform any computation it wishes.

The application must provide the Behavior object with references to those scene graph elements that the Behavior object will manipulate. The application provides those references as arguments to the behavior's constructor when it creates the Behavior object. Alternatively, the Behavior object itself can obtain access to the relevant scene graph elements either when Java 3D invokes its initialize method or each time Java 3D invokes its processStimulus method.

Behavior methods have a very rigid structure. Java 3D assumes that they always run to completion (if needed, they can spawn threads). Each method's basic structure consists of the following: code to decode and extract references from the WakeupCondition enumeration, code to perform the manipulations associated with the stimulus, code to establish the behavior's next WakeupCondition, and a path to exit so that execution returns to the behavior scheduler.
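That structure can be sketched as follows, using minimal stand-ins for the Java 3D classes (the real Behavior, WakeupOnElapsedFrames, and WakeupCriterion classes live in javax.media.j3d; this is an illustrative model, not the real API):

```java
import java.util.Enumeration;

// Minimal stand-in for javax.media.j3d.WakeupOnElapsedFrames.
class WakeupOnElapsedFrames {
    final int frameCount;
    WakeupOnElapsedFrames(int frameCount) { this.frameCount = frameCount; }
}

// Minimal stand-in for javax.media.j3d.Behavior.
abstract class Behavior {
    private Object armed;
    /** The scheduler re-arms this behavior with the condition passed here. */
    protected void wakeupOn(Object condition) { this.armed = condition; }
    Object armedCondition() { return armed; }
    public abstract void initialize();
    public abstract void processStimulus(Enumeration criteria);
}

// A behavior that runs once per rendered frame, e.g. to spin a transform.
class SpinBehavior extends Behavior {
    int frames = 0;
    private final WakeupOnElapsedFrames everyFrame = new WakeupOnElapsedFrames(0);

    public void initialize() {
        // Establish the initial wakeup condition.
        wakeupOn(everyFrame);
    }

    public void processStimulus(Enumeration criteria) {
        // 1. Decode the criteria that fired (omitted in this sketch).
        // 2. Perform the manipulation associated with the stimulus.
        frames++;
        // 3. Re-arm the next wakeup condition; a behavior that forgets
        //    this step never wakes up again.
        wakeupOn(everyFrame);
        // 4. Return, handing control back to the behavior scheduler.
    }
}
```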
WakeupCondition is an abstract class specialized into fourteen different WakeupCriterion classes and four combining classes that contain multiple WakeupCriterion objects.

A Behavior node provides the Java 3D behavior scheduler with a WakeupCondition object. When that object's WakeupCondition has been satisfied, the behavior scheduler hands that same WakeupCondition back to the Behavior via an enumeration.
Java 3D provides a rich set of wakeup criteria that Behavior objects can use in specifying a complex WakeupCondition. These wakeup criteria can cause Java 3D's behavior scheduler to invoke a behavior's processStimulus method whenever the condition is satisfied.

A Behavior object constructs a WakeupCriterion by constructing the appropriate criterion object. The Behavior object must provide the appropriate arguments (usually a reference to some scene graph object and possibly a region of interest). Thus, to specify a WakeupOnViewPlatformEntry, a behavior would specify the region that will cause the behavior to execute if an active ViewPlatform enters it.

A Behavior object can combine multiple WakeupCriterion objects into a more powerful, composite WakeupCondition. Java 3D behaviors construct a composite WakeupCondition in one of the following ways: as a WakeupOr of criteria, a WakeupAnd of criteria, a WakeupOrOfAnds, or a WakeupAndOfOrs.
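The boolean semantics of these combiners can be modeled in a few lines of plain Java (a sketch only; the real WakeupOr and WakeupAnd classes in javax.media.j3d combine WakeupCriterion objects the same way, and the OrOfAnds/AndOfOrs variants correspond to nesting one combiner inside the other):

```java
import java.util.List;

// A criterion either has or has not been triggered since the behavior armed it.
interface Wakeup { boolean satisfied(); }

class Criterion implements Wakeup {
    boolean triggered;
    public boolean satisfied() { return triggered; }
}

// Models WakeupOr: fires when ANY of its criteria has triggered.
class Or implements Wakeup {
    final List<Wakeup> parts;
    Or(List<Wakeup> parts) { this.parts = parts; }
    public boolean satisfied() { return parts.stream().anyMatch(Wakeup::satisfied); }
}

// Models WakeupAnd: fires only when ALL of its criteria have triggered.
class And implements Wakeup {
    final List<Wakeup> parts;
    And(List<Wakeup> parts) { this.parts = parts; }
    public boolean satisfied() { return parts.stream().allMatch(Wakeup::satisfied); }
}
```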
Behavior objects can condition themselves to awaken only when signaled by another Behavior node. The WakeupOnBehaviorPost WakeupCriterion takes as arguments a reference to a Behavior node and an integer. These two arguments allow a behavior to limit its wakeup criterion to a specific post by a specific behavior.

The WakeupOnBehaviorPost WakeupCriterion permits behaviors to chain their computations, allowing parenthetical computations: one behavior opens a door and the second closes the same door, or one behavior highlights an object and the second unhighlights the same object.
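The dispatch rule, wake only the behaviors registered for that specific post by that specific poster, can be modeled in plain Java (a simplified sketch; the real mechanism is the Behavior.postId method plus the WakeupOnBehaviorPost criterion):

```java
import java.util.*;

// A simplified model of WakeupOnBehaviorPost dispatch.
class PostBus {
    // key: poster identity + post id -> behaviors waiting on exactly that post
    private final Map<String, List<Runnable>> waiters = new HashMap<>();

    /** Models wakeupOn(new WakeupOnBehaviorPost(poster, postId)). */
    void wakeupOnPost(String poster, int postId, Runnable behavior) {
        waiters.computeIfAbsent(poster + "#" + postId, k -> new ArrayList<>())
               .add(behavior);
    }

    /** Models `poster` calling postId(postId): wakes only matching waiters. */
    void post(String poster, int postId) {
        for (Runnable b : waiters.getOrDefault(poster + "#" + postId, List.of()))
            b.run();
    }
}
```

Used this way, a door-closing behavior registers for the opener's post and runs only after the opener signals it.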
As a virtual universe grows large, Java 3D must carefully husband its resources to ensure adequate performance. In a 10,000-object virtual universe with 400 or so Behavior nodes, a naive implementation of Java 3D could easily end up consuming the majority of its compute cycles in executing the behaviors associated with the 400 Behavior objects before it draws a frame. In such a situation, the frame rate could easily drop to unacceptable levels.

Behavior objects are usually associated with geometric objects in the virtual universe. In our example of 400 Behavior objects scattered throughout a 10,000-object virtual universe, only a few of these associated geometric objects would be visible at a given time. A sizable fraction of the Behavior nodes (those associated with nonvisible objects) need not be executed. Only those relatively few Behavior objects that are associated with visible objects must be executed.

Java 3D mitigates the problem of a large number of Behavior nodes in a high-population virtual universe through execution culling: choosing to invoke only those behaviors that have high relevance.

Java 3D requires each behavior to have a scheduling region and to post a wakeup condition. Together, a behavior's scheduling region and wakeup condition provide Java 3D's behavior scheduler with sufficient domain knowledge to selectively prune behavior invocations and invoke only those behaviors that absolutely need to be executed.
Java 3D finds all scheduling regions associated with Behavior nodes and constructs a scheduling/volume tree. It also creates an AND/OR tree containing all the Behavior node wakeup criteria. These two data structures provide the domain knowledge Java 3D needs to prune unneeded behavior execution (to perform "execution triage").

Java 3D must track a behavior's wakeup conditions only if an active ViewPlatform object's activation volume intersects with that Behavior object's scheduling region. If the ViewPlatform object's activation volume does not intersect with a behavior's scheduling region, Java 3D can safely ignore that behavior's wakeup criteria.
In essence, the Java 3D scheduler performs the following checks: it first tests whether an active ViewPlatform's activation volume intersects a behavior's scheduling region, and only for behaviors that pass that test does it track and evaluate wakeup conditions.
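Those checks amount to a bounds-intersection test followed by a wakeup-condition test, sketched here in plain Java with bounding spheres (an illustrative model; the real bounds class is javax.media.j3d.BoundingSphere and the scheduler itself is internal to the implementation):

```java
// A minimal bounding sphere, standing in for javax.media.j3d.BoundingSphere.
class Sphere {
    final double x, y, z, radius;
    Sphere(double x, double y, double z, double radius) {
        this.x = x; this.y = y; this.z = z; this.radius = radius;
    }
    boolean intersects(Sphere other) {
        double dx = x - other.x, dy = y - other.y, dz = z - other.z;
        double r = radius + other.radius;
        return dx * dx + dy * dy + dz * dz <= r * r; // spheres touch or overlap
    }
}

class SchedulerCheck {
    /** Execution triage: a behavior is considered only if the activation
     *  volume intersects its scheduling region; only then is its wakeup
     *  condition even tested. */
    static boolean shouldExecute(Sphere activationVolume,
                                 Sphere schedulingRegion,
                                 boolean wakeupConditionSatisfied) {
        if (!activationVolume.intersects(schedulingRegion)) return false;
        return wakeupConditionSatisfied;
    }
}
```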
Java 3D's behavior scheduler executes those Behavior objects that have been scheduled by calling the behavior's processStimulus method.

This section describes Java 3D's predefined Interpolator behaviors. They are called interpolators because they smoothly interpolate between the two extreme values that an interpolator can produce. Interpolators perform simple behavioral acts, yet they provide broad functionality.
The Java 3D API provides interpolators for a number of functions: manipulating transforms within a TransformGroup, modifying the values of a Switch node, and modifying Material attributes such as color and transparency.

These predefined Interpolator behaviors share the same mechanism for specifying and later converting a temporal value into an alpha value. Interpolators consist of two portions: a generic portion that all interpolators share and a domain-specific portion.

The generic portion maps time in milliseconds onto a value in the range [0.0, 1.0] inclusive. The domain-specific portion maps an alpha value in the range [0.0, 1.0] onto a value appropriate to the predefined behavior's range of outputs. An alpha value of 0.0 generates an interpolator's minimum value, an alpha value of 1.0 generates an interpolator's maximum value, and an alpha value somewhere in between generates a value proportionally in between the minimum and maximum values.
Several parameters control the mapping of time onto an alpha value (see the javadoc for the Alpha object for a description of the API). That mapping is deterministic as long as its parameters do not change. Thus, two different interpolators with the same parameters will generate the same alpha value given the same time value. This means that two interpolators that do not communicate can still precisely coordinate their activities, even if they reside in different threads or even on different processors, as long as those processors have consistent clocks.
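A simplified model of that time-to-alpha mapping makes the determinism concrete: the output depends only on the parameters and the time value. This sketch omits the ease-in/ease-out ramps and mode flags of the real javax.media.j3d.Alpha class, and it assumes the increasing portion is enabled so alpha is 0 during the phase delay:

```java
// A simplified, deterministic model of the Alpha time-to-alpha mapping.
class SimpleAlpha {
    final long triggerTime, phaseDelay, inc, atOne, dec, atZero;
    final int loopCount; // -1 means loop forever

    SimpleAlpha(long triggerTime, long phaseDelay, long inc, long atOne,
                long dec, long atZero, int loopCount) {
        this.triggerTime = triggerTime; this.phaseDelay = phaseDelay;
        this.inc = inc; this.atOne = atOne; this.dec = dec;
        this.atZero = atZero; this.loopCount = loopCount;
    }

    /** Alpha at time t (milliseconds); depends only on t and the parameters. */
    float value(long t) {
        long start = triggerTime + phaseDelay;
        if (t < start) return 0f;                 // before the waveform begins
        long cycle = inc + atOne + dec + atZero;
        long elapsed = t - start;
        if (loopCount >= 0 && elapsed >= (long) loopCount * cycle)
            return dec > 0 ? 0f : 1f;             // waveform has run its course
        long p = elapsed % cycle;
        if (p < inc) return (float) p / inc;      // alpha increasing
        p -= inc;
        if (p < atOne) return 1f;                 // alpha at 1
        p -= atOne;
        if (p < dec) return 1f - (float) p / dec; // alpha decreasing
        return 0f;                                // alpha at 0
    }
}
```

A domain-specific portion then maps alpha onto its output range, for example value = minimum + alpha * (maximum - minimum).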
Figure 1 shows the components of an interpolator's time-to-alpha mapping. Time is represented on the horizontal axis. Alpha is represented on the vertical axis. As we move from left to right, we see the alpha value start at 0.0, rise to 1.0, and then decline back to 0.0 on the right-hand side.

On the left-hand side, the trigger time defines when this interpolator's waveform begins, in milliseconds. The region directly to the right of the trigger time, labeled Phase Delay, defines a time period where the waveform does not change. During phase delays alpha is either 0 or 1, depending on which region it precedes.

Phase delays provide an important means for offsetting multiple interpolators from one another, especially where the interpolators have all the same parameters. The next four regions, labeled α increasing, α at 1, α decreasing, and α at 0, all specify durations for the corresponding values of alpha.

Interpolators have a loop count that determines how many times to repeat the sequence of alpha increasing, alpha at 1, alpha decreasing, and alpha at 0; they also have associated mode flags that enable either the increasing or decreasing portions, or both, of the waveform.
Developers can use the loop count in conjunction with the mode flags to generate various kinds of actions. Specifying a loop count of 1 and enabling the mode flag for only the alpha-increasing and alpha-at-1 portion of the waveform, we would get the waveform shown in Figure 2.

In Figure 2, the alpha value is 0 before the combination of trigger time plus the phase delay duration. The alpha value changes from 0 to 1 over a specified interval of time, and thereafter the alpha value remains 1 (subject to the reprogramming of the interpolator's parameters). A possible use of a single alpha-increasing value might be to combine it with a rotation interpolator to program a door opening.

Similarly, by specifying a loop count of 1 and a mode flag that enables only the alpha-decreasing and alpha-at-0 portion of the waveform, we would get the waveform shown in Figure 3.

In Figure 3, the alpha value is 1 before the combination of trigger time plus the phase delay duration. The alpha value changes from 1 to 0 over a specified interval; thereafter the alpha value remains 0 (subject to the reprogramming of the interpolator's parameters). A possible use of a single alpha-decreasing value might be to combine it with a rotation interpolator to program a door closing.
We can combine both of the above waveforms by specifying a loop count of 1 and setting the mode flag to enable both the alpha-increasing and alpha-at-1 portion of the waveform as well as the alpha-decreasing and alpha-at-0 portion of the waveform. This combination would result in the waveform shown in Figure 4.

In Figure 4, the alpha value is 0 before the combination of trigger time plus the phase delay duration. The alpha value changes from 0 to 1 over a specified period of time, remains at 1 for another specified period of time, then changes from 1 to 0 over a third specified period of time; thereafter the alpha value remains 0 (subject to the reprogramming of the interpolator's parameters). A possible use of an alpha-increasing value followed by an alpha-decreasing value might be to combine it with a rotation interpolator to program a door swinging open and then closing.
By increasing the loop count, we can get repetitive behavior, such as a door swinging open and closed some number of times. At the extreme, we can specify a loop count of -1 (representing infinity).

We can construct looped versions of the waveforms shown in Figure 2, Figure 3, and Figure 4. Figure 5 shows a looping interpolator with mode flags set to enable only the alpha-increasing and alpha-at-1 portion of the waveform.

In Figure 5, alpha goes from 0 to 1 over a fixed duration of time, stays at 1 for another fixed duration of time, and then repeats.

Similarly, Figure 6 shows a looping interpolator with mode flags set to enable only the alpha-decreasing and alpha-at-0 portion of the waveform.

Finally, Figure 7 shows a looping interpolator with both the increasing and decreasing portions of the waveform enabled.

In all three cases shown by Figure 5, Figure 6, and Figure 7, we can compute the exact value of alpha at any point in time.
Java 3D's preprogrammed behaviors permit other behaviors to change their parameters. When such a change occurs, the alpha value changes to match the state of the newly parameterized interpolator.

Commonly, developers want alpha to change slowly at first and then to speed up until the change in alpha reaches some appropriate rate. This is analogous to accelerating your car up to the speed limit: it does not start off immediately at the speed limit. Developers specify this "ease-in, ease-out" behavior through two additional parameters, the increasingAlphaRampDuration and the decreasingAlphaRampDuration. Each of these parameters specifies a period within the increasing or decreasing alpha duration region during which the "change in alpha" is accelerated (until it reaches its maximum per-unit-of-time step size) and then symmetrically decelerated. Figure 8 shows three general examples of how the ramp duration parameters modify the alpha waveform.
The Java 3D API specification serves to define objects, methods, and their actions precisely. Describing how to use an API belongs in a tutorial or programmer's reference manual, and is well beyond the scope of this specification. However, a short introduction to the main concepts in Java 3D will provide the context for understanding the detailed, but isolated, specification found in the class and method descriptions. We introduce some of the key Java 3D concepts and illustrate them with some simple program fragments.
A scene graph is a "tree" structure that contains data arranged in a hierarchical manner. The scene graph consists of parent nodes, child nodes, and data objects. The parent nodes, called Group nodes, organize and, in some cases, control how Java 3D interprets their descendants. Group nodes serve as the glue that holds a scene graph together. Child nodes can be either Group nodes or Leaf nodes. Leaf nodes have no children. They encode the core semantic elements of a scene graph: for example, what to draw (geometry), what to play (audio), how to illuminate objects (lights), or what code to execute (behaviors). Leaf nodes refer to data objects, called NodeComponent objects. NodeComponent objects are not scene graph nodes, but they contain the data that Leaf nodes require, such as the geometry to draw or the sound sample to play.

A Java 3D application builds and manipulates a scene graph by constructing Java 3D objects and then later modifying those objects by using their methods. A Java 3D program first constructs a scene graph, then, once built, hands that scene graph to Java 3D for processing.
The structure of a scene graph determines the relationships among the objects in the graph and determines which objects a programmer can manipulate as a single entity. Group nodes provide a single point for handling or manipulating all the nodes beneath them. A programmer can tune a scene graph appropriately by thinking about what manipulations an application will need to perform. He or she can make a particular manipulation easy or difficult by grouping or regrouping nodes in various ways.
The following code constructs a simple scene graph consisting of a group node and two leaf nodes.

Listing 1 – Code for Constructing a Simple Scene Graph

It first constructs one leaf node, the first of two Shape3D nodes, using a constructor that takes both a Geometry and an Appearance NodeComponent object. It then constructs the second Shape3D node, with only a Geometry object. Next, since the second Shape3D node was created without an Appearance object, it supplies the missing Appearance object using the Shape3D node's setAppearance method.

Java 3D places restrictions on how a program can insert a scene graph into a universe.
A Java 3D environment consists of two superstructure objects, VirtualUniverse and Locale, and one or more graphs, rooted by a special BranchGroup node. Figure 2 shows these objects in context with other scene graph objects.

The VirtualUniverse object defines a universe. A universe allows a Java 3D program to create a separate and distinct arena for defining objects and their relationships to one another. Typically, Java 3D programs have only one VirtualUniverse object. Programs that have more than one VirtualUniverse may share NodeComponent objects but not scene graph node objects.
The Locale object specifies a fixed position within the universe. That fixed position defines an origin for all scene graph nodes beneath it. The Locale object allows a programmer to specify that origin very precisely and with very high dynamic range. A Locale can accurately specify a location anywhere in the known physical universe and at the precision of the Planck length. Typically, Java 3D programs have only one Locale object with a default origin of (0, 0, 0). Programs that have more than one Locale object will set the location of the individual Locale objects so that they provide an appropriate local origin for the nodes beneath them. For example, to model the Mars landing, a programmer might create one Locale object with an origin at Cape Canaveral and another with an origin located at the landing site on Mars.
The BranchGroup node serves as the root of a branch graph. Collectively, the BranchGroup node and all of its children form the branch graph. The two kinds of branch graphs are called content branches and view branches. A content branch contains only content-related leaf nodes, while a view branch contains a ViewPlatform leaf node and may contain other content-related leaf nodes. Typically, a universe contains more than one branch graph: one view branch, and any number of content branches.

Besides serving as the root of a branch graph, the BranchGroup node has two special properties: it alone may be inserted into a Locale object, and it may be compiled. Java 3D treats uncompiled and compiled branch graphs identically, though compiled branch graphs will typically render more efficiently.
We could not insert the scene graph created by our simple example (Listing 1) into a Locale because it does not have a BranchGroup node for its root. Listing 2 shows a modified version of our first code example that creates a simple content branch graph and the minimum of superstructure objects. Of special note, Locales do not have children, and they are not part of the scene graph. The method for inserting a branch graph is addBranchGraph.

Listing 2 – Code for Constructing a Scene Graph and Some Superstructure Objects

Listing 3 – Code for Constructing a Scene Graph Using the Universe Package
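The kind of code such listings contain can be sketched as follows (an illustrative sketch, not the specification's own listing; it assumes the javax.media.j3d library is on the classpath and that Geometry and Appearance objects already exist):

```java
import javax.media.j3d.*;

public class SimpleBranch {
    /** Builds the branch graph of Listing 1 and attaches it to a universe. */
    public static void attach(Geometry geom1, Appearance app1,
                              Geometry geom2, Appearance app2) {
        BranchGroup branch = new BranchGroup();
        branch.addChild(new Shape3D(geom1, app1)); // first leaf node
        Shape3D shape2 = new Shape3D(geom2);       // second leaf, geometry only
        shape2.setAppearance(app2);                // supply the missing Appearance
        branch.addChild(shape2);

        VirtualUniverse universe = new VirtualUniverse();
        Locale locale = new Locale(universe); // implicitly attached to the universe
        locale.addBranchGraph(branch);        // the branch graph becomes live here
    }
}
```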
The order that a particular Java 3D implementation renders objects onto the display is carefully not defined. One implementation might render the first Shape3D object and then the second. Another might first render the second Shape3D node before it renders the first one. Yet another implementation may render both Shape3D nodes in parallel.

Java 3D provides different techniques for controlling the effect of various features. Some techniques act fairly locally, such as getting the color of a vertex. Other techniques have broader influence, such as changing the color or appearance of an entire object. Still other techniques apply to a broad number of objects. In the first two cases, the programmer can modify a particular object or an object associated with the affected object. In the latter case, Java 3D provides a means for specifying more than one object spatially.
Bounds objects specify a volume in which particular operations apply. Environmental effects such as lighting, fog, alternate appearance, and model clipping planes use bounds objects to specify their region of influence. Any object that falls within the space defined by the bounds object has the particular environmental effect applied. The proper use of bounds objects can ensure that these environmental effects are applied only to those objects in a particular volume, such as a light applying only to the objects within a single room.

Bounds objects are also used to specify a region of action. Behaviors and sounds execute or play only if they are close enough to the viewer. The use of behavior and sound bounds objects allows Java 3D to cull away those behaviors and sounds that are too far away to affect the viewer (or listener). By using bounds properly, a programmer can ensure that only the relevant behaviors and sounds execute or play.

Finally, bounds objects are used to specify a region of application for per-view operations such as background, clip, and soundscape selection. For example, the background node whose region of application is closest to the viewer is selected for a given view.
Listing 4 – Capabilities Example

By setting the capability to write the transform, Java 3D will allow the following code to execute:
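The fragment in question would be a transform write on a live object, something like tg.setTransform(t). The gatekeeping that the capability bit performs can be modeled in plain Java (an illustrative stand-in; the real API is setCapability on SceneGraphObject with bits such as TransformGroup.ALLOW_TRANSFORM_WRITE, and the real exception is CapabilityNotSetException):

```java
import java.util.BitSet;

// A simplified model of Java 3D's capability mechanism.
class Node3D {
    static final int ALLOW_TRANSFORM_WRITE = 0;
    private final BitSet capabilities = new BitSet();
    private boolean live;
    private String transform = "identity";

    void setCapability(int bit) {
        // Like Java 3D, capabilities may only be set before the node is live.
        if (live) throw new IllegalStateException("node is already live");
        capabilities.set(bit);
    }
    void setLive() { live = true; }

    void setTransform(String t) {
        // Writing a live node's transform requires the capability bit.
        if (live && !capabilities.get(ALLOW_TRANSFORM_WRITE))
            throw new IllegalStateException("capability not set: ALLOW_TRANSFORM_WRITE");
        transform = t;
    }
    String getTransform() { return transform; }
}
```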
It is important to ensure that all needed capabilities are set and that unnecessary capabilities are not set. The process of compiling a branch graph examines the capability bits and uses that information to reduce the amount of computation needed to run a program.
Here are code fragments from a simple program.

Java 3D is fundamentally a scene graph-based API. Most of the constructs in the API are biased toward retained mode and compiled-retained mode rendering. However, there are some applications that want both the control and the flexibility that immediate-mode rendering offers.

Immediate-mode applications can either use or ignore Java 3D's scene graph structure. By using immediate mode, end-user applications have more freedom, but this freedom comes at the expense of performance. In immediate mode, Java 3D has no high-level information concerning graphical objects or their composition. Because it has minimal global knowledge, Java 3D can perform only localized optimizations on behalf of the application programmer.
Java 3D provides utility functions that create much of this structure on behalf of a pure immediate-mode application, making it less noticeable from the application's perspective, but the structure must exist.
All rendering is done completely under user control. It is necessary for the user to clear the 3D canvas, render all geometry, and swap the buffers. Additionally, rendering the right and left eye for stereo viewing becomes the sole responsibility of the application.

In pure immediate mode, the user must stop the Java 3D renderer via the Canvas3D object's stopRenderer method.

The basic Java 3D stereo rendering loop, executed for each Canvas3D, is as follows:
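A sketch of that loop using the pure immediate-mode Canvas3D API (getGraphicsContext3D, clear, and swap are real javax.media.j3d methods; the per-eye rendering steps are left as comments because they are application code):

```java
import javax.media.j3d.*;

class StereoLoop {
    /** One iteration of the basic stereo rendering loop, for a canvas whose
     *  Java 3D renderer has already been stopped via canvas.stopRenderer(). */
    static void renderFrame(Canvas3D canvas) {
        GraphicsContext3D gc = canvas.getGraphicsContext3D();
        gc.clear();                  // clear the 3D canvas
        // set the view for the left eye, then render all geometry
        //   (opaque objects first, then transparent objects, via gc.draw)
        // set the view for the right eye, then render all geometry again
        canvas.swap();               // the application swaps the buffers itself
    }
}
```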
Java 3D's execution and rendering model assumes the existence of a VirtualUniverse object and an attached scene graph. This scene graph can be minimal and not noticeable from an application's perspective when using immediate-mode rendering, but it must exist.

Java 3D's execution model intertwines with its rendering modes and with behaviors and their scheduling. This chapter first describes the three rendering modes, then describes how an application starts up a Java 3D environment, and finally it discusses how the various rendering modes work within this framework.
Java 3D supports three different modes for rendering scenes: immediate mode, retained mode, and compiled-retained mode. These three levels of API support represent a potentially large variation in graphics processing speed and in on-the-fly restructuring.

Immediate mode allows maximum flexibility at some cost in rendering speed. The application programmer can either use or ignore the scene graph structure inherent in Java 3D's design. The programmer can choose to draw geometry directly or to define a scene graph. Immediate mode can be either used independently or mixed with retained and/or compiled-retained mode rendering. The immediate-mode API is described in the "Immediate-Mode Rendering" section.
Retained mode allows a great deal of the flexibility provided by immediate mode while also providing a substantial increase in rendering speed. All objects defined in the scene graph are accessible and manipulable. The scene graph itself is fully manipulable. The application programmer can rapidly construct the scene graph, create and delete nodes, and instantly "see" the effect of edits. Retained mode also allows maximal access to objects through a general pick capability.

Java 3D's retained mode allows a programmer to construct objects, insert objects into a database, compose objects, and add behaviors to objects.

In retained mode, Java 3D knows that the programmer has defined objects, knows how the programmer has combined those objects into compound objects or scene graphs, and knows what behaviors or actions the programmer has attached to objects in the database. This knowledge allows Java 3D to perform many optimizations. It can construct specialized data structures that hold an object's geometry in a manner that enhances the speed at which the Java 3D system can render it. It can compile object behaviors so that they run at maximum speed when invoked. It can flatten transformation manipulations and state changes where possible in the scene graph.
Compiled-retained mode allows the Java 3D API to perform an arbitrarily complex series of optimizations including, but not restricted to, geometry compression, scene graph flattening, geometry grouping, and state change clustering.

Compiled-retained mode provides hooks for end-user manipulation and picking. Pick operations return the closest object (in scene graph space) associated with the picked geometry.

Java 3D's compiled-retained mode ensures effective graphics rendering speed in yet one more way. A programmer can request that Java 3D compile an object or a scene graph. Once it is compiled, the programmer has minimal access to the internal structure of the object or scene graph. Capability flags provide access to specified components that the application program may need to modify on a continuing basis.
A compiled object or scene graph consists of whatever internal structures Java 3D wishes to create to ensure that objects or scene graphs render at maximal rates. Because Java 3D knows that the majority of the compiled object's or scene graph's components will not change, it can perform an extraordinary number of optimizations, including the fusing of multiple objects into one conceptual object, turning an object into compressed geometry, or even breaking an object up into like-kind components and reassembling the like-kind components into new "conceptual objects."
From an application's perspective, Java 3D's render loop runs continuously. Whenever an application adds a scene branch to the virtual world, that scene branch is instantly visible. This high-level view of the render loop permits concurrent implementations of Java 3D as well as serial implementations. The remainder of this section describes the Java 3D render loop bootstrap process from a serialized perspective. Differences that would appear in concurrent implementations are noted as well.

First the application must construct its scene graphs. It does this by constructing scene graph nodes and component objects and linking them into self-contained trees with a BranchGroup node as a root. The application next must obtain a reference to any constituent nodes or objects within that branch that it may wish to manipulate. It sets the capabilities of all the objects to match their anticipated use and only then compiles the branch using the BranchGroup's compile method.

This initialization process is identical for retained and compiled-retained modes. In both modes, the application builds a scene graph. In compiled-retained mode, the application compiles the scene graph. Then the application inserts the (possibly compiled) scene graph into the virtual universe.
A scene graph consists of Java 3D objects, called nodes, arranged in a tree structure. The user creates one or more scene subgraphs and attaches them to a virtual universe. The individual connections between Java 3D nodes always represent a directed relationship: parent to child. Java 3D restricts scene graphs in one major way: scene graphs may not contain cycles. Thus, a Java 3D scene graph is a directed acyclic graph (DAG). See Figure 1.
Java 3D refines the Node object class into two subclasses: Group and Leaf node objects. Group node objects group together one or more child nodes. A group node can point to zero or more children but can have only one parent. The SharedGroup node cannot have any parents (although it allows sharing portions of a scene graph, as described in "Reusing Scene Graphs"). Leaf node objects contain the actual definitions of shapes (geometry), lights, fog, sounds, and so forth. A leaf node has no children and only one parent. The semantics of the various group and leaf nodes are described in subsequent chapters.

A scene graph organizes and controls the rendering of its constituent objects. The Java 3D renderer draws a scene graph in a consistent way that allows for concurrence. The Java 3D renderer can draw one object independently of other objects. Java 3D can allow such independence because its scene graphs have a particular form and cannot share state among branches of a tree.
The hierarchy of the scene graph encourages a natural spatial grouping on the geometric objects found at the leaves of the graph. Internal nodes act to group their children together. A group node also defines a spatial bound that contains all the geometry defined by its descendants. Spatial grouping allows for efficient implementation of operations such as proximity detection, collision detection, view frustum culling, and occlusion culling.
A leaf node's state is defined by the nodes in a direct path between the scene graph's root and the leaf. Because a leaf's graphics context relies only on a linear path between the root and that node, the Java 3D renderer can decide to traverse the scene graph in whatever order it wishes. It can traverse the scene graph from left to right and top to bottom, in level order from right to left, or even in parallel. The only exceptions to this rule are spatially bounded attributes such as lights and fog.
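Because a leaf's state depends only on the root-to-leaf path, it can be computed by walking parent links upward, independent of every other branch. A plain-Java model with a single scale attribute standing in for the full graphics state:

```java
// A node knows only its parent and its own local scale factor.
class PathNode {
    final PathNode parent;
    final double localScale;
    PathNode(PathNode parent, double localScale) {
        this.parent = parent;
        this.localScale = localScale;
    }

    /** A leaf's accumulated state: the product of scales along the direct
     *  path from the root, unaffected by any sibling branch and therefore
     *  computable in any traversal order, or in parallel. */
    double accumulatedScale() {
        double s = localScale;
        for (PathNode n = parent; n != null; n = n.parent) s *= n.localScale;
        return s;
    }
}
```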
This characteristic is in marked contrast to many older scene graph-based APIs (including PHIGS and SGI's Inventor) where, if a node above or to the left of a node changes the graphics state, the change affects the graphics state of all nodes below it or to its right.

The most common node object, along the path from the root to the leaf, that changes the graphics state is the TransformGroup object. The TransformGroup object can change the position, orientation, and scale of the objects below it.

Most graphics state attributes are set by a Shape3D leaf node through its constituent Appearance object, thus allowing parallel rendering. The Shape3D node also has a constituent Geometry object that specifies its geometry; this permits different shape objects to share common geometry without sharing material attributes (or vice versa).

The Java 3D renderer incorporates all graphics state changes made in a direct path from a scene graph root to a leaf object in the drawing of that leaf object. Java 3D provides this semantic for both retained and compiled-retained modes.
- A Java 3D scene graph consists of a collection of Java 3D node
-objects
-connected in a tree structure. These node objects reference other scene
-graph objects called node component objects.
-All scene graph node and component objects are subclasses of a common
-SceneGraphObject class. The
-SceneGraphObject class is an abstract class
-that defines methods that are common among nodes and component objects.
- Scene graph objects are constructed by creating a new instance of the
-desired class and are accessed and manipulated using the object's set
-and get methods. An important characteristic of all scene graph objects
-is that they can be accessed or modified only during the creation of a
-scene graph, except where explicitly allowed. Access to most set and
-get methods of objects that are part of a live or compiled scene graph
-is restricted.
- A Locale has no parent in the scene graph but is implicitly attached
-to a virtual universe when it is constructed. A Locale may reference an

-arbitrary number of BranchGroup nodes but has no explicit children. The coordinates of all scene graph objects are relative to the
-HiResCoord of the Locale in which they are contained. Operations on a
-Locale include setting or getting the HiResCoord of the Locale, adding
-a subgraph, and removing a subgraph. The View object is the central Java 3D object for coordinating all
-aspects of viewing.
-All viewing parameters in Java 3D are directly contained either within
-the View object or within objects pointed to by a View object. Java 3D
-supports multiple simultaneously active View objects, each of which can
-render to one or more canvases. The PhysicalEnvironment object encapsulates all of the parameters
-associated with the physical environment, such as calibration
-information for the tracker base for the head or hand tracker.
-
-Java 3D provides application programmers
-with two different means for reusing scene graphs. First, multiple
-scene graphs can share a common subgraph. Second, the node hierarchy of
-a common subgraph can be cloned, while still sharing large component
-objects such as geometry and texture objects. In the first case,
-changes in the shared subgraph affect all scene graphs that refer to
-the shared subgraph. In the second case, each instance is unique-a
-change in one instance does not affect any other instance.
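Plain Java references already exhibit the two behaviors described above. This sketch uses hypothetical classes (not the Java 3D Link/SharedGroup or cloneTree API) to contrast sharing a component with duplicating it:

```java
/** Hypothetical stand-in for a node component such as a material color. */
class PaintJob {
    String color;
    PaintJob(String color) { this.color = color; }
    PaintJob duplicate() { return new PaintJob(color); }  // independent copy
}

/** Hypothetical stand-in for a leaf node referencing a component. */
class Car {
    PaintJob paint;
    Car(PaintJob paint) { this.paint = paint; }
}

class ReuseDemo {
    public static void main(String[] args) {
        PaintJob shared = new PaintJob("red");
        Car sedan = new Car(shared);
        Car truck = new Car(shared);           // shared: both reference one component
        sedan.paint.color = "blue";
        System.out.println(truck.paint.color); // the change is visible through both

        Car convertible = new Car(shared.duplicate()); // cloned: independent copy
        convertible.paint.color = "green";
        System.out.println(sedan.paint.color); // unaffected by the clone's change
    }
}
```

The first half models a shared subgraph (one mutation affects every referrer); the second half models a cloned instance with its own duplicated component.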
- An application that wishes to share a subgraph from multiple places
-in
-a scene graph must do so through the use of the Link
-leaf node and an
-associated SharedGroup node. The
-SharedGroup node serves as the root of
-the shared subgraph. The Link leaf node refers to the SharedGroup node.
-It does not incorporate the shared scene graph directly into its scene
-graph.
- A SharedGroup node allows multiple Link leaf nodes to share its
-subgraph as shown in Figure
-1 below. An application developer may wish to reuse a common subgraph without
-completely sharing that subgraph. For example, the developer may wish
-to create a parking lot scene consisting of multiple cars, each with a
-different color. The developer might define three basic types of cars,
-such as convertible, truck, and sedan. To create the parking lot scene,
-the application will instantiate each type of car several times. Then
-the application can change the color of the various instances to create
-more variety in the scene. Unlike shared subgraphs, each instance is a
-separate copy of the scene graph definition: Changes to one instance do
-not affect any other instance.
- Java 3D provides the cloneTree method for this purpose. When a
-subgraph is duplicated via cloneTree, each cloned leaf node can
-continue to reference the original NodeComponent object, so that the
-data is shared between the original subgraph and its clone.
-Alternatively, the NodeComponent object can be duplicated, in which
-case the new leaf node would reference the duplicated object. This mode
-allows data referenced by the newly created leaf node to be modified
-without that modification affecting the original leaf node.
- Figure
-2
-shows two instances of NodeComponent objects that are shared and one
-NodeComponent element that is duplicated for the cloned subgraph.
-
- To handle these ambiguities, a callback mechanism is provided.
-
-A leaf node that needs to update referenced nodes upon being duplicated
-by a call to cloneTree must implement the updateNodeReferences method.
-Suppose, for instance, that the leaf node Lf1 in Figure 3 implemented
-the updateNodeReferences method; it could then replace its reference to
-the original node with a reference to that node's clone.
-
-All predefined Java 3D nodes will automatically have their references
-updated during a cloneTree operation.
-
-When a dangling reference is discovered, cloneTree either refers to the
-original node or throws a DanglingReferenceException, as selected by
-the caller. Leaf node subclasses (for example, Behaviors) that contain
-any user node-specific data that needs to be duplicated during a
-cloneTree operation must define the cloneNode and duplicateNode
-methods. NodeComponent subclasses that contain any user node-specific
-data must define the following two methods: cloneNodeComponent and
-duplicateNodeComponent.
- Java 3D introduces a new view model that takes Java's
-vision of "write once, run anywhere" and generalizes it to include
-display devices and six-degrees-of-freedom input peripherals such as
-head trackers. This "write once, view everywhere" nature of the new
-view model means that an application or applet written using the Java
-3D view model can render images to a broad range of display devices,
-including standard computer displays, multiple-projection display
-rooms, and head-mounted displays, without modification of the scene
-graph. It also means that the same application, once again without
-modification, can render stereoscopic views and can take advantage of
-the input from a head tracker to control the rendered view.
- Java 3D's view model achieves this versatility by cleanly
-separating
-the virtual and the physical world. This model distinguishes between
-how an application positions, orients, and scales a ViewPlatform object
-(a viewpoint) within the virtual world and how the Java 3D
-renderer
-constructs the final view from that viewpoint's position and
-orientation. The application controls the ViewPlatform's position and
-orientation; the renderer computes what view to render using this
-position and orientation, a description of the end-user's physical
-environment, and the user's position and orientation within the
-physical environment.
- This document first explains why Java 3D chose a different view
-model
-and some of the philosophy behind that choice. It next describes how
-that model operates in the simple case of a standard computer screen
-without head tracking—the most common case. Finally, it presents
-advanced material that was originally published in Appendix C of the
-API specification guide.
-
- Camera-based view models, as found in low-level APIs, give
-developers
-control over all rendering parameters. This makes sense when dealing
-with custom applications, less sense when dealing with systems that
-wish to have broader applicability: systems such as viewers or browsers
-that load and display whole worlds as a single unit or systems where
-the end users view, navigate, display, and even interact with the
-virtual world.
- Camera-based view models emulate a camera in the virtual world, not
-a
-human in a virtual world. Developers must continuously reposition a
-camera to emulate "a human in the virtual world."
- The Java 3D view model incorporates head tracking directly, if
-present,
-with no additional effort from the developer, thus providing end users
-with the illusion that they actually exist inside a virtual world.
- The Java 3D view model, when operating in a non-head-tracked
-environment and rendering to a single, standard display, acts very much
-like a traditional camera-based view model, with the added
-functionality of being able to generate stereo views transparently.
-
- Letting the application control all viewing parameters is not
-reasonable in systems in which the physical environment dictates some
-of the view parameters.
- One example of this is a head-mounted display (HMD), where the
-optics
-of the head-mounted display directly determine the field of view that
-the application should use. Different HMDs have different optics,
-making it unreasonable for application developers to hard-wire such
-parameters or to allow end users to vary that parameter at will.
- Another example is a system that automatically computes view
-parameters
-as a function of the user's current head position. The specification of
-a world and a predefined flight path through that world may not exactly
-specify an end-user's view. HMD users would expect to look and thus see
-to their left or right even when following a fixed path through the
-environment-imagine an amusement park ride with vehicles that follow
-fixed paths to present content to their visitors, but visitors can
-continue to move their heads while on those rides.
- Depending on the physical details of the end-user's environment, the
-values of the viewing parameters, particularly the viewing and
-projection matrices, will vary widely. The factors that influence the
-viewing and projection matrices include the size of the physical
-display, how the display is mounted (on the user's head or on a table),
-whether the computer knows the user's head location in three space, the
-head mount's actual field of view, the display's pixels per inch, and
-other such parameters. For more information, see "View Model Details."
-
- The Java 3D view model separates the virtual environment, where
-the
-application programmer has placed objects in relation to one another,
-from the physical environment, where the user exists, sees computer
-displays, and manipulates input devices.
- Java 3D also defines a fundamental correspondence between the
-user's
-physical world and the virtual world of the graphic application. This
-physical-to-virtual-world correspondence defines a single common space,
-a space where an action taken by an end user affects objects within the
-virtual world and where any activity by objects in the virtual world
-affects the end user's view.
-
- The virtual world is a common space in which virtual objects exist.
-The
-virtual world coordinate system exists relative to a high-resolution
-Locale-each Locale object defines the origin of virtual world
-coordinates for all of the objects attached to that Locale. The Locale
-that contains the currently active ViewPlatform object defines the
-virtual world coordinates that are used for rendering. Java 3D
-eventually transforms all coordinates associated with scene graph
-elements into this common virtual world space.
- The physical world is just that-the real, physical world. This is
-the
-space in which the physical user exists and within which he or she
-moves his or her head and hands. This is the space in which any
-physical trackers define their local coordinates and in which several
-calibration coordinate systems are described.
- The physical world is a space, not a common coordinate system
-between
-different execution instances of Java 3D. So while two different
-computers at two different physical locations on the globe may be
-running at the same time, there is no mechanism directly within
-Java 3D
-to relate their local physical world coordinate systems with each
-other. Because of calibration issues, the local tracker (if any)
-defines the local physical world coordinate system known to a
-particular instance of Java 3D.
-
- Java 3D distributes its view model parameters across several
-objects,
-specifically, the View object and its associated component objects, the
-PhysicalBody object, the PhysicalEnvironment object, the Canvas3D
-object, and the Screen3D object. Figure
-1 shows graphically the central role of the View object and the
-subsidiary role of its component objects.
-
-
-The view-related objects shown in Figure
-1
-and their roles are as follows. For each of these objects, the portion
-of the API that relates to modifying the virtual world and the portion
-of the API that is relevant to non-head-tracked standard display
-configurations are derived in this chapter. The remainder of the
-details are described in "View Model
-Details."
- Together, these objects describe the geometry of viewing rather than
-explicitly providing a viewing or projection matrix. The Java 3D
-renderer uses this information to construct the appropriate viewing and
-projection matrices. The geometric focus of these view objects provides
-more flexibility in generating views-a flexibility needed to support
-alternative display configurations.
- A ViewPlatform leaf node defines a coordinate system, and thus a
-reference frame with its associated origin or reference point, within
-the virtual world. The ViewPlatform serves as a point of attachment for
-View objects and as a base for determining a renderer's view.
- Figure
-2
-shows a portion of a scene graph containing a ViewPlatform node. The
-nodes directly above a ViewPlatform determine where that ViewPlatform
-is located and how it is oriented within the virtual world. By
-modifying the Transform3D object associated with a TransformGroup node
-anywhere directly above a ViewPlatform, an application or behavior can
-move that ViewPlatform anywhere within the virtual world. A simple
-application might define one TransformGroup node directly above a
-ViewPlatform, as shown in Figure
-2.
- A VirtualUniverse may have many different ViewPlatforms, but a
-particular View object can attach itself only to a single ViewPlatform.
-Thus, each rendering onto a Canvas3D is done from the point of view of
-a single ViewPlatform.
-
-
- An application navigates within the virtual world by modifying a
-ViewPlatform's parent TransformGroup. Examples of applications that
-modify a ViewPlatform's location and orientation include browsers,
-object viewers that provide navigational controls, applications that do
-architectural walkthroughs, and even search-and-destroy games.
- Controlling the ViewPlatform object can produce very interesting and
-useful results. Our first simple scene graph (see "Introduction," Figure 1)
-defines a scene graph for a simple application that draws an object in
-the center of a window and rotates that object about its center point.
-In that figure, the Behavior object modifies the TransformGroup
-directly above the Shape3D node.
- An alternative application scene graph, shown in Figure
-3,
-leaves the central object alone and moves the ViewPlatform around the
-world. If the shape node contains a model of the earth, this
-application could generate a view similar to that seen by astronauts as
-they orbit the earth.
- Had we populated this world with more objects, this scene graph
-would allow navigation through the world via the Behavior node.
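The orbiting viewpoint reduces to updating the transform above the ViewPlatform each frame. A minimal sketch of the math in plain Java (not the Java 3D TransformGroup API): the platform's position is an initial offset (0, 0, r) rotated about the y-axis.

```java
class OrbitBehavior {
    /** Position of a ViewPlatform orbiting the origin at the given radius and angle. */
    static double[] platformPosition(double radius, double angleRadians) {
        return new double[] {
            radius * Math.sin(angleRadians),  // x
            0.0,                              // y (equatorial orbit, for simplicity)
            radius * Math.cos(angleRadians)   // z
        };
    }

    public static void main(String[] args) {
        // A quarter orbit moves the viewpoint from the +z axis around to +x.
        double[] p = platformPosition(2.0, Math.PI / 2);
        System.out.printf("%.3f %.3f %.3f%n", p[0], p[1], p[2]);
    }
}
```

In a real application, a Behavior would write this position into the Transform3D of the TransformGroup directly above the ViewPlatform once per wakeup.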
-
-
-Applications and behaviors manipulate a TransformGroup through its
-access methods. These methods allow an application to retrieve and
-set the Group node's Transform3D object. The TransformGroup methods
-include getTransform and setTransform.
-objects. If a user detaches a View object
-from a ViewPlatform and then
-reattaches that View to a different ViewPlatform, the image on the
-display will now be rendered from the point of view of the new
-ViewPlatform. Java 3D does not have any built-in semantics for displaying a
-visible
-manifestation of a ViewPlatform within the virtual world (an avatar).
-However, a developer can construct and manipulate an avatar using
-standard Java 3D constructs.
- A developer can construct a small scene graph consisting of a
-TransformGroup node, a behavior leaf node, and a shape node and insert
-it directly under the BranchGroup node associated with the ViewPlatform
-object. The shape node would contain a geometric model of the avatar's
-head. The behavior node would change the TransformGroup's transform
-periodically to the value stored in a View object's user-head-to-vworld
-transform (available via getUserHeadToVworld).
- Java 3D generates viewing matrices in one of a few different
-ways,
-depending on whether the end user has a head-mounted or a room-mounted
-display environment and whether head tracking is enabled. This section
-describes the computation for a non-head-tracked, room-mounted
-display-a standard computer display. Other environments are described
-in "View Model Details."
- In the absence of head tracking, the ViewPlatform's origin specifies
-the virtual eye's location and orientation within the virtual world.
-However, the eye location provides only part of the information needed
-to render an image. The renderer also needs a projection matrix. In the
-default mode, Java 3D uses the projection policy, the specified
-field-of-view information, and the front and back clipping distances to
-construct a viewing frustum.
-
- Figure
-4
-shows a simple scene graph. To draw the object labeled "S,"
-Java 3D
-internally constructs the appropriate model, view platform, eye, and
-projection matrices. Conceptually, the model transformation for a
-particular object is computed by concatenating all the matrices in a
-direct path between the object and the VirtualUniverse. The view matrix
-is then computed-again, conceptually-by concatenating all the matrices
-between the VirtualUniverse object and the ViewPlatform attached to the
-current View object. The eye and projection matrices are constructed
-from the View object and its associated component objects.
-
- In our scene graph, what we would normally consider the model
-transformation would consist of the following three transformations:
-L·T1·T2. By multiplying L·T1·T2 by a vertex in the shape object, we
-would transform that vertex into the virtual universe's coordinate
-system. What we would normally consider the view platform
-transformation would be (L·Tv1)^-1, or Tv1^-1·L^-1. This presents a
-problem, since coordinates in the virtual universe are 256-bit
-fixed-point values, which cannot be used to represent transformed
-points efficiently.
- Fortunately, however, there is a solution to this problem. Composing
-the model and view platform transformations gives us
-Tv1^-1·L^-1·L·T1·T2 = Tv1^-1·T1·T2, the matrix that takes vertices in
-an object's local coordinate system and places them in the
-ViewPlatform's coordinate system. Note that the high-resolution Locale
-transformations cancel each other out, which removes the need to
-actually transform points into high-resolution VirtualUniverse
-coordinates. The general formula of the matrix that transforms object
-coordinates to ViewPlatform coordinates is
-Tvn^-1...Tv2^-1·Tv1^-1·T1·T2...Tm.
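The cancellation of L can be checked numerically. In this sketch every transform is reduced to a pure translation (a Locale always is; general transforms compose the same way), so applying the matrix chain is just vector addition and subtraction:

```java
class LocaleCancellation {
    static double[] add(double[] a, double[] b) {
        return new double[]{ a[0] + b[0], a[1] + b[1], a[2] + b[2] };
    }
    static double[] sub(double[] a, double[] b) {
        return new double[]{ a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    }

    /** Tvn^-1...Tv1^-1 L^-1 L T1 T2...Tm applied to p, all transforms as translations. */
    static double[] toViewPlatform(double[] locale, double[][] model, double[][] view, double[] p) {
        double[] v = p;
        for (double[] t : model) v = add(v, t);  // T1 T2 ... Tm: into Locale coordinates
        v = add(v, locale);                      // L: into VirtualUniverse coordinates
        v = sub(v, locale);                      // L^-1 from the view side: L cancels
        for (double[] t : view) v = sub(v, t);   // Tv1^-1 ... Tvn^-1
        return v;
    }

    public static void main(String[] args) {
        double[][] model = { { 1, 2, 3 }, { 4, 5, 6 } };
        double[][] view = { { 10, 0, 0 } };
        // The same answer falls out whether the Locale sits at the origin or far away.
        double[] near = toViewPlatform(new double[]{ 0, 0, 0 }, model, view, new double[]{ 0, 0, 0 });
        double[] far = toViewPlatform(new double[]{ 1e12, 0, 0 }, model, view, new double[]{ 0, 0, 0 });
        System.out.println(near[0] == far[0] && near[1] == far[1] && near[2] == far[2]);
    }
}
```

Because L and L^-1 pair off, the huge Locale offset never has to be carried through the point transformation, which is exactly why the renderer can avoid high-resolution arithmetic.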
- As mentioned earlier, the View object contains the remainder of the
-view information: specifically, the eye matrix, E, that takes points in
-the ViewPlatform's local coordinate system and translates them into the
-user's eye coordinate system, and the projection matrix, P, that
-projects objects in the eye's coordinate system into clipping
-coordinates. The final concatenation of matrices for rendering our
-shape object "S" on the specified Canvas3D is P·E·Tv1^-1·T1·T2. In
-general this is P·E·Tvn^-1...Tv2^-1·Tv1^-1·T1·T2...Tm.
- The details of how Java 3D constructs the matrices E
-and P in different end-user configurations are
-described in "View Model Details."
-
- Java 3D supports multiple high-resolution Locales. In some
-cases,
-these
-Locales are close enough to each other that they can "see" each other,
-meaning that objects can be rendered even though they are not in the
-same Locale as the ViewPlatform object that is attached to the View.
-Java 3D automatically handles this case without the application
-having
-to do anything. As in the previous example, where the ViewPlatform and
-the object being rendered are attached to the same Locale, Java 3D
-internally constructs the appropriate matrices for cases in which the
-ViewPlatform and the object being rendered are not attached
-to the same Locale.
- Let's take two Locales, L1 and L2, with the View attached to a
-ViewPlatform in L1. According to our general formula, the modeling
-transformation-the transformation that takes points in object
-coordinates and transforms them into VirtualUniverse coordinates-is
-L·T1·T2...Tm. In our specific example, a point in Locale L2 would be
-transformed into VirtualUniverse coordinates by L2·T1·T2...Tm. The view
-platform transformation would be (L1·Tv1·Tv2...Tvn)^-1, or
-Tvn^-1...Tv2^-1·Tv1^-1·L1^-1. Composing these two matrices gives us
-Tvn^-1...Tv2^-1·Tv1^-1·L1^-1·L2·T1·T2...Tm.
- Thus, to render objects in another Locale, it is sufficient to
-compute L1^-1·L2 and use that as the starting matrix when composing the
-model transformations. Given that a Locale is represented by a single
-high-resolution coordinate position, the transformation L1^-1·L2 is a
-simple translation by L2 - L1. Again, it is not actually necessary to
-transform points into high-resolution VirtualUniverse coordinates.
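The translation-only relationship holds even though Locale positions are 256-bit fixed-point values. A sketch using BigInteger as a stand-in for a high-resolution coordinate, assuming the binary point sits at bit 128 so that one meter is 2^128 units:

```java
import java.math.BigInteger;

class InterLocale {
    /** 1 meter in a 256-bit fixed-point coordinate with the binary point at bit 128
     *  (an assumption of this sketch about the HiResCoord layout). */
    static final BigInteger METER = BigInteger.ONE.shiftLeft(128);

    /** L1^-1 L2 collapses to the translation L2 - L1, folded to an ordinary double. */
    static double offsetMeters(BigInteger l1, BigInteger l2) {
        // Subtract in full precision first, then reduce: the difference of two
        // nearby Locales fits comfortably in a double even when neither one does.
        return l2.subtract(l1).doubleValue() / Math.pow(2, 128);
    }

    public static void main(String[] args) {
        BigInteger l1 = METER.multiply(BigInteger.valueOf(1_000_000_000L));
        BigInteger l2 = METER.multiply(BigInteger.valueOf(1_000_000_003L));
        System.out.println(offsetMeters(l1, l2));  // two distant Locales, 3 m apart
    }
}
```

This is the sense in which Locales are "close enough to be rendered": only the difference of their positions needs to be representable as a noninfinite double.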
- In general, Locales that are close enough that the difference in
-their
-high-resolution coordinates can be represented in double precision by a
-noninfinite value are close enough to be rendered. In practice, more
-sophisticated culling techniques can be used to render only those
-Locales that really are "close enough."
-
- An application must create a minimal set of Java 3D objects
-before
-Java
-3D can render to a display device. In addition to a Canvas3D object,
-the application must create a View object, with its associated
-PhysicalBody and PhysicalEnvironment objects, and the following scene
-graph elements:
- An application programmer writing a 3D
-graphics program that will deploy on a variety of platforms must
-anticipate the likely end-user environments and must carefully
-construct the view transformations to match those characteristics using
-a low-level API. This appendix addresses many of the issues an
-application must face and describes the sophisticated features that
-Java 3D's advanced view model provides.
-
- Java 3D must handle two rather different head-tracking
-situations.
-In one case, we rigidly attach a tracker's base,
-and thus its coordinate frame, to the display environment. This
-corresponds to placing a tracker base in a fixed position and
-orientation relative to a projection screen within a room, to a
-computer display on a desk, or to the walls of a multiple-wall
-projection display. In the second head-tracking situation, we rigidly
-attach a tracker's sensor, not its base, to the display
-device. This corresponds to rigidly attaching one of that tracker's
-sensors to a head-mounted display and placing the tracker base
-somewhere within the physical environment.
-
- The following two examples show how end-user environments can
-significantly affect how an application must construct viewing
-transformations.
-
- By adding a left and right screen, we give the magic carpet rider a
-more complete view of the virtual world surrounding the carpet. Now our
-end user sees the view to the left or right of the magic carpet by
-turning left or right.
-
- From a camera-based perspective, the application developer must
-construct the camera's position and orientation by combining the
-virtual-world component (the position and orientation of the magic
-carpet) and the physical-world component (the user's instantaneous head
-position and orientation).
- Java 3D's view model incorporates the appropriate abstractions
-to
-compensate automatically for such variability in end-user hardware
-environments.
-
-
-
- The coexistence coordinate system exists half in the virtual world
-and
-half in the physical world. The two transforms that go from the
-coexistence coordinate system to the virtual world coordinate system
-and back again contain all the information needed to expand or shrink
-the virtual world relative to the physical world. It also contains the
-information needed to position and orient the virtual world relative to
-the physical world.
- Modifying the transform that maps the coexistence coordinate system
-into the virtual world coordinate system changes what the end user can
-see. The Java 3D application programmer moves the end user within
-the
-virtual world by modifying this transform.
-
-
-
-
-
-
-A multiple-projection wall display presents a more exotic environment.
-Such environments have multiple screens, typically three or more. Figure
-9 shows a scene graph fragment representing such a system, and Figure
-10 shows the corresponding display environment.
-
-
-
-A multiple-screen environment requires more care during the
-initialization and calibration phase. Java 3D must know how the
-Screen3Ds are placed with respect to one another, the tracking device,
-and the physical portion of the coexistence coordinate system.
-
- The "Generating a View" section
-describes how Java 3D generates a view for a standard flat-screen
-display with no head tracking. In this section, we describe how
-Java 3D
-generates a view in a room-mounted, head-tracked display
-environment-either a computer monitor with shutter glasses and head
-tracking or a multiple-wall display with head-tracked shutter glasses.
-Finally, we describe how Java 3D generates view matrices in a
-head-mounted and head-tracked display environment.
- If any of the parameters of a View object are updated, this will
-effect
-a change in the implicit viewing transform (and thus image) of any
-Canvas3D that references that View object.
-
- A camera-based view model allows application programmers to think
-about
-the images displayed on the computer screen as if a virtual camera took
-those images. Such a view model allows application programmers to
-position and orient a virtual camera within a virtual scene, to
-manipulate some parameters of the virtual camera's lens (specify its
-field of view), and to specify the locations of the near and far
-clipping planes.
- Java 3D allows applications to enable compatibility mode for
-room-mounted, non-head-tracked display environments or to disable
-compatibility mode using the View object's setCompatibilityModeEnable
-method. Camera-based viewing functions are available only in
-compatibility mode.
- Note: Use of these view-compatibility functions will disable some of
-Java 3D's view model features and limit the portability of Java 3D
-programs. These methods are primarily intended to help jump-start
-porting of existing applications.
- The various parameters that users control in a
-camera-based view model specify the shape of a viewing volume (known as
-a frustum because of its truncated pyramidal shape) and locate that
-frustum within the virtual environment. The rendering pipeline uses the
-frustum to decide which objects to draw on the display screen. The
-rendering pipeline does not draw objects outside the view frustum, and
-it clips (partially draws) objects that intersect the frustum's
-boundaries.
- Though a view frustum's specification may have many items in common
-with those of a physical camera, such as placement, orientation, and
-lens settings, some frustum parameters have no physical analog. Most
-noticeably, a frustum has two parameters not found on a physical
-camera: the near and far clipping planes.
-
-
-The location of the near and far clipping planes allows the application
-programmer to specify which objects Java 3D should not draw.
-Objects
-too far away from the current eyepoint usually do not result in
-interesting images. Those too close to the eyepoint might obscure the
-interesting objects. By carefully specifying near and far clipping
-planes, an application programmer can control which objects the
-renderer will not be drawing.
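In eye coordinates, with the eye at the origin looking down the negative z-axis, the near/far decision described above reduces to one comparison per point. This is a sketch of the idea, not the renderer's actual clip stage:

```java
class NearFarClip {
    /** True if an eye-space point lies between the near and far clipping planes. */
    static boolean survivesClip(double zEye, double near, double far) {
        double distance = -zEye;                 // the eye looks down -z
        return distance >= near && distance <= far;
    }

    public static void main(String[] args) {
        System.out.println(survivesClip(-5.0, 1.0, 10.0));   // between the planes: drawn
        System.out.println(survivesClip(-0.5, 1.0, 10.0));   // closer than near: culled
        System.out.println(survivesClip(-50.0, 1.0, 10.0));  // beyond far: culled
    }
}
```

Objects straddling either plane are the ones the pipeline must partially draw (clip) rather than accept or reject outright.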
- From the perspective of the display device, the virtual camera's
-image
-plane corresponds to the display screen. The camera's placement,
-orientation, and field of view determine the shape of the view frustum.
-
- The camera-based view model allows Java 3D to bridge the gap
-between
-existing 3D code and Java 3D's view model. By using the
-camera-based
-view model methods, a programmer retains the familiarity of the older
-view model but gains some of the flexibility afforded by Java 3D's
-new
-view model.
- The traditional camera-based view model is supported in Java 3D
-by
-helping methods in the Transform3D object. These methods were
-explicitly designed to resemble as closely as possible the view
-functions of older packages and thus should be familiar to most 3D
-programmers. The resulting Transform3D objects can be used to set
-compatibility-mode transforms in the View object.
-
- The Transform3D object provides a lookAt method for constructing a
-viewing matrix from an eye position, a point to look at, and an up
-vector. The Transform3D object provides three methods for creating a
-projection matrix: frustum, perspective, and ortho.
- The frustum method establishes a perspective projection; its
-arguments define the frustum and its associated perspective projection
-in terms of the left, right, bottom, and top edges of the view volume
-at the near clipping plane, plus the near and far clipping distances.
- The perspective method also establishes a perspective projection; its
-arguments define the frustum and its associated perspective projection
-in terms of a field of view, an aspect ratio, and the near and far
-clipping distances.
- The ortho method establishes a parallel projection; its arguments
-define a rectangular box used for projection: the left, right, bottom,
-and top edges of the view volume, plus the near and far clipping
-distances.
-
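The frustum-style and perspective-style specifications describe the same view volume. A sketch of the conversion, assuming the perspective parameters are an x field of view in radians and an aspect ratio of width over height (treat those conventions as assumptions of this sketch):

```java
class PerspectiveToFrustum {
    /** Frustum edges {left, right, bottom, top} on the near plane that match a
     *  perspective spec given as an x field of view (radians) and aspect = width/height.
     *  A sketch of the relationship, not the library's code. */
    static double[] frustumEdges(double fovx, double aspect, double near) {
        double right = near * Math.tan(fovx / 2.0);  // half the frustum width at the near plane
        double top = right / aspect;                 // half the height, from the aspect ratio
        return new double[]{ -right, right, -top, top };
    }

    public static void main(String[] args) {
        // A 90-degree horizontal field of view on a square viewport, near plane at 1:
        double[] e = frustumEdges(Math.PI / 2.0, 1.0, 1.0);
        System.out.printf("left=%.2f right=%.2f bottom=%.2f top=%.2f%n", e[0], e[1], e[2], e[3]);
    }
}
```

A parallel (ortho) projection skips the tangent entirely: its box edges are given directly and do not scale with distance from the eye.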
- Java 3D's superstructure consists of one or more
-VirtualUniverse objects, each of which contains a set of one or more
-high-resolution Locale objects. The Locale objects, in turn, contain
-collections of subgraphs that comprise the scene graph (see Figure
-1).
-
- Virtual universes are separate entities in that no node object may
-exist in more than one virtual universe at any one time. Likewise, the
-objects in one virtual universe are not visible in, nor do they
-interact with objects in, any other virtual universe.
- To support large virtual universes, Java 3D introduces the concept
-of Locales that have high-resolution coordinates
-as an origin. Think of high-resolution coordinates as "tie-downs" that
-precisely anchor the locations of objects specified using less precise
-floating-point coordinates that are within the range of influence of
-the high-resolution coordinates.
- A Locale, with its associated high-resolution coordinates, serves as
-the next level of representation down from a virtual universe. All
-virtual universes contain one or more high-resolution-coordinate
-Locales, and all other objects are attached to a Locale.
-High-resolution coordinates act as an upper-level translation-only
-transform node. For example, the coordinates of all objects that are
-attached to a particular Locale are all relative to the location of
-that Locale's high-resolution coordinates.
-
-
-While a virtual universe is similar to the traditional computer
-graphics concept of a scene graph, a given virtual universe can become
-so large that it is often better to think of a scene graph as the
-descendant of a high-resolution-coordinate Locale.
-
- If a user chooses to "shrink" down to a small size (say the size of
-an IC transistor), even very near (0.0, 0.0, 0.0), the same
-floating-point precision problem arises.
- If a large contiguous virtual universe is to be supported, some form
-of
-higher-resolution addressing is required. Thus the choice of 256-bit
-positional components for "high-resolution" positions.
-
-Behaviors and Interpolators
-Behavior Object
-initialize method called once when the behavior becomes "live" and a
-processStimulus method called whenever appropriate by the Java 3D
-behavior scheduler. The Behavior object also contains the state
-information needed by its initialize and processStimulus methods.
-The processStimulus method receives and processes a
-behavior's ongoing messages. The Java 3D behavior scheduler
-invokes a
-Behavior node's processStimulus
-method when an active ViewPlatform's activation volume intersects a
-Behavior object's scheduling region and all of that behavior's wakeup
-criteria are satisfied. The processStimulus
method
-performs its computations and actions (possibly including the
-registration of state change information that could cause Java 3D
-to
-wake other Behavior objects), establishes its next wakeup condition,
-and finally exits.
-
-
- All scene graph changes made from the processStimulus method of a
-single behavior instance are guaranteed to take effect in the same
-rendering frame. Likewise, all scene graph changes made from the
-processStimulus methods of the set of behaviors that wake up in
-response to a WakeupOnElapsedFrames(0) wakeup condition are guaranteed
-to take effect in the same rendering frame.
-Code Structure
-When Java 3D invokes a behavior's processStimulus
-method, that method may perform any computation it wishes. Usually, it
-will change its internal state and specify its new wakeup conditions.
-Most probably, it will manipulate scene graph elements. However, the
-behavior code can change only those aspects of a scene graph element
-permitted by the capabilities associated with that scene graph element.
-A scene graph's capabilities restrict behavioral manipulation to those
-manipulations explicitly allowed.
-A behavior specifies its wakeup conditions either in its initialize
-method or each time Java 3D invokes its processStimulus
-method.
-
-WakeupCondition Object
-WakeupCriterion Object
-Java 3D invokes a behavior's processStimulus
-method whenever the behavior's wakeup condition is satisfied.
-Composing WakeupCriterion Objects
-A WakeupCondition can be constructed from WakeupCriterion objects in
-the following four forms, corresponding to the WakeupAnd, WakeupOr,
-WakeupAndOfOrs, and WakeupOrOfAnds classes:
-
- WakeupCriterion && WakeupCriterion && ...
-
- WakeupCriterion || WakeupCriterion || ...
-
- WakeupOr && WakeupOr && ...
-
- WakeupAnd || WakeupAnd || ...
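The evaluation rule these compositions imply can be sketched with plain boolean suppliers. This is an illustrative standalone model, not the Java 3D WakeupCondition classes: an "and of ors" is satisfied only when every inner "or" group has at least one satisfied criterion.

```java
import java.util.List;
import java.util.function.BooleanSupplier;

// Standalone sketch of composed wakeup conditions (illustration only,
// not the Java 3D WakeupAnd/WakeupOr/WakeupAndOfOrs classes).
public class WakeupComposition {
    // An "or" group fires when any criterion in it is satisfied.
    static boolean or(List<BooleanSupplier> criteria) {
        return criteria.stream().anyMatch(BooleanSupplier::getAsBoolean);
    }

    // An "and of ors" fires only when every "or" group fires.
    static boolean andOfOrs(List<List<BooleanSupplier>> groups) {
        return groups.stream().allMatch(WakeupComposition::or);
    }

    public static void main(String[] args) {
        BooleanSupplier frameElapsed = () -> true;   // stand-in criterion
        BooleanSupplier keyPressed   = () -> false;  // stand-in criterion
        // (frameElapsed || keyPressed) && (keyPressed)
        System.out.println(andOfOrs(List.of(
                List.of(frameElapsed, keyPressed),
                List.of(keyPressed))));  // false: second group never fires
    }
}
```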
-Composing Behaviors
-Scheduling
-How Java 3D Performs
-Execution Culling
-
-
-
-
-If that intersection test returns true,
-Java 3D will schedule that Behavior object for execution and
-eventually invoke its processStimulus
-method.
-Interpolator Behaviors
-Mapping Time to Alpha
-
- Figure 1 – An Interpolator's Generic
-Time-to-Alpha Mapping Sequence
-
-
- Figure 2 – An Interpolator Set to a Loop
-Count of 1 with Mode Flags Set to Enable
-Only the Alpha-Increasing and Alpha-at-1 Portion of the Waveform
-
-
- Figure 3 – An Interpolator Set to a Loop
-Count of 1 with Mode Flags Set to Enable
-Only the Alpha-Decreasing and Alpha-at-0 Portion of the Waveform
-
-
- Figure 4 – An Interpolator Set to a Loop
-Count of 1 with Mode Flags
-Set to Enable All Portions of the Waveform
-
-
- Figure 5 – An Interpolator Set to Loop
-Infinitely and Mode Flags Set to Enable
-Only the Alpha-Increasing and Alpha-at-1 Portion of the Waveform
-
-
- Figure 6 – An Interpolator Set to Loop
-Infinitely and Mode Flags Set to Enable
-Only the Alpha-Decreasing and Alpha-at-0 Portion of the Waveform
-
-
- Figure 7 – An Interpolator Set to Loop
-Infinitely and Mode Flags Set
-to Enable All Portions of the Waveform
-
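The looping waveforms in the figures above can be sketched numerically. With both the alpha-increasing and alpha-decreasing portions enabled and an infinite loop count, alpha traces a triangle wave between 0 and 1. This is an illustration only; the real Alpha class additionally supports trigger time, phase delay, alpha-at-1 and alpha-at-0 hold durations, and ramp acceleration.

```java
// Sketch of an infinitely looping alpha waveform with the increasing
// and decreasing portions enabled (illustration, not the Alpha class).
public class TriangleAlpha {
    // t = time in seconds; period = length of one full up-down cycle.
    static double value(double t, double period) {
        double phase = (t % period) / period;      // 0..1 within a cycle
        return phase < 0.5 ? 2.0 * phase           // increasing half
                           : 2.0 * (1.0 - phase);  // decreasing half
    }

    public static void main(String[] args) {
        System.out.println(value(0.0, 4.0));  // 0.0
        System.out.println(value(1.0, 4.0));  // 0.5 (halfway up)
        System.out.println(value(2.0, 4.0));  // 1.0 (peak)
        System.out.println(value(3.0, 4.0));  // 0.5 (halfway down)
    }
}
```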
-Acceleration of Alpha
-Java 3D allows the acceleration and deceleration of alpha to be
-controlled by two parameters: the increasingAlphaRampDuration
-and the decreasingAlphaRampDuration.
-The increasingAlphaRampDuration
-value modifies the alpha waveform as follows. A value of 0 for the
-increasing ramp duration implies that α
-is not accelerated; it changes at a constant rate. A value of 0.5 or
-greater (clamped to 0.5) for this increasing ramp duration implies that
-the change in α is accelerated during the first half of the
-period and
-then decelerated during the second half of the period. For a value of n
-that is less than 0.5, α is accelerated for duration n,
-changes at a constant rate for duration (1.0 - 2n), then decelerated
-for duration n of the period.
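The ramp math above can be made concrete. In the sketch below (an illustration, not the Alpha class itself), one alpha-increasing interval is normalized to t in [0, 1]; the peak rate v = 1/(1 - n) follows from requiring that alpha reach exactly 1 at the end of the interval.

```java
// Sketch of alpha acceleration over one normalized increasing interval:
// accelerate for duration n, move at constant rate for (1 - 2n), then
// decelerate for duration n (illustration, not the Alpha class).
public class AlphaRamp {
    static double alpha(double t, double n) {
        n = Math.min(n, 0.5);           // ramp duration is clamped to 0.5
        if (n <= 0.0) return t;         // no ramp: constant rate
        double v = 1.0 / (1.0 - n);     // peak rate so that alpha(1) == 1
        if (t < n)                      // accelerating from rate 0 to v
            return 0.5 * (v / n) * t * t;
        if (t <= 1.0 - n)               // cruising at constant rate v
            return 0.5 * v * n + v * (t - n);
        double r = 1.0 - t;             // decelerating from v back to 0
        return 1.0 - 0.5 * (v / n) * r * r;
    }

    public static void main(String[] args) {
        System.out.println(alpha(0.0, 0.25));  // 0.0
        System.out.println(alpha(0.5, 0.25));  // ~0.5 (waveform is symmetric)
        System.out.println(alpha(1.0, 0.25));  // 1.0
    }
}
```

With n = 0, the function degenerates to alpha = t, matching the "not accelerated, changes at a constant rate" case in the text.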
-
- Figure 8 – How an Alpha-Increasing Waveform
-Changes with Various
-Values of increasingAlphaRampDuration
-
-
-
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Behaviors1.gif b/src/main/java/org/jogamp/java3d/doc-files/Behaviors1.gif
deleted file mode 100644
index bb288ce..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/Behaviors1.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Behaviors2.gif b/src/main/java/org/jogamp/java3d/doc-files/Behaviors2.gif
deleted file mode 100644
index 005564f..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/Behaviors2.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Behaviors3.gif b/src/main/java/org/jogamp/java3d/doc-files/Behaviors3.gif
deleted file mode 100644
index a8beb09..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/Behaviors3.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Behaviors4.gif b/src/main/java/org/jogamp/java3d/doc-files/Behaviors4.gif
deleted file mode 100644
index 685bcb7..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/Behaviors4.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Behaviors5.gif b/src/main/java/org/jogamp/java3d/doc-files/Behaviors5.gif
deleted file mode 100644
index 74783fb..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/Behaviors5.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Behaviors6.gif b/src/main/java/org/jogamp/java3d/doc-files/Behaviors6.gif
deleted file mode 100644
index 8614a4e..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/Behaviors6.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Behaviors7.gif b/src/main/java/org/jogamp/java3d/doc-files/Behaviors7.gif
deleted file mode 100644
index 0f2ce48..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/Behaviors7.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Behaviors8.gif b/src/main/java/org/jogamp/java3d/doc-files/Behaviors8.gif
deleted file mode 100644
index d048cfa..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/Behaviors8.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Concepts.html b/src/main/java/org/jogamp/java3d/doc-files/Concepts.html
deleted file mode 100644
index 7b005af..0000000
--- a/src/main/java/org/jogamp/java3d/doc-files/Concepts.html
+++ /dev/null
@@ -1,291 +0,0 @@
-
-
-
-
- Java 3D Concepts
-Basic Scene Graph Concepts
-Constructing a Simple Scene
-Graph
-
-
-Shape3D myShape1 = new Shape3D(myGeometry1, myAppearance1);
-
Shape3D myShape2 = new Shape3D(myGeometry2);
myShape2.setAppearance(myAppearance2);
Group myGroup = new Group();
myGroup.addChild(myShape1);
myGroup.addChild(myShape2);
-The second shape's appearance is assigned after construction via its
-setAppearance
-method. At this
-point both leaf nodes have been fully constructed. The code next
-constructs a group node to hold the two leaf nodes. It
-uses the Group node's addChild
method to add the two leaf
-nodes as children to the group node, finishing the construction of the
-scene graph. Figure
-1
-shows the constructed scene graph, all the nodes, the node component
-objects, and the variables used in constructing the scene graph.
-
- Figure 1 – A Simple Scene Graph
-
-A Place For Scene Graphs
-Once a scene graph has been constructed, the
-question becomes what to do with it. Java 3D cannot start rendering a
-scene graph until a program "gives" it the scene graph. The program
-does this by inserting the scene graph into the virtual universe.
-
- Figure 2 – Content Branch, View Branch, and
-Superstructure
-
-The method for attaching a branch graph to a Locale is addBranchGraph,
-whereas addChild
-is the method for adding children to all
-group nodes.
-Shape3D myShape1 = new Shape3D(myGeometry1, myAppearance1);
-
Shape3D myShape2 = new Shape3D(myGeometry2, myAppearance2);
BranchGroup myBranch = new BranchGroup();
myBranch.addChild(myShape1);
myBranch.addChild(myShape2);
myBranch.compile();
VirtualUniverse myUniverse = new VirtualUniverse();
Locale myLocale = new Locale(myUniverse);
myLocale.addBranchGraph(myBranch);
-SimpleUniverse Utility
-Most Java 3D programs build an identical set of superstructure and view
-branch objects, so the Java 3D utility packages provide a universe
-package for constructing and manipulating the objects in a view branch.
-The classes in the universe
package provide a quick means
-for building a single view (single window) application. Listing 3
-shows a code fragment for using the SimpleUniverse class. Note that the
-SimpleUniverse constructor takes a Canvas3D as an argument, in this
-case referred to by the variable myCanvas
.
-
-import com.sun.j3d.utils.universe.*;
-
Shape3D myShape1 = new Shape3D(myGeometry1, myAppearance1);
Shape3D myShape2 = new Shape3D(myGeometry2, myAppearance2);
BranchGroup myBranch = new BranchGroup();
myBranch.addChild(myShape1);
myBranch.addChild(myShape2);
myBranch.compile();
SimpleUniverse myUniv = new SimpleUniverse(myCanvas);
myUniv.addBranchGraph(myBranch);
-Processing a Scene Graph
-When given a scene graph, Java 3D processes that scene graph as
-efficiently as possible. How a Java 3D implementation processes a scene
-graph can vary, as long as the implementation conforms to the semantics
-of the API. In general, a Java 3D implementation will render all
-visible objects, play all enabled sounds, execute all triggered
-behaviors, process any identified input devices, and check for and
-generate appropriate collision events.
-Features of Java 3D
-Java 3D allows a programmer to specify a broad range of information. It
-allows control over the shape of objects, their color, and
-transparency. It allows control over background effects, lighting, and
-environmental effects such as fog. It allows control over the placement
-of all objects (even nonvisible objects such as lights and behaviors)
-in the scene graph and over their orientation and scale. It allows
-control over how those objects move, rotate, stretch, shrink, or morph
-over time. It allows control over what code should execute, what sounds
-should play, and how they should sound and change over time.
-Bounds
-Bounds objects allow a programmer to define a volume in space. There
-are three ways to specify this volume: as a box, a sphere, or a set of
-planes enclosing a space.
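The simplest of these volumes, the bounding sphere, makes the underlying test easy to illustrate: two spheres intersect exactly when the distance between their centers is no greater than the sum of their radii. The class below is a standalone sketch, not the Java 3D BoundingSphere API.

```java
// Minimal standalone bounding-sphere intersection test (illustration
// only; Java 3D's BoundingSphere provides this via its own methods).
public class SphereBounds {
    final double x, y, z, radius;

    SphereBounds(double x, double y, double z, double radius) {
        this.x = x; this.y = y; this.z = z; this.radius = radius;
    }

    boolean intersects(SphereBounds o) {
        double dx = x - o.x, dy = y - o.y, dz = z - o.z;
        double sum = radius + o.radius;
        // Compare squared distances to avoid a square root.
        return dx * dx + dy * dy + dz * dz <= sum * sum;
    }

    public static void main(String[] args) {
        SphereBounds a = new SphereBounds(0, 0, 0, 1.0);
        System.out.println(a.intersects(new SphereBounds(1.5, 0, 0, 1.0))); // true
        System.out.println(a.intersects(new SphereBounds(3.0, 0, 0, 1.0))); // false
    }
}
```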
-Nodes
-All scene graph nodes have an implicit location in space of (0, 0, 0).
-For objects that exist in space, this implicit location provides a
-local coordinate system for that object, a fixed reference point. Even
-abstract objects that may not seem to have a well-defined location,
-such as behaviors and ambient lights, have this implicit location. An
-object's location provides an origin for its local coordinate system
-and, just as importantly, an origin for any bounding volume information
-associated with that object.
-Live and/or Compiled
-All scene graph objects, including nodes and node component objects,
-are either part of an active universe or not. An object is said to be live
-if it is part of an active universe. Additionally, branch graphs are
-either compiled
-or not. When a node is either live or compiled, Java 3D enforces access
-restrictions to nodes and node component objects. Java 3D allows only
-those operations that are enabled by the program before a node or node
-component becomes live or is compiled. It is best to set capabilities
-when you build your content. Listing 4 shows
-an example where we create a TransformGroup node and
-enable it for writing.
-
-TransformGroup myTrans = new TransformGroup();
-
myTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
-myTrans.setTransform(myT3D);
-HelloUniverse: A Sample Java
-3D Program
-The example program HelloUniverse.java
-creates a cube and a RotationInterpolator behavior object that
-rotates the cube at a constant rate of π/2 radians per second. The
-HelloUniverse class creates the branch graph
-that includes the cube and the RotationInterpolator behavior. It then
-adds this branch graph to the Locale object generated by the
-SimpleUniverse utility.
-
-
-
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Immediate.html b/src/main/java/org/jogamp/java3d/doc-files/Immediate.html
deleted file mode 100644
index 101fe22..0000000
--- a/src/main/java/org/jogamp/java3d/doc-files/Immediate.html
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
-
-
-
public class HelloUniverse ... {
public BranchGroup createSceneGraph() {
// Create the root of the branch graph
BranchGroup objRoot = new BranchGroup();
// Create the TransformGroup node and initialize it to the
// identity. Enable the TRANSFORM_WRITE capability so that
// our behavior code can modify it at run time. Add it to
// the root of the subgraph.
TransformGroup objTrans = new TransformGroup();
objTrans.setCapability(
TransformGroup.ALLOW_TRANSFORM_WRITE);
objRoot.addChild(objTrans);
// Create a simple Shape3D node; add it to the scene graph.
objTrans.addChild(new ColorCube(0.4));
// Create a new Behavior object that will perform the
// desired operation on the specified transform and add
// it into the scene graph.
Transform3D yAxis = new Transform3D();
Alpha rotationAlpha = new Alpha(-1, 4000);
RotationInterpolator rotator = new RotationInterpolator(
rotationAlpha, objTrans, yAxis,
0.0f, (float) Math.PI*2.0f);
BoundingSphere bounds =
new BoundingSphere(new Point3d(0.0,0.0,0.0), 100.0);
rotator.setSchedulingBounds(bounds);
objRoot.addChild(rotator);
// Have Java 3D perform optimizations on this scene graph.
objRoot.compile();
return objRoot;
}
public HelloUniverse() {
<set layout of container, construct canvas3d, add canvas3d>
// Create the scene; attach it to the virtual universe
BranchGroup scene = createSceneGraph();
SimpleUniverse u = new SimpleUniverse(canvas3d);
u.getViewingPlatform().setNominalViewingTransform();
u.addBranchGraph(scene);
}
}
Immediate-Mode Rendering
-Two Styles of Immediate-Mode
-Rendering
-Use of Java 3D's immediate mode falls into one of two categories:
-pure
-immediate-mode rendering and mixed-mode rendering in which immediate
-mode and retained or compiled-retained mode interoperate and render to
-the same canvas. The Java 3D renderer is idle in pure immediate
-mode,
-distinguishing it from mixed-mode rendering.
-Pure Immediate-Mode
-Rendering
-Pure immediate-mode rendering provides for those applications and
-applets that do not want Java 3D to do any automatic rendering of
-the
-scene graph. Such applications may not even wish to build a scene graph
-to represent their graphical data. However, they use Java 3D's
-attribute objects to set graphics state and Java 3D's geometric
-objects
-to render geometry.
-
Note: Scene antialiasing is not supported
-in pure immediate mode.
-
A pure immediate mode application must create a
-minimal set of Java 3D
-objects before rendering. In addition to a Canvas3D object, the
-application must create a View object, with its associated PhysicalBody
-and PhysicalEnvironment objects, and the following scene graph
-elements: a VirtualUniverse object, a high-resolution Locale object, a
-BranchGroup node object, a TransformGroup node object with associated
-transform, and, finally, a ViewPlatform leaf node object that defines
-the position and orientation within the virtual universe that generates
-the view (see Figure
-1).
-
-
- Figure 1 – Minimal Immediate-Mode Structure
-
-In pure immediate mode, the application must stop the renderer, via
-the Canvas3D object's stopRenderer()
-method, prior to adding the Canvas3D object to an active View object
-(that is, one that is attached to a live ViewPlatform object).
-Mixed-Mode Rendering
-Mixing immediate mode and retained or compiled-retained mode requires
-more structure than pure immediate mode. In mixed mode, the
-Java 3D
-renderer is running continuously, rendering the scene graph into the
-canvas.
-The basic Java 3D stereoscopic rendering loop is as follows:
-
clear canvas (both eyes)
call preRender()                  // user-supplied method
set left eye view
render opaque scene graph objects
call renderField(FIELD_LEFT)      // user-supplied method
render transparent scene graph objects
set right eye view
render opaque scene graph objects again
call renderField(FIELD_RIGHT)     // user-supplied method
render transparent scene graph objects again
call postRender()                 // user-supplied method
synchronize and swap buffers
call postSwap()                   // user-supplied method
-The basic Java 3D monoscopic rendering loop is as
-follows:
-
-
clear canvas
call preRender()                  // user-supplied method
set view
render opaque scene graph objects
call renderField(FIELD_ALL)       // user-supplied method
render transparent scene graph objects
call postRender()                 // user-supplied method
synchronize and swap buffers
call postSwap()                   // user-supplied method
-In both cases, the entire loop, beginning with clearing the canvas and
-ending with swapping the buffers, defines a frame. The application is
-given the opportunity to render immediate-mode geometry at any of the
-clearly identified spots in the rendering loop. A user specifies his or
-her own rendering methods by extending the Canvas3D class and
-overriding the preRender
, postRender
, postSwap
,
-and/or renderField
methods.
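The hook structure described above is a template method: the renderer drives a fixed loop and calls overridable, empty-by-default methods at fixed points. The sketch below is a standalone illustration of that pattern, not the Canvas3D API; the class and method names are hypothetical except where they echo the loop above.

```java
import java.util.ArrayList;
import java.util.List;

// Template-method sketch of the mixed-mode rendering loop (illustration
// only; in Java 3D the hooks live on Canvas3D, not on a RenderLoop class).
class RenderLoop {
    final List<String> trace = new ArrayList<>();

    // User hooks: no-ops unless overridden.
    void preRender()  {}
    void postRender() {}
    void postSwap()   {}

    final void renderFrame() {
        trace.add("clear");
        preRender();                      // immediate-mode hook
        trace.add("render scene graph");  // retained-mode rendering
        postRender();                     // immediate-mode hook
        trace.add("swap");
        postSwap();                       // immediate-mode hook
    }
}

public class MixedModeSketch extends RenderLoop {
    @Override void preRender() { trace.add("my immediate-mode geometry"); }

    public static void main(String[] args) {
        MixedModeSketch c = new MixedModeSketch();
        c.renderFrame();
        System.out.println(c.trace);
        // [clear, my immediate-mode geometry, render scene graph, swap]
    }
}
```

The design keeps the frame structure (and therefore frame boundaries) under the renderer's control while letting user code inject drawing at well-defined points.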
-
-
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Immediate1.gif b/src/main/java/org/jogamp/java3d/doc-files/Immediate1.gif
deleted file mode 100644
index 2d549b1..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/Immediate1.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/Rendering.html b/src/main/java/org/jogamp/java3d/doc-files/Rendering.html
deleted file mode 100644
index 7415ce8..0000000
--- a/src/main/java/org/jogamp/java3d/doc-files/Rendering.html
+++ /dev/null
@@ -1,148 +0,0 @@
-
-
-
-
- Execution and Rendering Model
-Three Major Rendering Modes
-Immediate Mode
-Retained Mode
-Compiled-Retained Mode
-Instantiating the Render Loop
-An Application-Level
-Perspective
-An application can optionally compile a branch graph via the
-BranchGroup's compile
-method. Whether or not it compiles the branch, the application can add
-it to
-the virtual universe by adding the BranchGroup to a Locale object. The
-application repeats this process for each branch it wishes to create.
-Note that for concurrent Java 3D implementations, whenever an
-application adds a branch to the active virtual universe, that branch
-becomes visible.
-Retained and
-Compiled-Retained Rendering Modes
-Scene Graph Basics
-Scene Graph Structure
-Spatial Separation
-
- Figure 1 – A Java
-3D Scene Graph Is a DAG
-(Directed Acyclic Graph)
-
-State Inheritance
-Rendering
-Scene Graph Objects
-set
-and get
-methods. Once a scene graph object is created and connected to other
-scene graph objects to form a subgraph, the entire subgraph can be
-attached to a virtual universe (via a high-resolution Locale
-object), making the object live. Prior to attaching a subgraph
-to a virtual
-universe, the entire subgraph can be compiled into an
-optimized, internal format (see the
-BranchGroup.compile()
-method). Access to the set
-and get
-methods of objects that are part of a live or compiled scene graph is
-restricted. Such restrictions provide the scene graph compiler with
-usage information it can use in optimally compiling or rendering a
-scene graph. Each object has a set of capability bits that enable
-certain functionality when the object is live or compiled. By default,
-all capability bits are disabled (cleared). Only those set
-and get
-methods corresponding to capability bits that are explicitly enabled
-(set) prior to the object being compiled or made live are legal.
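The capability-bit rule can be sketched in plain Java. The class below is a standalone illustration of the mechanism, not the Java 3D SceneGraphObject implementation (which, for example, throws its own exception type); the bit index and method names are hypothetical.

```java
import java.util.BitSet;

// Sketch of the capability-bit mechanism: once live, an operation is
// legal only if its bit was enabled beforehand (illustration only).
class CapabilityObject {
    static final int ALLOW_TRANSFORM_WRITE = 0; // example bit index

    private final BitSet capabilities = new BitSet();
    private boolean live = false;

    void setCapability(int bit) {
        if (live) throw new IllegalStateException("object is already live");
        capabilities.set(bit);
    }

    void setLive() { live = true; }

    void writeTransform() {
        if (live && !capabilities.get(ALLOW_TRANSFORM_WRITE))
            throw new IllegalStateException("capability not set");
        // ... perform the write ...
    }
}

public class CapabilityDemo {
    public static void main(String[] args) {
        CapabilityObject ok = new CapabilityObject();
        ok.setCapability(CapabilityObject.ALLOW_TRANSFORM_WRITE);
        ok.setLive();
        ok.writeTransform();      // allowed: bit was set before going live

        CapabilityObject bad = new CapabilityObject();
        bad.setLive();
        try {
            bad.writeTransform(); // rejected: bit was never enabled
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Because the enabled-bit set is frozen once the object is live, an implementation can rely on it when optimizing a compiled scene graph.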
-Scene Graph Superstructure
-Objects
-Java 3D defines two scene graph superstructure objects,
-VirtualUniverse
-and Locale, which are used to contain
-collections of subgraphs that
-comprise the scene graph. These objects are described in more detail in
-"Scene Graph Superstructure."
-VirtualUniverse Object
-A VirtualUniverse object
-consists of a list of Locale objects that
-contain a collection of scene graph nodes that exist in the universe.
-Typically, an application will need only one VirtualUniverse, even for
-very large virtual databases. Operations on a VirtualUniverse include
-enumerating the Locale objects contained within the universe.
-Locale Object
-The Locale object acts as a container for
-a collection of subgraphs of
-the scene graph that are rooted by a BranchGroup node. A Locale also
-defines a location within the virtual universe using high-resolution
-coordinates (HiResCoord) to specify its position. The HiResCoord serves
-as the origin for all scene graph objects contained within the Locale.
-Scene Graph Viewing Objects
-Java 3D defines five scene graph viewing objects that are not part of
-the scene graph per se but serve to define the viewing parameters and
-to provide hooks into the physical world. These objects are Canvas3D,
-Screen3D, View,
-PhysicalBody, and PhysicalEnvironment. They are
-described in more detail in the "View Model"
-document.
-Canvas3D Object
-The Canvas3D object encapsulates all of
-the parameters associated with
-the window being rendered into.
-When a Canvas3D object is attached to a View object, the Java 3D
-traverser renders the specified view onto the canvas. Multiple Canvas3D
-objects can point to the same View object.
-Screen3D Object
-The Screen3D object encapsulates all of
-the
-parameters associated with the physical screen containing the canvas,
-such as the width and height of the screen in pixels, the physical
-dimensions of the screen, and various physical calibration values.
-View Object
-The View object specifies information
-needed to render the scene graph.
-Figure
-2 shows a View object attached to a simple scene graph for
-viewing the scene.
-PhysicalBody Object
-The PhysicalBody object encapsulates all of the
-parameters associated with the physical body, such as head position,
-right and left eye position, and so forth.
-PhysicalEnvironment Object
-
-
- Figure 2 – Viewing a Scene Graph
-
-
-
diff --git a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing.html b/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing.html
deleted file mode 100644
index ff80cb4..0000000
--- a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing.html
+++ /dev/null
@@ -1,250 +0,0 @@
-
-
-
-
- Reusing Scene Graphs
-Sharing Subgraphs
-
-
- Figure 1 – Sharing a Subgraph
-
-Cloning Subgraphs
-cloneTree
-method for this
-purpose. The cloneTree
-method allows the programmer to change some attributes (NodeComponent
-objects) in a scene graph, while at the same time sharing the majority
-of the scene graph data, such as the geometry.
-References to Node Component
-Objects
-When cloneTree
-reaches a leaf node,
-there are two possible actions for handling the leaf node's
-NodeComponent objects (such as Material, Texture, and so forth). First,
-the cloned leaf node can reference the original leaf node's
-NodeComponent object; the NodeComponent object itself is not duplicated.
-Since the cloned leaf node shares the NodeComponent object with the
-original leaf node, changing the data in the NodeComponent object will
-effect a change in both nodes. This mode is appropriate for objects
-that are read-only at run time. Second, the NodeComponent object can
-itself be duplicated, giving each cloned leaf node its own copy.
-
- Figure 2 – Referenced and Duplicated
-NodeComponent Objects
-
-References to Other Scene
-Graph Nodes
-Leaf nodes that contain references to other nodes
-(for example, Light nodes reference a Group node) can create a problem
-for the cloneTree
method. After the cloneTree
-operation is performed, the reference in the cloned leaf node will
-still refer to the node in the original subgraph, a situation that is
-most likely incorrect (see Figure
-3).
-
- Figure 3 – References to Other Scene Graph
-Nodes
-
-A user-subclassed leaf node that references other nodes and is
-duplicated via cloneTree
-must implement the updateNodeReferences
-method. By using this method, the cloned leaf node can determine if any
-nodes referenced by it have been duplicated and, if so, update the
-appropriate references to their cloned counterparts.
-Consider Figure 3 again, this time with an updateNodeReferences
-method defined. Once
-all nodes had been duplicated, the cloneTree
-method
-would then call each cloned leaf node's updateNodeReferences
-method. When cloned leaf node Lf2's method was called, Lf2 could ask if
-the node N1 had been duplicated during the cloneTree
-operation. If the node had been duplicated, leaf Lf2 could then update
-its internal state with the cloned node, N2 (see Figure
-4).
-
- Figure 4 – Updated Subgraph after
-updateNodeReferences Call
-
-All predefined Java 3D nodes have an appropriate updateNodeReferences
-method defined. Only subclassed nodes that reference other nodes need
-to have this method overridden by the user.
-Dangling References
-Because cloneTree
is able to start
-the cloning operation from any node, there is a potential for creating
-dangling references. A dangling reference can occur only when a leaf
-node that contains a reference to another scene graph node is cloned.
-If the referenced node is not cloned, a dangling reference situation
-exists: There are now two leaf nodes that access the same node (Figure
-5). A dangling reference is discovered when a leaf node's updateNodeReferences
-method calls the getNewNodeReference
method and the
-cloned subgraph does not contain a counterpart to the node being looked
-up.
-
-
- Figure 5 – Dangling Reference: Bold Nodes
-Are Being Cloned
-
-When a dangling reference is discovered, cloneTree can
-handle it in one of two ways. If cloneTree is called
-without the allowDanglingReferences parameter set to true,
-a dangling reference will result in a DanglingReferenceException
-being thrown. The user can catch this exception if desired. If cloneTree
-is called with the allowDanglingReferences parameter set
-to true, the getNewNodeReference method
-will return a reference to the same object that was passed into it.
-This will result in the cloneTree operation
-completing with dangling references, as in Figure
-5.
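The lookup behavior just described can be sketched with a plain map. This is a standalone illustration, not the Java 3D NodeReferenceTable (which can be created only by Java 3D and throws its own exception type); the class and method names here are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of clone-time reference lookup: a miss either throws (the
// DanglingReferenceException case) or falls back to the original node
// (the allowDanglingReferences case). Illustration only.
public class ReferenceTableSketch {
    private final Map<Object, Object> originalToClone = new HashMap<>();
    private final boolean allowDangling;

    ReferenceTableSketch(boolean allowDangling) {
        this.allowDangling = allowDangling;
    }

    // Recorded as each node in the subgraph is duplicated.
    void register(Object original, Object clone) {
        originalToClone.put(original, clone);
    }

    // Called from a leaf's reference-update callback.
    Object getNewReference(Object original) {
        Object clone = originalToClone.get(original);
        if (clone != null) return clone;
        if (allowDangling) return original;  // leave the reference dangling
        throw new IllegalStateException("dangling reference: " + original);
    }

    public static void main(String[] args) {
        ReferenceTableSketch table = new ReferenceTableSketch(true);
        table.register("N1", "N2-clone");
        System.out.println(table.getNewReference("N1"));   // N2-clone
        System.out.println(table.getNewReference("Lf3"));  // Lf3 (dangling)
    }
}
```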
-Subclassing Nodes
-All Java 3D predefined nodes (for example, Interpolators and LOD
-nodes)
-automatically handle all node reference and duplication operations.
-When a user subclasses a Leaf object or a NodeComponent object, certain
-methods must be provided in order to ensure the proper operation of cloneTree
.
-A user-subclassed leaf node that is duplicated via a cloneTree
-operation must define the following two methods:
-Node cloneNode(boolean forceDuplicate);
-void duplicateNode(Node n, boolean forceDuplicate);
-The cloneNode
-method consists of three lines:
-
-UserSubClass usc = new UserSubClass();
usc.duplicateNode(this, forceDuplicate);
return usc;
-The duplicateNode
-method must first call super.duplicateNode
-before duplicating any necessary user-specific data or setting any
-user-specific state.
-NodeComponent cloneNodeComponent();
-void duplicateNodeComponent(NodeComponent nc, boolean forceDuplicate);
-The cloneNodeComponent
-method consists of three lines:
-
-UserNodeComponent unc = new UserNodeComponent();
unc.duplicateNodeComponent(this, forceDuplicate);
return unc;
-The duplicateNodeComponent
-method must first call super.duplicateNodeComponent
-and then can duplicate any user-specific data or set any user-specific
-state as necessary.
-NodeReferenceTable Object
-The NodeReferenceTable object is used by a leaf node's updateNodeReferences
-method called by the cloneTree
-operation. The NodeReferenceTable maps nodes from the original subgraph
-to the new nodes in the cloned subgraph. This information can then be
-used to update any cloned leaf node references to reference nodes in
-the cloned subgraph. This object can be created only by Java 3D.
-Example: User Behavior Node
-The following is an example of a user-defined Behavior object, showing
-how to properly define a node to be compatible with the cloneTree
-operation.
-
class RotationBehavior extends Behavior {
    TransformGroup objectTransform;
    WakeupOnElapsedFrames w;

    Matrix4d rotMat = new Matrix4d();
    Matrix4d objectMat = new Matrix4d();
    Transform3D t = new Transform3D();

    // Override Behavior's initialize method to set up wakeup criteria
    public void initialize() {
        // Establish initial wakeup criteria
        wakeupOn(w);
    }

    // Override Behavior's stimulus method to handle the event
    public void processStimulus(Enumeration criteria) {
        // Rotate by another PI/120.0 radians
        objectMat.mul(objectMat, rotMat);
        t.set(objectMat);
        objectTransform.setTransform(t);
        // Set wakeup criteria for next time
        wakeupOn(w);
    }

    // Constructor for rotation behavior
    public RotationBehavior(TransformGroup tg, int numFrames) {
        w = new WakeupOnElapsedFrames(numFrames);
        objectTransform = tg;
        objectMat.setIdentity();
        // Create a rotation matrix that rotates PI/120.0
        // radians per frame
        rotMat.rotX(Math.PI/120.0);
        // Note: When this object is duplicated via cloneTree,
        // the cloned RotationBehavior node needs to point to
        // the TransformGroup in the just-cloned tree.
    }

    // Sets a new TransformGroup
    public void setTransformGroup(TransformGroup tg) {
        objectTransform = tg;
    }

    // The next two methods are needed for cloneTree to operate
    // correctly.
    // cloneNode is needed to provide a new instance of the user-derived
    // subclass.
    public Node cloneNode(boolean forceDuplicate) {
        // Get all data from the current node needed for the constructor
        int numFrames = w.getElapsedFrameCount();
        RotationBehavior r =
            new RotationBehavior(objectTransform, numFrames);
        r.duplicateNode(this, forceDuplicate);
        return r;
    }

    // duplicateNode is needed to duplicate all superclass data as well
    // as all user data.
    public void duplicateNode(Node originalNode, boolean forceDuplicate) {
        super.duplicateNode(originalNode, forceDuplicate);
        // Nothing to do here - all unique data was handled
        // in the constructor in the cloneNode routine.
    }

    // Callback for when this leaf is cloned. For this object we want to
    // find the cloned TransformGroup node that this cloned Leaf node
    // should reference.
    public void updateNodeReferences(NodeReferenceTable t) {
        super.updateNodeReferences(t);
        // Update the node's TransformGroup to the proper reference
        TransformGroup newTg =
-
diff --git a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing1.gif b/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing1.gif
deleted file mode 100644
index f6ca47c..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing1.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing2.gif b/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing2.gif
deleted file mode 100644
index c062c81..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing2.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing3.gif b/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing3.gif
deleted file mode 100644
index 325cab1..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing3.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing4.gif b/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing4.gif
deleted file mode 100644
index 78aeaab..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing4.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing5.gif b/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing5.gif
deleted file mode 100644
index 2ff6547..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/SceneGraphSharing5.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/ViewBranch.gif b/src/main/java/org/jogamp/java3d/doc-files/ViewBranch.gif
deleted file mode 100644
index 75cc40d..0000000
Binary files a/src/main/java/org/jogamp/java3d/doc-files/ViewBranch.gif and /dev/null differ
diff --git a/src/main/java/org/jogamp/java3d/doc-files/ViewModel.html b/src/main/java/org/jogamp/java3d/doc-files/ViewModel.html
deleted file mode 100644
index 3cc9ece..0000000
--- a/src/main/java/org/jogamp/java3d/doc-files/ViewModel.html
+++ /dev/null
@@ -1,1064 +0,0 @@
-
-
-
-
-
            (TransformGroup) t.getNewObjectReference(
                objectTransform);
        setTransformGroup(newTg);
    }
}
View Model
-Why a New Model?
-The Physical Environment
-Influences the View
-Separation of Physical and
-Virtual
-The Virtual World
-The Physical World
-The Objects That Define the
-View
-
- Figure 1 – View Object, Its Component
-Objects, and Their
-Interconnection
-
-ViewPlatform: A Place in the
-Virtual World
-
- Figure 2 – A Portion of a Scene Graph
-Containing a ViewPlatform Object
-
-Moving through the Virtual
-World
-
- Figure 3 – A Simple Scene Graph with View
-Control
-
-The transform above the ViewPlatform can be read and written via the
-TransformGroup methods getTransform and setTransform.
-Dropping in on a Favorite
-Place
-Associating Geometry with a
-ViewPlatform
-UserHeadToVworld
-parameter (see "View Model
-Details").
-The avatar's virtual head, represented by the shape node, will now move
-around in lock-step with the ViewPlatform's TransformGroup and any
-relative position and orientation changes of the user's actual physical
-head (if a system has a head tracker).
-Generating a View
-Composing Model and Viewing
-Transformations
-
- Figure 4 – Object and ViewPlatform
-Transformations
-
-
-
-
- Multiple Locales
-
-
-
- A Minimal Environment
-
-
-
-
-
-
-
-
-
-
-
-View Model Details
-An Overview of the
-Java 3D
-View Model
-Both camera-based and Java 3D-based view models allow a programmer
-to
-specify the shape of a view frustum and, under program control, to
-place, move, and reorient that frustum within the virtual environment.
-However, how they do this varies enormously. Unlike the camera-based
-system, the Java 3D view model allows slaving the view frustum's
-position and orientation to that of a six-degrees-of-freedom tracking
-device. By slaving the frustum to the tracker, Java 3D can
-automatically modify the view frustum so that the generated images
-match the end-user's viewpoint exactly.
-Physical Environments and
-Their Effects
-Imagine an application where the end user sits on a magic carpet. The
-application flies the user through the virtual environment by
-controlling the carpet's location and orientation within the virtual
-world. At first glance, it might seem that the application also
-controls what the end user will see, and it does, but only
-superficially.
-A Head-Mounted Example
-Imagine that the end user sees the magic carpet and the virtual world
-with a head-mounted display and head tracker. As the application flies
-the carpet through the virtual world, the user may turn to look to the
-left, to the right, or even toward the rear of the carpet. Because the
-head tracker keeps the renderer informed of the user's gaze direction,
-it might not need to draw the scene directly in front of the magic
-carpet. The view that the renderer draws on the head-mount's display
-must match what the end user would see if the experience had occurred
-in the real world.
-A Room-Mounted Example
-Imagine a slightly different scenario where the end user sits in a
-darkened room in front of a large projection screen. The application
-still controls the carpet's flight path; however, the position and
-orientation of the user's head barely influences the image drawn on the
-projection screen. If a user looks left or right, then he or she sees
-only the darkened room. The screen does not move. It's as if the screen
-represents the magic carpet's "front window" and the darkened room
-represents the "dark interior" of the carpet.
-Impact of Head Position and
-Orientation on the Camera
-In the head-mounted example, the user's head position and orientation
-significantly affects a camera model's camera position and orientation
-but hardly has any effect on the projection matrix. In the room-mounted
-example, the user's head position and orientation contributes little to
-a camera model's camera position and orientation; however, it does
-affect the projection matrix.
-The Coordinate Systems
-The basic view model consists of eight or nine coordinate systems,
-depending on whether the end-user environment consists of a
-room-mounted display or a head-mounted display. First, we define the
-coordinate systems used in a room-mounted display environment. Next, we
-define the added coordinate system introduced when using a head-mounted
-display system.
-Room-Mounted Coordinate
-Systems
-The room-mounted coordinate system is divided into the virtual
-coordinate system and the physical coordinate system. Figure
-5
-shows these coordinate systems graphically. The coordinate systems
-within the grayed area exist in the virtual world; those outside exist
-in the physical world. Note that the coexistence coordinate system
-exists in both worlds.
-The Virtual Coordinate
-Systems
- The Virtual World Coordinate System
-The virtual world coordinate system encapsulates
-the unified coordinate system for all scene graph objects in the
-virtual environment. For a given View, the virtual world coordinate
-system is defined by the Locale object that contains the ViewPlatform
-object attached to the View. It is a right-handed coordinate system
-with +x to the right, +y up, and +z toward
-the viewer.
- The ViewPlatform Coordinate System
-The ViewPlatform coordinate system is the local coordinate system of
-the ViewPlatform leaf node to which the View is attached.
-
-
- Figure 5 – Display Rigidly Attached to the
-Tracker Base
-
- The Coexistence Coordinate System
-A primary implicit goal of any view model is to map a specified local
-portion of the physical world onto a specified portion of the virtual
-world. Once established, one can legitimately ask where the user's head
-or hand is located within the virtual world or where a virtual object
-is located in the local physical world. In this way the physical user
-can interact with objects inhabiting the virtual world, and vice versa.
-To establish this mapping, Java 3D defines a special coordinate
-system,
-called coexistence coordinates, that is defined to exist in both the
-physical world and the virtual world.
-The Physical Coordinate
-Systems
- The Head Coordinate System
-The head coordinate system allows an application to import its user's
-head geometry. The coordinate system provides a simple consistent
-coordinate frame for specifying such factors as the location of the
-eyes and ears.
- The Image Plate Coordinate System
-The image plate coordinate system corresponds with the physical
-coordinate system of the image generator. The image plate is defined as
-having its origin at the lower left-hand corner of the display area and
-as lying in the display area's XY
-plane. Note that image plate is a different coordinate system than
-either left image plate or right image plate. These last two coordinate
-systems are defined in head-mounted environments only.
- The Head Tracker Coordinate System
-The head tracker coordinate system corresponds to the
-six-degrees-of-freedom tracker's sensor attached to the user's head.
-The head tracker's coordinate system describes the user's instantaneous
-head position.
- The Tracker Base Coordinate System
-The tracker base coordinate system corresponds to the emitter
-associated with absolute position/orientation trackers. For those
-trackers that generate relative position/orientation information, this
-coordinate system is that tracker's initial position and orientation.
-In general, this coordinate system is rigidly attached to the physical
-world.
-Head-Mounted Coordinate
-Systems
-Head-mounted coordinate systems divide into the same virtual coordinate
-systems and physical coordinate systems. Figure
-6
-shows these coordinate systems graphically. As with the room-mounted
-coordinate systems, the coordinate systems within the grayed area exist
-in the virtual world; those outside exist in the physical world. Once
-again, the coexistence coordinate system exists in both worlds. The
-arrangement of the coordinate system differs from those for a
-room-mounted display environment. The head-mounted version of
-Java 3D's
-coordinate system differs in another way. It includes two image plate
-coordinate systems, one for each of an end-user's eyes.
- The Left Image Plate and Right Image Plate Coordinate Systems
-The left image plate and right image plate
-coordinate systems correspond with the physical coordinate system of
-the image generator associated with the left and right eye,
-respectively. The image plate is defined as having its origin at the
-lower left-hand corner of the display area and lying in the display
-area's XY plane. Note that the left image plate's XY
-plane does not necessarily lie parallel to the right image plate's XY
-plane. Note that the left image plate and the right image plate are
-different coordinate systems than the room-mounted display
-environment's image plate coordinate system.
-
-
- Figure 6 – Display Rigidly Attached to the
-Head Tracker (Sensor)
-
-The Screen3D Object
-A Screen3D object represents one independent display device. The most
-common environment for a Java 3D application is a desktop computer
-with
-or without a head tracker. Figure
-7 shows a scene graph fragment for a display environment designed
-for such an end-user environment. Figure
-8 shows a display environment that matches the scene graph
-fragment in Figure
-7.
-
-
- Figure 7 – A Portion of a Scene Graph
-Containing a Single Screen3D
-Object
-
-
-
- Figure 8 – A Single-Screen Display
-Environment
-
-
- Figure 9 – A Portion of a Scene Graph
-Containing Three Screen3D
-Objects
-
-
-
- Figure 10 – A Three-Screen Display
-Environment
-
-Viewing in Head-Tracked Environments
-A Room-Mounted Display with
-Head Tracking
-When head tracking combines with a room-mounted
-display environment (for example, a standard flat-screen display), the
-ViewPlatform's origin and orientation serve as a base for constructing
-the view matrices. Additionally, Java 3D uses the end-user's head
-position and orientation to compute where an end-user's eyes are
-located in physical space. Each eye's position serves to offset the
-corresponding virtual eye's position relative to the ViewPlatform's
-origin. Each eye's position also serves to specify that eye's frustum
-since the eye's position relative to a Screen3D uniquely specifies that
-eye's view frustum. Note that Java 3D will access the PhysicalBody
-object to obtain information describing the user's interpupillary
-distance and tracking hardware, values it needs to compute the
-end-user's eye positions from the head position information.
-A Head-Mounted Display with
-Head Tracking
-In a head-mounted environment, the ViewPlatform's origin and
-orientation also serves as a base for constructing view matrices. And,
-as in the head-tracked, room-mounted environment, Java 3D also
-uses the
-end-user's head position and orientation to modify the ViewPlatform's
-position and orientation further. In a head-tracked, head-mounted
-display environment, an end-user's eyes do not move relative to their
-respective display screens; rather, the display screens move relative
-to the virtual environment. A rotation of the head by an end user can
-radically affect the final view's orientation. In this situation, Java
-3D combines the position and orientation from the ViewPlatform with the
-position and orientation from the head tracker to form the view matrix.
-The view frustum, however, does not change since the user's eyes do not
-move relative to their respective display screen, so Java 3D can
-compute the projection matrix once and cache the result.
-Compatibility Mode
-setCompatibilityModeEnable
-method turns compatibility mode on or off. Compatibility mode is
-disabled by default.
-
-
-Overview of the
-Camera-Based View Model
-The traditional camera-based view model, shown in Figure
-11,
-places a virtual camera inside a geometrically specified world. The
-camera "captures" the view from its current location, orientation, and
-perspective. The visualization system then draws that view on the
-user's display device. The application controls the view by moving the
-virtual camera to a new location, by changing its orientation, by
-changing its field of view, or by controlling some other camera
-parameter.
-
- Figure 11 – The Camera-Based View Model
-
-Using the Camera-Based View
-Model
-Creating a Viewing Matrix
-lookAt
utility
-method
-to create a
-viewing matrix. This method specifies the position and orientation of
-a viewing transform. It works similarly to the equivalent function in
-OpenGL. The inverse of this transform can be used to control the
-ViewPlatform object within the scene graph. Alternatively, this
-transform can be passed directly to the View's VpcToEc
-transform via the compatibility-mode viewing functions. The setVpcToEc
-method is used to set the viewing matrix when in compatibility mode.
-
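The lookAt construction described here can be sketched without the library. The helper names and the row-major double[16] layout below are our own assumptions for illustration, not Java 3D's API (the real method is Transform3D.lookAt); the algorithm builds an eye-space basis and translates the eye to the origin:

```java
// Library-free sketch of a lookAt viewing matrix (same idea as
// Transform3D.lookAt / gluLookAt). Row-major 4x4 stored as double[16].
// All helper names here are illustrative, not part of the Java 3D API.
public class LookAtSketch {
    static double[] lookAt(double[] eye, double[] center, double[] up) {
        double[] f = norm(sub(center, eye));   // forward direction
        double[] s = norm(cross(f, up));       // side (right) direction
        double[] u = cross(s, f);              // recomputed true up
        return new double[] {
            s[0],  s[1],  s[2],  -dot(s, eye),
            u[0],  u[1],  u[2],  -dot(u, eye),
           -f[0], -f[1], -f[2],   dot(f, eye),
            0,     0,     0,      1
        };
    }
    static double[] sub(double[] a, double[] b) {
        return new double[]{a[0] - b[0], a[1] - b[1], a[2] - b[2]};
    }
    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }
    static double[] cross(double[] a, double[] b) {
        return new double[]{a[1] * b[2] - a[2] * b[1],
                            a[2] * b[0] - a[0] * b[2],
                            a[0] * b[1] - a[1] * b[0]};
    }
    static double[] norm(double[] v) {
        double len = Math.sqrt(dot(v, v));
        return new double[]{v[0] / len, v[1] / len, v[2] / len};
    }
    public static void main(String[] args) {
        // Eye at the origin, looking down -z with +y up: identity matrix.
        double[] m = lookAt(new double[]{0, 0, 0},
                            new double[]{0, 0, -1},
                            new double[]{0, 1, 0});
        System.out.println(m[0] == 1 && m[5] == 1 && m[10] == 1 && m[15] == 1);  // prints true
    }
}
```

As the surrounding text notes, the inverse of this matrix can drive a ViewPlatform's TransformGroup, while the matrix itself is what compatibility mode accepts via setVpcToEc.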
Creating a Projection
-Matrix
-frustum
, perspective
,
-and ortho
. All three map points from eye coordinates
-(EC) to clipping coordinates (CC). Eye coordinates are defined such
-that (0, 0, 0) is at the eye and the projection plane is at z
-= -1.
-frustum
method
-establishes a perspective projection with the eye at the apex of a
-symmetric view frustum. The transform maps points from eye coordinates
-to clipping coordinates. The clipping coordinates generated by the
-resulting transform are in a right-handed coordinate system (as are all
-other coordinate systems in Java 3D).
-(left
, bottom
, -near)
-and (right
, top
, -near)
-specify the point on the near clipping plane that maps onto the
-lower-left and upper-right corners of the window, respectively. The -far
-parameter specifies the far clipping plane. See Figure
-12.
-perspective
method establishes a perspective
-projection with the eye at the apex of a symmetric view frustum,
-centered about the Z-axis,
-with a fixed field of view. The resulting perspective projection
-transform mimics a standard camera-based view model. The transform maps
-points from eye coordinates to clipping coordinates. The clipping
-coordinates generated by the resulting transform are in a right-handed
-coordinate system.
--near
and -far
specify the near
-and far clipping planes; fovx
specifies the field of view
-in the X dimension, in radians; and aspect
-specifies the aspect ratio of the window. See Figure
-13.
-
- Figure 12 – A Perspective Viewing Frustum
-
-
-
- Figure 13 – Perspective View Model Arguments
-
-ortho
method
-establishes a parallel projection. The orthographic projection
-transform mimics a standard camera-based view model. The transform
-maps points from eye coordinates to clipping coordinates. The clipping
-coordinates generated by the resulting transform are in a right-handed
-coordinate system.
-(left
,
-bottom
, -near)
and (right
, top
,
--near)
-specify the point on the near clipping plane that maps onto the
-lower-left and upper-right corners of the window, respectively. The -far
-parameter specifies the far clipping plane. See Figure
-14.
-
- Figure 14 – Orthographic View Model
-
-setLeftProjection
-and setRightProjection
methods are used to set the
-projection matrices for the left eye and right eye, respectively, when
-in compatibility mode.Scene Graph Superstructure
-The Virtual Universe
-Java 3D defines the concept of a virtual universe
-as a three-dimensional space with an associated set of objects. Virtual
-universes serve as the largest unit of aggregate representation, and
-can also be thought of as databases. Virtual universes can be very
-large, both in physical space units and in content. Indeed, in most
-cases a single virtual universe will serve an application's entire
-needs.
-
- Figure 1 – The Virtual Universe
-
-Establishing a Scene
-To construct a three-dimensional scene, the programmer must execute a
-Java 3D program. The Java 3D application must first create a
-VirtualUniverse object and attach at least one Locale to it. Then the
-desired scene graph is constructed, starting with a BranchGroup node
-and including at least one ViewPlatform object, and the scene graph is
-attached to the Locale. Finally, a View object that references the
-ViewPlatform object (see "Structuring
-the Java 3D Program")
-is constructed. As soon as a scene graph containing a ViewPlatform is
-attached to the VirtualUniverse, Java 3D's rendering loop is engaged,
-and the scene will appear on the drawing canvas(es) associated with the
-View object.
-Loading a Virtual Universe
-Java 3D is a runtime application programming
-interface (API), not a file format. As an API, Java 3D provides no
-direct mechanism for loading or storing a virtual universe.
-Constructing a scene graph involves the execution of a Java 3D program.
-However, loaders to convert a number of standard 3D file formats to or
-from Java 3D virtual universes are expected to be generally available.
-Coordinate Systems
-By default, Java 3D coordinate systems are right-handed, with the
-orientation semantics being that +y is the local gravitational
-up, +x is horizontal to the right, and +z is directly
-toward the viewer. The default units are meters.
-High-Resolution Coordinates
-Double-precision floating-point, single-precision floating-point, or
-even fixed-point representations of three-dimensional coordinates are
-sufficient to represent and display rich 3D scenes. Unfortunately,
-scenes are not worlds, let alone universes. If one ventures even a
-hundred miles away from the (0.0, 0.0, 0.0) origin using only
-single-precision floating-point coordinates, representable points
-become quite quantized, to at very best a third of an inch (and much
-more coarsely than that in practice).
-Java 3D High-Resolution
-Coordinates
-Java 3D high-resolution coordinates consist of three 256-bit
-fixed-point numbers, one each for x, y, and z.
-The fixed point is at bit 128, and the value 1.0 is defined to be
-exactly 1 meter. This coordinate system is sufficient to describe a
-universe in excess of several hundred billion light years across, yet
-still define objects smaller than a proton (down to below the Planck
-length). Table
-1 shows how many bits are needed above or below the fixed point
-to represent the range of interesting physical dimensions.
-
-
-
-
- 2n Meters
- Units
-
-
- 87.29
- Universe (20 billion light years)
-
-
-
- 69.68
- Galaxy (100,000 light years)
-
-
- 53.07
- Light year
-
-
- 43.43
- Solar system diameter
-
-
- 23.60
- Earth diameter
-
-
- 10.65
- Mile
-
-
- 9.97
- Kilometer
-
-
- 0.00
- Meter
-
-
- -19.93
- Micron
-
-
- -33.22
- Angstrom
-
-
-
- -115.57
- Planck length
-
A 256-bit fixed-point number also has the advantage of being able to
-directly represent nearly any reasonable single-precision
-floating-point value exactly.
-
-High-resolution coordinates in Java 3D are used only to embed more
-traditional floating point coordinate systems within a much
-higher-resolution substrate. In this way a visually seamless virtual
-universe of any conceivable size or scale can be created, without worry
-about numerical accuracy.
-
-
-The semantics of how file loaders deal with high-resolution
-coordinates
-is up to the individual file loader, as Java 3D does not directly
-define any file-loading semantics. However, some general advice can be
-given (note that this advice is not officially part of the
-Java 3D specification).
-
-For "small" virtual universes (on the order of hundreds of meters
-across in relative scale), a single Locale with high-resolution
-coordinates at location (0.0, 0.0, 0.0) as the root node (below the
-VirtualUniverse object) is sufficient; a loader can automatically
-construct this node during the loading process, and the point in
-high-resolution coordinates does not need any direct representation in
-the external file.
-
-Larger virtual universes are expected to be constructed usually like
-computer directory hierarchies, that is, as a "root" virtual universe
-containing mostly external file references to embedded virtual
-universes. In this case, the file reference object (user-specific data
-hung off a Java 3D group or hi-res node) defines the location for the
-data to be read into the current virtual universe.
-
-The data file's contents should be parented to the file object node
-while being read, thus inheriting the high-resolution coordinates of
-the file object as the new relative virtual universe origin of the
-embedded scene graph. If this scene graph itself contains
-high-resolution coordinates, it will need to be offset (translated) by
-the amount in the file object's high-resolution coordinates and then
-added to the larger virtual universe as new high-resolution
-coordinates, with their contents hung off below them. Once again, this
-procedure is not part of the official Java 3D specification, but some
-more details on the care and use of high-resolution coordinates in
-external file formats will probably be available as a Java 3D
-application note.
-
-Authoring tools that directly support high-resolution coordinates
-should create additional high-resolution coordinates as a user creates
-new geometry "sufficiently" far away (or of different scale) from
-existing high-resolution coordinates.
-
-Semantics of widely moving objects. Most fixed and
-nearly-fixed objects stay attached to the same high-resolution Locale.
-Objects that make wide changes in position or scale may periodically
-need to be reparented to a more appropriate high-resolution Locale. If
-no appropriate high-resolution Locale exists, the application may need
-to create a new one.
-
-Semantics of viewing. The ViewPlatform object and
-the
-associated nodes in its hierarchy are very often widely moving objects.
-Applications will typically attach the view platform to the most
-appropriate high-resolution Locale. For display, all objects will first
-have their positions translated by the difference between the location
-of their high-resolution Locale and the view platform's high-resolution
-Locale. (In the common case of the Locales being the same, no
-translation is necessary.)
-
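The 256-bit fixed-point representation (binary point at bit 128, 1.0 defined as 1 meter) can be sketched with BigInteger. The conversion helpers below are illustrative assumptions only, not the internals of the real org.jogamp.java3d.HiResCoord class:

```java
import java.math.BigInteger;

// Sketch of Java 3D's hi-res coordinates: one signed 256-bit fixed-point
// number per axis, binary point at bit 128, value 1.0 == exactly 1 meter.
// Illustrative only; the real class is org.jogamp.java3d.HiResCoord.
public class HiResSketch {
    static final int POINT = 128;   // position of the binary point

    // Whole meters convert exactly: shift left past the binary point.
    static BigInteger fromMeters(long meters) {
        return BigInteger.valueOf(meters).shiftLeft(POINT);
    }

    // Approximate conversion back, fine for magnitudes a double can hold.
    static double toMeters(BigInteger fixed) {
        return fixed.doubleValue() / Math.pow(2.0, POINT);
    }

    public static void main(String[] args) {
        BigInteger km = fromMeters(1000);    // a point 1 km from the origin
        System.out.println(toMeters(km));    // prints 1000.0
        // 1000 m uses ~10 bits above the point; ~2^127 m of headroom remain,
        // far beyond the ~2^87.29 m diameter quoted in Table 1.
        System.out.println(km.bitLength());  // prints 138
    }
}
```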
- - diff --git a/src/main/java/org/jogamp/java3d/doc-files/intro.gif b/src/main/java/org/jogamp/java3d/doc-files/intro.gif deleted file mode 100644 index 503f818..0000000 Binary files a/src/main/java/org/jogamp/java3d/doc-files/intro.gif and /dev/null differ diff --git a/src/main/java/org/jogamp/java3d/doc-files/intro.html b/src/main/java/org/jogamp/java3d/doc-files/intro.html deleted file mode 100644 index f5ea134..0000000 --- a/src/main/java/org/jogamp/java3d/doc-files/intro.html +++ /dev/null @@ -1,337 +0,0 @@ - - - - --This guide, which contains documentation formerly -published separately from the javadoc-generated API documentation, -is not an -official API specification. This documentation may contain references to -Java and Java 3D, both of which are trademarks of Sun Microsystems, Inc. -Any reference to these and other trademarks of Sun Microsystems are -for explanatory purposes only. Their use does impart any rights beyond -those listed in the source code license. In particular, Sun Microsystems -retains all intellectual property and trademark rights as described in -the proprietary rights notice in the COPYRIGHT.txt file. - -
-The Java 3D API is an application
-programming interface used for writing three-dimensional graphics
-applications and applets. It gives developers high-level constructs for
-creating and manipulating 3D geometry and for constructing the
-structures used in rendering that geometry. Application developers can
-describe very large virtual worlds using these constructs, which
-provide Java 3D with enough information to render these worlds
-efficiently.
-
-Java 3D delivers Java's "write once, run anywhere"
-benefit to
-developers of 3D graphics applications. Java 3D is part of the
-JavaMedia suite of APIs, making it available on a wide range of
-platforms. It also integrates well with the Internet because
-applications and applets written using the Java 3D API have access to
-the entire set of Java classes.
-
-The Java 3D API draws its ideas from existing
-graphics APIs and from
-new technologies. Java 3D's low-level graphics constructs synthesize
-the best ideas found in low-level APIs such as Direct3D, OpenGL,
-QuickDraw3D, and XGL. Similarly, its higher-level constructs synthesize
-the best ideas found in several scene graph-based systems. Java 3D
-introduces some concepts not commonly considered part of the graphics
-environment, such as 3D spatial sound. Java 3D's sound capabilities
-help to provide a more immersive experience for the user.
-
-
-The Java 3D API improves on previous graphics APIs -by eliminating many -of the bookkeeping and programming chores that those APIs impose. Java -3D allows the programmer to think about geometric objects rather than -about triangles-about the scene and its composition rather than about -how to write the rendering code for efficiently displaying the scene. -
--
-super.setXxxxx
"
-for any attribute state set method that is overridden.
-Applications can extend Java 3D's classes and add -their own methods. -However, they may not override Java 3D's scene graph traversal -semantics because the nodes do not contain explicit traversal and draw -methods. Java 3D's renderer retains those semantics internally. -
-Java 3D does provide hooks for mixing -Java 3D-controlled scene graph rendering and user-controlled rendering -using Java 3D's immediate mode constructs (see "Mixed-Mode Rendering"). Alternatively, -the application can -stop Java 3D's renderer and do all its drawing in immediate mode (see "Pure Immediate-Mode Rendering"). -
-Behaviors require applications to extend the -Behavior object and to -override its methods with user-written Java code. These extended -objects should contain references to those scene graph objects that -they will manipulate at run time. The "Behaviors -and Interpolators" document describes Java 3D's behavior -model. -
--
-Additionally, leaving the details of rendering to -Java 3D allows it to -tune the rendering to the underlying hardware. For example, relaxing -the strict rendering order imposed by other APIs allows parallel -traversal as well as parallel rendering. Knowing which portions of the -scene graph cannot be modified at run time allows Java 3D to flatten -the tree, pretransform geometry, or represent the geometry in a native -hardware format without the need to keep the original data. -
--
-Java 3D implementations are expected to provide -useful rendering rates -on most modern PCs, especially those with 3D graphics accelerator -cards. On midrange workstations, Java 3D is expected to provide -applications with nearly full-speed hardware performance. -
-Finally, Java 3D is designed to scale as the -underlying hardware -platforms increase in speed over time. Tomorrow's 3D PC game -accelerators will support more complex virtual worlds than high-priced -workstations of a few years ago. Java 3D is prepared to meet this -increase in hardware performance. -
--
-This section illustrates how a developer might -structure a Java 3D application. The simple application in this example -creates a scene graph that draws an object in the middle of a window -and rotates the object about its center point. -
-The scene graph for the sample application is shown below. -
-The scene graph consists of superstructure -components—a VirtualUniverse -object and a Locale object—and a set of branch graphs. Each branch -graph is a subgraph that is rooted by a BranchGroup node that is -attached to the superstructure. For more information, see "Scene Graph Basics." -
- --
--A VirtualUniverse object defines a named universe. Java 3D permits the -creation of more than one universe, though the vast majority of -applications will use just one. The VirtualUniverse object provides a -grounding for scene graphs. All Java 3D scene graphs must connect to a -VirtualUniverse object to be displayed. For more information, see "Scene Graph Superstructure." -
-Below the VirtualUniverse object is a Locale object. -The Locale object -defines the origin, in high-resolution coordinates, of its attached -branch graphs. A virtual universe may contain as many Locales as -needed. In this example, a single Locale object is defined with its -origin at (0.0, 0.0, 0.0). -
-The scene graph itself starts with the BranchGroup -nodes. -A BranchGroup serves as the root of a -subgraph, called a branch graph, of the scene graph. Only -BranchGroup objects can attach to Locale objects. -
-In this example there are two branch graphs and, -thus, two BranchGroup -nodes. Attached to the left BranchGroup are two subgraphs. One subgraph -consists of a user-extended Behavior leaf node. The Behavior node -contains Java code for manipulating the transformation matrix -associated with the object's geometry. -
-The other subgraph in this BranchGroup consists of a -TransformGroup -node that specifies the position (relative to the Locale), orientation, -and scale of the geometric objects in the virtual universe. A single -child, a Shape3D leaf node, refers to two component objects: a Geometry -object and an Appearance object. The Geometry object describes the -geometric shape of a 3D object (a cube in our simple example). The -Appearance object describes the appearance of the geometry (color, -texture, material reflection characteristics, and so forth). -
-The right BranchGroup has a single subgraph that -consists of a -TransformGroup node and a ViewPlatform leaf node. The TransformGroup -specifies the position (relative to the Locale), orientation, and scale -of the ViewPlatform. This transformed ViewPlatform object defines the -end user's view within the virtual universe. -
-Finally, the ViewPlatform is referenced by a View -object that specifies -all of the parameters needed to render the scene from the point of view -of the ViewPlatform. Also referenced by the View object are other -objects that contain information, such as the drawing canvas into which -Java 3D renders, the screen that contains the canvas, and information -about the physical environment. -
--
-The following steps are taken by the example program to create the -scene graph elements and link them together. Java 3D will then render -the scene graph and display the graphics in a window on the screen:
-2. Create a BranchGroup as the root of the scene branch graph.
-3. Construct a Shape3D node with a TransformGroup node above it.
-4. Attach a RotationInterpolator behavior to the TransformGroup.
-5. Call the simple universe utility function to do the following:
-b. Create the PhysicalBody, PhysicalEnvironment, View, and -ViewPlat-form objects.
-c. Create a BranchGroup as the root of the view platform branch -graph.
-d. Insert the view platform branch graph into the Locale.
-The Java 3D renderer then starts running in an infinite loop. The -renderer conceptually performs the following operations:
-while(true) {-
Process input
If (request to exit) break
Perform Behaviors
Traverse the scene graph and render visible objects
}
Cleanup and exit
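The conceptual loop above can be written as a runnable sketch. The Runnable stand-ins for behaviors and the frame-count exit condition are our own simplifications; applications never implement this loop themselves, since Java 3D's renderer runs it internally:

```java
import java.util.ArrayList;
import java.util.List;

// Runnable sketch of the conceptual renderer loop. Behaviors are modeled
// as Runnables and rendering as a log entry; both are stand-ins, not the
// real Java 3D machinery.
public class RenderLoopSketch {
    static List<String> run(int framesToRender) {
        List<String> log = new ArrayList<>();
        List<Runnable> behaviors = new ArrayList<>();
        behaviors.add(() -> log.add("behavior ran"));
        int frame = 0;
        while (true) {
            // Process input (omitted in this sketch)
            if (frame >= framesToRender) break;        // request to exit
            behaviors.forEach(Runnable::run);          // perform behaviors
            log.add("frame " + frame + " rendered");   // traverse scene graph,
            frame++;                                   // render visible objects
        }
        log.add("cleanup");                            // cleanup and exit
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run(2));
        // prints [behavior ran, frame 0 rendered, behavior ran, frame 1 rendered, cleanup]
    }
}
```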
Click here to see code fragments
-from a simple program, HelloUniverse.java
,
-that creates a cube and a RotationInterpolator behavior object that
-rotates the cube at a constant rate of pi/2 radians per second.
-
Here are other documents that provide explanatory material,
-previously included as part of
-the Java 3D API Specification Guide.
-
-
Provides the core set of classes for the -3D graphics API for the Java platform; click here for more information, -including explanatory material that was formerly found in the guide. -
- -The 3D API is an application -programming interface used for writing three-dimensional graphics -applications and applets. It gives developers high-level constructs for -creating and manipulating 3D geometry and for constructing the -structures used in rendering that geometry. Application developers can -describe very large virtual worlds using these constructs, which -provide the runtime system with enough information to render these worlds -efficiently. -
- - - - - diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/Behaviors.html b/src/main/javadoc/org/jogamp/java3d/doc-files/Behaviors.html new file mode 100644 index 0000000..7bcc4a2 --- /dev/null +++ b/src/main/javadoc/org/jogamp/java3d/doc-files/Behaviors.html @@ -0,0 +1,596 @@ + + + + +Behavior nodes provide the means for +animating objects, processing keyboard and mouse inputs, reacting to +movement, and enabling and processing pick events. Behavior nodes +contain Java code and state variables. A Behavior node's Java code can +interact with Java objects, change node values within a Java 3D +scene +graph, change the behavior's internal state-in general, perform any +computation it wishes. +
+Simple behaviors can add surprisingly interesting effects to a scene
+graph. For example, one can animate a rigid object by using a Behavior
+node to repetitively modify the TransformGroup node that points to the
+object one wishes to animate. Alternatively, a Behavior node can track
+the current position of a mouse and modify portions of the scene graph
+in response.
+A Behavior leaf node object contains a scheduling region and two
+methods: an initialize
method called once when the
+behavior becomes "live" and a processStimulus
+method called whenever appropriate by the Java 3D behavior
+scheduler.
+The Behavior object also contains the state information needed by its initialize
+and processStimulus
methods.
+
The scheduling region defines a spatial volume that serves +to enable the scheduling of Behavior nodes. A Behavior node is active +(can receive stimuli) whenever an active ViewPlatform's activation +volume intersects a Behavior object's scheduling region. Only active +behaviors can receive stimuli. +
+The scheduling interval defines a +partial order of execution for behaviors that wake up in response to +the same wakeup condition (that is, those behaviors that are processed +at the same "time"). Given a set of behaviors whose wakeup conditions +are satisfied at the same time, the behavior scheduler will execute all +behaviors in a lower scheduling interval before executing any behavior +in a higher scheduling interval. Within a scheduling interval, +behaviors can be executed in any order, or in parallel. Note that this +partial ordering is only guaranteed for those behaviors that wake up at +the same time in response to the same wakeup condition, for example, +the set of behaviors that wake up every frame in response to a +WakeupOnElapsedFrames(0) wakeup condition. +
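The interval ordering described above can be sketched in plain Java. This is a simplified model, not the real Java 3D scheduler; the behavior names and interval numbers are made up for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Simplified model of scheduling-interval ordering (illustrative only).
class IntervalDemo {
    static class ScheduledBehavior {
        final String name;
        final int schedulingInterval; // lower intervals execute first
        ScheduledBehavior(String name, int schedulingInterval) {
            this.name = name;
            this.schedulingInterval = schedulingInterval;
        }
    }

    // Order behaviors that woke on the same wakeup condition: all behaviors
    // in a lower interval run before any behavior in a higher one; order
    // within an interval is unspecified (a stable sort keeps input order).
    static List<String> executionOrder(List<ScheduledBehavior> awake) {
        List<ScheduledBehavior> sorted = new ArrayList<>(awake);
        sorted.sort(Comparator.comparingInt(b -> b.schedulingInterval));
        List<String> order = new ArrayList<>();
        for (ScheduledBehavior b : sorted) order.add(b.name);
        return order;
    }

    public static void main(String[] args) {
        List<ScheduledBehavior> awake = Arrays.asList(
            new ScheduledBehavior("spin", 2),
            new ScheduledBehavior("pick", 0),
            new ScheduledBehavior("fade", 1));
        System.out.println(executionOrder(awake)); // [pick, fade, spin]
    }
}
```

Within a single interval the scheduler is free to pick any order (or run behaviors in parallel), which is why only the across-interval ordering is asserted here.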
+The processStimulus
method receives and processes a
+behavior's ongoing messages. The Java 3D behavior scheduler
+invokes a
+Behavior node's processStimulus
+method when an active ViewPlatform's activation volume intersects a
+Behavior object's scheduling region and all of that behavior's wakeup
+criteria are satisfied. The processStimulus
method
+performs its computations and actions (possibly including the
+registration of state change information that could cause Java 3D
+to
+wake other Behavior objects), establishes its next wakeup condition,
+and finally exits.
+
A typical behavior will modify one or more nodes or node components +in +the scene graph. These modifications can happen in parallel with +rendering. In general, applications cannot count on behavior execution +being synchronized with rendering. There are two exceptions to this +general rule: +
+Modifications made from the processStimulus
+method of a single behavior instance are guaranteed to take effect in
+the same rendering frame.
+Modifications made from the processStimulus
+methods of the set of behaviors that wake up in response to a
+WakeupOnElapsedFrames(0) wakeup condition are guaranteed to take effect
+in the same rendering frame.
+Note that modifications to geometry by-reference or texture
+by-reference are not guaranteed to show up in the same frame as other
+scene graph changes.
+
+When the Java 3D behavior scheduler invokes a Behavior object's
+processStimulus
+method, that method may perform any computation it wishes. Usually, it
+will change its internal state and specify its new wakeup conditions.
+Most probably, it will manipulate scene graph elements. However, the
+behavior code can change only those aspects of a scene graph element
+permitted by the capabilities associated with that scene graph element.
+A scene graph's capabilities restrict behavioral manipulation to those
+manipulations explicitly allowed.
+
The application must provide the Behavior object with references to
+those scene graph elements that the Behavior object will manipulate.
+The application provides those references as arguments to the
+behavior's constructor when it creates the Behavior object.
+Alternatively, the Behavior object itself can obtain access to the
+relevant scene graph elements either when Java 3D invokes its initialize
+method or each time Java 3D invokes its processStimulus
+method.
+
Behavior methods have a very rigid structure. Java 3D assumes +that +they +always run to completion (if needed, they can spawn threads). Each +method's basic structure consists of the following: +
+A WakeupCondition object is +an +abstract class specialized to fourteen +different WakeupCriterion objects and to four combining objects +containing multiple WakeupCriterion objects. +
+A Behavior node provides the Java 3D behavior scheduler with a +WakeupCondition object. When that object's WakeupCondition has been +satisfied, the behavior scheduler hands that same WakeupCondition back +to the Behavior via an enumeration. +
++
+Java 3D provides a rich set of wakeup criteria that Behavior
+objects
+can use in specifying a complex WakeupCondition. These wakeup criteria
+can cause Java 3D's behavior scheduler to invoke a behavior's processStimulus
+method whenever the corresponding stimulus occurs.
+
A Behavior object constructs a WakeupCriterion +by constructing the +appropriate criterion object. The Behavior object must provide the +appropriate arguments (usually a reference to some scene graph object +and possibly a region of interest). Thus, to specify a +WakeupOnViewPlatformEntry, a behavior would specify the region that +will cause the behavior to execute if an active ViewPlatform enters it. +
+A Behavior object can combine multiple WakeupCriterion objects into +a +more powerful, composite WakeupCondition. Java 3D behaviors +construct a +composite WakeupCondition in one of the following ways: +
+WakeupCriterion && WakeupCriterion && ...+
WakeupCriterion || WakeupCriterion || ...+
WakeupOr && WakeupOr && ...+
WakeupAnd || WakeupAnd || ...+
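The four composition forms above reduce to boolean combination. The sketch below models the real WakeupCriterion/WakeupOr/WakeupAnd classes as plain predicates; it is illustrative only and does not use the Java 3D API.

```java
// Simplified boolean model of composite wakeup conditions (illustrative;
// the real classes are WakeupCriterion, WakeupOr, WakeupAnd, and the
// WakeupOrOfAnds / WakeupAndOfOrs combiners).
class WakeupDemo {
    interface Condition { boolean satisfied(); }

    static Condition or(Condition... cs) {   // WakeupOr: any criterion fires
        return () -> {
            for (Condition c : cs) if (c.satisfied()) return true;
            return false;
        };
    }

    static Condition and(Condition... cs) {  // WakeupAnd: all criteria fire
        return () -> {
            for (Condition c : cs) if (!c.satisfied()) return false;
            return true;
        };
    }

    public static void main(String[] args) {
        Condition frameElapsed = () -> true;   // e.g. a WakeupOnElapsedFrames
        Condition mouseEvent   = () -> false;  // e.g. a WakeupOnAWTEvent
        // An "AndOfOrs" composite: (frame || mouse) && (frame || frame)
        Condition composite = and(or(frameElapsed, mouseEvent),
                                  or(frameElapsed, frameElapsed));
        System.out.println(composite.satisfied()); // true
    }
}
```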
Behavior objects can condition themselves to awaken only when +signaled +by another Behavior node. The WakeupOnBehaviorPost +WakeupCriterion +takes as arguments a reference to a Behavior node and an integer. These +two arguments allow a behavior to limit its wakeup criterion to a +specific post by a specific behavior. +
+The WakeupOnBehaviorPost WakeupCriterion permits behaviors to chain
+their computations, allowing parenthetical computations: one behavior
+opens a door and the second closes the same door, or one behavior
+highlights an object and the second unhighlights the same object.
+
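The door-opening chain can be sketched as follows. This is a toy model of post-based chaining, not the real WakeupOnBehaviorPost API; the post id and "door" behaviors are invented for illustration.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Toy model of WakeupOnBehaviorPost-style chaining: the first behavior does
// its work and posts an id; a second behavior, armed on that id, runs next.
class ChainDemo {
    static final int DOOR_OPENED = 1;          // illustrative post id
    static final List<String> log = new ArrayList<>();

    interface Behavior { void processStimulus(); }

    public static void main(String[] args) {
        Deque<Integer> posts = new ArrayDeque<>();  // pending post ids
        Behavior closeDoor = () -> log.add("close door");
        Behavior openDoor = () -> {
            log.add("open door");
            posts.add(DOOR_OPENED);  // analogous to postId(DOOR_OPENED)
        };

        openDoor.processStimulus();
        // Scheduler sketch: deliver each post to the behavior armed on it.
        while (!posts.isEmpty()) {
            if (posts.poll() == DOOR_OPENED) closeDoor.processStimulus();
        }
        System.out.println(log); // [open door, close door]
    }
}
```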
++
+As a virtual universe grows large, Java 3D must carefully +husband +its +resources to ensure adequate performance. In a 10,000-object virtual +universe with 400 or so Behavior nodes, a naive implementation of Java +3D could easily end up consuming the majority of its compute cycles in +executing the behaviors associated with the 400 Behavior objects before +it draws a frame. In such a situation, the frame rate could easily drop +to unacceptable levels. +
+Behavior objects are usually associated with geometric objects in the
+virtual universe. In our example of 400 Behavior objects scattered
+throughout a 10,000-object virtual universe, only a few of these
+associated geometric objects would be visible at a given time. A
+sizable fraction of the Behavior nodes, those associated with
+nonvisible objects, need not be executed. Only those relatively few
+Behavior objects that are associated with visible objects must be
+executed.
+
+Java 3D mitigates the problem of a large number of Behavior +nodes in +a +high-population virtual universe through execution culling-choosing to +invoke only those behaviors that have high relevance. +
+Java 3D requires each behavior to have a scheduling region +and to post a wakeup condition. Together a behavior's scheduling region +and wakeup condition provide Java 3D's behavior scheduler with +sufficient domain knowledge to selectively prune behavior invocations +and invoke only those behaviors that absolutely need to be executed. +
++
+Java 3D finds all scheduling regions associated with Behavior +nodes +and +constructs a scheduling/volume tree. It also creates an AND/OR tree +containing all the Behavior node wakeup criteria. These two data +structures provide the domain knowledge Java 3D needs to prune +unneeded +behavior execution (to perform "execution triage"). +
+Java 3D must track a behavior's wakeup conditions only if an +active +ViewPlatform object's activation volume intersects with that Behavior +object's scheduling region. If the ViewPlatform object's activation +volume does not intersect with a behavior's scheduling region, +Java 3D +can safely ignore that behavior's wakeup criteria. +
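The intersection test at the heart of this culling can be sketched with bounding spheres (one common form of Bounds). The shapes and numbers below are illustrative, not taken from the Java 3D implementation.

```java
// Sketch of execution culling: only behaviors whose scheduling bounds
// intersect the ViewPlatform's activation volume (both spheres here) need
// their wakeup criteria tracked at all.
class CullingDemo {
    // Each sphere is {centerX, centerY, centerZ, radius}.
    static boolean spheresIntersect(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        double sum = a[3] + b[3];
        return dx * dx + dy * dy + dz * dz <= sum * sum;
    }

    static boolean mustTrackWakeup(double[] activationVolume,
                                   double[] schedulingBounds) {
        return spheresIntersect(activationVolume, schedulingBounds);
    }

    public static void main(String[] args) {
        double[] view = {0, 0, 0, 10};   // activation volume around the viewer
        double[] near = {5, 0, 0, 2};    // behavior near the viewer
        double[] far  = {100, 0, 0, 2};  // behavior far from the viewer
        System.out.println(mustTrackWakeup(view, near)); // true
        System.out.println(mustTrackWakeup(view, far));  // false
    }
}
```

A behavior that fails this test costs essentially nothing per frame; its wakeup criteria are simply never evaluated.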
+In essence, the Java 3D behavior scheduler performs the following
+checks: for each behavior whose scheduling region intersects an active
+ViewPlatform's activation volume, evaluate that behavior's wakeup
+condition; if that condition evaluates to true,
+schedule that Behavior object for execution.
+
+Java 3D's behavior scheduler executes those Behavior objects that have
+been scheduled by calling the behavior's processStimulus
+method.
+
+
This section describes Java 3D's predefined Interpolator behaviors. +They are called interpolators +because they smoothly interpolate between the two extreme values that +an interpolator can produce. Interpolators perform simple behavioral +acts, yet they provide broad functionality. +
+The Java 3D API provides interpolators for a number of +functions: +manipulating transforms within a TransformGroup, modifying the values +of a Switch node, and modifying Material attributes such as color and +transparency. +
+These predefined Interpolator behaviors share the same mechanism for +specifying and later for converting a temporal value into an alpha +value. Interpolators consist of two portions: a generic portion that +all interpolators share and a domain-specific portion. +
+The generic portion maps time in milliseconds onto a value in the +range +[0.0, 1.0] inclusive. The domain-specific portion maps an alpha value +in the range [0.0, 1.0] onto a value appropriate to the predefined +behavior's range of outputs. An alpha value of 0.0 generates an +interpolator's minimum value, an alpha value of 1.0 generates an +interpolator's maximum value, and an alpha value somewhere in between +generates a value proportionally in between the minimum and maximum +values. +
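The domain-specific portion is a linear map from alpha in [0.0, 1.0] onto the output range. A minimal sketch, with door angles chosen purely for illustration:

```java
// The domain-specific portion of an interpolator: map alpha onto the
// behavior's output range, here a rotation angle for a hypothetical door.
class LerpDemo {
    static double interpolate(double alpha, double min, double max) {
        return min + alpha * (max - min);
    }

    public static void main(String[] args) {
        double closed = 0.0, open = Math.PI / 2;        // angles in radians
        System.out.println(interpolate(0.0, closed, open)); // 0.0
        System.out.println(interpolate(1.0, closed, open)); // 1.5707963267948966
        System.out.println(interpolate(0.5, closed, open)); // halfway open
    }
}
```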
+Several parameters control the mapping of time onto an alpha value
+(see the javadoc for the Alpha object for a description of the API).
+That mapping is deterministic as long as its parameters do not change.
+Thus, two different interpolators with the same parameters will
+generate the same alpha value given the same time value. This means
+that two interpolators that do not communicate can still precisely
+coordinate their activities, even if they reside in different threads
+or even different processors, as long as those processors have
+consistent clocks.
+
+Figure +1 +shows the components of an interpolator's time-to-alpha mapping. Time +is represented on the horizontal axis. Alpha is represented on the +vertical axis. As we move from left to right, we see the alpha value +start at 0.0, rise to 1.0, and then decline back to 0.0 on the +right-hand side. +
+On the left-hand side, the trigger time defines +when this interpolator's waveform begins in milliseconds. The region +directly to the right of the trigger time, labeled Phase Delay, defines +a time period where the waveform does not change. During phase delays +alpha is either 0 or 1, depending on which region it precedes. +
+Phase delays provide an important means for offsetting multiple +interpolators from one another, especially where the interpolators have +all the same parameters. The next four regions, labeled α +increasing, α at 1, α decreasing, and +α at 0, all specify durations for +the corresponding values +of alpha. +
+Interpolators have a loop count that determines how many times to +repeat the sequence of alpha increasing, alpha at 1, alpha decreasing, +and alpha at 0; they also have associated mode flags that enable either +the increasing or decreasing portions, or both, of the waveform. +
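The generic time-to-alpha mapping just described can be sketched in plain Java. This is a simplification: ramp durations and per-region mode flags are omitted, and the field names merely mirror the waveform regions above (see the real Alpha class for the full parameter set).

```java
// Sketch of the generic time-to-alpha mapping. All times in milliseconds.
class AlphaSketch {
    long triggerTime, phaseDelay;
    long inc, atOne, dec, atZero;  // durations of the four waveform regions
    int loopCount = -1;            // -1 means loop forever

    double value(long t) {
        long u = t - triggerTime - phaseDelay;
        if (u < 0) return 0.0;                       // waveform not started
        long cycle = inc + atOne + dec + atZero;
        if (loopCount >= 0 && u >= loopCount * cycle)
            return (dec > 0 || atZero > 0) ? 0.0 : 1.0;  // hold final value
        long c = u % cycle;
        if (c < inc) return (double) c / inc;        // alpha increasing
        c -= inc;
        if (c < atOne) return 1.0;                   // alpha at 1
        c -= atOne;
        if (c < dec) return 1.0 - (double) c / dec;  // alpha decreasing
        return 0.0;                                  // alpha at 0
    }

    public static void main(String[] args) {
        AlphaSketch a = new AlphaSketch();
        a.inc = 1000; a.atOne = 1000; a.dec = 1000; a.atZero = 1000;
        System.out.println(a.value(500));  // 0.5
        System.out.println(a.value(1500)); // 1.0
        System.out.println(a.value(2500)); // 0.5
        System.out.println(a.value(3500)); // 0.0
    }
}
```

Because the mapping depends only on its parameters and the time value, two instances configured identically produce identical alpha values, which is the determinism property noted above.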
+ ++
++Developers can use the loop count in conjunction with the mode flags to +generate various kinds of actions. Specifying a loop count of 1 and +enabling the mode flag for only the alpha-increasing and alpha-at-1 +portion of the waveform, we would get the waveform shown in Figure +2. +
+ ++
++In Figure +2, +the alpha value is 0 before the combination of trigger time plus the +phase delay duration. The alpha value changes from 0 to 1 over a +specified interval of time, and thereafter the alpha value remains 1 +(subject to the reprogramming of the interpolator's parameters). A +possible use of a single alpha-increasing value might be to combine it +with a rotation interpolator to program a door opening. +
+Similarly, by specifying a loop count of 1 and +a mode flag that enables only the alpha-decreasing and alpha-at-0 +portion of the waveform, we would get the waveform shown in Figure +3. +
+In Figure +3, +the alpha value is 1 before the combination of trigger time plus the +phase delay duration. The alpha value changes from 1 to 0 over a +specified interval; thereafter the alpha value remains 0 (subject to +the reprogramming of the interpolator's parameters). A possible use of +a single α-decreasing value might be to combine it with a +rotation +interpolator to program a door closing. +
+ ++
++We can combine both of the above waveforms by specifying a loop count +of 1 and setting the mode flag to enable both the alpha-increasing and +alpha-at-1 portion of the waveform as well as the alpha-decreasing and +alpha-at-0 portion of the waveform. This combination would result in +the waveform shown in Figure +4. +
+ ++
++In Figure +4, +the alpha value is 0 before the combination of trigger time plus the +phase delay duration. The alpha value changes from 0 to 1 over a +specified period of time, remains at 1 for another specified period of +time, then changes from 1 to 0 over a third specified period of time; +thereafter the alpha value remains 0 (subject to the reprogramming of +the interpolator's parameters). A possible use of an alpha-increasing +value followed by an alpha-decreasing value might be to combine it with +a rotation interpolator to program a door swinging open and then +closing. +
+By increasing the loop count, we can get +repetitive behavior, such as a door swinging open and closed some +number of times. At the extreme, we can specify a loop count of -1 +(representing infinity). +
+We can construct looped versions of the waveforms shown in Figure +2, Figure +3, and Figure +4. Figure +5 shows a looping interpolator with mode flags set to enable +only the alpha-increasing and alpha-at-1 portion of the waveform. +
+ ++
++In Figure +5, alpha goes from 0 to 1 over a fixed duration of time, stays +at 1 for another fixed duration of time, and then repeats. +
+Similarly, Figure +6 shows a looping interpolator with mode flags set to enable +only the alpha-decreasing and alpha-at-0 portion of the waveform. +
+ ++
++Finally, Figure +7 shows a looping interpolator with both the increasing and +decreasing portions of the waveform enabled. +
+In all three cases shown by Figure +5, Figure +6, and Figure +7, we can compute the exact value of alpha at any point in time. +
+ ++
++Java 3D's preprogrammed behaviors permit other behaviors to change +their parameters. When such a change occurs, the alpha value changes to +match the state of the newly parameterized interpolator. +
+Commonly, developers want alpha to change slowly at first and then to
+speed up until the change in alpha reaches some appropriate rate. This
+is analogous to accelerating your car up to the speed limit; it does
+not start off immediately at the speed limit. Developers specify this
+"ease-in, ease-out" behavior through two additional parameters, the increasingAlphaRampDuration
+and the decreasingAlphaRampDuration
+.
+
Each of these parameters specifies a period within the increasing or
+decreasing alpha duration region during which the "change in alpha" is
+accelerated (until it reaches its maximum per-unit-of-time step size)
+and then symmetrically decelerated. Figure
+8 shows three general examples of how the increasingAlphaRampDuration
+method can be used to modify the alpha waveform. A value of 0 for the
+increasing ramp duration implies that α
+is not accelerated; it changes at a constant rate. A value of 0.5 or
+greater (clamped to 0.5) for this increasing ramp duration implies that
+the change in α is accelerated during the first half of the
+period and
+then decelerated during the second half of the period. For a value of n
+that is less than 0.5, alpha is accelerated for duration n,
+held constant for duration (1.0 - 2n), then decelerated for
+duration n of the period.
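The accelerate/constant/decelerate math for one alpha-increasing region can be worked out directly. The sketch below is an illustration of that piecewise mapping, not the Alpha implementation; normalizing the total change to 1 forces the peak rate to be 1/(D - r).

```java
// Ease-in/ease-out within one alpha-increasing region of duration D:
// accelerate for r, move at a constant rate for D - 2r, decelerate for r.
// r is clamped to D/2, and the total change in alpha is normalized to 1.
class RampSketch {
    static double rampedAlpha(double t, double D, double r) {
        r = Math.min(r, D / 2.0);
        if (t <= 0) return 0.0;
        if (t >= D) return 1.0;
        if (r == 0) return t / D;              // no ramp: constant rate
        double v = 1.0 / (D - r);              // peak rate so total area is 1
        if (t < r)      return v * t * t / (2 * r);      // accelerating
        if (t <= D - r) return v * r / 2 + v * (t - r);  // constant rate
        double s = D - t;                                // decelerating
        return 1.0 - v * s * s / (2 * r);
    }

    public static void main(String[] args) {
        System.out.println(rampedAlpha(0, 1000, 250));    // 0.0
        System.out.println(rampedAlpha(500, 1000, 250));  // ~0.5 (symmetry)
        System.out.println(rampedAlpha(1000, 1000, 250)); // 1.0
    }
}
```

With symmetric ramps the curve is symmetric about the midpoint, so alpha is approximately 0.5 at t = D/2 regardless of the ramp duration chosen.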
+
+
+The Java 3D API specification serves to define objects, methods, and +their actions precisely. Describing how to use an API belongs in a +tutorial or programmer's +reference manual, and is well beyond the scope of this specification. +However, a short introduction to the main concepts in Java 3D will +provide the context for understanding the detailed, but isolated, +specification found in the class and method descriptions. We introduce +some of the key Java 3D concepts and illustrate them with some simple +program fragments. +
++
+A scene graph is a "tree" structure that contains data arranged in a
+hierarchical manner. The scene graph consists of parent nodes, child
+nodes, and data objects. The parent nodes, called Group nodes, organize
+and, in some cases, control how Java 3D interprets their descendants.
+Group nodes serve as the glue that holds a scene graph together. Child
+nodes can be either Group nodes or Leaf nodes. Leaf nodes have no
+children. They encode the core semantic elements of a scene graph: for
+example, what to draw (geometry), what to play (audio), how to
+illuminate objects (lights), or what code to execute (behaviors). Leaf
+nodes refer to data objects, called NodeComponent objects.
+NodeComponent objects are not scene graph nodes, but they contain the
+data that Leaf nodes require, such as the geometry to draw or the sound
+sample to play.
+
+A Java 3D application builds and manipulates a scene graph by +constructing Java 3D objects and then later modifying those objects by +using their methods. A Java 3D program first constructs a scene graph, +then, once built, hands that scene graph to Java 3D for processing. +
+The structure of a scene graph determines the relationships among the
+objects in the graph and determines which objects a programmer can
+manipulate as a single entity. Group nodes provide a single point for
+handling or manipulating all the nodes beneath them. A programmer can
+tune a scene graph appropriately by thinking about what manipulations
+an application will need to perform. He or she can make a particular
+manipulation easy or difficult by grouping or regrouping nodes in
+various ways.
+
++
+The following code constructs a simple scene graph consisting of a
+group node and two leaf
+nodes.
+
+Listing 1 – Code for Constructing a Simple Scene Graph +
+Shape3D myShape1 = new Shape3D(myGeometry1, myAppearance1);+
Shape3D myShape2 = new Shape3D(myGeometry2);
myShape2.setAppearance(myAppearance2);
Group myGroup = new Group();
myGroup.addChild(myShape1);
myGroup.addChild(myShape2);
It first constructs one leaf node, the first of two Shape3D
+nodes, using a constructor that takes both a Geometry and an Appearance
+NodeComponent object. It then constructs the second Shape3D node, with
+only a Geometry object. Next, since the second Shape3D node was created
+without an Appearance object, it supplies the missing Appearance object
+using the Shape3D node's setAppearance
method. At this
+point both leaf nodes have been fully constructed. The code next
+constructs a group node to hold the two leaf nodes. It
+uses the Group node's addChild
method to add the two leaf
+nodes as children to the group node, finishing the construction of the
+scene graph. Figure
+1
+shows the constructed scene graph, all the nodes, the node component
+objects, and the variables used in constructing the scene graph.
+
Java 3D places restrictions on how a program can insert a scene +graph +into a universe. +
+A Java 3D environment consists of two superstructure objects, +VirtualUniverse and Locale, and one or more graphs, rooted by a special +BranchGroup node. Figure 2 shows these objects +in context with other scene graph objects. +
+The VirtualUniverse object defines a universe. A universe allows a +Java +3D program to create a separate and distinct arena for defining objects +and their relationships to one another. Typically, Java 3D programs +have only one VirtualUniverse object. Programs that have more than one +VirtualUniverse may share NodeComponent objects but not scene graph +node objects. +
+The Locale object specifies a fixed position within the universe. That
+fixed position defines an origin for all scene graph nodes beneath it.
+The Locale object allows a programmer to specify that origin very
+precisely and with very high dynamic range. A Locale can accurately
+specify a location anywhere in the known physical universe and at the
+precision of Planck's distance. Typically, Java 3D programs have only
+one Locale object with a default origin of (0, 0, 0). Programs that
+have more than one Locale object will set the location of the
+individual Locale objects so that they provide an appropriate local
+origin for the nodes beneath them. For example, to model the Mars
+landing, a programmer might create one Locale object with an origin at
+Cape Canaveral and another with an origin located at the landing site
+on Mars.
+
+
+The BranchGroup node serves as the root of a branch graph.
+Collectively, the BranchGroup node and all of its children form the
+branch graph. The two kinds of branch graphs are called content
+branches and view branches. A content branch contains only
+content-related leaf nodes, while a view branch
+contains a ViewPlatform leaf node and may contain other content-related
+leaf nodes. Typically, a universe contains more than one branch
+graph: one view branch, and any number of content branches.
+
+Besides serving as the root of a branch graph, the BranchGroup node +has +two special properties: It alone may be inserted into a Locale object, +and it may be compiled. Java 3D treats uncompiled and compiled branch +graphs identically, though compiled branch graphs will typically render +more efficiently. +
+We could not insert the scene graph created by our simple example (Listing
+1) into a Locale because it does not have a BranchGroup node for
+its root. Listing 2
+shows a modified version of our first code example that creates a
+simple content branch graph and the minimum of superstructure objects.
+Of special note, Locales do not have children, and they are not part of
+the scene graph. The method for inserting a branch graph is addBranchGraph
,
+whereas addChild
is the method for adding children to all
+group nodes.
+Listing 2 – Code for Constructing a +Scene Graph and Some +Superstructure Objects +
+Shape3D myShape1 = new Shape3D(myGeometry1, myAppearance1);+
Shape3D myShape2 = new Shape3D(myGeometry2, myAppearance2);
BranchGroup myBranch = new BranchGroup();
myBranch.addChild(myShape1);
myBranch.addChild(myShape2);
myBranch.compile();
VirtualUniverse myUniverse = new VirtualUniverse();
Locale myLocale = new Locale(myUniverse);
myLocale.addBranchGraph(myBranch);
The Java 3D utilities provide a universe
+package for constructing and manipulating the objects in a view branch.
+The classes in the universe
package provide a quick means
+for building a single view (single window) application. Listing 3
+shows a code fragment for using the SimpleUniverse class. Note that the
+SimpleUniverse constructor takes a Canvas3D as an argument, in this
+case referred to by the variable myCanvas
.
+Listing 3 – Code +for Constructing a Scene Graph Using the Universe +Package +
+import com.sun.j3d.utils.universe.*;+
Shape3D myShape1 = new Shape3D(myGeometry1, myAppearance1);
Shape3D myShape2 = new Shape3D(myGeometry2, myAppearance2);
BranchGroup myBranch = new BranchGroup();
myBranch.addChild(myShape1);
myBranch.addChild(myShape2);
myBranch.compile();
SimpleUniverse myUniv = new SimpleUniverse(myCanvas);
myUniv.addBranchGraph(myBranch);
The order that a particular Java 3D implementation renders objects +onto +the display is carefully not defined. One implementation might render +the first Shape3D object and then the second. Another might first +render the second Shape3D node before it renders the first one. Yet +another implementation may render both Shape3D nodes in parallel. +
++
+Java 3D provides different techniques for controlling the effect of +various features. Some techniques act fairly locally, such as getting +the color of a vertex. Other techniques have broader influence, such as +changing the color or appearance of an entire object. Still other +techniques apply to a broad number of objects. In the first two cases, +the programmer can modify a particular object or an object associated +with the affected object. In the latter case, Java 3D provides a means +for specifying more than one object spatially. +
++
+Bounds objects specify a volume in which particular operations +apply. +Environmental effects such as lighting, fog, alternate appearance, and +model clipping planes use bounds objects to specify their region of +influence. Any object that falls within the space defined by the bounds +object has the particular environmental effect applied. The proper use +of bounds objects can ensure that these environmental effects are +applied only to those objects in a particular volume, such as a light +applying only to the objects within a single room. +
+Bounds objects are also used to specify a region of action. +Behaviors +and sounds execute or play only if they are close enough to the viewer. +The use of behavior and sound bounds objects allows Java 3D to cull +away those behaviors and sounds that are too far away to affect the +viewer (listener). By using bounds properly, a programmer can ensure +that only the relevant behaviors and sounds execute or play. +
+Finally, bounds objects are used to specify a region of application +for +per-view operations such as background, clip, and soundscape selection. +For example, the background node whose region of application is closest +to the viewer is selected for a given view. +
++
+Listing 4 – +Capabilities Example +
+TransformGroup myTrans = new TransformGroup();+
myTrans.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
By setting the capability to write the transform, Java 3D will allow +the following code to execute: +
+
myTrans.setTransform(myT3D);
+
It is important to ensure that all needed capabilities are set and +that +unnecessary capabilities are not set. The process of compiling a branch +graph examines the capability bits and uses that information to reduce +the amount of computation needed to run a program. +
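The capability-bit contract can be sketched as a toy model in plain Java. This is illustrative only: the real API inherits setCapability from the scene graph object classes, and "going live" here stands in for attaching a branch graph to a Locale or compiling it.

```java
import java.util.BitSet;

// Toy model of capability bits: once a node is live (or compiled), only
// operations whose capability bit was set in advance are permitted.
class CapabilityDemo {
    static final int ALLOW_TRANSFORM_WRITE = 0;  // illustrative bit index

    private final BitSet capabilities = new BitSet();
    private boolean live = false;

    void setCapability(int bit) {
        if (live) throw new IllegalStateException("node is already live");
        capabilities.set(bit);
    }

    void setLive() { live = true; }

    void writeTransform() {  // stands in for setTransform on a live node
        if (live && !capabilities.get(ALLOW_TRANSFORM_WRITE))
            throw new IllegalStateException("ALLOW_TRANSFORM_WRITE not set");
        // ... perform the write ...
    }

    public static void main(String[] args) {
        CapabilityDemo node = new CapabilityDemo();
        node.setCapability(ALLOW_TRANSFORM_WRITE); // must precede going live
        node.setLive();
        node.writeTransform();                     // permitted
        System.out.println("write permitted");
    }
}
```

The point of the restriction is exactly what the paragraph above describes: knowing at compile time which bits are clear lets the implementation optimize everything the application has promised never to change.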
+ + diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/Concepts1.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/Concepts1.gif new file mode 100644 index 0000000..8aa0dbc Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/Concepts1.gif differ diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/Concepts2.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/Concepts2.gif new file mode 100644 index 0000000..f21e085 Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/Concepts2.gif differ diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/DAG.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/DAG.gif new file mode 100644 index 0000000..8479136 Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/DAG.gif differ diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/HelloUniverse.html b/src/main/javadoc/org/jogamp/java3d/doc-files/HelloUniverse.html new file mode 100644 index 0000000..5e37bd6 --- /dev/null +++ b/src/main/javadoc/org/jogamp/java3d/doc-files/HelloUniverse.html @@ -0,0 +1,21 @@ + + + + +Here are code fragments from a simple program, HelloUniverse.java
,
+that creates a cube and a RotationInterpolator behavior object that
+rotates the cube at a constant rate of pi/2 radians per second. The
+HelloUniverse class creates the branch graph
+that includes the cube and the RotationInterpolator behavior. It then
+adds this branch graph to the Locale object generated by the
+SimpleUniverse utility.
+
+ + diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/Immediate.html b/src/main/javadoc/org/jogamp/java3d/doc-files/Immediate.html new file mode 100644 index 0000000..101fe22 --- /dev/null +++ b/src/main/javadoc/org/jogamp/java3d/doc-files/Immediate.html @@ -0,0 +1,114 @@ + + + + +
public class HelloUniverse ... {
public BranchGroup createSceneGraph() {
// Create the root of the branch graph
BranchGroup objRoot = new BranchGroup();
// Create the TransformGroup node and initialize it to the
// identity. Enable the TRANSFORM_WRITE capability so that
// our behavior code can modify it at run time. Add it to
// the root of the subgraph.
TransformGroup objTrans = new TransformGroup();
objTrans.setCapability(
TransformGroup.ALLOW_TRANSFORM_WRITE);
objRoot.addChild(objTrans);
// Create a simple Shape3D node; add it to the scene graph.
objTrans.addChild(new ColorCube(0.4));
// Create a new Behavior object that will perform the
// desired operation on the specified transform and add
// it into the scene graph.
Transform3D yAxis = new Transform3D();
Alpha rotationAlpha = new Alpha(-1, 4000);
RotationInterpolator rotator = new RotationInterpolator(
rotationAlpha, objTrans, yAxis,
0.0f, (float) Math.PI*2.0f);
BoundingSphere bounds =
new BoundingSphere(new Point3d(0.0,0.0,0.0), 100.0);
rotator.setSchedulingBounds(bounds);
objRoot.addChild(rotator);
// Have Java 3D perform optimizations on this scene graph.
objRoot.compile();
return objRoot;
}
public HelloUniverse() {
<set layout of container, construct canvas3d, add canvas3d>
// Create the scene; attach it to the virtual universe
BranchGroup scene = createSceneGraph();
SimpleUniverse u = new SimpleUniverse(canvas3d);
u.getViewingPlatform().setNominalViewingTransform();
u.addBranchGraph(scene);
}
}
Java 3D is fundamentally a scene graph-based API. Most of +the constructs in the API are biased toward retained mode and +compiled-retained mode rendering. However, there are some applications +that want both the control and the flexibility that immediate-mode +rendering offers. +
+Immediate-mode applications can either use or ignore Java 3D's +scene +graph structure. By using immediate mode, end-user applications have +more freedom, but this freedom comes at the expense of performance. In +immediate mode, Java 3D has no high-level information concerning +graphical objects or their composition. Because it has minimal global +knowledge, Java 3D can perform only localized optimizations on +behalf +of the application programmer. +
++
++
+
+Java 3D provides utility functions that create much of this
+structure on behalf of a pure immediate-mode application, making it
+less noticeable from the application's perspective, but the structure
+must exist.
+
+All rendering is done completely under user control. It is necessary +for the user to clear the 3D canvas, render all geometry, and swap the +buffers. Additionally, rendering the right and left eye for stereo +viewing becomes the sole responsibility of the application. +
+In pure immediate mode, the user must stop the Java 3D
+renderer, via
+the Canvas3D object stopRenderer()
+method, prior to adding the Canvas3D object to an active View object
+(that is, one that is attached to a live ViewPlatform object).
+
+
+The basic Java 3D stereo rendering loop, executed for +each +Canvas3D, is as follows: +
++
clear canvas (both eyes)
call preRender() // user-supplied method+
set left eye view
render opaque scene graph objects
call renderField(FIELD_LEFT) // user-supplied method
render transparent scene graph objects
set right eye view
render opaque scene graph objects again
call renderField(FIELD_RIGHT) // user-supplied method
render transparent scene graph objects again
call postRender() // user-supplied method
synchronize and swap buffers
call postSwap() // user-supplied method+The basic Java 3D monoscopic rendering loop is as +follows: +
+
clear canvas
call preRender() // user-supplied method+
set view
render opaque scene graph objects
call renderField(FIELD_ALL) // user-supplied method
render transparent scene graph objects
call postRender() // user-supplied method
synchronize and swap buffers
call postSwap() // user-supplied method
+In both cases, the entire loop, beginning with clearing the canvas and ending with swapping the buffers, defines a frame. The application is given the opportunity to render immediate-mode geometry at any of the clearly identified spots in the rendering loop. A user specifies his or her own rendering methods by extending the Canvas3D class and overriding the preRender, postRender, postSwap, and/or renderField methods.
+
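The ordering of the per-frame hooks above can be sketched schematically in plain Java. This is a hypothetical model (the class and its logging are inventions for illustration, not Java 3D API calls); it only demonstrates where the user-supplied methods fire relative to the built-in steps of one stereo frame.

```java
import java.util.ArrayList;
import java.util.List;

// Schematic model of one stereo frame: the built-in steps append fixed
// labels, and the overridable hooks (preRender, renderField, postRender,
// postSwap) append their names in the positions described above.
class StereoFrameSketch {
    final List<String> trace = new ArrayList<>();

    // Stand-ins for the user-supplied Canvas3D methods; here they just log.
    void preRender()             { trace.add("preRender"); }
    void renderField(String eye) { trace.add("renderField:" + eye); }
    void postRender()            { trace.add("postRender"); }
    void postSwap()              { trace.add("postSwap"); }

    List<String> renderOneFrame() {
        trace.add("clear");                       // clear canvas (both eyes)
        preRender();
        for (String eye : new String[] {"LEFT", "RIGHT"}) {
            trace.add("setView:" + eye);          // set left/right eye view
            trace.add("opaque:" + eye);           // render opaque objects
            renderField(eye);
            trace.add("transparent:" + eye);      // render transparent objects
        }
        postRender();
        trace.add("swap");                        // synchronize and swap buffers
        postSwap();
        return trace;
    }
}
```

The monoscopic loop is the same sketch with a single `FIELD_ALL` pass instead of the left/right pair.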
+
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/Immediate1.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/Immediate1.gif
new file mode 100644
index 0000000..2d549b1
Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/Immediate1.gif differ
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/Rendering.html b/src/main/javadoc/org/jogamp/java3d/doc-files/Rendering.html
new file mode 100644
index 0000000..7415ce8
--- /dev/null
+++ b/src/main/javadoc/org/jogamp/java3d/doc-files/Rendering.html
@@ -0,0 +1,148 @@
+
+
+
+
+ Java 3D's execution and rendering model assumes the +existence of a VirtualUniverse +object and an attached scene graph. This +scene graph can be minimal and not noticeable from an application's +perspective when using immediate-mode rendering, but it must exist. +
+Java 3D's execution model intertwines with its rendering modes +and +with +behaviors and their scheduling. This chapter first describes the three +rendering modes, then describes how an application starts up a +Java 3D +environment, and finally it discusses how the various rendering modes +work within this framework. +
++
+Java 3D supports three different modes for rendering scenes: +immediate +mode, retained mode, and compiled-retained mode. These three levels of +API support represent a potentially large variation in graphics +processing speed and in on-the-fly restructuring. +
+ +Immediate mode allows maximum flexibility at some cost in rendering +speed. The application programmer can either use or ignore the scene +graph structure inherent in Java 3D's design. The programmer can +choose +to draw geometry directly or to define a scene graph. Immediate mode +can be either used independently or mixed with retained and/or +compiled-retained mode rendering. The immediate-mode API is described +in the "Immediate-Mode Rendering" section.
++
+Retained mode allows a great deal of the flexibility provided by +immediate mode while also providing a substantial increase in rendering +speed. All objects defined in the scene graph are accessible and +manipulable. The scene graph itself is fully manipulable. The +application programmer can rapidly construct the scene graph, create +and delete nodes, and instantly "see" the effect of edits. Retained +mode also allows maximal access to objects through a general pick +capability. +
+Java 3D's retained mode allows a programmer to construct +objects, +insert objects into a database, compose objects, and add behaviors to +objects. +
+In retained mode, Java 3D knows that the programmer has defined +objects, knows how the programmer has combined those objects into +compound objects or scene graphs, and knows what behaviors or actions +the programmer has attached to objects in the database. This knowledge +allows Java 3D to perform many optimizations. It can construct +specialized data structures that hold an object's geometry in a manner +that enhances the speed at which the Java 3D system can render it. +It +can compile object behaviors so that they run at maximum speed when +invoked. It can flatten transformation manipulations and state changes +where possible in the scene graph. +
++
+Compiled-retained mode allows the Java 3D API to perform an +arbitrarily +complex series of optimizations including, but not restricted to, +geometry compression, scene graph flattening, geometry grouping, and +state change clustering. +
+Compiled-retained mode provides hooks for end-user manipulation and +picking. Pick operations return the closest object (in scene graph +space) associated with the picked geometry. +
+Java 3D's compiled-retained mode ensures effective graphics +rendering +speed in yet one more way. A programmer can request that Java 3D +compile an object or a scene graph. Once it is compiled, the programmer +has minimal access to the internal structure of the object or scene +graph. Capability flags provide access to specified components that the +application program may need to modify on a continuing basis. +
+A compiled object or scene graph consists of whatever internal +structures Java 3D wishes to create to ensure that objects or +scene +graphs render at maximal rates. Because Java 3D knows that the +majority +of the compiled object's or scene graph's components will not change, +it can perform an extraordinary number of optimizations, including the +fusing of multiple objects into one conceptual object, turning an +object into compressed geometry or even breaking an object up into +like-kind components and reassembling the like-kind components into new +"conceptual objects." +
++
+From an application's perspective, Java 3D's render loop runs +continuously. Whenever an application adds a scene branch to the +virtual world, that scene branch is instantly visible. This high-level +view of the render loop permits concurrent implementations of +Java 3D +as well as serial implementations. The remainder of this section +describes the Java 3D render loop bootstrap process from a +serialized +perspective. Differences that would appear in concurrent +implementations are noted as well. +
+First, the application must construct its scene graphs. It does this
+by
+constructing scene graph nodes and component objects and linking them
+into self-contained trees with a BranchGroup node as a root. The
+application next must obtain a reference to any constituent nodes or
+objects within that branch that it may wish to manipulate. It sets the
+capabilities of all the objects to match their anticipated use and only
+then compiles the branch using the BranchGroup's compile
+method. Whether or not it compiles the branch, the application can add it to
+the virtual universe by adding the BranchGroup to a Locale object. The
+application repeats this process for each branch it wishes to create.
+Note that for concurrent Java 3D implementations, whenever an
+application adds a branch to the active virtual universe, that branch
+becomes visible.
+
This initialization process is identical for retained and +compiled-retained modes. In both modes, the application builds a scene +graph. In compiled-retained mode, the application compiles the scene +graph. Then the application inserts the (possibly compiled) scene graph +into the virtual universe. +
+
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphOverview.html b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphOverview.html
new file mode 100644
index 0000000..f1616df
--- /dev/null
+++ b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphOverview.html
@@ -0,0 +1,226 @@
+A scene graph consists of Java 3D objects, called nodes, arranged in a tree structure. The user creates one or more scene subgraphs and attaches them to a virtual universe. The individual connections between Java 3D nodes always represent a directed relationship: parent to child. Java 3D restricts scene graphs in one major way: Scene graphs may not contain cycles. Thus, a Java 3D scene graph is a directed acyclic graph (DAG). See Figure 1.
+Java 3D refines the Node object class +into two subclasses: Group +and +Leaf node objects. Group node objects group +together one or more child +nodes. A group node can point to zero or more children but can have +only one parent. The SharedGroup node cannot have any parents (although +it allows sharing portions of a scene graph, as described in "Reusing Scene Graphs"). +Leaf node objects contain the actual definitions of shapes (geometry), +lights, fog, sounds, and so forth. A leaf node has no children and only +one parent. The semantics of the various group and leaf nodes are +described in subsequent chapters.
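The group/leaf structure just described can be modeled in a few lines of plain Java. These `SgNode`/`SgGroup`/`SgLeaf` classes are hypothetical stand-ins, not the real API; they only illustrate the structural rule that a group may reference many children while every node has at most one parent.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal single-parent tree model (hypothetical classes).
class SgNode {
    SgNode parent;
}

class SgGroup extends SgNode {
    final List<SgNode> children = new ArrayList<>();

    void addChild(SgNode child) {
        // Enforce the single-parent rule that keeps the graph acyclic.
        if (child.parent != null)
            throw new IllegalStateException("node already has a parent");
        child.parent = this;
        children.add(child);
    }
}

// Leaf nodes (shapes, lights, sounds, ...) have no children of their own.
class SgLeaf extends SgNode { }
```

Attempting to attach the same leaf under two groups fails in this model, which is why the real API routes sharing through the Link/SharedGroup mechanism described in "Reusing Scene Graphs."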
+A scene graph organizes and controls the rendering +of its constituent objects. The Java 3D renderer draws a scene graph in +a consistent way that allows for concurrence. The Java 3D renderer can +draw one object independently of other objects. Java 3D can allow such +independence because its scene graphs have a particular form and cannot +share state among branches of a tree. +
+The hierarchy of the scene graph encourages a natural spatial +grouping +on the geometric objects found at the leaves of the graph. Internal +nodes act to group their children together. A group node also defines a +spatial bound that contains all the geometry defined by its +descendants. Spatial grouping allows for efficient implementation of +operations such as proximity detection, collision detection, view +frustum culling, and occlusion culling. +
+ ++
+
A leaf node's state is defined by the nodes in a direct path between +the scene graph's root and the leaf. Because a leaf's graphics context +relies only on a linear path between the root and that node, the Java +3D renderer can decide to traverse the scene graph in whatever order it +wishes. It can traverse the scene graph from left to right and top to +bottom, in level order from right to left, or even in parallel. The +only exceptions to this rule are spatially bounded attributes such as +lights and fog. +
+This characteristic is in marked contrast to many older scene +graph-based APIs (including PHIGS and SGI's Inventor) where, if a node +above or to the left of a node changes the graphics state, the change +affects the graphics state of all nodes below it or to its right.
+The most common node object, along the path from the root to the +leaf, +that changes the graphics state is the TransformGroup object. The +TransformGroup object can change the position, orientation, and scale +of the objects below it.
+Most graphics state attributes are set by a Shape3D leaf node through its constituent Appearance object, thus allowing parallel rendering. The Shape3D node also has a constituent Geometry object that specifies its geometry; this permits different shape objects to share common geometry without sharing material attributes (or vice versa).
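The path-dependence of a leaf's state can be sketched in one dimension. This is a hypothetical illustration (invented `PathXform`/`PathState` classes, not Java 3D code): each node on the direct root-to-leaf path contributes a transform, and only that path determines the leaf's world-space state, regardless of traversal order elsewhere in the graph.

```java
// One-dimensional stand-in for a TransformGroup: p -> scale*p + offset.
class PathXform {
    final double scale, offset;
    PathXform(double scale, double offset) { this.scale = scale; this.offset = offset; }
    double apply(double p) { return scale * p + offset; }
}

class PathState {
    // rootToLeaf[0] is the node nearest the root. The transform nearest the
    // leaf applies to the leaf-local coordinate first, as with nested
    // TransformGroup nodes above a Shape3D.
    static double toWorld(double leafLocal, PathXform... rootToLeaf) {
        double p = leafLocal;
        for (int i = rootToLeaf.length - 1; i >= 0; i--)
            p = rootToLeaf[i].apply(p);
        return p;
    }
}
```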
++
The Java 3D renderer incorporates all graphics state changes made in +a +direct path from a scene graph root to a leaf object in the drawing of +that leaf object. Java 3D provides this semantic for both retained and +compiled-retained modes. +
++
A Java 3D scene graph consists of a collection of Java 3D node +objects +connected in a tree structure. These node objects reference other scene +graph objects called node component objects. +All scene graph node and component objects are subclasses of a common +SceneGraphObject class. The +SceneGraphObject class is an abstract class +that defines methods that are common among nodes and component objects. +
+Scene graph objects are constructed by creating a new instance of
+the
+desired class and are accessed and manipulated using the object's set
+and get
+methods. Once a scene graph object is created and connected to other
+scene graph objects to form a subgraph, the entire subgraph can be
+attached to a virtual universe (via a high-resolution Locale
+object), making the object live. Prior to attaching a subgraph
+to a virtual
+universe, the entire subgraph can be compiled into an
+optimized, internal format (see the
+BranchGroup.compile()
+method).
An important characteristic of all scene graph objects is that
+they can
+be accessed or modified only during the creation of a scene graph,
+except where explicitly allowed. Access to most set
and get
+methods of objects that are part of a live or compiled scene graph is
+restricted. Such restrictions provide the scene graph compiler with
+usage information it can use in optimally compiling or rendering a
+scene graph. Each object has a set of capability bits that enable
+certain functionality when the object is live or compiled. By default,
+all capability bits are disabled (cleared). Only those set
+and get
+methods corresponding to capability bits that are explicitly enabled
+(set) prior to the object being compiled or made live are legal.
+
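The capability-bit restriction described above can be modeled with a small plain-Java sketch. The `CapSketch` class and its bit index are hypothetical (the real API uses per-class capability constants and throws `CapabilityNotSetException`); the sketch only shows the gating rule: a restricted set method on a live object is legal only if its capability bit was enabled beforehand.

```java
import java.util.BitSet;

// Simplified model of capability-bit gating (hypothetical class).
class CapSketch {
    static final int ALLOW_VALUE_WRITE = 0;  // example capability index

    private final BitSet caps = new BitSet();
    private boolean live = false;
    private double value;

    void setCapability(int bit) {
        // Capabilities must be chosen before the object is made live.
        if (live) throw new IllegalStateException("object is already live");
        caps.set(bit);
    }

    void makeLive() { live = true; }  // stands in for attaching to a Locale

    // Stands in for any restricted set method on a live object.
    void setValue(double v) {
        if (live && !caps.get(ALLOW_VALUE_WRITE))
            throw new IllegalStateException("capability not set");
        value = v;
    }

    double getValue() { return value; }
}
```

Declaring only the capabilities actually needed is what lets the compiler optimize everything else aggressively.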
+
+
+
A Locale has no parent in the scene graph but is implicitly +attached to +a virtual universe when it is constructed. A Locale may reference an +arbitrary number of BranchGroup nodes but has no explicit children.
+The coordinates of all scene graph objects are relative to the +HiResCoord of the Locale in which they are contained. Operations on a +Locale include setting or getting the HiResCoord of the Locale, adding +a subgraph, and removing a subgraph.
++
+
+
+
The View object is the central Java 3D object for coordinating all +aspects of viewing. +All viewing parameters in Java 3D are directly contained either within +the View object or within objects pointed to by a View object. Java 3D +supports multiple simultaneously active View objects, each of which can +render to one or more canvases.
++
+
The PhysicalEnvironment object encapsulates all of the parameters
+associated with the physical environment, such as calibration
+information for the tracker base for the head or hand tracker.
+
+
++
+Java 3D provides application programmers with two different means for reusing scene graphs. First, multiple scene graphs can share a common subgraph. Second, the node hierarchy of a common subgraph can be cloned, while still sharing large component objects such as geometry and texture objects. In the first case, changes in the shared subgraph affect all scene graphs that refer to the shared subgraph. In the second case, each instance is unique: a change in one instance does not affect any other instance.
+An application that wishes to share a subgraph from multiple places +in +a scene graph must do so through the use of the Link +leaf node and an +associated SharedGroup node. The +SharedGroup node serves as the root of +the shared subgraph. The Link leaf node refers to the SharedGroup node. +It does not incorporate the shared scene graph directly into its scene +graph. +
+A SharedGroup node allows multiple Link leaf nodes to share its
+subgraph as shown in Figure
+1 below.
+
An application developer may wish to reuse a common subgraph without +completely sharing that subgraph. For example, the developer may wish +to create a parking lot scene consisting of multiple cars, each with a +different color. The developer might define three basic types of cars, +such as convertible, truck, and sedan. To create the parking lot scene, +the application will instantiate each type of car several times. Then +the application can change the color of the various instances to create +more variety in the scene. Unlike shared subgraphs, each instance is a +separate copy of the scene graph definition: Changes to one instance do +not affect any other instance. +
+Java 3D provides the cloneTree
+method for this
+purpose. The cloneTree
+method allows the programmer to change some attributes (NodeComponent
+objects) in a scene graph, while at the same time sharing the majority
+of the scene graph data: the geometry.
+
When cloneTree
reaches a leaf node,
+there are two possible actions for handling the leaf node's
+NodeComponent objects (such as Material, Texture, and so forth). First,
+the cloned leaf node can reference the original leaf node's
+NodeComponent object; the NodeComponent object itself is not duplicated.
+Since the cloned leaf node shares the NodeComponent object with the
+original leaf node, changing the data in the NodeComponent object will
+effect a change in both nodes. This mode would also be used for objects
+that are read-only at run time.
+
Alternatively, the NodeComponent object can be duplicated, in which +case the new leaf node would reference the duplicated object. This mode +allows data referenced by the newly created leaf node to be modified +without that modification affecting the original leaf node. +
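The two policies can be contrasted in a short plain-Java sketch. `MaterialSketch` and `LeafSketch` are hypothetical stand-ins for a NodeComponent and a leaf node; the flag plays the role of the duplicate-on-clone choice, showing that a shared component propagates later edits to both leaves while a duplicated one does not.

```java
// Hypothetical stand-in for a NodeComponent such as Material.
class MaterialSketch {
    String color;
    MaterialSketch(String color) { this.color = color; }
}

// Hypothetical leaf node holding one component reference.
class LeafSketch {
    MaterialSketch material;
    LeafSketch(MaterialSketch m) { material = m; }

    LeafSketch cloneLeaf(boolean duplicateComponent) {
        MaterialSketch m = duplicateComponent
                ? new MaterialSketch(material.color)  // private copy
                : material;                           // shared reference
        return new LeafSketch(m);
    }
}
```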
+Figure +2 +shows two instances of NodeComponent objects that are shared and one +NodeComponent element that is duplicated for the cloned subgraph. +
+ ++
+cloneTree
method. After the cloneTree
+operation is performed, the reference in the cloned leaf node will
+still refer to the node in the original subgraph, a situation that is
+most likely incorrect (see Figure
+3).
+To handle these ambiguities, a callback mechanism is provided. +
+ +
+A leaf node that needs to update referenced nodes upon being duplicated
+by a call to cloneTree
must implement the updateNodeReferences
+method. By using this method, the cloned leaf node can determine if any
+nodes referenced by it have been duplicated and, if so, update the
+appropriate references to their cloned counterparts.
+
Suppose, for instance, that the leaf node Lf1 in Figure
+3 implemented the updateNodeReferences
method. Once
+all nodes had been duplicated, the cloneTree
method
+would then call each cloned leaf node's updateNodeReferences
+method. When cloned leaf node Lf2's method was called, Lf2 could ask if
+the node N1 had been duplicated during the cloneTree
+operation. If the node had been duplicated, leaf Lf2 could then update
+its internal state with the cloned node, N2 (see Figure
+4).
+
+
+
+All predefined Java 3D nodes will automatically have their updateNodeReferences
+method defined. Only subclassed nodes that reference other nodes need
+to have this method overridden by the user.
+
Because cloneTree
is able to start
+the cloning operation from any node, there is a potential for creating
+dangling references. A dangling reference can occur only when a leaf
+node that contains a reference to another scene graph node is cloned.
+If the referenced node is not cloned, a dangling reference situation
+exists: There are now two leaf nodes that access the same node (Figure
+5). A dangling reference is discovered when a leaf node's updateNodeReferences
+method calls the getNewNodeReference
method and the
+cloned subgraph does not contain a counterpart to the node being looked
+up.
+
++
+
+When a dangling reference is discovered, cloneTree
can
+handle it in one of two ways. If cloneTree
is called
+without the allowDanglingReferences
parameter set to true
,
+a dangling reference will result in a DanglingReferenceException
+being thrown. The user can catch this exception if desired. If cloneTree
+is called with the allowDanglingReferences
parameter set
+to true
, the updateNodeReferences
method
+will return a reference to the same object passed into the getNewNodeReference
+method. This will result in the cloneTree
operation
+completing with dangling references, as in Figure
+5.
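The lookup behavior just described can be modeled with a small table sketch. `RefTableSketch` is a hypothetical stand-in for NodeReferenceTable (the real class is created only by Java 3D, and the real failure is a `DanglingReferenceException`); the sketch shows the two outcomes for an original node with no cloned counterpart.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the original-node -> cloned-node lookup.
class RefTableSketch {
    private final Map<Object, Object> originalToClone = new HashMap<>();
    private final boolean allowDanglingReferences;

    RefTableSketch(boolean allowDanglingReferences) {
        this.allowDanglingReferences = allowDanglingReferences;
    }

    void register(Object original, Object clone) {
        originalToClone.put(original, clone);
    }

    Object getNewNodeReference(Object original) {
        Object clone = originalToClone.get(original);
        if (clone != null) return clone;
        if (allowDanglingReferences) return original;  // dangling reference kept
        throw new IllegalStateException("dangling reference (models DanglingReferenceException)");
    }
}
```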
+
cloneTree
.
+Leaf node subclasses (for example, Behaviors) that contain any user
+node-specific data that needs to be duplicated during a cloneTree
+operation must define the following two methods:
+
Node cloneNode(boolean forceDuplicate);
void duplicateNode(Node n, boolean forceDuplicate);
The cloneNode method consists of three lines:
UserSubClass usc = new UserSubClass();
usc.duplicateNode(this, forceDuplicate);
return usc;
The duplicateNode method must first call super.duplicateNode
before duplicating any necessary user-specific data or setting any
user-specific state.
+NodeComponent subclasses that contain any user node-specific data +must define the following two methods: +
NodeComponent cloneNodeComponent();
void duplicateNodeComponent(NodeComponent nc, boolean forceDuplicate);
The cloneNodeComponent method consists of three lines:
UserNodeComponent unc = new UserNodeComponent();
unc.duplicateNodeComponent(this, forceDuplicate);
return unc;
The duplicateNodeComponent method must first call super.duplicateNodeComponent
and then can duplicate any user-specific data or set any user-specific
state as necessary.
+updateNodeReferences
+method called by the cloneTree
+operation. The NodeReferenceTable maps nodes from the original subgraph
+to the new nodes in the cloned subgraph. This information can then be
+used to update any cloned leaf node references to reference nodes in
+the cloned subgraph. This object can be created only by Java 3D.
+cloneTree
+operation.
class RotationBehavior extends Behavior {
    TransformGroup objectTransform;
    WakeupOnElapsedFrames w;

    Matrix4d rotMat = new Matrix4d();
    Matrix4d objectMat = new Matrix4d();
    Transform3D t = new Transform3D();

    // Override Behavior's initialize method to set up wakeup
    // criteria
    public void initialize() {
        // Establish initial wakeup criteria
        wakeupOn(w);
    }

    // Override Behavior's stimulus method to handle the event
    public void processStimulus(Enumeration criteria) {
        // Rotate by another PI/120.0 radians
        objectMat.mul(objectMat, rotMat);
        t.set(objectMat);
        objectTransform.setTransform(t);
        // Set wakeup criteria for next time
        wakeupOn(w);
    }

    // Constructor for rotation behavior.
    public RotationBehavior(TransformGroup tg, int numFrames) {
        w = new WakeupOnElapsedFrames(numFrames);
        objectTransform = tg;
        objectMat.setIdentity();
        // Create a rotation matrix that rotates PI/120.0
        // radians per frame
        rotMat.rotX(Math.PI/120.0);
        // Note: When this object is duplicated via cloneTree,
        // the cloned RotationBehavior node needs to point to
        // the TransformGroup in the just-cloned tree.
    }

    // Sets a new TransformGroup.
    public void setTransformGroup(TransformGroup tg) {
        objectTransform = tg;
    }

    // The next two methods are needed for cloneTree to operate
    // correctly.
    // cloneNode is needed to provide a new instance of the user
    // derived subclass.
    public Node cloneNode(boolean forceDuplicate) {
        // Get all data from current node needed for
        // the constructor
        int numFrames = w.getElapsedFrameCount();
        RotationBehavior r =
            new RotationBehavior(objectTransform, numFrames);
        r.duplicateNode(this, forceDuplicate);
        return r;
    }

    // duplicateNode is needed to duplicate all super class
    // data as well as all user data.
    public void duplicateNode(Node originalNode, boolean forceDuplicate) {
        super.duplicateNode(originalNode, forceDuplicate);
        // Nothing to do here - all unique data was handled
        // in the constructor in the cloneNode routine.
    }

    // Callback for when this leaf is cloned. For this object
    // we want to find the cloned TransformGroup node that this
    // clone Leaf node should reference.
    public void updateNodeReferences(NodeReferenceTable t) {
        super.updateNodeReferences(t);
        // Update node's TransformGroup to proper reference
        TransformGroup newTg =
            (TransformGroup)t.getNewObjectReference(objectTransform);
        setTransformGroup(newTg);
    }
}
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing1.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing1.gif
new file mode 100644
index 0000000..f6ca47c
Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing1.gif differ
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing2.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing2.gif
new file mode 100644
index 0000000..c062c81
Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing2.gif differ
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing3.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing3.gif
new file mode 100644
index 0000000..325cab1
Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing3.gif differ
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing4.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing4.gif
new file mode 100644
index 0000000..78aeaab
Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing4.gif differ
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing5.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing5.gif
new file mode 100644
index 0000000..2ff6547
Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/SceneGraphSharing5.gif differ
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/ViewBranch.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/ViewBranch.gif
new file mode 100644
index 0000000..75cc40d
Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/ViewBranch.gif differ
diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/ViewModel.html b/src/main/javadoc/org/jogamp/java3d/doc-files/ViewModel.html
new file mode 100644
index 0000000..3cc9ece
--- /dev/null
+++ b/src/main/javadoc/org/jogamp/java3d/doc-files/ViewModel.html
@@ -0,0 +1,1064 @@
Java 3D introduces a new view model that takes Java's +vision of "write once, run anywhere" and generalizes it to include +display devices and six-degrees-of-freedom input peripherals such as +head trackers. This "write once, view everywhere" nature of the new +view model means that an application or applet written using the Java +3D view model can render images to a broad range of display devices, +including standard computer displays, multiple-projection display +rooms, and head-mounted displays, without modification of the scene +graph. It also means that the same application, once again without +modification, can render stereoscopic views and can take advantage of +the input from a head tracker to control the rendered view. +
+Java 3D's view model achieves this versatility by cleanly +separating +the virtual and the physical world. This model distinguishes between +how an application positions, orients, and scales a ViewPlatform object +(a viewpoint) within the virtual world and how the Java 3D +renderer +constructs the final view from that viewpoint's position and +orientation. The application controls the ViewPlatform's position and +orientation; the renderer computes what view to render using this +position and orientation, a description of the end-user's physical +environment, and the user's position and orientation within the +physical environment. +
+This document first explains why Java 3D chose a different view +model +and some of the philosophy behind that choice. It next describes how +that model operates in the simple case of a standard computer screen +without head tracking—the most common case. Finally, it presents +advanced material that was originally published in Appendix C of the +API specification guide. +
++
+Camera-based view models, as found in low-level APIs, give +developers +control over all rendering parameters. This makes sense when dealing +with custom applications, less sense when dealing with systems that +wish to have broader applicability: systems such as viewers or browsers +that load and display whole worlds as a single unit or systems where +the end users view, navigate, display, and even interact with the +virtual world. +
+Camera-based view models emulate a camera in the virtual world, not +a +human in a virtual world. Developers must continuously reposition a +camera to emulate "a human in the virtual world." +
+The Java 3D view model incorporates head tracking directly, if +present, +with no additional effort from the developer, thus providing end users +with the illusion that they actually exist inside a virtual world. +
+The Java 3D view model, when operating in a non-head-tracked +environment and rendering to a single, standard display, acts very much +like a traditional camera-based view model, with the added +functionality of being able to generate stereo views transparently. +
++
+Letting the application control all viewing parameters is not +reasonable in systems in which the physical environment dictates some +of the view parameters. +
+One example of this is a head-mounted display (HMD), where the +optics +of the head-mounted display directly determine the field of view that +the application should use. Different HMDs have different optics, +making it unreasonable for application developers to hard-wire such +parameters or to allow end users to vary that parameter at will. +
+Another example is a system that automatically computes view parameters as a function of the user's current head position. The specification of a world and a predefined flight path through that world may not exactly specify an end-user's view. HMD users would expect to look, and thus see, to their left or right even when following a fixed path through the environment; imagine an amusement park ride with vehicles that follow fixed paths to present content to their visitors, but visitors can continue to move their heads while on those rides.
+Depending on the physical details of the end-user's environment, the +values of the viewing parameters, particularly the viewing and +projection matrices, will vary widely. The factors that influence the +viewing and projection matrices include the size of the physical +display, how the display is mounted (on the user's head or on a table), +whether the computer knows the user's head location in three space, the +head mount's actual field of view, the display's pixels per inch, and +other such parameters. For more information, see "View Model Details." +
++
+The Java 3D view model separates the virtual environment, where +the +application programmer has placed objects in relation to one another, +from the physical environment, where the user exists, sees computer +displays, and manipulates input devices. +
+Java 3D also defines a fundamental correspondence between the +user's +physical world and the virtual world of the graphic application. This +physical-to-virtual-world correspondence defines a single common space, +a space where an action taken by an end user affects objects within the +virtual world and where any activity by objects in the virtual world +affects the end user's view. +
++
+The virtual world is a common space in which virtual objects exist. The virtual world coordinate system exists relative to a high-resolution Locale: each Locale object defines the origin of virtual world coordinates for all of the objects attached to that Locale. The Locale that contains the currently active ViewPlatform object defines the virtual world coordinates that are used for rendering. Java 3D eventually transforms all coordinates associated with scene graph elements into this common virtual world space.
+The physical world is just that: the real, physical world. This is the space in which the physical user exists and within which he or she moves his or her head and hands. This is the space in which any physical trackers define their local coordinates and in which several calibration coordinate systems are described.
+The physical world is a space, not a common coordinate system +between +different execution instances of Java 3D. So while two different +computers at two different physical locations on the globe may be +running at the same time, there is no mechanism directly within +Java 3D +to relate their local physical world coordinate systems with each +other. Because of calibration issues, the local tracker (if any) +defines the local physical world coordinate system known to a +particular instance of Java 3D. +
++
+Java 3D distributes its view model parameters across several +objects, +specifically, the View object and its associated component objects, the +PhysicalBody object, the PhysicalEnvironment object, the Canvas3D +object, and the Screen3D object. Figure +1 shows graphically the central role of the View object and the +subsidiary role of its component objects. +
+ ++
++The view-related objects shown in Figure +1 +and their roles are as follows. For each of these objects, the portion +of the API that relates to modifying the virtual world and the portion +of the API that is relevant to non-head-tracked standard display +configurations are derived in this chapter. The remainder of the +details are described in "View Model +Details." +
+Together, these objects describe the geometry of viewing rather than explicitly providing a viewing or projection matrix. The Java 3D renderer uses this information to construct the appropriate viewing and projection matrices. The geometric focus of these view objects provides more flexibility in generating views, a flexibility needed to support alternative display configurations.
+A ViewPlatform leaf node defines a coordinate system, and thus a +reference frame with its associated origin or reference point, within +the virtual world. The ViewPlatform serves as a point of attachment for +View objects and as a base for determining a renderer's view. +
+Figure +2 +shows a portion of a scene graph containing a ViewPlatform node. The +nodes directly above a ViewPlatform determine where that ViewPlatform +is located and how it is oriented within the virtual world. By +modifying the Transform3D object associated with a TransformGroup node +anywhere directly above a ViewPlatform, an application or behavior can +move that ViewPlatform anywhere within the virtual world. A simple +application might define one TransformGroup node directly above a +ViewPlatform, as shown in Figure +2. +
+A VirtualUniverse may have many different ViewPlatforms, but a +particular View object can attach itself only to a single ViewPlatform. +Thus, each rendering onto a Canvas3D is done from the point of view of +a single ViewPlatform. +
+ ++
++
+An application navigates within the virtual world by modifying a +ViewPlatform's parent TransformGroup. Examples of applications that +modify a ViewPlatform's location and orientation include browsers, +object viewers that provide navigational controls, applications that do +architectural walkthroughs, and even search-and-destroy games. +
+Controlling the ViewPlatform object can produce very interesting and +useful results. Our first simple scene graph (see "Introduction," Figure 1) +defines a scene graph for a simple application that draws an object in +the center of a window and rotates that object about its center point. +In that figure, the Behavior object modifies the TransformGroup +directly above the Shape3D node. +
+An alternative application scene graph, shown in Figure +3, +leaves the central object alone and moves the ViewPlatform around the +world. If the shape node contains a model of the earth, this +application could generate a view similar to that seen by astronauts as +they orbit the earth. +
+Had we populated this world with more objects, this scene graph +would allow navigation through the world via the Behavior node. +
+ ++
+
+Applications and behaviors manipulate a TransformGroup through its access methods, which allow an application to retrieve and set the group node's Transform3D object. The TransformGroup methods include getTransform and setTransform.
+
+
+A scene graph may contain multiple ViewPlatform +objects. If a user detaches a View object +from a ViewPlatform and then +reattaches that View to a different ViewPlatform, the image on the +display will now be rendered from the point of view of the new +ViewPlatform.
+Java 3D does not have any built-in semantics for displaying a +visible +manifestation of a ViewPlatform within the virtual world (an avatar). +However, a developer can construct and manipulate an avatar using +standard Java 3D constructs. +
+A developer can construct a small scene graph consisting of a
+TransformGroup node, a behavior leaf node, and a shape node and insert
+it directly under the BranchGroup node associated with the ViewPlatform
+object. The shape node would contain a geometric model of the avatar's
+head. The behavior node would change the TransformGroup's transform
+periodically to the value stored in a View object's UserHeadToVworld
+parameter (see "View Model
+Details").
+The avatar's virtual head, represented by the shape node, will now move
+around in lock-step with the ViewPlatform's TransformGroup and any
+relative position and orientation changes of the user's actual physical
+head (if a system has a head tracker).
+
+
+Java 3D generates viewing matrices in one of a few different ways, depending on whether the end user has a head-mounted or a room-mounted display environment and whether head tracking is enabled. This section describes the computation for a non-head-tracked, room-mounted display: a standard computer display. Other environments are described in "View Model Details."
+In the absence of head tracking, the ViewPlatform's origin specifies +the virtual eye's location and orientation within the virtual world. +However, the eye location provides only part of the information needed +to render an image. The renderer also needs a projection matrix. In the +default mode, Java 3D uses the projection policy, the specified +field-of-view information, and the front and back clipping distances to +construct a viewing frustum. +
++
+Figure 4 shows a simple scene graph. To draw the object labeled "S," Java 3D internally constructs the appropriate model, view platform, eye, and projection matrices. Conceptually, the model transformation for a particular object is computed by concatenating all the matrices in a direct path between the object and the VirtualUniverse. The view matrix is then computed, again conceptually, by concatenating all the matrices between the VirtualUniverse object and the ViewPlatform attached to the current View object. The eye and projection matrices are constructed from the View object and its associated component objects.
+ ++
+In our scene graph, what we would normally consider the model transformation would consist of the following three transformations: L T_1 T_2. By multiplying L T_1 T_2 by a vertex in the shape object, we would transform that vertex into the virtual universe's coordinate system. What we would normally consider the view platform transformation would be (L T_v1)^-1, or T_v1^-1 L^-1. This presents a problem, since coordinates in the virtual universe are 256-bit fixed-point values, which cannot be used to represent transformed points efficiently.
+Fortunately, however, there is a solution to this problem. Composing the model and view platform transformations gives us T_v1^-1 L^-1 L T_1 T_2 = T_v1^-1 T_1 T_2,
+the matrix that takes vertices in an object's local coordinate system and places them in the ViewPlatform's coordinate system. Note that the high-resolution Locale transformations cancel each other out, which removes the need to actually transform points into high-resolution VirtualUniverse coordinates. The general formula of the matrix that transforms object coordinates to ViewPlatform coordinates is T_vn^-1 ... T_v2^-1 T_v1^-1 T_1 T_2 ... T_m.
+As mentioned earlier, the View object contains the remainder of the view information, specifically, the eye matrix, E, that takes points in the ViewPlatform's local coordinate system and translates them into the user's eye coordinate system, and the projection matrix, P, that projects objects in the eye's coordinate system into clipping coordinates. The final concatenation of matrices for rendering our shape object "S" on the specified Canvas3D is P E T_v1^-1 T_1 T_2. In general this is P E T_vn^-1 ... T_v2^-1 T_v1^-1 T_1 T_2 ... T_m.
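The cancellation argument above can be checked numerically. The following plain-Java sketch (not the Java 3D API; the row-major matrix layout and all names are illustrative) composes translation matrices standing in for L, T_1, T_2, and T_v1, and confirms that the huge Locale offset drops out of the composed object-to-ViewPlatform matrix.

```java
// Illustrative sketch only: 4x4 row-major matrices standing in for the
// Transform3D concatenation described in the text.
public class MatrixChain {
    // Multiply two 4x4 row-major matrices.
    static double[] mul(double[] a, double[] b) {
        double[] c = new double[16];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i * 4 + j] += a[i * 4 + k] * b[k * 4 + j];
        return c;
    }

    // A pure translation matrix (its inverse is translate(-x, -y, -z)).
    static double[] translate(double x, double y, double z) {
        return new double[] {1,0,0,x, 0,1,0,y, 0,0,1,z, 0,0,0,1};
    }

    // Apply a matrix to the point (x, y, z, 1); returns (x', y', z').
    static double[] apply(double[] m, double x, double y, double z) {
        return new double[] {
            m[0]*x + m[1]*y + m[2]*z + m[3],
            m[4]*x + m[5]*y + m[6]*z + m[7],
            m[8]*x + m[9]*y + m[10]*z + m[11]};
    }

    public static void main(String[] args) {
        double[] L   = translate(1e6, 0, 0);  // stand-in for a Locale offset
        double[] Li  = translate(-1e6, 0, 0); // L^-1
        double[] T1  = translate(0, 2, 0);
        double[] T2  = translate(0, 0, 3);
        double[] Tv  = translate(5, 0, 0);    // view platform transform
        double[] Tvi = translate(-5, 0, 0);   // Tv^-1

        // Tv^-1 L^-1 L T1 T2: the Locale terms cancel, so the large offset
        // never appears in the composed object-to-ViewPlatform matrix.
        double[] full = mul(mul(mul(mul(Tvi, Li), L), T1), T2);
        double[] cancelled = mul(mul(Tvi, T1), T2);
        for (int i = 0; i < 16; i++)
            if (full[i] != cancelled[i]) throw new AssertionError();
        double[] p = apply(full, 0, 0, 0);
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // -5.0 2.0 3.0
    }
}
```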
+The details of how Java 3D constructs the matrices E +and P in different end-user configurations are +described in "View Model Details." +
++
+Java 3D supports multiple high-resolution Locales. In some +cases, +these +Locales are close enough to each other that they can "see" each other, +meaning that objects can be rendered even though they are not in the +same Locale as the ViewPlatform object that is attached to the View. +Java 3D automatically handles this case without the application +having +to do anything. As in the previous example, where the ViewPlatform and +the object being rendered are attached to the same Locale, Java 3D +internally constructs the appropriate matrices for cases in which the +ViewPlatform and the object being rendered are not attached +to the same Locale. +
+Let's take two Locales, L1 and L2, with the View attached to a ViewPlatform in L1. According to our general formula, the modeling transformation, the transformation that takes points in object coordinates and transforms them into VirtualUniverse coordinates, is L T_1 T_2 ... T_m. In our specific example, a point in Locale L2 would be transformed into VirtualUniverse coordinates by L_2 T_1 T_2 ... T_m. The view platform transformation would be (L_1 T_v1 T_v2 ... T_vn)^-1, or T_vn^-1 ... T_v2^-1 T_v1^-1 L_1^-1. Composing these two matrices gives us T_vn^-1 ... T_v2^-1 T_v1^-1 L_1^-1 L_2 T_1 T_2 ... T_m.
+Thus, to render objects in another Locale, it is sufficient to compute L_1^-1 L_2 and use that as the starting matrix when composing the model transformations. Given that a Locale is represented by a single high-resolution coordinate position, the transformation L_1^-1 L_2 is a simple translation by L_2 - L_1. Again, it is not actually necessary to transform points into high-resolution VirtualUniverse coordinates.
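As a sketch of this reduction (plain Java, not the Java 3D API): the starting matrix L_1^-1 L_2 amounts to adding the offset L_2 - L_1 to object coordinates. Doubles stand in here for the 256-bit fixed-point Locale positions, and all names are illustrative.

```java
// Illustrative sketch: rendering a point that lives in Locale L2 from a
// ViewPlatform attached to Locale L1.
public class LocaleOffset {
    // The starting matrix L1^-1 L2 is just a translation by (L2 - L1),
    // so a point is re-expressed in the viewer's Locale by adding it.
    static double[] toViewerLocale(double[] l1, double[] l2, double[] p) {
        return new double[] {
            p[0] + (l2[0] - l1[0]),
            p[1] + (l2[1] - l1[1]),
            p[2] + (l2[2] - l1[2])};
    }

    public static void main(String[] args) {
        double[] l1 = {1.0e9, 0, 0};        // viewer's Locale position
        double[] l2 = {1.0e9 + 50, 0, 0};   // neighboring Locale position
        double[] p = toViewerLocale(l1, l2, new double[] {1, 2, 3});
        System.out.println(p[0] + " " + p[1] + " " + p[2]); // 51.0 2.0 3.0
    }
}
```

The huge shared magnitude (1.0e9) cancels in the subtraction, which is the point of the text: only the small difference L_2 - L_1 ever enters the rendering math.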
+In general, Locales that are close enough that the difference in +their +high-resolution coordinates can be represented in double precision by a +noninfinite value are close enough to be rendered. In practice, more +sophisticated culling techniques can be used to render only those +Locales that really are "close enough." +
++
+An application must create a minimal set of Java 3D objects +before +Java +3D can render to a display device. In addition to a Canvas3D object, +the application must create a View object, with its associated +PhysicalBody and PhysicalEnvironment objects, and the following scene +graph elements: +
+An application programmer writing a 3D +graphics program that will deploy on a variety of platforms must +anticipate the likely end-user environments and must carefully +construct the view transformations to match those characteristics using +a low-level API. This appendix addresses many of the issues an +application must face and describes the sophisticated features that +Java 3D's advanced view model provides. +
++
+Java 3D must handle two rather different head-tracking +situations. +In one case, we rigidly attach a tracker's base, +and thus its coordinate frame, to the display environment. This +corresponds to placing a tracker base in a fixed position and +orientation relative to a projection screen within a room, to a +computer display on a desk, or to the walls of a multiple-wall +projection display. In the second head-tracking situation, we rigidly +attach a tracker's sensor, not its base, to the display +device. This corresponds to rigidly attaching one of that tracker's +sensors to a head-mounted display and placing the tracker base +somewhere within the physical environment. +
++
+The following two examples show how end-user environments can +significantly affect how an application must construct viewing +transformations. +
++
+By adding a left and right screen, we give the magic carpet rider a +more complete view of the virtual world surrounding the carpet. Now our +end user sees the view to the left or right of the magic carpet by +turning left or right. +
++
+From a camera-based perspective, the application developer must +construct the camera's position and orientation by combining the +virtual-world component (the position and orientation of the magic +carpet) and the physical-world component (the user's instantaneous head +position and orientation). +
+Java 3D's view model incorporates the appropriate abstractions +to +compensate automatically for such variability in end-user hardware +environments. +
++
++
++
+The coexistence coordinate system exists half in the virtual world and half in the physical world. The two transforms that go from the coexistence coordinate system to the virtual world coordinate system and back again contain all the information needed to expand or shrink the virtual world relative to the physical world. They also contain the information needed to position and orient the virtual world relative to the physical world.
+Modifying the transform that maps the coexistence coordinate system +into the virtual world coordinate system changes what the end user can +see. The Java 3D application programmer moves the end user within +the +virtual world by modifying this transform. +
++
++
++
++
++
++A multiple-projection wall display presents a more exotic environment. +Such environments have multiple screens, typically three or more. Figure +9 shows a scene graph fragment representing such a system, and Figure +10 shows the corresponding display environment. +
+ ++
++
++A multiple-screen environment requires more care during the +initialization and calibration phase. Java 3D must know how the +Screen3Ds are placed with respect to one another, the tracking device, +and the physical portion of the coexistence coordinate system. +
++
+The "Generating a View" section describes how Java 3D generates a view for a standard flat-screen display with no head tracking. In this section, we describe how Java 3D generates a view in a room-mounted, head-tracked display environment: either a computer monitor with shutter glasses and head tracking or a multiple-wall display with head-tracked shutter glasses. Finally, we describe how Java 3D generates view matrices in a head-mounted and head-tracked display environment.
+If any of the parameters of a View object are updated, this will +effect +a change in the implicit viewing transform (and thus image) of any +Canvas3D that references that View object. +
++
+A camera-based view model allows application programmers to think +about +the images displayed on the computer screen as if a virtual camera took +those images. Such a view model allows application programmers to +position and orient a virtual camera within a virtual scene, to +manipulate some parameters of the virtual camera's lens (specify its +field of view), and to specify the locations of the near and far +clipping planes. +
+Java 3D allows applications to enable compatibility mode for
+room-mounted, non-head-tracked display environments or to disable
+compatibility mode using the following methods. Camera-based viewing
+functions are available only in compatibility mode. The setCompatibilityModeEnable
+method turns compatibility mode on or off. Compatibility mode is
+disabled by default.
+
Note: Use of these view-compatibility +functions will disable some of Java 3D's view model features and +limit +the portability of Java 3D programs. These methods are primarily +intended to help jump-start porting of existing applications. +
+The various parameters that users control in a +camera-based view model specify the shape of a viewing volume (known as +a frustum because of its truncated pyramidal shape) and locate that +frustum within the virtual environment. The rendering pipeline uses the +frustum to decide which objects to draw on the display screen. The +rendering pipeline does not draw objects outside the view frustum, and +it clips (partially draws) objects that intersect the frustum's +boundaries. +
+Though a view frustum's specification may have many items in common +with those of a physical camera, such as placement, orientation, and +lens settings, some frustum parameters have no physical analog. Most +noticeably, a frustum has two parameters not found on a physical +camera: the near and far clipping planes. +
+ ++
++The location of the near and far clipping planes allows the application programmer to specify which objects Java 3D should not draw. Objects too far away from the current eyepoint usually do not result in interesting images. Those too close to the eyepoint might obscure the interesting objects. By carefully specifying near and far clipping planes, an application programmer can control which objects the renderer will not draw.
+From the perspective of the display device, the virtual camera's +image +plane corresponds to the display screen. The camera's placement, +orientation, and field of view determine the shape of the view frustum. +
++
+The camera-based view model allows Java 3D to bridge the gap +between +existing 3D code and Java 3D's view model. By using the +camera-based +view model methods, a programmer retains the familiarity of the older +view model but gains some of the flexibility afforded by Java 3D's +new +view model. +
+The traditional camera-based view model is supported in Java 3D by helper methods in the Transform3D object. These methods were explicitly designed to resemble as closely as possible the view functions of older packages and thus should be familiar to most 3D programmers. The resulting Transform3D objects can be used to set compatibility-mode transforms in the View object.
++
+The Transform3D object provides a lookAt
utility
+method
+to create a
+viewing matrix. This method specifies the position and orientation of
+a viewing transform. It works similarly to the equivalent function in
+OpenGL. The inverse of this transform can be used to control the
+ViewPlatform object within the scene graph. Alternatively, this
+transform can be passed directly to the View's VpcToEc
+transform via the compatibility-mode viewing functions. The setVpcToEc
+method is used to set the viewing matrix when in compatibility mode.
+
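For readers unfamiliar with the OpenGL-style function referenced above, here is a plain-Java sketch of what such a lookAt matrix computes (this is not the Transform3D implementation; the row-major layout and names are illustrative): it moves the eye to the origin with the view direction down -Z, matching the eye coordinate convention described below.

```java
// Illustrative sketch of a lookAt-style viewing matrix.
public class LookAtSketch {
    static double[] norm(double[] v) {
        double n = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[] {v[0]/n, v[1]/n, v[2]/n};
    }
    static double[] cross(double[] a, double[] b) {
        return new double[] {a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
    }
    // Returns a 4x4 row-major viewing matrix for the given eye, target, up.
    static double[] lookAt(double[] eye, double[] center, double[] up) {
        double[] f = norm(new double[] {center[0]-eye[0], center[1]-eye[1], center[2]-eye[2]});
        double[] s = norm(cross(f, up));   // side vector
        double[] u = cross(s, f);          // recomputed up vector
        return new double[] {
            s[0], s[1], s[2], -(s[0]*eye[0] + s[1]*eye[1] + s[2]*eye[2]),
            u[0], u[1], u[2], -(u[0]*eye[0] + u[1]*eye[1] + u[2]*eye[2]),
            -f[0], -f[1], -f[2], f[0]*eye[0] + f[1]*eye[1] + f[2]*eye[2],
            0, 0, 0, 1};
    }
    static double[] xform(double[] m, double[] p) {
        return new double[] {
            m[0]*p[0] + m[1]*p[1] + m[2]*p[2] + m[3],
            m[4]*p[0] + m[5]*p[1] + m[6]*p[2] + m[7],
            m[8]*p[0] + m[9]*p[1] + m[10]*p[2] + m[11]};
    }
    public static void main(String[] args) {
        // Eye at (0,0,5) looking at the origin: the eye maps to (0,0,0)
        // and the target ends up on the -Z axis, 5 units away.
        double[] m = lookAt(new double[] {0,0,5}, new double[] {0,0,0}, new double[] {0,1,0});
        System.out.println(xform(m, new double[] {0,0,0})[2]); // -5.0
    }
}
```

The inverse of such a matrix positions the viewer in the world, which is why the text says the inverse of the lookAt transform can drive the ViewPlatform.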
The Transform3D object provides three methods for
+creating a projection matrix: frustum
, perspective
,
+and ortho
. All three map points from eye coordinates
+(EC) to clipping coordinates (CC). Eye coordinates are defined such
+that (0, 0, 0) is at the eye and the projection plane is at z
+= -1.
+
The frustum
method
+establishes a perspective projection with the eye at the apex of a
+symmetric view frustum. The transform maps points from eye coordinates
+to clipping coordinates. The clipping coordinates generated by the
+resulting transform are in a right-handed coordinate system (as are all
+other coordinate systems in Java 3D).
+
The arguments define the frustum and its associated perspective
+projection: (left
, bottom
, -near)
+and (right
, top
, -near)
+specify the point on the near clipping plane that maps onto the
+lower-left and upper-right corners of the window, respectively. The -far
+parameter specifies the far clipping plane. See Figure
+12.
+
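The corner mapping just described can be made concrete with a plain-Java sketch of an OpenGL-style frustum matrix (this is not the Transform3D source; layout and names are illustrative assumptions). Applying it to (left, bottom, -near) and dividing by w yields the lower-left corner of the canonical view volume.

```java
// Illustrative sketch of a frustum projection matrix, row-major 4x4.
public class FrustumSketch {
    static double[] frustum(double l, double r, double b, double t, double n, double f) {
        return new double[] {
            2*n/(r-l), 0,         (r+l)/(r-l),  0,
            0,         2*n/(t-b), (t+b)/(t-b),  0,
            0,         0,         -(f+n)/(f-n), -2*f*n/(f-n),
            0,         0,         -1,           0};
    }
    // Transform (x, y, z, 1) and divide by w.
    static double[] project(double[] m, double x, double y, double z) {
        double cx = m[0]*x + m[1]*y + m[2]*z + m[3];
        double cy = m[4]*x + m[5]*y + m[6]*z + m[7];
        double cz = m[8]*x + m[9]*y + m[10]*z + m[11];
        double cw = m[12]*x + m[13]*y + m[14]*z + m[15];
        return new double[] {cx/cw, cy/cw, cz/cw};
    }
    public static void main(String[] args) {
        double[] m = frustum(-2, 2, -1, 1, 1, 100);
        // (left, bottom, -near) lands on the lower-left corner of the window:
        double[] p = project(m, -2, -1, -1);
        System.out.println(p[0] + " " + p[1]); // -1.0 -1.0
    }
}
```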
The perspective
method establishes a perspective
+projection with the eye at the apex of a symmetric view frustum,
+centered about the Z-axis,
+with a fixed field of view. The resulting perspective projection
+transform mimics a standard camera-based view model. The transform maps
+points from eye coordinates to clipping coordinates. The clipping
+coordinates generated by the resulting transform are in a right-handed
+coordinate system.
+
The arguments define the frustum and its associated perspective
+projection: -near
and -far
specify the near
+and far clipping planes; fovx
specifies the field of view
+in the X dimension, in radians; and aspect
+specifies the aspect ratio of the window. See Figure
+13.
+
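A field-of-view specification like the one above can be reduced to the explicit frustum bounds used by the frustum method. The following plain-Java sketch (an assumption about the usual derivation, not the Transform3D source) computes the near-plane bounds from fovx and the aspect ratio, taken here as width/height.

```java
// Illustrative sketch: deriving symmetric frustum bounds from a
// horizontal field of view and a window aspect ratio.
public class PerspectiveBounds {
    // Returns {left, right, bottom, top} on the near clipping plane.
    static double[] bounds(double fovx, double aspect, double near) {
        double right = near * Math.tan(fovx / 2.0); // half-width at the near plane
        double top = right / aspect;                // aspect = width / height
        return new double[] {-right, right, -top, top};
    }

    public static void main(String[] args) {
        // 90-degree horizontal field of view, 2:1 window, near plane at 1:
        // right = tan(45 degrees) = 1, top = 0.5 (up to rounding).
        double[] b = bounds(Math.PI / 2.0, 2.0, 1.0);
        System.out.printf("left=%.3f right=%.3f bottom=%.3f top=%.3f%n",
                b[0], b[1], b[2], b[3]);
    }
}
```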
+
++
+
+The ortho
method
+establishes a parallel projection. The orthographic projection
+transform mimics a standard camera-based view model. The transform
+maps points from eye coordinates to clipping coordinates. The clipping
+coordinates generated by the resulting transform are in a right-handed
+coordinate system.
+
The arguments define a rectangular box used for projection: (left
,
+bottom
, -near)
and (right
, top
,
+-near)
+specify the point on the near clipping plane that maps onto the
+lower-left and upper-right corners of the window, respectively. The -far
+parameter specifies the far clipping plane. See Figure
+14.
+
+
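The parallel projection just described can also be sketched in plain Java (OpenGL-style; this is an illustrative assumption, not the Transform3D source). The rectangular box maps affinely onto the canonical view volume, with no perspective divide needed.

```java
// Illustrative sketch of an orthographic projection matrix, row-major 4x4.
public class OrthoSketch {
    static double[] ortho(double l, double r, double b, double t, double n, double f) {
        return new double[] {
            2/(r-l), 0,       0,        -(r+l)/(r-l),
            0,       2/(t-b), 0,        -(t+b)/(t-b),
            0,       0,       -2/(f-n), -(f+n)/(f-n),
            0,       0,       0,        1};
    }
    // Apply the (affine) matrix to the point (x, y, z, 1).
    static double[] apply(double[] m, double x, double y, double z) {
        return new double[] {m[0]*x + m[3], m[5]*y + m[7], m[10]*z + m[11]};
    }
    public static void main(String[] args) {
        double[] m = ortho(0, 4, 0, 2, 1, 11);
        // (left, bottom, -near) maps onto the lower-left corner:
        double[] p = apply(m, 0, 0, -1);
        System.out.println(p[0] + " " + p[1]); // -1.0 -1.0
    }
}
```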
++
+The setLeftProjection
+and setRightProjection
methods are used to set the
+projection matrices for the left eye and right eye, respectively, when
+in compatibility mode.
Java 3D's superstructure consists of one or more +VirtualUniverse objects, each of which contains a set of one or more +high-resolution Locale objects. The Locale objects, in turn, contain +collections of subgraphs that comprise the scene graph (see Figure +1). +
++
+Virtual universes are separate entities in that no node object may +exist in more than one virtual universe at any one time. Likewise, the +objects in one virtual universe are not visible in, nor do they +interact with objects in, any other virtual universe. +
+To support large virtual universes, Java 3D introduces the concept +of Locales that have high-resolution coordinates +as an origin. Think of high-resolution coordinates as "tie-downs" that +precisely anchor the locations of objects specified using less precise +floating-point coordinates that are within the range of influence of +the high-resolution coordinates. +
+A Locale, with its associated high-resolution coordinates, serves as the next level of representation down from a virtual universe. All virtual universes contain one or more high-resolution-coordinate Locales, and all other objects are attached to a Locale. High-resolution coordinates act as an upper-level, translation-only transform node. Thus, the coordinates of all objects attached to a particular Locale are relative to the location of that Locale's high-resolution coordinates.
+ ++
++While a virtual universe is similar to the traditional computer +graphics concept of a scene graph, a given virtual universe can become +so large that it is often better to think of a scene graph as the +descendant of a high-resolution-coordinate Locale. +
++
+If we "shrink" down to a small size (say, the size of an IC transistor), even very near (0.0, 0.0, 0.0), the same problem arises.
+If a large contiguous virtual universe is to be supported, some form +of +higher-resolution addressing is required. Thus the choice of 256-bit +positional components for "high-resolution" positions. +
++
+2^n meters | Units
+---------- | -----
+87.29      | Universe (20 billion light years)
+69.68      | Galaxy (100,000 light years)
+53.07      | Light year
+43.43      | Solar system diameter
+23.60      | Earth diameter
+10.65      | Mile
+9.97       | Kilometer
+0.00       | Meter
+-19.93     | Micron
+-33.22     | Angstrom
+-115.57    | Planck length
A 256-bit fixed-point number also has the advantage of being able to +directly represent nearly any reasonable single-precision +floating-point value exactly. +
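This exact-representation property can be sketched in plain Java with BigInteger (an illustrative model, not the Java 3D HiResCoord implementation; the placement of the binary point at bit 128 follows the high-resolution coordinate description, and the helper names are invented).

```java
import java.math.BigInteger;

// Illustrative sketch: a 256-bit fixed-point value with the binary point
// at bit 128. Scaling a (suitably ranged) double by 2^128 yields an exact
// integer, so such values round-trip without loss.
public class HiResSketch {
    static final int FRACTION_BITS = 128;

    // Convert meters (a double whose fraction fits in 128 bits) to fixed point.
    static BigInteger toFixed(double meters) {
        // BigDecimal(double) is exact, so no rounding happens here.
        return new java.math.BigDecimal(meters)
                .multiply(new java.math.BigDecimal(BigInteger.ONE.shiftLeft(FRACTION_BITS)))
                .toBigIntegerExact();
    }

    static double toDouble(BigInteger fixed) {
        // Division by a power of two always terminates in decimal.
        return new java.math.BigDecimal(fixed)
                .divide(new java.math.BigDecimal(BigInteger.ONE.shiftLeft(FRACTION_BITS)))
                .doubleValue();
    }

    public static void main(String[] args) {
        double meters = 0.125; // exactly representable in binary
        System.out.println(toDouble(toFixed(meters)) == meters); // true
    }
}
```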
+High-resolution coordinates in Java 3D are used only to embed more +traditional floating point coordinate systems within a much +higher-resolution substrate. In this way a visually seamless virtual +universe of any conceivable size or scale can be created, without worry +about numerical accuracy. +
++
+The semantics of how file loaders deal with high-resolution coordinates are up to the individual file loader, as Java 3D does not directly define any file-loading semantics. However, some general advice can be given (note that this advice is not officially part of the Java 3D specification).
+For "small" virtual universes (on the order of hundreds of meters +across in relative scale), a single Locale with high-resolution +coordinates at location (0.0, 0.0, 0.0) as the root node (below the +VirtualUniverse object) is sufficient; a loader can automatically +construct this node during the loading process, and the point in +high-resolution coordinates does not need any direct representation in +the external file. +
+Larger virtual universes are expected to be constructed much like computer directory hierarchies, that is, as a "root" virtual universe containing mostly external file references to embedded virtual universes. In this case, the file reference object (user-specific data hung off a Java 3D group or hi-res node) defines the location for the data to be read into the current virtual universe.
+The data file's contents should be parented to the file object node +while being read, thus inheriting the high-resolution coordinates of +the file object as the new relative virtual universe origin of the +embedded scene graph. If this scene graph itself contains +high-resolution coordinates, it will need to be offset (translated) by +the amount in the file object's high-resolution coordinates and then +added to the larger virtual universe as new high-resolution +coordinates, with their contents hung off below them. Once again, this +procedure is not part of the official Java 3D specification, but some +more details on the care and use of high-resolution coordinates in +external file formats will probably be available as a Java 3D +application note. +
+Authoring tools that directly support high-resolution coordinates +should create additional high-resolution coordinates as a user creates +new geometry "sufficiently" far away (or of different scale) from +existing high-resolution coordinates. +
+Semantics of widely moving objects. Most fixed and +nearly-fixed objects stay attached to the same high-resolution Locale. +Objects that make wide changes in position or scale may periodically +need to be reparented to a more appropriate high-resolution Locale. If +no appropriate high-resolution Locale exists, the application may need +to create a new one. +
+Semantics of viewing. The ViewPlatform object and +the +associated nodes in its hierarchy are very often widely moving objects. +Applications will typically attach the view platform to the most +appropriate high-resolution Locale. For display, all objects will first +have their positions translated by the difference between the location +of their high-resolution Locale and the view platform's high-resolution +Locale. (In the common case of the Locales being the same, no +translation is necessary.) +
+ + diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/intro.gif b/src/main/javadoc/org/jogamp/java3d/doc-files/intro.gif new file mode 100644 index 0000000..503f818 Binary files /dev/null and b/src/main/javadoc/org/jogamp/java3d/doc-files/intro.gif differ diff --git a/src/main/javadoc/org/jogamp/java3d/doc-files/intro.html b/src/main/javadoc/org/jogamp/java3d/doc-files/intro.html new file mode 100644 index 0000000..f5ea134 --- /dev/null +++ b/src/main/javadoc/org/jogamp/java3d/doc-files/intro.html @@ -0,0 +1,337 @@ + + + + ++This guide, which contains documentation formerly +published separately from the javadoc-generated API documentation, +is not an +official API specification. This documentation may contain references to +Java and Java 3D, both of which are trademarks of Sun Microsystems, Inc. +Any reference to these and other trademarks of Sun Microsystems is +for explanatory purposes only. Their use does not impart any rights beyond +those listed in the source code license. In particular, Sun Microsystems +retains all intellectual property and trademark rights as described in +the proprietary rights notice in the COPYRIGHT.txt file. + +
+The Java 3D API is an application +programming interface used for writing three-dimensional graphics +applications and applets. It gives developers high-level constructs for +creating and manipulating 3D geometry and for constructing the +structures used in rendering that geometry. Application developers can +describe very large virtual worlds using these constructs, which +provide Java 3D with enough information to render these worlds +efficiently. +
+Java 3D delivers Java's "write once, run anywhere" +benefit to +developers of 3D graphics applications. Java 3D is part of the +JavaMedia suite of APIs, making it available on a wide range of +platforms. It also integrates well with the Internet because +applications and applets written using the Java 3D API have access to +the entire set of Java classes. +
+The Java 3D API draws its ideas from existing
+graphics APIs and from
+new technologies. Java 3D's low-level graphics constructs synthesize
+the best ideas found in low-level APIs such as Direct3D, OpenGL,
+QuickDraw3D, and XGL. Similarly, its higher-level constructs synthesize
+the best ideas found in several scene graph-based systems. Java 3D
+introduces some concepts not commonly considered part of the graphics
+environment, such as 3D spatial sound. Java 3D's sound capabilities
+help to provide a more immersive experience for the user.
+
+
+The Java 3D API improves on previous graphics APIs by eliminating many of the bookkeeping and programming chores that those APIs impose. Java 3D allows the programmer to think about geometric objects rather than about triangles, and about the scene and its composition rather than about how to write the rendering code for efficiently displaying the scene.
++
+Subclasses must call "super.setXxxxx" for any attribute state set method that is overridden.
+Applications can extend Java 3D's classes and add +their own methods. +However, they may not override Java 3D's scene graph traversal +semantics because the nodes do not contain explicit traversal and draw +methods. Java 3D's renderer retains those semantics internally. +
+Java 3D does provide hooks for mixing +Java 3D-controlled scene graph rendering and user-controlled rendering +using Java 3D's immediate mode constructs (see "Mixed-Mode Rendering"). Alternatively, +the application can +stop Java 3D's renderer and do all its drawing in immediate mode (see "Pure Immediate-Mode Rendering"). +
+Behaviors require applications to extend the +Behavior object and to +override its methods with user-written Java code. These extended +objects should contain references to those scene graph objects that +they will manipulate at run time. The "Behaviors +and Interpolators" document describes Java 3D's behavior +model. +
++
+Additionally, leaving the details of rendering to +Java 3D allows it to +tune the rendering to the underlying hardware. For example, relaxing +the strict rendering order imposed by other APIs allows parallel +traversal as well as parallel rendering. Knowing which portions of the +scene graph cannot be modified at run time allows Java 3D to flatten +the tree, pretransform geometry, or represent the geometry in a native +hardware format without the need to keep the original data. +
++
+Java 3D implementations are expected to provide +useful rendering rates +on most modern PCs, especially those with 3D graphics accelerator +cards. On midrange workstations, Java 3D is expected to provide +applications with nearly full-speed hardware performance. +
+Finally, Java 3D is designed to scale as the +underlying hardware +platforms increase in speed over time. Tomorrow's 3D PC game +accelerators will support more complex virtual worlds than high-priced +workstations of a few years ago. Java 3D is prepared to meet this +increase in hardware performance. +
++
+This section illustrates how a developer might +structure a Java 3D application. The simple application in this example +creates a scene graph that draws an object in the middle of a window +and rotates the object about its center point. +
+The scene graph for the sample application is shown below. +
+The scene graph consists of superstructure +components—a VirtualUniverse +object and a Locale object—and a set of branch graphs. Each branch +graph is a subgraph that is rooted by a BranchGroup node that is +attached to the superstructure. For more information, see "Scene Graph Basics." +
+ ++
++A VirtualUniverse object defines a named universe. Java 3D permits the +creation of more than one universe, though the vast majority of +applications will use just one. The VirtualUniverse object provides a +grounding for scene graphs. All Java 3D scene graphs must connect to a +VirtualUniverse object to be displayed. For more information, see "Scene Graph Superstructure." +
+Below the VirtualUniverse object is a Locale object. +The Locale object +defines the origin, in high-resolution coordinates, of its attached +branch graphs. A virtual universe may contain as many Locales as +needed. In this example, a single Locale object is defined with its +origin at (0.0, 0.0, 0.0). +
+The scene graph itself starts with the BranchGroup +nodes. +A BranchGroup serves as the root of a +subgraph, called a branch graph, of the scene graph. Only +BranchGroup objects can attach to Locale objects. +
+In this example there are two branch graphs and, +thus, two BranchGroup +nodes. Attached to the left BranchGroup are two subgraphs. One subgraph +consists of a user-extended Behavior leaf node. The Behavior node +contains Java code for manipulating the transformation matrix +associated with the object's geometry. +
+The other subgraph in this BranchGroup consists of a +TransformGroup +node that specifies the position (relative to the Locale), orientation, +and scale of the geometric objects in the virtual universe. A single +child, a Shape3D leaf node, refers to two component objects: a Geometry +object and an Appearance object. The Geometry object describes the +geometric shape of a 3D object (a cube in our simple example). The +Appearance object describes the appearance of the geometry (color, +texture, material reflection characteristics, and so forth). +
The right BranchGroup has a single subgraph that consists of a TransformGroup node and a ViewPlatform leaf node. The TransformGroup specifies the position (relative to the Locale), orientation, and scale of the ViewPlatform. This transformed ViewPlatform object defines the end user's view within the virtual universe.
Finally, the ViewPlatform is referenced by a View object that specifies all of the parameters needed to render the scene from the point of view of the ViewPlatform. Also referenced by the View object are other objects that contain information, such as the drawing canvas into which Java 3D renders, the screen that contains the canvas, and information about the physical environment.
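A view branch with these objects can be wired up by hand, roughly as follows (a sketch; the viewing distance is an assumed value, and in practice the simple universe utility function described below performs this work for you):

```java
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Canvas3D;
import javax.media.j3d.PhysicalBody;
import javax.media.j3d.PhysicalEnvironment;
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.media.j3d.View;
import javax.media.j3d.ViewPlatform;
import javax.vecmath.Vector3f;

public class ViewBranchSketch {
    static BranchGroup createViewBranch(Canvas3D canvas) {
        // The TransformGroup moves the ViewPlatform back from the
        // origin so content placed there falls within the view.
        Transform3D t = new Transform3D();
        t.setTranslation(new Vector3f(0.0f, 0.0f, 2.4f));
        TransformGroup tg = new TransformGroup(t);

        ViewPlatform vp = new ViewPlatform();
        tg.addChild(vp);

        // The View object ties the ViewPlatform to the rendering
        // canvas and to the physical body/environment descriptions.
        View view = new View();
        view.setPhysicalBody(new PhysicalBody());
        view.setPhysicalEnvironment(new PhysicalEnvironment());
        view.addCanvas3D(canvas);
        view.attachViewPlatform(vp);

        BranchGroup branch = new BranchGroup();
        branch.addChild(tg);
        return branch;
    }
}
```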
The following steps are taken by the example program to create the scene graph elements and link them together. Java 3D then renders the scene graph and displays the graphics in a window on the screen:
1. Create a Canvas3D object and add it to the Applet panel.
2. Create a BranchGroup as the root of the scene branch graph.
3. Construct a Shape3D node with a TransformGroup node above it.
4. Attach a RotationInterpolator behavior to the TransformGroup.
5. Call the simple universe utility function to do the following:
a. Establish a virtual universe with a single high-resolution Locale.
b. Create the PhysicalBody, PhysicalEnvironment, View, and ViewPlatform objects.
c. Create a BranchGroup as the root of the view platform branch graph.
d. Insert the view platform branch graph into the Locale.
6. Insert the scene branch graph into the universe's Locale.
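These steps can be sketched with the SimpleUniverse utility (a hedged approximation of HelloUniverse: the 4-second Alpha cycle and the cube size are assumed values, and the AWT frame that would host the canvas is omitted):

```java
import java.awt.GraphicsConfiguration;
import javax.media.j3d.Alpha;
import javax.media.j3d.BoundingSphere;
import javax.media.j3d.BranchGroup;
import javax.media.j3d.Canvas3D;
import javax.media.j3d.RotationInterpolator;
import javax.media.j3d.TransformGroup;
import javax.vecmath.Point3d;
import com.sun.j3d.utils.geometry.ColorCube;
import com.sun.j3d.utils.universe.SimpleUniverse;

public class HelloSketch {
    public static void main(String[] args) {
        // Step 1: create the drawing canvas.
        GraphicsConfiguration config =
            SimpleUniverse.getPreferredConfiguration();
        Canvas3D canvas = new Canvas3D(config);

        // Steps 2-4: scene branch graph containing a spinning cube.
        BranchGroup scene = new BranchGroup();
        TransformGroup spinner = new TransformGroup();
        spinner.setCapability(TransformGroup.ALLOW_TRANSFORM_WRITE);
        spinner.addChild(new ColorCube(0.4));

        Alpha alpha = new Alpha(-1, 4000);       // loop forever, 4 s/cycle
        RotationInterpolator rotator =
            new RotationInterpolator(alpha, spinner);
        rotator.setSchedulingBounds(
            new BoundingSphere(new Point3d(), 100.0));
        spinner.addChild(rotator);
        scene.addChild(spinner);
        scene.compile();

        // Steps 5-6: SimpleUniverse builds the Locale, View, and
        // ViewPlatform objects, then the scene branch is inserted.
        SimpleUniverse universe = new SimpleUniverse(canvas);
        universe.getViewingPlatform().setNominalViewingTransform();
        universe.addBranchGraph(scene);
    }
}
```

The behavior's scheduling bounds must intersect the view's activation volume, or the interpolator never wakes up and the cube does not spin.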
The Java 3D renderer then starts running in an infinite loop. The renderer conceptually performs the following operations:
while(true) {
    Process input
    If (request to exit) break
    Perform Behaviors
    Traverse the scene graph and render visible objects
}
Cleanup and exit
See the code fragments from the simple program HelloUniverse.java, which creates a cube and a RotationInterpolator behavior object that rotates the cube at a constant rate of pi/2 radians per second.
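The stated rate follows from the interpolator's timing: a RotationInterpolator sweeps a full rotation of 2*pi radians per Alpha cycle, so a 4-second cycle (the value assumed here) yields pi/2 radians per second. Plain Java confirms the arithmetic:

```java
public class RotationRate {
    public static void main(String[] args) {
        double radiansPerCycle = 2 * Math.PI; // one full rotation per cycle
        double cycleSeconds = 4.0;            // assumed Alpha cycle length
        double rate = radiansPerCycle / cycleSeconds;
        System.out.println(rate == Math.PI / 2); // prints "true": pi/2 rad/s
    }
}
```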
Here are other documents that provide explanatory material, previously included as part of the Java 3D API Specification Guide.
Provides the core set of classes for the 3D graphics API for the Java platform, including explanatory material that was formerly found in the guide.
The 3D API is an application programming interface used for writing three-dimensional graphics applications and applets. It gives developers high-level constructs for creating and manipulating 3D geometry and for constructing the structures used in rendering that geometry. Application developers can describe very large virtual worlds using these constructs, which provide the runtime system with enough information to render these worlds efficiently.