X3D Crease Angle Interpretation

 


From: Justin Couch <justin@vlc.com.au>
Date: Wednesday, 8 September 2004 5:21 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: [h-anim] H-anim with hardware shaders issues

Apologies if anyone else has raised this issue, but the link to the
archives is dead so I can't backtrack any previous discussions.

As some of you are aware, we got a NIST SBIR grant that specifically
requires implementing the H-Anim spec using hardware acceleration of
the mesh structures. As a general principle this is fine, and is a
reasonably easy task. We've managed to get Xj3D up to the point of doing
mesh animation work in software relatively easily.

However, we're having a rather hard time dealing with the X3D spec and
the way it has H-Anim integrate with the rest of the geometry
definitions. The major issue that we're running into is the requirement
that the skin coordinates have to be readable again after any mesh
deformations - in particular when combined with a node like IndexedFaceSet.

At the top level, Humanoid has the skin field, which is the root cause
of the problem. Allowing arbitrary geometry here means that for the
majority of the time we can't use hardware acceleration. Almost every
model we've seen uses an IFS. The user provides the coordinate
information in the skinCoord field, then a Shape/IFS combo in the skin
field. Because of this, all the coordinates generated have to be placed
back into the Coordinate node's fields so that the IFS picks them up and
can do its own handling of the data - particularly normal generation
etc. It's the requirement that we must put those coordinates back into
the Coordinate node that prevents any form of hardware acceleration
(particularly if the user then wants to do something like read the
coordinates out). Anything that a shader touches cannot be read back
from the hardware, meaning that mesh deformation either has to ignore
large parts of the X3D specification, or we can't use hardware
acceleration.

The only way we can see to get around this is to not use the skin field
at all (ignore it) and turn Humanoid into a Shape proxy - adding extra
fields for things like an Appearance, texture coordinates etc - and then
doing all the processing internally.  It would be something along the
lines of this:

HAnimHumanoid : X3DChildNode, X3DBoundedObject {
   SFVec3f    [in,out] center            0 0 0    (-∞,∞)
   MFString   [in,out] info              []
   MFNode     [in,out] joints            []       [HAnimJoint]
   SFNode     [in,out] metadata          NULL     [X3DMetadataObject]
   SFString   [in,out] name              ""
   SFRotation [in,out] rotation          0 0 1 0  (-∞,∞)|[-1,1]
   SFVec3f    [in,out] scale             1 1 1    (0,∞)
   SFRotation [in,out] scaleOrientation  0 0 1 0  (-∞,∞)|[-1,1]
   MFNode     [in,out] segments          []       [HAnimSegment]
   MFNode     [in,out] sites             []       [HAnimSite]
   MFNode     [in,out] skeleton          []       [HAnimJoint]
   SFVec3f    [in,out] translation       0 0 0    (-∞,∞)
   SFString   [in,out] version           ""
   MFNode     [in,out] viewpoints        []       [Viewpoint]
   SFVec3f    []       bboxCenter        0 0 0    (-∞,∞)
   SFVec3f    []       bboxSize          -1 -1 -1 [0,∞) or -1 -1 -1
   SFNode     []       skinCoord         NULL     [X3DCoordinateNode]
   SFNode     []       skinNormal        NULL     [X3DNormalNode]
   SFNode     []       skinTextureCoords NULL     [X3DTextureCoordinateNode]
   SFNode     []       skinColor         NULL     [X3DColorNode]
   SFNode     [in,out] appearance        NULL     [X3DAppearanceNode]
   MFInt32    []       skinFaceIndex     []
}

Has anyone attempted to implement a shader version of the H-Anim spec,
and can anyone suggest workarounds or some sort of functionality that is
marginally sane?

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Look through the lens, and the light breaks down into many lights.
  Turn it or move it, and a new set of arrangements appears... is it
  a single light or many lights, lights that one must know how to
  distinguish, recognise and appreciate? Is it one light with many
  frames or one frame for many lights?"      -Subcomandante Marcos
-------------------------------------------------------------------


From: thyme <techuelife@tpg.com.au>
Date: Wednesday, 8 September 2004 12:36 PM
To: Justin Couch <justin@vlc.com.au>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Hi Justin and H-Anim list :)

Thanks Justin for raising this issue. I have thought about this before
and my conclusion is that if the Coordinate's point values are meant to
be read in a deformed state, this is not a good design with hardware
acceleration in mind.
Does the spec actually state the Coordinate's point values must be read
in a deformed state?
I implemented HAnimJoint nodes for Seamless3d some months ago using
DirectX 8.1 hardware acceleration (the code would be near identical to
DirectX 9 as far as I know). This means you can open an avatar output
by Seamless3d as H-Anim nodes and view these standard X3D H-Anim nodes
being animated using hardware acceleration in Seamless3d, so long as
standard interpolators are used for animation. (Script nodes don't run
inside Seamless3d.)
In Seamless3d the user can read the point values at any time, but they
never get deformed by animation. This is consistent with how point
values do not get deformed by a Transform node.
However, if the spec requires them to be read deformed, I would think it
would not be too difficult (at least in C++, using operator overloading)
to make the vertices use the hardware normally, but fall back to
software processing whenever the point field is read, so that whatever
reads the point field sees the deformed values. This should result in
normal, efficient hardware rendering with no penalty most of the time.
Hopefully point values would rarely be read in an X3D file.
Perhaps this would be quite simple to code behind the scenes for a point
field, but I doubt many would think it worth the extra code just to
conform to the specs.
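
Something along these lines, as a very rough sketch (Java here rather
than the C++ I would use, and every name is made up for illustration,
so don't read it as any real browser's API):

// Hypothetical sketch of the fallback described above: render on the
// GPU as usual, but if a script or ROUTE actually reads the point
// field, fall back to a software skinning pass so the deformed values
// can be returned. All names are illustrative only.
public class LazyDeformedCoordinate {
    private final float[] restPoints;      // points as authored in the file
    private float[] deformedPoints;        // CPU-side copy, computed on demand
    private boolean deformedValid = false; // cleared whenever joints move

    public LazyDeformedCoordinate(float[] restPoints) {
        this.restPoints = restPoints;
    }

    /** Called by the humanoid each frame; cheap, no CPU skinning yet. */
    public void markDirty() {
        deformedValid = false;
    }

    /** Only here do we pay for software skinning, and only if asked. */
    public float[] getPoint(Skinner skinner) {
        if (!deformedValid) {
            deformedPoints = skinner.skinOnCpu(restPoints); // slow path
            deformedValid = true;
        }
        return deformedPoints;
    }
}

interface Skinner {
    float[] skinOnCpu(float[] restPoints);
}
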
I have also implemented HAnimDisplacer nodes so that they can be
directly animated in Seamless3d too, but I have used a little software
processing along with the hardware acceleration for this node. I think
the HAnimDisplacer node is perhaps a greater challenge to implement
efficiently for hardware acceleration than the HAnimJoint node.

I posted to this list some months ago about some of the possible
problems I can see in the design of the H-Anim nodes. My own
Seamless3d-specific nodes bypass a number of problems because they were
designed free from the restrictions imposed by the H-Anim specs. I feel
my own design makes multiple material settings within a single mesh
easier to implement, more modular, smaller in file size, possibly more
efficient to render and simpler for the end user to comprehend.

> The only way we can see to get around this is to not use the skin field
> at all (ignore it) and turn Humanoid into a Shape proxy - adding extra
> fields for things like an Appearance, texture coordinates etc - and then
> doing all the processing internally.

This is why in the end I decided not to use standard nodes like Shape
and IndexedFaceSet in my own single skin mesh Seamless nodes. I can
appreciate the attempt to make components modular in X3D, but here is an
example where I cannot see how reusing existing nodes for the rendering
role makes life any easier. It only creates complexity and workarounds
for the programmer writing a browser and adds unnecessary complexity for
the end user.

best wishes
thyme
creator of Seamless3d and Techuelife Island
http://www4.tpg.com.au/users/gperrett/seamless3d/index.html



From: Joe D Williams <JOEDWIL@earthlink.net>
Date: Thursday, 9 September 2004 12:54 AM
To: Justin Couch <justin@vlc.com.au>; VRML Human Animation working group <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Justin,

> HAnimHumanoidGameThing : X3DChildNode, X3DBoundedObject {
>    SFVec3f    [in,out] center            0 0 0    (-∞,∞)
>    MFString   [in,out] info              []
>    SFNode     [in,out] metadata          NULL     [X3DMetadataObject]
>    SFString   [in,out] name              ""
>    SFRotation [in,out] rotation          0 0 1 0  (-∞,∞)|[-1,1]
>    SFVec3f    [in,out] scale             1 1 1    (0,∞)
>    SFRotation [in,out] scaleOrientation  0 0 1 0  (-∞,∞)|[-1,1]
>    SFVec3f    [in,out] translation       0 0 0    (-∞,∞)
>    SFString   [in,out] version           ""
>    SFVec3f    []       bboxCenter        0 0 0    (-∞,∞)
>    SFVec3f    []       bboxSize          -1 -1 -1 [0,∞) or -1 -1 -1
>    SFNode     []       skinCoord         NULL     [X3DCoordinateNode]
>    SFNode     []       skinNormal        NULL     [X3DNormalNode]
>    SFNode     []       skinTextureCoords NULL     [X3DTextureCoordinateNode]
>    SFNode     []       skinColor         NULL     [X3DColorNode]
>    SFNode     [in,out] appearance        NULL     [X3DAppearanceNode]
>    MFInt32    []       skinFaceIndex     []
> }
>

I think you need a different node.
The idea I have been working on is that one reason the skin coords need
to be available live in the node is due to the method used to animate:
move a joint and the 'attached' segment, site and skin also move. All
the parts are available and can communicate position with each other.
This gives a way to simulate what is really happening in a physical
model.

What you suggest is fine for animating the skin without regard to
anything else that might need to be animated along with it outside the
'shader' node. Just get a mesh and do whatever you wish to it in the
shader mini scene graph.
How is collision done?
How do you know the bounding boxes?
If Site(s) are 'attached' to skin, the only way to get their actual
position out to the rest of the scene may be to get updated skin
coords.

All in all, this effort should not be confused with advancement of the
current H-Anim spec, but instead devoted to an entirely new special
purpose extension profile, called something like 'H-Anim Game Rendering
Component', with no internals or complications like Sites - well, there
could now be volumetric shapes, but really just skin.

But so far, from what I have seen, this is too big a break from what is
there now - I mean it is really a totally different problem than the
current node is set to address.

Thank You and Best Regards,
Joe



From: Dan Silverglate <dan@vcom3d.com>
Date: Thursday, 9 September 2004 3:54 AM
To: Justin Couch <justin@vlc.com.au>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Hi Justin,

I now understand (mostly) the problems you are having with the spec.

The spec says the skin field shall contain one or more indexed face set
definitions and that these shall reference the skinCoord and skinNormal
fields for their coord and normal fields respectively.  There are legitimate
reasons for allowing more than one Shape/IndexedFaceSet combo in the skin
field.  We often find the need to prevent a crease angle from being applied
across some parts of the same mesh.  A shirt sleeve on the upper arm, for
example, will have shadowing problems if the arm and the sleeve belong to
the same Shape node geometry.  Multiple textures were another reason for
having multiple Shape nodes in the skin field (before inherent X3D
support of multitexturing).

It may not be the cleanest approach, but the browser could force the
Shape/IndexedFaceSet node children of the skin field to use the shaders
instead of the standard approach.  Or perhaps, to facilitate this, you
could make a custom SkinShape node whose default implementation is the
standard Shape node but which, natively in the browser, uses the shaders.

As for access to the skinCoord, it may not be ideal, but I do not think it
is a violation of the spec to return the initial (default) value of the mesh
instead of the deformed values.

The goal here would be to not preclude non-native implementations of a
seamless mesh algorithm on other browsers.  I agree, however, that the spec
in this area could be clearer and is worth taking another look at.

Dan Silverglate
Chief Software Developer
Vcom3D, Inc.
http://www.vcom3d.com
======================================================
  Quidquid latine dictum sit, altum viditur.
======================================================


From: Justin Couch <justin@vlc.com.au>
Date: Thursday, 9 September 2004 4:11 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

thyme wrote:

> conclusions are that if the Coordinate's point values are meant to be
> read in a deformed state, this is not a good design with hardware
> acceleration in mind.


The issue I see is that there is no way around it being read. The
intersection that is particularly difficult is IndexedFaceSet. The
Coordinate node is shared with it, and IFS has the creaseAngle
parameter. That means, every frame that the coordinates are moved, you
have to go through the IFS and determine whether you can use a single
vertex shared across faces or you need to generate multiple vertices,
each with a different normal. Since the shader does not have access to
any other vertex than the one it's currently processing, nor can it
generate new vertices, we're pretty much knackered before we even
start.

> Does the spec actually state the Coordinate's point values must be read
> in a deformed state?


Unfortunately, the spec doesn't say anything about any of these sorts of
issues. It just washes its hands of it. X3D, OTOH, just points at the
H-Anim spec and states "do what they say". Both of which are less than
useful to us :(

> However, if the spec requires them to be read deformed, I would think
> it would not be too difficult (at least in C++, using operator
> overloading) to make the vertices use the hardware normally, but fall
> back to software processing whenever the point field is read, so that
> whatever reads the point field sees the deformed values.


That is unworkable as by the time they get read, it may be too late and
you've already shipped the vertices off to the video card. And, as
stated above, there is already an implicit read in the system if IFS is
used with no Normal node/normalIndex provided.

> This should result in normal, efficient hardware rendering with no
> penalty most of the time. Hopefully point values would rarely be read
> in an X3D file.


Not correct. In VRML97, the only option available was IFS. In X3D there
are the Triangle*Set nodes to use as well, but nobody is generating
those yet. All the examples you see are using IFS (in mesh form;
articulated models don't count for these discussions).

> I have also implemented HAnimDisplacer nodes so that they can be
> directly animated in Seamless3d too, but I have used a little software
> processing along with the hardware acceleration for this node. I think
> the HAnimDisplacer node is perhaps a greater challenge to implement
> efficiently for hardware acceleration than the HAnimJoint node.


I haven't even got that far yet. Just trying to work out the basics first.

> I posted to this list some months ago about some of the possible
> problems I can see in the design of the H-Anim nodes. My own
> Seamless3d-specific nodes bypass a number of problems because they were
> designed free from the restrictions imposed by the H-Anim specs. I feel
> my own design makes multiple material settings within a single mesh
> easier to implement, more modular, smaller in file size, possibly more
> efficient to render and simpler for the end user to comprehend.


Multiple material stuff is almost never more efficient to render than a
single big mesh - particularly once you start dealing with textures.
It's an end-user convenience, not something that is there for optimal
performance.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Look through the lens, and the light breaks down into many lights.
  Turn it or move it, and a new set of arrangements appears... is it
  a single light or many lights, lights that one must know how to
  distinguish, recognise and appreciate? Is it one light with many
  frames or one frame for many lights?"      -Subcomandante Marcos
-------------------------------------------------------------------


From: Joe D Williams <JOEDWIL@earthlink.net>
Date: Thursday, 9 September 2004 6:35 AM
To: Justin Couch <justin@vlc.com.au>; VRML Human Animation working group <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

> > If Site(s) are 'attached' to skin, the only way to get their actual
> > position out to the rest of the scene may be to get updated skin
> > coords.
>
> They're not. They're attached to the skeleton, which is a different
> structure.

Actually, the spec shows Site as a child of a Segment.

HAnimJoint : X3DGroupingNode {
...
  MFInt32    [in,out] skinCoordIndex   []
  MFFloat    [in,out] skinCoordWeight  []
......
}

is another connection that would be (sort of) hidden given the current
suggestion. These are fields used to control basic animation of the
skin with respect to the skeleton. Maybe they are changed from time to
time during the animation? If these are [] then that would go along
with the humanoid node fields changed to [].
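
For reference, a rough sketch of the arithmetic those two fields drive
(Java with javax.vecmath; every name is illustrative, not from the spec
or any browser):

// Sketch: each joint contributes jointMatrix * restVertex, scaled by
// its weight, to the skin vertices it indexes via skinCoordIndex.
// Call with the deformed array zeroed, once per joint per frame;
// vertices no joint references should be copied from restSkin after.
import javax.vecmath.Matrix4f;
import javax.vecmath.Point3f;

class JointSkinningSketch {
    static void accumulate(Matrix4f jointMatrix,    // joint's world transform
                           int[] skinCoordIndex,    // vertices this joint moves
                           float[] skinCoordWeight, // one weight per index
                           Point3f[] restSkin,      // undeformed skinCoord
                           Point3f[] deformedSkin) {
        Point3f tmp = new Point3f();
        for (int i = 0; i < skinCoordIndex.length; i++) {
            int v = skinCoordIndex[i];
            tmp.set(restSkin[v]);
            jointMatrix.transform(tmp);  // move the vertex with the joint
            deformedSkin[v].x += skinCoordWeight[i] * tmp.x;
            deformedSkin[v].y += skinCoordWeight[i] * tmp.y;
            deformedSkin[v].z += skinCoordWeight[i] * tmp.z;
        }
    }
}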

In addition, connections with the Displacer need to be
considered before making the skin singleMeshInitOnly?

Thanks Justin, it just seems like making the skin [] only covers a very
special case that might work itself out OK if the author does the best
he can to make it that way - like if he just chose another logical
option now present in X3D, the IndexedTriangleSet as you suggested.
Because this seems like such a big change to me and
because it still appears to cover only a subset of the
current spec, I ask:
Do we need more than careful authoring and use of
ITS rather than IFS to get this done?

Thank You and Best Regards,
Joe


From: Matthew T. Beitler <beitler@cis.upenn.edu>
Date: Thursday, 9 September 2004 9:22 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Justin Couch <justin@vlc.com.au> wrote:
  >
  > At the top level, Humanoid has the skin field, which is the root cause
  > of the problem. Allowing arbitrary geometry here means that for the
  > majority of the time we can't use hardware acceleration. Almost every
  > model we've seen uses an IFS. The user provides the coordinate
  > information in the skinCoord field, then a Shape/IFS combo in the skin
  > field. Because of this, all the coordinates generated have to be
  > placed back into the Coordinate node's fields so that the IFS picks
  > them up and can do its own handling of the data - particularly normal
  > generation etc. It's the requirement that we must put those
  > coordinates back into the Coordinate node that prevents any form of
  > hardware acceleration (particularly if the user then wants to do
  > something like read the coordinates out). Anything that a shader
  > touches cannot be read back from the hardware, meaning that mesh
  > deformation either has to ignore large parts of the X3D
  > specification, or we can't use hardware acceleration.
  >
It is possible to do a readback of the vertex values; there just isn't
a formal mechanism (yet) which is specifically designed for this
purpose...  For example, Brabec/Seidel implemented a vertex readback by
encoding the new vertex results as color values...
    Shadow Volumes on Programmable Graphics Hardware
http://www.mpi-sb.mpg.de/~brabec/doc/brabec_eg03.pdf
A drawback of this method is that it requires that the CPU convert the
color value back into a vertex value...
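
Shorn of the GL plumbing, the encode/decode step is roughly this (a
hypothetical sketch in Java; it assumes the coordinates fit a known
bounding box, and a real implementation would use more than 8 bits per
channel)...

class ColorReadbackSketch {
    // Map a coordinate in [min,max] to an 8-bit colour channel; the
    // vertex shader would emit this as the fragment colour.
    static int encode(float v, float min, float max) {
        return Math.round((v - min) / (max - min) * 255f);
    }

    // Recover an approximate coordinate from the colour channel; the
    // CPU must do this per vertex, which is the drawback noted above.
    static float decode(int c, float min, float max) {
        return min + (c / 255f) * (max - min);
    }
}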

The OpenGL ARB has a "superbuffers" (aka uber buffers) working group
that has been attempting to address this issue, but it seems to have
been bogged down in ARB politicking for a while...  However a formalized
method for doing vertex readbacks will happen eventually...

For now my recommendation is that we write into the spec that it is
alright if implementations don't update the Coordinate node with the
vertex positions calculated by the GPU, but also note that such an
implementation is possible (cite the Brabec/Seidel paper) and require
that once the ARB decides on a formal mechanism for this, developers
should implement the readback capability...

Does that sound reasonable to everyone???

-Matt


--
Matthew T. Beitler ( beitler@cis.upenn.edu ) ( beitler@acm.org )
http://www.cis.upenn.edu/~beitler
   Center for Human Modeling and Simulation
   University of Pennsylvania


From: thyme <techuelife@tpg.com.au>
Date: Thursday, 9 September 2004 11:03 AM
To: Justin Couch <justin@vlc.com.au>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Hi Justin

> > thyme wrote:
> > conclusions are that if the Coordinate's point values are meant to
> > be read in a deformed state, this is not a good design with hardware
> > acceleration in mind.
>
> Justin wrote:
> The issue I see is that there is no way around it being read. The
> intersection that is particularly difficult is IndexedFaceSet. The
> Coordinate node is shared with it, and IFS has the creaseAngle
> parameter. That means, every frame that the coordinates are moved, you
> have to go through the IFS and determine whether you can use a single
> vertex shared across faces or you need to generate multiple vertices,
> each with a different normal. Since the shader does not have access to
> any other vertex than the one it's currently processing, nor can it
> generate new vertices, we're pretty much knackered before we even
> start.

I cannot see a problem here. I don't see why normals have to be created
for each frame when using creaseAngle. The code for Seamless3d generates
the normals internally before any extra internal vertices are created,
so that the geometry's normals/color/texture can be rendered true by the
hardware. This has to be done whether IndexedFaceSets are used by
HAnimJoint nodes or not. When rendering weighted vertices in DirectX 8.1
the hardware takes care of the normals thereafter, the same as it does
for the coordinates. Therefore the normals only have to be generated
during the initialisation stage, or when a field like the Coordinate's
point is modified by a script or a ROUTE. This is the same issue for an
IndexedFaceSet however it is used.

>  > thyme wrote
>  > Does the spec actually state the Coordinate's point values must be read
>  > in a deformed state?
>
> Justin wrote:
> Unfortunately, the spec doesn't say anything about any of these sorts of
> issues. It just washes it's hands of it. X3D, OTOH, just points at the
> H-Anim spec and states "do what they say". Both of which are less than
> useful to us :(

This is good news to hear, since we can assume the point values are not
modified by animation when read, which is consistent with how a
Transform node functions - what I call dynamic transformation as opposed
to static transformation. I don't know what the correct terms are, but
it just seems strange to me that the point values would end up getting
corrupted by animation.

> > thyme wrote:
> > However, if the spec requires them to be read deformed, I would think
> > it would not be too difficult (at least in C++, using operator
> > overloading) to make the vertices use the hardware normally, but fall
> > back to software processing whenever the point field is read, so that
> > whatever reads the point field sees the deformed values.
>
> Justin wrote:
> That is unworkable as by the time they get read, it may be too late and
> you've already shipped the vertices off to the video card. And, as
> stated above, there is already an implicit read in the system if IFS is
> used with no Normal node/normalIndex provided.

If we are going at 25 frames per second or more, why would it be a
problem if the frame the user saw (or did not see) gets read delayed?
But I don't see why this is even a problem, since if the field is being
read the renderer can do slow, inefficient software processing, which I
would still think would happen very rarely, so the penalty won't apply
in most cases. But my own feeling is that it is more logically uniform
for the vertices to be read undeformed. Imagine an application opening
a file, animating it and then saving the vertices deformed.

> > thyme wrote:
> > This should result in normal, efficient hardware rendering with no
> > penalty most of the time. Hopefully point values would rarely be
> > read in an X3D file.

> Justin wrote:
> Not correct. In VRML97, the only option available was IFS. In X3D there
> are the Triangle*Set nodes to use as well, but nobody is generating
> those yet. All the examples you see are using IFS (in mesh form;
> articulated models don't count for these discussions).

This is what I am discussing here: single skin mesh (vertex weighted)
animation. Seamless3d is biased towards this type of animation.
Seamless3d has supported the option of generating (and can also render)
standard X3D IndexedTriangleSet nodes for the best part of this year,
but as I explained above, I don't see what the problem is whether
IndexedTriangleSet or IndexedFaceSet nodes are used.

> > thyme wrote:
> > I posted to this list some months ago about some of the possible
> > problems I can see in the design of the H-Anim nodes. My own
> > Seamless3d-specific nodes bypass a number of problems because they
> > were designed free from the restrictions imposed by the H-Anim specs.
> > I feel my own design makes multiple material settings within a single
> > mesh easier to implement, more modular, smaller in file size,
> > possibly more efficient to render and simpler for the end user to
> > comprehend.
>
> Justin wrote:
> Multiple material stuff is almost never more efficient to render than a
> single big mesh - particularly once you start dealing with textures.
> It's an end-user convenience, not something that is there for optimal
> performance.

Yes, I understand multiple material stuff is not going to make things
more efficient to render, but it's inevitable that it will be wanted for
single skin mesh avatars - for example, glossy lipstick for a mouth.
Here a different (shinier) material setting will be wanted within a
single animated mesh. Seamless3d already supports this possibility
(though it can still be improved a lot). Seamless3d can generate
multiple material settings for standard H-Anim nodes by generating extra
Shape nodes that each own a different instance of a Material node but
share the one Coordinate node. Bitmanagement/Blaxxun Contact can render
such avatars, though I doubt they render as efficiently (because the
Coordinate node will be transformed multiple times?) as a Seamless node,
because I designed the Seamless node with this possibility in mind. But
then again, a browser could optimise for the single shared Coordinate
node behind the scenes one way or another, I would think.

Note: Seamless3d can generate multiple materials for single (in reality
multiple, but treated as single by the user) skin mesh avatars, but it
cannot itself yet render Seamless nodes with multiple material settings.

Sorry for referring to my own Seamless node here, but what better way
to understand something than to invent your own.

regards
thyme
creator of Seamless3d and Techuelife Island
http://www4.tpg.com.au/users/gperrett/seamless3d/index.html



From: thyme <techuelife@tpg.com.au>
Date: Thursday, 9 September 2004 1:01 PM
To: Matthew T. Beitler <beitler@cis.upenn.edu>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues


Hi Matt and All

> Matt wrote:
> For now my recommendation is that we write into the spec that it is
> alright if implementations don't update the Coordinate node with the
> vertex positions calculated by the GPU, but also note that such an
> implementation is possible (cite the Brabec/Seidel paper) and require
> that once the ARB decides on a formal mechanism for this, developers
> should implement the readback capability...
>
> Does that sound reasonable to everyone???

I am very glad to hear this suggestion. I really do think point fields
should never get corrupted by animation, and the specs should state this
explicitly.
A weighted vertex mesh means each vertex has its own transform matrix;
it does not mean the vertices are actually modified. Modifying vertices
is something an editor does, not what a renderer would typically do in
concept.
If a script wants to read the vertices transformed, then I think the
task would be best left up to the script, or to some specialised member
function belonging to the H-Anim node that can be called by the script,
as opposed to forcing this on everyone. If the specs allowed the point
field to be corrupted, it would create big problems for an application
that wants to load an H-Anim avatar, animate it, possibly edit it and
then save the file without the vertices getting messed up for the next
time the model is loaded.
I understand that with a CoordinateInterpolator the point field gets
modified during animation, but this is a very different concept to how
an HAnimDisplacer node is used, because a CoordinateInterpolator sends
modified coordinates to the point field directly and the coordinate
values owned by the Coordinate node remain uncorrupted, so they can be
saved and reopened. I don't know if the HAnimDisplacer node was designed
with this in mind, but whatever the case, it avoids this problem nicely.
Instead of sending a load of coordinates, only a single float is sent
for the weight, which is logically uniform with sending a rotation value
to a Transform node.
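
In sketch form the displacer idea is just this (Java with
javax.vecmath, all names illustrative):

// Sketch of how I read HAnimDisplacer: one animated float (the weight)
// scales fixed per-vertex displacement vectors, so only the weight
// ever travels over a ROUTE - never the coordinates themselves.
import javax.vecmath.Point3f;
import javax.vecmath.Vector3f;

class DisplacerSketch {
    static void apply(float weight,             // the one animated value
                      int[] coordIndex,         // vertices the displacer moves
                      Vector3f[] displacements, // fixed offsets, one per index
                      Point3f[] restPoints,     // undeformed coordinates
                      Point3f[] outPoints) {    // deformed result, render only
        for (int i = 0; i < coordIndex.length; i++) {
            int v = coordIndex[i];
            outPoints[v].x = restPoints[v].x + weight * displacements[i].x;
            outPoints[v].y = restPoints[v].y + weight * displacements[i].y;
            outPoints[v].z = restPoints[v].z + weight * displacements[i].z;
        }
    }
}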

best wishes
thyme
creator of Seamless3d and Techuelife Island
http://www4.tpg.com.au/users/gperrett/seamless3d/index.html


From: Justin Couch <justin@vlc.com.au>
Date: Friday, 10 September 2004 1:55 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Matthew T. Beitler wrote:

> purpose...  For example, Brabec/Seidel implemented a vertex readback by
> encoding the new vertex results as color values...
>    Shadow Volumes on Programmable Graphics Hardware
>    http://www.mpi-sb.mpg.de/~brabec/doc/brabec_eg03.pdf
> A drawback of this method is that it requires that the CPU convert the
> color value back into a vertex value...

And there's a rather significant performance penalty too for reading
anything back out of buffers rather than copying between buffers.

> The OpenGL ARB has a "superbuffers" (aka uber buffers) working group
> that has been attempting to address this issue, but it seems to have
> been bogged down in ARB politicking for a while...

Not anymore. OpenGL 2.0 was formally released yesterday. Super buffers
are in. Now all we need is driver and hardware support for it - which
will probably take the next generation of chips to be any good.

> For now my recommendation is that we write into the spec that it is
> alright if implementations don't update the Coordinate node with the
> vertex positions calculated by the GPU, but also note that such an
> implementation is possible (cite the Brabec/Seidel paper) and require
> that once the ARB decides on a formal mechanism for this, developers
> should implement the readback capability...
>
> Does that sound reasonable to everyone???

I would far prefer the first part without the second. From an X3D
perspective, it's still not possible to implement the second part and
remain conformant to the existing event model. It assumes that you are
rendering at the same time as the event model is evaluating. If I have
to read the values back from the video card, they won't be available
until the frame after they were drawn (or potentially many frames later
in a multithreaded rendering system like the one we use). In a
theoretical single-threaded renderer you have this cycle (4.4.8.3):

a. Update camera based on currently bound Viewpoint's position and
orientation;
b. Evaluate input from sensors;
c. Evaluate routes;
d. If any events were generated from steps b and c, go to step b and
continue.
e. Render graphics

The last step is performed after the event model has completed for this
timestamp. Since that is the point where we are going to be walking the
skeleton, updating the internal transforms and then issuing the geometry
calls to OpenGL (or D3D), there's no possibility of having the
coordinates updated in the same frame as the rest of the event model.
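
As a sketch, with stub methods standing in for a real browser (none of
this is actual Xj3D API), the ordering is:

// Sketch of the 4.4.8.3 cycle: render() runs only after the event
// cascade for this timestamp has settled, so coordinates the GPU
// computes during render() can't feed back into the same timestamp's
// event model.
class EventModelCycleSketch {
    void frame(double timestamp) {
        updateCamera();                              // a. bound Viewpoint
        boolean moreEvents;
        do {
            moreEvents = evaluateSensors(timestamp)  // b.
                       | evaluateRoutes(timestamp);  // c.
        } while (moreEvents);                        // d. repeat until quiet
        render();  // e. GPU skinning happens here; any readback is only
                   //    available on the next frame at the earliest
    }

    void updateCamera() { /* placeholder */ }
    boolean evaluateSensors(double t) { return false; /* placeholder */ }
    boolean evaluateRoutes(double t) { return false; /* placeholder */ }
    void render() { /* placeholder */ }
}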

To make matters worse, on larger-scale systems such as a multiple-CPU
system (e.g. a CAVE), it's almost guaranteed that we'll have at least a
3 frame delay between the event model evaluation and what is seen on
screen. Each section of the rendering pipeline is on its own thread,
probably located on a separate CPU - so that's app, cull, state sort and
render loops, each as a separate thread, each introducing a frame delay
into the equation.

Taking it to even further extremes, we now have a rendering cluster
where the image generators (video cards) are on completely separate
boxes across a network. That will introduce some strange results as the
delay is indeterminate, and we'll have to do some image processing to
combine all the sub-images back into a single one on the controller
machine before being able to update the Coordinate node in the event
model.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Look through the lens, and the light breaks down into many lights.
  Turn it or move it, and a new set of arrangements appears... is it
  a single light or many lights, lights that one must know how to
  distinguish, recognise and appreciate? Is it one light with many
  frames or one frame for many lights?"      -Subcomandante Marcos
-------------------------------------------------------------------


From: Justin Couch <justin@vlc.com.au>
Date: Friday, 10 September 2004 2:17 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

thyme wrote:

> I cannot see a problem here. I don't see why normals have to be created
> for each frame when using creaseAngle.

You have to. Read the spec of what creaseAngle does. If the angle
between any two faces using a shared vertex is less than creaseAngle,
you have to smooth the normal between those two faces. You have to
evaluate this for every face that uses that shared vertex. In order to
know whether that shared vertex has to smooth or not, you have to know
the face normals for all contributing faces, which means you have to
know the final coordinates of all contributing faces so that you can
calculate their face normal.

Say you have a creaseAngle of 90 degrees. You have 3 faces using a
shared vertex index, a, such as this:
  ________
  \     /|
   \ 1 / |
    \ /  |
   a /  2|
    / \  |
   /   \ |
  /  3  \|
/-------+
b        c

In the first frame, all 3 faces are less than 90 degrees apart so we
smooth all of those together. Very simple as we have just a single
vertex to send out (everyone uses the same value). In the second frame,
the coordinate b 3 moves so that face 3 now forms a 95 degree angle
between itself and faces 1 and 2. That requires that face 1 and 2 smooth
their normals, but face 3 uses a separate normal value. While 1 and 2
can still share a vertex with a common normal, a new _pair_ of vertices
b' and c' must be put into the pipeline as a different normal value is
needed for these.

As you can see, it's not possible to implement this as a hardware
shader, as within a shader you have no concept of the vertices
surrounding you, so you can't know what sort of face normals to
generate. Even if you did, you would have to make sure that the vertex
data being sent to you is in unindexed form, so that you have a mass of
separate triangles, each evaluating its own face normals and smoothing
with neighbours. That's some extremely expensive bus usage you've got
running there, as every vertex is also going to have to pass over, as
per-vertex attributes, every other contributing vertex as well as face
information etc.
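
For what it's worth, the per-frame test itself is trivial to state -
the problem is where it has to run. A sketch (Java with javax.vecmath;
computing the face normals from the deformed coordinates each frame is
exactly the part a shader can't do):

import javax.vecmath.Vector3f;

class CreaseAngleSketch {
    // True if two faces may share a smoothed normal at their common
    // vertex this frame. Both face normals must be unit length, and
    // creaseAngle is in radians as in X3D; the normals have to be
    // recomputed every frame once skinning moves the vertices.
    static boolean smoothAcross(Vector3f n1, Vector3f n2, float creaseAngle) {
        return n1.angle(n2) < creaseAngle;
    }

    // Smoothed normal for faces that pass the crease test; faces that
    // fail force duplicated vertices with separate normals, changing
    // the vertex stream sent to the card from frame to frame.
    static Vector3f smoothedNormal(Vector3f n1, Vector3f n2) {
        Vector3f n = new Vector3f();
        n.add(n1, n2);
        n.normalize();
        return n;
    }
}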

>>That is unworkable as by the time they get read, it may be too late and
>>you've already shipped the vertices off to the video card. And, as
>>stated above, there is already an implicit read in the system if IFS is
>>used with no Normal node/normalIndex provided.
>
>
> If we are going at 25 frames per second or more why would this be a
> problem if the frame the user saw (or did not see) gets read delayed?

See my previous post in response to Matt's suggestion. On higher-end
machines, as well as the multiple-CPU hardware heading to the desktop
(in 12 months time a desktop box is going to have the equivalent of 4
CPUs in it - dual core, each core with hyperthreading), there will be
significant frame delays between when the event model evaluates
something and when the video card gets to process it.

> Seamless3d has supported the option of generating (can also render) standard
> X3D IndexedTriangleSet nodes for the best part of this year but I don't see
> what the problem is whether using IndexedTriangleSet or
> IndexedFaceSet nodes as I explained above.

Based on your descriptions, it sounds as though you are non-conformant
to the spec, so I'm not surprised.

> Yes I understand multiple material stuff is not going to make things
> more efficient to render but its inevitable that it will be wanted for
> single skin mesh avatars. For example glossy lipstick for a mouth.
> Here a different material (more shiny) setting will be wanted for a
> single animated mesh.

The way you do this is with multitexture. Works really well. Shaders, of
course, are going to completely mess everything up again though as the
user wants to provide their own shader rather than what we want to use
internally.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Look through the lens, and the light breaks down into many lights.
  Turn it or move it, and a new set of arrangements appears... is it
  a single light or many lights, lights that one must know how to
  distinguish, recognise and appreciate? Is it one light with many
  frames or one frame for many lights?"      -Subcomandante Marcos
-------------------------------------------------------------------


From: thyme <techuelife@tpg.com.au>
Date: Friday, 10 September 2004 8:18 AM
To: Justin Couch <justin@vlc.com.au>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Hi Justin
I think you are confusing the image the user sees of the animated
IndexedFaceSet with the actual shape of the IndexedFaceSet. Animating
an IndexedFaceSet does not modify the point field, therefore creaseAngle
does not come into the equation. For morphing, perhaps, creaseAngle
having an effect over animation could have some advantages I can see,
but for weighted vertex animation it would more likely cause problems.
Imagine an organic shape bending a limb and a crease appearing because
the angle was too great for the creaseAngle setting. This would be most
undesirable. Perhaps this is open to interpretation, but I feel my
logic here is the most sound, and logically uniform in relation to
orthodox animation, which in concept does not modify point field
values. Why invent a new concept for vertex weighted animation?
regards
thyme





From: Justin Couch <justin@vlc.com.au>
Date: Friday, 10 September 2004 3:02 PM
To: thyme <techuelife@tpg.com.au>
Cc: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues


thyme wrote:

> I think you are confusing the image the user sees of the animated
> IndexedFaceSet with the actual shape of the IndexedFaceSet.

I'm not at all. We're still dealing with the X3D specification here.
You cannot just arbitrarily ignore the rules because they do not fit
your personal view of what an animated character should look like. IFS
clearly states what its rendering behaviour is. Users make heavy use of
this implied behaviour - they don't provide normals and they set an
explicit creaseAngle so that shading is correctly performed. Arbitrarily
ignoring parts of the specification that are "not favourable" is not an
acceptable position to take.

> Animating an IndexedFaceSet does not modify the point field therefore
> creaseAngle does not come into the equation.

You're misreading what I stated. At no point is the Coordinate node's
field changed. What is changed is the list of coordinates that are sent
through to the graphics card. Each frame, these are potentially
different because different tessellation has to be performed based on
the value of creaseAngle. Some vertices that can be shared in one frame
cannot be in the next, and there is no way of knowing this until the
vertices have already been morphed, in order to calculate the face
normals, to then work out whether smooth normals can be applied or not.

> likely cause problems. Imagine an organic shape bending a limb and
> a crease appearing because the angle was too great for the
> creaseAngle setting. This would be most undesirable.

Whether it is desirable or not is not something that you as an
implementor of the spec should be considering. That is a content
problem. The user screwed up and didn't put a "big enough" value in.
The spec is very clear about what the intended visual results are. The
user has to know and understand this. If they don't, that's their
problem, not yours. What if that crease angle was the chin-line of the
character? What was supposed to be a very square-jawed macho male just
got turned into a smooth Oil of Olay female model.

> open to interpretation but I feel my logic here is the most sound and
> logically uniform in relation to orthodox animation which in concept
> does not modify point field values. Why invent a new concept for
> vertex weighted animation?

I'll turn this around: why invent a new concept for the specification?
You've just completely thrown the book out on what the spec says should
happen because of some gut feel. That leads us right back down the path
of VRML97. Your logic here is highly flawed in that it is based on your
own premonitions of what is right, rather than on a rather hefty and
fairly well defined international specification that explicitly states
how rendering should look and behave.

If you have a problem with the interaction of various parts of the
specification, raise the issue and get some discussions going about it
(as I'm doing with this thread). Don't just unilaterally decide to
ignore certain rather critical behaviours of the spec just because it
doesn't meet your own personal ideals.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/


From: thyme <techuelife@tpg.com.au>
Date: Friday, 10 September 2004 4:35 PM
To: thyme <techuelife@tpg.com.au>; Justin Couch <justin@vlc.com.au>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues


Hi Justin
The VRML97 spec will only state how creaseAngle affects an
IndexedFaceSet; it will not state anywhere that creaseAngle applies to
how vertices are transformed during rendering. With VRML97, whenever an
IndexedFaceSet is rendered, all vertices are transformed the same way,
with a single weight.
HAnimJoint nodes introduce the possibility of vertices having different
weights within the one IndexedFaceSet. This does not imply in any way to
me that creaseAngle has to be recalculated for each frame.
regards
thyme


From: Justin Couch <justin@vlc.com.au>
Date: Friday, 10 September 2004 5:33 PM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

thyme wrote:

> The VRML97 spec will only state how creaseAngle affects an
> IndexedFaceSet; it will not state anywhere that creaseAngle applies to
> how vertices are transformed during rendering.

Please justify that statement with actual spec wordage.

The spec quite clearly states what an IFS and other geometry is to do
when taking the contents of the scene graph and turning it into
geometry. For example, here's a section from the IFS spec in X3D (which
is virtually unchanged from VRML97)

"If the normal field is NULL, the browser shall automatically generate
normals, using creaseAngle to determine if and how normals are smoothed
across shared vertices (see 11.2.3 Common geometry fields)."

Going to the referenced section:

"Certain geometry nodes have several fields that provide information
about the rendering of the geometry.

...

"The creaseAngle field affects how default normals are generated. If the
angle between the geometric normals of two adjacent faces is less than
the crease angle, normals shall be calculated so that the faces are
shaded smoothly across the edge; otherwise, normals shall be calculated
so that a lighting discontinuity across the edge is produced."


Note that nowhere in this specification does it make a distinction on
how or where those coordinates come from. It doesn't even mention
coordinates at all! What it cares about is face normals and how those
interact to form a final rendered appearance. Those face normals must be
generated from somewhere, which is some sort of coordinate data.
There's no mention here about needing to generate extra vertices or
anything. All it states is policy - if the face normals are too far
apart, there should be a distinct visual shading difference between
them. How they got that way, it doesn't care about, just what happens
after they find themselves in that particular situation - whether it be
a weighted set of vertices or a CoordinateInterpolator.

If a user were to modify the coordinates directly using a script and
their own behaviour, rather than an internal implementation of the same
thing, would you expect things to appear any differently? Of course not.
That's what is being considered here. The spec is getting in the way of
having this situation happen (two different types of implementation are
not capable of having identical visual output).

> HAnimJoint nodes introduce the possibility of vertices having different
> weights within the one IndexedFaceSet. This does not imply in any way
> to me that creaseAngle has to be recalculated for each frame.

It quite clearly states how one is to generate normals for a mesh
_during rendering_. There's no policy about when this should or should
not apply. It applies at all times, under all circumstances, unless
specified otherwise. If you are generating normals, then these are the
rules you are to follow. That's one of the strengths of X3D - there are
no conditionals. Stuff should behave the same all the time, regardless
of how it is put into a scene graph. How something is implemented
internally is not a concern of VRML/X3D, it's interested only in the end
result - what it looks like on screen.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/


From: Matthew T. Beitler <beitler@cis.upenn.edu>
Date: Saturday, 11 September 2004 1:32 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Justin Couch < justin@vlc.com.au > wrote:
  >
  > And there's a rather significant performance penalty too for reading
  > anything back out of buffers rather than copying between buffers.
  >
If you can get your hands on a new PCIe card/system, things are much
better...  It is still slower than pushing data onto the card, but it is
now significantly faster than the old PCI readbacks were...


Justin Couch < justin@vlc.com.au > wrote:
  >
  > Not anymore. OpenGL 2.0 was formally released yesterday. Super buffers
  > are in. Now all we need is driver and hardware support for it - which
  > will probably take the next generation of chips to be any good.
  >
Are you sure? I looked at it and haven't been able to find a description
of extensions that provide this functionality...  If you have a
reference link/page #, post it here...


Justin Couch < justin@vlc.com.au > wrote:
  >
  > I would far prefer the first part without the second. From an X3D
  > perspective, it's still not possible to implement the second part and
  > remain conformant to the existing event model. It assumes that you are
  > rendering at the same time as the event model is evaluating. If I have
  > to read the values back from the video card, they won't be available
  > until the frame after they were drawn (or potentially many frames
  > later in a multithreaded rendering system like the one we use).  In a
  > theoretical single threaded renderer you have this cycle (4.4.8.3):
  >
  > a. Update camera based on currently bound Viewpoint's position and
  > orientation;
  > b. Evaluate input from sensors;
  > c. Evaluate routes;
  > d. If any events were generated from steps b and c, go to step b and
  > continue.
  > e. Render graphics
  >
The way to handle this is to have an off screen context within which the
vertex shader for the H-Anim skin is evaluated as part of step b of the
event cascade...

This eliminates the possibility for delay complications no matter how
many processors are added to the system...
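
Something like this ordering is what I have in mind. A hypothetical Java
sketch against the 4.4.8.3 cycle; every method name here is a
placeholder, not any real browser API:

public abstract class EventModelCycle {
    abstract void updateCamera();             // a.
    abstract void evaluateSensors();          // b.
    abstract void runSkinShaderOffscreen();   //    (the proposal)
    abstract void readBackDeformedCoords();   //    (the proposal)
    abstract boolean evaluateRoutes();        // c. true if new events arose
    abstract void renderGraphics();           // e.

    final void frame() {
        updateCamera();                            // a.
        boolean eventsPending = true;
        while (eventsPending) {                    // d. loop over b..c
            evaluateSensors();                     // b. ...and, as part of
            runSkinShaderOffscreen();              //    this step, deform the
            readBackDeformedCoords();              //    skin off screen and
                                                   //    read it back, so the
            eventsPending = evaluateRoutes();      // c. routes and scripts
        }                                          //    see deformed coords
        renderGraphics();                          // e.
    }
}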

I hope that helps...

-Matt


--
Matthew T. Beitler ( beitler@cis.upenn.edu ) ( beitler@acm.org )
http://www.cis.upenn.edu/~beitler
   Center for Human Modeling and Simulation
   University of Pennsylvania


From: Justin Couch <justin@vlc.com.au>
Date: Saturday, 11 September 2004 2:25 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Matthew T. Beitler wrote:

> If you can get your hands on a new PCIe card/system, things are much
> better...  It still is slower than pushing data on the card, but it is
> now significantly faster than the old PCI readbacks were...

The bus bandwidth is better, but there is still the problem of reading
out of the video memory itself. That's still an expensive operation to
perform as most cards are not optimised for it.

>
> Are you sure, I looked at it and haven't been able to find a description
> of extensions that provide this functionality...  If you have a
> referencelink/page#, post it here...

Multiple Render Targets is what you're after, as that allows a shader or
fixed function app to write to multiple output buffers in a single pass.
If you set one of those buffers to use the Render to Vertex Array
capabilities, it works even better than writing directly to a texture.

The 2.0 spec PDF is here:
http://www.opengl.org/documentation/specs/version2.0/glspec20.pdf


> The way to handle this is to have an off screen context within which the
> vertex shader for the H-Anim skin is evaluated as part of step b of the
> event cascade...
>
> This eliminates the possibility for delay complications no matter how
> many processors are added to the system...

Actually, it makes no difference at all. Everything that gets splatted
to the texture is synched to the rest of the rendering pipeline to avoid
out-of-synch problems in the rest of the scene. For example, say you're
doing shadow volumes - you still need the geometry to be set to the same
frame cycle as that generating the shadow to the offscreen texture,
which is the same frame that it is used to render the full screen. At
least in our low-level API, the offscreen drawables are subject to the
same pipeline process as the on-screens: culling and sorting are still
very important to do for anything more than a simplistic offscreen draw.

In the case of a clustered system, you really have no control over it at
all. Once stuff disappears down the wire to your slave IGs, getting the
data back may take quite some considerable time. Synchronising readbacks
in a multipass system, which is effectively what you're proposing, would
cause some very large delays in the rendering loop.

Finally, rendering to an offscreen, reading back and using that to drive
the normal generation and final shading still is going to have some
massive overheads as now that's twice the number of vertices that have
to be pushed down the geometry pipeline every frame. In our more extreme
case, that's going from 750K vertices to 1.5 million vertices. Not good.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/


From: Matthew T. Beitler <beitler@cis.upenn.edu>
Date: Saturday, 11 September 2004 6:28 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Justin Couch < justin@vlc.com.au > wrote:
  >
  >> Are you sure, I looked at it and haven't been able to find a
  >> description of extensions that provide this functionality...  If you
  >> have a referencelink/page#, post it here...
  >
  > Multiple Render Targets is what you're after as that allows a
  > shader or fixed function app to write to multiple output buffers
  > in a single pass.  If you set one of those buffers to use the
  > Render to Vertex Array capabilties, it works even better than
  > writing directly to a texture.
  >
This "render target" approach isn't the extension capability I was
referring to...  I've been able to confirm (via John Leech's notes from
SIGGRAPH, http://www.opengl.org/about/news/siggraph2004 ) that the uber
buffer extensions to the spec were left out of 2.0...


Justin Couch < justin@vlc.com.au > wrote:
  >
  > Actually, makes no difference at all. Everything that gets splatted to
  > the texture is synched to the rest of the rendering pipeline to avoid
  > out-of-synch problems in the rest of the scene. For example, say
  > you're doing shadow volumes - you still need the geometry to be set
  > to the same frame cycle as that generating the shadow to the
  > offscreen texture, which is the same frame that it is used to render
  > the full screen. At least in our low-level API, the offscreen
  > drawables are subject to the same pipeline process as the
  > on-screens: culling and sorting are still very important to do for
  > anything more than a simplistic offscreen draw.
  >
But there isn't anything with OpenGL which prevents you from separating
the pbuffer drawables and the screen drawables into separate pipeline
processes, right???  I just want to make sure that you're talking
specifically about how things are implemented for Xj3D...


Justin Couch < justin@vlc.com.au > wrote:
  >
  > Finally, rendering to an offscreen, reading back and using that to
  > drive the normal generation and final shading still is going to have
  > some massive overheads as now that's twice the number of vertices
  > that have to be pushed down the geometry pipeline every frame. In
  > our more extreme case, that's going from 750K vertices to 1.5
  > million vertices. Not good.
  >
I think there is a case to be made for both sides of the fence, some
applications need to be able to determine the deformed state of the
surface of the skin (even if it decreases performance a bit) and some
applications don't care where the surface is and would rather have the
performance gained by not transferring the data across the bus...

This will make a good example for the x3d-shaders group to look at,
since it highlights the control tradeoffs...  However, I think we need
to come up with a solution now that fits the requirements of those who
would adopt H-Anim...  Here is a possible solution that will allow us to
say "there is no fence"...

What do people think about a field addition such as:
   SFBool    [in,out] readback     TRUE

This would keep the behavior which the group originally intended, while
allowing for the behavior which Justin is interested in...
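
As a hypothetical sketch (plain Java, invented names; neither Xj3D nor
any shipping H-Anim implementation), a browser might branch on such a
field like this:

abstract class SkinDeformer {
    abstract float[] deformOnCpu();           // software weighting pass
    abstract void deformInVertexShader();     // GPU-only weighting pass
    abstract void writeToSkinCoord(float[] deformedPoints);

    void update(boolean readback) {
        if (readback) {
            // Original group behavior: deformed points are written back
            // into skinCoord.point, so ROUTEs and scripts can read them.
            writeToSkinCoord(deformOnCpu());
        } else {
            // Behavior Justin describes: deform on the card only;
            // skinCoord.point keeps its rest-pose values and nothing
            // has to be read back across the bus.
            deformInVertexShader();
        }
    }
}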

-Matt


--
Matthew T. Beitler ( beitler@cis.upenn.edu ) ( beitler@acm.org )
http://www.cis.upenn.edu/~beitler
   Center for Human Modeling and Simulation
   University of Pennsylvania


From: Justin Couch <justin@vlc.com.au>
Date: Saturday, 11 September 2004 7:10 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Matthew T. Beitler wrote:

> This "render target" approach isn't the extension capability I was
> referring to...  I've been able to confirm (via John Leech's notes from
> SIGGRAPH, http://www.opengl.org/about/news/siggraph2004 ) that the uber
> buffer extensions to the spec were left out of 2.0...

Super buffers wouldn't have gotten you what you wanted anyway. They just
describe a generalised memory model and capabilities request mechanism.
Multiple Render Target is the one that you really need because that way
the shader can generate both your pixels on screen and also write the
coordinate values to a pbuffer texture for later use in a single pass.

> But there isn't anything with OpenGL which prevents you from separating
> the pbuffer drawables and the screen drawables into separate pipeline
> processes, right???

There's a big "it depends" there. pBuffers are managed by the on-screen
surface that they came from and there's some interaction, particularly
if you're using shared GL contexts to play with. It may be possible to
separate them out, but it's definitely a case-by-case basis. If you're
layering an X3D browser over any of the more common scene graphs, I
don't believe you can do this.

On top of this, the Java bindings to OpenGL do actually have a problem
here as I can't force the pBuffers to draw in a separate thread to the
main canvas. The way it's put together is that we must call the display
for the pbuffer while processing the display for the main canvas. That's
a PITA restriction to deal with :(

> This will make a good example for the x3d-shaders group to look at,
> since it highlights the control tradeoffs...  However, I think we need
> to come up with a solution now that fits the requirements of those who
> would adopt H-Anim...  Here is a possible solution that will allow us
> to say "there is no fence"...

The one that we've been bantering around here is that we'll just have to
treat each geometry on a per-object basis. If the user provides an
IndexedFaceSet then we'll use CPU mesh handling. If they provide one of
the Triangle*Set nodes then they can get GPU mesh handling. That will
also mean flat shading unless they also provide a set of normals to work
with.
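
In sketch form (plain Java with invented names; not our actual Xj3D
code), that policy is roughly:

abstract class SkinGeometryPolicy {
    abstract boolean isIndexedFaceSet(Object geometry);
    abstract boolean hasAuthorNormals(Object geometry);
    abstract void useCpuMeshHandling(Object geometry);
    abstract void useGpuMeshHandling(Object geometry, boolean flatShaded);

    void assign(Object geometry) {
        if (isIndexedFaceSet(geometry)) {
            // creaseAngle semantics force per-frame normal regeneration,
            // so deform in software and hand finished arrays to the card.
            useCpuMeshHandling(geometry);
        } else {
            // Triangle*Set nodes carry no creaseAngle requirement, so the
            // weighting can run on the GPU - flat shaded unless the
            // author also supplied a set of normals.
            useGpuMeshHandling(geometry, !hasAuthorNormals(geometry));
        }
    }
}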

> What do people think about a field addition such as:
>   SFBool    [in,out] readback     TRUE
>
> This would keep the behavior which the group originally intended, while
> allowing for the behavior which Justin is interested in...

Ah... where would that go - on Humanoid? More details please.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/


From: thyme <techuelife@tpg.com.au>
Date: Saturday, 11 September 2004 11:29 AM
To: Matthew T. Beitler <beitler@cis.upenn.edu>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Hi Matt

> Matt wrote:
> What do people think about a field addition such as:
>    SFBool    [in,out] readback     TRUE
>
> This would keep the behavior which the group originally intended, while
> allowing for the behavior which Justin is interested in...

I don't know much about the history of the group. Are you sure this is
what the group originally intended for their design? I very much doubt
they would have intended for creases to appear and disappear during
animation. If the point field values are not meant to be updated by
animation, I cannot see any point in developing a workaround to a
problem that does not exist.
regards
thyme



From: thyme <techuelife@tpg.com.au>
Date: Saturday, 11 September 2004 11:29 AM
To: Justin Couch <justin@vlc.com.au>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Hi Justin

The X3D specs state:

The creaseAngle field affects how default normals are generated. If the
angle between the geometric normals of two adjacent faces is less than the
crease angle, normals shall be calculated so that the faces are shaded
smoothly across the edge; otherwise, normals shall be calculated so that a
lighting discontinuity across the edge is produced.

Nowhere here does it state that creaseAngle applies to the transformed
vertices as opposed to the IndexedFaceSet's field values.

Your interpretation would bring in many problems: not only performance
issues but other serious problems, like creases appearing and
disappearing when not wanted (you overlooked how serious a problem this
would be in a previous posting from me). creaseAngle would be valueless
like this unless it was always set to 2 * PI radians (no crease) or if
you did not animate it. No, it would not be "a content problem" where
"the user screwed up and didn't put a 'big enough' value in", as you put
it. It would be a serious design flaw, because a limb can bend at any
angle during animation. Another problem you may not realise is that when
a model animates there is the very real problem of triangles getting too
bunched up in places (like the shoulders) for normals to be generated
properly, causing nasty dark or white patches to appear during
animation. I have seen this happen, so from my experience I would
strongly advise that normals only be generated at the non-deformed
stage.

The specs should be more explicit, perhaps with some words like:

creaseAngle is only relevant to the IndexedFaceSet's field values; it
does not apply to the vertices after they have been transformed.

Because it does not state this clearly enough, leaving the possibility
of interpreting creaseAngle as applying to transformed vertices instead
of the actual field values, what good reason is there for going with the
worse of the two possible interpretations? Surely the most obviously
workable interpretation would be the wisest to pursue.
regards
thyme


From: Matthew T. Beitler <beitler@cis.upenn.edu>
Date: Saturday, 11 September 2004 2:59 PM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Justin Couch < justin@vlc.com.au > wrote:
>
> Super buffers wouldn't have gotten you what you wanted anyway.
> They just describe a generalised memory model and capabilities
> request mechanism.  Multiple Render Target is the one that you
> really need because that way the shader can generate both your
> pixels on screen and also write the coordinate values to a
> pbuffer texture for later use in a single pass.
>
I brought it up because it would provide a more formalized mechanism for
reading vertex data back off of the framebuffer (as opposed to doing a
Brabec/Seidel readback, which some [not me] might consider to be a
kludge)...  The superbuffers structure which Rob Mace has been
advocating over the past couple of years would have put a more
formalized structure in place for doing this...


Justin Couch < justin@vlc.com.au > wrote:
>
> There's a big "it depends" there. pBuffers are managed by the
> on-screen surface that they came from and there's some interaction,
> particularly if you're using shared GL contexts to play with. It
> may be possible to separate them out, but it's definitely a
> case-by-case basis. If you're layering an X3D browser over any of
> the more common scene graphs, I don't believe you can do this.
>
I'll try to mock up a little example that pulls this off...  All of my
current shader code (work I've been doing which has nothing to do with
H-Anim) only does calculations and shunts the results to a pbuffer,
which are then read back to the main algorithm (the results are never
visualized on screen)...  I do ~4 successive, but different, dataset
calculations on the GPU which have no significant graphically
visualizable result, but I think I can add an unrelated on-screen
visualization without too much added difficulty (probably a week or so
till I have time in my schedule) which would demonstrate this
capability...


Justin Couch < justin@vlc.com.au > wrote:
>
>> What do people think about a field addition such as:
>>   SFBool    [in,out] readback     TRUE
>>
>> This would keep the behavior which the group originally intended,
>> while allowing for the behavior which Justin is interested in...
>
> Ah... where would that go - on Humanoid? More details please.
>
That's what I was thinking, but I'm open to other options for specifying
which behavior is desired...  Thoughts???

-Matt


--
Matthew T. Beitler ( beitler@cis.upenn.edu ) ( beitler@acm.org )
http://www.cis.upenn.edu/~beitler
   Center for Human Modeling and Simulation
   University of Pennsylvania



From: Paul Aslin <fabricatorgeneral@yahoo.com>
Date: Saturday, 11 September 2004 4:50 PM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

--- "Matthew T. Beitler" < beitler@cis.upenn.edu > wrote:

> For now my recommendation is that we write into the spec
> that it is
> alright if implementations don't update the Coordinate
> node with the
> vertex positions calculated by the GPU, but also note
> that such an
> implementation is possible (cite the Brabec/Seidel paper)
> and require
> that once the ARB decides on a formal mechanism for this,
> developers
> should implement the readback capability...

I have no particular problem with not updating the
Coordinate field, despite having a tool which relies on
this. However, I wonder if this would cause problems in
other areas???


> Does that sound reasonable to everyone???

Would it be possible to use the approach of not updating
the Coordinate node with vertex changes unless another node
specifically tries to read the values?

By this I mean via direct access from a Script node or from
the Browser itself, but perhaps not eventOut cascades. The
simple reason being applications which rely on being able
to get vertex information for the current state of the
scene.

For example:
I put a TouchSensor on an H-Anim figure and try to get
hitPoint_changed values; these could then be compared to the
coordinate field to figure out which vertex the pointer is
over.

Other examples would be collision detection or raytracing.

Perhaps these applications could be done as a secondary
(slower) process using software only, simply because
updates would not be required continuously.
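
In rough Java (illustrative only; nothing here is a browser API), the
lookup I mean is something like:

class HitPointLookup {
    // hit is the hitPoint_changed value; coords is the flat x,y,z array
    // from the Coordinate node. Returns the index of the closest vertex.
    static int nearestVertex(float[] hit, float[] coords) {
        int best = -1;
        float bestDistSq = Float.MAX_VALUE;
        for (int i = 0; i < coords.length; i += 3) {
            float dx = coords[i] - hit[0];
            float dy = coords[i + 1] - hit[1];
            float dz = coords[i + 2] - hit[2];
            float d = dx * dx + dy * dy + dz * dz;
            if (d < bestDistSq) { bestDistSq = d; best = i / 3; }
        }
        // Only meaningful if coords hold this frame's deformed skin,
        // i.e. the deformed values were read back into the scene graph.
        return best;
    }
}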



From: Matthew T. Beitler <beitler@cis.upenn.edu>
Date: Saturday, 11 September 2004 10:23 PM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

thyme < techuelife@tpg.com.au > wrote:
  >
  > I don't know much about the history of the group. Are you sure this
  > is what the group originally intended for their design? I very much
  > doubt they would have intended for creases to appear and disappear
  > during animation.  If the point field values are not meant to be
  > updated by animation, I cannot see any point in developing a
  > workaround to a problem that does not exist.
  >
I think you've misunderstood the crux of the matter at hand...  I'll
attempt to summarize the issue more clearly...

With most of the H-Anim continuous mesh example implementations to date
(like Boxman), the values of the Humanoid.skinCoord.point field are
altered according to the skin weights & the changes of the rotation
fields of the figure's Joint nodes, and one can determine the exact
location of the points in the skin at any point in time (accessible via
ROUTEing/etc mechanisms of the X3D scenegraph)...
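
As a sketch (plain Java, illustrative names; not Boxman code or any
H-Anim API), that software path is a weighted blend along these lines:

class SkinBlend {
    // rest is one rest-pose point {x,y,z}; jointMatrices holds each
    // influencing joint's current transform as a row-major 4x4 in a
    // float[16] (only the affine 3x4 part is used); weights for one
    // point are assumed to sum to 1. Result goes back to skinCoord.point.
    static float[] deformPoint(float[] rest, float[][] jointMatrices,
                               float[] weights) {
        float x = 0, y = 0, z = 0;
        for (int j = 0; j < jointMatrices.length; j++) {
            float[] m = jointMatrices[j];
            float w = weights[j];
            x += w * (m[0] * rest[0] + m[1] * rest[1] + m[2]  * rest[2] + m[3]);
            y += w * (m[4] * rest[0] + m[5] * rest[1] + m[6]  * rest[2] + m[7]);
            z += w * (m[8] * rest[0] + m[9] * rest[1] + m[10] * rest[2] + m[11]);
        }
        return new float[] { x, y, z };
    }
}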

With continuous mesh implementations which utilize a vertex shader to
perform the deformations, the deformed values are calculated on the
graphics card...  Under this kind of implementation, there are 2
behavior scenarios being advocated:
1) With the behavior Justin describes (aka readback=FALSE) the values
would never be propagated back to the X3D scenegraph (i.e. one wouldn't
be able to determine the exact location of the points in the skin at any
point in time via ROUTEing/etc mechanisms)...
2) With the behavior I have described (aka readback=TRUE) the values
are propagated back to the X3D scenegraph (i.e. one is able to
determine the exact location of the points in the skin at any point in
time via ROUTEing/etc mechanisms)...  This behavior preserves the
behavior (like Boxman) which the group intended...

Justin is correct to bring up the point that he does, and we definitely
need a means of controlling this readback behavior, but the question at
hand is what is the best/cleanest way to do that...

I hope that summary was succinct...

-Matt


--
Matthew T. Beitler ( beitler@cis.upenn.edu ) ( beitler@acm.org )
http://www.cis.upenn.edu/~beitler
   Center for Human Modeling and Simulation
   University of Pennsylvania


From: Joe D Williams <JOEDWIL@earthlink.net>
Date: Sunday, 12 September 2004 1:30 AM
To: Matthew T. Beitler <beitler@cis.upenn.edu>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

> one can determine the exact
> location of the points in the skin at any point in time

A Site has a position coincident with a skin vertex.
During movement the skin may be deformed
according to weighting, which may be updated
frame to frame.
The need is to simulate contact between
a Site of this humanoid geometry and a
specific Site of this or another humanoid geometry.

Rigid bodies move within defined limits;
the vertices attached to them also move
according to the weight assigned to each
attached vertex.

Sometimes these vertices will interact with
other vertices that may be a part of the
body or part of other scene elements.

With readback false, vertex position is hidden
from scene elements external to the humanoid.
This may be OK if the humanoid is the top scene
element and collision detection with external
objects, and lighting, is not required.
The visual effect may be that no new normals
are generated as the humanoid moves, and is not
responsive to 'global' lighting.

Tech Tip: [] skin
               [] weight

Uses: Scenes involving many objects
where each object is an instance of a shader.

With readback true, up-to-date vertex position
information is available to scene elements
external to the humanoid, like we need for
everything that is fun.

Best Regards,
Joe


From: thyme <techuelife@tpg.com.au>
Date: Sunday, 12 September 2004 5:48 AM
To: Joe D Williams <JOEDWIL@earthlink.net>; Matthew T. Beitler <beitler@cis.upenn.edu>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Hi Joe and Matt and all

> The visual effect may be that no new normals
> are generated as the humanoid moves, and is not
> responsive to 'global' lighting.

Why would you ever want to stop global lighting?
Hardware will transform the normals properly. This is not a problem.
The only problem here is people thinking there is one!
I have done all this in Seamless3d with IndexedFaceSets, which involves
extra vertices being created for the internal wraparound texCoords, and
the shading looks great in hardware - just how the IndexedFaceSet is
meant to look.
regards
thyme


From: Paul Aslin <fabricatorgeneral@yahoo.com>
Date: Monday, 13 September 2004 2:32 AM
To: thyme <techuelife@tpg.com.au>; Joe D Williams <JOEDWIL@earthlink.net>; Matthew T. Beitler <beitler@cis.upenn.edu>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues


I think an example would help here.

Let's say we make a figure wearing a shirt, or other cotton
clothing. Where the Joints are, you would expect to see
sharp folds when that area is bent; this would only occur
if the normals are calculated on the fly.

Basically the Browser/application needs to treat every
alteration to the mesh during animation as if it were a new
IFS.


I have one suggestion though. Apparently it is possible, but
not easy, to write to textures from within a Shader script.
Obviously there is direct read/write with texture memory
from the CPU side of things.


From: thyme <techuelife@tpg.com.au>
Date: Tuesday, 14 September 2004 2:04 AM
To: Matthew T. Beitler <beitler@cis.upenn.edu>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Hi Matt and All

> thyme < techuelife@tpg.com.au > wrote:
>  >
>  > I don't know much about the history of the group. Are you sure this
>  > is what the group originally intended for their design? I very much
>  > doubt they would have intended for creases to appear and disappear
>  > during animation. If the point field values are not meant to be
>  > updated by animation, I cannot see any point in developing a
>  > workaround to a problem that does not exist.
>  >
> I think you've misunderstood the crux of the matter at hand...  I'll
> attempt to summarize the issue more clearly...

Thank you for going to the trouble to try and explain things more
clearly, but sorry, I think you, Justin and others seriously
misunderstand weighted vertex animation and shading.

In the following URL (that should open in any browser):

http://www4.tpg.com.au/users/gilldawn/seamless3d/originalIFS.wrl

we see a simple VRML97 file containing a single IndexedFaceSet.
It was generated and trimmed by Seamless3d, so it is a high quality
avatar with no seams arising from the default normals and
wraparound texture.

Instead of directly specifying the normals, the IndexedFaceSet
uses texCoordIndex and creaseAngle set to 3.14 to achieve a
smooth look. Behind the scenes Seamless3d first calculates the
normals for the IndexedFaceSet, then it creates the extra vertices
so that the number of texCoords and coordinates match up for
the hardware shader.
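
As a rough Java sketch (illustrative only; this is not the Seamless3d
source), that pre-pass works along these lines:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class SeamSplitter {
    final List<int[]> vertices = new ArrayList<>();     // {coord, texCoord}
    private final Map<Long, Integer> seen = new HashMap<>();

    // Remaps parallel coordIndex/texCoordIndex lists so that each
    // (coordinate, texCoord) pairing gets exactly one unified vertex;
    // a coordinate on a wraparound seam is duplicated automatically.
    int[] split(int[] coordIndex, int[] texCoordIndex) {
        int[] remapped = new int[coordIndex.length];
        for (int i = 0; i < coordIndex.length; i++) {
            long key = ((long) coordIndex[i] << 32)
                     | (texCoordIndex[i] & 0xffffffffL);
            Integer v = seen.get(key);
            if (v == null) {                    // first use of this pairing
                v = vertices.size();
                vertices.add(new int[] { coordIndex[i], texCoordIndex[i] });
                seen.put(key, v);
            }
            remapped[i] = v;
        }
        return remapped;
    }
}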

In the next pictures:

http://www4.tpg.com.au/users/gilldawn/seamless3d/correctNormals.png
http://www4.tpg.com.au/users/gilldawn/seamless3d/correctNormals2.png

we see the IndexedFaceSet added to a HAnimHumanoid node
animated and rendered using Seamless3d's hardware shader.
It looks how it is meant to look, true to how the artist intended
the IndexedFaceSet to look. The normals look correct because
the hardware shader was designed for this task and so takes
care of transforming the normals for us.

In the next example:

http://www4.tpg.com.au/users/gilldawn/seamless3d/recalculatedNormals2.png
http://www4.tpg.com.au/users/gilldawn/seamless3d/recalculatedNormals.png

we see how things go horribly wrong when
we try to recalculate the normals for each frame (same as the
Boxman). The triangles in places like the elbows, knees and
especially the shoulders and the crotch look very dark
because the normals cannot be generated properly after they
have been transformed.
Misunderstanding what vertex weighted animation is all about
and misinterpreting the VRML/X3D specs will only lead to a
big ugly task for the X3D programmer, which will create very
poor results no one will be interested in.
Why do we want to tamper with the renderer? No one has yet
been able to explain why and sound like they know what they
are talking about.

Please verify my results by downloading Seamless3d from:

http://www4.tpg.com.au/users/gperrett/seamless3d/index.html

It is only 603 KB to download.
Unzip the file to your hard disk drive, run it and open the following file:

http://www4.tpg.com.au/users/gilldawn/seamless3d/buildTextureAv.x3dvz

and the texture for it:

http://www4.tpg.com.au/users/gilldawn/seamless3d/buildTextureAv.png

This file must be downloaded to the hard drive too.
Press the space bar to toggle out of wire-frame mode and right-drag to
examine.

The URL for the software version that recalculates the creaseAngle for each
frame is:

http://www4.tpg.com.au/users/gilldawn/seamless3d/recalculatedNormals.wrl

This has been tested in BitManagement/Blaxxun Contact. (Don't
open recalculatedNormals.wrl in Seamless3d, because it will
ignore the script and render it to look good.)

BTW, the Boxman example's writing to the point field one
element at a time is illegal in VRML or X3D. Braden originally
pointed this out to me, and I pointed it out in a posting here
a few months ago. To my understanding the code in my software
version is legal and runs faster, but it won't compare to the 400
to 500 frames per second I get in Seamless3d using a cheap
ATI Radeon 9200 video card.

I made Seamless3d able to render standard HAnimJoint nodes
using a DirectX single skin vertex weighted shader months ago,
and have spent the last 3 years of my life developing code and
art that revolves around single skin mesh animation.
I do not yet know of anyone else who has a hardware shader
in their program to render a standard X3D avatar utilising the
HAnimJoint node. If anyone can point out why Seamless3d
is rendering this H-Anim avatar in a non-compliant way, as
Justin says it sounds like it is, I would much like to hear why.

regards
thyme
creator of Seamless3d and Techuelife Island
http://www4.tpg.com.au/users/gperrett/seamless3d/index.html



From: Justin Couch <justin@vlc.com.au>
Date: Tuesday, 14 September 2004 2:56 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

thyme wrote:

> Thank you for going to the trouble to try and explain things more
> clearly, but sorry, I think you, Justin and others seriously
> misunderstand weighted vertex animation and shading.

Actually, we understand it perfectly. We also happen to understand the
X3D spec perfectly. In fact, I wrote a large portion of X3D (and a large
part of VRML97 Part 2), so I know exactly what the spec means. We're
trying to tell you that your interpretation of the spec is not correct,
but that does not seem to be getting through to you. Weighted vertex
animation is one thing when it is considered in a vacuum. However, in
the realm of X3D there are many, many other nodes and structures it must
interact with. These structures have very clearly defined behaviours and
there is no leeway in their interpretation or implementation. It is a
case of follow it or else be labelled as non-conformant. In the grand
scheme of things that could also potentially result in legal action over
the use of service marks, usage of the term "X3D", etc.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/


From: Justin Couch <justin@vlc.com.au>
Date: Tuesday, 14 September 2004 3:29 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

thyme wrote:

> The X3D specs state:
>
> The creaseAngle field affects how default normals are generated.

[snip]

> Nowhere here does it state that creaseAngle applies to the transformed
> vertices as opposed to the IndexedFaceSet's field values.

It doesn't have to because it doesn't care how you got to the normals in
the first place. Normal generation, by definition, must happen after you
know where your coordinates are. How those coordinates got to where they
are, the spec does not care. You could be using a weighted mesh, a NURBS
surface, or a CoordinateInterpolator to change the coordinates. Once you
have those final vertex positions, you can then calculate the individual
face normals, from which the rest of the particular node behaviour takes
place. In the case of IFS, you must do some funky stuff with creaseAngle
- a requirement that other nodes like TriangleSet do not have.

> Your interpretation would bring in many problems: not only performance
> issues but other serious problems, like creases appearing and
> disappearing when not wanted (you overlooked how serious a problem this
> would be in a previous posting from me).

I deliberately pointed out that it is not a problem but a _desired_ and
_required_ behaviour. When the VRML and X3D specifications were
designed, the highest priority was for ease of end user use, not for
performance. That's why in VRML you see IFS, Extrusion and ElevationGrid
but none of the simpler primitive types like a basic bag of triangles.
If the user does not want a crease to appear during animation, they know
how to fix it. It's there in black and white in the spec - make
creaseAngle bigger until the crease no longer appears in animation.

The fact that it causes problems from a performance perspective (means
that I can't implement it with hardware shaders) is precisely the reason
I started this thread in the first place. It's to make people aware of
this issue and to see if we can come up with alternatives or
recommendations on how users should structure their scene graph.

> creaseAngle would be valueless like this unless it was
> always set to 2 * PI radians (no crease) or if you did not animate it. No,
> it would not be "a content problem." "The user screwed up and didn't put a
> "big enough" value in" as you put it. It would be a serious design flaw
> because a limb can bend at any angle during animation.

There is no design flaw. It is a deliberately designed requirement. If
that limb goes over the user-provided angle then a crease is _required_
to appear. Just because you don't like it, does not mean that the spec
should be ignored. Perhaps the user deliberately set the crease angle so
that creases would appear in certain situations (see my example earlier
about the jaw line on the face). In a medical training application, that
crease can be a very significant factor - it may indicate the end of a
broken bone attempting to poke through the skin rather than a large
abscess, which is what a more rounded skin surface might suggest.

> The specs should be more explicit perhaps with some words like:
>
> creaseAngle is only relevant to the IndexedFaceSet's field values; it does not
> apply to the vertices after they have been transformed.

Define "transformed". Transformed is a very generic term in 3D graphics.
The way you are using it is far from common usage. It is most often used
in the following case:


Transform {
   translation 0 12 3
   children Shape {
     geometry IndexedFaceSet { ...
     }
   }
}

> Because it does not state it clearly enough, leaving the possibility of
> interpreting creaseAngle as applying to transformed vertices instead of the
> actual field values, what good reason is there for going with the worse
> of the two interpretations?

Because it is extremely clear about the meaning. There is no other
possible interpretation. It is clear that the spec is talking about face
normals and how they interact to produce a normal at each vertex. Face
normals are derived from the coordinates of the face. How those
coordinates came into being, the spec does not care about. It presumes
that they exist.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Look through the lens, and the light breaks down into many lights.
  Turn it or move it, and a new set of arrangements appears... is it
  a single light or many lights, lights that one must know how to
  distinguish, recognise and appreciate? Is it one light with many
  frames or one frame for many lights?"      -Subcomandante Marcos
-------------------------------------------------------------------



From: thyme <techuelife@tpg.com.au>
Date: Tuesday, 14 September 2004 4:00 AM
To: Paul Aslin <fabricatorgeneral@yahoo.com>; Joe D Williams <JOEDWIL@earthlink.net>; Matthew T. Beitler <beitler@cis.upenn.edu>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Hi Paul, Joe, Matt and All:

>Paul Aslin <fabricatorgeneral@yahoo.com> wrote:
> I think an example would help here.
>
> Let's say we make a figure wearing a shirt, or other cotton
> clothing. Where the Joints are, you would expect to see
> sharp folds when that area is bent; this would only occur
> if the normals are calculated on the fly.

The hardware shader naturally creates the correct types of
creases for this sort of thing.
Please look at these two pictures of Seamless3d with a hardware
renderer in action for a single skin mesh H-Anim compliant avatar.

http://www4.tpg.com.au/users/gilldawn/seamless3d/legCrease.png
http://www4.tpg.com.au/users/gilldawn/seamless3d/legCrease2.png

I for one don't want diamond edges appearing in organic
shapes or clothes. I cannot imagine that is going to look
any good at all, and even if it did, how can you depend on
the crease appearing where you want it? What is to stop it
from appearing on the front of the knee if it's at the same
angle as the back of the knee?
This was not the original intent of creaseAngle, I am sure.
In a case where we do want creaseAngle for a non-organic item
that's part of the single IndexedFaceSet, we will mess
things up if creaseAngle is going to recalculate the creases
on the fly, because it will cause the organic shape to have
creases in unwanted places when it animates.

>Paul Aslin <fabricatorgeneral@yahoo.com> wrote:
> Basically the browser/application needs to treat every
> alteration to the mesh during animation as if it were a new
> IFS.

Where do you get this assumption from? Please look at the
posting I just sent to Matt.
The more I think about it, the more I am puzzled why anyone
is persisting with something so impracticable, especially since
recalculating normals from creaseAngle is not specified anywhere
in the specs. We are simply doing transformations; we are not
updating fields, so there is no problem, because we can render
single skin mesh models exactly the same way IndexedFaceSets
have always been transformed, except that we are using multiple
transformation matrices for the coordinates and normals.
This makes no difference, because the renderer takes care of the
normals. That's the way it's always been; anything else is an
invention. I would like to have at least a good reason for
straying from conventional wisdom, not a bad one.
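
To be concrete about "multiple transformation matrices for the
coordinates and normals": this is standard weighted (linear blend)
vertex animation, sketched here in Java with illustrative names (not
any particular browser's code):

public class SkinMesh {

    /** Apply a row-major 4x4 matrix m to point p (w assumed 1). */
    static float[] xform(float[] m, float[] p) {
        return new float[] {
            m[0]*p[0] + m[1]*p[1] + m[2]*p[2]  + m[3],
            m[4]*p[0] + m[5]*p[1] + m[6]*p[2]  + m[7],
            m[8]*p[0] + m[9]*p[1] + m[10]*p[2] + m[11] };
    }

    /**
     * restVerts come from skinCoord and are never overwritten;
     * jointMats[j] is the current matrix for joint j;
     * influences[v]/weights[v] list the joints affecting vertex v
     * and their blend weights (summing to 1).
     */
    static float[][] deform(float[][] restVerts, float[][] jointMats,
                            int[][] influences, float[][] weights) {
        float[][] out = new float[restVerts.length][3];
        for (int v = 0; v < restVerts.length; v++) {
            for (int i = 0; i < influences[v].length; i++) {
                float[] p = xform(jointMats[influences[v][i]],
                                  restVerts[v]);
                float w = weights[v][i];
                out[v][0] += w * p[0];
                out[v][1] += w * p[1];
                out[v][2] += w * p[2];
            }
        }
        return out;
    }
}

Normals can be run through the same blend (strictly, with the
inverse transpose when the matrices include scaling). The rest
positions are never touched; only the output buffer changes each
frame, which is why no field update is involved.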

regards
thyme


From: Justin Couch <justin@vlc.com.au>
Date: Tuesday, 14 September 2004 5:07 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Joe D Williams wrote:


> A Site has a position coincident with a skin vertex.
> During movement the skin may be deformed
> according to weighting, which may be updated
> frame to frame.
> The need is to simulate contact between
> a Site of this humanoid geometry with a
> specific Site of this or another humanoid geometry.

This really isn't possible with an internal skin mesh implementation.
Due to the large number of calculations that need to be performed, and
the possibility of a large number of joints, you delay updating the skin
coordinates until after the end of the event model, and just before
rendering to the screen. In this way, you only need to propagate all the
matrix calculations once through the entire skeleton and then do the
coordinate calculations. The only way you could keep it coincident with
the surface is to do your own mesh animation through a script and keep
track of the nearest coordinates there.
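
A sketch of that deferral, in Java with illustrative names (not Xj3D's
actual classes):

import java.util.ArrayList;
import java.util.List;

public class DeferredSkinUpdate {

    static class Joint {
        float[] local = identity();   // set by rotation/translation events
        float[] world = identity();   // filled in during propagation
        List<Joint> children = new ArrayList<>();
    }

    Joint root = new Joint();
    boolean dirty = false;

    /** Event model: joints may change many times; just mark the state. */
    void onJointEvent() { dirty = true; }

    /** Called once per frame, after the event cascade has finished. */
    void beforeRender() {
        if (!dirty) return;
        propagate(root, identity());  // single root-to-leaf matrix pass
        // ...then run the weighted vertex deformation over the skin coords
        dirty = false;
    }

    void propagate(Joint j, float[] parentWorld) {
        j.world = multiply(parentWorld, j.local);
        for (Joint c : j.children)
            propagate(c, j.world);
    }

    static float[] identity() {
        return new float[] { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
    }

    static float[] multiply(float[] a, float[] b) {  // row-major 4x4
        float[] r = new float[16];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    r[i*4 + j] += a[i*4 + k] * b[k*4 + j];
        return r;
    }
}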

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Look through the lens, and the light breaks down into many lights.
  Turn it or move it, and a new set of arrangements appears... is it
  a single light or many lights, lights that one must know how to
  distinguish, recognise and appreciate? Is it one light with many
  frames or one frame for many lights?"      -Subcomandante Marcos
-------------------------------------------------------------------


From: Justin Couch <justin@vlc.com.au>
Date: Tuesday, 14 September 2004 5:09 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Matthew T. Beitler wrote:

>>> What do people think about a field addition such as:
>>>   SFBool    [in,out] readback     TRUE
>>>
>>> This would keep the behavior which the group originally intended,
>>> while allowing for the behavior which Justin is interested in...
>>
>>
>> Ah... where would that go - on Humanoid? More details please.
>>
> That's what I was thinking, but I'm open to other options for specifying
> which behavior is desired...  Thoughts???

That would be OK with me so long as the support is required at a higher
component level than the base spec. Supporting that field will guarantee
that Xj3D has to fall back to software rendering to be able to handle
it, and I'd like to avoid that if possible.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Look through the lens, and the light breaks down into many lights.
  Turn it or move it, and a new set of arrangements appears... is it
  a single light or many lights, lights that one must know how to
  distinguish, recognise and appreciate? Is it one light with many
  frames or one frame for many lights?"      -Subcomandante Marcos
-------------------------------------------------------------------


From: Mark Callow <msc@meer.net>
Date: Wednesday, 15 September 2004 12:45 AM
To: Justin Couch <justin@vlc.com.au>
Cc: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues


Justin Couch wrote:

> thyme wrote:
>
>> Nowhere does it state that creaseAngle applies to the transformed
>> vertices as opposed to the IndexedFaceSet's field values.
>
>
> It doesn't have to because it doesn't care how you got to the normals
> in the first place. Normal generation, by definition, must happen
> after you know where your coordinates are. How those coordinates got
> to where they are, the spec does not care. You could be using a
> weighted mesh, a NURBS surface, or a CoordinateInterpolator to change
> the coordinates. Once you have those final vertex positions, you can
> then calculate the individual face normals, from which the rest of the
> particular node behaviour takes place. In the case of IFS, you must do
> some funky stuff with creaseAngle - a requirement that other nodes
> like TriangleSet do not have.
>

If I provide an array of normals along with my vertex positions, does
X3D recompute my normals after transforming the vertices? I don't think
so. (For one thing, performance with any graphics hardware that does
transforms in hardware would be horrible.) It just transforms them.

But you are saying that if I don't supply normals and let X3D calculate
them for me, it will recalculate them after every vertex transformation.
That seems to me surprisingly different behaviour from what happens when
normals are supplied by the content. I thought creaseAngle was basically
a way to reduce the amount of data sent over the wire, and that behaviour
during transformation was intended to be similar to that of user-supplied
normals. I certainly wouldn't be expecting a huge performance drop!

Regards

    -Mark


From: Justin Couch <justin@vlc.com.au>
Date: Wednesday, 15 September 2004 4:09 AM
To: h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

Mark Callow wrote:

> If I provide an array of normals along with my vertex positions, does
> X3D recompute my normals after transforming the vertices? I don't think
> so.

That's correct. The particular spec wording for IFS is:

"If the normal field is not NULL, it shall contain a Normal node whose
normals are applied to the vertices or faces of the IndexedFaceSet in a
manner exactly equivalent to that described above for applying colours
to vertices/faces (where normalPerVertex corresponds to colorPerVertex
and normalIndex corresponds to colorIndex). If the normal field is NULL,
the browser shall automatically generate normals, using creaseAngle to
determine if and how normals are smoothed across shared vertices (see
11.2.3 Common geometry fields)."

So that means if explicit normals are defined, they are applied to each
vertex. If normalPerVertex is true, for each vertex, pull the
corresponding normal from the Normal node and send it along. If
normalPerVertex is false, then you are given a face normal, which then
just gets placed in every vertex that is used for that face. That also
means you need to treat the vertex for each face separately from every
other face that may happen to share that vertex.
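
In code form, the explicit-normal case is just a per-corner lookup; a
minimal Java sketch (illustrative names, ignoring normalIndex for
brevity):

import java.util.ArrayList;
import java.util.List;

public class ExplicitNormals {

    /**
     * normals: the Normal node's vectors; faces: coordIndex, one int
     * array per face. Returns one normal per face corner.
     */
    static float[][] apply(float[][] normals, int[][] faces,
                           boolean normalPerVertex) {
        List<float[]> out = new ArrayList<>();
        for (int f = 0; f < faces.length; f++) {
            for (int vertex : faces[f]) {
                // true:  pull the vertex's own normal and send it along
                // false: replicate face f's normal into every corner,
                //        detaching the corner from other faces that
                //        share the same vertex
                out.add(normalPerVertex ? normals[vertex] : normals[f]);
            }
        }
        return out.toArray(new float[0][]);
    }
}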

> But you are saying that if I don't supply normals and let X3D calculate
> them for me, it will recalculate them after every vertex transformation.
> That seems to me surprisingly different behaviour that happens when
> normals are supplied by the content.

That's correct, and is the required behaviour. Auto-generating normals
is a convenience for the user, particularly with regard to file size.
However, it has some very severe performance impacts on the rendered
geometry, simply because you have to regenerate the entire geometry state
every frame if something changes - normals, colours, vertices, texture
coordinates etc. In Xj3D, using auto-generated normals cuts the frame
rate in a relatively complex scene to about a third of what it is without
them. We're fairly efficient in what we do, but there's no way the code
can get to even half the speed of providing explicit normals.

--
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Look through the lens, and the light breaks down into many lights.
  Turn it or move it, and a new set of arrangements appears... is it
  a single light or many lights, lights that one must know how to
  distinguish, recognise and appreciate? Is it one light with many
  frames or one frame for many lights?"      -Subcomandante Marcos
-------------------------------------------------------------------


From: thyme <techuelife@tpg.com.au>
Date: Wednesday, 15 September 2004 5:06 AM
To: Justin Couch <justin@vlc.com.au>; h-anim@web3d.org <h-anim@web3d.org>
Subject: Re: [h-anim] H-anim with hardware shaders issues

> > thyme <techuelife@tpg.com.au> wrote:
> > The specs should be more explicit perhaps with some words like:
> >
> > creaseAngle is only relevant to the IndexedFaceSet's field values; it does
> > not apply to the vertices after they have been transformed.
>
> Justin Couch <justin@vlc.com.au> wrote:
> Define "transformed". Transformed is a very generic term in 3D graphics.
> The way you are using it is far from common usage. It is most often used in
> the following case
>
>
> Transform {
>    translation 0 12 3
>    children Shape {
>      geometry IndexedFaceSet { ...
>      }
>    }
> }
>

With DirectX, which is commonly used for 3D graphics, whether you want
to transform a group of vertices by a single transform matrix or by
multiple transform matrices (a weighted mesh), you use a SetTransform
function.
Because of this reality, from a programmer's point of view at least, it
is not hard to view a HAnimJoint node as a kind of transform node.
If you look at the code for the boxman (written by "James Smith
- james@vapourtech.com"), which Matt referred to in this very thread as
an example of an H-Anim avatar that uses the HAnimJoint node, you will
see code using the word transform and the comment:

// Transforms the vertices related to a joint

regards
thyme
creator of Seamless3d and Techuelife Island
http://www4.tpg.com.au/users/gperrett/seamless3d/index.html