Commit messages
|
JAWTWindow.lockSurface(): Check AWT component's native peer
- Fix GLAutoDrawable.dispose(): Dispose drawable even w/o context
- It is possible that the GLContext has not been created (not made current),
  so the drawable shall be disposed of independently.
- Merge Runnable 'postDisposeOnEDTAction' into the dispose Runnable for clarity
- GLDrawableHelper: Split disposeGL from invokeGLImpl for clarity
- JAWTWindow.lockSurface(): Check AWT component's native peer
- W/o a native peer (!isDisplayable()), JAWT locking cannot succeed.
- On OSX OpenJDK 1.7, attempting to JAWT lock a peer-less component crashes the VM
- MacOSXJAWTWindow.lockSurfaceImpl(): Remove redundant null checks
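A minimal sketch of the guard described above, assuming plain AWT; JawtLockGuard is a hypothetical helper, not JOGL's actual implementation:

    import java.awt.Component;

    // Hypothetical helper: JAWT locking requires a native peer, and
    // Component.isDisplayable() reports whether that peer currently exists.
    final class JawtLockGuard {
        static boolean mayLockSurface(final Component c) {
            // W/o a peer the JAWT lock cannot succeed and, on OSX w/
            // OpenJDK 1.7, attempting it may even crash the VM - refuse early.
            return c.isDisplayable();
        }
    }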
|
samplingSink*
|
setSamplingSink(..). Create MSAA samplingSink lazily if null.
|
b83b068c0f426f24a58e2bd9f52de9ebd0c7876d, sync GL command stream before FBO reconfig
Even though we have not experienced a bug here so far, it seems to be a good idea for
highly concurrent GL driver implementations.
|
destruction of drawable
Lack of finishing the GL command stream led to a SIGSEGV on Windows w/ the Nvidia driver,
where pending GL commands were probably still being processed concurrently.
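This commit and the FBO-reconfig commit above apply the same technique: drain the GL queue before touching the resource. A minimal sketch, assuming current JOGL package names; GLSync is a hypothetical helper:

    import com.jogamp.opengl.GL;

    // glFinish() returns only after all queued GL commands have completed,
    // so no pending command can touch a drawable (or FBO) that is about to
    // be destroyed or reconfigured.
    final class GLSync {
        static void drainBefore(final GL gl, final String what) {
            gl.glFinish();
            System.err.println("GL command stream finished before: " + what);
        }
    }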
|
c002e04f848116922a1ed7bd96ead54961649bbd
As suggested by Julien Gouesse, align 'enqueue(..)' method w/ 'invoke(..)':
- public void enqueue(GLRunnable glRunnable);
+ public boolean invoke(boolean wait, List<GLRunnable> glRunnables);
|
and after ctx/drawable swap - sync'ing GL state
Otherwise a driver crash may occur on Windows/NVidia.
|
pause/stop - taking execution frequency into account
|
(API Change); Added GLDrawableUtil
A GLEventListener resides in one of two states: initialized or uninitialized.
When added to a GLAutoDrawable, it is uninitialized.
A first 'display()' will issue GLEventListener's 'init(..)' which renders it initialized.
This is usually accompanied by 'reshape(..)' propagating the drawable's dimension.
Destruction of the GLAutoDrawable will issue GLEventListener's 'dispose(..)' which renders it uninitialized.
It turns out these means of GLEventListener control are not sufficient in case
the user needs to remove and add them during the lifecycle and rendering of their GLAutoDrawable host.
GLAutoDrawable 'removeGLEventListener(..)' merely removes the GLEventListener from the list,
but does not complete its lifecycle, i.e. issue 'dispose(..)' if initialized, to release GL related resources.
Hence the following essential API changes are made to complete the lifecycle:
+ public GLEventListener disposeGLEventListener(GLEventListener listener, boolean remove);
disposing a single GLEventListener, its removal from the list being optional.
This is demonstrated via GLDrawableUtil.swapGLContextAndAllGLEventListener(GLAutoDrawable a, GLAutoDrawable b), see below.
++++++++
Furthermore, the following API changes were made to expose complete control of
GLEventListener to the user:
- public void removeGLEventListener(GLEventListener listener);
+ public GLEventListener removeGLEventListener(GLEventListener listener);
The return value allows simple pipelining and also tells whether
the passed listener was actually removed.
- public GLEventListener removeGLEventListener(int index) throws IndexOutOfBoundsException;
+ public int getGLEventListenerCount();
+ public GLEventListener getGLEventListener(int index) throws IndexOutOfBoundsException;
Dropping the redundant removal by index, while adding count and get methods.
+ public boolean getGLEventListenerInitState(GLEventListener listener);
+ public void setGLEventListenerInitState(GLEventListener listener, boolean initialized);
Allows retrieving and setting of listener states.
All in all these API changes allow a user full freedom in dealing w/
GLEventListeners hosted by a GLAutoDrawable impl. and shall be future proof.
Note that we have avoided the Iterator pattern due to its overhead of temporary object creation.
The simple indexed access allows us to implement each method as an atomic operation.
+++++++++++
Furthermore a simple enqueue(..) method has been added, allowing one to merely enqueue a GLRunnable
w/o provoking its execution - as invoke(..) does.
This method serves the use case where GLRunnables are batched and executed later on.
public boolean invoke(boolean wait, GLRunnable glRunnable);
+ public void enqueue(GLRunnable glRunnable);
+++++++++++
Added GLDrawableUtil, which exposes utility functions to rearrange GLEventListeners, modify a GLAutoDrawable, etc.
GLDrawableUtil.swapGLContextAndAllGLEventListener(GLAutoDrawable a, GLAutoDrawable b)
is tested and demonstrated w/ TestGLContextDrawableSwitchNEWT.
Manually tested on X11, OSX and Windows.
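A short sketch of how the new calls compose, using only the API listed above; the drawable and listener are assumed to exist, and ListenerLifecycleDemo is hypothetical:

    import java.util.ArrayList;
    import java.util.List;
    import com.jogamp.opengl.GL;
    import com.jogamp.opengl.GLAutoDrawable;
    import com.jogamp.opengl.GLEventListener;
    import com.jogamp.opengl.GLRunnable;

    final class ListenerLifecycleDemo {
        static void demo(final GLAutoDrawable drawable, final GLEventListener listener) {
            // Complete the listener's lifecycle (issues dispose(..) if it is
            // initialized) while keeping it attached: remove == false.
            drawable.disposeGLEventListener(listener, false /* remove */);

            // Batch GLRunnables, then execute the whole batch via invoke(..).
            final List<GLRunnable> batch = new ArrayList<GLRunnable>();
            batch.add(new GLRunnable() {
                public boolean run(final GLAutoDrawable d) {
                    d.getGL().glClear(GL.GL_COLOR_BUFFER_BIT);
                    return true; // rendered to the drawable
                }
            });
            drawable.invoke(true /* wait */, batch);
        }
    }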
|
Propagate drawable change to MacOSXCGLContext where either context/NSView or context/NSOpenGLLayer
association needs to be updated.
Fixes drawable/context switch.
|
reparenting TestParentingFocusTraversal01AWT
|
reduce buffer usage (performance) in favor of float[].
Thomas De Bodt reported this error and provided the unit test.
|
(GL3), use GL3.2 compatible shader; Use VBO in general.
Covered by:
Auto unit tests: TestOffscreenLayer01GLCanvasAWT, TestOffscreenLayer02NewtCanvasAWT
Manual: TestGearsES2AWT '-gl3 -layered'
|
add forceGL3; TextureDraw01ES2Listener uses defaultShaderCustomization()
|
for success.
|
CGL/CGLExt Robustness ..
|
internal APIs, critical array is not required, hence redundant.
|
internal APIs, critical array is not required, hence redundant.
|
internal APIs, critical array is not required, hence redundant.
|
internal APIs, critical array is not required, hence redundant.
|
- Setting up default VAO for all GL >= 3.2 core ctx.
Refines commit 9b6448b1d54716fd455c0cad0c6133c0edeb3bb8
Due to GL 3.2 core spec: E.2. DEPRECATED AND REMOVED FEATURES (p 331)
"There is no more default VAO buffer 0 bound, hence generating and binding one
to avoid INVALID_OPERATION at VertexAttribPointer."
Clearer is the GL 4.3 core spec, 10.4 (p 307):
"An INVALID_OPERATION error is generated by any commands which
modify, draw from, or query vertex array state when no vertex array is bound.
This occurs in the initial GL state, and may occur as a result of BindVertexArray
or a side effect of DeleteVertexArrays."
+++
I have just read (same spec) 2.10 (p 46/47):
"An INVALID_OPERATION error is generated if any of the *Pointer commands
specifying the location and organization of vertex array data are called while zero
is bound to the ARRAY_BUFFER buffer object binding point, and the pointer
argument is not NULL."
.. which only constrains *Pointer command use to _VBO_, not forcing a VAO.
+++
|
simple major version number check.
|
w/ higher GLSL versions
|
except for sampler2D (mediump instead of lowp)
|
GLContextImpl: Bind default VAO if having quirk RequiresBoundVAO.
OSX w/ OpenGL >= 3 core context implementations require a bound VAO for vertex attribute operations,
i.e. VertexAttribPointer(..). This has been experienced on OSX 10.7.5, OpenGL 3.2 core w/ Nvidia GPU,
and in several forum posts. Such 'behavior' violates the GL 3.2 core specification,
which does not state this requirement, hence it is a bug. (Please correct me if I am wrong!)
GLContextImpl works around this quirk by generating a default VAO and binding it at the 1st makeCurrent (@ creation),
and deleting it at destroy. This is minimally invasive, since no action is required for subsequent makeCurrent or release calls.
We assume that a user who creates and binds her own VAO will mind this quirk.
Note: We could enhance this workaround by querying the currently bound VAO at makeCurrent() and binding our default if none is bound.
However, we refrain from that to keep the workaround and its complexity minimal.
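A minimal illustration of the workaround; names are illustrative, not GLContextImpl's internals:

    import com.jogamp.opengl.GL3;

    // One default VAO, generated and bound at the 1st makeCurrent (@ creation)
    // and deleted at destroy - no work for subsequent makeCurrent/release.
    final class DefaultVAO {
        private final int[] vao = new int[1];

        void onCreate(final GL3 gl) {
            gl.glGenVertexArrays(1, vao, 0);
            gl.glBindVertexArray(vao[0]); // satisfies the RequiresBoundVAO quirk
        }

        void onDestroy(final GL3 gl) {
            gl.glDeleteVertexArrays(1, vao, 0);
        }
    }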
|
GLSL version and default precision (if GLES) - Used by GearsES2/RedSquare/PointDemo (Made GLSL version proof)
|
string (for shader programs)
Uses GL_SHADING_LANGUAGE_VERSION and parses it via VersionNumber, as well as having a static fallback
using the GL context version.
The value is valid and can be retrieved after ctx has been made current once.
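The underlying query is a one-liner once the ctx is current; a sketch that leaves out the VersionNumber parsing and the static fallback:

    import com.jogamp.opengl.GL2ES2;

    // Reads the GLSL version string, e.g. "1.50" on desktop or
    // "OpenGL ES GLSL ES 1.00" on ES2.
    final class GlslVersionQuery {
        static String query(final GL2ES2 gl) {
            return gl.glGetString(GL2ES2.GL_SHADING_LANGUAGE_VERSION);
        }
    }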
|
EXT_packed_depth_stencil extension
|
(gl_PointCoords n/a otherwise); Add FFP Emul point test in TestPointNEWT/PointDemoES1.
|
fourth element was invalid
|
All *Pointer methods used 'normalized:=false', but we cannot assume
the fixed function code uses normalized (0f..1f) values.
On the contrary, it usually uses the native format's value range.
Hence we have to pass normalized:=true for all fixed-point data types
and normalized:=false for floating-point data types.
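Expressed against glVertexAttribPointer, the rule reads roughly as follows; the helper and its parameters are illustrative:

    import com.jogamp.opengl.GL;
    import com.jogamp.opengl.GL2ES2;

    // Fixed-point component types (GL_BYTE, GL_SHORT, ...) are normalized
    // into the 0f..1f range; floating point passes through unchanged.
    final class FixedPointAttribs {
        static void setPointer(final GL2ES2 gl, final int loc, final int compType) {
            final boolean normalized = GL.GL_FLOAT != compType;
            gl.glVertexAttribPointer(loc, 4 /* size */, compType, normalized,
                                     0 /* stride */, 0 /* VBO offset */);
        }
    }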
|
GL_POINT_SOFT and dist/fade attenuation (Adding basic POINT unit tests)
gl_PointSize (and all other uniform array elems) was not propagated due to wrong usage of the GLUniformData component param.
For efficiency, we now use vec4[2] plus #defines in the shader to ease readability.
GL_POINT_SOFT uses gl_PointCoord to determine the inside/outside circle position,
while adding a seam of 10% in/out. This almost matches 'other' implementations and gives a nice smooth circle.
!GL_POINT_SOFT produces a proper square (billboard).
The point vertex shader takes dist/fade attenuation into account.
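A sketch of the soft-point fragment logic with the 10% seam, written as a GLSL snippet held in a Java string; the actual JOGL shader may differ, and v_color is an assumed varying:

    final class SoftPointFrag {
        static final String CORE =
            "vec2  p = gl_PointCoord - vec2(0.5);\n" +
            "float d = 2.0 * length(p);               // 0 at center, 1 at rim\n" +
            "float a = 1.0 - smoothstep(0.9, 1.0, d); // 10% in/out seam\n" +
            "if( a <= 0.0 ) { discard; }\n" +
            "gl_FragColor = vec4(v_color.rgb, v_color.a * a);\n";
    }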
|
allowing passing a GL action w/ custom argument and return value.
Adding simple POINTS shader not regarding POINTS parameters and not using a texture (commented out).
FIXME:
Even though it works using a texture and gl_PointCoord in the frag shader,
I don't see the point here (lol) if gl_PointSize must be 1.0 in the vert shader ..
otherwise nothing is seen on ES2.0.
On Desktop, POINTS are always shown as 1-pixel-sized points!
|
twice (duh!) almost halved performance :)
TODO: Create GL_POINT texture and render w/ glDraw*()
|
ES1 impl. detection
'glBegin' is not ES1, duh!
|
found in ES1 library
This is the case in BCM-VC-IV blobs, tested on Raspberry Pi
|
discarding pixels of culled faces.
|
according to its usage (update Mvi/Mvit only if lighting is being used)
|
resize element count
|
and size gross-net > PAGE_SIZE
Usually one PAGE_SIZE chunk is written within a single DMA xfer command,
so if the gross buffer bulk transfer contains more unused data than PAGE_SIZE,
we may win by transferring each single buffer at buffer update.
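The heuristic in code form; names and the PAGE_SIZE value are assumptions:

    // If the unused share of the gross buffer exceeds one page, per-buffer
    // transfers likely beat a single bulk transfer of the gross region.
    final class XferHeuristic {
        static final int PAGE_SIZE = 4096; // assumed, platform dependent

        static boolean transferEachBuffer(final int grossBytes, final int netBytes) {
            return grossBytes - netBytes > PAGE_SIZE;
        }
    }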
|
imm. gl* functions; Default color padding is 1f; Make fields private.
|
glColor4f() more efficient, use pre-alloc NIO buffer
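A sketch of the pre-allocated NIO buffer idea; the class is hypothetical, the point is one allocation reused across calls:

    import java.nio.FloatBuffer;
    import com.jogamp.common.nio.Buffers;

    // Reuse a single direct FloatBuffer for every glColor4f()-style call
    // instead of allocating a fresh buffer each time.
    final class ColorScratch {
        private final FloatBuffer rgba = Buffers.newDirectFloatBuffer(4); // pre-allocated once

        FloatBuffer set(final float r, final float g, final float b, final float a) {
            rgba.put(0, r).put(1, g).put(2, b).put(3, a);
            return rgba;
        }
    }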
|