simple major version number check.
w/ higher GLSL versions
GLContextImpl: Bind a default VAO if the quirk RequiresBoundVAO is set.
The OSX OpenGL >= 3 core context implementation requires a bound VAO for vertex attribute operations,
i.e. VertexAttribPointer(..). This has been experienced on OSX 10.7.5, OpenGL 3.2 core w/ an Nvidia GPU,
and is reported in several forum posts. Such 'behavior' violates the GL 3.2 core specification,
which does not state this requirement, hence it is a bug. (Please correct me if I am wrong!)
GLContextImpl works around this quirk by generating a default VAO and binding it at the 1st makeCurrent (@ creation),
deleting it at destroy. This is minimally invasive, since no action is required for subsequent makeCurrent or release calls.
We assume that a user who uses and binds her own VAO will be aware of this quirk.
Note: We could enhance this workaround by querying for a currently bound VAO at makeCurrent() and binding our default if none is bound.
However, we refrain from doing so to keep the workaround minimal and the complexity low.
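A hedged sketch of such a workaround (not the actual GLContextImpl code; helper names are illustrative), assuming a GL3-capable context:

    import javax.media.opengl.GL3;

    // Sketch: create and bind a default VAO once, right after the context is
    // first made current, so VertexAttribPointer(..) always sees a bound VAO.
    static int createAndBindDefaultVAO(final GL3 gl) {
        final int[] tmp = new int[1];
        gl.glGenVertexArrays(1, tmp, 0);   // generate the default VAO name
        gl.glBindVertexArray(tmp[0]);      // keep it bound for the context's lifetime
        return tmp[0];
    }

    // At context destruction:
    static void deleteDefaultVAO(final GL3 gl, final int vao) {
        gl.glDeleteVertexArrays(1, new int[] { vao }, 0);
    }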
string (for shader programs)
Uses GL_SHADING_LANGUAGE_VERSION and parses it via VersionNumber, with a static fallback
using the GL context version.
The value is valid and can be retrieved after the context has been made current once.
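For illustration only (not the actual GLContextImpl code), querying and parsing the GLSL version after the context is current could look as follows; the manual split is a stand-in for the VersionNumber parsing mentioned above and assumes a desktop-style "major.minor ..." string:

    import javax.media.opengl.GL2ES2;

    // Sketch: query GL_SHADING_LANGUAGE_VERSION and extract major/minor.
    // Returns {0, 0} if the string is unavailable; the caller may then fall
    // back to the GL context version.
    static int[] getGlslVersion(final GL2ES2 gl) {
        final String s = gl.glGetString(GL2ES2.GL_SHADING_LANGUAGE_VERSION);
        if (null == s) {
            return new int[] { 0, 0 };
        }
        final String[] parts = s.trim().split("[. ]");
        final int major = Integer.parseInt(parts[0]);
        final int minor = parts.length > 1 ? Integer.parseInt(parts[1]) : 0;
        return new int[] { major, minor };
    }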
(gl_PointCoords n/a otherwise); Add FFP Emul point test in TestPointNEWT/PointDemoES1.
All *Pointer methods used 'normalized:=false', but we cannot assume
that the fixed-function code uses normalized (0f..1f) values.
On the contrary, it usually uses the native format's value range.
Hence we have to pass normalized:=true for all fixed-point data types
and normalized:=false for floating-point data types.
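A sketch of the resulting rule (an illustrative helper, not the actual JOGL code): integer fixed-point types get normalized = true, floating-point types get false:

    import javax.media.opengl.GL;

    // Sketch: pick the 'normalized' flag by GL data type, as described above.
    static boolean useNormalized(final int glDataType) {
        switch (glDataType) {
            case GL.GL_BYTE:
            case GL.GL_UNSIGNED_BYTE:
            case GL.GL_SHORT:
            case GL.GL_UNSIGNED_SHORT:
                return true;   // fixed-point: map native range to 0f..1f / -1f..1f
            default:
                return false;  // GL_FLOAT etc.: pass values through unchanged
        }
    }

    // Usage (illustrative): color data stored as unsigned bytes
    // gl.glVertexAttribPointer(loc, 4, GL.GL_UNSIGNED_BYTE, useNormalized(GL.GL_UNSIGNED_BYTE), 0, 0);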
GL_POINT_SOFT and dist/fade attenuation (Adding basic POINT unit tests)
gl_PointSize (and all other uniform array elements) was not propagated due to wrong usage of the GLUniformData component parameter.
For efficiency, we now use vec4[2] and #defines in the shader to ease readability.
GL_POINT_SOFT uses gl_PointCoord to determine the inside/outside circle position
while adding a seam of 10% in/out. This almost matches 'other' implementations and gives a nice smooth circle.
!GL_POINT_SOFT produces a proper square (billboard).
The point vertex shader takes dist/fade attenuation into account.
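A hedged fragment-shader sketch of the soft-point idea (expressed as a Java string; not the shipped FFP shader, and 'color' is an assumed varying), using gl_PointCoord and a ~10% smooth rim:

    // Sketch: discard outside the unit circle, fade alpha over a ~10% seam.
    static final String POINT_FRAG_SNIPPET =
        "  vec2  pc = gl_PointCoord * 2.0 - 1.0;      // -1..1 across the point sprite\n" +
        "  float d  = length(pc);                     // distance from point center\n" +
        "  if( d > 1.0 ) { discard; }                 // outside the circle\n" +
        "  float a  = 1.0 - smoothstep(0.9, 1.0, d);  // ~10% in/out seam\n" +
        "  gl_FragColor = vec4(color.rgb, color.a * a);\n";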
allowing passing a GL action w/ custom argument and return value.
Adding simple POINTS shader not regarding POINTS parameters and not using a texture (commented out).
FIXME:
Even though it works using a texture and gl_PointCoord in the fragment shader,
I don't see the point here (lol) if gl_PointSize must be 1.0 in the vertex shader ..
otherwise nothing is seen on ES2.0.
On Desktop, POINTS are always shown as 1-pixel-sized points!
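For context, a hedged sketch (not the commit's shader): ES 2.0 rasterizes points with the size written to gl_PointSize in the vertex shader, so a minimal point vertex shader as a Java string might look like this (uniform/attribute names are assumptions, not the FFP emulation's actual names):

    // Minimal ES2-style point vertex shader (illustrative).
    static final String POINT_VERT =
        "uniform mat4  pmvMatrix;    // assumed combined projection-modelview uniform\n" +
        "attribute vec4 inVertex;    // assumed vertex attribute name\n" +
        "void main() {\n" +
        "  gl_Position  = pmvMatrix * inVertex;\n" +
        "  gl_PointSize = 16.0;      // without writing gl_PointSize, ES2 points may not appear\n" +
        "}\n";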
twice (duh!) almost halved performance :)
TODO: Create GL_POINT texture and render w/ glDraw*()
ES1 impl. detection
'glBegin' is not ES1, duh!
found in ES1 library
This is the case with the BCM-VC-IV blobs, tested on a Raspberry Pi
discarding pixels of culled faces.
according to its usage (update Mvi/Mvit only if lighting is being used)
glColor4f() more efficient, use a pre-allocated NIO buffer
Missing format specifier for the first argument would lead to this throwing IllegalFormatException.
Signed-off-by: Harvey Harrison <[email protected]>
properties, drawIndices QUAD w/ proper range and add uint; FixedFunctionHook: drawIndices QUAD w/ proper range and add uint
code: Remove precision for default precision types.
ShaderSelectionMode.AUTO (good for mobile); Lazy shader instantiation.
becomes public method to JoglVersion
923d9dd7f1d40db72d35ca76a761ca14babf147f
We are aware that Google's ANGLE (Windows EGL/ES2 impl. based on D3D)
crashes using eglInitialize(..) w/ EGL_DEFAULT_DISPLAY.
Commit 923d9dd7f1d40db72d35ca76a761ca14babf147f moved the EGL device initialization
into the EGLDrawableFactory ctor and hence bypassed the ANGLE workaround that disables it by default.
- Moving property static flags from GLProfile -> GLDrawableFactory
- Moving ANGLE workaround right into EGLDrawableFactory (where it belongs)
- Moving optional EGL/ES disable code to GLDrawableFactory (where it belongs)
Tested on Windows w/ Java-32bit and latest Chrome ANGLE DLLs
GL_QUAD_STRIP/GL_POLYGON/GL_QUADS mapping, glTexImage2D internalformat/format match.
detail w/ data sync within GLArrayHandle,
which also removed redundant code (VBO data sync and binding).
Refines commit 8582ece7dc7f65271b3184261697a542766d9864
and f49f8e22953ed2426fd4264ee407e2dc3fc07cfc
(fix, incomplete still), ShaderSelectionMode, Fix default values
Besides the above-mentioned additional features towards completeness of the FFP emulation,
the ShaderSelectionMode allows fixing a shader program configuration,
i.e. the AUTO switch (default) or choosing a static shader program to avoid heavy program switches
incl. uniform/attribute updates.
VBO: Always unbind VBO ASAP after data transfer (glBufferData())
and assignment (glVertexPointer(..), glVertexAttribPointer()).
It's a bug to leave it bound .. due to redundancy
and other calls which could have changed the VBO binding.
Removed syncData(..), now it's only issued at enable
and hence migrated into the enable method.
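A sketch of the pattern (illustrative, not the actual FixedFuncPipeline code): bind, transfer, set the pointer, then unbind right away:

    import java.nio.FloatBuffer;
    import javax.media.opengl.GL;
    import javax.media.opengl.GL2ES2;

    // Sketch: keep the VBO bound only for the calls that need it.
    static void uploadAndAssign(final GL2ES2 gl, final int vboName, final int attribLoc,
                                final FloatBuffer data) {
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vboName);
        gl.glBufferData(GL.GL_ARRAY_BUFFER, data.remaining() * 4L, data, GL.GL_STATIC_DRAW);
        gl.glVertexAttribPointer(attribLoc, 3, GL.GL_FLOAT, false, 0, 0L);
        gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0); // unbind ASAP, see rationale above
    }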
and add API docs. (API Change)
- Changed create*(..) factory methods (API Change)
- Drop passing GL instance, not needed
- allows creation of ImmModeSink as final field w/o GL context
- Use 'glBufferUsage' to determine whether to use VBO or not ( 0 == glBufferUsage )
- Use glBufferUsage in glBufferData(..) call (oops)
- Toggle vboUsage per object ( 0 == glBufferUsage ? nonVBO : VBO )
remove static VBO usage flag
- Fix render mode
- GL_POLYGON -> GL_TRIANGLE_FAN (not GL_LINES)
- GL_QUADS -> Looped GL_TRIANGLE_FAN (if !GL2) in draw(..) w/ and w/o indices
- Buffer usage
- documented
- allow creating sink w/ all components (vertices, color, normal and texCoords)
but render and grow only the used parts.
This allows proper usage of sink where it is not known
which types are being used.
- Added test case
- Manually tested w/ Jake2 ES1
Jake2 uses the FFP immediate mode rendering, where we utilize this sink
w/o rendering artifacts.
of commit 455fed40391afe10ce5ffb9146ca325af63b0a49
Add a drawable null check before use.
GLAutoDrawableBase
Simply lock drawable and issue drawable.swapBuffers(), no need to make context current.
Colorbuffer w/ diff size leads to GL_FRAMEBUFFER_UNSUPPORTED.
This occurred at least on:
- OS X 10.6.8
- GL_RENDERER NVIDIA GeForce 7300 GT OpenGL Engine
- GL_VERSION 2.1 NVIDIA-1.6.36
The remedy is to catch the exception @ GLFBODrawableImpl.reset(..) and switch over to
the fallback 'reset' method:
FBO reattachment -> complete FBO recreation
Of course, the FBO recreation is noticeably slower,
but at least it seems to work on the offending system.
Not tested on the offending system, but a GLException was manually provoked on FBObject
to trigger the fallback, which works here.
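The fallback idea in sketch form (not the actual GLFBODrawableImpl code; the two helpers are hypothetical):

    import javax.media.opengl.GLException;

    // Sketch: try the cheap FBO reattachment first; on GL_FRAMEBUFFER_UNSUPPORTED
    // (surfacing as a GLException) fall back to a full FBO recreation.
    void reset(final int newWidth, final int newHeight) {
        try {
            // fast path: detach/reattach the color buffer(s) at the new size
            reattachColorbuffer(newWidth, newHeight);            // hypothetical helper
        } catch (final GLException e) {
            // slow but robust: destroy and recreate the whole FBO
            recreateFramebufferCompletely(newWidth, newHeight);  // hypothetical helper
        }
    }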
setRealized(true/false) already, refine at initialize(true)
Allows validating the on-/offscreen state after setRealized(true).
Adding comment in GLFBODrawable.
underlying dummy surface/drawable
If stencil or MSAA has been selected, the underlying dummy drawable doesn't need to have this configuration,
i.e. doesn't need to waste the resources.
- Creation of the dummy surface/drawable uses a simple GLCapabilities
- The requested FBO GLCapabilities is passed down to the dummy drawable
- GLFBODrawableImpl ctor leaves caps untouched
- GLFBODrawableImpl.initialize(boolean realize)
- realize == true: using the requested FBO caps and setting it in the parent dummy drawable
- realize == false: restore the original caps of dummy drawable
helper.isAnimatorAnimating() for decision whether to display() now; Minor API comments.
reshape() ; GLCanvas.reshape() only if drawable valid ; GLCanvas.validateGLDrawable() also tests isDisplayable() ; Fix size validation ; resizeOffscreenDrawable(..) doesn't validate the 'safe' size 1x1
- GLCanvas.validateGLDrawable() @ display() and reshape()
To help users of GLCanvas end up with a realized GLCanvas/Drawable,
validateGLDrawable() is also called at reshape().
This shall ensure a valid drawable even after a first setVisible() that was not issued on the AWT-EDT.
- GLCanvas.reshape() only if drawable valid
Otherwise offscreen reshape attempts would happen even on an unrealized drawable,
which is not necessary.
- GLCanvas.validateGLDrawable() also tests isDisplayable()
To make sure the native peer is valid, also test isDisplayable().
- Fix size validation
Since we have experienced odd sizes like 0 x -41,
test each component, i.e. 0 < width && 0 < height.
This is done throughout all JOGL/NEWT components.
- resizeOffscreenDrawable(..) doesn't validate the 'safe' size 1x1
In case the method is called w/ an odd size, i.e. 0 x -41,
the safe size 1x1 is used. However, we cannot validate this size.
Dump a WARNING if an odd size is detected (see the sketch after this list).
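The size test itself is trivial; a sketch of the per-component check and the 1x1 'safe size' warning (illustrative, not the exact GLCanvas code):

    // Sketch: validate both components; odd sizes like 0 x -41 are rejected.
    static boolean isValidSize(final int width, final int height) {
        return 0 < width && 0 < height;
    }

    // In resizeOffscreenDrawable(..): fall back to the safe size, but warn.
    // if( !isValidSize(w, h) ) {
    //     System.err.println("WARNING: odd size " + w + " x " + h + ", using 1x1");
    //     w = 1; h = 1;  // safe size, cannot be validated
    // }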
NSOpenGLPFAPixelBuffer is ambiguous
compile FixedFuncHook due to removed 'isDirty()'
- getModifiedBits() -> getModifiedBits(boolean clear)
get-Mvi/Mvit-Matrix operation. (Minor API change)
Using a bitmask for the requested Mvi and Mvit matrices,
the same as the dirty-bits, to ease matching and the update operation.
The update of Mvi and Mvit is performed within update() only if both its dirty-bit and its request-bit are set.
The individual dirty bit is cleared only if its matrix update is performed.
An update is also issued at the get-Mvi/Mvit-Matrix operations
to ensure proper values w/o an update() call and w/o clearing the modified-bits.
update() returns true if the Mvi or Mvit matrix got updated
_or_ one of the modified bits is set. update() clears the modified-bits.
Adding an explicit getModifiedBits() to get and clear its state.
Adding unit test.
Lots of API docs ..
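A sketch of the bit handling described above (constant and field names are illustrative, not the PMVMatrix API):

    // Sketch: request-bits and dirty-bits share the same mask values.
    static final int DIRTY_MODELVIEW_INVERSE           = 1 << 0;
    static final int DIRTY_MODELVIEW_INVERSE_TRANSPOSE = 1 << 1;

    int requestMask;   // which derived matrices the user asked for
    int dirtyBits;     // which derived matrices are stale
    int modifiedBits;  // which base matrices changed since the last clear

    boolean update() {
        boolean updated = false;
        if (0 != (dirtyBits & requestMask & DIRTY_MODELVIEW_INVERSE)) {
            // ... recompute Mvi from Mv here ...
            dirtyBits &= ~DIRTY_MODELVIEW_INVERSE;            // clear only what was recomputed
            updated = true;
        }
        if (0 != (dirtyBits & requestMask & DIRTY_MODELVIEW_INVERSE_TRANSPOSE)) {
            // ... recompute Mvit from Mvi here ...
            dirtyBits &= ~DIRTY_MODELVIEW_INVERSE_TRANSPOSE;
            updated = true;
        }
        final boolean modified = 0 != modifiedBits;
        modifiedBits = 0;                                     // update() clears the modified-bits
        return updated || modified;
    }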
XInitThreads()]
NativeWindowFactory.getDefaultToolkitLock() is no longer a global singleton,
but an instance which has to track/lock a single resource.
Hence the decoration w/ it in GLDrawableFactory is useless, and applying lock/unlock
on a new instance is also a bug/regression.
92398025abdabb2fdef0d78edd41e730991a6f94 GlobalToolkitLock for create/destroy
Turns out it has no effect and the proprietary ATI driver still has XCB failures at this point.
- Enhance debug output a bit
- FBObject: Detect glError @ the syncFramebuffer MSAA blit, throw a GLException on glError to fail fast (see the sketch below)
- TODO: May add a Mesa NoFBOMSAA quirk to disable even trying it ..
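A sketch of the fail-fast check (illustrative helper; the real check lives in FBObject's syncFramebuffer path):

    import javax.media.opengl.GL;
    import javax.media.opengl.GLException;

    // Sketch: after issuing the MSAA downsample blit, fail fast on any GL error
    // instead of silently continuing with a broken framebuffer.
    static void checkBlit(final GL gl) {
        final int err = gl.glGetError();
        if (GL.GL_NO_ERROR != err) {
            throw new GLException("MSAA blit failed, glError 0x" + Integer.toHexString(err));
        }
    }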
Utilizing a GlobalToolkitLock in general to lock the display connection results in deadlock
situations where locked surfaces signal other [offscreen] surfaces to render.
We have to see whether we find a better solution; for now, sporadic XCB assertions still happen.
But it is preferable to point to the root cause than to jump through hoops that complicate locking
or even deadlock.
Locking:
- X11GLXGraphicsConfigurationFactory: add missing device locking in:
- getAvailableCapabilities
- chooseGraphicsConfigurationStatic
- Newt/X11Window: Discard display events after window close.
Relax the ATI XCB/threading bug workaround:
- ToolkitProperties: requiresGlobalToolkitLock() -> hasThreadingIssues()
- NativeWindowFactory: Don't use GlobalToolkitLock in case of 'threadingIssues'; the impact is too severe (see above)
- NativeWindowFactory: Add getGlobalToolkitLockIfRequired(): To be used for small code blocks (see the usage sketch after this list).
If having 'threadingIssues', a GlobalToolkitLock is returned, otherwise a NullToolkitLock.
- X11GLXContext: [create/destroy]ContextARBImpl: Use 'NativeWindowFactory.getGlobalToolkitLockIfRequired()' for extra locking
Misc cleanup:
- *DrawableFactory createMutableSurface: Also create a new device if the type is not suitable
- *DrawableFactory createDummySurfaceImpl: Pass chosenCaps and use it (preserves the orig. requested user caps)
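A hedged usage sketch of getGlobalToolkitLockIfRequired() around a small native block (the wrapper method and Runnable are illustrative, not JOGL code):

    import javax.media.nativewindow.NativeWindowFactory;
    import javax.media.nativewindow.ToolkitLock;

    // Sketch: guard a short native/X11 call sequence with the global lock only
    // when the toolkit reports threading issues; otherwise a NullToolkitLock
    // is returned and the lock/unlock calls are no-ops.
    static void withGlobalLockIfRequired(final Runnable nativeBlock) {
        final ToolkitLock tlock = NativeWindowFactory.getGlobalToolkitLockIfRequired();
        tlock.lock();
        try {
            nativeBlock.run();  // e.g. ARB context creation/destruction
        } finally {
            tlock.unlock();
        }
    }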
driver and w/o native X11 locking
The proprietary ATI X11 driver does not handle multi-threaded [GL] clients well,
i.e. triggers an XCB assertion 'from time to time'.
It almost seems like the driver either:
- aliases all display connections to its connection name, i.e. server; or
- utilizes a built-in display connection w/o locking, used for some reason
+++
- X11Lib: Add QueryExtension(dpy, name) allowing early driver determination w/o GL
- X11Util detects 'requiresGlobalToolkitLock' and 'markAllDisplaysUnclosable' via
X11 extensions. In case certain ATI extensions are available, both are set to true.
- X11GLXDrawableFactory: Dropped setting 'markAllDisplaysUnclosable', using X11Util's detection (see above).
- New GlobalToolkitLock to satisfy certain driver restrictions (ATI's XCB multithreading bug)
- NativeWindowFactory handles new property requiresGlobalToolkitLock,
in which case the new GlobalToolkitLock is being used instead of ResourceToolkitLock.
- JAWTUtil ToolkitLock locks GlobalToolkitLock 1st to match new 'requiresGlobalToolkitLock' property.
- Document static method requirement of X11Util, GDIUtil and OSXUtil via marker interface ToolkitProperties
- ToolkitLock: New method 'validateLocked()', allowing us to validate whether the device/toolkit
is properly locked and hence to detect implementation bugs. See unit test class: ValidateLockListener
commented out GLX_MESA_swap_control; native test of Mesa context-retarget bug, cannot reproduce yet.
reshape
Adding a boolean sendReshape argument, to be set to false if a subsequent display won't reshape.