| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
|
|
|
| |
All *Pointer methods used 'normalized:=false', but we cannot assume
the fixed-function code uses normalized (0f..1f) values.
On the contrary, it usually uses the native format's value range.
Hence we have to pass normalized:=true for all fixed-point data types
and normalized:=false for floating-point data types.
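A minimal sketch of that rule, assuming a free-standing helper; the GL constants are real JOGL identifiers, the helper class itself is illustrative, not JOGL API:

    import javax.media.opengl.GL;

    /** Illustrative helper (not JOGL API): choose the 'normalized' flag per data type. */
    public final class NormalizedFlag {
        /** Fixed-point client data keeps its native range and lets GL normalize it;
         *  floating-point data is passed through unchanged. */
        public static boolean useNormalized(final int glDataType) {
            switch( glDataType ) {
                case GL.GL_BYTE:
                case GL.GL_UNSIGNED_BYTE:
                case GL.GL_SHORT:
                case GL.GL_UNSIGNED_SHORT:
                case GL.GL_FIXED:
                    return true;   // fixed-point -> normalized:=true
                case GL.GL_FLOAT:
                    return false;  // floating-point -> normalized:=false
                default:
                    throw new IllegalArgumentException("unhandled type 0x" + Integer.toHexString(glDataType));
            }
        }
    }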
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
GL_POINT_SOFT and dist/fade attenuation (Adding basic POINT unit tests)
gl_PointSize (and all other uniform array elements) was not propagated due to wrong usage of the GLUniformData component param.
For efficiency, we now use vec4[2] plus #defines in the shader to ease readability.
GL_POINT_SOFT uses gl_PointCoord to determine the inside/outside circle position
while adding a seam of 10% in/out. This almost matches 'other' implementations and gives a nice smooth circle.
!GL_POINT_SOFT produces a proper square (billboard).
The point vertex shader takes dist/fade attenuation into account.
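A sketch of the smooth-circle test described above, held as a GLSL snippet in a Java constant; the varying name 'frontColor' and the exact seam bounds are assumptions, not the shipped FFP shader:

    /** Illustrative GL_POINT_SOFT fragment-shader excerpt (names/bounds assumed). */
    public static final String POINT_SOFT_FRAG_SNIPPET =
        "  // gl_PointCoord runs 0..1 across the sprite; re-center to -1..1\n" +
        "  vec2  pc = gl_PointCoord * 2.0 - 1.0;\n" +
        "  float r  = length(pc);  // 0 at center, 1 at the edge midpoints\n" +
        "  // ~10% seam in/out: fade alpha between r = 0.9 and r = 1.1\n" +
        "  float a  = 1.0 - smoothstep(0.9, 1.1, r);\n" +
        "  if( a <= 0.0 ) { discard; }  // outside the smooth circle\n" +
        "  gl_FragColor = vec4(frontColor.rgb, frontColor.a * a);\n";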
|
|
|
|
|
|
|
|
|
|
|
|
| |
allowing passing a GL action w/ custom argument and return value.
Adding a simple POINTS shader, not regarding POINTS parameters and not using a texture (commented out).
FIXME:
Even though it works using a texture and gl_PointCoord in the fragment shader,
I don't see the point here (lol) if gl_PointSize must be 1.0 in the vertex shader ..
otherwise nothing is seen on ES2.0.
On Desktop, POINTS are always shown as 1-pixel-sized points!
|
|
|
|
|
|
| |
twice (duh!) almost halved performance :)
TODO: Create GL_POINT texture and render w/ glDraw*()
|
|
|
|
|
|
| |
ES1 impl. detection
'glBegin' is not ES1, duh!
|
|
|
|
|
|
| |
found in ES1 library
This is the case in BCM-VC-IV blobs, tested on Raspberry Pi
|
|
|
|
| |
discarding pixels of culled faces.
|
|
|
|
| |
according to its usage (update Mvi/Mvit only if lighting is being used)
|
|
|
|
| |
glColor4f() more efficient, use pre-alloc NIO buffer
|
| |
|
|\ |
|
| |
| |
| |
| |
| |
| | |
Missing format specifier for the first argument would lead to this throwing IllegalFormatException.
Signed-off-by: Harvey Harrison <[email protected]>
|
|/
|
|
| |
properties, drawIndices QUAD w/ proper range and add uint; FixedFunctionHook: drawIndices QUAD w/ proper range and add uint
|
| |
|
|
|
|
| |
code: Remove precision for default precision types.
|
| |
|
|
|
|
| |
ShaderSelectionMode.AUTO (good for mobile); Lazy shader instantiation.
|
|
|
|
| |
becomes a public method of JoglVersion
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
923d9dd7f1d40db72d35ca76a761ca14babf147f
We are aware that Google's ANGLE (Windows EGL/ES2 impl. based on D3D)
crashes using eglInitialize(..) w/ EGL_DEFAULT_DISPLAY.
Commit 923d9dd7f1d40db72d35ca76a761ca14babf147f moved the EGL device initialization
into the EGLDrawableFactory ctor, and hence the ANGLE workaround to disable it by default slipped out.
- Moving property static flags from GLProfile -> GLDrawableFactory
- Moving ANGLE workaround right into EGLDrawableFactory (where it belongs)
- Moving optional EGL/ES disable code to GLDrawableFactory (where it belongs)
Tested on Windows w/ Java-32bit and latest Chrome ANGLE DLLs
|
|
|
|
| |
GL_QUAD_STRIP/GL_POLYGON/GL_QUADS mapping, glTexImage2D internalformat/format match.
|
|
|
|
|
|
|
|
|
| |
detail w/ data sync within GLArrayHandle,
which also removed redundant code (VBO data sync and binding).
Refines commit 8582ece7dc7f65271b3184261697a542766d9864
and f49f8e22953ed2426fd4264ee407e2dc3fc07cfc
|
|
|
|
|
|
|
|
|
| |
(fix, incomplete still), ShaderSelectionMode, Fix default values
Besides the above-mentioned additional features towards completeness of the FFP emulation,
the ShaderSelectionMode allows pinning a shader program configuration,
i.e. the AUTO switch (default) or choosing a static shader program to avoid heavy program switches
incl. uniform/attribute updates.
|
|
|
|
|
|
|
|
|
|
| |
VBO: Always unbind VBO ASAP after data transfer (glBufferData())
and assignment (glVertexPointer(..), glVertexAttribPointer()).
It's a bug to leave it bound .. due to redundancy
and other calls which could have changed the VBO binding.
Removed syncData(..); now it's only issued at enable
and hence migrated into the enable method.
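A minimal sketch of that bind/transfer/assign/unbind pattern, assuming a GL2ES2 context and illustrative buffer/attribute handles:

    import java.nio.FloatBuffer;
    import javax.media.opengl.GL;
    import javax.media.opengl.GL2ES2;

    /** Illustrative pattern: keep the VBO bound only for the calls that need it. */
    class VboUnbindSketch {
        void uploadAndAssign(final GL2ES2 gl, final int vboName,
                             final int attribLocation, final FloatBuffer data) {
            gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vboName);
            // data transfer
            gl.glBufferData(GL.GL_ARRAY_BUFFER, data.remaining() * 4 /* sizeof(float) */,
                            data, GL.GL_STATIC_DRAW);
            // assignment: the pointer call captures the currently bound VBO
            gl.glVertexAttribPointer(attribLocation, 3, GL.GL_FLOAT, false, 0, 0);
            // unbind ASAP: later calls could otherwise change or rely on the binding
            gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
        }
    }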
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
and add API docs. (API Change)
- Changed create*(..) factory methods (API Change)
- Drop passing GL instance, not needed
- allows creation of ImmModeSink as final field w/o GL context
- Use 'glBufferUsage' to determine whether to use VBO or not ( 0 == glBufferUsage )
- Use glBufferUsage in glBufferData(..) call (oops)
- Toggle vboUsage per object ( 0 == glBufferUsage ? nonVBO : VBO ),
removing the static VBO usage flag
- Fix render mode
- GL_POLYGON -> GL_TRIANGLE_FAN (not GL_LINES)
- GL_QUADS -> Looped GL_TRIANGLE_FAN (if !GL2) in draw(..) w/ and w/o indices
- Buffer usage
- documented
- allow creating the sink w/ all components (vertices, color, normal and texCoords)
but render and grow only the used parts.
This allows proper usage of sink where it is not known
which types are being used.
- Added test case
- Manually tested w/ Jake2 ES1
Jake2 uses the FFP immediate mode rendering, where we utilize this sink
w/o rendering artifacts.
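A usage sketch derived from the notes above; the exact createFixed(..) parameter list and the ImmModeSink.GL_QUADS constant are assumptions based on this description, not verified API:

    import javax.media.opengl.GL;
    import javax.media.opengl.GL2ES1;
    import com.jogamp.opengl.util.ImmModeSink;

    /** Usage sketch; signature assumed from the change notes (no GL instance needed). */
    class ImmModeSinkSketch {
        // Creatable as a final field w/o a GL context, per the change above.
        private final ImmModeSink sink = ImmModeSink.createFixed(
                4,                 // initialElementCount
                3, GL.GL_FLOAT,    // vertex components
                4, GL.GL_FLOAT,    // color components
                0, GL.GL_FLOAT,    // normals unused (0 components)
                0, GL.GL_FLOAT,    // texCoords unused (0 components)
                GL.GL_STATIC_DRAW  // glBufferUsage; pass 0 to disable VBO usage
        );

        void render(final GL2ES1 gl) {
            sink.glBegin(ImmModeSink.GL_QUADS); // drawn as looped GL_TRIANGLE_FAN if !GL2
            sink.glColor4f(1f, 0f, 0f, 1f); sink.glVertex3f(-1f, -1f, 0f);
            sink.glColor4f(0f, 1f, 0f, 1f); sink.glVertex3f( 1f, -1f, 0f);
            sink.glColor4f(0f, 0f, 1f, 1f); sink.glVertex3f( 1f,  1f, 0f);
            sink.glColor4f(1f, 1f, 1f, 1f); sink.glVertex3f(-1f,  1f, 0f);
            sink.glEnd(gl); // renders and resets the sink
        }
    }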
|
|
|
|
|
|
| |
of commit 455fed40391afe10ce5ffb9146ca325af63b0a49
Add drawable null check before using.
|
|
|
|
|
|
| |
GLAutoDrawableBase
Simply lock drawable and issue drawable.swapBuffers(), no need to make context current.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Colorbuffer w/ diff size leads to GL_FRAMEBUFFER_UNSUPPORTED.
This occurred at least on:
- OS X 10.6.8
- GL_RENDERER NVIDIA GeForce 7300 GT OpenGL Engine
- GL_VERSION 2.1 NVIDIA-1.6.36
Remedy is to catch the exception @ GLFBODrawableImpl.reset(..) and switch over to
fallback 'reset' method:
FBO reattachment -> FBO complete recreation
Of course, the FBO recreation is noticeably slower,
but at least it seems to work on the offending system.
Not tested on the offending system, but manually provoked a GLException on FBObject
to trigger the fallback, which works here.
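A sketch of that fallback pattern; the method names are illustrative stand-ins, not the actual GLFBODrawableImpl internals:

    import javax.media.opengl.GLException;

    /** Illustrative fallback: fast FBO reattachment first, full recreation on failure. */
    abstract class FboResetSketch {
        abstract void reattachColorbuffer(int width, int height); // fast path, may throw GLException
        abstract void destroyFBO();                               // drop all FBO resources
        abstract void recreateFBO(int width, int height);         // slow but robust path

        void reset(final int width, final int height) {
            try {
                reattachColorbuffer(width, height);
            } catch (final GLException gle) {
                // e.g. GL_FRAMEBUFFER_UNSUPPORTED on NVIDIA GeForce 7300 GT, OS X 10.6.8
                destroyFBO();
                recreateFBO(width, height);
            }
        }
    }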
|
| |
|
|
|
|
|
|
|
| |
setRealized(true/false) already, refine at initialize(true)
Allowing to validate the on-/offscreen state after setRealized(true).
Adding comment in GLFBODrawable.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
underlying dummy surface/drawable
If stencil or MSAA has been selected, the underlying dummy drawable doesn't need to have this configuration,
i.e. doesn't need to waste the resources.
- Creation of the dummy surface/drawable uses a simple GLCapabilities
- Requested FBO GLCapabilities is being passed down to the dummy drawable
- GLFBODrawableImpl ctor leaves caps untouched
- GLFBODrawableImpl.initialize(boolean realize)
- realize == true: using the requested FBO caps and setting it in the parent dummy drawable
- realize == false: restore the original caps of dummy drawable
|
|
|
|
| |
helper.isAnimatorAnimating() for decision whether to display() now; Minor API comments.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
reshape() ; GLCanvas.reshape() only if drawable valid ; GLCanvas.validateGLDrawable() also test isDisplayable() ; Fix size validation ; resizeOffscreenDrawable(..) don't validate 'safe' size 1x1
- GLCanvas.validateGLDrawable() @ display() and reshape()
To help GLCanvas users have a realized GLCanvas/Drawable,
validateGLDrawable() is also called at reshape().
This shall ensure a valid drawable even after a first setVisible() issued off the AWT-EDT.
- GLCanvas.reshape() only if drawable valid
Otherwise offscreen reshape attempts would happen even on an unrealized drawable,
which is not necessary.
- GLCanvas.validateGLDrawable() also test isDisplayable()
To make sure the native peer is valid, also test isDisplayable()
- Fix size validation
Since we have experienced odd sizes like 0 x -41,
test each component, i.e. 0 < width && 0 < height.
This is done throughout all JOGL/NEWT components.
- resizeOffscreenDrawable(..) don't validate 'safe' size 1x1
In case the method is called w/ an odd size, i.e. 0 x -41,
the safe size 1x1 is used. However, we cannot validate this size.
Dump a WARNING if an odd size is detected.
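A minimal sketch of the component-wise size check, assuming a free-standing helper:

    /** Illustrative: reject odd sizes such as 0 x -41 by testing each component. */
    static boolean isValidSize(final int width, final int height) {
        return 0 < width && 0 < height;
    }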
|
|
|
|
| |
NSOpenGLPFAPixelBuffer is ambiguous
|
|
|
|
|
|
| |
compile FixedFuncHook due to removed 'isDirty()'
- getModifiedBits() -> getModifiedBits(boolean clear)
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
get-Mvi/Mvit-Matrix operation. (Minor API change)
Using a bitmask for the requested Mvi and Mvit matrices,
same as the dirty bits, to ease matching and update operations.
Update of Mvi and Mvit will be performed only if the respective dirty bit and request bit are set within update().
The individual dirty bit is cleared only if its matrix update is performed.
An update is also issued at the get-Mvi/Mvit-Matrix operations
to ensure proper values w/o an update() call and w/o clearing the modified bits.
update() returns true if the Mvi or Mvit matrix got updated
_or_ one of the modified bits is set. update() clears the modified bits.
Adding explicit getModifiedBits() to get and clear its state.
Adding unit test.
Lots of API docs ..
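A sketch of the dirty/request-bit logic described above; the constants and fields are illustrative, not the actual PMVMatrix implementation:

    /** Illustrative dirty/request bitmask logic; names are not the actual PMVMatrix fields. */
    class MatrixUpdateSketch {
        static final int DIRTY_MVI  = 1 << 0;
        static final int DIRTY_MVIT = 1 << 1;

        int dirtyBits;     // set whenever the Mv matrix is modified
        int requestBits;   // set for matrices the user actually asked for
        int modifiedBits;  // accumulated modifications, cleared by update()

        /** Returns true if Mvi/Mvit got updated or one of the modified bits was set. */
        boolean update() {
            final boolean mod = 0 != modifiedBits;
            modifiedBits = 0;
            boolean updated = false;
            // update only matrices that are both dirty and requested
            if( 0 != ( dirtyBits & requestBits & DIRTY_MVI ) ) {
                recomputeMvi();
                dirtyBits &= ~DIRTY_MVI;   // clear only the bit whose update was performed
                updated = true;
            }
            if( 0 != ( dirtyBits & requestBits & DIRTY_MVIT ) ) {
                recomputeMvit();
                dirtyBits &= ~DIRTY_MVIT;
                updated = true;
            }
            return updated || mod;
        }
        void recomputeMvi()  { /* invert Mv */ }
        void recomputeMvit() { /* invert and transpose Mv */ }
    }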
|
|
|
|
|
|
|
|
|
| |
XInitThreads()]
NativeWindowFactory.getDefaultToolkitLock() is no longer a global singleton,
but an instance which has to track/lock a single resource.
Hence the decoration w/ it in GLDrawableFactory is useless, and applying lock/unlock
on a new instance is also a bug/regression.
|
|
|
|
|
|
| |
92398025abdabb2fdef0d78edd41e730991a6f94 GlobalToolkitLock for create/destroy
Turns out it has no effect and the ATI proprietary driver still has XCB failures at this point.
|
| |
|
|
|
|
|
|
|
| |
- Enhance debug output a bit
- FBObject: Detect glError @ syncFramebuffer MSAA blit, throw GLException if glError to fail-fast
- TODO: May add Mesa NoFBOMSAA Quirk to disable even trying it ..
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Utilizing a GlobalToolkitLock in general to lock the display connection results in deadlock
situations where locked surfaces signal other [offscreen] surfaces to render.
We have to see whether we find a better solution; for now, sporadic XCB assertions still happen.
But it is preferable to point to the root cause than to jump through hoops that complicate locking
or even deadlock.
Locking:
- X11GLXGraphicsConfigurationFactory add missing device locking in:
- getAvailableCapabilities
- chooseGraphicsConfigurationStatic
- Newt/X11Window: Discard display events after window close.
Relax ATI XCB/threading bug workaround:
- ToolkitProperties: requiresGlobalToolkitLock() -> hasThreadingIssues()
- NativeWindowFactory: Don't use GlobalToolkitLock in case of 'threadingIssues'; the impact is too severe (see above)
- NativeWindowFactory: Add getGlobalToolkitLockIfRequired(): To be used for small code blocks (see the sketch after this list).
If having 'threadingIssues', a GlobalToolkitLock is returned, otherwise a NullToolkitLock.
- X11GLXContext: [create/destroy]ContextARBImpl: Use 'NativeWindowFactory.getGlobalToolkitLockIfRequired()' for extra locking
Misc Cleanup:
- *DrawableFactory createMutableSurface: Also create a new device if the type is not suitable
- *DrawableFactory createDummySurfaceImpl: Pass chosenCaps and use it (preserves the orig. requested user caps)
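A usage sketch for such a small code block; getGlobalToolkitLockIfRequired() and the ToolkitLock lock()/unlock() methods are named above, the surrounding method is illustrative:

    import javax.media.nativewindow.NativeWindowFactory;
    import javax.media.nativewindow.ToolkitLock;

    /** Illustrative small critical section guarded by the optional global lock. */
    class ExtraLockingSketch {
        void smallCodeBlock() {
            // A NullToolkitLock is returned w/o 'threadingIssues',
            // making lock()/unlock() no-ops.
            final ToolkitLock lock = NativeWindowFactory.getGlobalToolkitLockIfRequired();
            lock.lock();
            try {
                // e.g. [create/destroy]ContextARBImpl work touching the X11 connection
            } finally {
                lock.unlock();
            }
        }
    }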
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
driver and w/o native X11 locking
The proprietary ATI X11 driver does not handle multi-threaded [GL] clients well,
i.e. triggers an XCB assertion 'from time to time'.
It almost seems like the driver either:
- aliases all display connections to its connection name, i.e. server; or
- utilizes a built-in display connection w/o locking, used for some reason
+++
- X11Lib: Add QueryExtension(dpy, name) allowing early driver determination w/o GL
- X11Util detects 'requiresGlobalToolkitLock' and 'markAllDisplaysUnclosable' via
X11 extensions. In case certain ATI extensions are available, both are set to true.
- X11GLXDrawableFactory: Dropped setting 'markAllDisplaysUnclosable', using X11Util's detection (see above).
- New GlobalToolkitLock to satisfy certain driver restrictions (ATI's XCB multithreading bug)
- NativeWindowFactory handles new property requiresGlobalToolkitLock,
in which case the new GlobalToolkitLock is being used instead of ResourceToolkitLock.
- JAWTUtil ToolkitLock locks GlobalToolkitLock 1st to match new 'requiresGlobalToolkitLock' property.
- Document static method requirement of X11Util, GDIUtil and OSXUtil via marker interface ToolkitProperties
- ToolkitLock: New method 'validateLocked()', allowing us to validate whether the device/toolkit
is properly locked and hence to detect implementation bugs. See unit test class: ValidateLockListener
|
|
|
|
| |
commented out GLX_MESA_swap_control; native test of Mesa context-retarget bug, cannot reproduce yet.
|
|
|
|
|
|
| |
reshape
Adding a boolean sendReshape argument, to be set to false if the subsequent display() won't reshape.
|
|
|
|
|
|
|
|
|
|
|
|
| |
setSwapInterval() after changing the context's drawable w/ 'Mesa 8.0.4' dri2SetSwapInterval/DRI2 (soft & intel)
Analyzing 'TestGLContextDrawableSwitchNEWT' crash at setSwapInterval -> dri2SetSwapInterval
after retargeting the context (new drawable association).
Turns out Mesa's dri2SetSwapInterval may have a bug.
+++
GLContext TRACE_SWITCH: Add drawable handle to debug/trace output.
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
syncFramebuffer(..) -> syncSamplingSink(..)
- reset(..) adds a new argument, boolean resetSamplingSink, allowing a reset to be triggered
on the sampling sink as well. Use cases are documented.
- made public: resetSamplingSink()
- Rename syncFramebuffer(..) -> syncSamplingSink(..) to clarify semantics
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
locking, resulting in a native-lock-free impl.
The X11 implementation details of NativeWindow and NEWT used the X11 implicit locking facility
XLockDisplay/XUnlockDisplay, enabled via XInitThreads().
The latter usage is complicated in an uncertain environment where the initialization point of JOGL
is unknown, but XInitThreads() must be called once, before any other X11 call.
The solution is simple and thorough: replace native X11 locking w/ 'application-level' locking.
Following this pattern actually cleans up a pretty messy part of X11 NativeWindow and NEWT,
since the generalization of platform-independent locking simplifies code.
Simply using our RecursiveLock also speeds up locking, since it doesn't require JNI calls down to X11 anymore.
It allows us to get rid of X11ToolkitLock and X11JAWTToolkitLock.
Using the RecursiveLock also allows us to remove the shortcut of explicitly creating
a NullToolkitLock'ed device for 'private' display connections.
All devices use proper locking as claimed in their toolkit util 'requiresToolkitLock()' in X11Util, OSXUtil, ..
Furthermore, a bug in the X11ErrorHandler usage has been fixed, i.e. we need to keep our handler alive at all times
due to async X11 messaging behavior. This allows removing the redundant code in X11/NEWT.
The AbstractGraphicsDevice lifecycle has been fixed as well, i.e. it is invoked when closing NEWT's Display
for all driver implementations.
On the NEWT side, the Display's AbstractGraphicsDevice semantics have been clarified,
i.e. its usage for EDT and lifecycle operations.
Hence the X11 Display's 2nd device for rendering operations has been moved to X11 Window,
where it belongs, and the X11 Display's default device is used for EDT/lifecycle ops, as it should be.
This allows running X11/NEWT properly with the default usage, where the Display instance
and hence the EDT thread is shared among many Screens and Windows.
Rendering using a NEWT Window is decoupled from its shared Display lock
via its own native X11 display.
Lock-free AbstractGraphicsDevice impls. (Windows, OSX, ..) don't require any attention in this regard
since they use NullToolkitLock.
Tests:
======
This implementation has been tested manually with Mesa3d (soft, Intel), ATI and Nvidia
on X11, Windows and OSX w/o any regressions found in any unit test.
Issues on ATI:
==============
Only on ATI w/o a composite renderer the unit tests expose a driver or WM bug where XCB
claims a lack of locking. Setting env. var 'LIBXCB_ALLOW_SLOPPY_LOCK=true' is one workaround
if users refuse to enable compositing. We may investigate this issue in more detail later on.
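A sketch of the application-level locking pattern described above; RecursiveLock and LockFactory are real GlueGen classes, the surrounding device wrapper is illustrative:

    import com.jogamp.common.util.locks.LockFactory;
    import com.jogamp.common.util.locks.RecursiveLock;

    /** Illustrative device sketch: application-level locking replacing XLockDisplay. */
    class X11DeviceSketch {
        private final RecursiveLock lock = LockFactory.createRecursiveLock();
        private final long displayHandle; // native Display* obtained via XOpenDisplay

        X11DeviceSketch(final long displayHandle) { this.displayHandle = displayHandle; }

        void lock()   { lock.lock(); }    // pure Java, no JNI call down to X11
        void unlock() { lock.unlock(); }

        long useDisplay() {
            lock.lock();                  // recursive: safe for nested toolkit calls
            try {
                return displayHandle;     // all X11 calls happen while locked
            } finally {
                lock.unlock();
            }
        }
    }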
|
|
|
|
| |
not offscreen-layer.
|
|
|
|
| |
it's transient, store it.
|