dedicated read-drawable is being used (double buffering)
mechanism; Refined API doc getDefaultReadBuffer(); Add GLDrawableUtil.swapBuffersBeforeRead(..)
Commit 82f679b064784213591b460fc5eaa1f5f196fbd1 which introduces the default swap-buffers
mechanism is erroneous:
The OffscreenBackend requires the following operation order:
Order-1:
[1] - GL display
[2] - GL swapBuffers (always due to single-buffer non-MSAA or MSAA offscreen drawable)
[3] - readPixels
+++
Commit 82f679b064784213591b460fc5eaa1f5f196fbd1 however introduced:
Order-2:
[a] - GL display
[b] - readPixels
[c] - GL swapBuffers (always due to single-buffer non-MSAA or MSAA offscreen drawable)
since [a] and [b] happened in the Updater's display method, and [c] followed afterwards,
triggered by GLDrawableHelper.
+++
The proof, commit d46d9ad8f998a7128d9f023294d5f489673d6d8a, is faulty,
since it always included the 'snapshot' GLEventListener,
which turned off auto-swap and swapped before read-pixels.
TL;DR: it enforced the proper Order-1.
+++
This fix allows the Backend to intercept and disable GLDrawableHelper's setAutoSwapBufferMode(..)
and to perform the auto-swap itself in the proper Order-1.
The unit test has been refined to optionally disable the snapshot listener
in order to validate the auto-swap mode.
+++
Refined GLBase and GLContext's API doc for 'getDefaultReadBuffer()'
+++
Add GLDrawableUtil.swapBuffersBeforeRead(..)
and reuse it for TileRendererBase (original impl.).
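As an illustration of Order-1, a minimal Java sketch against a realized offscreen GLAutoDrawable; the helper name, the 'readUtil' instance and the explicit makeCurrent/release handling are assumptions, not code from this commit, and auto-swap is assumed to be disabled:

```java
import javax.media.opengl.GLAutoDrawable;   // com.jogamp.opengl.* since JOGL 2.3
import javax.media.opengl.GLContext;
import javax.media.opengl.GLException;
import com.jogamp.opengl.util.GLReadBufferUtil;

final class Order1Sketch {
    static void displaySwapRead(final GLAutoDrawable drawable, final GLReadBufferUtil readUtil) {
        drawable.display();                                    // [1] GL display
        final GLContext ctx = drawable.getContext();
        if (GLContext.CONTEXT_NOT_CURRENT == ctx.makeCurrent()) {
            throw new GLException("could not make context current");
        }
        try {
            drawable.swapBuffers();                            // [2] GL swapBuffers: defines the read buffer
            readUtil.readPixels(drawable.getGL(), false);      // [3] readPixels from the now-defined buffer
        } finally {
            ctx.release();
        }
    }
}
```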
test for visual validation of 'no frame lag'
To validate whether a 'display' command w/o an animator results in the desired frame,
we introduce a 'userCounter' in TextRendererGLEL.
The latter gets increased and may be visually validated via a key-press -> display.
Results: In all modes, MSAA or !MSAA, or flip, the result is valid.
Tested on Windows and Linux.
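A hedged sketch of the key-press -> display validation described above; TextRendererGLEL and its userCounter are test-internal, so the counter below merely stands in for them:

```java
import com.jogamp.newt.event.KeyAdapter;
import com.jogamp.newt.event.KeyEvent;
import com.jogamp.newt.opengl.GLWindow;

final class ManualFrameTrigger {
    static volatile int userCounter = 0;    // stand-in for TextRendererGLEL's userCounter

    static void install(final GLWindow glWindow) {
        glWindow.addKeyListener(new KeyAdapter() {
            @Override
            public void keyPressed(final KeyEvent e) {
                userCounter++;        // the value is rendered into the next frame for visual validation
                glWindow.display();   // one explicit frame, no animator involved
            }
        });
    }
}
```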
Issue:
[NULL @ 0x35bde60] insufficient thread locking around avcodec_open/close()
Decorate said libav functions with mutex lock/release.
The implementation is abstracted to use either a pthread mutex or a JNI monitor,
using the latter to reduce dependencies (mingw64 Windows).
FFMPEGNatives is now an abstract class containing the
'static final Object mutex_avcodec_openclose'.
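A conceptual Java-side sketch of the locking described above; the actual fix decorates the native avcodec_open2()/avcodec_close() calls (pthread mutex or JNI monitor), so the native entry points below are hypothetical placeholders and only the mutex object's name follows the commit text:

```java
abstract class FFMPEGNativesSketch {
    static final Object mutex_avcodec_openclose = new Object();

    final int openCodec(final long codecCtx, final long codec) {
        synchronized (mutex_avcodec_openclose) {
            return avcodecOpen0(codecCtx, codec);   // hypothetical JNI binding
        }
    }

    final int closeCodec(final long codecCtx) {
        synchronized (mutex_avcodec_openclose) {
            return avcodecClose0(codecCtx);         // hypothetical JNI binding
        }
    }

    protected abstract int avcodecOpen0(long codecCtx, long codec);
    protected abstract int avcodecClose0(long codecCtx);
}
```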
Add reshapeNotify(..) for NOP PMV reshape notification
packages
- Using updated openal-soft (commit 7297c3214a4c648aaee81a9877da15b88f798197)
- Analyzed openal-soft threading issues:
  - a global lock would have removed the issue
  - it turns out that using ALC_EXT_thread_local_context's alcSetThreadContext(..)
    instead of alcMakeContextCurrent(..) solves the issue (see the sketch after this list)
- Cleaned up al*GetError() queries and handling
- Simplified flush/dequeue of buffers
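A hedged JOAL-side sketch of the switch to a thread-local current context; it assumes the ALC_EXT_thread_local_context entry point is exposed via ALFactory/ALExt as shown and that 'context' is a valid ALCcontext created elsewhere:

```java
import com.jogamp.openal.ALCcontext;
import com.jogamp.openal.ALExt;
import com.jogamp.openal.ALFactory;

final class ThreadLocalALContext {
    static void makeCurrentOnThisThread(final ALCcontext context) {
        final ALExt alExt = ALFactory.getALExt();
        // thread-local current context instead of the process-global alcMakeContextCurrent(..)
        if (!alExt.alcSetThreadContext(context)) {
            throw new RuntimeException("alcSetThreadContext(..) failed");
        }
    }
}
```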
auto-swap mechanism
Refines commit 908ebd99d1eb57ce773a1fdd67c76886da86b9e6
Note that the test case decides whether to auto-swap (after read-pixels)
or not to auto-swap (manual swap before read-pixels).
See UITestCase.swapBuffersBeforeRead(GLCapabilitiesImmutable chosenCaps):
Determines whether the chosen GLCapabilitiesImmutable requires a swap-buffers before reading pixels.
Usually one uses the default read-buffer, i.e. GL.GL_FRONT for single-buffer
and GL.GL_BACK for double-buffer GLDrawables,
and GL.GL_COLOR_ATTACHMENT0 for offscreen framebuffer objects.
Here swap-buffers shall happen after reading pixels, the default.
However, multisampling offscreen GLFBODrawables utilize swap-buffers to downsample
the multisamples into the readable sampling sink.
In this case, we require a swap-buffers before reading pixels.
Returns: chosenCaps.isFBO() && chosenCaps.getSampleBuffers()
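A minimal sketch of that helper; the decision is the Returns expression quoted above, while the class name and javadoc wording are paraphrased:

```java
import javax.media.opengl.GLCapabilitiesImmutable;   // com.jogamp.opengl.* since JOGL 2.3

final class SwapBeforeRead {
    /**
     * True only for multisampled offscreen FBO drawables, which resolve their samples
     * into the readable sink via swapBuffers() and hence must swap before read-pixels.
     */
    static boolean swapBuffersBeforeRead(final GLCapabilitiesImmutable chosenCaps) {
        return chosenCaps.isFBO() && chosenCaps.getSampleBuffers();
    }
}
```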
+++
- GLJPanel:
  - Remove the SurfaceUpdatedListener mechanism in favor of
    the default auto-swap-buffer via GLDrawableHelper.
    This removes complexity.
  - postGL does not need to perform an explicit swapBuffers operation,
    but relies on GLDrawableHelper and the default mechanism.
    This is also compatible w/ the J2D backend.
  - Use GLDrawableHelper for setAutoSwapBufferMode(..) and getAutoSwapBufferMode()
+++
UnitTests:
- UITestCase:
  - Add 'boolean swapBuffersBeforeRead(GLCapabilitiesImmutable chosenCaps)'
    to determine whether swapBuffers() must occur before read-pixels. See above.
- GLReadBuffer00Base*:
  - remove explicit addSnapshotGLEL/removeSnapshotGLEL
  - add TextRendererGLEL to display frame-count and -dimension
- SnapshotGLEL*:
  - simply toggle auto-swap in their init(..) and dispose(..) methods!
  - clear the back-buffer if 'swapBuffersBeforeRead'
    to test whether the right buffer is being used for read-pixels.
to default.
modes. GLStateTracker: Use proper GL names for enums
GLEventListener using [AWT]GLReadBufferUtil)
When utilizing [AWT]GLReadBufferUtil it is usually desired to read from the front-buffer
instead of the back-buffer. The latter may not be defined, e.g. when using MSAA.
A GLEventListener utilizing [AWT]GLReadBufferUtil
must perform drawable.swapBuffers() to be able to read from the front-buffer.
Usually GLAutoDrawable.setAutoSwapBufferMode(false) should be called here
to avoid a double swap - however, GLJPanel does not support toggling auto-swap
since it requires control of the swap for its own read-pixels.
Remedy for GLJPanel:
- GLJPanel issues helper.setAutoSwapBufferMode(false) - immutable
- Enable GLJPanel.swapBuffers() if initialized.
  This was previously disabled.
- GLJPanel's OffscreenBackend listens to surfaceUpdated,
  to be notified whether postGL needs to swap the buffer
  or drawable.swapBuffers() was already called between preGL and postGL.
See unit tests adding/removing a snapshot GLEventListener
performing swapBuffers() and setting auto-swap accordingly.
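A hedged sketch of such a snapshot GLEventListener, modeled on the description above; the class name is illustrative and the actual rendering is assumed to happen in the preceding listeners:

```java
import javax.media.opengl.GLAutoDrawable;   // com.jogamp.opengl.* since JOGL 2.3
import javax.media.opengl.GLEventListener;
import com.jogamp.opengl.util.GLReadBufferUtil;

final class SnapshotListenerSketch implements GLEventListener {
    private final GLReadBufferUtil readUtil = new GLReadBufferUtil(false, false);

    public void init(final GLAutoDrawable d)    { d.setAutoSwapBufferMode(false); } // we swap manually
    public void dispose(final GLAutoDrawable d) { d.setAutoSwapBufferMode(true);  } // restore default
    public void reshape(final GLAutoDrawable d, final int x, final int y, final int w, final int h) { }

    public void display(final GLAutoDrawable d) {
        d.swapBuffers();                         // make the just-rendered frame the read buffer
        readUtil.readPixels(d.getGL(), false);   // read from a defined buffer (front / FBO sink)
    }
}
```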
SurfaceUpdatedListener: Mark methods final, use volatile 'isEmpty' to bail out early @ surfaceUpdated.
long indirect_offset, int drawcount, int stride)
getMustFlipVertically() to PNGPixelRect
images]; Fix GLReadBufferUtil GL_PACK_ROW_LENGTH
AWTGLPixelBuffer is being reused when used via AWTGLPixelBufferProvider
even when resized.
AWTGLPixelBufferProvider uses GLPixelBufferProvider's requiresNewBuffer(..),
which returns true if
- allowRowStride==true and pixel-buffer size < required-size, or
- allowRowStride==false and pixel-buffer size < required-size _or_ width doesn't match;
otherwise it returns false, i.e. the AWTGLPixelBuffer is reused.
Hence the used BufferedImage might need to be aligned,
i.e. using AWTGLPixelBuffer's getAlignedImage(..).
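A paraphrase of that reuse rule as a standalone predicate; the parameter names are assumptions, and true means a new buffer must be allocated:

```java
final class PixelBufferReuseRule {
    static boolean requiresNewBuffer(final boolean allowRowStride,
                                     final int bufferSize, final int requiredSize,
                                     final int bufferWidth, final int requiredWidth) {
        if (allowRowStride) {
            return bufferSize < requiredSize;                              // capacity is all that matters
        }
        return bufferSize < requiredSize || bufferWidth != requiredWidth;  // capacity and width must fit
    }
}
```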
+++
GLReadBufferUtil shall use the current texture-data width for GL_PACK_ROW_LENGTH,
not the static GLPixelBuffer's width, which may not reflect the image dimensions (after a resize).
+++
PNGPixelRect and PixelFormatUtil)
A TextureData backed by an IntBuffer can result from AWT read-pixels,
but is not seamlessly supported via PNGPixelRect since the latter
uses a hardcoded ByteBuffer.
Add a static PNGPixelRect.write(..) supporting IntBuffer
to support this case for now.
PNGPixelRect instances do not support arbitrary Buffer types, to avoid
a bloated implementation.
PixelFormatUtil adds support for int32 pixel format conversion.
specific implementations (Scaling)
specific implementations (Scaling)
Add FIXME note ..
GLContextImpl, DisplayImpl
GLProfile, GLContextImpl:
- ReflectionUtil.DEBUG_STATS_FORNAME: Dump forName stats if set
- Cache GL*Impl and GL*ProcAddressTable Constructor<?> for GLContextImpl's createInstance(..) (see the sketch below)
- Remove off-thread early classloading thread which only adds complications
DisplayImpl:
- Remove one redundant availability test
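A generic sketch of the constructor-caching idea referenced above (not the actual GLContextImpl code); the cache layout and class name are illustrative:

```java
import java.lang.reflect.Constructor;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

final class ConstructorCacheSketch {
    private static final Map<String, Constructor<?>> cache = new ConcurrentHashMap<String, Constructor<?>>();

    static Object createInstance(final String className, final Class<?>[] argTypes, final Object[] args)
            throws ReflectiveOperationException {
        Constructor<?> ctor = cache.get(className);
        if (null == ctor) {
            // Class.forName(..) and getDeclaredConstructor(..) are the expensive parts worth caching
            ctor = Class.forName(className).getDeclaredConstructor(argTypes);
            cache.put(className, ctor);
        }
        return ctor.newInstance(args);
    }
}
```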
appropriately
Instead of using [mWin orderBack: mWin] for child windows,
utilize [mWin orderWindow: NSWindowOut relativeTo: [pWin windowNumber]]
different toolkits!
With the Applet3 plugin (firefox - using GTK), our child window seems to receive the absolute position,
or 'arbitrary' values (?).
Will need to figure out how to properly determine these cases.
In the meantime, simply turn off waitForPosition(..) for child windows,
which shall not harm NEWT.
Impacts the following actions as a child window:
- createNativeWindow
- reparent
- fullscreen
window.
back to top-level ctor if parentWindow is null
UpstreamSurfaceHookMutableSizePos to take position into account; WrappedWindow: invalidate and destroy - display device could be opened.
NativeWindow location on screen
- Added 'plugin3-public' jar and sources for Applet3 support, copied from icedtea-web3
- Added com.jogamp.newt.util.applet.JOGLNewtApplet3Run, capable of running an Applet3
displayConnection string and screen-idx)
AbstractGraphicsDevice/NativeWindow
deadlock due to AWTTreeLock acquisition while add/remove AWT listener
The AWTTreeLock is acquired by Component.removeHierarchyListener
and as for _every_ AWT component, modifications shall happen on the AWT-EDT.
IMHO the user shall offload AWT modifications to the AWT-EDT,
similar to what JOGL's GLCanvas and NewtCanvasAWT do.
However, since JAWTWindow also represents a NativeWindow instance
we shall offload AWTTreeLock methods ourselves!
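A minimal sketch of the offloading idea; the names are illustrative and the real JAWTWindow code may use JOGL's own AWT-EDT executor instead:

```java
import java.awt.Component;
import java.awt.EventQueue;
import java.awt.event.HierarchyListener;

final class AWTEDTOffload {
    static void removeHierarchyListenerOnEDT(final Component c, final HierarchyListener l) {
        if (EventQueue.isDispatchThread()) {
            c.removeHierarchyListener(l);           // already on the AWT-EDT
        } else {
            EventQueue.invokeLater(new Runnable() { // defer to the AWT-EDT instead of the caller's thread
                public void run() { c.removeHierarchyListener(l); }
            });
        }
    }
}
```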
multiple instances in parallel
Tested on GNU/Linux x86_64.
Result: Audio and video play well here, i.e. audio is actually mixed from both movies.
Even if one movie (below) stops and restarts (AL buffer reset),
it didn't crash.
+++
LIB_AV Codec : 54.92.100 [cc 54]
LIB_AV Format : 54.63.104 [cc 54]
LIB_AV Util : 52.18.100 [cc 52]
LIB_AV Resample: 1.0.1 [cc 1, loaded true]
LIB_SW Resample: 0.17.102 [cc 0, loaded true]
LIB_AV Device : [loaded true]
LIB_AV Class : FFMPEGv09Natives
+++
(enable MovieSimple in scripts/tests.sh)
bash scripts/tests-x64.sh -loop -windows 2 \
-urlN 0 http://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_720p_surround.avi \
-urlN 1 http://video.webmfiles.org/elephants-dream.webm
+++
2 Streaming threads, i.e. decoder threads:
"Thread-5-StreamWorker_1" daemon prio=10 tid=0x00007f994c102000 nid=0x5826 in Object.wait() [0x00007f996fa37000]
at jogamp.opengl.util.av.GLMediaPlayerImpl$StreamWorker.run(GLMediaPlayerImpl.java:1231)
"Thread-4-StreamWorker_0" daemon prio=10 tid=0x00007f99600ed000 nid=0x5825 in Object.wait() [0x00007f996cd09000]
at jogamp.opengl.util.av.GLMediaPlayerImpl$StreamWorker.run(GLMediaPlayerImpl.java:1231)
GLMediaEventListener impl. to access GLMediaPlayer associated objects
on for NewtVersionActivity)
Refines commit fbe00e6f5dca8043b40dd96f096fecc9424e0cc3
Instead of querying driver artifacts (vendor, platform, version ..),
we can simply autodetect this quirk by trying to get a second egl-display handle
when initializing the EGLDrawableFactory's default device:
EGL.eglGetDisplay(EGL.EGL_DEFAULT_DISPLAY)
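A sketch of that probe using JOGL's internal jogamp.opengl.egl.EGL binding (internal API, shown for illustration only); the real detection lives in EGLDrawableFactory's device initialization:

```java
import jogamp.opengl.egl.EGL;

final class SingletonEGLDisplayProbe {
    /** Assumes the default device already holds its first EGL display handle. */
    static boolean secondDisplayUnavailable() {
        final long probe = EGL.eglGetDisplay(EGL.EGL_DEFAULT_DISPLAY);
        return 0 == probe;   // EGL_NO_DISPLAY -> only one EGL display is handed out
    }
}
```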
a.so.N' symlinks for debian i386 libs on x86_64
Display via eglGetDisplay(..)
NVIDIA 331.38 (Linux X11) EGL impl. only supports _one_ EGL Display via eglGetDisplay.
- Subsequent eglGetDisplay(..) calls fail.
- Using the same 'global' egl-display does work though
Remedy: Add 'GLRendererQuirks.SingletonEGLDisplayOnly'
Detection of the quirk is done as usual in GLContextImpl.setRendererQuirks(..),
and EGLDrawableFactory passes the quirk, if detected, down to EGLDisplayUtil.
The latter implements the singleton eglDisplay handle.
EGLDisplayUtil: Cleaned up ..
- EGLDisplayRef employs the reference handling incl. eglInitialize(..) and eglTerminate(),
  as well as the new singleton quirk.
- Mark all internal methods 'private',
  to remove possible [untested] side effects.
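For callers, a hedged sketch of querying the new quirk once a context exists; it assumes GLContext.hasRendererQuirk(..) as the query entry point:

```java
import javax.media.opengl.GLContext;        // com.jogamp.opengl.GLContext since JOGL 2.3
import com.jogamp.opengl.GLRendererQuirks;

final class SingletonEGLDisplayQuirkCheck {
    static boolean isSingletonEGLDisplayOnly(final GLContext ctx) {
        return ctx.hasRendererQuirk(GLRendererQuirks.SingletonEGLDisplayOnly);
    }
}
```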
[GLContext|GL].hasFullFBOSupport() == true
OpenGL ES 3.0 supports full framebuffer operations, incl. multiple color-attachments and multisample.
Hence [GLContext|GL].hasFullFBOSupport() shall return true w/ an ES 3.0 context.
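A small sketch of the resulting invariant; 'gl' is assumed to be the GL object of a current context:

```java
import javax.media.opengl.GL;   // com.jogamp.opengl.GL since JOGL 2.3

final class ES3FullFBOCheck {
    /** Per this change: an ES 3.0 context implies full FBO support. */
    static boolean holds(final GL gl) {
        return !gl.isGLES3() || gl.hasFullFBOSupport();
    }
}
```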
EGLDrawableFactory.mapAvailableEGLESConfig(..): Clarify