Commit message

'getNextTexture(..)' is issued here!
Thanks to Xerxes for analyzing this issue thoroughly.
TODO: Implement EOS for 'Audio Only' and test seek, pause, etc. - Apply manual tests in MovieSimple

w/ Xcode's xcrun) - Remove absolute include path:
#include </usr/include/machine/types.h> -> #include <machine/types.h>

appletviewer) when moving the horizontal slider (vertical: ok)
Moving the horizontal slider when run as an applet (Firefox, Safari - not appletviewer)
doesn't move the GLCanvas even though it is resized.

JAWT_OSX_CALAYER_QUIRK_SIZE and JAWT_OSX_CALAYER_QUIRK_POSITION.
- Provide quirk bits for the OSX CALayer depending on the JVM/AWT in use
  and act accordingly.
- TestBug816OSXCALayerPosAWT: Add resize by frame

FFMPEGNatives.initIDS0() -> FFMPEGStaticNatives.initIDS0(); Clean up warnings and includes (clang); Forgot to commit the new ffmpeg_static.h

up warnings and includes (clang).

compiled for it -> pass
Scenario: ffmpeg-0.10, where we are not prepared (compiled-in) for sw-resample support.
Don't use it if the compiled-in version (CC) is < 0 (n/a), and allow it to pass at load time.

don't use tchar.h; Fix compiler warning: Add missing (intptr_t) cast.

orientation change (flipped), API-doc,
- State
  - Fix state transition (initGL() error)
- Camera options
  - options use ';' as the query separator
  - don't use 'default' options, the driver should know
- Detect and act on orientation change (flipped)
  - the ffmpeg impl detects whether 'flipped' changes and triggers a SIZE update event.
    This allows the application to react, i.e. re-init GL and use the new TextureCoords (see the sketch below).
    Test: Works well on Windows w/ the rawvideo dshow camera driver/codec.
- API-doc
  - TexSeqEventListener/GLMediaEventListener usage / constraints (GL, ..)
  - State transition fix
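
A minimal Java sketch of reacting to the SIZE update event mentioned above (e.g. after the ffmpeg impl detects an orientation/flip change). The listener and constant names follow the GLMediaEventListener API referenced in this commit; treat the exact package names and signatures as assumptions for your JOGL version.

    import com.jogamp.opengl.util.av.GLMediaEventListener;
    import com.jogamp.opengl.util.av.GLMediaPlayer;
    import com.jogamp.opengl.util.texture.TextureSequence;

    final class OrientationAwareListener implements GLMediaEventListener {
        @Override
        public void attributesChanged(final GLMediaPlayer mp, final int eventMask, final long when) {
            if( 0 != ( eventMask & GLMediaEventListener.EVENT_CHANGE_SIZE ) ) {
                // Size (or flipped orientation) changed: flag a GL re-init here so the next
                // display cycle fetches fresh texture coordinates from the new frames.
            }
        }
        @Override
        public void newFrameAvailable(final GLMediaPlayer mp, final TextureSequence.TextureFrame frame, final long when) {
            // Called off the rendering thread; keep this lightweight per the listener constraints above.
        }
    }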

'camera ID'
If linesize is < 0, it is not invalid as assumed in commit eca6a5cb1e2beda84dfbafc31ed225e272f4f3fb,
but vertically flipped (bottom-up).
We have to adjust the data pointers, which are moved to the upper end of memory as well,
and can then proceed as usual (see the sketch below).
TODO:
- Update texture 'mustFlipVertically' to 'false' in this case.
- Later:
  - Allow updating the texture size ..
  - The whole pixel-fmt/texture-lookup-shader association must scale better,
    i.e. extract the 'knowledge' into one class and use static shader code
    with uniforms instead of hard-coded values .. etc.
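
A hedged Java-side sketch of the pointer adjustment described above; the real work happens in native code on the AVFrame data pointers, so the helper below is only illustrative. With a negative linesize the plane pointer addresses the row at the highest memory address, so stepping back (height - 1) rows yields the lowest address, the upload can proceed with the usual positive stride, and the texture's 'mustFlipVertically' would become false per the TODO.

    final class PlaneBase {
        // 'planePtr' stands in for an AVFrame data[i] pointer, passed to Java as a long.
        static long lowestRowAddress(final long planePtr, final int linesize, final int height) {
            if( linesize >= 0 ) {
                return planePtr;                               // top-down layout, nothing to adjust
            }
            // bottom-up: step back (height - 1) rows of |linesize| bytes each
            return planePtr + (long) linesize * ( height - 1 );
        }
    }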

windows, 2 more pixel formats, fail-safe data handling
- add support for ffmpeg 2 / libav 10 -> lavc55_lavf55_lavu52_lavr01
- add support for ffmpeg libswresample (similar to libavresample)
- handle BGRA (GL type) and BGR24 (texture shader)
- Change camera URI semantics: drop 'host' and use 'path' for the camera ID,
  and use 'query' for options (see the sketch below).
- add support for Windows' DShow camera selection
  - our camera id -> index into the list of video-input devices;
    this gives us the same behavior as on Linux
  - requires the Windows libs: strmiids, uuid, ole32, oleaut32
  - Compiles w/ MinGW64, works w/ libav/ffmpeg
  - TODO: test compilation w/ MinGW 32bit!
- don't push data to the texture if (linesize <= 0);
  this may happen due to a buggy decoder / setup ..
Tested manually on GNU/Linux x64 and Windows x64:
- GNU/Linux: libav 0.8, libav 9, libav 10, ffmpeg 1.2, ffmpeg 2.0
- Windows: libav 0.8, libav 9, ffmpeg 2.0
- videos and camera
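
A small Java sketch of the camera URI layout described above: scheme 'camera', the path carries the camera ID (the device-list index), and the query carries options separated by ';' (see the camera-options commit further up). The option keys shown here are placeholders, not the actual keys used by the player.

    import java.net.URI;
    import java.net.URISyntaxException;
    import java.util.Arrays;

    final class CameraUriDemo {
        public static void main(final String[] args) throws URISyntaxException {
            final URI cam = new URI("camera:/0?size=640x480;rate=30");     // hypothetical option keys
            final String cameraId = cam.getPath().substring(1);            // "0" -> index into the device list
            final String[] options = null != cam.getQuery() ? cam.getQuery().split(";") : new String[0];
            System.out.println("camera id " + cameraId + ", options " + Arrays.toString(options));
        }
    }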

def. high camera options, cleanup symbols)
- Fix libav/ffmpeg compilation
- Split native GLContext code from JoglCommon
  - JoglCommon is required for ffmpeg_* c-compile/link
- Supported versions now:
  - 0.8 53.53.51
  - 9.0 54.54.52
- FFMPEGMediaPlayer
  - Update API doc, add compatibility notes .. etc
  - Pixel format conversions (via shader texture lookup func):
    - YUV420P, YUVJ420P
    - YUV422P, YUVJ422P
    - YUYV422
  - Properly handle aid/vid
  - In camera mode: set high default values
    - TODO: Make it configurable via the camera URI:
      - video_size
      - framerate
      - ?
- FFMPEGDynamicLibraryBundleInfo
  - Clean up symbols / remove unused (pre 53)
  - Add av_dict_* methods

version-dependent C files individually and inject object files; ffmpeg *register_all() at setStream0(..)
- Use 'dot-less' dir/file names
- Compile ffmpeg version-dependent C files individually and inject the object files.
- ffmpeg *register_all() at setStream0(..)
- Only register devices if available _and_ a camera is requested.

'stub_includes'

missing symbol 'av_realloc'.
- Add camera input
  - Use a URI w/ scheme 'camera' to determine that camera input is desired;
    use the URI host as the camera id.
    E.g. 'camera://0' for the 1st camera.
  - AndroidGLMediaPlayerAPI14: Via 'Camera'
  - FFMPEG*: Via libavdevice, device name and input format
  - TODO: Add controls to manipulate the camera, if available
- FFMPEG*
  - Add symbols
    - avcodec_register_all
    - av_realloc (was missing)
    - avdevice_register_all
  - Load libavdevice (opt)
  - Camera:
    - Use <ID> (Windows) and /dev/video<ID> on other OSes
    - simply find the input format in native code
  - Support YUYV422 (used in video4linux2, etc.)
    - Stuff 2x 16bpp (YUYV) into one RGBA pixel! (see the sketch below)
    - Add texture format for 16bpp
    - Add texture lookup shader
  - Fix av_packet leak in readNextImpl(..)
    - Restore the original pointer and size values,
      since we may have moved along within the packet;
      then call av_free_packet().
  - Use a null AudioSink if the audio-id is NONE
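
A hedged Java sketch of the YUYV422 packing arithmetic noted above: two horizontally adjacent video pixels (Y0 U Y1 V, 2x 16bpp) end up in a single RGBA texel, so the GL texture is only half as wide as the video frame and the texture lookup shader selects Y0 or Y1 depending on which half of the texel a fragment samples. The helper name is illustrative only.

    final class YuyvTexSize {
        static int[] textureSize(final int videoWidth, final int videoHeight) {
            final int texWidth  = ( videoWidth + 1 ) / 2;    // 2 video pixels per RGBA texel
            final int texHeight = videoHeight;               // height is unchanged
            return new int[] { texWidth, texHeight };
        }
    }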

FFMPEGNatives -> FFMPEGv08Natives + FFMPEGv09Natives
Enables FFMPEGMediaPlayer to work w/ either ffmpeg/libav version 8 or 9 w/ the same JOGL binary.
The same C source code is compiled against:
  1: version 0.8 FFMPEGv08Natives lavc53.lavf53.lavu51
  2: version 0.9 FFMPEGv09Natives lavc54.lavf54.lavu52.lavr01
FFMPEGv08Natives and FFMPEGv09Natives implement FFMPEGNatives;
the native C code uses CPP '##' macro concatenation to produce unique function names.
To enable 'cpp' to find the libav* header files matching the desired version,
we have placed them in the c-file's folder, issued '#include "path/file.h"',
and added symbolic links to allow finding the same module and its 'sister modules':
ls -l libavformat/
..
lrwxrwxrwx 1 sven sven 13 Aug 26 12:56 libavcodec -> ../libavcodec
lrwxrwxrwx 1 sven sven 14 Aug 26 12:56 libavformat -> ../libavformat
lrwxrwxrwx 1 sven sven 12 Aug 26 12:57 libavutil -> ../libavutil
..
At static init, FFMPEGDynamicLibraryBundleInfo determines the runtime version
and instantiates the matching FFMPEGNatives, or null if none matches (see the sketch below).
FFMPEGMediaPlayer still compares the compile-time and runtime versions.
FFMPEGMediaPlayer passes its own instance to FFMPEGNatives for callbacks.
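
A minimal Java sketch of the runtime selection described above: at static initialization the runtime libav/ffmpeg version is probed and the matching FFMPEGNatives implementation is instantiated, or null if none matches. The switch keys mirror the lavc majors listed above; how the major version is actually probed is left out, and the method is illustrative, not the real FFMPEGDynamicLibraryBundleInfo code (the referenced types live in JOGL's impl package).

    final class NativesSelector {
        static FFMPEGNatives createNatives(final int avcodecMajor) {
            switch( avcodecMajor ) {
                case 53: return new FFMPEGv08Natives();   // ffmpeg/libav 0.8: lavc53.lavf53.lavu51
                case 54: return new FFMPEGv09Natives();   // ffmpeg/libav 0.9: lavc54.lavf54.lavu52.lavr01
                default: return null;                     // unsupported runtime version
            }
        }
    }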

negotiation w/ AudioSink; Misc
- Add libavresample support
  - Resample if avail && (!AV_SAMPLE_FMT_S16 || !prefSampleRate || !sinkSupported)  (see the sketch below)
  - Resample to: prefSampleRate (if set), AV_SAMPLE_FMT_S16 and min(channelCount, maxChannelCount)
- Proper AudioFormat negotiation w/ AudioSink:
  - Utilize AudioSink's 'isSupported(AudioFormat)'
- Misc
  - always use 'av_get_bytes_per_sample(fmt)', don't assume 2
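
A hedged Java rendering of the resample rule quoted above (field names are illustrative, not the actual FFMPEGMediaPlayer members): resample only when libavresample is available and the decoded format is not already acceptable to the AudioSink as-is.

    final class ResampleRule {
        static boolean needsResample(final boolean avresampleAvail, final boolean decodedIsS16,
                                     final int prefSampleRate, final boolean sinkSupportsDecoded) {
            // direct mirror of: avail && (!AV_SAMPLE_FMT_S16 || !prefSampleRate || !sinkSupported),
            // with prefSampleRate == 0 read as "no preferred rate set"
            return avresampleAvail && ( !decodedIsS16 || 0 == prefSampleRate || !sinkSupportsDecoded );
        }
    }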

- Add compile-time/runtime version check; fail if the major versions do not match,
  assuming binary incompatibility
- Add 'av_find_input_format' for future video input support
  - Manually map '/dev/video<NUM>' to video input - not working yet.
    - WINDOWS: Set file to '<NUM>'
  - Set the input format string depending on the OS

non-frame-based AudioSinks to deal w/ the desired queue sizes.
- Rename AudioSink.initSink(..) -> AudioSink.init(..)
- Move "int initialFrameCount, int frameGrowAmount, int frameLimit" to
  "int initialQueueSize, int queueGrowAmount, int queueLimit",
  based on milliseconds instead of frame count.
- Pass the hint 'float frameDuration' to calculate the frame count for a frame-based audio sink,
  i.e. ALAudioSink (see the sketch below).
- Add sensible static final default values
- AudioDataFormat: Add convenient conversion routines (samples/bytes/frame-count)
- FFMPEGMediaPlayer: Retrieve the audio frame size in samples per channel and pass it to AudioSink.init(..)
  to properly calculate frame count/limits based on duration.
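
A small Java sketch of the millisecond-to-frame-count conversion implied above, as a frame-based sink (i.e. ALAudioSink) might apply it when given the 'frameDuration' hint; names and the fallback behavior are assumptions, not the actual ALAudioSink code.

    final class QueueSizing {
        static int queueSizeInFrames(final int queueSizeMillis, final float frameDurationMillis) {
            if( frameDurationMillis <= 0f ) {
                return queueSizeMillis;                        // no usable hint: degenerate fallback (assumption)
            }
            // e.g. a 300 ms queue with ~23 ms audio frames -> 14 frames
            return Math.max(1, (int) Math.ceil( queueSizeMillis / frameDurationMillis ));
        }
    }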

Multithreaded decoding and API should be considered stable by now;
minor changes may apply if the Android/OMX impl. requires it.
We still need to solve the TODOs listed below, copied from 474ce65081ecd452215bc07ab866666cb11ca8b1.
+++
- *TextureFrame OO changes:
  - TextureFrame extends TimeFrameI
- GLMediaPlayerImpl*
  - Adapt to Ringbuffer changes of GlueGen commit f9f881e59c78e3036cb3f956bc97cfc3197f620d
  - Fix impl. method's API doc
  - getNextTextureImpl(..) returns video PTS
  - Fix audio-only playback
  - frame dropping shall only happen if:
    - the previous frame has not been dropped
    - the frame is too late
    - one decoded frame is already available
  - Don't block for the decoder anymore (see the sketch below):
    - nextFrame = "videoFramesDecoded.getBlocking() -> videoFramesDecoded.get()";
      'no next decoded frame available' can only mean:
      - slow decoding/hardware
      - slow transport
      hence we shall not block rendering.
    - Add DEBUG output if using the last frame
    - Add integer property 'jogl.debug.GLMediaPlayer.StreamWorker.delay' in milliseconds
      to simulate slow decoding, i.e. the delay is added in StreamWorker after decoding,
      before pushing the new frame to the Ringbuffer.
- FFMPEGMediaPlayer:
  - audioFrameLimitWithVideo 128 -> 64
  - audioFrameLimitAudioOnly 128 -> 32
  - uses AudioSink's 'enqueueData(int pts, ByteBuffer bytes, int byteCount)'
  - fixes for audio-only playback
+++
Working Tests: MovieSimple and MovieCube
TODO-1: Fix
  - Android
  - OMXGLMediaPlayer
TODO-2:
  - Fix issue where async audio frames arrive much later than the 1st video frame, i.e. around 300ms.
  - Default TextureCount .. maybe 3?
  - Add Audio synchronization?
  - Find the 'truth' about the correlation of audio and video PTS values;
    currently we assume both to be unrelated.
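
A hedged Java sketch of the non-blocking fetch described above ('videoFramesDecoded.getBlocking() -> videoFramesDecoded.get()'): if no decoded frame is available, the renderer keeps showing the last frame instead of blocking on a slow decoder or slow transport. The Ringbuffer type is GlueGen's, as referenced above; the wrapper class and field names are illustrative.

    import com.jogamp.common.util.Ringbuffer;

    final class VideoFrameSource<F> {
        private final Ringbuffer<F> videoFramesDecoded;
        private F lastFrame;

        VideoFrameSource(final Ringbuffer<F> rb) { this.videoFramesDecoded = rb; }

        F nextFrameOrLast() {
            final F next = videoFramesDecoded.get();   // non-blocking; null if nothing is decoded yet
            if( null == next ) {
                return lastFrame;                      // slow decoding/transport: repeat, don't block rendering
            }
            lastFrame = next;
            return next;
        }
    }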

- Update/fix GLMediaPlayer API doc
- GLMediaEventListener: Add event bits for all state changes to be delivered via attributesChanged(..)
- StreamWorker / Decoder Thread:
  - Use StreamWorker only!
  - Handle exceptions on StreamWorker via StreamException
  - Handles stream initialization and decoding (-> initStream(..))
  - Split initGLStream(..) -> initStream(..) + initGL(GL)  (see the sketch below)
    - allow initStream(..)'s implementation to be executed on StreamWorker
    - allow GL initialization to be 'postponed' until the stream is read,
      i.e. non-blocking stream initialization (UI .. etc)
  - Handle EOS via END_OF_STREAM_PTS -> pause/event
  - Video: Use lock-free LFRingbuffer, similar to
    ALAudioSink (commit f18a94b3defef16e98badd6d99f2422609aa56c5)
+++
- FFMPEGDynamicLibraryBundleInfo
  - Add avcodec's:
    - avcodec_get_frame_defaults, avcodec_free_frame (54.28.0), avcodec_flush_buffers
  - Add avutil's:
    - av_frame_unref (55.0.0)
  - Add avformat's:
    - avformat_seek_file (??)
+++
- FFMPEGMediaPlayer Native:
  - add 'snoop' video frames for a/v frame count relation.
    Disabled per default, since it is no longer needed due to ALAudioSink's
    grow-buffer usage of LFRingbuffer.
  - use sp_avcodec_free_frame if available
  - 'useRefCountedFrames=1' for libav 55.0 to cache more than one audio frame;
    not used, since ALAudioSink's OpenAL usage does not require it (copies data once).
    Note: the above snooped-video frame count is used here.
  - use only one cached audio-frame (-> see above, OpenAL copies data once),
    while reusing the NIO buffer!
  - Perform OpenGL sync (glFinish) in native code!
  - find the proper PTS value, i.e. either the frame's PTS or DTS,
    see 'PTSStats'.
- FFMPEGMediaPlayer Java:
  - use private fields
  - simplified code due to the above changes.
+++
Working Tests: MovieSimple and MovieCube
TODO-1: Fix
  - Android
  - OMXGLMediaPlayer
TODO-2:
  - Fix issue where async audio frames arrive much later than the 1st video frame, i.e. around 300ms.
  - Default TextureCount .. maybe 3?
  - Add Audio synchronization?
  - Find the 'truth' about the correlation of audio and video PTS values;
    currently we assume both to be unrelated.
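
A hedged usage sketch of the initStream(..) / initGL(GL) split described above: stream initialization is kicked off without a GL context (the work runs on StreamWorker), and GL initialization follows later, e.g. on the GL thread once the stream reports itself initialized. The exact parameter list of initStream(..) and the STREAM_ID_AUTO constant are assumptions based on this commit's description.

    import java.net.URI;
    import com.jogamp.opengl.util.av.GLMediaPlayer;

    final class TwoPhaseInit {
        // Phase 1: no GL context required; non-blocking from the caller's point of view.
        static void startStream(final GLMediaPlayer mp, final URI streamLoc) {
            mp.initStream(streamLoc, GLMediaPlayer.STREAM_ID_AUTO, GLMediaPlayer.STREAM_ID_AUTO, 3 /* textureCount */);
        }
        // Phase 2, later and on the GL thread (e.g. after an 'initialized' attributesChanged(..) event):
        //   mp.initGL(gl);
        //   mp.play();
    }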

- GLMediaPlayer: Use URI instead of URL, allowing a non-resolved location to be passed
  - Java's URL doesn't allow 'other' protocols, e.g. RTSP (see the sketch below)
- GLMediaPlayer: Add a table of test streams and their locations ..
- FFMPEGMediaPlayer
  - Handle the av_read_play/pause response on the Java side; ignore errors and simply dump them in DEBUG_NATIVE mode
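
A short Java illustration of why URI replaces URL above: java.net.URL only accepts protocols with a registered stream handler, so an RTSP location is rejected up front, while java.net.URI simply carries the unresolved location. The stream address is a placeholder.

    import java.net.MalformedURLException;
    import java.net.URI;
    import java.net.URL;

    final class RtspLocationDemo {
        public static void main(final String[] args) throws Exception {
            final URI ok = new URI("rtsp://example.com/media/stream1");    // fine: nothing is resolved here
            System.out.println("scheme: " + ok.getScheme());
            try {
                new URL("rtsp://example.com/media/stream1");               // no rtsp handler in the JDK
            } catch (final MalformedURLException e) {
                System.out.println("URL rejects rtsp: " + e.getMessage());
            }
        }
    }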

- Use Platform.currentTimeMillis() for accurate timing!
- GLMediaPlayer / GLMediaPlayerImpl
  - Add DEBUG_NATIVE property jogl.debug.GLMediaPlayer.Native
    for verbose impl. messages, i.e. ffmpeg/libav
  - Add 'synchronization' section to the GLMediaPlayer API doc (WIP)
  - Use passive non-blocking video synchronization,
    i.e. repeat frames instead of 'sleep' (see the sketch below).
    Thanks to Xerxes's suggestion.
  - Add flushing of cached decoded frames,
    allowing removal of the complicated 'videoSCR_reset_latch'
  - FramePusher (threaded decoding):
    - Always create a shared context!
    - Release the context while pausing
    - Pre/post 'getNextTextureImpl()' actions only
      at makeCurrent/release.
    - newFrameAvailable(..) signal after the decoded frame is enqueued
- FFMPEGDynamicLibraryBundleInfo
  - Bind additional functions of libavcodec:
    + "av_init_packet",
    + "av_new_packet",
    + "av_destruct_packet",
  - Bind additional functions of libavformat:
    + "avformat_seek_file",
    + "av_read_play",
    + "av_read_pause",
  - DEBUG property := FFMPEGMediaPlayer.DEBUG || DynamicLibraryBundleInfo.DEBUG;
- FFMPEGMediaPlayer
  - Use libavformat's 'av_read_play()' and 'av_read_pause()',
    which may get utilized for network streams, e.g. RTSP
  - getNextTextureImpl(..):
    - Fix retry loop
    - Use postNextTextureImpl/preNextTextureImpl if desired (PSM)
  - Native:
    - Use the fixed my_av_q2i32(..) macro (again)
    - Use INVALID_PTS marker (synced w/ Java code)
    - DEBUG: Dump more detailed frame information
    - TODO: Consider passing frame_delay, especially for repeated frames!
- Tests (MovieSimple, MovieCube):
  - Refine KeyEvent controls for seek and speed.
  - TODO:
    - Proper audio clock calculation - difficult w/ OpenAL!
    - Video / Audio sync:
      - seek!
      - streams w/ very async A/V frames
  - Test Streams:
    - Five-minute-sync-test.mp4
    - Audio-Video-Sync-Test-Calibration-23.98fps-24fps.mp4
    - sound_in_sync_test.mp4
    - big_buck_bunny_1080p_surround.avi
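
A minimal Java sketch of the passive synchronization described above: instead of sleeping until a frame's presentation time, the renderer keeps showing the current frame for another display cycle whenever the next frame's PTS is still ahead of the video clock. All names are illustrative.

    final class PassiveSync {
        static boolean presentNextFrame(final int nextFramePTS, final int videoSCR) {
            // present only once the stream clock has reached the frame's PTS;
            // otherwise repeat the current frame this cycle instead of sleeping
            return nextFramePTS <= videoSCR;
        }
    }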

- GLMediaPlayer
  - Remove State.Stopped and method stop() - redundant, use pause() / destroy()
  - Add notion of stream IDs
  - Add API doc: State / Stream-ID incl. html-anchor
  - Expose video/audio PTS, ..
  - Expose optional AudioSink
  - Min multithreaded textureCount is 4 (EGL* and FFMPEG*)
- GLMediaPlayerImpl
  - Move AudioSink-related impl. to this class,
    allowing a tight video implementation to reuse the logic.
  - Remove 'synchronized' methods, synchronize on State
    where applicable
  - implement new methods (see above)
  - playSpeed is handled partially in AudioSink.
    If it exceeds AudioSink's capabilities, drop audio and rely solely on video sync (see the sketch below).
  - video sync (WIP)
    - video pts delay based on geometric weight
    - reset video SCR if 'out of range', resync w/ PTS
- FramePusher
  - allow interruption when pausing/stopping,
    while waiting for the next available free frame to decode.
- FFMPEGMediaPlayer
  - Add proper AudioDataFormat negotiation AudioSink <-> libav
  - Parse libav's SampleFormat
  - Remove AudioSink interaction (moved to GLMediaPlayerImpl)
- Tests (MovieSimple, MovieCube):
  - Add aid/vid selection
  - Add KeyListener for actions: seek(..), play()/pause(), setPlaySpeed(..)
  - Dump perf-string every 2s
- TODO:
  - Add audio sync in AudioSink, similar to GLMediaPlayer's weighted video delay;
    here: drop audio frames.
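
A hedged Java sketch of the playSpeed split described above: let the AudioSink try first, and if the rate is beyond its capabilities, drop audio and rely solely on video sync. The boolean-returning setPlaySpeed(..) contract is an assumption about the AudioSink API of this commit.

    import com.jogamp.opengl.util.av.AudioSink;

    final class PlaySpeedPolicy {
        static boolean applyPlaySpeed(final AudioSink sink, final float rate) {
            final boolean audioHandlesIt = null != sink && sink.setPlaySpeed(rate);
            if( !audioHandlesIt ) {
                // beyond the sink's capabilities: drop/mute audio, sync on video only
            }
            return audioHandlesIt;
        }
    }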

optional OMX to fail to compile

available EGL/FFMPEG. WIP!
Off-thread decoding:
If the validated (impl) textureCount > 2, decoding happens on an extra thread.
If decoding requires a GL context, a shared context is created for the decoding thread.
API Changes:
- initGLStream(..): Adds 'textureCount' as an argument.
- TextureSequence.TexSeqEventListener.newFrameAvailable(..) exposes the newly available frame
- TextureSequence.TextureFrame exposes the PTS (video)
Implementation:
- 'int validateTextureCount(int)': the implementation decides whether textureCount can be > 2,
  i.e. off-thread decoding allowed; default is NO w/ textureCount==2!
- 'boolean requiresOffthreadGLCtx()': the implementation decides whether a shared context is required for off-thread decoding
- 'syncFrame2Audio(TextureFrame frame)': the implementation shall handle a/v sync, due to audio stream details (pts, buffered frames)
- FFMPEGMediaPlayer extends GLMediaPlayerImpl, no more EGLMediaPlayerImpl (redundant)
+++
- SyncedRingbuffer: Expose the T[] array
+++
TODO:
- syncAV!
- test Android

- AudioSink w/ AudioFrame and formats public
- ALAudioSink uses a circular buffer now, hence relaxing the one-threaded player mode
- FFMPEGMediaPlayer uses multiple audio frames (equal to the ALAudioSink number)
  and wraps the data into NIO buffers w/o copy.
- FFMPEGMediaPlayer audio threading is currently disabled: distorted sound.
  It seems that the ALAudioSink's circular buffer usage is good enough for now.
- Verbosity only w/ DEBUG flag
- New SyncedRingbuffer for efficient synced buffering

FFMPEGMediaPlayer

Signed-off-by: Xerxes Rånby <[email protected]>

Signed-off-by: Xerxes Rånby <[email protected]>

frames to Java.
Signed-off-by: Xerxes Rånby <[email protected]>

Signed-off-by: Xerxes Rånby <[email protected]>

av_samples_get_buffer_size
Signed-off-by: Xerxes Rånby <[email protected]>

free of packet memory.
Signed-off-by: Xerxes Rånby <[email protected]>

Re-enable code to decode audio frame.
Throw a runtime exception for unimplemented sp_avcodec_decode_audio3 fallback.
Fix pts calculation to prevent division by zero caused by type truncation.
Fix aPTS calculation to use valid data.
Hide pts & aPTS info while running non-verbose.
Signed-off-by: Xerxes Rånby <[email protected]>

ES3 / GL4.3:
- Update all EGL, GLX, WGL and GL (desktop and mobile) Khronos headers to the latest version.
- GL3/gl3* -> GL/glcorearb*
- Explicitly preserve ES2_compatibility and ES3_compatibility in the header;
  most extension grouping was removed in the new headers.
- Always load all GLHeaders to ensure proper extension association across all profiles.
- Unified method signatures
- Added GL_EXT_map_buffer_range to core
- Use a common 'glMapBufferImpl(..)' for all glMapBuffer(..) and glMapBufferRange(..) impls.
- Init necessary fields of GL instances via 'finalizeInit()', called by reflection if it exists.
  This allows removing initialization checks, i.e. for all buffer validations.
- BuildStaticGLInfo: Can handle the new GL header structure, i.e. one CPP extension block incl. define + funcs.
- GLJavaMethodBindingEmitter: Simply print the
- No GL duplication due to new intermediate interfaces, see below
- OO linear inheritance (Added GL2ES3, GL3ES3 and GL4ES3 intermediates):
  GLBase - GL - GL2ES2 - GLES2
  GLBase - GL - GL2ES2 - GL2GL3 - [ GL2, GL3 ]
  GLBase - GL - GL2ES2 - GL2ES3 - GL3ES3 - [ GL3 ]
  GLBase - GL - GL2ES2 - GL2ES3 - GL3ES3 - GL4ES3 - [ GLES3, GL4, .. ]
- Expose 'usable' intermediate interfaces GL3ES3 and GL4ES3 in GLBase/GLProfile/GLContext
  via is*() and get*() (see the sketch below).
- GLContext*:
  - isGL3core() is true if [ GL4, GL3, GLES3 ] (added GLES3)
  - Added ctxProfile argument to allow handling ES versions:
    - getMaxMajor(..), getMaxMinor(..), isValidGLVersion(..) and decrementGLVersion(..)
  - mapGLVersions(..) prepared for ES ARB/KHR validation
  - EGLContext checks ES3 (via the old ctx's GL_VERSION)
  - ExtensionAvailabilityCache adds GL_ES_Version_X_Y for ES.
- Prelim tests w/ Mesa 9.1.3:
  GL Version 3.0 (ES profile, ES2 compat, ES3 compat, FBO, hardware) - OpenGL ES 3.0 Mesa 9.1.3 [GL 3.0.0, vendor 9.1.3 (Mesa 9.1.3)]
- TODO:
  - Use KHR_create_context in EGLContext.createContextARBImpl(..)
  - More tests (Mobile, ..)
+++
Misc:
- GLContext*:
  - Complete glAllocateMemoryNV w/ glFreeMemoryNV.
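
A short Java sketch of using the new intermediate interfaces described above: code written against GL4ES3 runs on both a desktop GL4 core context and an ES3 context, selected via the is*()/get*() queries added to GLBase. Imports use the javax.media.opengl package of this era (newer JOGL releases moved these types to com.jogamp.opengl).

    import javax.media.opengl.GL;
    import javax.media.opengl.GL4ES3;
    import javax.media.opengl.GLAutoDrawable;

    final class CommonSubsetDemo {
        static void clearViaCommonSubset(final GLAutoDrawable drawable) {
            if( drawable.getGL().isGL4ES3() ) {
                // one code path for both GL4-core and ES3 contexts
                final GL4ES3 gl = drawable.getGL().getGL4ES3();
                gl.glClear(GL.GL_COLOR_BUFFER_BIT);
            }
        }
    }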

between the Java GL and CALayer threads; Simplify / Fix waitUntilRenderSignal().
Stuttering was caused by the lack of GL resource synchronization between the Java GL and CALayer threads:
+ // Required(?) to finish previous GL rendering to give CALayer proper result,
+ // i.e. synchronize both threads each w/ their GLContext sharing same resources.
+ //
+ // FIXME: IMHO this synchronization should be implicitly performed via 'CGL.flushBuffer(contextHandle)' above,
+ // in case this will be determined a driver bug - use a QUIRK entry in GLRendererQuirks!
+ gl.glFinish();
Simplify / Fix waitUntilRenderSignal()
- remove loop and 'ready' condition -> nonsense
- if too late, i.e. lastWaitTime+TO < now, use now+TO as the max. vsync waiting time
Bug735 Tests:
- Make vsync, wait and ECT (exclusive context thread) configurable via main args.
- Inv2*, Inv3* and Inv4*: Fluent Animation
- Works w/ ECT

Prevent division and multiplication by zero errors in native code
after mpeg video seek caused by type truncation.
Signed-off-by: Xerxes Rånby <[email protected]>

Main-Thread
Previous code created, set and unset the root CALayer on the current thread,
which led to a very delayed destruction of the root CALayer.
With Java7 this led to possible resource starvation in certain situations,
since Java7 uses a CAOpenGLLayer.
Similar to f354fb204d8973453c538dda78a2c82c87be61dc,
creation, set and unset are operated on the main-thread.

setView: view] which breaks pbuffer; Add [NSOpenGLContext clearDrawable].

896e8b021b39e9415040a57a1d540d7d24b02db1): Run on main-thread w/o blocking; Misc Changes
Commit 896e8b021b39e9415040a57a1d540d7d24b02db1 moved all native CALayer calls to the current thread
to avoid deadlocks.
Even though this seemed to be fine, at least resource GC (release/dealloc calls) was issued
very late, probably due to multithreading synchronization of JAWT and/or the OSX API.
Example: Our 'TestAddRemove01GLCanvasSwingAWT' test didn't free CALayer resources incl. the GL ctx
when destroying the objects (AWT Frame, GLCanvas, ..), eventually leading to resource starvation.
The remedy is a compromise between the behavior before commit 896e8b021b39e9415040a57a1d540d7d24b02db1
and that commit, i.e. run CALayer lifecycle methods on the main-thread, but do not block!
The careful part within MacOSXCGLContext.associateDrawable(..) performs the following block on the main-thread:
- lock the context
- create the NSOpenGLLayer (incl. its own shared GL context and the DisplayLink)
- attach the NSOpenGLLayer to the root CALayer
- unlock the context
Due to the GL ctx locking, this async off-thread operation is safe within our course of operations.
Details:
- NSOpenGLContext
  - Context and CVDisplayLink creation at init
  - Call [ctx update] if the texture/frame size changed
  - 'waitUntilRenderSignal' uses the default TO value if the given TO is 0, to avoid deadlocks
+++
Misc Changes:
- Fix object type detection: isMemberOfClass -> isKindOfClass
  - OSXUtil_isNSView0,
    OSXUtil_isNSWindow0,
    CGL_isNSOpenGLPixelBuffer
- MacOSXCGLDrawable/MacOSXPbufferCGLDrawable: remove the getNSViewHandle() method.
  MacOSXCGLContext uses common code to detect the nature of the drawable handle.
- MacOSXCGLContext/CALayer: Use safe screenVSyncTimeout values, never 0, to avoid deadlock!
- JAWTWindow.invalidate: Call detachSurfaceLayer() if not done yet

need for explicit call
- OffscreenLayerSurface.layoutSurfaceLayer() removed, no longer required
- JAWTWindow adds a ComponentListener, which issues FixCALayerLayout() on resized, moved and shown.
- MyNSOpenGLLayer no longer requires the fix*Size() methods
- MyNSOpenGLLayer::setDedicatedSize() needs no explicit CATransaction; it is performed by the caller.