path: root/src/jogl/classes/jogamp
Commit message | Author | Age | Files | Lines (-/+)
* AWTTilePainter: Fix DEBUG message (used wrong value at println)
  [Sven Gothel, 2013-09-18, 1 file, -1/+1]
* Fix SharedResourceRunner's potential race conditions: use top-level synchronization, simplifying the code and improving robustness.
  [Sven Gothel, 2013-09-18, 3 files, -104/+109]
* Print Tests: Split 'Printable' to own class, add OffscreenPrintable using NIO BufferedImage, adding OffscreenPrintable tests to all unit tests.
  [Sven Gothel, 2013-09-17, 1 file, -1/+1]
* Fix AWT printing issues w/ overlapping and/or non-opaque contents; Change AWTPrintLifecycle's lifecycle
  [Sven Gothel, 2013-09-15, 1 file, -45/+144]
  - AWTPrintLifecycle:
    - Should decorate PrinterJob.print(..), instead of within
      Printable.print(..) { .. container.printAll(..); .. }
      This is due to the AWT print implementation, i.e. AWT will issue Printable.print(..)
      multiple times for 'overlapping' or non-opaque elements!
    - Move from javax.media.opengl.awt -> com.jogamp.nativewindow.awt
    - Make the _interface_ AWT agnostic, i.e. remove Graphics2D from 'setup(..)'
    - Add 'int numSamples' to 'setup(..)' to determine the number of samples
  - AWTTilePainter:
    - Use double precision when scaling image-size and clip-rect, then round them to
      integer values. Otherwise AWT will use the bounding box for the clipping rectangle.
    - Clip the negative portion of the clip-rect; this removes redundant overpaints
      as well as an increased tile count due to the increased clipping size.
    - Clip the image-size in the tile-renderer according to the clip-rect.
    - DEBUG_TILES: Dump tiles to file.
    - Use a sub-image of the final BufferedImage instead of adding another clipping region.
      This might increase performance if no clip-rect has been set.
  TODO: The TestTiledPrintingGearsSwingAWT overlapping tests expose an 'off by one' bug of the
  first layer's background. Note: The GL content seems to be correct though - maybe it's simply
  an AWT rounding error ..
* AWTTilePainter: Fix null clip-rect (consider scaling); Fix non GL-oriented drawable, skip vertical flip and use 1:1 y-coord.
  [Sven Gothel, 2013-09-13, 1 file, -31/+55]
* Relocate FFMPEGNatives.initIDS0() -> FFMPEGStaticNatives.initIDS0(); Clean up warnings and includes (clang).
  [Sven Gothel, 2013-09-11, 7 files, -13/+5]
* AWT Printing: AWTTilePainter needs to handle null clip!
  [Sven Gothel, 2013-09-10, 1 file, -6/+10]
* Add AWTTilePainter.dumpHintsAndScale(..), removing more duplicated code from GLCanvas/GLJPanel
  [Sven Gothel, 2013-09-08, 1 file, -0/+17]
* Test: Don't resize frame, tweak print-matrix; AWTPrintLifecycle: Add scale and convenient AWT container traversal context; GLCanvas/GLJPanel properly handle existing MSAA and requested AA
  [Sven Gothel, 2013-09-08, 1 file, -18/+35]
  - Test: Don't resize frame, tweak print-matrix
    - Use scaleComp72 to scale the frame to fit on the page, i.e. the global print matrix
    - Use scaleGLMatXY = 72.0 / glDPI to locally scale on the GL drawable as being passed
      to AWTPrintLifecycle.setup(..)
    - Hence the frame stays untouched/stable; no need for the 'offscreen' print test,
      which is removed.
  - AWTPrintLifecycle: Add scale and convenient AWT container traversal context.
    Use a simple decoration for all AWTPrintLifecycle impl. components within a container:
      final AWTPrintLifecycle.Context ctx =
          AWTPrintLifecycle.Context.setupPrint(frame, g2d, scaleGLMatXY, scaleGLMatXY);
      try {
      } finally {
          ctx.releasePrint();
      }
  - GLCanvas/GLJPanel properly handle existing MSAA and requested AA:
    - GLCanvas: Work around bug where onscreen MSAA cannot switch to offscreen FBO,
      i.e. stay 'onscreen'
    - GLJPanel: Use a new offscreen FBO if MSAA is requested and not yet used.
    - GLJPanel.Offscreen.postGL(): always swapBuffer(); was missing for !GLSL swapping
  Results for GLCanvas / GLJPanel:
    - Good scaling
    - Stable behavior / visibility
    - High DPI mode works
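  The decoration quoted above can be read as the following minimal Java sketch. It assumes the
  AWTPrintLifecycle.Context signature exactly as quoted in this commit (the 2013-09-15 commit
  further up moves the class from javax.media.opengl.awt to com.jogamp.nativewindow.awt, drops
  the Graphics2D argument and relocates the decoration around PrinterJob.print(..)); the
  surrounding Printable scaffolding is illustrative only.

    import java.awt.Frame;
    import java.awt.Graphics;
    import java.awt.Graphics2D;
    import java.awt.print.PageFormat;
    import java.awt.print.Printable;
    import javax.media.opengl.awt.AWTPrintLifecycle; // package at the time of this commit

    public class PrintableFrameSketch implements Printable {
        private final Frame frame;          // contains GLCanvas / GLJPanel children
        private final double scaleGLMatXY;  // e.g. 72.0 / glDPI, as described above

        public PrintableFrameSketch(final Frame frame, final double scaleGLMatXY) {
            this.frame = frame;
            this.scaleGLMatXY = scaleGLMatXY;
        }

        @Override
        public int print(final Graphics g, final PageFormat pf, final int pageIndex) {
            if (pageIndex > 0) {
                return NO_SUCH_PAGE;
            }
            final Graphics2D g2d = (Graphics2D) g;
            g2d.translate(pf.getImageableX(), pf.getImageableY());
            // Decorate all AWTPrintLifecycle components within 'frame' for printing:
            final AWTPrintLifecycle.Context ctx =
                    AWTPrintLifecycle.Context.setupPrint(frame, g2d, scaleGLMatXY, scaleGLMatXY);
            try {
                frame.printAll(g2d);
            } finally {
                ctx.releasePrint();
            }
            return PAGE_EXISTS;
        }
    }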
* TiledPrintingAWTBase: Fix scaling - fit frame to page, add MSAA RenderingHints test; setupPrint(Graphics2D): Query RenderingHints to use MSAA rendering
  [Sven Gothel, 2013-09-07, 1 file, -1/+1]
  - AWTPrintLifecycle.setupPrint(Graphics2D): Query RenderingHints to use MSAA rendering
    - Impl. in GLCanvas
    - TODO: GLJPanel (would need a new offscreen buffer)
  - TiledPrintingAWTBase:
    - Fix scaling - fit frame to page
    - Add MSAA RenderingHints test
  - GLCanvas: Remove dumpStack() DEBUG output
* AWT/GL Printing WIP: Abstract AWT tile painting code out to AWTTilePainter, reused w/ GLCanvas and GLJPanel
  [Sven Gothel, 2013-09-07, 1 file, -0/+217]
* GLVBOArrayHandler: Remove unused imports
  [Sven Gothel, 2013-09-05, 1 file, -3/+0]
* Revert commit 4beef4fe856690b070ba06a6caf4515aebd7171b manually for testing purposes .. (ATI fglrx driver issues)
  [Sven Gothel, 2013-09-02, 1 file, -2/+2]
* X11GLXDrawableFactory.Shutdown: Disable shared context destruction since it may lead to a JVM freeze ..
  [Sven Gothel, 2013-09-02, 1 file, -2/+2]
  .. on the ATI fglrx driver (32bit on 64bit) w/ a frozen shared GL context involved.
  Hence we have to rely on the driver cleanup when the JVM hits 'exit', equal to the
  Windows implementation.
* Animator/GLWindow: Catch 'ThreadDeath/Throwable' and dump info in DEBUG mode (cosmetic change only); Typo in comment; TestSharedContextListNEWT2: Stop animator.
  [Sven Gothel, 2013-09-02, 1 file, -1/+1]
* FFMPEGMediaPlayer: Handle the use-case of having the [av|sw]resample lib, but not being compiled for it -> pass
  [Sven Gothel, 2013-09-01, 1 file, -2/+2]
  Scenario ffmpeg-0.10, where we are not prepared (compiled-in) for sw-resample support.
  Don't use it if the compiled-in version (CC) is < 0 (n/a), and allow it to pass at load time.
* GLMediaPlayer: pause() -> pause(boolean flush): Allow flushing buffers; the next frame after play() will provide a new frame. Added API doc.
  [Sven Gothel, 2013-08-31, 1 file, -14/+16]
* GLMediaPlayer enhancements: State, camera options, detect and act on orientation change (flipped), API doc
  [Sven Gothel, 2013-08-30, 6 files, -35/+75]
  - State
    - Fix state transition (initGL() error)
  - Camera options
    - Options use ';' as the query separator
    - Don't use 'default' options, the driver should know
  - Detect and act on orientation change (flipped)
    - The ffmpeg impl. detects if 'flipped' changes and triggers a SIZE update event.
      This allows the application to react, i.e. re-init GL and use new TextureCoords.
      Test: Works well on Windows w/ the rawvideo dshow camera driver/codec.
  - API doc
    - TexSeqEventListener/GLMediaEventListener usage / constraints (GL, ..)
    - State transition fix
* FFMPEGMediaPlayer: Handle v-flipped 'bottom-up' pictures; Refine API doc 'camera ID'
  [Sven Gothel, 2013-08-30, 1 file, -1/+1]
  If linesize is < 0, it is not invalid as assumed in commit eca6a5cb1e2beda84dfbafc31ed225e272f4f3fb,
  but vertically flipped (bottom-up). We have to adjust the data pointers, which are moved to the
  upper end of memory as well, and can proceed as usual.
  TODO:
  - Update texture 'mustFlipVertically' to 'false' in this case.
  - Later:
    - Allow updating the texture size ..
    - The whole pixel-fmt/texture-lookup-shader association must scale better,
      i.e. extract the 'knowledge' into one class, use static shader code w/ uniforms
      instead of hard-coded values .. etc.
* Enhance GLMediaPlayer: Full FFmpeg support, 'dshow' camera support on Windows, 2 more pixel formats, fail-safe data handling
  [Sven Gothel, 2013-08-29, 9 files, -156/+344]
  - Add support for ffmpeg 2 / libav 10 -> lavc55_lavf55_lavu52_lavr01
  - Add support for ffmpeg libswresample (similar to libavresample)
  - Handle BGRA (GL type) and BGR24 (texture shader)
  - Change camera URI semantics: drop 'host' and use 'path' for the camera ID,
    and use 'query' for options.
  - Add support for Windows' DShow camera selection
    - Our camera ID -> index into the list of video-input devices;
      this gives us the same behavior as w/ Linux
    - Requires Windows libs: strmiids, uuid, ole32, oleaut32
    - Compiles w/ MingW64, works w/ libav/ffmpeg
    - TODO: test compilation w/ MingW 32bit!
  - Don't push data to the texture if (linesize <= 0);
    this may happen due to a buggy decoder / setup ..
  Tested manually on GNU/Linux x64 and Windows x64:
  - GNU/Linux: libav 0.8, libav 9, libav 10, ffmpeg 1.2, ffmpeg 2.0
  - Windows: libav 0.8, libav 9, ffmpeg 2.0
  - Videos and camera
* Fix libav/ffmpeg compilation; FFMPEGMediaPlayer enhancements (more YUV*, use def. high camera options, clean up symbols)
  [Sven Gothel, 2013-08-28, 3 files, -154/+286]
  - Fix libav/ffmpeg compilation
    - Split native GLContext code from JoglCommon
    - JoglCommon is required for the ffmpeg_* c-compile/link
    - Supported versions now:
      - 0.8: 53.53.51
      - 9.0: 54.54.52
  - FFMPEGMediaPlayer
    - Update API doc, add compatibility .. etc
    - Pixel format conversions (via shader texture lookup func):
      - YUV420P, YUVJ420P
      - YUV422P, YUVJ422P
      - YUYV422
    - Properly handle aid/vid
    - In camera mode: set high default values
      - TODO: Make it configurable via the camera URI:
        - video_size
        - framerate
        - ?
  - FFMPEGDynamicLibraryBundleInfo
    - Clean up symbols / remove unused (pre 53)
    - Add av_dict_* methods
* FFMPEGMediaPlayer: Fix av-audio-fmt -> AudioFormat parsing (fixedP was wrong for float values)
  [Sven Gothel, 2013-08-28, 1 file, -8/+7]
* GLMediaPlayer: Add camera input / FFMPEG: Fix 'av_packet' leak and add missing symbol 'av_realloc'.
  [Sven Gothel, 2013-08-27, 7 files, -68/+244]
  - Add camera input
    - Use a URI w/ scheme 'camera' to determine that camera input is desired;
      use the URI host as the camera ID, e.g. 'camera://0' for the 1st camera.
    - AndroidGLMediaPlayerAPI14: via 'Camera'
    - FFMPEG*: via libavdevice, device name and input format
    - TODO: Add controls to manipulate the camera if available
  - FFMPEG*
    - Add symbols
      - avcodec_register_all
      - av_realloc (was missing)
      - avdevice_register_all
    - Load libavdevice (optional)
    - Camera:
      - Use <ID> (Windows) and /dev/video<ID> on other OSes
      - Simply find the input format in native code
    - Support YUYV422 (used in video4linux2, etc.)
      - Stuff 2x 16bpp (YUYV) into one RGBA pixel!
      - Add texture format for 16bpp
      - Add texture lookup shader
    - Fix av_packet leak in readNextImpl(..)
      - Restore the orig pointer and size values; we may have moved along within the packet.
        Then call av_free_packet().
    - Use a null AudioSink if the audio-id is NONE
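  As a rough illustration of the 'camera' URI scheme above, the sketch below opens the first
  camera. It assumes the GLMediaPlayer API of this period as described elsewhere in this log
  (GLMediaPlayerFactory.createDefault(), the initStream(..)/initGL(..) split from the Part-5
  commit below, STREAM_ID_AUTO/STREAM_ID_NONE); texture count and error handling are illustrative.

    import java.net.URI;
    import javax.media.opengl.GL;
    import com.jogamp.opengl.util.av.GLMediaPlayer;
    import com.jogamp.opengl.util.av.GLMediaPlayerFactory;

    public class CameraPlaybackSketch {
        /** Opens camera 0, e.g. /dev/video0 on Linux or DShow device index 0 on Windows. */
        public static GLMediaPlayer openFirstCamera(final GL gl) throws Exception {
            final URI camera = new URI("camera://0");  // scheme 'camera', host = camera ID
            final GLMediaPlayer player = GLMediaPlayerFactory.createDefault();
            player.initStream(camera,
                              GLMediaPlayer.STREAM_ID_AUTO,  // auto-select video stream
                              GLMediaPlayer.STREAM_ID_NONE,  // no audio for a camera
                              4);                            // min. multithreaded texture count, see Part-3 below
            // Stream initialization runs on the StreamWorker (see the Part-5 commit below);
            // real code waits for the stream-init event before binding GL resources.
            player.initGL(gl);
            player.play();
            return player;
        }
    }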
* ALAudioSink: Remove force DEBUG info
  [Sven Gothel, 2013-08-26, 1 file, -1/+1]
* FFMPEGNatives*: Add missing license header
  [Sven Gothel, 2013-08-26, 4 files, -0/+108]
* libav/ffmpeg: Compile/link 2 versions of the native FFMPEGMediaPlayer methods: FFMPEGNatives -> FFMPEGv08Natives + FFMPEGv09Natives
  [Sven Gothel, 2013-08-26, 7 files, -250/+383]
  Enables FFMPEGMediaPlayer to work w/ either ffmpeg/libav version 8 or 9 w/ the same JOGL binary.
  The same C source code is compiled against:
    1: version 0.8, FFMPEGv08Natives, lavc53.lavf53.lavu51
    2: version 0.9, FFMPEGv09Natives, lavc54.lavf54.lavu52.lavr01
  FFMPEGv08Natives and FFMPEGv09Natives implement FFMPEGNatives; the native C code uses CPP '##'
  macro concatenation to produce unique function names.
  To enable 'cpp' to find the libav* header files matching the desired version, we have placed
  them in the c-file's folder, issued '#include "path/file.h"' and added symbolic links to allow
  finding the same module and 'sister modules':
    ls -l libavformat/
    ..
    lrwxrwxrwx 1 sven sven 13 Aug 26 12:56 libavcodec -> ../libavcodec
    lrwxrwxrwx 1 sven sven 14 Aug 26 12:56 libavformat -> ../libavformat
    lrwxrwxrwx 1 sven sven 12 Aug 26 12:57 libavutil -> ../libavutil
    ..
  At static init, FFMPEGDynamicLibraryBundleInfo determines the runtime version and instantiates
  the matching FFMPEGNatives, or null if none matches.
  FFMPEGMediaPlayer still compares the compile-time and runtime versions.
  FFMPEGMediaPlayer passes its own instance to FFMPEGNatives for callbacks.
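  On the Java side, the static-init dispatch described above boils down to matching the detected
  runtime lav* major versions against the two compiled variants. The sketch below is a
  self-contained, hypothetical simplification; it does not mirror the actual
  FFMPEGDynamicLibraryBundleInfo code.

    // Hypothetical stand-ins for FFMPEGNatives / FFMPEGv08Natives / FFMPEGv09Natives.
    interface Natives { }
    final class NativesV08 implements Natives { }  // compiled against lavc53.lavf53.lavu51
    final class NativesV09 implements Natives { }  // compiled against lavc54.lavf54.lavu52.lavr01

    public final class VersionDispatchSketch {
        /** Picks the implementation whose compile-time major versions match the runtime ones. */
        static Natives select(final int lavcMajor, final int lavfMajor, final int lavuMajor) {
            if (53 == lavcMajor && 53 == lavfMajor && 51 == lavuMajor) {
                return new NativesV08();
            }
            if (54 == lavcMajor && 54 == lavfMajor && 52 == lavuMajor) {
                return new NativesV09();
            }
            return null;  // no matching native bundle -> FFMPEGMediaPlayer unavailable
        }
    }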
* ffmpeg/libav: Remove 'dead' audio/video frame count relation snoop-code
  [Sven Gothel, 2013-08-26, 1 file, -9/+5]
* libav/ffmpeg: version9: Add libavresample support; Proper AudioFormat negotiation w/ AudioSink; Misc
  [Sven Gothel, 2013-08-26, 2 files, -80/+171]
  - Add libavresample support
    - Resample if avail && (!AV_SAMPLE_FMT_S16 || !prefSampleRate || !sinkSupported)
    - Resample to: prefSampleRate (if set), AV_SAMPLE_FMT_S16 and min(channelCount, maxChannelCount)
  - Proper AudioFormat negotiation w/ AudioSink
    - Utilize AudioSink's 'isSupported(AudioFormat)'
  - Misc
    - Always use 'av_get_bytes_per_sample(fmt)', don't assume 2
* AudioSink: Fix type names; Enhance AudioFormat negotiation; ALAudioSink adds AL_SOFT_buffer_samples support w/ full AL caps
  [Sven Gothel, 2013-08-26, 3 files, -43/+127]
  - Fix type names:
    - Remove AudioDataType, we only support PCM here anyways
    - AudioDataFormat -> AudioFormat / add a 'planar' attribute to distinguish packed/planar data types
    - Validate float types
  - Enhance AudioFormat negotiation
    - Add 'isSupported(AudioFormat format)', which _shall_ be used before 'init(..)' to
      test/negotiate the format
    - Add getMaxSupportedChannels(), which may be used w/ getPreferredFormat() if the orig.
      requested format fails via 'isSupported(..)'
    - 'init(..)' returns boolean only.
  - ALAudioSink adds AL_SOFT_buffer_samples support w/ full AL caps
    - Determine whether AL_SOFT_buffer_samples is supported
    - Use the new JOAL ALHelper to convert AudioFormat -> AL types, which also answers
      the 'isSupported(..)' query.
    - Now allows multiple: channels, sample-types, etc.
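  The negotiation flow above (isSupported(..) before init(..), falling back to the sink's
  preferred format capped to getMaxSupportedChannels()) might look roughly like this. The
  AudioFormat field/constructor layout, the init(..) argument order (millisecond-based queue
  sizes per the 2013-08-24 commit below) and the Default* constant names are assumptions,
  not verified API.

    import com.jogamp.opengl.util.av.AudioSink;
    import com.jogamp.opengl.util.av.AudioSink.AudioFormat;

    public class AudioFormatNegotiationSketch {
        /** Negotiates 'requested' w/ the sink, falling back to its preferred format if unsupported. */
        static boolean initSink(final AudioSink sink, final AudioFormat requested,
                                final float frameDurationHintMs) {
            AudioFormat chosen = requested;
            if (!sink.isSupported(chosen)) {
                final AudioFormat pref = sink.getPreferredFormat();
                final int channels = Math.min(requested.channelCount, sink.getMaxSupportedChannels());
                // Fallback format; constructor argument order is an assumption.
                chosen = new AudioFormat(pref.sampleRate, pref.sampleSize, channels,
                                         pref.signed, pref.fixedP, pref.planar, pref.littleEndian);
            }
            // Queue sizes are in milliseconds (see the 2013-08-24 commit below);
            // the Default* constant names are assumed.
            return sink.init(chosen, frameDurationHintMs,
                             AudioSink.DefaultInitialQueueSize,
                             AudioSink.DefaultQueueGrowAmount,
                             AudioSink.DefaultQueueLimitWithVideo);
        }
    }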
* libav/ffmpeg: Prepare for lavc54.lavf54.lavu52
  [Sven Gothel, 2013-08-25, 2 files, -8/+64]
  - Add compile-time/runtime version check; fail if major versions do not match,
    assuming binary incompatibility
  - Add 'av_find_input_format' for future video input support
    - Manually map '/dev/video<NUM>' to video input - not working yet.
    - WINDOWS: Set file to '<NUM>'
    - Set input format string depending on OS
* NullGLMediaPlayer: Fix reported VID (fake 0), no AID, textureCount == 2
  [Sven Gothel, 2013-08-25, 1 file, -2/+12]
* GLAutoDrawableBase: DEBUG code - avoid NPE
  [Sven Gothel, 2013-08-25, 1 file, -2/+4]
* AndroidGLMediaPlayerAPI14: Fix implementation to cooperate w/ the threaded decoder / Add EOS detection, setAudioVolume(..)
  [Sven Gothel, 2013-08-25, 2 files, -57/+127]
  GLMediaPlayerImpl.initStreamGL(..): Only require a minimum texture count of 2, which is the
  bare minimum to allow our algorithm to work, i.e. having a 'lastFrame' and the avail/playing
  ringbuffer each holding one frame.
  Android's MediaPlayer API can only deal w/ one SurfaceTexture, hence we have to fake a second
  SurfaceTextureFrame w/ the same content to allow our implementation to work w/ the threaded
  decoder (min 2 frames).
* GLMediaPlayer/AudioSink: Add set[Audio]Volume(float v)/get[Audio]Volume(), allowing to change the audio volume.
  [Sven Gothel, 2013-08-25, 5 files, -4/+122]
* AudioSink.init(..): abstract 'frame count' -> duration [ms], allowing non-frame-based AudioSinks to deal w/ desired queue sizes.
  [Sven Gothel, 2013-08-24, 4 files, -30/+45]
  - Rename AudioSink.initSink(..) -> AudioSink.init(..)
  - Move "int initialFrameCount, int frameGrowAmount, int frameLimit" to
    "int initialQueueSize, int queueGrowAmount, int queueLimit",
    based on milliseconds instead of frame count.
  - Pass the hint 'float frameDuration' to calculate the frame count for frame-based
    audio sinks, i.e. ALAudioSink.
  - Add sensible static final default values
  - AudioDataFormat: Add convenient conversion routines (samples/bytes/frame-count)
  - FFMPEGMediaPlayer: Retrieve the audio frame size in samples per channel and pass it to
    AudioSink.init(..) to properly calculate frame count/limits based on duration.
* GLMediaPlayer Multithreaded Decoding: GLMediaPlayer* (Part-6) - DONE
  [Sven Gothel, 2013-08-24, 5 files, -141/+210]
  Multithreaded decoding and the API should be considered stable by now; minor changes may apply
  if the Android/OMX impl. requires it. We still need to solve the TODOs as listed below, copied
  from 474ce65081ecd452215bc07ab866666cb11ca8b1.
  +++
  - *TextureFrame OO changes:
    - TextureFrame extends TimeFrameI
  - GLMediaPlayerImpl*
    - Adapt to the Ringbuffer changes of GlueGen commit f9f881e59c78e3036cb3f956bc97cfc3197f620d
    - Fix the impl. methods' API doc
    - getNextTextureImpl(..) returns the video PTS
    - Fix audio-only playback
    - Frame dropping shall only happen if:
      - the previous frame has not been dropped,
      - the frame is too late, and
      - one decoded frame is already available
    - Don't block for the decoder anymore:
      - nextFrame: "videoFramesDecoded.getBlocking()" -> "videoFramesDecoded.get()";
        'no next decoded frame avail' could only mean slow decoding/hardware or slow transport,
        hence we shall not block rendering.
    - Add DEBUG output if using the last frame
    - Add integer property 'jogl.debug.GLMediaPlayer.StreamWorker.delay' in milliseconds to
      simulate slow decoding, i.e. the delay is added in StreamWorker after decoding, before
      pushing the new frame to the Ringbuffer.
  - FFMPEGMediaPlayer:
    - audioFrameLimitWithVideo 128 -> 64
    - audioFrameLimitAudioOnly 128 -> 32
    - Use AudioSink's 'enqueueData(int pts, ByteBuffer bytes, int byteCount)'
    - Fixes for audio-only playback
  +++
  Working tests: MovieSimple and MovieCube
  TODO-1: Fix
    - Android
    - OMXGLMediaPlayer
  TODO-2:
    - Fix the issue where async audio frames arrive much later than the 1st video frame,
      i.e. around 300ms.
    - Default TextureCount .. maybe 3?
    - Add audio synchronization?
    - Find the 'truth' about the correlation of audio and video PTS values;
      currently we assume both to be unrelated.
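  The frame-dropping rule listed above condenses into a small predicate; the following is
  illustrative only and does not mirror GLMediaPlayerImpl's fields.

    final class FrameDropRuleSketch {
        /**
         * Drop the current video frame only if all three conditions hold: the previous frame
         * was not dropped, this frame is too late w.r.t. the stream clock, and another decoded
         * frame is already waiting.
         */
        static boolean shallDropFrame(final boolean previousFrameDropped,
                                      final int videoPTS, final int clockPTS, final int maxLagMs,
                                      final int decodedFramesAvailable) {
            final boolean tooLate = (clockPTS - videoPTS) > maxLagMs;
            return !previousFrameDropped && tooLate && decodedFramesAvailable > 0;
        }
    }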
* *AudioSink: Add byte/time calc to AudioDataFormat, *AudioFrame OO changes, reuse ALAudioFrames to ease GC, Ringbuffer changes
  [Sven Gothel, 2013-08-24, 3 files, -117/+148]
  - Adapt to the Ringbuffer changes of GlueGen commit f9f881e59c78e3036cb3f956bc97cfc3197f620d
  - Favor AudioSink's 'AudioFrame enqueueData(int pts, ByteBuffer bytes, int byteCount)':
    - The impl. shall reuse AudioFrames instead of creating them on the fly
    - The user shall simply pass the net data required, while receiving an internal AudioFrame
  - Add byte/time calc to AudioDataFormat:
    - Add getDuration(byteCount) and getByteCount(ms).
  - *AudioFrame OO changes:
    - abstract AudioFrame extends TimeFrameI
    - Allow setting of all components to reuse instances (GC clean)
  - ALAudioSink reuses ALAudioFrames to ease GC:
    - Remove creation of temporary objects to ease GC
    - ALAudioFrame holds the AL buffer name; remove the ActiveBuffer type.
    - Use ALAudioFrame similar to TextureFrame in GLMediaPlayerImpl, i.e. fill them in the
      'full' Ringbuffer and move them in-between the 'full'/'playing' Ringbuffers.
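  The byte/time helpers mentioned above (getDuration(byteCount) / getByteCount(ms)) reduce to
  simple PCM arithmetic; the sketch below is a self-contained illustration, not the
  AudioDataFormat source.

    // For packed PCM: bytesPerMs = sampleRate/1000 * channelCount * (sampleSize/8).
    final class PcmTimeMathSketch {
        static int byteCount(final int sampleRate, final int channelCount,
                             final int sampleSizeBits, final int millisecs) {
            return (int) Math.round((sampleRate / 1000.0) * channelCount * (sampleSizeBits / 8) * millisecs);
        }
        static int durationMs(final int sampleRate, final int channelCount,
                              final int sampleSizeBits, final int byteCount) {
            final double bytesPerMs = (sampleRate / 1000.0) * channelCount * (sampleSizeBits / 8);
            return (int) Math.round(byteCount / bytesPerMs);
        }
        public static void main(final String[] args) {
            // 44100 Hz, stereo, 16-bit -> 176.4 bytes/ms, i.e. 176400 bytes per second:
            System.out.println(byteCount(44100, 2, 16, 1000));    // 176400
            System.out.println(durationMs(44100, 2, 16, 176400)); // 1000
        }
    }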
* FFMPEGMediaPlayer: Transform URI spaces '%20' to ' ' manually, libav doesn't work well w/ URI-encoded names.
  [Sven Gothel, 2013-08-23, 1 file, -1/+1]
* GLMediaPlayer Multithreaded Decoding: GLMediaPlayer* (Part-5) - WIP
  [Sven Gothel, 2013-08-23, 7 files, -343/+471]
  - Update/fix GLMediaPlayer API doc
  - GLMediaEventListener: Add event bits for all state changes, to be delivered via attributesChanged(..)
  - StreamWorker / decoder thread:
    - Use StreamWorker only!
    - Handle exceptions on StreamWorker via StreamException
    - Handles stream initialization and decoding (-> initStream(..))
    - Split initGLStream(..) -> initStream(..) + initGL(GL)
      - allow initStream(..)'s implementation to be executed on StreamWorker
      - allow GL initialization to be 'postponed' when the stream is read,
        i.e. non-blocking stream initialization (UI .. etc)
    - Handle EOS via END_OF_STREAM_PTS -> pause/event
    - Video: Use lock-free LFRingbuffer, similar to ALAudioSink
      (commit f18a94b3defef16e98badd6d99f2422609aa56c5)
  +++
  - FFMPEGDynamicLibraryBundleInfo
    - Add avcodec's: avcodec_get_frame_defaults, avcodec_free_frame (54.28.0), avcodec_flush_buffers
    - Add avutil's: av_frame_unref (55.0.0)
    - Add avformat's: avformat_seek_file (??)
  +++
  - FFMPEGMediaPlayer native:
    - Add 'snoop' video frames for the a/v frame count relation. Disabled per default, since no
      longer needed due to ALAudioSink's grow-buffer usage of LFRingbuffer.
    - Use sp_avcodec_free_frame if available
    - 'useRefCountedFrames=1' for libav 55.0 to cache more than one audio frame; not used since
      ALAudioSink's OpenAL usage does not require it (copies data once).
      Note: the above snooped-video frame count is used here.
    - Use only one cached audio-frame (-> see above, OpenAL copies data once), while reusing
      the NIO buffer!
    - Perform the OpenGL sync (glFinish) in native code!
    - Find the proper PTS value, i.e. either the frame's PTS or DTS, see 'PTSStats'.
  - FFMPEGMediaPlayer Java:
    - Use private fields
    - Simplified code due to the above changes.
  +++
  Working tests: MovieSimple and MovieCube
  TODO-1: Fix
    - Android
    - OMXGLMediaPlayer
  TODO-2:
    - Fix the issue where async audio frames arrive much later than the 1st video frame,
      i.e. around 300ms.
    - Default TextureCount .. maybe 3?
    - Add audio synchronization?
    - Find the 'truth' about the correlation of audio and video PTS values;
      currently we assume both to be unrelated.
* AudioSink: Add END_OF_STREAM_PTS; initSink(..) args frameGrowAmount and frameLimit allow an optionally used Ringbuffer to grow in the implementation.
  [Sven Gothel, 2013-08-22, 3 files, -60/+134]
* SyncedRingbuffer moved to GlueGen, commit 30475c6bbeb9a5d48899b281ead8bb305679028d
  [Sven Gothel, 2013-08-22, 1 file, -296/+0]
* GLMediaPlayer: Use URI instead of URL / Misc refinements
  [Sven Gothel, 2013-08-17, 4 files, -31/+52]
  - GLMediaPlayer: Use URI instead of URL, allowing a non-resolved location to be passed
    - Java's URL doesn't allow 'other' protocols, i.e. RTSP
  - GLMediaPlayer: Add a table of test streams and their locations ..
  - FFMPEGMediaPlayer
    - Handle the av_read_play/pause response on the Java side, ignore errors -
      simply dump in DEBUG_NATIVE mode
* GLMediaPlayerImpl: Refine getNextTexture(..) DEBUG output, put 'last SCR delay' in the regular println.
  [Sven Gothel, 2013-08-16, 1 file, -10/+8]
* GLMediaPlayer Multithreaded Decoding: GLMediaPlayer* (Part-4) - WIP
  [Sven Gothel, 2013-08-16, 7 files, -221/+245]
  - Use Platform.currentTimeMillis() for accurate timing!
  - GLMediaPlayer / GLMediaPlayerImpl
    - Add DEBUG_NATIVE property 'jogl.debug.GLMediaPlayer.Native' for verbose impl. messages,
      i.e. ffmpeg/libav
    - Add 'synchronization' section to the GLMediaPlayer API doc (WIP)
    - Use passive non-blocking video synchronization, i.e. repeat frames instead of 'sleep'.
      Thx to Xerxes's suggestion.
    - Add flushing of cached decoded frames, allowing removal of the complicated
      'videoSCR_reset_latch'
  - FramePusher (threaded decoding):
    - Always create a shared context!
    - Release the context while pausing
    - Pre/post 'getNextTextureImpl()' actions only at makeCurrent/release.
    - newFrameAvailable(..) signal after the decoded frame is enqueued
  - FFMPEGDynamicLibraryBundleInfo
    - Bind add. functions of libavcodec: "av_init_packet", "av_new_packet", "av_destruct_packet"
    - Bind add. functions of libavformat: "avformat_seek_file", "av_read_play", "av_read_pause"
    - DEBUG property := FFMPEGMediaPlayer.DEBUG || DynamicLibraryBundleInfo.DEBUG
  - FFMPEGMediaPlayer
    - Use libavformat's 'av_read_play()' and 'av_read_pause()', which may be utilized for
      network streams, e.g. RTSP
    - getNextTextureImpl(..):
      - Fix the retry loop
      - Use postNextTextureImpl/preNextTextureImpl if desired (PSM)
    - Native:
      - Use the fixed my_av_q2i32(..) macro (again)
      - Use the INVALID_PTS marker (synced w/ Java code)
      - DEBUG: Dump more detailed frame information
      - TODO: Consider passing frame_delay, especially for repeated frames!
  - Tests (MovieSimple, MovieCube):
    - Refine KeyEvent controls for seek and speed.
  - TODO:
    - Proper audio clock calculation - difficult w/ OpenAL!
    - Video / audio sync:
      - seek!
      - streams w/ very async A/V frames
    - Test streams:
      - Five-minute-sync-test.mp4
      - Audio-Video-Sync-Test-Calibration-23.98fps-24fps.mp4
      - sound_in_sync_test.mp4
      - big_buck_bunny_1080p_surround.avi
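  A rough sketch of the passive, non-blocking video synchronization mentioned above: repeat the
  last shown frame instead of sleeping when the next decoded frame's PTS is still ahead of the
  stream clock. Names are illustrative and do not mirror GLMediaPlayerImpl.

    import java.util.Queue;

    final class PassiveVideoSyncSketch {
        interface Frame { int pts(); }

        private Frame lastFrame;

        /** Returns the frame to display for the given stream clock (SCR), in milliseconds. */
        Frame frameForClock(final Queue<Frame> decoded, final int scrMs) {
            final Frame next = decoded.peek();  // non-blocking; may be null (slow decoder/transport)
            if (next != null && next.pts() <= scrMs) {
                lastFrame = decoded.poll();     // frame is due (or late): advance
            }
            return lastFrame;                   // otherwise repeat the last frame, don't sleep
        }
    }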
* SyncedRingbuffer cleanup: private fields, clarify reset(boolean)
  [Sven Gothel, 2013-08-16, 1 file, -14/+12]
* Fix Bug 817 (2/2): GLContextImpl's getDefaultPixelDataType()/getDefaultPixelDataFormat() use defaults (fix)
  [Sven Gothel, 2013-08-16, 1 file, -4/+8]
  GLContextImpl's getDefaultPixelDataType()/getDefaultPixelDataFormat() use default values if
  the GL query fails.
* SyncedRingbuffer: Add 'reset(boolean full)', simplify 'clear(..)'.
  [Sven Gothel, 2013-08-15, 1 file, -12/+20]
  'reset(boolean full)' enables the user to reset the ringbuffer pointers and assume it's empty
  or full, while 'clear()' shall only remove all references .. etc.
* Fix Bug 815: GL*: Change glIs<Buffer>Enabled() -> glIs<Buffer>Bound() to reflect semantics
  [Sven Gothel, 2013-08-14, 1 file, -8/+8]
  - Also fix the exception message (enabled/disabled -> bound/unbound)
  Reason for the change: avoid confusion and point to the cause!
  API change:
    glIsVBOArrayEnabled()        -> glIsVBOArrayBound()
    glIsVBOElementArrayEnabled() -> glIsVBOElementArrayBound()
    glIsPBOPackEnabled()         -> glIsPBOPackBound()
    glIsPBOUnpackEnabled()       -> glIsPBOUnpackBound()
  Exception message change:
    "must be enabled to call this method"  -> "must be bound to call this method"
    "must be disabled to call this method" -> "must be unbound to call this method"
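  A small usage sketch of the renamed queries; it assumes they remain JOGL-specific extensions
  on the GL interface, as the old *Enabled() variants were, and the fixed-function draw path
  around them is illustrative.

    import javax.media.opengl.GL;
    import javax.media.opengl.GL2ES1;

    final class VboBindingCheckSketch {
        /** Draws from a VBO, verifying the binding via the renamed query. */
        static void drawFromVbo(final GL2ES1 gl, final int vboName, final int vertexCount) {
            gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vboName);
            if (!gl.glIsVBOArrayBound()) {  // was: glIsVBOArrayEnabled()
                throw new IllegalStateException("VBO must be bound to call this method");
            }
            gl.glEnableClientState(GL2ES1.GL_VERTEX_ARRAY);
            gl.glVertexPointer(3, GL.GL_FLOAT, 0, 0L);  // offset form requires a bound VBO
            gl.glDrawArrays(GL.GL_TRIANGLES, 0, vertexCount);
            gl.glDisableClientState(GL2ES1.GL_VERTEX_ARRAY);
            gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
        }
    }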
* GLMediaPlayer Multithreaded Decoding: GLMediaPlayer* (Part-3) - WIP
  [Sven Gothel, 2013-08-14, 6 files, -373/+706]
  - GLMediaPlayer
    - Remove State.Stopped and method stop() - redundant, use pause() / destroy()
    - Add notion of stream IDs
    - Add API doc: State / Stream-ID incl. html-anchor
    - Expose video/audio PTS, ..
    - Expose optional AudioSink
    - Min multithreaded textureCount is 4 (EGL* and FFMPEG*)
  - GLMediaPlayerImpl
    - Move AudioSink-related impl. to this class, allowing a tight video implementation
      reusing the logic.
    - Remove 'synchronized' methods, synchronize on State where applicable
    - Implement the new methods (see above)
    - playSpeed is handled partially in AudioSink. If it exceeds AudioSink's capabilities,
      drop audio and rely solely on video sync.
    - Video sync (WIP)
      - video PTS delay based on geometric weight
      - reset video SCR if 'out of range', resync w/ PTS
  - FramePusher
    - Allow interruption when pausing/stopping, while waiting for the next available free
      frame to decode.
  - FFMPEGMediaPlayer
    - Add proper AudioDataFormat negotiation AudioSink <-> libav
    - Parse libav's SampleFormat
    - Remove AudioSink interaction (moved to GLMediaPlayerImpl)
  - Tests (MovieSimple, MovieCube):
    - Add aid/vid selection
    - Add KeyListener for actions: seek(..), play()/pause(), setPlaySpeed(..)
    - Dump perf-string every 2s
  - TODO:
    - Add audio sync in AudioSink, similar to GLMediaPlayer's weighted video delay;
      here: drop audio frames.
* GLMediaPlayer Multithreaded Decoding: AudioSink (Part-2) - WIP
  [Sven Gothel, 2013-08-14, 3 files, -169/+506]
  - AudioSink.AudioDataFormat
    - Add fixedP (fixed-point or floating-point)
  - AudioSink
    - Rename 'buffer count' to 'frame count'
    - Add setPlaySpeed(..), isPlaying(), play(), pause(), flush()
    - Add getFrameCount(), getQueuedFrameCount(), getFreeFrameCount(), getEnqueuedFrameCount()
    - Rename writeData() -> enqueueData(..)
  - ALAudioSink multithreaded usage
    - Make the ALCcontext current per thread, now required for multithreaded use.
      Use a RecursiveLock encapsulating the ALCcontext's makeCurrent/release/destroy,
      since the native operations seem to be buggy.
      NOTE: Think about adding these general methods to ALCcontext.
    - Implement the new methods.
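  The per-thread ALCcontext handling described above, serialized behind a RecursiveLock, could
  be sketched as follows; this is a simplified illustration using JOAL's ALC calls and GlueGen's
  RecursiveLock, not the actual ALAudioSink code.

    import com.jogamp.common.util.locks.LockFactory;
    import com.jogamp.common.util.locks.RecursiveLock;
    import com.jogamp.openal.ALC;
    import com.jogamp.openal.ALCcontext;
    import com.jogamp.openal.ALFactory;

    final class ALContextGuardSketch {
        private final ALC alc = ALFactory.getALC();
        private final RecursiveLock lock = LockFactory.createRecursiveLock();
        private final ALCcontext context;

        ALContextGuardSketch(final ALCcontext context) { this.context = context; }

        /** Locks and makes the ALCcontext current on the calling thread. */
        void makeCurrent() {
            lock.lock();  // reentrant, so nested enqueue/play calls are fine
            if (!alc.alcMakeContextCurrent(context)) {
                lock.unlock();
                throw new RuntimeException("Failed to make ALCcontext current");
            }
        }

        /** Detaches the ALCcontext from the calling thread and unlocks. */
        void release() {
            try {
                alc.alcMakeContextCurrent(null);
            } finally {
                lock.unlock();
            }
        }
    }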