| Commit message | Author | Age | Files | Lines |
|
Add EOS detection, setAudioVolume(..)
GLMediaPlayerImpl.initStreamGL(..):
Only require a minimum texture count of 2,
which is the bare minimum to allow our algorithm to work,
i.e. a 'lastFrame' plus the avail/playing ringbuffers holding one frame each.
Android's MediaPlayer API can only deal w/ one SurfaceTexture,
hence we have to fake a second SurfaceTextureFrame w/ same content
to allow our implementation to work w/ the threaded decoder (min 2 frames).
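Below is a minimal, purely illustrative sketch of that 'fake second frame' idea: two frame wrappers share the same GL texture name, which satisfies the two-frame minimum (one 'lastFrame' plus one ringbuffer slot) even though Android's MediaPlayer feeds only a single SurfaceTexture. The VideoFrame class and its fields are assumptions for this sketch, not the actual JOGL TextureSequence.TextureFrame API.

    // Illustrative frame wrapper: holds a GL texture name and a presentation timestamp.
    final class VideoFrame {
        final int textureName;  // OpenGL texture object name
        int pts;                // presentation timestamp in milliseconds

        VideoFrame(final int textureName) { this.textureName = textureName; }
    }

    final class TwoFrameWorkaround {
        public static void main(final String[] args) {
            final int surfaceTexName = 1; // pretend: the single texture fed by Android's MediaPlayer

            // Both frames reference the same underlying texture content, so the
            // threaded decoder still sees its required minimum of two frames.
            final VideoFrame lastFrame    = new VideoFrame(surfaceTexName);
            final VideoFrame pendingFrame = new VideoFrame(surfaceTexName);

            System.out.println("shared texture: "
                    + (lastFrame.textureName == pendingFrame.textureName));
        }
    }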
|
allowing the audio volume to be changed.
|
non-frame-based AudioSinks to deal w/ desired queue sizes.
- Rename AudioSink.initSink(..) -> AudioSink.init(..)
- Change "int initialFrameCount, int frameGrowAmount, int frameLimit" to
"int initialQueueSize, int queueGrowAmount, int queueLimit",
based on milliseconds instead of frame count.
- Pass hint 'float frameDuration' to calculate the frame count for frame-based audio sinks, i.e. ALAudioSink.
- Adding sensible static final default values
- AudioDataFormat: Add convenient conversion routines (samples/bytes/frame-count)
- FFMPEGMediaPlayer: Retrieve audio frame size in samples per channel, pass it to AudioSink.init(..)
to properly calculate frame count/limits based on duration.
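A minimal sketch of the milliseconds-to-bytes/frames conversions implied above, assuming a simplified PCM format (sample rate, channel count, bytes per sample) and the 'float frameDuration' hint; the names are illustrative and do not mirror the actual AudioSink/AudioDataFormat signatures.

    // Illustrative conversion helpers for a frame-based audio sink such as ALAudioSink.
    final class AudioQueueSizing {
        // Simplified format description (not the real AudioDataFormat).
        static final int SAMPLE_RATE      = 44100; // Hz
        static final int CHANNEL_COUNT    = 2;
        static final int BYTES_PER_SAMPLE = 2;     // 16-bit PCM

        /** Bytes needed to hold 'millis' milliseconds of audio in this format. */
        static int byteCount(final int millis) {
            return (int) ((long) millis * SAMPLE_RATE * CHANNEL_COUNT * BYTES_PER_SAMPLE / 1000);
        }

        /** Duration in milliseconds represented by 'bytes' of audio data. */
        static int duration(final int bytes) {
            return (int) ((long) bytes * 1000 / (SAMPLE_RATE * CHANNEL_COUNT * BYTES_PER_SAMPLE));
        }

        /** Frame count for a queue of 'queueMillis' given an average frame-duration hint. */
        static int frameCount(final int queueMillis, final float frameDurationMillis) {
            return Math.max(1, (int) Math.ceil(queueMillis / frameDurationMillis));
        }

        public static void main(final String[] args) {
            // e.g. a 300 ms initial queue with ~23 ms frames (1024 samples @ 44.1 kHz)
            System.out.println("bytes for 300 ms : " + byteCount(300));
            System.out.println("frames for 300 ms: " + frameCount(300, 1024f * 1000f / SAMPLE_RATE));
        }
    }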
|
Multithreaded decoding and API should be considered stable by now,
minor changes may apply if Android/OMX impl. requires it.
We still need to solve TODO's as listed below, copied from 474ce65081ecd452215bc07ab866666cb11ca8b1.
+++
- *TextureFrame OO changes:
- TextureFrame extends TimeFrameI
- GLMediaPlayerImpl*
- Adapt to Ringbuffer changes of GlueGen commit f9f881e59c78e3036cb3f956bc97cfc3197f620d
- Fix impl. method's API doc
- getNextTextureImpl(..) returns video PTS
- Fix audio-only playback
- frame dropping shall only happen if:
- previous frame has not been dropped
- frame is too late
- one decoded frame is already available
- Don't block for the decoder anymore:
- nextFrame = "videoFramesDecoded.getBlocking() -> videoFramesDecoded.get()";
'no next decoded frame avail' can only mean
- slow decoding/hardware, or
- slow transport,
hence we shall not block rendering (see the sketch after this message).
- Add DEBUG output if using last frame
- Add integer property 'jogl.debug.GLMediaPlayer.StreamWorker.delay' in milliseconds
to simulate slow decoding, i.e. delay is added in StreamWorker after decoding
before pushing new frame to Ringbuffer.
- FFMPEGMediaPlayer:
- audioFrameLimitWithVideo 128 -> 64
- audioFrameLimitAudioOnly 128 -> 32
- uses AudioSink's 'enqueueData(int pts, ByteBuffer bytes, int byteCount)'
- fixes for audio-only playback
+++
Working Tests: MovieSimple and MovieCube
TODO-1: Fix
- Android
- OMXGLMediaPlayer
TODO-2:
- Fix issue where async audio frames arrive much later than 1st video frame, i.e. around 300ms.
- Default TextureCount .. maybe 3 ?
- Adding Audio synchronization ?
- Find 'truth' about correlation of audio and video PTS values,
currently, we assume both to be unrelated ?
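A rough sketch of the non-blocking lookup and the frame-drop rules listed above, written against a plain java.util.concurrent queue instead of the actual Ringbuffer/TextureFrame types; all names are assumptions for illustration only.

    import java.util.concurrent.ConcurrentLinkedQueue;

    // Illustrative only: repeat the last frame when the decoder is behind, and drop a
    // frame only under the three conditions listed in the commit message above.
    final class NonBlockingPresenter {
        static final class Frame { final int pts; Frame(final int pts) { this.pts = pts; } }

        private final ConcurrentLinkedQueue<Frame> videoFramesDecoded = new ConcurrentLinkedQueue<>();
        private Frame lastFrame = new Frame(0);
        private boolean lastWasDropped = false;

        /** Called by the decoder thread after a frame has been decoded. */
        void onFrameDecoded(final Frame f) { videoFramesDecoded.add(f); }

        /** Returns the frame to display for the given clock (SCR) without ever blocking. */
        Frame nextFrame(final int videoSCR, final int maxLatenessMillis) {
            Frame next = videoFramesDecoded.poll();    // non-blocking 'get()', may be null
            if (next == null) {
                return lastFrame;                      // slow decoding/transport: repeat, don't stall
            }
            final boolean tooLate      = videoSCR - next.pts > maxLatenessMillis;
            final boolean anotherAvail = !videoFramesDecoded.isEmpty();
            // Drop only if: previous frame was not dropped, this frame is too late,
            // and one more decoded frame is already available.
            if (!lastWasDropped && tooLate && anotherAvail) {
                lastWasDropped = true;
                next = videoFramesDecoded.poll();      // skip ahead one frame
            } else {
                lastWasDropped = false;
            }
            if (next != null) {
                lastFrame = next;
            }
            return lastFrame;
        }
    }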
|
Reuses ALAudioFrames to ease GC, Ringbuffer changes
- Adapt to Ringbuffer changes of GlueGen commit f9f881e59c78e3036cb3f956bc97cfc3197f620d
- Favor AudioSink 'AudioFrame enqueueData(int pts, ByteBuffer bytes, int byteCount)',
- Impl. shall reuse AudioFrames instead of creating them on the fly
- User shall simply pass the net data required, while receiving an internal AudioFrame
- Add byte/time calc to AudioDataFormat:
- Add getDuration(byteCount) and getByteCount(ms).
- *AudioFrame OO changes:
- abstract AudioFrame extends TimeFrameI
- allow setting of all components, to reuse instances (GC clean)
- ALAudioSink reuses ALAudioFrames to ease GC:
- Remove creating temporary objects to ease GC
- ALAudioFrame holds ALBuffer name, remove ActiveBuffer type.
- Use ALAudioFrame similar to TextureFrame in GLMediaPlayerImpl,
i.e. fill them in 'full' Ringbuffer and move them in-between 'full'/'playing' Ringbuffer.
-
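A small sketch of the GC-friendly reuse pattern described above: frames are allocated once into a 'free' queue and afterwards only moved between 'free' and 'playing', never created per enqueue. ArrayDeque stands in for the actual Ringbuffer, and all names are illustrative rather than the real AudioSink/ALAudioSink API.

    import java.nio.ByteBuffer;
    import java.util.ArrayDeque;

    final class ReusableAudioFrames {
        static final class AudioFrame {
            int pts;
            int byteCount;
            final int alBufferName;                    // e.g. the OpenAL buffer owned by this frame
            AudioFrame(final int alBufferName) { this.alBufferName = alBufferName; }
        }

        private final ArrayDeque<AudioFrame> free    = new ArrayDeque<>();
        private final ArrayDeque<AudioFrame> playing = new ArrayDeque<>();

        ReusableAudioFrames(final int frameCount) {
            for (int i = 0; i < frameCount; i++) {
                free.add(new AudioFrame(i + 1));       // allocate once, reuse forever
            }
        }

        /** Hands back a reused frame instead of allocating a new object per call. */
        AudioFrame enqueueData(final int pts, final ByteBuffer bytes, final int byteCount) {
            final AudioFrame f = free.poll();
            if (f == null) {
                return null;                           // queue full; a real sink would grow or wait
            }
            f.pts = pts;
            f.byteCount = byteCount;
            // ... here the data in 'bytes' would be uploaded into the frame's AL buffer ...
            playing.add(f);
            return f;
        }

        /** Once the oldest frame has finished playing, move it back to the free queue. */
        void releaseOldestPlayed() {
            final AudioFrame f = playing.poll();
            if (f != null) {
                free.add(f);
            }
        }
    }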
|
based on integer milliseconds.
|
work well w/ URI encoded names.
|
- Update/fix GLMediaPlayer API doc
- GLMediaEventListener: Add event bits for all state changes to be delivered via attributesChanged(..)
- StreamWorker / Decoder Thread:
- Use StreamWorker only !
- Handle exceptions on StreamWorker via StreamException
- Handles stream initialization and decoding (-> initStream(..))
- Split initGLStream(..) -> initStream(..) + initGL(GL)
- allow initStream(..)'s implementation to be executed on StreamWorker
- allow GL initialization to be 'postponed' until the stream has been read,
i.e. non-blocking stream initialization (UI .. etc); see the sketch after this message
- Handle EOS via END_OF_STREAM_PTS -> pause/event
- Video: Use lock-free LFRingbuffer, similar to
ALAudioSink (commit f18a94b3defef16e98badd6d99f2422609aa56c5)
+++
- FFMPEGDynamicLibraryBundleInfo
- Add avcodec's:
- avcodec_get_frame_defaults, avcodec_free_frame (54.28.0), avcodec_flush_buffers,
- Add avutil's:
- av_frame_unref (55.0.0)
- Add avformat's:
- avformat_seek_file (??)
+++
- FFMPEGMediaPlayer Native:
- add 'snoop' video frames for the a/v frame count relation;
disabled by default, since it is no longer needed due to ALAudioSink's
grow-buffer usage of LFRingbuffer.
- use sp_avcodec_free_frame if available
- 'useRefCountedFrames=1' for libav 55.0 to cache more than one audio frame,
not used since ALAudioSink's OpenAL usage does not require it (copies data once).
Note: the above snooped-video frame count is used here.
- use only one cached audio-frame (-> see above, OpenAL copies data once),
while reusing the NIO buffer!
- Perform OpenGL sync (glFinish) in native code!
- find proper PTS value, i.e. either frame's PTS or DTS,
see 'PTSStats'.
- FFMPEGMediaPlayer Java:
- use private fields
- simplified code due to above changes.
+++
Working Tests: MovieSimple and MovieCube
TODO-1: Fix
- Android
- OMXGLMediaPlayer
TODO-2:
- Fix issue where async audio frames arrive much later than 1st video frame, i.e. around 300ms.
- Default TextureCount .. maybe 3 ?
- Adding Audio synchronization ?
- Find 'truth' about correlation of audio and video PTS values,
currently, we assume both to be unrelated ?
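A condensed sketch of the initStream(..)/initGL(..) split mentioned above: stream probing runs on a worker thread so the caller (e.g. the UI) stays responsive, and GL initialization happens later once a context is current. The Player interface, thread name and latch are assumptions for this sketch, not the real GLMediaPlayer API.

    import java.util.concurrent.CountDownLatch;

    final class SplitInitExample {
        interface Player {
            void initStream(String uri) throws Exception; // may block: container/codec probing
            void initGL();                                // requires a current GL context
        }

        /** Runs initStream(..) off the calling thread; 'ready' is signaled when done (or failed). */
        static void openAsync(final Player player, final String uri, final CountDownLatch ready) {
            final Thread worker = new Thread(() -> {
                try {
                    player.initStream(uri);
                } catch (final Exception e) {
                    e.printStackTrace();  // a real impl. would forward this as a StreamException event
                } finally {
                    ready.countDown();
                }
            }, "StreamWorker");
            worker.setDaemon(true);
            worker.start();
        }

        public static void main(final String[] args) throws InterruptedException {
            final CountDownLatch ready = new CountDownLatch(1);
            final Player player = new Player() {
                public void initStream(final String uri) { System.out.println("probing " + uri); }
                public void initGL()                     { System.out.println("GL init"); }
            };
            openAsync(player, "file:///tmp/movie.mp4", ready);
            ready.await();     // in the real player this would be event-driven, not a blocking wait
            player.initGL();   // later, on the GL thread, once a context is current
        }
    }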
|
getNextTexture(..) may block or not, depending on implementation and state.
|
existing object name.
|
frameLimit, allowing an optionally used Ringbuffer to grow in the implementation.
|
30475c6bbeb9a5d48899b281ead8bb305679028d
|
- GLMediaPlayer: Use URI instead of URL, allowing a non-resolved location to be passed
- Java's URL doesn't allow 'other' protocols, e.g. RTSP
- GLMediaPlayer: Add table of test streams and their locations ..
- FFMPEGMediaPlayer
- Handle av_read_play/pause response on the Java side, ignore errors - simply dump in DEBUG_NATIVE mode
|
delay' in regular println.
|
- Use Platform.currentTimeMillis() for accurate timing!
- GLMediaPlayer / GLMediaPlayerImpl
- Add DEBUG_NATIVE property jogl.debug.GLMediaPlayer.Native
for verbose impl. messages, i.e. ffmpeg/libav
- Add 'synchronization' section in GLMediaPlayer API doc (WIP)
- Use passive non-blocking video synchronization,
i.e. repeat frames instead of 'sleep' (see the sketch after this message).
Thx to Xerxes's suggestion.
- Add flushing of cached decoded frames,
allowing removal of the complicated 'videoSCR_reset_latch'
- FramePusher (threaded decoding):
- Always create a shared context!
- Release context while pausing
- Pre/post 'getNextTextureImpl()' actions only
at makeCurrent/release.
- newFrameAvailable(..) signal after decoded frame is enqueued
- FFMPEGDynamicLibraryBundleInfo
- Bind add. functions of libavcodec:
+ "av_init_packet",
+ "av_new_packet",
+ "av_destruct_packet",
- Bind add. functions of libavformat:
+ "avformat_seek_file",
+ "av_read_play",
+ "av_read_pause",
- DEBUG property := FFMPEGMediaPlayer.DEBUG || DynamicLibraryBundleInfo.DEBUG;
- FFMPEGMediaPlayer
- Use libavformat's 'av_read_play()' and 'av_read_pause()',
which may get utilized for network streams, e.g. RTSP
- getNextTextureImpl(..):
- Fix retry loop
- Use postNextTextureImpl/preNextTextureImpl if desired (PSM)
- Native:
- Use fixed my_av_q2i32(..) macro (again)
- Use INVALID_PTS marker (synced w/ Java code)
- DEBUG: Dump more detailed frame information
- TODO: Consider passing frame_delay, especially for repeated frames!
- Tests (MovieSimple, MovieCube):
- Refine KeyEvents control for seek and speed.
- TODO:
- Proper audio clock calculation - difficult w/ OpenAL !
- Video / Audio sync:
- seek !
- streams w/ very async A/V frames
- Test Streams:
- Five-minute-sync-test.mp4
- Audio-Video-Sync-Test-Calibration-23.98fps-24fps.mp4
- sound_in_sync_test.mp4
- big_buck_bunny_1080p_surround.avi
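A tiny sketch of the passive, non-blocking synchronization mentioned above: the rendering thread never sleeps; if the queued frame is not due yet (or none is available), the previous frame is simply presented again. Names are illustrative only.

    // Illustrative passive sync: repeat the previous frame instead of sleeping.
    final class PassiveSync {
        static final class Frame { final int pts; Frame(final int pts) { this.pts = pts; } }

        private Frame lastFrame = new Frame(0);

        Frame frameForDisplay(final Frame queuedFrame, final int clockMillis) {
            if (queuedFrame == null || queuedFrame.pts > clockMillis) {
                return lastFrame;        // not due yet (or nothing decoded): repeat last frame
            }
            lastFrame = queuedFrame;     // due: present it and remember it for possible repeats
            return queuedFrame;
        }
    }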
|
getDefaultPixelDataType()/getDefaultPixelDataFormat() use defaults (fix)
GLContextImpl's getDefaultPixelDataType()/getDefaultPixelDataFormat()
use default values if the GL query fails.
|
GLPixelAttributes checks arguments (componentCount, format / type)
and the queried bytesPerPixel.
|
'reset(boolean full)' enables the user to reset the ringbuffer pointers and assume it is empty or full,
while 'clear()' shall additionally remove all references .. etc.
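A simplified, array-backed sketch of that distinction (not GlueGen's actual Ringbuffer API): reset(full) only repositions the read/write pointers, while clear() additionally nulls the stored references so the elements become collectable.

    // Simplified array-backed ring buffer illustrating reset(boolean) vs clear().
    final class MiniRingbuffer<T> {
        private final Object[] slots;
        private int readPos, writePos, size;

        MiniRingbuffer(final int capacity) { slots = new Object[capacity]; }

        /** Reposition pointers only; existing references stay in the array. */
        void reset(final boolean full) {
            readPos = 0;
            writePos = 0;
            size = full ? slots.length : 0;   // caller asserts the buffer is full resp. empty
        }

        /** Drop all references as well, so the elements become collectable. */
        void clear() {
            reset(false);
            java.util.Arrays.fill(slots, null);
        }

        boolean put(final T t) {
            if (size == slots.length) return false;
            slots[writePos] = t;
            writePos = (writePos + 1) % slots.length;
            size++;
            return true;
        }

        @SuppressWarnings("unchecked")
        T get() {
            if (size == 0) return null;
            final T t = (T) slots[readPos];
            readPos = (readPos + 1) % slots.length;
            size--;
            return t;
        }
    }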
|
reflect semantics - also fix the exception message (enabled/disabled -> bound/unbound)
Reason for change: avoid confusion and point to the cause!
API change:
glIsVBOArrayEnabled() -> glIsVBOArrayBound()
glIsVBOElementArrayEnabled() -> glIsVBOElementArrayBound()
glIsPBOPackEnabled() -> glIsPBOPackBound()
glIsPBOUnpackEnabled() -> glIsPBOUnpackBound()
Exception message change:
"must be enabled to call this method" -> "must be bound to call this method"
"must be disabled to call this method" -> "must be unbound to call this method"
|
- GLMediaPlayer
- Remove State.Stopped and method stop() - redundant, use pause() / destroy()
- Add notion of stream IDs
- Add API doc: State / Stream-ID incl. html-anchor
- Expose video/audio PTS, ..
- Expose optional AudioSink
- Min multithreaded textureCount is 4 (EGL* and FFMPEG*)
- GLMediaPlayerImpl
- Move AudioSink-related impl. to this class,
allowing a tight video implementation to reuse the logic.
- Remove 'synchronized' methods, synchronize on State
where applicable
- implement new methods (see above)
- playSpeed is handled partially in AudioSink.
If it exceeds AudioSink's capabilities, drop audio and rely solely on video sync
(see the sketch after this message).
- video sync (WIP)
- video pts delay based on geometric weight
- reset video SCR if 'out of range', resync w/ PTS
-
- FramePusher
- allow interruption when pausing/stopping,
while waiting for next avail free frame to decode.
- FFMPEGMediaPlayer
- Add proper AudioDataFormat negotiation AudioSink <-> libav
- Parse libav's SampleFormat
- Remove AudioSink interaction (moved to GLMediaPlayerImpl)
- Tests (MovieSimple, MovieCube):
- Add aid/vid selection
- Add KeyListener for actions: seek(..), play()/pause(), setPlaySpeed(..)
- Dump perf-string each 2s
- TODO:
- Add audio sync in AudioSink, similar to GLMediaPlayer's weighted video delay,
here: drop audio frames.
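A rough sketch of the playSpeed handling described above; the capability check is modeled as a boolean return from setPlaySpeed(..), which mirrors the idea rather than the exact AudioSink signature.

    // Illustrative playSpeed handling: let the audio sink try first; if it cannot keep up,
    // drop audio and let the video clock alone drive playback.
    final class PlaySpeedExample {
        interface AudioSink {
            /** @return true if the sink can render at the given speed, false otherwise */
            boolean setPlaySpeed(float speed);
            void flush();
        }

        private boolean audioEnabled = true;

        void setPlaySpeed(final AudioSink audioSink, final float speed) {
            if (audioSink != null && audioSink.setPlaySpeed(speed)) {
                audioEnabled = true;              // audio follows the new speed
            } else {
                audioEnabled = false;             // exceeds sink capabilities: video-only sync
                if (audioSink != null) {
                    audioSink.flush();            // discard queued audio frames
                }
            }
            // ... adjust the video system clock rate (SCR) by 'speed' here ...
        }
    }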
|
- AudioSink.AudioDataFormat
- add fixedP (fixed-point or floating-point)
- AudioSink
- rename 'buffer count' to 'frame count'
- add setPlaySpeed(..)
- add isPlaying()
- add play()
- add pause()
- add flush()
- add: getFrameCount(), getQueuedFrameCount(), getFreeFrameCount(), getEnqueuedFrameCount(),
- rename: writeData() -> enqueueData(..)
- ALAudioSink
- multithreaded usage
- make the ALCcontext current per thread, now required for multithreaded use.
Use a RecursiveLock encapsulating the ALCcontext's makeCurrent/release/destroy,
since the native operations seem to be buggy.
NOTE: Think about adding these general methods to ALCcontext
- implement new methods
-
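A small sketch of the locking pattern described above, using java.util.concurrent.locks.ReentrantLock as a stand-in for GlueGen's RecursiveLock; the two private methods are placeholders for the native alcMakeContextCurrent calls, not real bindings.

    import java.util.concurrent.locks.ReentrantLock;

    // Illustrative locking pattern around ALC context switches.
    final class LockedALContext {
        private final ReentrantLock lock = new ReentrantLock();

        void makeCurrent() {
            lock.lock();                   // serialize all context switches across threads
            try {
                makeContextCurrent0();     // placeholder for alcMakeContextCurrent(context)
            } catch (final RuntimeException e) {
                lock.unlock();             // keep the lock balanced if the native call fails
                throw e;
            }
        }

        void release() {
            try {
                releaseContext0();         // placeholder for alcMakeContextCurrent(null)
            } finally {
                lock.unlock();
            }
        }

        private void makeContextCurrent0() { /* native call would go here */ }
        private void releaseContext0()     { /* native call would go here */ }
    }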
|
optional OMX to fail to compile
|
available EGL/FFmpeg. WIP!
Off-thread decoding:
If the validated (impl) textureCount > 2, decoding happens on an extra thread.
If decoding requires GL context, a shared context is created for decoding thread.
API Changes:
- initGLStream(..): Adds 'textureCount' as argument.
- TextureSequence.TexSeqEventListener.newFrameAvailable(..) exposes the new frame available
- TextureSequence.TextureFrame exposes the PTS (video)
Implementation:
- 'int validateTextureCount(int)': implementation decides whether textureCount can be > 2, i.e. off-thread decoding allowed,
default is NO w/ textureCount==2!
- 'boolean requiresOffthreadGLCtx()': implementation decides whether shared context is required for off-thread decoding
- 'syncFrame2Audio(TextureFrame frame)': implementation shall handle a/v sync, due to audio stream details (pts, buffered frames)
- FFMPEGMediaPlayer extends GLMediaPlayerImpl, no more EGLMediaPlayerImpl (redundant)
+++
- SyncedRingbuffer: Expose T[] array
+++
TODO:
- syncAV!
- test Android
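A compact sketch of that decision logic, with an illustrative backend interface standing in for the GLMediaPlayerImpl hooks named above: only a validated textureCount > 2 moves decoding to an extra thread, and only requiresOffthreadGLCtx() decides whether that thread gets a shared GL context.

    // Illustrative only; the interface and thread name are assumptions for this sketch.
    final class OffThreadDecodingExample {
        interface DecoderBackend {
            int validateTextureCount(int requested); // impl. may clamp to 2 (no off-thread decoding)
            boolean requiresOffthreadGLCtx();        // does decoding need a GL context?
            void decodeLoop(boolean withSharedGLContext);
        }

        static void start(final DecoderBackend backend, final int requestedTextureCount) {
            final int textureCount = backend.validateTextureCount(requestedTextureCount);
            if (textureCount > 2) {
                final boolean sharedCtx = backend.requiresOffthreadGLCtx();
                final Thread decoder = new Thread(
                        () -> backend.decodeLoop(sharedCtx), "FramePusher");
                decoder.setDaemon(true);
                decoder.start();            // decoding happens off the rendering thread
            } else {
                backend.decodeLoop(false);  // textureCount == 2: decode on the caller's thread
            }
        }
    }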
|
throw an exception - only dump DEBUG info!
|
Make impl. methods final.
createDummyDrawable(..) is useful for efficient shared context w/o actually rendering to this dummy drawable's framebuffer
|
(subject to be removed).
|
Regression introduced by:
dba2faf8520a43a809eb756869c6c97a0a2ef2cd
|
- AudioSink w/ AudioFrame and formats public
- ALAudioSink uses a circular buffer now, hence relaxes the one-threaded player mode
- FFMPEGMediaPlayer uses multiple audio frames (equal to the ALAudioSink number)
and wraps data to NIO buffer w/o copy.
- FFMPEGMediaPlayer audio threading currently disabled: distorted sound
Seems that the ALAudioSink's circular buffer usage is good enough for now.
- Verbosity only w/ DEBUG flag
- New SyncedRingbuffer for efficient synced buffering
|
Signed-off-by: Xerxes Rånby <[email protected]>
|
Signed-off-by: Xerxes Rånby <[email protected]>
|
Signed-off-by: Xerxes Rånby <[email protected]>
|
Fixes OpenAL invalid argument error when trying to fill buffers.
Signed-off-by: Xerxes Rånby <[email protected]>
|
Conflicts:
src/jogl/classes/jogamp/opengl/util/av/impl/FFMPEGDynamicLibraryBundleInfo.java
src/jogl/classes/jogamp/opengl/util/av/impl/FFMPEGMediaPlayer.java
Signed-off-by: Xerxes Rånby <[email protected]>
|
There is still something wrong with the buffering part;
OpenAL will complain at runtime.
Signed-off-by: Xerxes Rånby <[email protected]>
|
Use ALAudioSink when available and fall back to
JavaSoundAudioSink when JOAL is not found on the classpath.
Java Sound playback moved from FFMPEGMediaPlayer into JavaSoundAudioSink.
Signed-off-by: Xerxes Rånby <[email protected]>
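A minimal sketch of that classpath-based fallback: try to load the OpenAL sink reflectively and fall back to a Java Sound based sink if JOAL is absent. The class-name string and the tiny fallback class are placeholders; only the selection pattern is the point here.

    final class AudioSinkFactoryExample {
        // Placeholder name: the real ALAudioSink lives in a jogamp implementation package.
        private static final String AL_SINK_CLASS = "jogamp.<impl-package>.ALAudioSink";

        static Object createAudioSink() {
            try {
                final Class<?> alSink = Class.forName(AL_SINK_CLASS); // fails if JOAL is missing
                return alSink.getDeclaredConstructor().newInstance();
            } catch (final Throwable t) {
                return new JavaSoundLikeSink();                       // Java Sound fallback
            }
        }

        /** Stand-in for JavaSoundAudioSink so this sketch compiles on its own. */
        static final class JavaSoundLikeSink { }

        public static void main(final String[] args) {
            System.out.println("using sink: " + createAudioSink().getClass().getSimpleName());
        }
    }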
|
| | |\
| | | |
| | | |
| | | | |
FFMPEGMediaPlayer
|
Signed-off-by: Xerxes Rånby <[email protected]>
|
buffer underrun.
Signed-off-by: Xerxes Rånby <[email protected]>
|
Signed-off-by: Xerxes Rånby <[email protected]>
|
frames to java.
Signed-off-by: Xerxes Rånby <[email protected]>
|
Prevent the video sync code from delaying a frame by more than 1 second.
Signed-off-by: Xerxes Rånby <[email protected]>
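A one-method sketch of that clamp (the constant and method names are illustrative): whatever the PTS difference suggests, the per-frame delay never exceeds one second.

    final class DelayClampExample {
        static final int MAX_FRAME_DELAY_MS = 1000;

        static int clampDelay(final int requestedDelayMillis) {
            return Math.max(0, Math.min(requestedDelayMillis, MAX_FRAME_DELAY_MS));
        }

        public static void main(final String[] args) {
            System.out.println(clampDelay(250));   // 250
            System.out.println(clampDelay(5000));  // 1000
        }
    }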
|