and subtitle language
select font for ASS/Text rendering; Remove GLMediaPlayer's getStreamLang() as replaced by getLang()
subtitle) etc.
(PGS ..) via FFMPEGMediaPlayer/FFmpeg
FFMPEGMediaPlayer related changes:
- Add libswscale (6th FFmpeg lib used) for sws_getCachedContext(), sws_scale() and sws_freeContext(),
used natively to convert the paletted bitmap into RGBA colorspace -> GL texture
- Handling AVSubtitleRect.type SUBTITLE_BITMAP
-- only handled if libswscale is available
-- config/adjust texture object
-- sws_scale the paletted bitmap into the texture
-- intermediate memory is cached, may be resized and freed at destroy
-- texture objects are managed and passed from GLMediaPlayerImpl,
as they are also forwarded to player client via SubBitmapEvent
- Passing the AVCodecID to GLMediaPlayerImpl, converted to our CodecID enum.
- Unifying creation and opening of AVCodecContext with 'createOpenedAVCodecContext(..)'
+++
SubtitleEvent*
- SubTextEvent now also handles ASS.Dialogue (FFmpeg 4)
besides ASS.Event (FFmpeg 5, 6, ..).
+++
GLMediaPlayerImpl
- Added ringbuffer subTexFree, managing Texture objects for bitmap'ed subtitles
-- Uses 1 bitmap-subtitle Texture per used textureCount in cache,
as one bitmap-subtitle can be displayed per frame.
Could potentially be reduced to just 2 .. but resources used are
relatively low here.
- Validating subTexFree + videoFramesFree usage,
use blocking get/put ringbuffer due to utilization from different threads.
- Receives subtitle content from native getNextPacket0() via callback,
creates a SubtitleEvent instance and passes it to a SubtitleEventListener - if one exists.
(See MediaButton example)
-- SubBitmapEvent also gets its special SubBitmapEvent.TextureOwner to handle client releasing
the event and allowing us to put back the Texture resource to 'subTexFree'.
This passing through of the Texture object is probably a weakness of this lifecycle
and requires the client to ensure SubtitleEvent.release() gets called.
See MediaButton example!
- Exposing CodecID, allowing clients like MediaButton to handle SubtitleEvent content according to codec
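
A minimal usage sketch of the listener flow described above (the setter name, the event-field names and the release() semantics are assumptions for illustration, not verified API):

    // Sketch: receive text and bitmap subtitles; release() must be called so a
    // bitmap Texture can be returned to the player's 'subTexFree' ringbuffer.
    player.setSubtitleEventListener(new SubtitleEventListener() {
        @Override
        public void run(final SubtitleEvent e) {
            if( e instanceof SubTextEvent ) {
                showText( ((SubTextEvent)e).text );         // field name assumed
            } else if( e instanceof SubBitmapEvent ) {
                drawTexture( ((SubBitmapEvent)e).texture ); // field name assumed
            }
            e.release(); // returns a bitmap Texture to 'subTexFree'
        }
    });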
Region.COLORTEXTURE_LETTERBOX_RENDERING_BIT to TextureSequence and add enabling/disabling of aspect-ratio adjustment + letter-box back-color
TextureSequence color-texture params fetched from Graph VBORegion* and fed into shader.
This allows more flexibility in aspect-ratio adjustment as well as setting a clipping background color for
the added letter-box space.
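
A hedged sketch of opting a Graph region into this mode (the companion render-mode bits are assumed for illustration):

    // Sketch: combine the new letter-box bit with color-texture rendering.
    final int renderModes = Region.VBAA_RENDERING_BIT
                          | Region.COLORTEXTURE_RENDERING_BIT
                          | Region.COLORTEXTURE_LETTERBOX_RENDERING_BIT;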
GLMediaEventListener, easing listener callbacks; Prepare SubtitleEventListener generalization (Bug 1494)
Moves pushSound(), pushSubtitle*() from FFMPEGMediaPlayer to GLMediaPlayerImpl,
as it is handled in a generic way - even though currently only called by native FFMPEGMediaPlayer implementation.
Note: This patch is incomplete, i.e. not even compile clean.
But committed as-is to semantically split the work and ease review.
support via FFmpeg
TODO:
- We may want to refine subtitle PTS handling
- We may want to support bitmapped subtitles
streams and languages and add convenient switchStream(..) entry.
Audio/video/subtitle streams and language metadata are maintained by arrays holding the stream IDs and language string identifiers.
Implementation added in FFMPEGMediaPlayer for these data sets.
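
A sketch of switching streams on top of this API (the accessor names are assumptions for illustration):

    // Sketch: switch to a 2nd audio stream, keeping the current video/subtitle selection.
    final int[] aids = player.getAStreams();   // audio stream IDs, accessor assumed
    if( aids.length > 1 ) {
        player.switchStream( player.getVID(), aids[1], player.getSID() );
    }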
com.jogamp.common.av.PTS.millisToTimeStr(..)
Chapter metadata is now supported via our FFMPEGMediaPlayer implementation.
Added public method: 'Chapter[] GLMediaPlayer.getChapters()'
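
A sketch of listing chapters with the new helper (the Chapter field names and the millisToTimeStr(..) arity are assumptions):

    // Sketch: dump each chapter with its time span.
    for( final GLMediaPlayer.Chapter c : player.getChapters() ) {
        System.err.println( "Chapter " + c.id + " [" + PTS.millisToTimeStr(c.start) +
                            " .. " + PTS.millisToTimeStr(c.end) + "]: " + c.title );
    }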
usable w/ fix for Bug 1472 (Enhance A/V Sync)
information (if any), FFMPEG* utilizes NativeLibrary.get[Native]LibraryPath()
user from selecting video or audio PTS.
master-clock, enabling proper AV sync w/ untouched audio
We can finally utilize the added pass-through audio PTS, see commits
- GlueGen 52725b4c6525487f93407f529dc0a758b387a4fc
- JOAL 12029f1ec1d8afa576e1ac61655f318cc37c1d16
This enables us to use the audio PTS as the master-clock and adjust video to the untouched audio.
In case no audio is selected/playing or audio is muted,
we sync merely on the system-clock (SCR) w/o audio.
AV granularity is 22 ms; however, since the ALAudioSink PTS may be a little late,
this renders an even slightly better sync in case of too-early audio (d_apts < 0).
Since video frames are sync'ed to audio, the resync procedure may result
in a hysteresis swinging into sync. This might be noticeable at start,
after resuming audio, or after a seek.
We leave the audio frames untouched to reduce processing burden
and allow non-disrupted listening.
Passed AV sync tests
- Five-minute-sync-test.mp4
- Audio-Video-Sync-Test-Calibration-23.98fps-24fps.mp4
- Audio-Video-Sync-Test-2.mkv
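
The clock-selection rule above as a hedged sketch (all names are illustrative, not the actual implementation):

    // Sketch: audio PTS is the master clock while audio plays un-muted,
    // otherwise fall back to the system clock (SCR).
    final int masterPTS;
    if( null != audioSink && audioPlaying && !audioMuted ) {
        masterPTS = audioPTS;                                                 // untouched audio leads
    } else {
        masterPTS = (int)( Platform.currentTimeMillis() - startTimeMillis ); // SCR
    }
    final int d_vpts = videoPTS - masterPTS;                                  // adjust video only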
audio-volume if not initialized and set it when AudioSink becomes available
Setting the audio volume before initialization shall impact GLMediaPlayer when it becomes initialized.
Further, add a query whether audio is muted, based merely on volume.
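
A sketch of such a volume-based mute query (the epsilon value is an assumption):

    // Sketch: treat a near-zero volume as muted.
    public boolean isAudioMuted() {
        return Math.abs( getAudioVolume() ) < 0.01f;
    }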
AudioSink.setChannelLimit() ..
May be utilized to enforce 1 channel (mono) downsampling
in combination with JOAL/OpenAL to experience spatial 3D position effects.
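
A sketch of the mono use-case described above (the AudioSink accessor is an assumption):

    // Sketch: enforce mono downsampling so OpenAL can spatialize the source.
    final AudioSink sink = player.getAudioSink();  // accessor assumed
    if( null != sink ) {
        sink.setChannelLimit(1);
    }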
270172bcbd91f96d4a38a3d73e23d744f57a25b8) and joal (commit 03f4bb63ce8a358b1c2ef303480e1887d72ecb2e)
with EventMask.Bit/EventMask
showing test-texture. Adding stop(); (API Change)
- allow multiple initGL(..) @ uninitialized and initialized
- allows usage before stream is ready
- using a test-texture @ uninitialized
- adding stop()
API change
- initStream() -> playStream()
- play() -> resume()
FFMPEG: Added 'ready' check for robustness
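
A sketch of the renamed entry points (the playStream(..) argument list is assumed from the surrounding commits):

    // Sketch: playStream(..) replaces initStream(..), resume() replaces play().
    player.playStream( uri, GLMediaPlayer.STREAM_ID_AUTO, GLMediaPlayer.STREAM_ID_AUTO, 4 /* textureCount */ );
    player.pause( false );
    player.resume();
    player.stop();   // newly added; a test-texture is shown while uninitialized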
GLMediaPlayerImpl.updateAttributes avoid div-by-zero (fps inf)
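
A hedged sketch of such a guard (illustrative only):

    // Sketch: only derive a frame duration from a positive fps, avoiding division by zero -> inf.
    final float frameDurationMS = ( fps > 0f ) ? 1000f / fps : 0f;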
sed -i 's/javax\.media\.opengl/com\.jogamp\.opengl/g' `grep -Rl "javax\.media\.opengl" src`
sed -i 's/javax\.media\.nativewindow/com\.jogamp\.nativewindow/g' `grep -Rl "javax\.media\.nativewindow" src`
sed -i 's/javax\/media\//com\/jogamp\//g' `grep -Rl "javax/media/" src`
sed -i 's/javax\/media\//com\/jogamp\//g' `grep -Rl "javax/media/" doc`
Manually edited all occurrences within make/**
- ShaderCode:
- Using Uri
- Also encode the rel. 'includeFile' (was missing earlier)
- GLMediaPlayer
- Exposes Uri in API, removed URI
c47bc86ae2ee268a1f38c5580d11f93d7f8d6e74)
- Change non static accesses to static members using declaring type
- Change indirect accesses to static members to direct accesses (accesses through subtypes)
- Add final modifier to private fields
- Add final modifier to method parameters
- Add final modifier to local variables
- Remove unnecessary casts
- Remove unnecessary '$NON-NLS$' tags
- Remove trailing white spaces on all lines
seems to be down / Use h264 stream for 'desktop' as well
that commands shall be off-loaded on another thread!
GLMediaEventListener impl. to access GLMediaPlayer associated objects
; Fallback for EOS Detection ; MovieSimple uses full GLEventListener for 'Audio Only' as well to test seek
Determine StreamWorker usage after init
- To support audio-only files, we need to determine whether to use StreamWorker
after completion of stream-init.
Fix seek(..)
- FFmpeg: pos0 needs to use aPTS for audio-only
- Clip target time [0..duration[
Fallback for EOS Detection
In case the backend does not report proper EOS:
- Utilize 'nullFramesCount >= MAX' -> EOS,
where MAX is the number of frames for 3s play duration
and where 'nullFramesCount' is increased if no valid packet is available
and no decoded-video or -audio in the queue.
- Utilize pts > duration -> EOS
MovieSimple uses full GLEventListener for 'Audio Only' as well to test seek
- Matroska seek for audio-only leads to EOS ..
http://video.webmfiles.org/big-buck-bunny_trailer.webm
- MP4 audio-only seek works
http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4
MovieSimple/MovieCube:
- Use audio-pts in audio-only to calc target time
Tested:
- A, V and A+V
- Pause, Stop and Seek
- GNU/Linux
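
A sketch of the EOS fallback described above (all names are illustrative):

    // Sketch: count update cycles with no packet and no queued A/V frames;
    // ~3s worth of such cycles, or pts beyond duration, is treated as EOS.
    if( !packetAvailable && 0 == videoFramesQueued && 0 == audioFramesQueued ) {
        nullFramesCount++;
    } else {
        nullFramesCount = 0;
    }
    final int maxNullFrames = (int)( 3f * fps );
    if( nullFramesCount >= maxNullFrames || pts > duration ) {
        pushEOS();   // illustrative: signal end-of-stream
    }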
'GL_OES_EGL_image_external' not supported
+ // Bug on Nexus 10, ES3 - Android 4.3, where
+ // GL_OES_EGL_image_external extension directive leads to a failure _with_ '#version 300 es' !
+ // P0003: Extension 'GL_OES_EGL_image_external' not supported
+ preludeGLSLVersion = false;
multiple media textures (Android) or shared GL context are not usable.
- GLMediaPlayer:
- TEXTURE_COUNT_MIN is the new minimum: '1' - i.e. no multithreading, single threaded player
- TEXTURE_COUNT_DEFAULT is '4' - multithreaded
- GLMediaPlayerImpl:
- Add Single threaded mode, but perform initStreamImpl(..) off-thread.
Signed-off-by: Harvey Harrison <[email protected]>
Signed-off-by: Harvey Harrison <[email protected]>
mp4 instead of webm, fix Camera URI)
next frame after play() will provide new frame. Added API doc.
orientation change (flipped), API-doc,
- State
- Fix state transition (initGL() error)
- Camera options
- options use ';' as the query separator
- don't use 'default' options, driver should know
- Detect and act on orientation change (flipped)
- the ffmpeg impl detects if 'flipped' changes and triggers a SIZE update event.
This allows the application to react, i.e. re-init GL and use new TextureCoords.
Test: Works well on Windows w/ rawvideo dshow camera driver/codec.
- API-doc
- TexSeqEventListener/GLMediaEventListener usage / constraints (GL, ..)
- State transition fix
'camera ID'
If linesize is < 0, it is not invalid as assumed in commit eca6a5cb1e2beda84dfbafc31ed225e272f4f3fb,
but vertically flipped (bottom-up).
We have to adjust the data pointers, which are moved to the upper end of memory as well,
and can then proceed as usual.
TODO:
- Update texture 'mustFlipVertically' to 'false' in this case.
- Later:
- Allow updating texture size ..
- Whole pixel-fmt/texture-lookup-shader association must scale better,
i.e. extract the 'knowledge' into one class, use a static shader code
using uniforms instead of hard-coded values .. etc.
windows, 2 more pixel formats, fail-safe data handling
- add support for ffmpeg 2 / libav 10 -> lavc55_lavf55_lavu52_lavr01
- add support for ffmpeg libswresample (similar to libavresample)
- handle BGRA (GL type) and BGR24 (texture shader)
- Change Camera URI semantics, drop 'host' and use 'path' for camera ID
and use 'query' for options.
- add support for Windows' DShow camera selection
- our camera id -> index into the list of video-input devices,
this gives us the same behavior as w/ Linux
- requires windows libs: strmiids, uuid, ole32, oleaut32
- Compiles w/ MinGW64, works w/ libav/ffmpeg
- TODO: test compilation w/ MinGW 32bit !
- don't push data to texture if (linesize <= 0)
this may happen due to buggy decoder / setup ..
Tested manually on GNU/Linux x64 and Windows x64:
- GNU/Linux libav 0.8, libav 9, libav 10, ffmpeg 1.2, ffmpeg 2.0
- Windows libav 0.8, libav 9, ffmpeg 2.0
- videos and camera
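
A sketch of the camera Uri semantics after this change (the option keys are illustrative and unverified; playStream(..) is the later name of this entry point):

    // Sketch: path selects the camera ID, query carries ';'-separated options.
    final Uri camUri = Uri.cast( "camera:/0?size=640x480;rate=30" ); // may throw URISyntaxException
    player.playStream( camUri, GLMediaPlayer.STREAM_ID_AUTO, GLMediaPlayer.STREAM_ID_NONE, 4 );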
missing symbol 'av_realloc'.
- Add camera input
- Use URI w/ scheme 'camera' to determine camera input is desired,
use URI host as camera id.
E.g. 'camera://0' for 1st camera.
- AndroidGLMediaPlayerAPI14: Via 'Camera'
- FFMPEG*: Via libavdevice, device name and input format
- TODO: Add controls to manipulate camera if available
- FFMPEG*
- Add symbols
- avcodec_register_all
- av_realloc (was missing)
- avdevice_register_all
- Load libavdevice (opt)
- Camera:
- Use <ID> (Windows) and /dev/video<ID> on other OSes
- simply find the input format in native code
- Support YUYV422 (used in video4linux2, etc.)
- Stuff 2x 16bpp (YUYV) into one RGBA pixel!
- Add texture format for 16bpp
- Add texture lookup shader
- Fix av_packet leak in readNextImpl(..)
- Restore orig pointer and size values,
we may have moved along within packet.
Then call av_free_packet().
- Use null AudioSink if audio-id is NONE
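
A sketch of the YUYV422 packing noted above (illustrative arithmetic only):

    // YUYV422 stores two pixels in four bytes (Y0 U Y1 V); uploaded as RGBA,
    // the texture is half the image width and the lookup shader unpacks two lumas per texel.
    final int texWidth  = width / 2;
    final int texHeight = height;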
allowing to change the audio volume.
Multithreaded decoding and API should be considered stable by now,
minor changes may apply if Android/OMX impl. requires it.
We still need to solve TODO's as listed below, copied from 474ce65081ecd452215bc07ab866666cb11ca8b1.
+++
- *TextureFrame OO changes:
- TextureFrame extends TimeFrameI
- GLMediaPlayerImpl*
- Adapt to Ringbuffer changes of GlueGen commit f9f881e59c78e3036cb3f956bc97cfc3197f620d
- Fix impl. method's API doc
- getNextTextureImpl(..) returns video PTS
- Fix audio-only playback
- frame dropping shall only happen if:
- previous frame has not been dropped
- frame is too late
- one decoded frame is already available
- Don't block for decoder anymore:
- nextFrame = "videoFramesDecoded.getBlocking() -> videoFramesDecoded.get()";
No 'next decoded frame avail' could only mean:
- slow decoding/hardware
- slow transport
hence we shall not block rendering.
- Add DEBUG output if using last frame
- Add integer property 'jogl.debug.GLMediaPlayer.StreamWorker.delay' in milliseconds
to simulate slow decoding, i.e. delay is added in StreamWorker after decoding
before pushing new frame to Ringbuffer.
- FFMPEGMediaPlayer:
- audioFrameLimitWithVideo 128 -> 64
- audioFrameLimitAudioOnly 128 -> 32
- uses AudioSink's 'enqueueData(int pts, ByteBuffer bytes, int byteCount)'
- fixes for audio-only playback
+++
Working Tests: MovieSimple and MovieCube
TODO-1: Fix
- Android
- OMXGLMediaPlayer
TODO-2:
- Fix issue where async audio frames arrive much later than 1st video frame, i.e. around 300ms.
- Default TextureCount .. maybe 3 ?
- Adding Audio synchronization ?
- Find 'truth' about correlation of audio and video PTS values,
currently, we assume both to be unrelated ?
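
The frame-dropping rule above as a hedged sketch (names illustrative):

    // Sketch: drop only if the previous frame wasn't dropped, this frame is too late,
    // and another decoded frame is already waiting.
    final boolean dropFrame = !prevFrameDropped
                           && framePTS + maxLatency < masterPTS
                           && videoFramesDecoded.size() > 0;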
- Update/fix GLMediaPlayer API doc
- GLMediaEventListener: Add event bits for all state changes to be delivered via attributesChanged(..)
- StreamWorker / Decoder Thread:
- Use StreamWorker only !
- Handle exceptions on StreamWorker via StreamException
- Handles stream initialization and decoding (-> initStream(..))
- Split initGLStream(..) -> initStream(..) + initGL(GL)
- allow initStream(..)'s implementation being executed on StreamWorker
- allow GL initialization to be 'postponed' until the stream is ready,
i.e. non-blocking stream initialization (UI .. etc)
- Handle EOS via END_OF_STREAM_PTS -> pause/event
- Video: Use lock-free LFRingbuffer, similar to
ALAudioSink (commit f18a94b3defef16e98badd6d99f2422609aa56c5)
+++
- FFMPEGDynamicLibraryBundleInfo
- Add avcodec's:
- avcodec_get_frame_defaults, avcodec_free_frame (54.28.0), avcodec_flush_buffers,
- Add avutil's:
- av_frame_unref (55.0.0)
- Add avformat's:
- avformat_seek_file (??)
+++
- FFMPEGMediaPlayer Native:
- add 'snoop' video frames for a/v frame count relation.
disabled per default, since no longer needed due to ALAudioSink's
grow-buffer usage of LFRingbuffer.
- use sp_avcodec_free_frame if available
- 'useRefCountedFrames=1' for libav 55.0 to cache more than one audio frame,
not used since ALAudioSink's OpenAL usage does not require it (copies data once).
Note: the above snooped-video frame count is used here.
- use only one cached audio-frame (-> see above, OpenAL copies data once),
while reusing the NIO buffer!
- Perform OpenGL sync (glFinish) in native code!
- find proper PTS value, i.e. either frame's PTS or DTS,
see 'PTSStats'.
- FFMPEGMediaPlayer Java:
- use private fields
- simplified code due to above changes.
+++
Working Tests: MovieSimple and MovieCube
TODO-1: Fix
- Android
- OMXGLMediaPlayer
TODO-2:
- Fix issue where async audio frames arrive much later than 1st video frame, i.e. around 300ms.
- Default TextureCount .. maybe 3 ?
- Adding Audio synchronization ?
- Find 'truth' about correlation of audio and video PTS values,
currently, we assume both to be unrelated ?
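
A sketch of consuming these state changes and StreamWorker errors (the event-bit and accessor names are assumptions):

    // Sketch: react to error events and fetch the StreamException from the player.
    player.addEventListener( new GLMediaEventListener() {
        @Override
        public void attributesChanged(final GLMediaPlayer mp, final int eventMask, final long when) {
            if( 0 != ( eventMask & GLMediaEventListener.EVENT_CHANGE_ERR ) ) {    // bit name assumed
                final GLMediaPlayer.StreamException se = mp.getStreamException(); // accessor assumed
                if( null != se ) { se.printStackTrace(); }
            }
        }
        @Override
        public void newFrameAvailable(final GLMediaPlayer mp, final TextureSequence.TextureFrame frame, final long when) { /* nop */ }
    } );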
- GLMediaPlayer: Use URI instead of URL, allowing passing a non-resolved location
- Java's URL doesn't allow 'other' protocols, i.e. RTSP
- GLMediaPlayer: Add Table of test streams and their location ..
- FFMPEGMediaPlayer
- Handle av_read_play/pause response on the Java side, ignore error - simply dump in DEBUG_NATIVE mode
- Use Platform.currentTimeMillis() for accurate timing!
- GLMediaPlayer / GLMediaPlayerImpl
- Add DEBUG_NATIVE property jogl.debug.GLMediaPlayer.Native
for verbose impl. messages, i.e. ffmpeg/libav
- Add 'synchronization' section in GLMediaPlayer API doc (WIP)
- Use passive non-blocking video synchronization,
i.e. repeat frames instead of 'sleep'.
Thx to Xerxes's suggestion.
- Add flushing of cached decoded frames,
allowing removal of the complicated 'videoSCR_reset_latch'
- FramePusher (threaded decoding):
- Always create a shared context!
- Release context while pausing
- Pre/post 'getNextTextureImpl()' actions only
at makeCurrent/release.
- newFrameAvailable(..) signal after decoded frame is enqueued
- FFMPEGDynamicLibraryBundleInfo
- Bind add. functions of libavcodec:
+ "av_init_packet",
+ "av_new_packet",
+ "av_destruct_packet",
- Bind add. functions of libavformat:
+ "avformat_seek_file",
+ "av_read_play",
+ "av_read_pause",
- DEBUG property := FFMPEGMediaPlayer.DEBUG || DynamicLibraryBundleInfo.DEBUG;
- FFMPEGMediaPlayer
- Use libavformat's 'av_read_play()' and 'av_read_pause()',
which may get utilized for network streams, e.g. RTSP
- getNextTextureImpl(..):
- Fix retry loop
- Use postNextTextureImpl/preNextTextureImpl if desired (PSM)
- Native:
- Use fixed my_av_q2i32(..) macro (again)
- Use INVALID_PTS marker (synced w/ Java code)
- DEBUG: Dump more detailed frame information
- TODO: Consider passing frame_delay, especially for repeated frames!
- Tests (MovieSimple, MovieCube):
- Refine KeyEvents control for seek and speed.
- TODO:
- Proper audio clock calculation - difficult w/ OpenAL !
- Video / Audio sync:
- seek !
- streams w/ very async A/V frames
- Test Streams:
- Five-minute-sync-test.mp4
- Audio-Video-Sync-Test-Calibration-23.98fps-24fps.mp4
- sound_in_sync_test.mp4
- big_buck_bunny_1080p_surround.avi
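
The DEBUG_NATIVE property named above can be read the usual way (sketch; the flag must be set before the player classes initialize, e.g. -Djogl.debug.GLMediaPlayer.Native=true):

    // Sketch: query the system property controlling verbose native (ffmpeg/libav) messages.
    final boolean debugNative = Boolean.getBoolean("jogl.debug.GLMediaPlayer.Native");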
- GLMediaPlayer
- Remove State.Stopped and method stop() - redundant, use pause() / destroy()
- Add notion of stream IDs
- Add API doc: State / Stream-ID incl. html-anchor
- Expose video/audio PTS, ..
- Expose optional AudioSink
- Min multithreaded textureCount is 4 (EGL* and FFMPEG*)
- GLMediaPlayerImpl
- Move AudioSink rel. impl. to this class,
allowing a tight video implementation reusing logic.
- Remove 'synchronized' methods, synchronize on State
where applicable
- implement new methods (see above)
- playSpeed is handled partially in AudioSink.
If it exceeds AudioSink's capabilities, drop audio and rely solely on video sync.
- video sync (WIP)
- video pts delay based on geometric weight
- reset video SCR if 'out of range', resync w/ PTS
- FramePusher
- allow interruption when pausing/stopping,
while waiting for next avail free frame to decode.
- FFMPEGMediaPlayer
- Add proper AudioDataFormat negotiation AudioSink <-> libav
- Parse libav's SampleFormat
- Remove AudioSink interaction (moved to GLMediaPlayerImpl)
- Tests (MovieSimple, MovieCube):
- Add aid/vid selection
- Add KeyListener for actions: seek(..), play()/pause(), setPlaySpeed(..)
- Dump perf-string each 2s
- TODO:
- Add audio sync in AudioSink, similar to GLMediaPlayer's weighted video delay,
here: drop audio frames.
available EGL/FFmpeg. WIP!
Off-thread decoding:
If validated (impl) textureCount > 2, decoding happens on an extra thread.
If decoding requires GL context, a shared context is created for decoding thread.
API Changes:
- initGLStream(..): Adds 'textureCount' as argument.
- TextureSequence.TexSeqEventListener.newFrameAvailable(..) exposes the new frame available
- TextureSequence.TextureFrame exposes the PTS (video)
Implementation:
- 'int validateTextureCount(int)': implementation decides whether textureCount can be > 2, i.e. off-thread decoding allowed,
default is NO w/ textureCount==2!
- 'boolean requiresOffthreadGLCtx()': implementation decides whether shared context is required for off-thread decoding
- 'syncFrame2Audio(TextureFrame frame)': implementation shall handle a/v sync, due to audio stream details (pts, buffered frames)
- FFMPEGMediaPlayer extends GLMediaPlayerImpl, no more EGLMediaPlayerImpl (redundant)
+++
- SyncedRingbuffer: Expose T[] array
+++
TODO:
- syncAV!
- test Android
- AudioSink w/ AudioFrame and formats public
- ALAudioSink uses a circular buffer now, hence relaxes the one-threaded player mode
- FFMPEGMediaPlayer uses multiple audio frames (equal to the ALAudioSink number)
and wraps data to NIO buffer w/o copy.
- FFMPEGMediaPlayer audio threading currently disabled: distorted sound.
Seems that the ALAudioSink's circular buffer usage is good enough for now.
- Verbosity only w/ DEBUG flag
- New SyncedRingbuffer for efficient synced buffering
implementations.