| Commit message | Author | Age | Files | Lines |
|
c8abb9d2c9a8c92f0c5c42aba13e3e80c69739dc: Test isSetup() _after_ running glEventListenerPre.
glEventListenerPre may be utilized to set up the TileRenderer.
|
Sync issue w/ NEWT/AWT based GLAD
NEWT based GLDrawables may trigger GLAD display() via native repaint events.
If used in conjunction w/ AWT, i.e. NewtCanvasAWT, and setupPrinting(..) has been called
and it is attached to the TR, it could happen that display tries to issue beginTile()
before the TR has been set up.
This patch mitigates this issue (while not removing it) by querying whether setup is completed.
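A minimal sketch of that guard, assuming a cached TileRendererBase field and the isSetup() query referenced above; names and imports (JOGL 2.x era) are illustrative, not the actual GLJPanel/NewtCanvasAWT printing code:

    import javax.media.opengl.GLAutoDrawable;
    import com.jogamp.opengl.util.TileRendererBase;

    // Guard the tile path against a display() triggered by a native repaint
    // before setupPrint(..) has finished setting up the TileRenderer (TR).
    final class TiledDisplayGuard {
        private final TileRendererBase tileRenderer;  // attached by the printing setup (illustrative)

        TiledDisplayGuard(final TileRendererBase tr) { this.tileRenderer = tr; }

        void display(final GLAutoDrawable drawable) {
            if (null == tileRenderer || !tileRenderer.isSetup()) {
                drawable.display();   // TR not ready yet: plain rendering, mitigating the race
                return;
            }
            // ... tile-rendering path (beginTile()/endTile()), see the loop sketch further below
        }
    }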
|
functionality (reshapeTile(..), ..); Only process GLEventListener impl. TileRendererListener; attachToAutoDrawable -> attachAutoDrawable, etc.
- TileRendererNotify -> TileRendererListener
- Added methods:
- void reshapeTile(TileRendererBase tr, int tileX, int tileY, int tileWidth, int tileHeight, int imageWidth, int imageHeight);
- void startTileRendering(TileRendererBase tr);
- void endTileRendering(TileRendererBase tr);
allowing user code and the API specification to be clarified,
i.e. TR only processes GLEventListener which impl. TileRendererListener.
This also allows simplifying the API doc, while having a more descriptive
reshape method focusing solely on tile rendering.
Furthermore, the start/end TR methods allow certain GL-related actions
while the context is current, before and after iterating through the tiles.
This is even used for RandomTileRenderer (one tile only), allowing
the same TileRendererListener to be reused for different TRs.
- Fix language, attach and detach usage was vice versa. We do attach a GLAutoDrawable to a TR
- attachToAutoDrawable -> attachAutoDrawable
- detachFromAutoDrawable -> detachAutoDrawable
- Adapted unit tests.
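A minimal sketch of a listener implementing the renamed TileRendererBase.TileRendererListener. The projection math is an illustrative 2D ortho mapping of the tile's sub-rectangle; the accessor used to obtain the GL (tr.getAttachedDrawable()) is an assumption, imports are JOGL 2.x era:

    import javax.media.opengl.GL2;
    import javax.media.opengl.fixedfunc.GLMatrixFunc;
    import com.jogamp.opengl.util.TileRendererBase;

    // A real listener also implements GLEventListener (init/dispose/display/reshape), omitted here.
    public class TiledListener implements TileRendererBase.TileRendererListener {
        @Override
        public void addTileRendererNotify(final TileRendererBase tr) { /* cache tr, pause animation */ }
        @Override
        public void removeTileRendererNotify(final TileRendererBase tr) { /* drop tr, resume animation */ }
        @Override
        public void startTileRendering(final TileRendererBase tr) {
            // context is current: per-image GL setup before the first tile
        }
        @Override
        public void endTileRendering(final TileRendererBase tr) {
            // context is current: per-image GL cleanup after the last tile
        }
        @Override
        public void reshapeTile(final TileRendererBase tr,
                                final int tileX, final int tileY,
                                final int tileWidth, final int tileHeight,
                                final int imageWidth, final int imageHeight) {
            final GL2 gl = tr.getAttachedDrawable().getGL().getGL2();  // accessor assumed
            // map this tile's sub-rectangle of the full image onto the projection
            final double l = (double) tileX / imageWidth;
            final double r = (double) (tileX + tileWidth) / imageWidth;
            final double b = (double) tileY / imageHeight;
            final double t = (double) (tileY + tileHeight) / imageHeight;
            gl.glMatrixMode(GLMatrixFunc.GL_PROJECTION);
            gl.glLoadIdentity();
            gl.glOrtho(l, r, b, t, -1.0, 1.0);   // illustrative 2D view; a frustum is offset analogously
            gl.glMatrixMode(GLMatrixFunc.GL_MODELVIEW);
        }
    }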
|
pre/post reshape in its display method instead.
|
In FBO mode save the TextureState of the current texture-unit,
as well as for the FBO texture-unit if the latter is a different one.
In glslTextureRaster mode for vertical flip (using FBO),
set the viewport to the drawable size before flipping and restore it afterward, if required.
TestGLJPanelTextureStateAWT fully tests use cases:
- Keep texture bound w/ same or other texture-unit
- Use 2 viewports within one drawable and keep it
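A rough sketch of the "keep texture bound w/ other texture-unit" use case the test covers; plain GL calls, the texture object 'appTexName' is assumed to have been created elsewhere:

    import javax.media.opengl.GL;
    import javax.media.opengl.GLAutoDrawable;

    // In a GLEventListener on a GLJPanel: bind an application texture on a
    // non-default unit and rely on GLJPanel's FBO / flip handling to restore it.
    void bindAppTexture(final GLAutoDrawable drawable, final int appTexName) {
        final GL gl = drawable.getGL();
        gl.glActiveTexture(GL.GL_TEXTURE0 + 1);           // unit 1, application-owned
        gl.glBindTexture(GL.GL_TEXTURE_2D, appTexName);   // expected to survive GLJPanel's own FBO usage
    }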
|
Add clipping of the image-size and hence differentiate the image-size and
the size the tile-renderer iterates through.
The original image-size is required for the OpenGL reshape and rendering,
while the clipping size may restrict the range of rendering.
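A plain-arithmetic illustration of the distinction (no TileRenderer API): reshape keeps using the full image size, while the number of tiles to iterate follows the clipped size; the numbers are only an example:

    // full image, used for reshape/projection
    static final int IMAGE_W = 4096, IMAGE_H = 4096;
    // clipped region actually rendered
    static final int CLIP_W = 1024, CLIP_H = 768;
    static final int TILE = 256;
    // tiles the renderer iterates through: ceil(clip / tile)
    static final int COLUMNS = (CLIP_W + TILE - 1) / TILE;   // 4
    static final int ROWS    = (CLIP_H + TILE - 1) / TILE;   // 3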
|
pre-swap only for FBO && MSAA. See TileRendererBase.reqPreSwapBuffers(..) API doc.
|
endTile(); Enhance unit tests for MSAA, also add TileRendererBase.TileRendererNotify to GearsES2
GL[Auto]Drawable.swapBuffers() must be called before endTile().
This is especially important if using multisampling offscreen FBO drawables,
where swapBuffers() triggers the downsampling to the readable sampling sink.
Otherwise, we will be 'one tile behind'!
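A minimal sketch of the required per-tile call order (GL context assumed current; 'drawScene' and 'tileCount' are hypothetical application pieces, signatures as used by the JOGL 2.x TileRenderer):

    import javax.media.opengl.GL;
    import javax.media.opengl.GLDrawable;
    import com.jogamp.opengl.util.TileRendererBase;

    abstract class TileLoop {
        abstract void drawScene(GL gl);   // the application's rendering, hypothetical

        void renderTiles(final GL gl, final GLDrawable drawable,
                         final TileRendererBase tileRenderer, final int tileCount) {
            for (int i = 0; i < tileCount; i++) {
                tileRenderer.beginTile(gl);
                drawScene(gl);
                drawable.swapBuffers();   // BEFORE endTile(): on an MSAA FBO drawable this
                                          // performs the downsampling into the readable
                                          // sampling sink, so endTile() reads THIS tile
                tileRenderer.endTile(gl);
            }
        }
    }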
|
requires >= GL2ES3; Always set pack-alignment, Set glReadBuffer(..) >= GL2ES3
- Allow general usage w/ any GL profile, only image-buffer requires >= GL2ES3
Due to GL2ES3.GL_PACK_ROW_LENGTH and image-width != tile-width
- Always set pack-alignment
This was previously missing for the tile-buffer.
- Set glReadBuffer(..) >= GL2ES3
Required if using FBO offscreen, i.e. MSAA mode.
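A hedged sketch of the read-back state this refers to, as set up around glReadPixels for one tile written into a larger image buffer; standard GL calls, with the GL2ES3 requirement stemming from GL_PACK_ROW_LENGTH:

    import java.nio.ByteBuffer;
    import javax.media.opengl.GL;
    import javax.media.opengl.GL2ES3;

    void readTileIntoImage(final GL2ES3 gl, final int tileWidth, final int tileHeight,
                           final int imageWidth, final ByteBuffer imageBuffer) {
        gl.glPixelStorei(GL.GL_PACK_ALIGNMENT, 1);                 // always set
        gl.glPixelStorei(GL2ES3.GL_PACK_ROW_LENGTH, imageWidth);   // only needed when image-width != tile-width
        gl.glReadBuffer(GL.GL_COLOR_ATTACHMENT0);                  // explicit read buffer when reading from an
                                                                   // offscreen FBO (MSAA mode); the window-system
                                                                   // default buffer differs
        gl.glReadPixels(0, 0, tileWidth, tileHeight, GL.GL_RGB, GL.GL_UNSIGNED_BYTE, imageBuffer);
    }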
|
GLProfile (just a query, no new profile)
|
always return last result, no overloading of attachToAutoDrawable(..); Fix comments
TileRenderer:
- adds setTileOffset(..)
A tile offset might be required, e.g. via given rectangular clip bounds
- getParam(pname) shall always return the last result
Even when tiling is finished, the last value shall be returned,
otherwise a query after endTile() cannot retrieve the value.
- No overloading of attachToAutoDrawable(..)
No reason to complicate usage by mutating semantics,
call setTileSize(..) manually
TileRendererBase:
- Fix API doc: TR_CURRENT_TILE_X_POS, TR_CURRENT_TILE_Y_POS
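A small sketch of why getParam(..) keeping the last result matters: the tile placement is still queried after endTile(..), e.g. to composite the read-back pixels (rendering/loop as in the tile-loop sketch further above; constant names per TileRendererBase):

    import com.jogamp.opengl.util.TileRendererBase;

    // After the last endTile(gl) of a (Random)TileRenderer pass, the placement is still queryable:
    int[] lastTilePos(final TileRendererBase tileRenderer) {
        final int x = tileRenderer.getParam(TileRendererBase.TR_CURRENT_TILE_X_POS); // still valid
        final int y = tileRenderer.getParam(TileRendererBase.TR_CURRENT_TILE_Y_POS); // after endTile()
        return new int[] { x, y };
    }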
|
about GLAnimatorControl's whose association gets swapped as well.
|
GLEventListener about attached/detached TileRenderer
.. since GLEventListener's reshape(..) method must query certain tile renderer attributes (tile pos and image size),
they have to be aware of the TileRendererBase.
To simplify such awareness and hence automate this attachment and the passing over of the tile renderer reference,
they should implement this new interface.
Gears example implements the new interface,
which caches the TR reference and pauses rotation.
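A minimal sketch of that notification, as the Gears example does it: cache the tile-renderer reference and pause rotation while attached (fragment of the listener class, field names illustrative):

    // inside the Gears GLEventListener:
    private TileRendererBase tileRendererInUse = null;
    private boolean doRotate = true;

    public void addTileRendererNotify(final TileRendererBase tr) {
        tileRendererInUse = tr;   // reshape(..) can now query tile position and image size from it
        doRotate = false;         // freeze the animation while the tiled image is produced
    }
    public void removeTileRendererNotify(final TileRendererBase tr) {
        tileRendererInUse = null;
        doRotate = true;
    }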
|
for UI agnostic (no-awt tests).
|
GLEventListener reshape(..) or manual reshape after beginTile(..) method.
GLEventListener reshape(..) method should be aware of TileRenderer usage
and get the missing tile position and image size from it (-> see Gears example).
TestRandomTiledRendering3GL2AWT demonstrates an onscreen AWT GLCanvas
being used for random tile rendering to produce a PNG file.
TestTiledRendering1GL2 is now GLAutoDrawable/GLEventListener agnostic,
hence demonstrates plain GLDrawable tile rendering usage.
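A sketch of a TileRenderer-aware reshape(..), pulling the missing tile position and image size from the cached tile renderer (field 'tileRendererInUse' as in the caching sketch above); the TR_IMAGE_* constant names are assumed alongside the TR_CURRENT_TILE_*_POS names used elsewhere in this log:

    public void reshape(final GLAutoDrawable drawable, final int x, final int y,
                        final int tileWidth, final int tileHeight) {
        int tileX = 0, tileY = 0, imageWidth = tileWidth, imageHeight = tileHeight;
        if (null != tileRendererInUse) {
            // reshape(..) only receives the tile size; position and full image size
            // must be queried from the tile renderer (TR_IMAGE_* names assumed)
            tileX = tileRendererInUse.getParam(TileRendererBase.TR_CURRENT_TILE_X_POS);
            tileY = tileRendererInUse.getParam(TileRendererBase.TR_CURRENT_TILE_Y_POS);
            imageWidth  = tileRendererInUse.getParam(TileRendererBase.TR_IMAGE_WIDTH);
            imageHeight = tileRendererInUse.getParam(TileRendererBase.TR_IMAGE_HEIGHT);
        }
        // then set the projection for the tile's sub-rectangle of the full image,
        // exactly as in the TileRendererListener sketch near the top of this log
    }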
|
(cleanup / API doc)
|
GLPixelBuffer, and pre/post GLEventListener)
|
utilization AWTGLPixelBuffer* -> GLPixelBuffer*
GLPixelBufferProvider:
- Default*.getAttributes(): Add componentCount==1 (ALPHA/RED), validate values, throw exception if n/a or not supported
- Add 'allowRowStride' (as for AWTGLPixelBufferProvider)
- Add defaults for true and false
GLPixelBuffer:
- Add 'allowRowStride' (as for AWTGLPixelBuffer)
- Fix requiresNewBuffer(..):
- acquire minByteSize if the passed one is <= 0
- validate minByteSize w/ currentByteSize according to allowRowStride.
AWTGLPixelBuffer: 'allowRowStride' impl. moved to GLPixelBuffer.
|
be used w/ GLAutoDrawable.
- Remove GL2 dependencies
- Only requires PixelStorage ROW_LENGTH -> GL2ES3
- Position the target buffer according to skip [pixels, rows]
- Use an interface PMVMatrixCallback, allowing the user to reshape
the custom 'PMV Matrix' according to the currently rendered tile
- Properly adjust tile/image buffer to written position and flip for read operation
|
instead of int[] - no return value desired.
|
set[Pack|Unpack]RowLength(GL2ES3 gl, ..)
|
(cosmetic change only); Typo in comment; TestSharedContextListNEWT2: Stop animator.
|
next frame after play() will provide a new frame. Added API doc.
|
orientation change (flipped), API-doc,
- State
- Fix state transition (initGL() error)
- Camera options
- options use ';' as the query separator
- don't use 'default' options, the driver should know them
- Detect and act on orientation change (flipped)
- the ffmpeg impl detects when 'flipped' changes and triggers a SIZE update event.
This allows the application to react, i.e. re-initialize GL and use new TextureCoords.
Test: Works well on Windows w/ rawvideo dshow camera driver/codec.
- API-doc
- TexSeqEventListener/GLMediaEventListener usage / constraints (GL, ..)
- State transition fix
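A hedged sketch of such a camera URI (scheme 'camera', path as camera id per the URI-semantics commit further down this log, ';'-separated options in the query) handed to a GLMediaPlayer. The option keys shown are purely illustrative, and the initStream(..) signature follows the initStream/initGL split described below:

    import java.net.URI;
    import java.net.URISyntaxException;
    import com.jogamp.opengl.util.av.GLMediaPlayer;
    import com.jogamp.opengl.util.av.GLMediaPlayerFactory;

    GLMediaPlayer openCamera() throws URISyntaxException {
        // 1st camera (path '/0'); options in the query, separated by ';' (keys illustrative)
        final URI camUri = new URI("camera:/0?width=640;height=480;rate=15");
        final GLMediaPlayer player = GLMediaPlayerFactory.createDefault();
        player.initStream(camUri, GLMediaPlayer.STREAM_ID_AUTO, GLMediaPlayer.STREAM_ID_AUTO,
                          4 /* requested texture count */);
        // initGL(gl) follows on the GL thread once initialization is reported, see the sketch further below
        return player;
    }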
|
characteristics.
|
'camera ID'
If linesize is < 0, it is not invalid as assumed in commit eca6a5cb1e2beda84dfbafc31ed225e272f4f3fb,
but vertically flipped (bottom-up).
We have to adjust the data pointers, which are moved to the upper end of memory as well
and can proceed as usual.
TODO:
- Update texture 'mustFlipVertically' to 'false' in this case.
- Later:
- Allow updating texture size ..
- Whole pixel-fmt/texture-lookup-shader association must scale better,
i.e. extract the 'knowledge' into one class, use static shader code
using uniforms instead of hard-coded values .. etc.
|
windows, 2 more pixel formats, fail-safe data handling
- add support for ffmpeg 2 / libav 10 -> lavc55_lavf55_lavu52_lavr01
- add support for ffmpeg libswresample (similar to libavresample)
- handle BGRA (GL type) and BGR24 (texture shader)
- Change Camera URI semantics, drop 'host' and use 'path' for camera ID
and use 'query' for options.
- add support for Window's DShow camera selection
- our camera id -> index of list of video-input devices,
this gives us the same behavior as w/ Linux
- requires Windows libs: strmiids, uuid, ole32, oleaut32
- Compiles w/ MinGW64, works w/ libav/ffmpeg
- TODO: test compilation w/ MinGW 32-bit!
- don't push data to texture if (linesize <= 0)
this may happen due to buggy decoder / setup ..
Tested manually on GNU/Linux x64 and Windows x64:
- GNU/Linux libav 0.8, libav 9, libav 10, ffmpeg 1.2, ffmpeg 2.0
- Windows libav 0.8, libav 9, ffmpeg 2.0
- videos and camera
|
GL_UNSIGNED_SHORT_8_8_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE.
|
missing symbol 'av_realloc'.
- Add camera input
- Use URI w/ scheme 'camera' to determine that camera input is desired,
use the URI host as camera id.
E.g. 'camera://0' for 1st camera.
- AndroidGLMediaPlayerAPI14: Via 'Camera'
- FFMPEG*: Via libavdevice, device name and input format
- TODO: Add controls to manipulate camera if available
- FFMPEG*
- Add symbols
- avcodec_register_all
- av_realloc (was missing)
- avdevice_register_all
- Load libavdevice (opt)
- Camera:
- Use <ID> (Windows) and /dev/video<ID> on other OSes
- simply find the input format in native code
- Support YUYV422 (used in video4linux2, etc.)
- Stuff 2x 16bpp (YUYV) into one RGBA pixel!
- Add texture format for 16bpp
- Add texture lookup shader
- Fix av_packet leak in readNextImpl(..)
- Restore orig pointer and size values,
we may have moved along within packet.
Then call av_free_packet().
- Use null AudioSink if audio-id is NONE
|
adds AL_SOFT_buffer_samples support w/ full AL caps
- Fix type names:
- Remove AudioDataType, we only support PCM here anyway
- AudioDataFormat -> AudioFormat / Add 'planar' attribute to distinguish packed/planar data types
- Validate float types
- Enhance AudioFormat negotiation
- Add 'isSupported(AudioFormat format)' which _shall_ be used before 'init(..)'
to test/negotiate the format
- Add getMaxSupportedChannels(), which may be used w/ getPreferredFormat() if the originally requested format fails
via 'isSupported(..)'
- 'init(..)' returns boolean only.
- ALAudioSink adds AL_SOFT_buffer_samples support w/ full AL caps
- Determine whether AL_SOFT_buffer_samples is supported
- Use new JOAL ALHelper to convert AudioFormat -> AL-types,
which also answers the 'isSupported(..)' query.
- Now allows multiple: channels, sample-types, etc.
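A hedged sketch of the negotiation flow using the methods named above (isSupported, getPreferredFormat, boolean init); the parameter order of init(..) follows the queue-size commit further down this log and should be checked against the actual AudioSink API:

    import com.jogamp.opengl.util.av.AudioSink;

    boolean initNegotiated(final AudioSink sink, final AudioSink.AudioFormat requested,
                           final float frameDurationHint,
                           final int initialQueueMs, final int queueGrowMs, final int queueLimitMs) {
        // test/negotiate the format before init(..)
        AudioSink.AudioFormat chosen = requested;
        if ( !sink.isSupported(chosen) ) {
            // e.g. float samples or more channels than getMaxSupportedChannels() allows
            chosen = sink.getPreferredFormat();
        }
        // init(..) now only reports success or failure
        return sink.init(chosen, frameDurationHint, initialQueueMs, queueGrowMs, queueLimitMs);
    }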
|
allowing the audio volume to be changed.
|
non-frame based AudioSink's to deal w/ desired queue sizes.
- Rename AudioSink.initSink(..) -> AudioSink.init(..)
- Move: "int initialFrameCount, int frameGrowAmount, int frameLimit" to
"int initialQueueSize, int queueGrowAmount, int queueLimit"
based on milliseconds instead of frame count.
- Pass hint 'float frameDuration' to calculate the frame count for frame-based audio sinks, i.e. ALAudioSink.
- Adding sensible static final default values
- AudioDataFormat: Add convenient conversion routines (samples/bytes/frame-count)
- FFMPEGMediaPlayer: Retrieve audio frame size in samples per channel, pass it to AudioSink.init(..)
to properly calculate frame count/limits based on duration.
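A small sketch of the millisecond-to-frame-count conversion a frame-based sink (ALAudioSink) derives from the 'frameDuration' hint; plain arithmetic, values are just an example:

    // queue sizes are requested in milliseconds; a frame-based sink converts
    // them to frame counts using the per-frame duration hint passed to init(..)
    static int framesForQueue(final int queueSizeMs, final float frameDurationMs) {
        return Math.max(1, (int) Math.ceil(queueSizeMs / frameDurationMs));
    }
    // example: AAC with 1024 samples/frame at 44100 Hz -> ~23.2 ms per frame,
    // so a 300 ms initial queue needs ceil(300 / 23.2) = 13 frames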
|
Multithreaded decoding and API should be considered stable by now,
minor changes may apply if Android/OMX impl. requires it.
We still need to solve TODO's as listed below, copied from 474ce65081ecd452215bc07ab866666cb11ca8b1.
+++
- *TextureFrame OO changes:
- TextureFrame extends TimeFrameI
- GLMediaPlayerImpl*
- Adapt to Ringbuffer changes of GlueGen commit f9f881e59c78e3036cb3f956bc97cfc3197f620d
- Fix impl. method's API doc
- getNextTextureImpl(..) returns video PTS
- Fix audio-only playback
- frame dropping shall only happen if:
- previous frame has not been dropped
- frame is too late
- one decoded frame is already available
- Don't block for decoder anymore:
- nextFrame = "videoFramesDecoded.getBlocking() -> videoFramesDecoded.get()";
'No next decoded frame avail' can only mean:
- slow decoding/hardware
- slow transport
hence we shall not block rendering.
- Add DEBUG output if using last frame
- Add integer property 'jogl.debug.GLMediaPlayer.StreamWorker.delay' in milliseconds
to simulate slow decoding, i.e. delay is added in StreamWorker after decoding
before pushing new frame to Ringbuffer.
- FFMPEGMediaPlayer:
- audioFrameLimitWithVideo 128 -> 64
- audioFrameLimitAudioOnly 128 -> 32
- uses AudioSink's 'enqueueData(int pts, ByteBuffer bytes, int byteCount)'
- fixes for audio-only playback
+++
Working Tests: MovieSimple and MovieCube
TODO-1: Fix
- Android
- OMXGLMediaPlayer
TODO-2:
- Fix issue where async audio frames arrive much later than 1st video frame, i.e. around 300ms.
- Default TextureCount .. maybe 3 ?
- Adding Audio synchronization ?
- Find 'truth' about correlation of audio and video PTS values,
currently, we assume both to be unrelated ?
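A hedged sketch of the frame-drop / non-blocking rules listed above, in plain Java; the real implementation uses GlueGen's Ringbuffer of TextureFrames, so a plain Queue and a generic frame type keep this self-contained and clearly illustrative:

    import java.util.Queue;

    static <F> F nextFrame(final Queue<F> videoFramesDecoded, final F lastFrame,
                           final boolean previousFrameDropped, final boolean frameIsTooLate) {
        F next = videoFramesDecoded.poll();     // non-blocking: never stall rendering
        if (null == next) {
            return lastFrame;                   // slow decode/transport: repeat the last frame
        }
        // a frame may only be dropped if the previous one was not dropped,
        // this one is too late, and another decoded frame is already waiting
        if (!previousFrameDropped && frameIsTooLate && null != videoFramesDecoded.peek()) {
            next = videoFramesDecoded.poll();   // skip ahead by one frame
        }
        return next;
    }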
|
Reuses ALAudioFrames to ease GC, Ringbuffer changes
- Adapt to Ringbuffer changes of GlueGen commit f9f881e59c78e3036cb3f956bc97cfc3197f620d
- Favor AudioSink 'AudioFrame enqueueData(int pts, ByteBuffer bytes, int byteCount)',
- Impl. shall reuse AudioFrame's instead of creating them on the fly
- User shall simply pass the net data required, while receiving an internal AudioFrame
- Add byte/time calc to AudioDataFormat:
- Add getDuration(byteCount) and getByteCount(ms).
- *AudioFrame OO changes:
- abstract AudioFrame extends TimeFrameI
- allow setting of all components so instances can be reused (GC clean)
- ALAudioSink reuses ALAudioFrames to ease GC:
- Remove creating temporary objects to ease GC
- ALAudioFrame holds ALBuffer name, remove ActiveBuffer type.
- Use ALAudioFrame similar to TextureFrame in GLMediaPlayerImpl,
i.e. fill them in 'full' Ringbuffer and move them in-between 'full'/'playing' Ringbuffer.
|
based on integer milliseconds.
|
- Update/fix GLMediaPlayer API doc
- GLMediaEventListener: Add event bits for all state changes to be delivered via attributesChanged(..)
- StreamWorker / Decoder Thread:
- Use StreamWorker only !
- Handle exceptions on StreamWorker via StreamException
- Handles stream initialization and decoding (-> initStream(..))
- Split initGLStream(..) -> initStream(..) + initGL(GL)
- allow initStream(..)'s implementation to be executed on StreamWorker
- allow GL initialization to be 'postponed' until the stream has been read,
i.e. non-blocking stream initialization (UI .. etc)
- Handle EOS via END_OF_STREAM_PTS -> pause/event
- Video: Use lock-free LFRingbuffer, similar to
ALAudioSink (commit f18a94b3defef16e98badd6d99f2422609aa56c5)
+++
- FFMPEGDynamicLibraryBundleInfo
- Add avcodec's:
- avcodec_get_frame_defaults, avcodec_free_frame (54.28.0), avcodec_flush_buffers,
- Add avutil's:
- av_frame_unref (55.0.0)
- Add avformat's:
- avformat_seek_file (??)
+++
- FFMPEGMediaPlayer Native:
- add 'snoop' video frames for a/v frame count relation.
disabled by default, since it is no longer needed due to ALAudioSink's
grow-buffer usage of LFRingbuffer.
- use sp_avcodec_free_frame if available
- 'useRefCountedFrames=1' for libav 55.0 to cache more than one audio frame,
not used since ALAudioSink's OpenAL usage does not require it (copies data once).
Note: the above snooped-video frame count is used here.
- use only one cached audio-frame (-> see above, OpenAL copies data once),
while reusing the NIO buffer!
- Perform OpenGL sync (glFinish) in native code!
- find proper PTS value, i.e. either frame's PTS or DTS,
see 'PTSStats'.
- FFMPEGMediaPlayer Java:
- use private fields
- simplified code due to above changes.
+++
Working Tests: MovieSimple and MovieCube
TODO-1: Fix
- Android
- OMXGLMediaPlayer
TODO-2:
- Fix issue where async audio frames arrive much later than 1st video frame, i.e. around 300ms.
- Default TextureCount .. maybe 3 ?
- Adding Audio synchronization ?
- Find 'truth' about correlation of audio and video PTS values,
currently, we assume both to be unrelated ?
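A hedged sketch of the non-blocking two-phase initialization described above: initStream(..) runs via the StreamWorker, and initGL(GL) follows on the GL thread once the listener reports the stream is initialized. The event-bit name EVENT_CHANGE_INIT is assumed from the "event bits for all state changes" note; exact signatures per the GLMediaPlayer version at hand:

    import java.net.URI;
    import javax.media.opengl.GL;
    import com.jogamp.opengl.util.av.GLMediaEventListener;
    import com.jogamp.opengl.util.av.GLMediaPlayer;
    import com.jogamp.opengl.util.texture.TextureSequence;

    final class TwoPhaseInit {
        private volatile boolean streamReady = false;   // illustrative flag

        void start(final GLMediaPlayer player, final URI streamUri) {
            player.addEventListener(new GLMediaEventListener() {
                @Override
                public void attributesChanged(final GLMediaPlayer mp, final int eventMask, final long when) {
                    if (0 != (eventMask & GLMediaEventListener.EVENT_CHANGE_INIT)) {   // bit name assumed
                        streamReady = true;
                    }
                }
                @Override
                public void newFrameAvailable(final GLMediaPlayer mp,
                                              final TextureSequence.TextureFrame frame, final long when) { }
            });
            // non-blocking; stream initialization and decoding happen on the StreamWorker
            player.initStream(streamUri, GLMediaPlayer.STREAM_ID_AUTO, GLMediaPlayer.STREAM_ID_AUTO,
                              4 /* requested texture count */);
        }

        void onGLThread(final GLMediaPlayer player, final GL gl) {
            if (streamReady) {
                player.initGL(gl);   // postponed GL initialization
                player.play();
            }
        }
    }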
|
getNextTexture(..), may block .. or not, depending on implementation and state.
|
existing object name.
|
frameLimit, allowing an optionally used Ringbuffer to grow in the implementation.
|
- GLMediaPlayer: Use URI instead of URL, allowing a non-resolved location to be passed
- Java's URL doesn't allow 'other' protocols, e.g. RTSP
- GLMediaPlayer: Add Table of test streams and their location ..
- FFMPEGMediaPlayer
- Handle av_read_play/pause response on java side, ignore error - simply dump in DEBUG_NATIVE mode
|
- Use Platform.currentTimeMillis() for accurate timing!
- GLMediaPlayer / GLMediaPlayerImpl
- Add DEBUG_NATIVE property jogl.debug.GLMediaPlayer.Native
for verbose impl. messages, i.e. ffmpeg/libav
- Add 'synchronization' section in GLMediaPlayer API doc (WIP)
- Use passive non-blocking video synchronization,
i.e. repeat frames instead of 'sleep'.
Thx to Xerxes's suggestion.
- Add flushing of cached decoded frames,
allowing the complicated 'videoSCR_reset_latch' to be removed
- FramePusher (threaded decoding):
- Always create a shared context!
- Release context while pausing
- Pre/post 'getNextTextureImpl()' actions only
at makeCurrent/release.
- newFrameAvailable(..) signal after decoded frame is enqueued
- FFMPEGDynamicLibraryBundleInfo
- Bind add. functions of libavcodec:
+ "av_init_packet",
+ "av_new_packet",
+ "av_destruct_packet",
- Bind add. functions of libavformat:
+ "avformat_seek_file",
+ "av_read_play",
+ "av_read_pause",
- DEBUG property := FFMPEGMediaPlayer.DEBUG || DynamicLibraryBundleInfo.DEBUG;
- FFMPEGMediaPlayer
- Use libavformat's 'av_read_play()' and 'av_read_pause()',
which may get utilized for network streams, e.g. RTSP
- getNextTextureImpl(..):
- Fix retry loop
- Use postNextTextureImpl/preNextTextureImpl if desired (PSM)
- Native:
- Use fixed my_av_q2i32(..) macro (again)
- Use INVALID_PTS marker (synced w/ Java code)
- DEBUG: Dump more detailed frame information
- TODO: Consider passing frame_delay, especially for repeated frames!
- Tests (MovieSimple, MovieCube):
- Refine KeyEvents control for seek and speed.
- TODO:
- Proper audio clock calculation - difficult w/ OpenAL !
- Video / Audio sync:
- seek !
- streams w/ very async A/V frames
- Test Streams:
- Five-minute-sync-test.mp4
- Audio-Video-Sync-Test-Calibration-23.98fps-24fps.mp4
- sound_in_sync_test.mp4
- big_buck_bunny_1080p_surround.avi
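A small sketch of the passive, non-blocking synchronization idea ("repeat frames instead of sleep"): compare the frame's presentation timestamp with the stream clock and, if the frame is not yet due, simply keep showing the current frame for this display() cycle; names and the tolerance are illustrative:

    // both timestamps in milliseconds; the stream clock (videoSCR) is derived
    // from Platform.currentTimeMillis() as noted above
    static boolean presentNow(final int framePTS, final int videoSCR, final int toleranceMs) {
        final int dt = framePTS - videoSCR;
        return dt <= toleranceMs;   // due (or late): present it; otherwise the caller repeats the last frame
    }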
|
GLPixelAttributes checks arguments (componentCount, format / type)
and the queried bytesPerPixel.
|