| Commit message | Author | Age | Files | Lines |
|
Removed Deprecated Classes:
- com/jogamp/opengl/util/TGAWriter.java
  - Use TextureIO w/ .tga suffix
- com/jogamp/opengl/util/awt/Screenshot.java
  - Use:
    - com.jogamp.opengl.util.GLReadBufferUtil, or
    - com.jogamp.opengl.util.awt.AWTGLReadBufferUtil
      The latter reads into an AWT BufferedImage.
    See: TestBug461FBOSupersamplingSwingAWT, TestBug605FlippedImageAWT
- javax/media/opengl/GLPbuffer.java
  - Use:
      caps.setPBuffer(true);
      final GLAutoDrawable pbuffer = GLDrawableFactory.getFactory( caps.getGLProfile() ).createOffscreenAutoDrawable(null, caps, null, 512, 512);
  - See: TestPBufferDeadlockAWT, ..
Removed Deprecated Methods:
- Constructors of AWT-GLCanvas, SWT-GLCanvas and AWT-GLJPanel
  with argument 'final GLContext shareWith'
  See GLSharedContextSetter, i.e. glCanvas.setSharedContext(..)!
- GLDrawableFactory.createOffscreenAutoDrawable(..)
  with argument 'final GLContext shareWith'
  See GLSharedContextSetter, i.e. offscreenAutoDrawable.setSharedContext(..)!
- GLDrawableFactory.createGLPbuffer(..),
  see above!
- com.jogamp.opengl.util.av.AudioSink 'enqueueData(AudioDataFrame audioDataFrame)',
  use 'enqueueData(int, ByteBuffer, int)'
- GLSharedContextSetter.areAllGLEventListenerInitialized(),
  migrated to GLAutoDrawable!
- GLBase's
  - glGetBoundBuffer(int), use getBoundBuffer(int)
  - glGetBufferSize(int), use getBufferStorage(int).getSize()
  - glIsVBOArrayBound(), use isVBOArrayBound()
  - glIsVBOElementArrayBound(), use isVBOElementArrayBound()
- NEWT MouseEvent.BUTTON_NUMBER, use BUTTON_COUNT
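Illustration - a minimal, self-contained sketch of the GLPbuffer replacement quoted above. It assumes the javax.media.opengl namespace of this JOGL generation; the shared-context call is shown only as a comment, since setSharedContext(..) lives on the concrete GLSharedContextSetter implementations (GLCanvas, GLJPanel, offscreen auto-drawable).
    import javax.media.opengl.GLAutoDrawable;
    import javax.media.opengl.GLCapabilities;
    import javax.media.opengl.GLDrawableFactory;
    import javax.media.opengl.GLProfile;

    public final class OffscreenInsteadOfPbuffer {
        public static GLAutoDrawable createOffscreen(final int width, final int height) {
            final GLProfile glp = GLProfile.getDefault();
            final GLCapabilities caps = new GLCapabilities(glp);
            caps.setPBuffer(true); // request a pbuffer-backed offscreen surface, as before

            // Replaces the removed GLDrawableFactory.createGLPbuffer(..) / GLPbuffer:
            final GLAutoDrawable pbuffer = GLDrawableFactory.getFactory(caps.getGLProfile())
                    .createOffscreenAutoDrawable(null /* default device */, caps, null /* chooser */, width, height);

            // Context sharing is no longer a factory argument; instead (hypothetical 'master'):
            //   pbuffer.setSharedContext(master.getContext());
            return pbuffer;
        }
    }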
|
unboxing
|
is safe!
|
'com.jogamp.opengl.util.stereo' contains all public interfaces/classes
Renamed interfaces:
CustomRendererListener -> CustomGLEventListener
StereoRendererListener -> StereoGLEventListener
New vendor agnostic 'stuff' in com.jogamp.opengl.util.stereo:
1 - StereoDeviceFactory
To create a vendor specific StereoDeviceFactory instance,
which creates the StereoDevice.
2 - StereoDevice
For vendor specific implementation.
Can create StereoDeviceRenderer.
3 - StereoDeviceRenderer
For vendor specific implementation.
4 - StereoClientRenderer
Vendor agnostic client StereoGLEventListener renderer,
using a StereoDeviceRenderer.
Now supports multiple StereoGLEventListener, via add/remove.
- MovieSBSStereo, demo-able via StereoDemo01, can show SBS 3D movies.
|
c47bc86ae2ee268a1f38c5580d11f93d7f8d6e74)
- Change non static accesses to static members using declaring type
- Change indirect accesses to static members to direct accesses (accesses through subtypes)
- Add final modifier to private fields
- Add final modifier to method parameters
- Add final modifier to local variables
- Remove unnecessary casts
- Remove unnecessary '$NON-NLS$' tags
- Remove trailing white spaces on all lines
|
OpenAL/JOAL (works using openal-soft default on all platforms now)
|
NullAudioSink shall return the last enqueued PTS in getPTS(),
so the player's A/V delta measurement is not skewed by seemingly lagging audio.
|
API stability
|
TextureSequence's fragment shader hash-code
Adding TextureSequence.getTextureFragmentShaderHashCode(), allowing use of a cached hash code (performance, interface usability).
Implemented in GLMediaPlayerImpl and ImageSequence.
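Illustration - a hypothetical caching scheme showing what the cached hash code is good for: shader programs compiled for a given texture-lookup fragment shader can be keyed and reused across TextureSequence instances reporting the same hash. Only getTextureFragmentShaderHashCode() is taken from the commit; the cache class itself is made up.
    import java.util.HashMap;
    import java.util.Map;

    import com.jogamp.opengl.util.glsl.ShaderProgram;
    import com.jogamp.opengl.util.texture.TextureSequence;

    final class TexSeqShaderCache {
        private final Map<Integer, ShaderProgram> cache = new HashMap<Integer, ShaderProgram>();

        ShaderProgram getOrNull(final TextureSequence texSeq) {
            // The cached hash code avoids re-hashing the shader source on every lookup.
            return cache.get(texSeq.getTextureFragmentShaderHashCode());
        }

        void put(final TextureSequence texSeq, final ShaderProgram prog) {
            cache.put(texSeq.getTextureFragmentShaderHashCode(), prog);
        }
    }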
|
private package.
jogamp.opengl.util.av.impl.FFMPEGNatives.SampleFormat -> jogamp.opengl.util.av.AudioSampleFormat
jogamp.opengl.util.av.impl.FFMPEGNatives.PixelFormat -> jogamp.opengl.util.av.VideoPixelFormat
.. to be reused for other decoders later-on.
|
method from TextureIO
|
validation (libavutil)
|
test-ntsc01-28x16.png asset ; Generalize TextureSequenceDemo01 -> SingleTextureSeqFrame ; Unit tests use test-data, not assets.
|
when source becomes ready
|
(14:15:13) sgothel: @Xerxes: In doResume .. do a 'while( !isActive && !shallPause && isRunning ) {'
(14:15:52) sgothel: doPause: while( isActive && !shallPause && isRunning )
(14:31:55) sgothel: doPause only: while( isActive && isRunning ) {
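Illustration - a minimal sketch of the wait loops suggested in the quoted discussion. Field and method names follow the quote; the surrounding class, the monitor object and the notify calls are assumptions (the worker thread is expected to notify on the same monitor whenever it toggles isActive).
    final class StreamWorkerPauseSketch {
        private final Object sync = new Object();
        private volatile boolean isRunning = true, isActive = true, shallPause = false;

        void doPause() {
            synchronized (sync) {
                shallPause = true;
                while (isActive && isRunning) {            // 'doPause only' variant from the quote
                    try { sync.wait(); } catch (final InterruptedException ie) { /* retry */ }
                }
            }
        }

        void doResume() {
            synchronized (sync) {
                shallPause = false;
                sync.notifyAll();                           // wake the worker
                while (!isActive && !shallPause && isRunning) {
                    try { sync.wait(); } catch (final InterruptedException ie) { /* retry */ }
                }
            }
        }
    }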
|
FFMPEG Natives:
- Move 'mutex_avcodec_openclose' to local static and initialize at initSymbols0
- setStream0:
- Add another locked mutex block around:
- [ sp_avformat_open_input .. sp_avformat_find_stream_info ]
This solves the issue of:
[NULL @ 0x89d20c60] insufficient thread locking around avcodec_open/close()
|
Issue:
[NULL @ 0x35bde60] insufficient thread locking around avcodec_open/close()
Decorating said libav functions w/ mutex lock/release.
Abstract impl. to either use pthread or JNI Monitor,
but using the latter to reduce dependencies (mingw64 windows).
FFMPEGNatives is now an abstract class containing the
'static final Object mutex_avcodec_openclose'
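Illustration - the Java-side equivalent of the locking described above; in the actual fix the monitor of 'mutex_avcodec_openclose' is entered from native code via JNI, and 'nativeOpenCodec' below is a hypothetical stand-in for the avcodec_open2(..) call path.
    abstract class FFMPEGNativesLockingSketch {
        static final Object mutex_avcodec_openclose = new Object();

        final void openCodecSafely(final long codecCtx, final long codec) {
            synchronized (mutex_avcodec_openclose) { // serialize avcodec_open/close across all players
                nativeOpenCodec(codecCtx, codec);
            }
        }

        protected abstract void nativeOpenCodec(long codecCtx, long codec);
    }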
|
GLMediaEventListener impl. to access GLMediaPlayer associated objects
|
~2 kB)
|
8a032a2c1f247819bdb08382fbebcc4cd896b3f2
|
Signed-off-by: Xerxes Rånby <[email protected]>
|
(camera or other sources may not have duration)
Regression of commit 8a8ed735f6631b2da7bf605c5c3dda4e0fc13905
|
; Fallback for EOS Detection ; MovieSimple uses full GLEventListener for 'Audio Only' as well to test seek
Determine StreamWorker usage after init
- To support audio-only files, we need to determine whether to use StreamWorker
  after stream initialization completes.
Fix seek(..)
- FFMPEG: pos0 needs to use aPTS for audio-only
- Clip target time [0..duration[
Fallback for EOS Detection
In case the backend does not report proper EOS:
- Utilize 'nullFramesCount >= MAX' -> EOS,
  where MAX is the number of frames for 3 s of play duration
  and where 'nullFramesCount' is increased if no valid packet is available
  and no decoded video or audio is in the queue.
- Utilize pts > duration -> EOS
MovieSimple uses full GLEventListener for 'Audio Only' as well to test seek
- Matroska seek for audio-only leads to EOS ..
http://video.webmfiles.org/big-buck-bunny_trailer.webm
- MP4 audio-only seek works
http://download.blender.org/peach/bigbuckbunny_movies/BigBuckBunny_320x180.mp4
MovieSimple/MovieCube:
- Use audio-pts in audio-only to calc target time
Tested:
- A, V and A+V
- Pause, Stop and Seek
- GNU/Linux
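Illustration - a sketch of the EOS fallback described above; field and parameter names are illustrative, not the actual GLMediaPlayerImpl members.
    final class EosFallbackSketch {
        private int nullFrameCount = 0;

        boolean checkEOS(final boolean gotValidPacket, final boolean decodedAVQueued,
                         final float fps, final int pts, final int duration) {
            if (!gotValidPacket && !decodedAVQueued) {
                nullFrameCount++;                         // nothing demuxed, nothing decoded
            } else {
                nullFrameCount = 0;
            }
            final int maxNullFrames = (int) (3f * fps);   // ~3 s worth of frames
            return nullFrameCount >= maxNullFrames        // backend silently starved -> EOS
                || (duration > 0 && pts > duration);      // played past the known duration -> EOS
        }
    }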
|
(seek) - Tested w/ seeking 'Audio Only' and Matroska
Test stream was default of MovieSimple:
http://video.webmfiles.org/big-buck-bunny_trailer.webm
while disabling video (-vid -2)
|
'getNextTexture(..)' is issued here!
Thanks to Xerxes for analyzing this issue thoroughly.
TODO: Implement EOS for 'Audio Only' and test seek, pause, etc .. - Apply manual tests in MovieSimple
|
decode proper file-scheme if applicable - otherwise encoded ASCII URI.
|
Validate isGLES*() usage and definition ; Add and use ShaderCode.createExtensionDirective(..)
- Fix GLES3 Profile Mapping, i.e. GL2ES2 queries and mappings
- GLProfile: Add GL2ES2 -> ES3 mapping
- EGLContext: Request major '3' for ES3
- EGLGLCapabilities/EGLGraphicsConfiguration: Consider EGLExt.EGL_OPENGL_ES3_BIT_KHR
- Validate isGLES*() usage and definition
- Fix BuildComposablePipeline's isGLES() code
- For GLSL related queries use isGLES() instead of isGLES2(),
which would exclude ES3
- Add and use ShaderCode.createExtensionDirective(..)
- Supporting creating GLSL extension directives while reusing strings from GLExtensions
- Minor cleanup of GLContextImpl.setGLFuncAvail(..)
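Illustration - the two points above in one small sketch: branch on isGLES() rather than isGLES2() so ES3 contexts are included, and build the '#extension' line via the new helper. The exact createExtensionDirective(..) parameters and the 'enable' behaviour token are assumptions here; in JOGL the extension name would normally come from the GLExtensions string constants.
    import javax.media.opengl.GL;

    import com.jogamp.opengl.util.glsl.ShaderCode;

    final class GlesShaderPrologueSketch {
        static String externalTexturePrologue(final GL gl) {
            if (!gl.isGLES()) {           // isGLES2() would wrongly exclude ES3 contexts
                return "";
            }
            // e.g. "#extension GL_OES_EGL_image_external : enable\n"
            return ShaderCode.createExtensionDirective("GL_OES_EGL_image_external", "enable");
        }
    }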
|
multiple media textures (Android) or shared GL context are not usable.
- GLMediaPlayer:
- TEXTURE_COUNT_MIN is the new minimum: '1' - i.e. no multithreading, single threaded player
- TEXTURE_COUNT_DEFAULT is '4' - multithreaded
- GLMediaPlayerImpl:
- Add single-threaded mode, but perform initStreamImpl(..) off-thread.
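Illustration - choosing between the single-threaded and multi-threaded decoding modes via the texture count constants above. The initStream(..) signature and STREAM_ID_AUTO reflect the URI-based GLMediaPlayer API of this era and should be treated as an assumption.
    import java.net.URI;

    import com.jogamp.opengl.util.av.GLMediaPlayer;
    import com.jogamp.opengl.util.av.GLMediaPlayerFactory;

    final class PlayerInitSketch {
        static GLMediaPlayer init(final URI stream, final boolean singleThreaded) {
            final GLMediaPlayer mp = GLMediaPlayerFactory.createDefault();
            final int texCount = singleThreaded
                    ? GLMediaPlayer.TEXTURE_COUNT_MIN      // 1: no StreamWorker, single-threaded
                    : GLMediaPlayer.TEXTURE_COUNT_DEFAULT; // 4: decode off-thread
            mp.initStream(stream, GLMediaPlayer.STREAM_ID_AUTO, GLMediaPlayer.STREAM_ID_AUTO, texCount);
            return mp;
        }
    }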
|
GLCapabilitiesChooser instead of GLProfile, allowing use of the same or similar caps - important for sharing ctx
|
Signed-off-by: Harvey Harrison <[email protected]>
|
Signed-off-by: Harvey Harrison <[email protected]>
|
Signed-off-by: Harvey Harrison <[email protected]>
|
40863632d1428de015099b5967e5136425e99f25), throw IllegalArgumentException if ordinal is out-of-range. Add API doc.
- FFMPEGNatives
- MouseEvent.PointerType
|
FFMPEGNatives's Enums and new MouseEvent.PointerType.valueOf(int)
|
up warnings and includes (clang).
|
compiled for it -> pass
Scenario ffmpeg-0.10, where we are not prepared (compiled-in) for sw-resample support.
Don't use it if the compiled-in version (CC) is < 0 (n/a), and allow the check to pass at load time.
|
next frame after play() will provide new frame. Added API doc.
|
orientation change (flipped), API-doc,
- State
- Fix state transition (initGL() error)
- Camera options
- options use ';' as the query separator
- don't use 'default' options, driver should know
- Detect and act on orientation change (flipped)
- ffmpeg impl detects if 'flipped' changes and triggers a SIZE update event.
  This allows the application to react, i.e. re-init GL and use the new TextureCoords.
Test: Works well on Windows w/ rawvideo dshow camera driver/codec.
- API-doc
- TexSeqEventListener/GLMediaEventListener usage / constraints (GL, ..)
- State transition fix
|
'camera ID'
If linesize is < 0, it is not invalid as assumed in commit eca6a5cb1e2beda84dfbafc31ed225e272f4f3fb,
but vertically flipped (bottom-up).
We have to adjust the data pointers, which are then moved to the upper end of memory as well,
and we can proceed as usual.
TODO:
- Update texture 'mustFlipVertically' to 'false' in this case.
- Later:
- Allow updating texture size ..
- Whole pixel-fmt/texture-lookup-shader association must scale better,
i.e. extract the 'knowledge' into one class, use a static shader code
using uniforms instead of hard-coded values .. etc.
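Illustration - what a negative 'linesize' (stride) means for plain memory access, as described above: the plane pointer addresses the first image row, which sits at the upper end of the buffer, and stepping by the negative stride walks the image top-to-bottom while moving towards lower addresses. The helper below is a made-up example for a one-byte-per-pixel plane.
    final class BottomUpPlaneSketch {
        /** Copies a bottom-up plane (negative stride) into a plain top-down byte[]. */
        static byte[] toTopDown(final byte[] buf, final int firstRowOffset,
                                final int stride /* negative */, final int width, final int height) {
            final byte[] out = new byte[width * height];
            for (int y = 0; y < height; y++) {
                // stride < 0: each step moves one image row down, one memory row up
                System.arraycopy(buf, firstRowOffset + y * stride, out, y * width, width);
            }
            return out;
        }
    }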
|
windows, 2 more pixel formats, fail-safe data handling
- add support for ffmpeg 2 / libav 10 -> lavc55_lavf55_lavu52_lavr01
- add support for ffmpeg libswresample (similar to libavresample)
- handle BGRA (GL type) and BGR24 (texture shader)
- Change Camera URI semantics, drop 'host' and use 'path' for camera ID
and use 'query' for options.
- add support for Window's DShow camera selection
- our camera id -> index of list of video-input devices,
this gives us same behavior as w/ Linux
- requires windows libs: strmiids, uuid, ole32, oleaut32
- Compiles w/ MingW64, works w/ libav/ffmpeg
- TODO: test compilation w/ MingW 32bit !
- don't push data to texture if (linesize <= 0)
this may happen due to buggy decoder / setup ..
Tested manually on GNU/Linux x64 and Windows x64:
- GNU/Linux libav 0.8, libav 9, libav 10, ffmpeg 1.2, ffmpeg 2.0
- Windows libav 0.8, libav 9, ffmpeg 2.0
- videos and camera
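Illustration - the camera URI layout described above: scheme 'camera', the camera id in the path (an index into the video-input device list on Windows/DShow, /dev/video<id> on Linux), and options in the query separated by ';'. The option keys shown (video_size, framerate) are taken from the TODO further down this log and are illustrative only.
    import java.net.URI;
    import java.net.URISyntaxException;

    final class CameraUriSketch {
        static URI firstCamera() throws URISyntaxException {
            // hypothetical options for camera 0
            return new URI("camera:/0?video_size=640x480;framerate=30");
        }
    }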
|
def. high camera options, cleanup symbols)
- Fix libav/ffmpeg compilation
- Split native GLContext code from JoglCommon
- JoglCommon is required for ffmpeg_* c-compile/link
- Supported versions now:
- 0.8 53.53.51
- 9.0 54.54.52
- FFMPEGMediaPlayer
- Update API doc, add compatibility .. etc
- Pixel format conversions (via shader texture lookup func):
- YUV420P, YUVJ420P
- YUV422P, YUVJ422P
- YUYV422
- Properly handle aid/vid
- In camera mode: set high default values
- TODO: Make it configurable via camera URI:
- video_size
- framerate
- ?
- FFMPEGDynamicLibraryBundleInfo
- Cleanup symbols / remove unused (pre 53)
- Add av_dict_* methods
|
|
for float values)
|
missing symbol 'av_realloc'.
- Add camera input
- Use a URI w/ scheme 'camera' to indicate that camera input is desired,
  using the URI host as the camera id.
  E.g. 'camera://0' for the 1st camera.
- AndroidGLMediaPlayerAPI14: Via 'Camera'
- FFMPEG*: Via libavdevice, device name and input format
- TODO: Add controls to manipulate camera if available
- FFMPEG*
- Add symbols
- avcodec_register_all
- av_realloc (was missing)
- avdevice_register_all
- Load libavdevice (opt)
- Camera:
- Use <ID> (windows) and /dev/video<ID> on other OSes
- simply find the input format in native code
- Support YUYV422 (used in video4linux2, etc.)
- Stuff 2x 16bpp (YUYV) into one RGBA pixel!
- Add texture format for 16bpp
- Add texture lookup shader
- Fix av_packet leak in readNextImpl(..)
- Restore orig pointer and size values,
we may have moved along within packet.
Then call av_free_packet().
- Use null AudioSink if audio-id is NONE
|
|
FFMPEGNatives -> FFMPEGv08Natives + FFMPEGv09Natives
Enables FFMPEGMediaPlayer to work w/ either ffmpeg/libav version 8 or 9 w/ same JOGL binary
Same C source code is compiled against
1: version 0.8 FFMPEGv08Natives lavc53.lavf53.lavu51
2: version 0.9 FFMPEGv09Natives lavc54.lavf54.lavu52.lavr01
FFMPEGv08Natives and FFMPEGv09Natives implement FFMPEGNatives;
the native C code uses CPP '##' macro concatenation to produce unique function names.
To enable 'cpp' to find the libav* header files matching the desired version,
we have placed them in the c-file's folder, issued '#include "path/file.h"'
and added symbolic links to allow finding the same module and its 'sister modules':
ls -l libavformat/
..
lrwxrwxrwx 1 sven sven 13 Aug 26 12:56 libavcodec -> ../libavcodec
lrwxrwxrwx 1 sven sven 14 Aug 26 12:56 libavformat -> ../libavformat
lrwxrwxrwx 1 sven sven 12 Aug 26 12:57 libavutil -> ../libavutil
..
At static init, FFMPEGDynamicLibraryBundleInfo determines the runtime version
and instantiates the matching FFMPEGNatives, or null if none matches.
FFMPEGMediaPlayer still compares the compile-time and runtime versions.
FFMPEGMediaPlayer passes its own instance to FFMPEGNatives for callbacks.
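Illustration - a sketch of the runtime dispatch described above. The version check against the lavc major version is an assumption standing in for what FFMPEGDynamicLibraryBundleInfo actually queries at static init; only the binding class names are taken from the commit.
    final class NativesSelectionSketch {
        static Object selectNatives(final int lavcMajor) {
            switch (lavcMajor) {
                case 53: return newInstanceOrNull("jogamp.opengl.util.av.impl.FFMPEGv08Natives"); // libav/ffmpeg 0.8
                case 54: return newInstanceOrNull("jogamp.opengl.util.av.impl.FFMPEGv09Natives"); // libav/ffmpeg 0.9
                default: return null;  // no matching compiled-in binding
            }
        }

        private static Object newInstanceOrNull(final String className) {
            try {
                return Class.forName(className).newInstance();
            } catch (final Exception e) {
                return null;
            }
        }
    }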
|
negotiation w/ AudioSink; Misc
- Add libavresample support
- Resample if avail && (!AV_SAMPLE_FMT_S16 || !prefSampleRate || !sinkSupported)
- Resample to: prefSampleRate (if set), AV_SAMPLE_FMT_S16 and min(channelCount, maxChannelCount)
- Proper AudioFormat negotiation w/ AudioSink;
- Utilize AudioSink's 'isSupported(AudioFormat)'
- Misc
- use 'av_get_bytes_per_sample(fmt)' always, don't assume 2
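Illustration - the resampling decision described above, reduced to plain primitives so it stays self-contained; the real code works on the AudioSink/AudioFormat types and uses isSupported(AudioFormat) for the 'sinkSupported' input.
    final class AudioNegotiationSketch {
        /** Resample if not S16, if a preferred rate is requested and not met, or if the sink rejects the format. */
        static boolean needsResample(final boolean isS16, final int sampleRate,
                                     final int prefSampleRate /* 0 = none */, final boolean sinkSupported) {
            return !isS16
                || (prefSampleRate != 0 && prefSampleRate != sampleRate)
                || !sinkSupported;
        }

        /** Never exceed the channel count the sink can handle. */
        static int targetChannelCount(final int channelCount, final int maxChannelCount) {
            return Math.min(channelCount, maxChannelCount);
        }
    }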
|