| Commit message | Author | Age | Files | Lines |
|
|
|
|
|
| |
reduce buffer usage (performance) in favor of float[].
Thomas De Bodt reported this error and provided the unit test.
|
|
|
|
|
|
|
|
| |
(GL3), use GL3.2 compatible shader; Use VBO in general.
Covered by:
Auto unit tests: TestOffscreenLayer01GLCanvasAWT, TestOffscreenLayer02NewtCanvasAWT
Manual: TestGearsES2AWT '-gl3 -layered'
|
|
|
|
| |
add forceGL3; TextureDraw01ES2Listener uses defaultShaderCustomization()
|
|
|
|
| |
for success.
|
|
|
|
| |
CGL/CGLExt Robustness ..
|
|
|
|
| |
internal APIs, critical array is not required, hence redundant.
|
|
|
|
| |
internal APIs, critical array is not required, hence redundant.
|
|
|
|
| |
internal APIs, critical array is not required, hence redundant.
|
|
|
|
| |
internal APIs, critical array is not required, hence redundant.
|
|
|
|
| |
- Setting up a default VAO for all GL >= 3.2 core contexts.
Refines commit 9b6448b1d54716fd455c0cad0c6133c0edeb3bb8
Due to GL 3.2 core spec: E.2. DEPRECATED AND REMOVED FEATURES (p 331)
"There is no more default VAO buffer 0 bound, hence generating and binding one
to avoid INVALID_OPERATION at VertexAttribPointer."
Clearer is the GL 4.3 core spec, 10.4 (p 307):
"An INVALID_OPERATION error is generated by any commands which
modify, draw from, or query vertex array state when no vertex array is bound.
This occurs in the initial GL state, and may occur as a result of BindVertexArray
or a side effect of DeleteVertexArrays."
+++
I have just read (same spec) 2.10 (p 46/47):
"An INVALID_OPERATION error is generated if any of the *Pointer commands
specifying the location and organization of vertex array data are called while zero
is bound to the ARRAY_BUFFER buffer object binding point, and the pointer
argument is not NULL."
.. which only constrains *Pointer command use to a bound _VBO_, not forcing a VAO.
+++
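For illustration, a minimal sketch (not the actual JOGL code; class and method names are hypothetical, and older JOGL releases use the javax.media.opengl package instead of com.jogamp.opengl) of what generating and binding a default VAO before the *Pointer calls amounts to with the GL3 API:

    import com.jogamp.opengl.GL;
    import com.jogamp.opengl.GL3;

    /** Sketch: on a >= 3.2 core context there is no default vertex array object,
     *  so a VAO must be generated and bound before glVertexAttribPointer,
     *  otherwise INVALID_OPERATION is raised. */
    final class CoreProfileVAOSketch {
        static int bindDefaultVAO(final GL3 gl) {
            final int[] tmp = new int[1];
            gl.glGenVertexArrays(1, tmp, 0);   // generate one VAO name ..
            gl.glBindVertexArray(tmp[0]);      // .. and bind it
            return tmp[0];
        }

        static void setupAttribute(final GL3 gl, final int loc, final int vbo) {
            gl.glBindBuffer(GL.GL_ARRAY_BUFFER, vbo);                    // *Pointer w/ offset requires a bound VBO
            gl.glVertexAttribPointer(loc, 3, GL.GL_FLOAT, false, 0, 0);  // legal now that a VAO is bound
            gl.glEnableVertexAttribArray(loc);
        }
    }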
|
|
|
|
| |
simple major version number check.
|
|
|
|
| |
w/ higher GLSL versions
|
|
|
|
| |
except for sampler2D (mediump instead of lowp)
|
|
|
|
| |
GLContextImpl: Bind a default VAO if the quirk RequiresBoundVAO is present.
The OSX OpenGL >= 3 core context implementation requires a bound VAO for vertex attribute operations,
i.e. VertexAttribPointer(..). This has been experienced on OSX 10.7.5, OpenGL 3.2 core w/ Nvidia GPU
and in several forum posts. Such 'behavior' violates the GL 3.2 core specification,
which does not state this requirement, hence it is a bug. (Please correct me if I am wrong!)
GLContextImpl works around this quirk by generating a default VAO and binding it at the 1st makeCurrent (at creation),
and deleting it at destroy. This is minimally invasive since no action is required for subsequent makeCurrent or release.
We assume that a user who uses and binds a VAO herself will mind this quirk.
Note: We could enhance this workaround by querying for a currently bound VAO at makeCurrent() and binding our default if none is bound.
However, we refrain from this to minimize the workaround's footprint and complexity.
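Sketched below (hypothetical names, not the actual GLContextImpl internals; older JOGL releases use the javax.media.opengl package) is the quirk-guarded lifecycle described above: generate and bind the default VAO once at the first makeCurrent, delete it at destroy, and do nothing in between:

    import com.jogamp.opengl.GL3;

    final class BoundVAOQuirkSketch {
        private final boolean requiresBoundVAO;  // e.g. derived from the renderer quirks
        private int defaultVAO;                  // 0 == none

        BoundVAOQuirkSketch(final boolean requiresBoundVAO) {
            this.requiresBoundVAO = requiresBoundVAO;
        }

        /** Call once, right after the context has been made current for the first time. */
        void onFirstMakeCurrent(final GL3 gl) {
            if( requiresBoundVAO && 0 == defaultVAO ) {
                final int[] tmp = new int[1];
                gl.glGenVertexArrays(1, tmp, 0);
                defaultVAO = tmp[0];
                gl.glBindVertexArray(defaultVAO);  // stays bound; later makeCurrent/release need no action
            }
        }

        /** Call while the context is still current, just before it is destroyed. */
        void onDestroy(final GL3 gl) {
            if( 0 != defaultVAO ) {
                gl.glDeleteVertexArrays(1, new int[] { defaultVAO }, 0);
                defaultVAO = 0;
            }
        }
    }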
|
| |
|
|
|
|
| |
GLSL version and default precision (if GLES) - Used by GearsES2/RedSquare/PointDemo (Made GLSL version proof)
|
|
|
|
|
|
|
|
|
| |
string (for shader programs)
Uses GL_SHADING_LANGUAGE_VERSION and parses it via VersionNumber, with a static fallback
based on the GL context version.
The value is valid and can be retrieved after the context has been made current once.
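As an illustration (hypothetical helper; a plain regex stands in here for JOGL's VersionNumber parsing), querying and parsing the GLSL version once the context is current could look like this:

    import com.jogamp.opengl.GL2ES2;  // javax.media.opengl.GL2ES2 in older JOGL releases

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    /** Sketch of deriving a GLSL major/minor version from GL_SHADING_LANGUAGE_VERSION;
     *  the string is only available once the context has been made current.
     *  A typical value is "1.50 NVIDIA via Cg compiler". */
    final class GLSLVersionSketch {
        private static final Pattern MAJOR_MINOR = Pattern.compile("(\\d+)\\.(\\d+)");

        static int[] queryGLSLVersion(final GL2ES2 gl) {
            final String s = gl.glGetString(GL2ES2.GL_SHADING_LANGUAGE_VERSION);
            if( null != s ) {
                final Matcher m = MAJOR_MINOR.matcher(s);
                if( m.find() ) {
                    return new int[] { Integer.parseInt(m.group(1)), Integer.parseInt(m.group(2)) };
                }
            }
            return null; // caller falls back to a mapping from the GL context version
        }
    }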
|
| |
|
|
|
|
| |
EXT_packed_depth_stencil extension
|
|
|
|
| |
(gl_PointCoords n/a otherwise); Add FFP Emul point test in TestPointNEWT/PointDemoES1.
|
|
|
|
| |
fourth element was invalid
|
| |
| |
All *Pointer methods used 'normalized:=false', but we cannot assume
that the fixed-function client code uses normalized (0f..1f) values.
On the contrary, it usually uses the native format's value range.
Hence we have to pass normalized:=true for all fixed-point data types
and normalized:=false for floating-point data types.
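A sketch of that rule (illustrative helper, not the actual fixed-function emulation code; assumes a VBO is bound for the offset-based *Pointer call):

    import com.jogamp.opengl.GL;
    import com.jogamp.opengl.GL2ES2;  // javax.media.opengl.* in older JOGL releases

    final class NormalizedPointerSketch {
        static boolean useNormalized(final int glComponentType) {
            switch( glComponentType ) {
                case GL.GL_BYTE:
                case GL.GL_UNSIGNED_BYTE:
                case GL.GL_SHORT:
                case GL.GL_UNSIGNED_SHORT:
                case GL.GL_FIXED:
                    return true;   // fixed-point: let GL scale the native range to 0..1 / -1..1
                default:
                    return false;  // GL_FLOAT etc.: use the values as-is
            }
        }

        static void setPointer(final GL2ES2 gl, final int loc, final int comps,
                               final int type, final int stride, final long offset) {
            gl.glVertexAttribPointer(loc, comps, type, useNormalized(type), stride, offset);
        }
    }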
|
| |
GL_POINT_SOFT and dist/fade attenuation (adding basic POINT unit tests)
gl_PointSize (and all other uniform array elements) was not propagated due to wrong usage of the GLUniformData component param.
For efficiency, we now use vec4[2] and #defines in the shader to ease readability.
GL_POINT_SOFT uses gl_PointCoord to determine the inside/outside-circle position
while adding a seam of 10% in/out. This almost matches 'other' implementations and gives a nice smooth circle.
!GL_POINT_SOFT produces a proper square (billboard).
The point vertex shader takes dist/fade attenuation into account.
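For illustration, a hypothetical fragment-shader excerpt of the inside/outside-circle test with a ~10% smooth seam around the point radius, embedded as a Java shader-source constant (names such as vColor are made up, not the demo's actual shader):

    final class PointSoftShaderSnippetSketch {
        /** GLSL fragment snippet; 'vColor' is a hypothetical varying/input color. */
        static final String POINT_SOFT_CIRCLE_TEST =
            "  vec2  pc    = gl_PointCoord - vec2(0.5);\n" +           // sprite coords, center at origin
            "  float dist  = length(pc);\n" +                          // 0.0 at center, 0.5 at the edge
            "  float alpha = 1.0 - smoothstep(0.45, 0.55, dist);\n" +  // ~10% smooth seam at radius 0.5
            "  if( alpha <= 0.0 ) { discard; }\n" +                    // outside the circle
            "  gl_FragColor = vec4(vColor.rgb, vColor.a * alpha);\n";
    }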
|
| |
| |
allowing passing of a GL action w/ a custom argument and return value.
Adding a simple POINTS shader, not regarding POINTS parameters and not using a texture (commented out).
FIXME:
Even though it works using a texture and gl_PointCoord in the frag shader,
I don't see the point here (lol) if gl_PointSize must be 1.0 in the vert shader ..
otherwise nothing is seen on ES2.0.
On Desktop, POINTS are always shown as 1-pixel-sized points!
|
|
|
|
|
|
| |
twice (duh!) almost halved performance :)
TODO: Create GL_POINT texture and render w/ glDraw*()
|
|
|
|
|
|
| |
ES1 impl. detection
'glBegin' is not ES1, duh!
|
| |
|
|
|
|
|
|
| |
found in ES1 library
This is the case in the BCM-VC-IV blobs, tested on the Raspberry Pi
|
|
|
|
| |
discarding pixels of culled faces.
|
|
|
|
| |
according to its usage (update Mvi/Mvit only if lighting is being used)
|
| |
|
|
|
|
| |
resize element count
|
|
|
|
|
|
|
|
| |
and size gross-net > PAGE_SIZE
Usually PAGE_SIZE is written within one DMA transfer command,
so if the gross buffer's bulk transfer contains more unused data than PAGE_SIZE,
we may win by transferring each single buffer at buffer update.
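A tiny sketch of that heuristic (hypothetical names; 4096 is just an assumed typical page size, not a value taken from the commit):

    final class BufferTransferHeuristicSketch {
        static final int PAGE_SIZE = 4096;  // assumed typical page size in bytes

        /** True if the unused span of the interleaved gross buffer exceeds one page,
         *  i.e. transferring each component buffer individually is likely cheaper
         *  than one bulk upload of the whole gross buffer. */
        static boolean transferBuffersIndividually(final int grossBytes, final int netBytes) {
            return grossBytes - netBytes > PAGE_SIZE;
        }
    }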
|
|
|
|
| |
imm. gl* functions; Default color padding is 1f; Make fields private.
|
|
|
|
| |
glColor4f() more efficient, use pre-alloc NIO buffer
|
| |
|
| |
|
|\ |
|
| |
| |
| |
| |
| |
| | |
A missing format specifier for the first argument would lead to this throwing an IllegalFormatException.
Signed-off-by: Harvey Harrison <[email protected]>
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
It is impossible to use this method, as it will get into an infinite loop: it
just calls itself. Base the implementation on the contains method shortly before
this method. As this method is impossible to actually use, it could also just
be removed.
Signed-off-by: Harvey Harrison <[email protected]>
|
| |
| |
| |
| |
| |
| |
| |
| | |
The readlong() method is attempting to build a 64-bit value from two 32-bit reads.
The problem is that shifting an int only uses the lower 5 bits of the shift value,
so << 32 is the same as << 0. Cast to long and restore the original intention.
Signed-off-by: Harvey Harrison <[email protected]>
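A sketch of the described fix (the surrounding class and the order of the two halves are illustrative, not the actual patched code): widen each 32-bit read to long before shifting, since an int shift count is taken modulo 32, i.e. (i << 32) == i.

    import java.io.DataInput;
    import java.io.IOException;

    final class ReadLongSketch {
        static long readLong(final DataInput in) throws IOException {
            final long lo = in.readInt() & 0xffffffffL;  // lower 32 bits, kept unsigned
            final long hi = in.readInt() & 0xffffffffL;  // upper 32 bits, kept unsigned
            return (hi << 32) | lo;                      // the shift now happens on a long
        }
    }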
|
|/
|
|
| |
properties, drawIndices QUAD w/ proper range and add uint; FixedFunctionHook: drawIndices QUAD w/ proper range and add uint
|
| |
|
|
|
|
| |
code: Remove precision for default precision types.
|
| |
|
|
|
|
| |
ShaderSelectionMode.AUTO (good for mobile); Lazy shader instantiation.
|
|
|
|
| |
becomes a public method of JoglVersion
|
|
|
|
| |
923d9dd7f1d40db72d35ca76a761ca14babf147f
We are aware that Google's ANGLE (a Windows EGL/ES2 implementation based on D3D)
crashes when using eglInitialize(..) w/ EGL_DEFAULT_DISPLAY.
Commit 923d9dd7f1d40db72d35ca76a761ca14babf147f moved the EGL device initialization
into the EGLDrawableFactory ctor and hence bypassed the ANGLE workaround that disables it by default.
- Moving property static flags from GLProfile -> GLDrawableFactory
- Moving the ANGLE workaround right into EGLDrawableFactory (where it belongs)
- Moving the optional EGL/ES disable code to GLDrawableFactory (where it belongs)
Tested on Windows w/ 32-bit Java and the latest Chrome ANGLE DLLs
|