video_core: Update texture format

constexpr static to static constexpr for consistency
Signed-off-by: arades79 <scravers@protonmail.com>

where possible
Signed-off-by: arades79 <scravers@protonmail.com>

Visual Studio has an option to search all files in a solution, so I did a search there for "default:" looking for any missing break statements.
I've left out default statements that return something or throw something, even if via ThrowInvalidType: UNREACHABLE leads towards a throw, and the R_THROW macro leads towards a return.
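For illustration, a minimal sketch of the pattern being audited (the names and the throwing default are hypothetical, not from the commit): a case label that needs its break, next to a default that throws and therefore does not.

```cpp
#include <stdexcept>

enum class Kind { Baz, Other };

void HandleBaz() { /* handle the Baz case */ }

void Dispatch(Kind kind) {
    switch (kind) {
    case Kind::Baz:
        HandleBaz();
        break; // a missing break here would fall through into default
    default:
        // Terminates via an exception, so no break is needed after it;
        // UNREACHABLE and R_THROW end the same way (throw or return).
        throw std::logic_error("unreachable");
    }
}
```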
Resolves C4146 on MSVC
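C4146 is MSVC's warning for applying unary minus to an unsigned type ("result still unsigned"). A hedged sketch of the usual shape of such a fix, not the commit's actual change:

```cpp
#include <cstdint>

// MSVC warns (C4146) on code like:
//   uint32_t mask = -alignment; // unary minus on unsigned
// Writing the two's complement explicitly expresses the same value
// without triggering the warning:
uint32_t NegatedMask(uint32_t alignment) {
    return ~alignment + 1; // equivalent to -alignment for unsigned types
}
```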
video_core: Modify astc texture decode error fill value

Given the issues with GPU-accelerated ASTC decoding on NVIDIA's latest drivers, parallelize ASTC decoding on the CPU.
Uses half the available threads in the system for ASTC decoding.
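A minimal sketch of the described scheme, assuming a hypothetical per-range DecodeSlice helper; the real decoder's interface differs.

```cpp
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical helper decoding blocks [begin, end); illustrative only.
void DecodeSlice(std::size_t begin, std::size_t end) { /* decode blocks */ }

void ParallelDecode(std::size_t num_blocks) {
    // Use half the available hardware threads, as the commit describes.
    const std::size_t num_threads =
        std::max<std::size_t>(1, std::thread::hardware_concurrency() / 2);
    const std::size_t per_thread = (num_blocks + num_threads - 1) / num_threads;
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < num_threads; ++i) {
        const std::size_t begin = i * per_thread;
        const std::size_t end = std::min(num_blocks, begin + per_thread);
        if (begin >= end) {
            break;
        }
        workers.emplace_back(DecodeSlice, begin, end);
    }
    for (auto& worker : workers) {
        worker.join();
    }
}
```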
Follow-up to 99ceb03a1cfcf35968cab589ea188a8c406cda52

This formats all copyright comments according to SPDX formatting guidelines.
Additionally, this resolves the remaining GPL-2.0-only licensed files by relicensing them to GPL-2.0-or-later.
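For reference, an SPDX-formatted header of the kind this produces looks roughly like the following; the copyright holder and year vary per file.

```cpp
// SPDX-FileCopyrightText: Copyright 2021 yuzu Emulator Project
// SPDX-License-Identifier: GPL-2.0-or-later
```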
vk_blit_screen: Fix non-accelerated texture size calculation

`via_header_index` is already checked above, so it would never be true in this branch.

Addresses the potential OOB access in UnswizzleTexture.

astc_decoder: Various performance and memory optimizations

This makes UnswizzleTexture up to two times faster. It is the main bottleneck in NVDEC video decoding.

This buffer was a list of EncodingData structures sorted by their bit length, with some duplication from the CPU decoder implementation.
We can take advantage of its sorted property to optimize its usage in the shader.
Thanks to wwylele for the optimization idea.
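A hedged sketch of what a sorted table enables (EncodingData's actual layout and the shader-side code differ; the field name here is assumed): entries for a given bit length can be located by binary search rather than a linear scan.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct EncodingData {
    uint32_t bit_length; // sort key; field name assumed for illustration
    uint32_t value;
};

// Index of the first entry whose bit_length is >= target; because the
// table is sorted, every entry from here on has at least that many bits.
std::size_t FirstWithBitLength(const std::vector<EncodingData>& sorted_table,
                               uint32_t target) {
    const auto it = std::lower_bound(
        sorted_table.begin(), sorted_table.end(), target,
        [](const EncodingData& entry, uint32_t len) {
            return entry.bit_length < len;
        });
    return static_cast<std::size_t>(it - sorted_table.begin());
}
```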
Moves leftover values that are no longer used by the GPU decoder back to the C++ implementation.

We can instead make them compile-time constants within the shader.

These changes should help reduce the crashes and driver panics that may occur due to synchronization issues between shader completion and later access of the decoded texture.
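A hedged sketch, in Vulkan terms, of the kind of synchronization at issue (not the commit's code; names are illustrative): a barrier that makes the compute shader's writes visible before the decoded texture is read.

```cpp
#include <vulkan/vulkan.h>

// Ensure the compute pass that decodes the texture completes and its
// writes are visible before a later pass samples the image.
void BarrierAfterDecode(VkCommandBuffer cmd, VkImage decoded_image) {
    VkImageMemoryBarrier barrier{};
    barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
    barrier.oldLayout = VK_IMAGE_LAYOUT_GENERAL;
    barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
    barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
    barrier.image = decoded_image;
    barrier.subresourceRange = {VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1};
    vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                         VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT, 0,
                         0, nullptr, 0, nullptr, 1, &barrier);
}
```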
Per the spec, L1 is clamped to the value 0xff if it is greater than 0xff. An oversight caused us to take the maximum of L1 and 0xff, rather than the minimum.
Huge thanks to wwylele for finding this.
Co-Authored-By: Weiyi Wang <wwylele@gmail.com>
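The fix in essence, as a sketch (variable name assumed): clamping to an upper limit means taking the minimum against it, not the maximum.

```cpp
#include <algorithm>
#include <cstdint>

uint32_t ClampL1(uint32_t l1) {
    return std::min<uint32_t>(l1, 0xff); // std::max here would never clamp
}
```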
Users may want to fall back to the CPU ASTC texture decoder due to hangs and crashes that may be caused by keeping the GPU under compute-heavy loads for extended periods of time. This is especially the case in games such as Astral Chain, which make extensive use of ASTC textures.

`continue` causes a memory leak in A Hat in Time.

This is not a real fix, so assert here and continue before crashing.

- Removes a dependency on core and input_common from common.

Co-Authored-By: Rodrigo Locatti <reinuseslisp@airmail.cc>

ASTC texture decoding is currently handled by a CPU decoder for GPUs without native ASTC decoding support (most desktop GPUs). This is the cause of noticeable performance degradation in titles that use the format extensively.
This commit adds support for accelerating ASTC decoding using a compute shader on OpenGL, for GPUs without native support.
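A hedged sketch of how such a compute-shader decode is dispatched in OpenGL; the binding point, image format, group counts, and loader header are illustrative, not the commit's actual values.

```cpp
#include <glad/glad.h>

void DispatchAstcDecode(GLuint program, GLuint output_texture,
                        GLuint num_groups_x, GLuint num_groups_y) {
    glUseProgram(program);
    // Bind the target texture as a writable image for the compute shader.
    glBindImageTexture(0, output_texture, 0, GL_FALSE, 0, GL_WRITE_ONLY,
                       GL_RGBA8);
    glDispatchCompute(num_groups_x, num_groups_y, 1);
    // Make the image writes visible to subsequent texture fetches.
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
}
```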
AlignUpLog2 describes what the function does better than AlignBits.
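A sketch of what the renamed function computes, assuming the conventional implementation (the repository's exact definition may differ): round a value up to the next multiple of 2^align_log2.

```cpp
#include <cstddef>
#include <cstdint>

constexpr uint64_t AlignUpLog2(uint64_t value, std::size_t align_log2) {
    return (value + (uint64_t{1} << align_log2) - 1) >> align_log2 << align_log2;
}

static_assert(AlignUpLog2(5, 3) == 8);   // rounds 5 up to a multiple of 8
static_assert(AlignUpLog2(16, 4) == 16); // already aligned values are kept
```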
Invalid ASTC textures seem to write more bytes here; increase the size to something that cannot push us out of bounds.

Avoid out-of-bounds reads on invalid ASTC textures.
Games can bind invalid textures that make us read or write out of bounds.
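A hedged sketch of the defensive pattern described, with illustrative names: clamp sizes derived from guest texture parameters so a malformed texture cannot drive reads past the end of the source buffer.

```cpp
#include <algorithm>
#include <cstddef>
#include <span>

// Largest amount that can be safely read starting at `offset`.
std::size_t SafeCopySize(std::span<const unsigned char> input,
                         std::size_t offset, std::size_t requested) {
    if (offset >= input.size()) {
        return 0; // nothing safe to read
    }
    return std::min(requested, input.size() - offset);
}
```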