- move wuffs code from src/ to pkg/
- eliminate stray debug logs
- make logs a warning instead of an error
- remove initialization of some structs to zero
Improve the performance of Kitty graphics by switching to the WUFFS
library for decoding PNG images and for "swizzling" G, GA, and RGB data
to RGBA formats needed by the renderers.
WUFFS claims 2-3x performance benefits over other implementations, as
well as memory-safe operations.
Although not thoroughly benchmarked, performance is on par with Kitty's
graphics decoding.
https://github.com/google/wuffs
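For illustration, here is a plain-Zig sketch of what the GA-to-RGBA
"swizzle" means conceptually. The real conversion goes through WUFFS
rather than a hand-rolled loop, and the function name here is made up:

```zig
const std = @import("std");

/// Conceptual sketch only: expand gray+alpha (GA) pixels to RGBA by
/// replicating the gray value into the R, G, and B channels.
fn gaToRgba(alloc: std.mem.Allocator, ga: []const u8) ![]u8 {
    const pixels = ga.len / 2;
    const rgba = try alloc.alloc(u8, pixels * 4);
    for (0..pixels) |i| {
        const g = ga[i * 2 + 0];
        const a = ga[i * 2 + 1];
        rgba[i * 4 + 0] = g; // R
        rgba[i * 4 + 1] = g; // G
        rgba[i * 4 + 2] = g; // B
        rgba[i * 4 + 3] = a; // A
    }
    return rgba;
}
```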
With a minimum contrast set, the colored glyphs that Powerline uses
would sometimes be forced to white or black while the surrounding
background colors remained unchanged, breaking up contiguous colors
across Powerline segments.
This no longer happens with this patch as Powerline glyphs are now
special-cased and exempt from the minimum contrast adjustment.
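A sketch of the shape of that special case, assuming the common
Powerline private-use codepoint ranges (the exact ranges checked in the
codebase may differ):

```zig
/// Illustrative only: treat glyphs in the Powerline private-use
/// ranges as exempt from the minimum contrast adjustment.
fn isPowerlineGlyph(cp: u21) bool {
    return switch (cp) {
        0xE0A0...0xE0A3, // branch, line number, padlock, column
        0xE0B0...0xE0BF, // segment separators and related symbols
        => true,
        else => false,
    };
}
```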
Fixes #1037
Renderers must convert the internal Kitty graphics state to a GPU
texture for rendering. For performance reasons, renderers cache the GPU
state by image ID. Unfortunately, this meant that if an image was
replaced with the same ID and was already cached, it would never be
updated on the GPU.
This PR adds the transmission time to the cache entry. If the
transmission time differs, we assume the image changed and re-upload it
to the GPU.
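A sketch of the cache-invalidation idea, with hypothetical type and
field names standing in for the real renderer state:

```zig
const std = @import("std");

// Hypothetical stand-ins for the renderer's real types.
const Texture = struct { handle: u32 };
const Image = struct { id: u32, transmit_time: i64 };

const CachedImage = struct {
    texture: Texture,
    transmit_time: i64,
};

/// Return the cached texture unless the image was re-transmitted,
/// in which case re-upload it and refresh the cache entry.
fn getTexture(
    cache: *std.AutoHashMap(u32, CachedImage),
    img: Image,
) !Texture {
    if (cache.get(img.id)) |entry| {
        if (entry.transmit_time == img.transmit_time) return entry.texture;
    }
    const tex = uploadToGpu(img); // hypothetical upload path
    try cache.put(img.id, .{
        .texture = tex,
        .transmit_time = img.transmit_time,
    });
    return tex;
}

fn uploadToGpu(img: Image) Texture {
    _ = img;
    return .{ .handle = 0 }; // placeholder
}
```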
Fixes #932
We have to do this for OpenGL as well (this code is copied from Metal,
since Metal doesn't even allow RGB textures). I couldn't figure out a
way to get the A value to be 255 (as a byte); it always seems to be set
to 1, which gets normalized to 1/255, so let's just convert manually.
Font metrics realistically should be integral. Cell widths, cell
heights, etc. do not make sense as floats, since our grid is integral.
There is no such thing as a "half cell" (or any other fractional cell).
The reason we historically had these all as f32 is simplicity mixed
with history. OpenGL APIs and shaders all use f32 for their values, we
originally only supported OpenGL, and all the font rendering used to be
directly in the renderer code (like... a year+ ago).
When we refactored the font metrics calculation to its own system and
also added additional renderers like Metal (which use f64, not f32), we
never updated anything. We just kept metrics as f32 and cast
everywhere.
With CoreText and #177 this finally reared its ugly head. Because we
forgot a simple rounding step in the cell metric calculation, our
integral renderers (sprite fonts) were off by one pixel compared to the
GPU renderers. Insidious.
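To make the failure mode concrete, here is a minimal sketch (with
made-up values) of how truncating in one code path and rounding in
another ends up one pixel apart:

```zig
const std = @import("std");

test "truncation vs. rounding disagree on cell width" {
    const advance: f64 = 8.6; // hypothetical fractional advance in pixels

    // One path truncates the fractional advance...
    const truncated: u32 = @intFromFloat(advance);
    // ...while another rounds it: the results differ by one pixel.
    const rounded: u32 = @intFromFloat(@round(advance));

    try std.testing.expectEqual(@as(u32, 8), truncated);
    try std.testing.expectEqual(@as(u32, 9), rounded);
}
```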
Let's represent font metrics with the types that actually make sense: a
cell width, cell height, etc. is _integral_. When we get to the GPU, we
now cast to floats. We also cast to floats whenever we're doing more
precise math (e.g. mouse offset calculation). In these cases we're only
converting to floats from an integral type, which is much safer and
less prone to uncertain rounding than converting to an int from a float
type.
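A sketch of the direction this takes, with illustrative field names
(the real metrics struct has more fields):

```zig
/// Illustrative only: cell metrics stored as integers.
const Metrics = struct {
    cell_width: u32,
    cell_height: u32,
    cell_baseline: u32,
};

/// Convert to floats only at the GPU boundary, e.g. when filling a
/// uniform; the int-to-float direction has no rounding surprises.
fn cellSizeUniform(m: Metrics) [2]f32 {
    return .{
        @as(f32, @floatFromInt(m.cell_width)),
        @as(f32, @floatFromInt(m.cell_height)),
    };
}
```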
Fixes #177