Fixes #668
We were previously only checking the first font result in the search.
This also fixes our CoreText scoring algorithm so it prioritizes faces that
contain the codepoint we're searching for.
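A minimal sketch of the idea, using hypothetical names rather than the real Ghostty discovery API: walk every candidate instead of stopping at the first result, and score faces so that ones containing the codepoint win.

```zig
const std = @import("std");

// Hypothetical face type for illustration only; the real discovery
// results and scoring live in Ghostty's font code.
const Face = struct {
    base_score: i32,
    codepoints: []const u21,

    fn hasCodepoint(self: Face, cp: u21) bool {
        return std.mem.indexOfScalar(u21, self.codepoints, cp) != null;
    }
};

// Walk every candidate (not just the first) and prefer faces that
// actually contain the codepoint we're searching for.
fn bestFace(candidates: []const Face, cp: u21) ?Face {
    var best: ?Face = null;
    var best_score: i32 = std.math.minInt(i32);
    for (candidates) |face| {
        var score = face.base_score;
        if (face.hasCodepoint(cp)) score += 1000;
        if (best == null or score > best_score) {
            best = face;
            best_score = score;
        }
    }
    return best;
}
```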
This is a weird one. By using `intCast` on the `idx`, I was periodically
getting an index-out-of-bounds panic where the index was larger than
`FontIndex` can possibly hold. Very strange!
I tried just removing the `intCast`s and, believe it or not, that worked.
Previously, `cat /dev/urandom` would trigger the issue within seconds, and
now I've had it running 20+ minutes without the issue.
The additional `if` check is just a safety mechanism.
When shaping grapheme clusters, we erroneously used the font index of a
font that only matched the first codepoint in the cell. This led to the
combining characters [usually] being unknown and rendering as boxes.
For a grapheme, we must find a font face that has a glyph for _all codepoints_
in the grapheme.
This also fixes an issue so that we now properly render the Unicode replacement
character if we can't find a font satisfying a codepoint.
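A rough sketch of that rule, with hypothetical types rather than the actual shaper API: only accept a face if it covers every codepoint in the cluster, and fall back to rendering U+FFFD when nothing does.

```zig
const std = @import("std");

// Hypothetical face type for illustration; the real font group and
// shaper types in Ghostty differ.
const Face = struct {
    codepoints: []const u21,

    fn hasCodepoint(self: Face, cp: u21) bool {
        return std.mem.indexOfScalar(u21, self.codepoints, cp) != null;
    }
};

const replacement: u21 = 0xFFFD; // U+FFFD REPLACEMENT CHARACTER

// A face is only usable for a grapheme cluster if it has a glyph for
// every codepoint in the cluster, not just the first one.
fn coversGrapheme(face: Face, cps: []const u21) bool {
    for (cps) |cp| {
        if (!face.hasCodepoint(cp)) return false;
    }
    return true;
}

// Return the first face that covers the whole grapheme; a null result
// tells the caller to render the replacement character instead.
fn faceForGrapheme(faces: []const Face, cps: []const u21) ?Face {
    for (faces) |face| {
        if (coversGrapheme(face, cps)) return face;
    }
    return null;
}
```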
Font metrics realistically should be integral. Cell widths, cell
heights, etc. don't make sense as floats, since our grid is
integral. There is no such thing as a "half cell" (or any fraction of a cell).
The reason we historically had these all as f32 is simplicity mixed
with history. OpenGL APIs and shaders all use f32 for their values, we
originally only supported OpenGL, and all the font rendering used to be
directly in the renderer code (like... a year+ ago).
When we refactored the font metrics calculation to its own system and
also added additional renderers like Metal (which use f64, not f32), we
never updated anything. We just kept metrics as f32 and cast
everywhere.
With CoreText and #177, this finally reared its ugly head. By forgetting
a simple rounding step in the cell metric calculation, our integral renderers
(sprite fonts) were off by 1 pixel compared to the GPU renderers.
Insidious.
Let's represent font metrics with the types that actually make sense: a
cell width/height, etc. is _integral_. When we get to the GPU, we now
cast to floats. We also cast to floats whenever we're doing more precise
math (e.g. mouse offset calculation). In this case, we're only
converting to floats from an integral type, which is much
safer and less prone to uncertain rounding than converting to an int
from a float type.
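A minimal sketch of the direction, with illustrative field names (not the exact Ghostty metrics struct): keep the metrics integral and only convert to floats at the GPU boundary.

```zig
// Illustrative metrics struct: everything the grid cares about is an
// integer, because there is no such thing as a fractional cell.
const Metrics = struct {
    cell_width: u32,
    cell_height: u32,
    cell_baseline: u32,
    underline_position: u32,
    underline_thickness: u32,
};

// Converting int -> float at the renderer boundary is exact for these
// sizes; any lossy rounding happens once, when the metrics are computed.
fn cellSizeF32(m: Metrics) [2]f32 {
    return .{
        @as(f32, @floatFromInt(m.cell_width)),
        @as(f32, @floatFromInt(m.cell_height)),
    };
}
```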
Fixes #177