This may be a robust way to get Monaspace fonts working.
Previously, we used leading as part of the cell height calculation. I
don't remember why. It appears most popular monospace fonts (Fira Code,
Berkeley Mono, JetBrains Mono, and Monaco are the ones I tested) have a
leading of 0, so this has no effect. But some fonts, like Monaspace,
have a non-zero (positive) leading, resulting in overly large cell
heights.
The issue is that we simply add the leading to the height without
modifying the ascent. Normally this is what you want (normal
typesetting), but in a terminal we're trying to center text vertically
in equally spaced grid cells. For that, we want to split the leading
between the top and bottom of the cell.
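A minimal sketch of that centering math (the types and names here are
illustrative, not the actual code):

```ts
// Split the font's leading evenly above and below the glyphs so text is
// vertically centered in the cell, instead of all the leading landing on
// one side as in normal typesetting.
interface FaceMetrics {
  ascent: number;  // baseline to top of glyphs (px)
  descent: number; // baseline to bottom of glyphs (px, positive)
  leading: number; // extra line spacing suggested by the font (px, often 0)
}

function cellVertical(m: FaceMetrics): { cellHeight: number; baseline: number } {
  const halfLeading = m.leading / 2;
  // The cell still accounts for the full leading...
  const cellHeight = m.ascent + m.descent + m.leading;
  // ...but the baseline (measured from the cell bottom) only moves up by
  // half of it, so the other half pads the top of the cell.
  const baseline = m.descent + halfLeading;
  return { cellHeight, baseline };
}
```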
* font: disable default font features for Menlo and Monaco
Both of these fonts have a default ligature on "fi" which makes terminal
rendering super ugly. The easiest thing to do is special-case these
fonts and disable ligatures. It appears other terminals do the same
thing.
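Roughly, the special-casing looks something like this (a sketch only;
the exact feature tags and the CSS font-feature-settings application are
assumptions, not the actual code):

```ts
// Map of font families to the default OpenType features we want to turn off.
// The tags here ("liga", "dlig") are illustrative; the point is that the
// lookup is keyed on the family name, not on anything in the font data.
const DEFAULT_DISABLED_FEATURES: Record<string, string[]> = {
  Menlo: ["liga", "dlig"],
  Monaco: ["liga", "dlig"],
};

// Build a CSS font-feature-settings value that disables those features,
// e.g. '"liga" 0, "dlig" 0'. An empty string leaves the font's defaults alone.
function featureSettingsFor(family: string): string {
  const disabled = DEFAULT_DISABLED_FEATURES[family] ?? [];
  return disabled.map((tag) => `"${tag}" 0`).join(", ");
}

// Usage: el.style.fontFeatureSettings = featureSettingsFor("Menlo");
```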
Font metrics realistically should be integral. Cell widths, cell
heights, etc. do not make sense as floats, since our grid is integral.
There is no such thing as a "half cell" (or any other fraction of a
cell).
The reason we historically had these all as f32 is simplicity mixed
with history: OpenGL APIs and shaders all use f32 for their values, we
originally only supported OpenGL, and all the font rendering used to be
directly in the renderer code (like... a year+ ago).
When we refactored the font metrics calculation into its own system and
added additional renderers like Metal (which uses f64, not f32), we
never updated anything. We just kept metrics as f32 and cast
everywhere.
With CoreText and #177 this finally reared its ugly head. By forgetting
a simple rounding in the cell metric calculation, our integral renderers
(sprite fonts) were off by one pixel compared to the GPU renderers.
Insidious.
Let's represent font metrics with the types that actually make sense: a
cell width/height, etc. is _integral_. When we get to the GPU, we now
cast to floats. We also cast to floats whenever we're doing more precise
math (e.g. mouse offset calculation). In this case, we're only
converting to floats from an integral type, which is much safer and less
prone to uncertain rounding than converting to an int from a float
type.
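A rough sketch of the idea (illustrative names only; TypeScript has no
integer types, so "integral" here just means rounded exactly once at the
source):

```ts
// Cell metrics in whole pixels. Every consumer (the sprite/CPU renderer and
// the GPU renderers) reads these same integral values, and any conversion to
// float happens at the point of use (vertex data, mouse offset math).
interface CellMetrics {
  cellWidth: number;    // whole pixels
  cellHeight: number;   // whole pixels
  cellBaseline: number; // whole pixels, measured from the cell bottom
}

function computeCellMetrics(
  advance: number,
  ascent: number,
  descent: number,
  leading: number,
): CellMetrics {
  // Round once, here, so no renderer ever rounds (differently) on its own.
  return {
    cellWidth: Math.ceil(advance),
    cellHeight: Math.ceil(ascent + descent + leading),
    cellBaseline: Math.round(descent + leading / 2),
  };
}
```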
Fixes #177
Since we have no way to detect a glyph's presentation (text or emoji),
we need to actually render the glyph that is being requested to
double-check that it matches our supported presentation. We do this
because the browser will render a fallback font for a glyph if it can't
find one in the named font.
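One heuristic for that check, sketched below (the function name and
details are assumptions, not the actual code): draw the codepoint with
the named font into an offscreen canvas and inspect the pixels. A
colored (non-grayscale) pixel suggests emoji presentation; otherwise
treat it as text.

```ts
// Render the requested codepoint and classify its presentation from the
// actual pixels, since the canvas will silently substitute a fallback font
// when the named font lacks the glyph.
function renderedPresentation(family: string, codepoint: number): "emoji" | "text" {
  const size = 32;
  const canvas = document.createElement("canvas");
  canvas.width = size * 2;
  canvas.height = size * 2;
  const ctx = canvas.getContext("2d", { willReadFrequently: true })!;
  ctx.font = `${size}px ${family}`;
  ctx.textBaseline = "middle";
  ctx.fillStyle = "#000000"; // black, so any color must come from the glyph itself
  ctx.fillText(String.fromCodePoint(codepoint), 0, size);

  const data = ctx.getImageData(0, 0, canvas.width, canvas.height).data;
  for (let i = 0; i < data.length; i += 4) {
    const [r, g, b, a] = [data[i], data[i + 1], data[i + 2], data[i + 3]];
    if (a > 0 && (r !== g || g !== b)) return "emoji";
  }
  return "text";
}
```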