This is in preparation for moving constraint off the GPU to simplify
our shaders: instead, we only need to constrain once at raster time and
never again.
This also significantly reworks the freetype `renderGlyph` function to
be much cleaner and more straightforward.
This commit doesn't actually apply the constraints to anything yet;
that will come in following commits.
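For a rough picture of the idea, here is a minimal sketch of a
raster-time constraint step. `constrainGlyph` and its shape are
hypothetical, not the actual implementation:

```zig
const std = @import("std");

/// Hypothetical sketch: shrink a glyph's raster dimensions so it fits
/// within the cell, preserving aspect ratio. Done once at raster time,
/// so the GPU never needs to constrain again.
fn constrainGlyph(glyph_w: f32, glyph_h: f32, cell_w: f32, cell_h: f32) struct { w: f32, h: f32 } {
    // Never scale up; only shrink oversized glyphs to fit the cell.
    const scale = @min(1.0, @min(cell_w / glyph_w, cell_h / glyph_h));
    return .{ .w = glyph_w * scale, .h = glyph_h * scale };
}

test "oversized glyph is shrunk to fit the cell" {
    const r = constrainGlyph(20, 30, 10, 20);
    try std.testing.expectApproxEqAbs(@as(f32, 10), r.w, 0.001);
    try std.testing.expectApproxEqAbs(@as(f32, 15), r.h, 0.001);
}
```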
This sets the stage for dynamically adjusting the sizes of fallback
fonts based on the primary font's face metrics. It also removes a lot of
unnecessary work when loading fallback fonts, since we only actually use
the metrics from the primary font.
This is achieved by rendering to an alpha-only context rather than a
normal single-channel context, and by adjusting the brightness at which
CoreText thinks it's drawing the glyph, which affects how it applies
font smoothing (which is what `font-thicken` enables).
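Here is a sketch of the CoreGraphics calls involved, using raw extern
declarations for illustration. The real code goes through our macOS
bindings, and the exact gray value here is illustrative:

```zig
const std = @import("std");

// CGImageAlphaInfo.alphaOnly: one alpha byte per pixel, no color data.
const kCGImageAlphaOnly: u32 = 7;

// Requires linking the CoreGraphics framework.
extern "c" fn CGBitmapContextCreate(
    data: ?*anyopaque,
    width: usize,
    height: usize,
    bitsPerComponent: usize,
    bytesPerRow: usize,
    space: ?*anyopaque, // CGColorSpaceRef; null is allowed for alpha-only
    bitmapInfo: u32,
) ?*anyopaque;

extern "c" fn CGContextSetGrayFillColor(ctx: ?*anyopaque, gray: f64, alpha: f64) void;

fn makeGlyphContext(buf: []u8, width: usize, height: usize, thicken: bool) ?*anyopaque {
    std.debug.assert(buf.len >= width * height); // 1 byte per pixel
    const ctx = CGBitmapContextCreate(@ptrCast(buf.ptr), width, height, 8, width, null, kCGImageAlphaOnly);
    // The gray level CoreText "sees" changes how it applies font
    // smoothing; 0.5 is an illustrative value, not the real one.
    CGContextSetGrayFillColor(ctx, if (thicken) 0.5 else 1.0, 1.0);
    return ctx;
}
```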
Allows high-DPI displays to get odd-numbered pixel sizes; for
example, 13.5pt @ 2px/pt gives a 27px font. This implementation performs
all the sizing calculations with f32, rounding to the nearest pixel
size when it comes to rendering. In the future this can be enhanced
by adding fractional scaling to support fractional pixel sizes.
Fixes #1618
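A minimal sketch of that math (the function name is hypothetical): the
size stays an f32 through the calculation and rounds once, at render
time.

```zig
const std = @import("std");

/// Keep the sizing math in f32 and round once, at render time.
fn pixelSize(points: f32, px_per_pt: f32) u16 {
    return @intFromFloat(@round(points * px_per_pt));
}

test "13.5pt @ 2px/pt rounds to 27px" {
    try std.testing.expectEqual(@as(u16, 27), pixelSize(13.5, 2.0));
}
```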
Font sizes in configuration were always a u8, but the keybinding and
internal state were a u16, so they allowed an ever-growing font size. At
a certain point, there is an integer overflow which causes it to wrap
around. This is all silly; 255 should be large enough for anyone.[1]
[1]: Ready to be super wrong about this
Fixes #895
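A sketch (not the actual patch) of the failure mode and the fix, using
Zig's wrapping and saturating operators:

```zig
const std = @import("std");

test "a wrapping u16 overflows; a saturating u8 pins at 255" {
    // The old behavior: the u16 state grows until it wraps to 0.
    var old_size: u16 = std.math.maxInt(u16);
    old_size +%= 1; // wrapping add
    try std.testing.expectEqual(@as(u16, 0), old_size);

    // Keeping the size a u8 and saturating on increment stays at 255.
    var new_size: u8 = std.math.maxInt(u8);
    new_size +|= 1; // saturating add
    try std.testing.expectEqual(@as(u8, 255), new_size);
}
```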
Every loaded font face calculates metrics for itself. One of the
important metrics is the baseline to "sit" the glyph on top of. Prior to
this commit, each rasterized glyph would sit on its own calculated
baseline. However, this leads to off-center rendering when the font
being rasterized isn't the font that defines the terminal grid.
This commit passes in the font metrics for the font defining the
terminal grid to all font rasterization requests. This can then be used
by non-primary fonts to sit the glyph according to the primary grid.
Fixes #845
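The shape of the change is roughly this, with hypothetical types for
illustration: rasterization requests carry the grid-defining font's
metrics, and every face sits its glyphs on that baseline.

```zig
/// Hypothetical shapes, for illustration only.
const Metrics = struct {
    cell_height: u32,
    cell_baseline: u32, // distance from the bottom of the cell to the baseline
};

const RenderOptions = struct {
    /// Metrics of the font that defines the terminal grid, passed to
    /// every rasterization request, primary or fallback.
    grid_metrics: Metrics,
};

/// All faces sit their glyphs on the primary font's baseline rather
/// than their own, so mixed-face lines stay vertically aligned.
fn baselineOffset(opts: RenderOptions) u32 {
    return opts.grid_metrics.cell_baseline;
}
```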
Quick background: Emoji codepoints are either default text or default
graphical ("Emoji") presentation. An example of a default text emoji
is ❤. You have to add VS16 to this emoji to get: ❤️. Some fonts are
default graphical and require VS15 to force text.
A font face can only advertise text vs. emoji presentation for the
entire font face. Some font faces (e.g. Cozette) include both text
glyphs and emoji glyphs, but since they can only advertise as one, they
advertise as "text".
As a result, if a user types an emoji such as 👽, it will fall back to
another font to try to find one that satisfies the "graphical"
presentation requirement. But Cozette supports 👽, it's just advertised
as "text"!
Normally, this behavior is what you want. However, if a user explicitly
requests their font-family to be a font that contains a mix of text and
emoji, they _probably_ want those emoji to be used regardless of default
presentation. This is similar to a rich text editor (like TextEdit on
Mac): if you explicitly select "Cozette" as your font, the alien emoji
shows up using the text-based Cozette glyph.
This commit changes our presentation handling behavior to do the
following:
* If no explicit variation selector (VS15/VS16) is specified,
any matching codepoint in an explicitly loaded font (i.e. via
`font-family`) will be used.
* If an explicit variation selector is specified or our explicitly
loaded fonts don't contain the codepoint, fallback fonts will be
searched but require an exact match on presentation.
* If no fallback is found with an exact match, any font with any
presentation can match the codepoint.
This commit should not change the behavior of Emoji or VS15/16 handling
for almost all users. The only users impacted are those using fonts
with a mix of emoji and text.
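A sketch of that three-step search; the types and names are
hypothetical, not the actual implementation:

```zig
const std = @import("std");

const Presentation = enum { text, emoji };

/// Hypothetical face record, for illustration only.
const Face = struct {
    explicit: bool, // loaded via `font-family`?
    presentation: Presentation,
    codepoints: []const u21,

    fn hasCodepoint(self: Face, cp: u21) bool {
        return std.mem.indexOfScalar(u21, self.codepoints, cp) != null;
    }
};

/// vs: presentation forced by VS15/VS16, if any; default: the
/// codepoint's default presentation per Unicode.
fn findFace(faces: []const Face, cp: u21, vs: ?Presentation, default: Presentation) ?usize {
    // 1. No variation selector: any explicitly loaded face that has the
    //    codepoint wins, regardless of its advertised presentation.
    if (vs == null) {
        for (faces, 0..) |face, i| {
            if (face.explicit and face.hasCodepoint(cp)) return i;
        }
    }

    // 2. Search remaining faces, requiring an exact presentation match:
    //    the variation selector if one was given, else the default.
    const want = vs orelse default;
    for (faces, 0..) |face, i| {
        if (face.presentation == want and face.hasCodepoint(cp)) return i;
    }

    // 3. Last resort: any face with the codepoint, any presentation.
    for (faces, 0..) |face, i| {
        if (face.hasCodepoint(cp)) return i;
    }
    return null;
}

test "explicit mixed face wins when no variation selector is given" {
    const faces = [_]Face{
        .{ .explicit = true, .presentation = .text, .codepoints = &.{0x1F47D} },
        .{ .explicit = false, .presentation = .emoji, .codepoints = &.{0x1F47D} },
    };
    // 👽 with no VS: the explicit "text" face (think Cozette) is used
    // even though the codepoint's default presentation is emoji.
    try std.testing.expectEqual(@as(?usize, 0), findFace(&faces, 0x1F47D, null, .emoji));
}
```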
Font metrics realistically should be integral. Cell widths, cell
heights, etc. do not make sense as floats, since our grid is
integral. There is no such thing as a "half cell" (or any fraction of
one).
The reason we historically had these all as f32 is simplicity mixed
with history. OpenGL APIs and shaders all use f32 for their values, we
originally only supported OpenGL, and all the font rendering used to be
directly in the renderer code (like... a year+ ago).
When we refactored the font metrics calculation to its own system and
also added additional renderers like Metal (which use f64, not f32), we
never updated anything. We just kept metrics as f32 and cast
everywhere.
With CoreText and #177, this finally reared its ugly head: because we
forgot a simple rounding in the cell metric calculation, our integral
renderers (sprite fonts) were off by one pixel compared to the GPU
renderers. Insidious.
Let's represent font metrics with the types that actually make sense: a
cell width/height, etc. is _integral_. When we get to the GPU, we now
cast to floats. We also cast to floats whenever we're doing more precise
math (e.g. mouse offset calculation). In this case, we're only
converting to floats from an integral type, which is much safer and
less prone to uncertain rounding than converting to an int from a
float type.
Fixes #177
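A sketch of the end state, with illustrative field names: metrics live
as integers, and the float conversion happens only at the GPU boundary.

```zig
/// Illustrative only: cell metrics stored as integers, matching the grid.
const Metrics = struct {
    cell_width: u32,
    cell_height: u32,
    cell_baseline: u32,
};

/// Int -> float at the GPU boundary is exact for any realistic cell size
/// (every integer up to 2^24 is representable in an f32), unlike
/// float -> int, which always needs a rounding rule.
fn cellSizeForGpu(m: Metrics) [2]f32 {
    const w: f32 = @floatFromInt(m.cell_width);
    const h: f32 = @floatFromInt(m.cell_height);
    return .{ w, h };
}
```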