Previously, we'd send renderer screen size updates and termio SIGWINCH
updates on every single resize event, even if the screen size or grid
size didn't change. This is super noisy and, given how many resize
events macOS sends, it's also very expensive.
This commit makes it so that we only update the renderer if the screen
size changed. If the screen size didn't change, the grid size couldn't
have changed either.
If the screen size did change, it's still possible the grid size didn't
change, since Ghostty supports fluid pixel-level resizing. We have to
send the screen size event to the renderer so all the GPU shader
variables are right, but we do not have to send a termio event.
So, only if the grid size changed do we notify the pty that the
terminal dimensions changed. Note that the resize event for ptys does
carry pixel-level x/y values, but I don't think that granularity is
useful beyond grid changes.
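To make the flow concrete, here's a minimal sketch of the decision
logic. The names (`ScreenSize`, `GridSize`, `classifyResize`, etc.) are
illustrative, not Ghostty's actual API:

```zig
const std = @import("std");

// Hypothetical size types for illustration only.
const ScreenSize = struct { width: u32, height: u32 };
const CellSize = struct { width: u32, height: u32 };
const GridSize = struct { columns: u32, rows: u32 };

/// What a resize event should trigger: nothing, a renderer update,
/// or a renderer update plus a termio (pty) resize.
const ResizeAction = enum { none, renderer_only, renderer_and_termio };

fn gridSize(screen: ScreenSize, cell: CellSize) GridSize {
    return .{
        .columns = screen.width / cell.width,
        .rows = screen.height / cell.height,
    };
}

fn classifyResize(
    old_screen: ScreenSize,
    new_screen: ScreenSize,
    cell: CellSize,
) ResizeAction {
    // Unchanged screen size: the grid can't have changed either, so
    // the event is pure noise and we drop it.
    if (std.meta.eql(old_screen, new_screen)) return .none;

    // The screen size changed, so the renderer always needs an update
    // to keep the GPU shader variables right. But a fluid, sub-cell
    // resize leaves the grid alone, so the pty doesn't need to know.
    if (std.meta.eql(gridSize(old_screen, cell), gridSize(new_screen, cell)))
        return .renderer_only;

    // The grid changed too: notify both the renderer and the pty.
    return .renderer_and_termio;
}

test "fluid sub-cell resize updates the renderer but not the pty" {
    const cell: CellSize = .{ .width = 10, .height = 20 };
    const before: ScreenSize = .{ .width = 805, .height = 600 };
    const after: ScreenSize = .{ .width = 807, .height = 600 };
    try std.testing.expectEqual(
        ResizeAction.renderer_only,
        classifyResize(before, after, cell),
    );
}
```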
Font metrics realistically should be integral. Cell widths, cell
heights, etc. do not make sense as floats, since our grid is integral.
There is no such thing as a "half cell" (or any fractional cell).
The reason we historically had these all as f32 is simplicity mixed
with history: OpenGL APIs and shaders use f32 for their values, we
originally only supported OpenGL, and all the font rendering used to
live directly in the renderer code (like... a year+ ago).
When we refactored the font metrics calculation into its own system and
added additional renderers like Metal (which use f64, not f32), we
never updated anything. We just kept metrics as f32 and cast
everywhere.
With CoreText and #177 this finally reared its ugly head. By forgetting
a simple rounding step in the cell metric calculation, our integral
renderers (sprite fonts) were off by one pixel compared to the GPU
renderers. Insidious.
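A tiny, hypothetical reproduction of that class of bug: a fractional
advance (as CoreText might report) that one code path truncates and
another effectively rounds lands on two different pixel sizes. The
values here are made up for illustration:

```zig
const std = @import("std");

test "truncating vs. rounding a fractional metric disagrees by a pixel" {
    // Hypothetical fractional advance from the font system.
    const advance: f32 = 14.7;

    // An integral consumer (e.g. a sprite rasterizer) that truncates
    // when it needs an int...
    const truncated: u32 = @intFromFloat(advance);

    // ...versus a path that rounds before converting.
    const rounded: u32 = @intFromFloat(@round(advance));

    // Off by one: the sprite-vs-GPU mismatch described above.
    try std.testing.expectEqual(@as(u32, 14), truncated);
    try std.testing.expectEqual(@as(u32, 15), rounded);
}
```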
Let's represent font metrics with the types that actually make sense: a
cell width, cell height, etc. is _integral_. When we get to the GPU, we
now cast to floats. We also cast to floats whenever we're doing more
precise math (e.g. mouse offset calculation). In that case we're only
converting to floats from an integral type, which is much safer and
less prone to uncertain rounding than converting to an int from a
float type.
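As a sketch of the direction (field and function names are made up for
illustration and do not match Ghostty's actual metrics struct), metrics
become integers and the float conversion happens only at the GPU
boundary:

```zig
// Illustrative only: not Ghostty's real metrics layout.
const Metrics = struct {
    cell_width: u32,
    cell_height: u32,
    cell_baseline: u32,
    underline_position: u32,
    underline_thickness: u32,
};

/// Hypothetical per-frame uniforms; the shader wants floats.
const Uniforms = extern struct {
    cell_size: [2]f32,
};

fn toUniforms(m: Metrics) Uniforms {
    // int -> float is exact for any realistic cell size, so there is
    // no uncertain rounding in this direction.
    const w: f32 = @floatFromInt(m.cell_width);
    const h: f32 = @floatFromInt(m.cell_height);
    return .{ .cell_size = .{ w, h } };
}
```

Keeping the conversion at the edge means grid math stays purely in
integers, and every consumer that needs floats (shaders, mouse offsets)
converts exactly.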
Fixes #177