This makes a few major changes:
- cursor style on the terminal is the single source of stylistic truth
- cursor style is split between style and style request
- cursor blinking is handled by the renderer thread
- cursor style/visibility is no longer stored as persistent state on
renderers
- cursor style computation is extracted to be shared by all renderers
(see the sketch after this list)
- mode 12 "cursor_blinking" is now source of truth on whether blinking
is enabled or not
- CSI q and mode 12 are synced like xterm
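As a rough illustration of what that shared computation can look like
(a sketch only; the names, signature, and enum are hypothetical, not
the actual API in the tree):

```zig
const CursorStyle = enum { block, block_hollow, bar, underline };

/// Sketch: derive the rendered cursor style from terminal state plus
/// the renderer thread's blink phase. Returns null when the cursor
/// should not be drawn at all.
fn renderedCursorStyle(
    requested: CursorStyle, // from CSI q and/or config
    visible: bool, // mode 25 (DECTCEM)
    blinking: bool, // mode 12, kept in sync with CSI q
    focused: bool,
    blink_visible: bool, // blink phase owned by the renderer thread
) ?CursorStyle {
    // Mode 25 always wins: an explicitly hidden cursor stays hidden.
    if (!visible) return null;

    // Unfocused terminals render a hollow block regardless of blink
    // state, so the cursor location never disappears.
    if (!focused) return .block_hollow;

    // The renderer thread toggles blink_visible on a timer; the "off"
    // phase hides a blinking cursor.
    if (blinking and !blink_visible) return null;

    return requested;
}
```

Because the blink phase is an input rather than persisted state, the
renderer thread can own blinking without renderers storing any cursor
style/visibility state of their own.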
Fixes #400
This ensures the hollow box is shown even if the cursor is not in a
blinking state when the terminal loses focus. We still hide the cursor
on unfocus if the terminal mode explicitly asks for a hidden cursor.
This is the behavior of the other terminals I have, such as iTerm,
Alacritty, WezTerm, and Kitty. The hollow cursor is especially
noticeable when running in two panes, one of them inactive with
animations (such as a progress bar).
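In terms of the hypothetical sketch above, the two behaviors described
here would be:

```zig
const std = @import("std");

test "unfocus shows a hollow box; mode 25 still hides" {
    // Unfocused + non-blinking still yields a hollow box:
    try std.testing.expectEqual(
        @as(?CursorStyle, .block_hollow),
        renderedCursorStyle(.block, true, false, false, true),
    );
    // ...but an explicitly hidden cursor (mode 25) wins even when
    // unfocused:
    try std.testing.expectEqual(
        @as(?CursorStyle, null),
        renderedCursorStyle(.block, false, false, false, true),
    );
}
```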
This was a regression. The previous logic would always show the cursor
if we were using a non-blinking cursor. But if the terminal state is
explicitly requesting an invisible cursor (mode 25), then we need to
hide the cursor.
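A sketch of the corrected ordering (hypothetical function, not the
actual code):

```zig
/// Sketch of the fix: the ordering of the checks is the whole bug.
fn cursorVisible(visible: bool, blinking: bool, blink_visible: bool) bool {
    // Regression: `if (!blinking) return true;` ran before the mode 25
    // check, so a non-blinking cursor could never be hidden.
    if (!visible) return false; // mode 25 must be consulted first
    if (!blinking) return true; // only then short-circuit blinking
    return blink_visible;
}
```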
Font metrics realistically should be integral. Cell widths, cell
heights, etc. do not make sense as floats, since our grid is integral.
There is no such thing as a "half cell" (or any other fractional
amount).
The reason we historically had these all as f32 is simplicity mixed
with history. OpenGL APIs and shaders all use f32 for their values, we
originally only supported OpenGL, and all the font rendering used to be
directly in the renderer code (like... a year+ ago).
When we refactored the font metrics calculation into its own system
and also added additional renderers like Metal (which uses f64, not
f32), we never updated anything. We just kept metrics as f32 and cast
everywhere.
With CoreText and #177 this finally reared its ugly head. Because we
forgot a simple rounding step in the cell metric calculation, our
integral renderers (sprite fonts) were off by 1 pixel compared to the
GPU renderers. Insidious.
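To make that concrete with made-up numbers: if a face reports an
advance of 9.6px, the GPU renderers lay cells out 9.6px apart while a
truncating integral renderer gets 9px; rounding once up front gives
both renderers 10px.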
Let's represent font metrics with the types that actually make sense:
a cell width/height, etc. is _integral_. When we get to the GPU, we now
cast to floats. We also cast to floats whenever we're doing more
precise math (e.g. mouse offset calculation). In this case, we're only
converting to floats from an integral type, which is much safer and
less prone to uncertain rounding than converting to an int from a
float type.
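A minimal sketch of the shape this takes (illustrative field names and
types, not the exact ones in the tree):

```zig
/// Illustrative only: integral metrics with exactly one rounding step.
const Metrics = struct {
    cell_width: u32,
    cell_height: u32,
    cell_baseline: u32,

    /// Round once, here, so sprite (integral) and GPU renderers
    /// derive their sizes from the same pixel values.
    fn init(advance: f64, line_height: f64, baseline: f64) Metrics {
        return .{
            .cell_width = @intFromFloat(@round(advance)),
            .cell_height = @intFromFloat(@round(line_height)),
            .cell_baseline = @intFromFloat(@round(baseline)),
        };
    }
};

// At the GPU boundary, int -> float is exact for any realistic cell
// size, so no uncertain rounding remains:
// const w: f32 = @floatFromInt(metrics.cell_width);
```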
Fixes #177