This test-only function wraps testWriteString with semantic prompt
marking. This replaces the manual, row-based semantic_prompt field
manipulation we were doing in all of our prompt-related test setups.
This function's heuristics are a little complex because it wraps
testWriteString as a "black box"; we don't benefit from that function's
own line-based logic to know which rows need to be updated with the
semantic prompt flag. We need to infer them externally instead.
I considered adding an options argument to testWriteString that would
allow passing e.g. a semantic_prompt value. Given that it's called from
200+ places, that would involve a lot of unrelated changes, but it
remains an "option" (ha!) if there's value there for other cases.
I also have plans that move us from row-based to cell-based semantic
tracking, where the current semantic type is tracked by the cursor. In
that implementation, testWriteString can update the written cells
directly, and testWriteSemanticString just helps manage the cursor's
state. Introducing testWriteSemanticString here and now therefore helps
bridge us to that world while maintaining test consistency.
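For illustration, here's a minimal sketch of the shape of such a helper, assuming the existing `testWriteString` and a row-level `semantic_prompt` field; the `rowPtr` accessor and the simple start/end-row inference are assumptions and gloss over the wrapping and scrolling heuristics mentioned above:
```zig
// Sketch only. `rowPtr` is a hypothetical accessor; the real helper must
// also account for wrapping and scrolling when inferring the touched rows.
fn testWriteSemanticString(
    self: *Screen,
    text: []const u8,
    semantic: SemanticPrompt,
) !void {
    // Remember where the cursor started so we can infer which rows the
    // black-box write touched.
    const start_y = self.cursor.y;
    try self.testWriteString(text);
    const end_y = self.cursor.y;

    // Flag every row the write touched with the semantic prompt type.
    var y = start_y;
    while (y <= end_y) : (y += 1) {
        self.rowPtr(y).semantic_prompt = semantic;
    }
}
```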
These tests write specific lines into a 10-column-wide test screen. The
"prompt3$ input3\n" writes exceed that column limit, and some of their
characters wrap onto the following line.
These tests' current assertions aren't sensitive to that overflow, but
I spotted the problem while doing some related work, and I thought it
worth making these corrections to avoid any future surprises.
Cleans up the logic and checks for out-of-bounds rows directly instead of
using sel.contains, because that excludes cases where a rectangular
selection doesn't include the leftmost column.
Also adds a test for the clipping behavior of rectangular selections.
Fixes an issue where rectangular selections would render incorrectly if
their start or end fell outside the viewport area: when cloning such a
selection, the restored pins defaulted to the start and end of the row
instead of the appropriate columns.
There are two main improvements being made here. First, we move away from using autohash and instead
use a one-shot strategy similar to the Style hashing. Since the GlyphKey includes the Metrics struct,
which contains quite a few fields, autohash was performing expensive and unnecessary repeated updates.
The second improvement is simply not hashing Metrics at all. By ignoring the Metrics field, we can
fit the rest of the GlyphKey into a 64-bit packed struct and just return that as the hash! It
ends up being unique for each GlyphKey in renderGlyph, and is nearly a zero-cost operation.
This ends up boosting performance (on my machine, at least) from around 560fps to 590fps on the
DOOM-fire benchmark.
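As a rough sketch of the one-shot idea (the field layout below is invented for illustration and is not Ghostty's actual GlyphKey): pack everything except Metrics into a 64-bit backing integer and reinterpret those bits as the hash.
```zig
// Hypothetical layout; the real GlyphKey has different fields.
const GlyphKey = packed struct(u64) {
    font_index: u16,
    glyph_index: u32,
    size_points: u8,
    flags: u8,

    fn hash(self: GlyphKey) u64 {
        // The packed representation is already unique per key, so
        // reinterpreting the bits is effectively a free hash.
        return @bitCast(self);
    }
};
```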
Fixes #7536
When we're reflowing a row and we need to insert a spacer head, we must
move to the next row to insert it. Previously, we were setting a spacer
head and then copying data into that spacer head, which could lead to
corrupt data and an eventual crash.
In debug builds this triggers assertion failures, but in release builds it
would lead to silent corruption and a crash later on.
The unit test shows the issue clearly, but in practice you need a
multi-codepoint grapheme such as `👨‍👨‍👦‍👦` to wrap across a row when the
column count changes.
This brings the behavior of mode 47, 1047, and 1049 much closer to
xterm's behavior. I found that our prior implementation had many
deficiencies.
For example, we weren't properly copying the cursor state back to the
primary screen from the alternate screen for modes 47 and 1047. And we
weren't saving/restoring cursor state unconditionally for mode 1049 even
if we were already in the alternate screen.
These are weird, edge-case behaviors that I don't think anyone expected
(evidenced by there being no bug reports about them), but they are bugs
nonetheless.
Many tests added.
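For readers unfamiliar with the distinctions, here's a hedged summary of xterm's documented behavior for these modes (per ctlseqs; the escape strings below are just examples):
```zig
// DECSET (CSI ? Pm h) / DECRST (CSI ? Pm l):
//   ?47   — switch to/from the alternate screen; no cursor handling, no clearing.
//   ?1047 — like ?47, but clears the alternate screen when switching back.
//   ?1049 — saves the cursor, switches to the alternate screen and clears it
//           on entry; restores the cursor when switching back.
const enter_alt_1049 = "\x1b[?1049h";
const leave_alt_1049 = "\x1b[?1049l";
```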
As of Zig 0.14.0, `@splat` can be used for array types, which eliminates
a lot of redundant syntax and makes things generally cleaner.
I've explicitly avoided applying this change in the renderer files for
now since it would just create rebasing conflicts in my renderer rework
branch which I'll be PR-ing pretty soon.
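A small, self-contained example of what this enables:
```zig
const std = @import("std");

test "@splat for array types" {
    // Before 0.14.0: repeat-operator boilerplate.
    const old_style: [256]u8 = [_]u8{0} ** 256;
    // Zig 0.14.0: @splat can target array result types directly.
    const new_style: [256]u8 = @splat(0);
    try std.testing.expectEqualSlices(u8, &old_style, &new_style);
}
```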
This logic is cleaner and produces better behavior when selecting by
dragging the mouse outside the bounds of the surface. Previously, dragging
past the left edge of the surface would include the first cell of the next
row in the selection; this is no longer the case.
This introduces methods on PageList.Pin which move a pin left or right
while wrapping to the prev/next row, or clamping to the ends of the row.
These need unit tests.
This commit changes a LOT of areas of the code to use decl literals
instead of redundantly referring to the type.
These changes were mostly driven by some regex searches and then manual
adjustment on a case-by-case basis.
I almost certainly missed quite a few places where decl literals could
be used, but this is a good first step in converting things, and other
instances can be addressed when they're discovered.
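As a quick illustration of the pattern (using a made-up `Config` type rather than anything from the codebase):
```zig
const std = @import("std");

const Config = struct {
    width: u32 = 80,
    height: u32 = 24,

    pub const default: Config = .{};

    pub fn init(width: u32, height: u32) Config {
        return .{ .width = width, .height = height };
    }
};

test "decl literals" {
    // Before: `const a: Config = Config.default;` and `Config.init(120, 40)`.
    // With decl literals, the type is inferred from the result location:
    const a: Config = .default;
    const b: Config = .init(120, 40);
    try std.testing.expectEqual(@as(u32, 80), a.width);
    try std.testing.expectEqual(@as(u32, 120), b.width);
}
```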
I tested GLFW+Metal and building the framework on macOS and tested a GTK
build on Linux, so I'm 99% sure I didn't introduce any syntax errors or
other problems with this. (fingers crossed)
cc @qwerasd205
This commit adds a few new mode flags to the `bench-stream` program to
generate synthetic OSC sequences. The new modes are `gen-osc`,
`gen-osc-valid`, and `gen-osc-invalid`. The `gen-osc` mode generates
equal parts valid and invalid OSC sequences, while the suffixed variants
are for generating only valid or invalid sequences, respectively.
This commit also fixes our build system to actually be able to build the
benchmarks. It turns out we were just rebuilding the main Ghostty binary
for `-Demit-bench`. And, our benchmarks didn't run under Zig 0.14, which
is now fixed.
An important new design I'm working towards in this commit is to split
out synthetic data generation to a dedicated package in
`src/bench/synth`, although I'm tempted to move it to `src/synth` since
it may be useful outside of benchmarks.
The synth package is a work-in-progress, but it contains a hint of
what's to come. I ultimately want to be able to generate all kinds of
synthetic data with a lot of knobs to control dimensionality (e.g. in
the case of OSC sequences: valid/invalid, length, operation types,
etc.).
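To give a feel for the data being generated (purely illustrative literals, not output from the synth package):
```zig
// A valid OSC sequence: ESC ] <number> ; <payload>, terminated by BEL or ST.
const valid_osc = "\x1b]0;window title\x07";
// An invalid variant: e.g. a sequence that is never terminated.
const invalid_osc = "\x1b]0;window title";
```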
This problem was introduced by f091a69 (PR #6675).
I've gone ahead and overhauled the placement positioning logic as well; it
was making a lot of expensive calls before, and I've significantly reduced
that.
Clipping partially off-screen images is now handled entirely by the
renderer, rather than while preparing the placement, and as such the
grid position passed to the image shader is now signed.
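A rough sketch of the idea (the struct and field names are illustrative, not the actual placement type):
```zig
// With signed grid coordinates, a placement that starts above or to the left
// of the viewport simply carries negative values and the renderer clips it,
// instead of pre-clipping while building the placement.
const Placement = struct {
    grid_x: i32, // previously unsigned, forcing clipping before rendering
    grid_y: i32,
    cell_width: u32,
    cell_height: u32,
};
```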
Fixes #7337
AppKit encodes functional keys as PUA codepoints. We don't want to send
that down as valid text encoding for a key event because KKP uses that
in particular to change the encoding with associated text.
I think there may be a more specific solution that only does this within
the KKP encoding part of KeyEncoder, but that code was filled with edge
cases and I didn't want to risk breaking anything else.
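For context, AppKit reports functional keys as characters in the Unicode private use area starting at 0xF700 (e.g. NSUpArrowFunctionKey); the check being described is along these lines (a sketch, not the actual code):
```zig
// AppKit reserves 0xF700–0xF8FF for functional keys (arrows, F-keys, etc.).
fn isAppKitFunctionalKey(cp: u21) bool {
    return cp >= 0xF700 and cp <= 0xF8FF;
}
```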
XTGETTCAP queries are a semicolon-delimited list of hex-encoded terminfo
capability names. Ghostty's capability map is keyed by uppercase hex
encodings, so when an application uses a lowercase encoding the capability
is not found. To fix this, we convert the entire list we receive in the
query to uppercase before processing it further.
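For example, the capability name "Co" hex-encodes to `436F` (uppercase) or `436f` (lowercase), and both should resolve to the same entry (illustrative literals only):
```zig
// XTGETTCAP: DCS + q <hex-encoded name> ST
const query_upper = "\x1bP+q436F\x1b\\"; // "Co", uppercase hex
const query_lower = "\x1bP+q436f\x1b\\"; // "Co", lowercase hex — previously missed
```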
Fixes: #7229
Fixes #7066
This fixes an issue where under certain conditions (expanded below), we
would not clear the correct row, leading to the screen having duplicate
data.
This was triggered by a page state of the following:
```
+----------+ = PAGE 0
... : :
4305 |1ABCD00000|
4306 |2EFGH00000|
:^ : = PIN 0
+-------------+ ACTIVE
4307 |3IJKL00000| | 0
+----------+ :
+----------+ : = PAGE 1
0 | | | 1
1 | | | 2
+----------+ :
+-------------+
```
Namely, the cursor had to NOT be on the last row of the first page,
but somewhere on the first page. Then, when an `index` (LF) operation
was performed, the result would look like this:
```
+----------+ = PAGE 0
... : :
4305 |1ABCD00000|
4306 |2EFGH00000|
+-------------+ ACTIVE
4307 |3IJKL00000| | 0
:^ : : = PIN 0
+----------+ :
+----------+ : = PAGE 1
0 |3IJKL00000| | 1
1 | | | 2
+----------+ :
+-------------+
```
The `3IJKL` line was duplicated. What was happening here is that we
performed the index operation correctly but failed to clear the cursor
line as expected.
This is because we were always clearing the first row in the page
instead of the row of the cursor.
Test added.
This sort of command is treated as valid by Kitty, so we should treat it as
valid too. In fact, it occurs with the example `send-png` script provided in
the docs
for the protocol.
clearCells() always asserts its page's integrity after finishing its
work (via a `defer`). We don't need to re-assert the page's integrity
immediately thereafter.