In multiple tests we create one or more pages by growing them one row at a
time, which triggers an integrity check of the page for every row grown,
which is just... horrible. By simply pausing integrity checks while
growing these pages (since growing them is not the point of the test) we
MASSIVELY speed up all of these tests.
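Roughly the idea, as a minimal Python sketch (Ghostty's actual page code is
Zig; the `Grid` type and method names here are purely illustrative): an
expensive invariant check runs after every mutation, and a pause/resume
wrapper lets bulk setup skip the per-row check and verify once at the end.
```
# Illustrative sketch only -- not Ghostty's actual code.
from contextlib import contextmanager

class Grid:
    def __init__(self) -> None:
        self.rows: list[list[str]] = []
        self._checks_paused = False

    def _verify_integrity(self) -> None:
        # Stand-in for the real (expensive) page integrity check.
        assert all(len(r) == 80 for r in self.rows)

    def grow(self) -> None:
        self.rows.append([" "] * 80)
        if not self._checks_paused:
            self._verify_integrity()  # runs after *every* grow -- the slow path

    @contextmanager
    def integrity_checks_paused(self):
        self._checks_paused = True
        try:
            yield self
        finally:
            self._checks_paused = False
            self._verify_integrity()  # a single check once setup is done

grid = Grid()
with grid.integrity_checks_paused():
    for _ in range(10_000):
        grid.grow()  # no per-row integrity check
```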
Also reduced grapheme bytes during testing and made the Terminal "glitch
text" test actually assert what it intends to achieve, rather than just
blindly assuming 100 copies of the text will be the right amount -- this
lets us stop a lot earlier, making it take practically no time.
I've added a weekly GitHub Actions workflow that automatically creates a
PR to update the iTerm2 colorschemes dependency and the Zig cache hash
alongside it. I'm using a third-party action to create the pull request
so that it won't create another PR if a pending one already exists, and
will force-push to the PR branch (iterm2_colors_action) instead.
I noticed that the other workflows made use of namespace runners and
caching along with a cachix action, but I haven't included them here
yet. I could update the workflow to match those if you'd like.
… break
Fixes #2941
This fixes the rendering of the text below. For those that can't see it,
it is the following in UTF-32: `0x22 0x1F3FF 0x22`.
```
"🏿"
```
`0x1F3FF` is the Fitzpatrick modifier for dark skin tone. It has the
Unicode property `Emoji_Modifier`. Emoji modifiers are defined in UTS
#51 and are only valid in the sequence given by ED-13:
```
emoji_modifier_sequence := emoji_modifier_base emoji_modifier
emoji_modifier_base := \p{Emoji_Modifier_Base}
emoji_modifier := \p{Emoji_Modifier}
```
Additional quote from UTS #51:
> To have an effect on an emoji, an emoji modifier must immediately follow
> that base emoji character. Emoji presentation selectors are neither needed
> nor recommended for emoji characters when they are followed by emoji
> modifiers, and should not be used in newly generated emoji modifier
> sequences; the emoji modifier automatically implies the emoji presentation
> style.
Our precomputed grapheme break table was mistakenly not following this
rule. This commit fixes that by adding a check that every
`Emoji_Modifier` character must be preceded by an `Emoji_Modifier_Base`.
This only has a cost during compilation (table generation). The runtime
cost is identical; the table size didn't increase since we had leftover
bits we could use.
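As a rough illustration of the rule (a Python sketch, not the Zig
table-generation code; the property sets below are tiny hand-picked samples
rather than the full Unicode data):
```
# Illustrative sketch only. Real code derives these sets from Unicode data.
EMOJI_MODIFIER = set(range(0x1F3FB, 0x1F3FF + 1))       # Fitzpatrick modifiers
EMOJI_MODIFIER_BASE_SAMPLE = {0x261D, 0x270A, 0x1F44D}  # e.g. index finger, fist, thumbs up

def breaks_before_modifier(prev_cp: int, cp: int) -> bool:
    """Is there a grapheme break between prev_cp and cp?"""
    if cp in EMOJI_MODIFIER:
        # Join only when the preceding codepoint is an Emoji_Modifier_Base
        # (UTS #51 ED-13); otherwise the modifier starts its own cluster.
        return prev_cp not in EMOJI_MODIFIER_BASE_SAMPLE
    return True  # all other break rules elided in this sketch

assert breaks_before_modifier(0x22, 0x1F3FF)         # '"' + modifier: break
assert not breaks_before_modifier(0x1F44D, 0x1F3FF)  # thumbs up + modifier: no break
```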
Load and display (`T`) was responding even with implicit IDs, because
the display was achieved with an early return in the transmit function
that bypassed the logic to silence implicit ID responses. Removing the
early return fixes this.
This reverts commit daa0fe00b16989cf4696c686aab33b80763834a3. It is
correct to use the pkg-config name instead of the literal dylib name.
Sorry, I didn't know at the time that pkg-config needs to be installed
and that Zig searches for the lib directly as a fallback.
I believe the correct permission bits are 644 (readable) instead of 755
(executable), as this is not an executable file and it works without
executable permissions.
This backs out commit bb185cf6b695420ce8b43b5c1cadd16ef71c481a.
This was breaking IME input for some users, and I couldn't find anyone
other than me for whom it really fixed anything, so I'm going to back
this out and fix it using my own system.
Variable font init used to just select the first available predefined
instance, if there were any, which is often not desirable. Using
`createFontDescriptorFromData` instead of `createFontDescriptorsFromData`
ensures that the default variation config is selected. In the future we
should probably allow selection of predefined instances, but for now
this is the correct behavior.
I found this bug when adding the metrics calculation test case for
CoreText, which is why fixing it is part of the same commit.
Unify grid metrics calculations by relying on shared logic mostly based
on values taken directly from the font tables; this deduplicates a lot
of code and gives us more control over how we interpret various metrics.
Also separate the metrics for underline, strikethrough, and overline
thickness and position, and box drawing thickness, so that they can be
adjusted individually as the user desires.
The surface might be mutated during the clipboard confirmation (resized
in my case), leading to the copied cursor `page_pin` being invalidated.
Fixes #1714. It would be nice if @stgarf could verify this.
I agree to the MIT relicensing.
**Context**
Currently, if there are multiple keybindings with a shared prefix,
they are grouped into a nested series of Binding.Sets.
For example, as reported in #2734, the following bindings:
keybind = ctrl+z>1=goto_tab:1
keybind = ctrl+z>2=goto_tab:2
keybind = ctrl+z>3=goto_tab:3
Result in roughly the following structure (in pseudo-code):
Keybinds{
  Trigger("ctrl+z"): Value.leader{
    Trigger("1"): Value.leaf{action: "goto_tab:1"},
    Trigger("2"): Value.leaf{action: "goto_tab:2"},
    Trigger("3"): Value.leaf{action: "goto_tab:3"},
  }
}
When this is formatted into a string (and therefore in +list-keybinds),
it turns into the following, because Value.format just concatenates
all the sibling bindings ('1', '2', '3') into consecutive bindings,
which are then fed into a single configuration entry:
keybind = ctrl+z>1=goto_tab:1>3=goto_tab:3>2=goto_tab:2
**Fix**
To fix this, Value needs to produce a separate configuration entry
for each sibling binding in the Value.leader case.
So we can't produce the entry (formatter.formatEntry) in Keybinds
and need to pass information down the Value tree to the leaf nodes,
each of which will produce a separate entry with that function.
This is accomplished with the help of a new Value.formatEntries method
that recursively builds up the prefix for the keybinding,
finally flushing it to the formatter when it reaches a leaf node.
This is done without extra allocations by using a FixedBufferStream
with the same buffer as before, sharing it between calls to nested
siblings of the same prefix.
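The recursion is roughly the following (a Python sketch for illustration
only; the real implementation is Zig and uses a FixedBufferStream rather
than StringIO, and these names don't mirror the actual types):
```
# Illustrative sketch only -- not Ghostty's actual code.
from io import StringIO

bindings = {
    "ctrl+z": {"1": "goto_tab:1", "2": "goto_tab:2", "3": "goto_tab:3"},
}

def format_entries(node: dict, prefix: StringIO, out: list[str]) -> None:
    mark = prefix.tell()
    for trigger, value in node.items():
        # Reuse the same buffer for every sibling of this prefix.
        prefix.seek(mark)
        prefix.truncate()
        prefix.write(trigger)
        if isinstance(value, dict):      # leader: recurse with an extended prefix
            prefix.write(">")
            format_entries(value, prefix, out)
        else:                            # leaf: flush one entry per binding
            out.append(f"keybind = {prefix.getvalue()}={value}")

entries: list[str] = []
format_entries(bindings, StringIO(), entries)
print("\n".join(entries))
# keybind = ctrl+z>1=goto_tab:1
# keybind = ctrl+z>2=goto_tab:2
# keybind = ctrl+z>3=goto_tab:3
```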
**Testing**
Besides the included unit tests, I ran the GLFW-based app
and verified that the resulting binary produced the correct output
with `ghostty +show-config`:
```
❯ .zig-cache//o/02a32e7ba516d2692577a46f1a0df682/ghostty +show-config 2>/dev/null | grep goto_tab
keybind = ctrl+z>1=goto_tab:1
keybind = ctrl+z>3=goto_tab:3
keybind = ctrl+z>2=goto_tab:2
```
**Caveats**
We do not track the order in which the bindings were added, so the
order is not retained in the formatConfig output.
Resolves #2734
A common issue for US-centric users of a terminal is that the "option"
key on macOS is not treated as the "alt" key in the terminal.
## Background
macOS does not have an "alt" key, but instead has an "option" key. The
"option" key is used for a variety of purposes, but the troublesome
behavior for some (and expected/desired behavior for others) is that it
is used to input special characters.
For example, on a US standard layout, `option-b` inputs `∫`. This is not
a typically desired character when using a terminal and most users will
instead expect that `option-b` maps to `alt-b` for keybinding purposes
with whatever shell, TUI, editor, etc. they're using.
On non-US layouts, the "option" key is a critical modifier key for
inputting certain characters in the same way "shift" is a critical
modifier key for inputting certain characters on US layouts.
We previously tried to change the default for `macos-option-as-alt` to
`left` (so that the left option key behaves as alt) because I had the
wrong assumption that international users always used the right option
key with terminals or were used to this. But very quickly, beta users
with different layouts (such as German, I believe) noted that this is
not the case and that the change broke their idiomatic input behavior.
The change was therefore reverted.
## Solution
This confusing behavior happened frequently enough that I decided to
implement the more complex behavior in this commit. The new behavior is
that when a US layout is active, `macos-option-as-alt` defaults to true
if it is unset. When a non-US layout is active, `macos-option-as-alt`
defaults to false if it is unset. This happens live as users change
their keyboard layout.
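In pseudo-code terms (a Python sketch of the decision only; the names are
illustrative and this is not the actual config code), the effective value
resolves like this:
```
# Illustrative sketch only: how an unset macos-option-as-alt resolves.
from typing import Optional

def effective_option_as_alt(configured: Optional[str], layout_is_us: bool) -> str:
    if configured is not None:  # an explicit user setting always wins
        return configured
    # Unset: default based on the currently active keyboard layout,
    # re-evaluated whenever the layout changes.
    return "true" if layout_is_us else "false"

assert effective_option_as_alt(None, layout_is_us=True) == "true"
assert effective_option_as_alt(None, layout_is_us=False) == "false"
assert effective_option_as_alt("left", layout_is_us=False) == "left"
```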
**An important goal of Ghostty is to have zero-config defaults** that
satisfy the majority of users. Fiddling with configurations is -- for
most -- an annoying task and software that works well enough out of the
box is delightful. Based on surveying beta users, I believe this commit
will result in less configuration for the majority of users.
## Other Terminals
This behavior is unique amongst terminals as far as I know.
Terminal.app, Kitty, iTerm2, Alacritty (I stopped checking there) all
default to the default macOS behavior (option is option and special
characters are inputted).
All of the aforementioned terminals have a setting to change this
behavior, identical to Ghostty (or rather, Ghostty identical to them,
since they all predate Ghostty).
I couldn't find any history where users requested that this default to
something else for US-based keyboards. That's interesting, since this
has come up so frequently during the Ghostty beta!
`hostname` is not guaranteed to exist on *nix, contrary to the comment.
For example, Arch does not have `hostname` by default.
`uname -n` is supposed to be the POSIX way to get the hostname, so I
replaced `hostname` with `uname`.
(I assumed that it is not that important to have the same binary name
across platforms, since we have to add `.exe` on Windows anyway.)
(And also, I did agree to the MIT relicensing, but I changed my email
since then, so I will agree to the relicensing again here)
Fixes #2516
These changes mean that when we have one Ghostty window in non-native
fullscreen and another Ghostty window not in fullscreen, switching to
the non-fullscreen window won't cause the menu bar and Dock to appear.
I think it looks good:

If we implemented detection and made the menu bar and Dock appear for
the non-fullscreen window in this case, it would cause the fullscreen
window to change its size and would look bad.
Non-native fullscreen works badly with multiple screens either way.
E.g. switching to a non-native fullscreen window would cause the menu
bar to disappear on another screen, or switching to a non-fullscreen
window would show the menu bar over the fullscreen window on another
screen. I think nobody would want non-native fullscreen with multiple
screens.