These should not be independent per frame; that makes the time
calculations janky.
Also moves the uniforms struct layout into `shadertoy.zig` and cleans
up the handling in general somewhat.
We need this to get info about the padding, even if we do derive the
grid and screen size separately.
In the future this should possibly be changed to a message that only
sends the padding info and nothing else.
If this were Swift code, we'd be using a strong reference, which would
retain the layer for us and release it when the object is deallocated,
but this is Zig land so we have to do that manually.
NOTE: We don't *have* to do this, but it fits much better with Zig idiom
and hopefully avoids potential future footguns. We should do this to any
autoreleased objects that we persist a reference to in a Zig struct.
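As a hedged illustration (zig-objc-style calls; `LayerHolder` is a
hypothetical type, not one from this change), the manual equivalent of
a Swift strong reference looks roughly like this:

```zig
const objc = @import("objc");

/// Hypothetical example: a Zig struct persisting a reference to an
/// autoreleased Objective-C object such as a layer.
const LayerHolder = struct {
    layer: objc.Object,

    pub fn init(layer: objc.Object) LayerHolder {
        // Retain so the layer outlives the enclosing autorelease
        // pool, as a Swift strong reference would do implicitly.
        _ = layer.msgSend(objc.Object, objc.sel("retain"), .{});
        return .{ .layer = layer };
    }

    pub fn deinit(self: *LayerHolder) void {
        // Balance the retain from init; Swift would do this when the
        // owning object is deallocated.
        self.layer.msgSend(void, objc.sel("release"), .{});
    }
};
```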
This commit is very large, representing about a month of work with many
interdependent changes that don't separate cleanly into atomic commits.
The main change here is unifying the renderer logic to a single generic
renderer, implemented on top of an abstraction layer over OpenGL/Metal.
I'll write a more complete summary of the changes in the description of
the PR.
This commit changes a LOT of areas of the code to use decl literals
instead of redundantly referring to the type.
These changes were mostly driven by some regex searches and then manual
adjustment on a case-by-case basis.
I almost certainly missed quite a few places where decl literals could
be used, but this is a good first step in converting things, and other
instances can be addressed when they're discovered.
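To illustrate the pattern (with a made-up `Config` type, not code from
the actual diff): a decl literal infers the type from the result
location instead of repeating it on the right-hand side.

```zig
const std = @import("std");

const Config = struct {
    width: u32 = 80,
    pub const default: Config = .{};
};

pub fn main() void {
    // Before: redundantly refers to the type on the right-hand side.
    const a = Config.default;
    // After: decl literal; the type is inferred from the declaration.
    const b: Config = .default;
    std.debug.print("{} {}\n", .{ a.width, b.width });
}
```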
I tested GLFW+Metal and building the framework on macOS and tested a GTK
build on Linux, so I'm 99% sure I didn't introduce any syntax errors or
other problems with this. (fingers crossed)
This problem was introduced by f091a69 (PR #6675).
I've gone ahead and overhauled the placement positioning logic as well;
it was making a lot of expensive calls before, and I've significantly
reduced that.
Clipping partially off-screen images is now handled entirely by the
renderer, rather than while preparing the placement, and as such the
grid position passed to the image shader is now signed.
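A hedged sketch of what that means for the shader input (struct and
field names here are illustrative, not the actual ones):

```zig
/// Sketch only: since the renderer now clips partially off-screen
/// images itself, a placement may begin above or to the left of the
/// viewport, so its grid position must be signed.
const ImageVertexInput = extern struct {
    grid_pos: [2]i32, // previously unsigned; may now be negative
    cell_offset: [2]u32,
};
```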
Fixes #2446
Two separate issues:
1. Ensure that our screen size matches the viewport size when
drawFrame is called. By the time drawFrame is called, GTK will have
updated the OpenGL context, but our deferred screen size may still
be incorrect since we wait for the pty to update the screen size.
2. Do not clear our cells buffer when the screen size changes, instead
switching to a mechanism that only clears the buffers when we have over
50% wasted space (see the sketch below).
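A minimal sketch of that second mechanism (function and threshold
naming are illustrative):

```zig
const std = @import("std");

/// Clear/compact the cells buffer only when more than half of its
/// capacity is wasted, rather than on every screen size change.
fn shouldClear(len: usize, capacity: usize) bool {
    return len < capacity / 2;
}

test "clear only when over 50% of the buffer is wasted" {
    try std.testing.expect(!shouldClear(60, 100));
    try std.testing.expect(shouldClear(40, 100));
}
```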
Co-authored-by: Andrew de los Reyes <adlr@rivosinc.com>
We can't use nearest-neighbor filtering for sampling from the atlas
because we might not actually be doing pixel-perfect sampling if the
glyph has been constrained. This will change in the future, but the
filter will have to be set to linear for now.
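For context, a hedged zig-objc-style sketch of configuring the atlas
sampler (the Metal selector names and the enum value are from the
public API; the surrounding code and marshaling details are
assumptions):

```zig
const objc = @import("objc");

// Public Metal constant: MTLSamplerMinMagFilterLinear == 1.
const mtl_filter_linear: c_ulong = 1;

fn makeAtlasSampler(device: objc.Object) objc.Object {
    const desc = objc.getClass("MTLSamplerDescriptor").?
        .msgSend(objc.Object, objc.sel("new"), .{});
    defer desc.msgSend(void, objc.sel("release"), .{});
    // Linear, not nearest: constrained glyphs aren't sampled
    // pixel-perfectly, so nearest-neighbor would cause artifacts.
    desc.msgSend(void, objc.sel("setMinFilter:"), .{mtl_filter_linear});
    desc.msgSend(void, objc.sel("setMagFilter:"), .{mtl_filter_linear});
    return device.msgSend(
        objc.Object,
        objc.sel("newSamplerStateWithDescriptor:"),
        .{desc.value},
    );
}
```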
When adding a glyph, we didn't pass the expected width to the glyph
renderer. This can be helpful when scaling emoji, as will happen in the
next commit.
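Roughly, the idea (with hypothetical names; the real options struct
differs) is that the expected cell width rides along with the render
request:

```zig
/// Hypothetical sketch: options passed to the glyph renderer. Knowing
/// the width (in cells) the glyph is expected to occupy lets the
/// renderer scale glyphs such as emoji, which typically span two
/// cells, to fit.
const RenderOptions = struct {
    cell_width_px: u32,
    cell_height_px: u32,
    /// Number of terminal cells the glyph is expected to span.
    expected_width: u2 = 1,
};
```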
This was causing garbled text due to non-rebuilt rows referencing an
outdated atlas when the DPI changed but not the grid dimensions, which
could be caused by a variety of things such as the quick terminal
slide-in, DPI scaling changes on sleep/wake, moving windows between
displays after closing/opening the laptop lid, etc.
Related to #3224
Previously, Ghostty used a static API for async event handling: io_uring
on Linux, kqueue on macOS. This commit changes the backend to be dynamic
on Linux so that epoll will be used if io_uring isn't available, or if
the user explicitly chooses it.
This introduces a new config `async-backend` (default "auto") which can
be set by the user to change the async backend in use. This is a
best-effort setting: if the user requests io_uring but it isn't
available, Ghostty will fall back to a backend that is, and that choice
is up to us.
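For example (assuming the backend values are spelled as in the prose
above):

```
# In the Ghostty config file: force epoll instead of io_uring, or
# leave this unset ("auto") to let Ghostty pick.
async-backend = epoll
```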
Basic benchmarking both in libxev and Ghostty (vtebench) shows no
noticeable performance difference from introducing the dynamic API or
from choosing epoll over io_uring.
This optimization is extremely niche and seems to be related to a
strange bug experienced by Intel Mac users. Considering that this extra
check has some cost even though it's false in the vast majority of
cases, I feel it's pretty safe to remove it entirely.
Discrete GPUs cannot use the "shared" storage mode. This causes
undefined behavior right now, and I believe it's what's causing a
problem on Intel systems with discrete GPUs with "inverted" cells.
This commit also sets the CPU cache mode to "write combined" for our
resources since we don't read them back so Metal can optimize them
further with this hint.
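A hedged sketch of the resulting resource options logic (the constants
are the public Metal API values; the selection function itself is
illustrative):

```zig
// Public MTLResourceOptions constants.
const cpu_cache_write_combined: c_ulong = 1; // ...CPUCacheModeWriteCombined
const storage_shared: c_ulong = 0 << 4; // MTLResourceStorageModeShared
const storage_managed: c_ulong = 1 << 4; // MTLResourceStorageModeManaged

/// Discrete GPUs can't use the shared storage mode, so fall back to
/// managed there. We never read these resources back on the CPU, so
/// the write-combined cache mode hint is always safe to add.
fn resourceOptions(gpu_is_discrete: bool) c_ulong {
    const storage = if (gpu_is_discrete) storage_managed else storage_shared;
    return storage | cpu_cache_write_combined;
}
```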
This is a more mathematically sound approach and does a much better job
of matching the appearance of non-linear blending. Removed
`experimental` from the name because it's not really an experiment
anymore.
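For background, blending in linear space means decoding sRGB-encoded
values with the standard transfer function before mixing; the helper
below is just the public sRGB formula, not code from this change:

```zig
const std = @import("std");

/// Standard sRGB electro-optical transfer function: decode an encoded
/// channel value in [0, 1] to linear light before blending.
fn srgbToLinear(c: f32) f32 {
    return if (c <= 0.04045)
        c / 12.92
    else
        std.math.pow(f32, (c + 0.055) / 1.055, 2.4);
}
```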
This fixes a regression introduced by the earlier rework of this area
during the color space changes. It seems like the behavior it regressed
to was the original intent of this code, but it turns out to be better
like this.
NEEDS REVIEW
Continuation of #5037; resolves #4729.
Renders all shaders to the default buffer and then copies it to the
designated custom shader texture (sketched below).
This is a draft PR because:
- it introduces a new shader "pipeline" which doesn't fit in with how
the system was designed to work (which is only rendering to the FBO)
- I'm not sure this is the best way to achieve shaders being able to
sample their own output while also drawing to the screen. The custom
FBO (previous implementation) was useful in that it modularized the
custom shader stage in rendering.
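A hedged sketch of the copy step (raw OpenGL calls via cImport; the
function and texture names are illustrative):

```zig
const c = @cImport(@cInclude("GL/gl.h"));

/// Copy the default framebuffer's current contents into the designated
/// custom shader texture, so the next shader pass can sample its own
/// previous output while still drawing to the screen.
fn copyScreenToTexture(tex: c.GLuint, w: c.GLsizei, h: c.GLsizei) void {
    c.glBindTexture(c.GL_TEXTURE_2D, tex);
    c.glReadBuffer(c.GL_BACK);
    c.glCopyTexSubImage2D(c.GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
}
```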
---------
Co-authored-by: Mitchell Hashimoto <m@mitchellh.com>