hardmode-triangle-2
Initializing our OpenGL context and using EGL to connect it to our window.
Summary
In this post we will be exploring EGL and interfacing EGL with wayland. By the end we will have a colored rectangle on screen.
Parts
- Part 0
- Introduction and why I decided to write an OpenGL hello-triangle program the hard way: no SDL, no GLFW, only Linux platform libraries.
- Part 1
- Grokking the wayland protocol and creating a window.
- Part 2
- Current page
Overview
EGL is a library for writing all the glue that goes in between your application and the operating system to coordinate access to resources on the GPU, in a platform independent way.
Specifically, EGL is a mechanism for allowing the graphics driver to share memory on the GPU between different rendering contexts. In our case, the contexts are the OpenGL context that our application uses and the OpenGL context that the compositor uses, and the memory—or buffer—is the contents of the application window. It is worth remembering that there are other types of buffers we might also share, such as cursors and popup windows.
EGL is managed by the same people responsible for OpenGL: the Khronos group. It also has extensions just like OpenGL does. The API of EGL is defined in a so-called registry XML file, just like OpenGL. The actual implementation, however, is left up to your operating system and graphics drivers.
In this part we will:
- Get zig OpenGL bindings set up
- Set up EGL on our system
- Link system EGL and set up the headers
- Get familiar with the EGL entry point: EGLDisplay
- Configure and initialize our EGL and OpenGL contexts
- Make EGL surfaces that are linked to wayland surfaces
- Draw our triangle!
OpenGL Setup
Using zig-opengl, it is possible to generate a zig file with bindings for whichever OpenGL version and extensions we need. Unlike zig-wayland, zig-opengl does not get run as part of your build.zig file. It is written in C# and, as a result, it requires a dotnet 7.0 runtime to be installed in order to run the generator.
Once that is done, generating the zig OpenGL bindings can be done like so:
Place the generated gl_4_5.zig in your deps folder and then add the following to your build.zig file:
exe.addModule("gl", b.createModule(.{
    .source_file = .{ .path = "deps/gl_4_5.zig" },
}));
And then add the following to main.zig:
const gl = @import("gl");
EGL Setup
In order for our application to use EGL, we need to have access to the proper header files and object files. I opted to use the system's EGL headers, which required me to install some packages. For example, on Ubuntu I needed to install libwayland-dev and libegl-dev; on different distros the package names might be different.
We should already have libwayland-dev installed from the previous part, but we will be using new EGL-specific functionality, defined in the system-wide wayland-egl.h header, for the first time here.
It can be helpful to know where the actual header files we depend on are located. This is because some of the documentation you will find online can be incomplete compared to looking at the actual source for the header files. On my Ubuntu system, the headers I needed to reference can be found at:
/usr/include/EGL/egl.h
/usr/include/EGL/eglext.h
/usr/include/wayland-client-core.h
/usr/include/wayland-egl.h
Finding Packages
On Ubuntu I was able to figure this out using dpkg. When you know the package you installed (for example libwayland-dev) but you don't know which headers it installed, you can list its files:
dpkg -L libwayland-dev
When you know the file path, or even just part of it, but you don't know which package installed that file, you can use:
dpkg -S /usr/include/EGL
Which will output something like:
libegl-dev:amd64: /usr/include/EGL
libegl-dev:amd64: /usr/include/EGL/eglext.h
libegl-dev:amd64: /usr/include/EGL/egl.h
libegl-dev:amd64: /usr/include/EGL/eglplatform.h
libegl-dev:amd64: /usr/lib/x86_64-linux-gnu/libEGL.so
If you are missing any of these files, you need to either get the headers and copy them into your project or install the required system packages (listed above). For example, on Ubuntu if you are missing the headers you can run:
sudo apt install libwayland-dev libegl-dev
Linking EGL
Once you have everything available on your system, you can proceed with adding the system dependencies to your build.zig file:
exe.linkSystemLibrary("wayland-egl");

// EGL is an interface for initializing and setting up a client API for GPU
// rendering (OpenGL, OpenGL ES, OpenVG)
exe.linkSystemLibrary("EGL");
Linking "EGL"
gives access to all the headers inside the folder! We can now
import the headers to start using EGL functions. At the top of our main.zig
file:
const egl = @cImport({
    @cInclude("EGL/egl.h");
});
However, this isn't quite sufficient. EGL, being platform independent, is missing some functionality that we would like to use. Some of the functionality that we will need is actually defined as an extension to EGL in EGL/eglext.h; we can amend our import statement to add the extension header and set the proper defines for our specific platform:
const egl = @cImport({
    @cDefine("WL_EGL_PLATFORM", "1");
    @cInclude("EGL/egl.h");
    @cInclude("EGL/eglext.h");
    @cUndef("WL_EGL_PLATFORM");
});
This will set up some EGL interface types as typedefs to wayland types:
typedef struct wl_display *EGLNativeDisplayType;
typedef struct wl_egl_pixmap *EGLNativePixmapType;
typedef struct wl_egl_window *EGLNativeWindowType;
Here we can already see some of how EGL bridges between wayland and OpenGL. Feel free to browse egl.h, eglplatform.h, and eglext.h to see how the platform defines work.
Display
Similar to wayland, EGL has the concept of a display, which represents the connection between the application and the host system. To quote the EGL man pages: "Most EGL functions require an EGL display connection, which can be obtained by calling eglGetPlatformDisplay or eglGetDisplay." As I understand it, the display is used for things like synchronization and connecting to the specific windowing API that is in use.
Since there are two options for getting our display, it's unclear which one is preferred without more information. However, it seems that eglGetPlatformDisplay is preferred wherever possible.
eglGetDisplay
- Given a native display (void*) or EGL_DEFAULT_DISPLAY, the function has to figure out what the platform is, which can be unreliable. The behavior is dependent on the particular EGL implementation, which means it can also vary by platform.
eglGetPlatformDisplay
- Obtains an EGL display connection for the specified platform and native_display. Since we explicitly indicate what the platform is, the function doesn't have to guess. Each platform is an extension that may not be supported.
Given this information, we can initialize our display connection using eglGetPlatformDisplay in our pub fn main after setting up our surfaces:
if (display.roundtrip() != .SUCCESS) return error.RoundtripFailure;

const egl_display =
    egl.eglGetPlatformDisplay(egl.EGL_PLATFORM_WAYLAND_KHR, display, null);
_ = egl_display;
If we run our application now with WAYLAND_DEBUG=1, the output should be identical compared to before, as we haven't done anything with egl_display yet.
Errors
Many EGL functions can return invalid values and generate errors. I recommend handling and checking for these errors in order to both help troubleshoot issues that might be unique to your system and understand a little bit better how EGL actually works.
If a function returns the object we are interested in directly, then EGL defines a specific sentinel value for that object which represents an invalid state, like EGL_NO_DISPLAY. Otherwise, it will return EGL_FALSE upon failure.
In order to get details about the error returned, we must call eglGetError. The best place to learn which errors each function can generate is the specification or the online manual.
The first thing we need to do to start using our egl_display is initialize our display using the eglInitialize function:
This function takes our egl_display as its first argument, and will set the major and minor version of the spec this EGL implementation conforms to.
Initializing our EGL display can also produce errors; it can be useful for troubleshooting to know what the possible error values are and what they mean. The best place for this information is the EGL specification.
EGL_BAD_DISPLAY
- The display passed in was invalid. This can happen when the platform is not specified.
EGL_NOT_INITIALIZED
- The display was valid, but EGL could not be initialized.
With the error handling code, initializing the egl_display looks like this:
const egl_display =
    egl.eglGetPlatformDisplay(egl.EGL_PLATFORM_WAYLAND_KHR, display, null);

var egl_major: egl.EGLint = 0;
var egl_minor: egl.EGLint = 0;
if (egl.eglInitialize(egl_display, &egl_major, &egl_minor) == egl.EGL_TRUE) {
    std.log.info("EGL version: {}.{}", .{ egl_major, egl_minor });
} else switch (egl.eglGetError()) {
    egl.EGL_BAD_DISPLAY => return error.EglBadDisplay,
    else => return error.EglFailedToInitialize,
}
One more thing that can be useful to do is call eglTerminate to clean up our display at the end of the program:
defer _ = egl.eglTerminate(egl_display);
Now if we run our program with WAYLAND_DEBUG=1, we will start to see how EGL integrates with wayland under the hood.
First, you will see our first round-trip syncing the registration of globals, and our second round-trip creating our surface and assigning roles.
info: Hardmode triangle.
[4053331.824] -> wl_display@1.get_registry(new id wl_registry@2)
[4053331.863] -> wl_display@1.sync(new id wl_callback@3)
[4053332.071] wl_display@1.delete_id(3)
[4053332.096] wl_registry@2.global(1, "wl_compositor", 5)
[4053332.108] -> wl_registry@2.bind(1, "wl_compositor", 1, new id [unknown]@4)
[4053332.182] wl_registry@2.global(9, "xdg_wm_base", 4)
[4053332.191] -> wl_registry@2.bind(9, "xdg_wm_base", 1, new id [unknown]@5)
[4053332.301] wl_registry@2.global(22, "zwp_linux_dmabuf_v1", 4)
[4053332.352] wl_callback@3.done(231487)
[4053332.363] -> wl_compositor@4.create_surface(new id wl_surface@3)
[4053332.372] -> xdg_wm_base@5.get_xdg_surface(new id xdg_surface@6, wl_surface@3)
[4053332.383] -> xdg_surface@6.get_toplevel(new id xdg_toplevel@7)
[4053332.393] -> wl_display@1.sync(new id wl_callback@8)
[4053337.385] wl_display@1.delete_id(8)
[4053337.408] wl_callback@8.done(231487)
But then we see a second wl_registry@8 with a new ID, and a binding to a new global that we haven't seen before, zwp_linux_dmabuf_v1:
[4053341.571] -> wl_display@1.get_registry(new id wl_registry@8)
[4053341.596] -> wl_display@1.sync(new id wl_callback@9)
[4053351.971] wl_display@1.delete_id(9)
[4053351.990] wl_registry@8.global(1, "wl_compositor", 5)
[4053352.056] wl_registry@8.global(9, "xdg_wm_base", 4)
[4053352.117] wl_registry@8.global(22, "zwp_linux_dmabuf_v1", 4)
[4053352.124] -> wl_registry@8.bind(22, "zwp_linux_dmabuf_v1", 4, new id [unknown]@10)
[4053352.153] wl_callback@9.done(231487)
The z indicates that this comes from an unstable protocol, and the v1 indicates the protocol version. The namespace wp is reserved for protocols that are generally useful to Wayland implementations. wp_linux_dmabuf is intended to allow clients that render on the GPU and compositors that render on the GPU to share GPU buffers back and forth.
dmabuf is the Linux kernel's DMA (Direct Memory Access) buffer-sharing subsystem. It allows different devices on the system to share memory, and is implemented in the device driver. This makes sense as a mechanism for wayland and EGL to use to share access to GPU resources!
[4053352.158] -> zwp_linux_dmabuf_v1@10.get_default_feedback(new id zwp_linux_dmabuf_feedback_v1@9)
[4053352.164] -> wl_display@1.sync(new id wl_callback@11)
[4053353.036] wl_display@1.delete_id(11)
[4053353.041] zwp_linux_dmabuf_feedback_v1@9.format_table(fd 4, 176)
[4053353.103] zwp_linux_dmabuf_feedback_v1@9.main_device(array[8])
[4053354.082] zwp_linux_dmabuf_feedback_v1@9.tranche_target_device(array[8])
[4053354.093] zwp_linux_dmabuf_feedback_v1@9.tranche_flags(0)
[4053354.100] zwp_linux_dmabuf_feedback_v1@9.tranche_formats(array[22])
[4053354.115] zwp_linux_dmabuf_feedback_v1@9.tranche_done()
[4053354.121] zwp_linux_dmabuf_feedback_v1@9.done()
[4053354.126] wl_callback@11.done(231487)
[4053354.132] -> zwp_linux_dmabuf_feedback_v1@9.destroy()
info: EGL version: 1.5
Here we see a get_default_feedback request; according to wayland.app, this request creates a new wp_linux_dmabuf_feedback object not bound to a particular surface. This object will deliver feedback about dmabuf parameters to use if the client doesn't support per-surface feedback. We don't need to go into any more specifics than that for our purposes, but I hope this has provided insight into the inner workings of EGL.
Config
There are many ways to possibly encode an image in memory, but only a subset of all possible encodings are supported by the hardware and software of your system. In order to create an EGL buffer, we must select a specific encoding for our image. EGL calls this an EGLConfig, which has properties like:
EGL_DEPTH_SIZE
- Bit-wise size of the depth component (Z in clip space) of a texture buffer.
EGL_NATIVE_RENDERABLE
- Whether this buffer configuration can be a render target of a rendering API.
In practice, there can be multiple possible configurations for a given set of requirements as well, so EGL provides a function to choose a configuration (or list of configurations) given a set of attributes that are required:
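That function is eglChooseConfig; its C declaration, from EGL/egl.h, is:

```c
EGLBoolean eglChooseConfig(EGLDisplay dpy,
                           const EGLint *attrib_list,
                           EGLConfig *configs,
                           EGLint config_size,
                           EGLint *num_config);
```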
The attributes list is a many-item pointer terminated by EGL_NONE, containing pairs; the first element of each pair is the attribute and the second element is the requirement. The attributes we are interested in are:
EGL_SURFACE_TYPE
- A bitmask indicating the roles of our surface; since we want to create a surface that is our application window, we will use EGL_WINDOW_BIT. The other supported values are EGL_PIXMAP_BIT and EGL_PBUFFER_BIT; they are not mutually exclusive.
EGL_RENDERABLE_TYPE
- A bitmask indicating the supported client APIs; note these can be OR'd (|) together to request configs that support multiple APIs for sharing between them. The API we are interested in is EGL_OPENGL_BIT.
EGL_RED_SIZE
- The minimum required bit-wise size of the red component of the buffer.
EGL_GREEN_SIZE
- The minimum required bit-wise size of the green component of the buffer.
EGL_BLUE_SIZE
- The minimum required bit-wise size of the blue component of the buffer.
EGL_ALPHA_SIZE
- The minimum required bit-wise size of the transparency/opacity component of the buffer.
Using zig's explicit terminator in the type of our array, we can create a list of our required attributes in fn main like so:
const egl_attributes: [12:egl.EGL_NONE]egl.EGLint = .{
    egl.EGL_SURFACE_TYPE,    egl.EGL_WINDOW_BIT,
    egl.EGL_RENDERABLE_TYPE, egl.EGL_OPENGL_BIT,
    egl.EGL_RED_SIZE,        8,
    egl.EGL_GREEN_SIZE,      8,
    egl.EGL_BLUE_SIZE,       8,
    egl.EGL_ALPHA_SIZE,      8,
};
The rest of the parameters are relatively simple: eglChooseConfig requires our egl_display, an array where it will store the valid configs, the size of the output array, and a pointer where it will indicate the number of configurations returned.
Referring to the specification, eglChooseConfig returns the following error on failure:
EGL_BAD_ATTRIBUTE
- Generated if the attribute list contains an undefined EGL attribute or an attribute value that is unrecognized or out of range.
Passing in our display, the required attributes, and handling our errors can be done like so:
const egl_config = config: {
    // Rather than ask for a list of possible configs, we just get the first
    // one and hope it is a good choice.
    var config: egl.EGLConfig = null;
    var num_configs: egl.EGLint = 0;
    const result = egl.eglChooseConfig(
        egl_display,
        &egl_attributes,
        &config,
        1,
        &num_configs,
    );

    if (result != egl.EGL_TRUE) {
        switch (egl.eglGetError()) {
            egl.EGL_BAD_ATTRIBUTE => return error.InvalidEglConfigAttribute,
            else => return error.EglConfigError,
        }
    }

    break :config config;
};
While we have only used a single config, we could also ask for multiple configs and process them to select the best one. Given a particular EGLConfig, one can call eglGetConfigAttrib to query its attributes and use those to make choices about which config to prefer. The specification has a lot more to say about how EGLConfigs are chosen and sorted, so I recommend referring to the specification and the eglGetConfigAttrib manual page for more details.
Creating a Context
In order to proceed, we must actually create our OpenGL rendering context. EGL allows for the creation of several different kinds of rendering contexts, and ways of sharing data between them. As a result, we must specify our required attributes for our context to get one that supports the functionality we need.
Our first step is to bind the client API that our application will use:
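That function is eglBindAPI, declared in EGL/egl.h as:

```c
EGLBoolean eglBindAPI(EGLenum api);
```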
The supported values we can pass in as api are:
- EGL_OPENGL_API
- EGL_OPENGL_ES_API
- EGL_OPENVG_API
The function will return EGL_FALSE for any other enum value (as of EGL 1.5), and generate EGL_BAD_PARAMETER if the current EGL implementation does not support the requested client API.
Thus, we bind OpenGL in our main function like so:
if (egl.eglBindAPI(egl.EGL_OPENGL_API) != egl.EGL_TRUE) {
    switch (egl.eglGetError()) {
        egl.EGL_BAD_PARAMETER => return error.OpenGlUnsupported,
        else => return error.InvalidApi,
    }
}
From here we can proceed with actually creating the OpenGL rendering context. We will use the EGLConfig we chose earlier and our display connection to create the context. If we wanted to do multi-context rendering, we would pass in another context as the share_context to allow sharing data between them.
The attrib_list has a similar format to the attribute list we used for config creation, but with different supported keys and values. The specification contains a full list of the supported values, but here we will use:
EGL_CONTEXT_MAJOR_VERSION, 4
- Specifies the major version number of the selected client API; will return a context that supports this version number.
EGL_CONTEXT_MINOR_VERSION, 5
- Specifies the minor version number of the selected client API; will return a context that supports at least this version number. The actual supported version could be higher if it is backwards compatible with the requirements stated here.
const context_attributes: [4:egl.EGL_NONE]egl.EGLint = .{
    egl.EGL_CONTEXT_MAJOR_VERSION, 4,
    egl.EGL_CONTEXT_MINOR_VERSION, 5,
};
With our context attributes set, we can now create our context:
const egl_context = egl.eglCreateContext(
    egl_display,
    egl_config,
    egl.EGL_NO_CONTEXT,
    &context_attributes,
) orelse switch (egl.eglGetError()) {
    egl.EGL_BAD_ATTRIBUTE => return error.InvalidContextAttribute,
    egl.EGL_BAD_CONFIG => return error.CreateContextWithBadConfig,
    egl.EGL_BAD_MATCH => return error.UnsupportedConfig,
    else => return error.FailedToCreateContext,
};

defer _ = egl.eglDestroyContext(egl_display, egl_context);
We also need to load our OpenGL functions. Calling any OpenGL function at this point will segfault our program, because each function must have its address loaded at runtime. If we peek into gl_4_5.zig we will see a load function which sets a bunch of function pointers:
This is due to the variety of ways that one might implement a graphics driver. Even on a single system, the actual rendering device could be a software renderer that implements the OpenGL specification, a graphics card, or perhaps even one of multiple graphics cards, each from different vendors with different drivers.
Thus, EGL provides us with a function called eglGetProcAddress to look up the address of every function for our specified version of OpenGL, which we will pass as the get_proc_address argument.
We need to create a wrapper function around eglGetProcAddress, as gl.load provides us with the ability to pass a context into the load function:
fn getProcAddress(_ctx: void, name: [:0]const u8) ?gl.FunctionPointer {
    return egl.eglGetProcAddress(name);
}
Then we can load the OpenGL functions in our fn main after creating the context like so:
try gl.load({}, getProcAddress);
Now, calling gl functions will no longer segfault the program!
Window & Surface
Now we can begin interfacing our wayland objects with our rendering context through EGL. This is where the wayland-egl library that we linked into our program earlier comes into play. zig-wayland exposes functionality from wayland-egl.h in wayland_client_core.zig, which we will be using via the wayland.client.wl.EglWindow interface.
Inside our fn main, after creating our context:
const egl_window = try wl.EglWindow.create(surface, 720, 480);
_ = egl_window;
You may recall that when we set the WL_EGL_PLATFORM define, EGL sets the native window type used by EGL functions to wl_egl_window.
EglWindow is an opaque pointer that underneath refers to the wl_egl_window type defined by libwayland. From the definition of wl_egl_window in wayland-egl-backend.h, we can see that it is a relatively thin wrapper around the wl_surface object we pass in. Thus, the EglWindow.create function wraps our wl_surface into a wl_egl_window and initializes members that the EGL implementation will use.
In order to create the GPU buffer associated with our window, we need to create an EGLSurface from our window, which is the resource that our rendering context will draw to.
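The function we will use is eglCreatePlatformWindowSurface, declared in EGL 1.5's egl.h as:

```c
EGLSurface eglCreatePlatformWindowSurface(EGLDisplay dpy,
                                          EGLConfig config,
                                          void *native_window,
                                          const EGLAttrib *attrib_list);
```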
The EGLSurface will be the render target of our rendering context; we can have multiple EGLSurfaces which we render to, each of which can be part of different wl_surfaces.
The possible errors generated are:
EGL_BAD_MATCH
- The config pixel format does not match what is required to create the surface.
EGL_BAD_CONFIG
- Generated when the config passed in is invalid.
EGL_BAD_NATIVE_WINDOW
- The native window passed in is invalid.
Thus, with the error handling added in, we can create our surface in fn main like so:
const egl_window = try wl.EglWindow.create(surface, 720, 480);

const egl_surface = egl.eglCreatePlatformWindowSurface(
    egl_display,
    egl_config,
    @ptrCast(egl_window),
    null,
) orelse switch (egl.eglGetError()) {
    egl.EGL_BAD_MATCH => return error.MismatchedConfig,
    egl.EGL_BAD_CONFIG => return error.InvalidConfig,
    egl.EGL_BAD_NATIVE_WINDOW => return error.InvalidWindow,
    else => return error.FailedToCreateEglSurface,
};
_ = egl_surface;
Running our application again with WAYLAND_DEBUG=1, we can see new wayland requests and events which correspond to creating the surface:
info: EGL version: 1.5
[4050679.300] -> zwp_linux_dmabuf_v1@10.get_surface_feedback(new id zwp_linux_dmabuf_feedback_v1@11, wl_surface@3)
[4050679.324] -> wl_display@1.sync(new id wl_callback@12)
[4050679.787] wl_display@1.delete_id(9)
[4050679.798] wl_display@1.delete_id(12)
[4050679.802] wl_callback@12.done(18110)
[4050680.281] zwp_linux_dmabuf_feedback_v1@11.format_table(fd 8, 1968)
[4050680.309] zwp_linux_dmabuf_feedback_v1@11.main_device(array[8])
[4050680.316] zwp_linux_dmabuf_feedback_v1@11.tranche_target_device(array[8])
[4050680.320] zwp_linux_dmabuf_feedback_v1@11.tranche_flags(0)
[4050680.326] zwp_linux_dmabuf_feedback_v1@11.tranche_formats(array[246])
[4050680.335] zwp_linux_dmabuf_feedback_v1@11.tranche_done()
[4050680.343] zwp_linux_dmabuf_feedback_v1@11.done()
We are almost ready to start rendering!
Make Current
Since EGL allows for multiple contexts and surfaces for rendering, we must specify which context and surface combination will be the target for API calls using eglMakeCurrent.
Every time one wants to change the surface or the context, one must call eglMakeCurrent. The bound context is specific to the calling thread; only one OpenGL or OpenGL ES context can be set as the current context per thread.
Upon failure, the function will return EGL_FALSE and generate one of the following errors:
EGL_BAD_ACCESS
- When context is bound to another thread, or the draw and read surfaces are bound on another thread, creating a possible race condition. Also generated when the maximum number of bound instances of context already exist across all threads.
EGL_BAD_MATCH
- Generated when draw or read are valid surfaces but the context is EGL_NO_CONTEXT, or some other combination of mismatched surfaces. Some contexts require the read and draw surfaces to be the same.
EGL_BAD_NATIVE_WINDOW
- Generated when the window associated with the surface is invalid.
EGL_BAD_CONTEXT
- The context used is invalid and is not explicitly EGL_NO_CONTEXT.
EGL_BAD_ALLOC
- There was not enough memory to allocate the draw or read surfaces.
The specification includes a few more error conditions, but they are not really relevant to our program.
With the error handling code, we can add the following to our fn main:
const result = egl.eglMakeCurrent(
    egl_display,
    egl_surface,
    egl_surface,
    egl_context,
);
if (result == egl.EGL_FALSE) {
    switch (egl.eglGetError()) {
        egl.EGL_BAD_ACCESS => return error.EglThreadError,
        egl.EGL_BAD_MATCH => return error.MismatchedContextOrSurfaces,
        egl.EGL_BAD_NATIVE_WINDOW => return error.EglWindowInvalid,
        egl.EGL_BAD_CONTEXT => return error.InvalidEglContext,
        egl.EGL_BAD_ALLOC => return error.OutOfMemory,
        else => return error.FailedToMakeCurrent,
    }
}
I needed to set both the draw and read surfaces to the surface we created to avoid an egl.EGL_BAD_MATCH error.
From here, it is possible to start making OpenGL calls to render to the surface. To keep things simple, we will finish this part with just a colored window, which we can do by setting gl.clearColor and calling gl.clear.
// Pick whichever color you want
gl.clearColor(1.0, 1.0, 0.5, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.flush();
After we do so, we need to actually end the frame and present what we have rendered so far, which we can do using eglSwapBuffers. This function takes the back-buffer which is the target of our OpenGL calls and presents it to the surface. Its possible errors are:
EGL_BAD_DISPLAY
- display is not an EGL display connection.
EGL_NOT_INITIALIZED
- display has not been initialized.
EGL_BAD_SURFACE
- surface is not an EGL drawing surface.
EGL_CONTEXT_LOST
- A power management event has occurred. The application must destroy all contexts and reinitialize OpenGL ES state and objects to continue rendering.
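For reference, the C declaration of eglSwapBuffers, from EGL/egl.h, is:

```c
EGLBoolean eglSwapBuffers(EGLDisplay dpy, EGLSurface surface);
```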
if (egl.eglSwapBuffers(egl_display, egl_surface) != egl.EGL_TRUE) {
    switch (egl.eglGetError()) {
        egl.EGL_BAD_DISPLAY => return error.InvalidDisplay,
        egl.EGL_BAD_SURFACE => return error.PresentInvalidSurface,
        egl.EGL_CONTEXT_LOST => return error.EGLContextLost,
        else => return error.FailedToSwapBuffers,
    }
}

_ = display.dispatch();
If we run our program now with WAYLAND_DEBUG=1, we don't see a window quite yet, but we do see more new requests and events to dig into:
[2949174.260] -> wl_surface@3.frame(new id wl_callback@12)
[2949174.282] -> zwp_linux_dmabuf_v1@10.create_params(new id zwp_linux_buffer_params_v1@9)
[2949174.330] -> zwp_linux_buffer_params_v1@9.add(fd 9, 0, 0, 3072, 33554436, 1078049281)
[2949174.342] -> zwp_linux_buffer_params_v1@9.add(fd 10, 1, 1572864, 1024, 33554436, 1078049281)
[2949174.350] -> zwp_linux_buffer_params_v1@9.create_immed(new id wl_buffer@13, 720, 480, 875713089, 0)
[2949174.357] -> zwp_linux_buffer_params_v1@9.destroy()
[2949174.362] -> wl_surface@3.attach(wl_buffer@13, 0, 0)
[2949174.369] -> wl_surface@3.damage(0, 0, 2147483647, 2147483647)
[2949174.375] -> wl_surface@3.commit()
In Part 1 we mentioned that a wl_surface is a rectangular area that may be displayed on zero or more outputs, and shown any number of times at the compositor's discretion. They can present wl_buffers, receive user input, and define a local coordinate system. However, we never created any wl_buffers or attempted to present them. That is because the buffer exists on the GPU, and, as a result, we can't just create a big old array and commit that as our buffer.
However, the Linux dmabuf subsystem is capable of doing just that, and our call to eglSwapBuffers creates a new wl_buffer and attaches it to our bound surface in order to display what we rendered with OpenGL.
If we walk through the new requests roughly in order, we can see EGL creating a wl_buffer and then attaching it to our designated surface:
wl_surface.frame
- Request a notification when it is a good time to start drawing a new frame, by creating a frame callback. This is useful for throttling redrawing operations, and driving animations. When a client is animating on a wl_surface, it can use the 'frame' request to get notified when it is a good time to draw and commit the next frame of animation. If the client commits an update earlier than that, it is likely that some updates will not make it to the display, and the client is wasting resources by drawing too often.
zwp_linux_buffer_params_v1.create_immed
- This asks for immediate creation of a wl_buffer by importing the added dmabufs. In other words, map the dmabuf directly to a wl_buffer, so we can present it to our surface.
wl_surface.attach
- Set a buffer as the content of this surface. Surfaces are double-buffered, however, and the surface must be explicitly set to be the front buffer. Part of this is due to wayland's promise of making every frame perfect (i.e. avoiding screen tearing or other artifacts).
wl_surface.damage
- This request is used to describe the regions where the pending buffer is different from the current surface contents, and where the surface therefore needs to be repainted. The compositor ignores the parts of the damage that fall outside the surface.
wl_surface.commit
- Surface state (input, opaque, and damage regions, attached buffers, etc.) is double-buffered. Protocol requests modify the pending state, as opposed to the current state in use by the compositor. A commit request atomically applies all pending state, replacing the current state. After commit, the new pending state is as documented for each related request.
Now we can actually create our main loop and see our colored window. Note that we will want some way to sync up wayland events, which we can do with display.dispatch or display.roundtrip. This is certainly a bigger topic to consider in a real application, but dispatch works well enough for our purposes.
gl.clearColor(1, 1, 0.5, 1);

// Running is set by our xdg_toplevel listener, associated with this buffer
while (running) {
    gl.clear(gl.COLOR_BUFFER_BIT);
    gl.flush();

    if (egl.eglSwapBuffers(egl_display, egl_surface) != egl.EGL_TRUE) {
        switch (egl.eglGetError()) {
            egl.EGL_BAD_DISPLAY => return error.InvalidDisplay,
            egl.EGL_BAD_SURFACE => return error.PresentInvalidSurface,
            egl.EGL_CONTEXT_LOST => return error.EGLContextLost,
            else => return error.FailedToSwapBuffers,
        }
    }

    if (display.dispatch() != .SUCCESS) {
        return error.DispatchFailed;
    }
}
If everything is working properly, then this will draw a bare colored rectangle: no window decorations, no controls for resizing or moving the window, no close or maximize buttons. It is up to the client application to draw all of its borders and controls, and this is usually handled by GUI application frameworks like GTK, KDE, etc. With the unstable xdg-decoration extension, the compositor can add decorations, but relatively few compositors support this extension as of the writing of this article.
Congratulations, with that we have the bare minimum necessary to present a buffer drawn with OpenGL to the screen!
Troubleshooting
There are lots of things that can go wrong; I certainly ran into a lot of befuddling and aggravating issues along the way. Here I will list the issues I had, in no particular order:
Rendering deadlock
I have noticed a deadlock when calling display.dispatch
and eglSwapBuffers
in the wrong order. Perhaps it has something to do with the way both functions
interact with wl_callback
objects?
OpenGL version might be unsupported
Not all computers support OpenGL 4.5, especially older hardware or small low-power devices like the Raspberry Pi, which require OpenGL ES. You can check the version of the OpenGL context you were actually given using functions like:
- gl.getString(gl.VERSION)
- egl.eglQueryString(egl.EGL_CLIENT_APIS)
- egl.eglQueryContext
Be sure to update the context creation code if you need to change APIs or API versions, and re-generate your OpenGL bindings!
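For example, a quick startup log of what the driver actually provided might look like the following sketch. It uses the gl and egl namespaces from earlier in the series; the exact return pointer types depend on your generated bindings, so treat this as an assumption:

```zig
// Report the context version and client APIs the driver actually gave us;
// useful when the requested context version is silently unavailable.
std.log.info("GL version: {s}", .{gl.getString(gl.VERSION)});
std.log.info("EGL client APIs: {s}", .{egl.eglQueryString(egl_display, egl.EGL_CLIENT_APIS)});
```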
eglMakeCurrent
requires a read buffer
From the documentation it seems that egl.EGL_NO_SURFACE/null is a valid
read-surface argument in some cases, but on my system a real read surface
was required to avoid an error.
Application is not responding
For simplicity, we have ignored events on xdg_wm_base
, one of which is the
ping
event, which expects a pong
request in response. This is one way
in which the compositor detects a frozen application. Handling this event will
likely stop the "Not Responding" popup.
Surface does not acknowledge configuration
One important function we have overlooked is surface configuration. I recommend
reading further in the wayland book to learn more, but if you
are running into issues then adding a listener to the xdg_surface
we created
might help.
fn xdgSurfaceListener(
xdg_surface: *xdg.Surface,
event: xdg.Surface.Event,
surface: *wl.Surface,
) void {
switch (event) {
.configure => |configure| {
xdg_surface.ackConfigure(configure.serial);
surface.commit();
},
}
}
Triangle
Well this series is called hardmode-triangle
, not hardmode-rectangle
, so
let's get our triangle on-screen! Since this tutorial is focused on the wayland
and EGL parts more than the OpenGL parts, we will breeze through this quite
quickly.
The first thing we can do is set up the actual vertex data for our triangle:
std.log.info("Set up triangle", .{});

// Create an object that tracks the buffers and metadata of our triangle
const triangle_object: gl.GLuint = vao: {
    var vao: gl.GLuint = 0;
    gl.genVertexArrays(1, &vao);
    break :vao vao;
};

// Set our triangle as the target of the operations below
gl.bindVertexArray(triangle_object);

// Load the vertex position data of our triangle onto the GPU
{
    const data: [9]f32 = .{
        0.5, 0.5, 0.0,
        0.5, -0.5, 0.0,
        -0.5, -0.5, 0.0,
    };

    // Create and bind the buffer to the triangle object
    var buffer: gl.GLuint = 0;
    gl.genBuffers(1, &buffer);
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(
        gl.ARRAY_BUFFER,
        @sizeOf(@TypeOf(data)),
        &data,
        gl.STATIC_DRAW,
    );
}

// Set metadata that describes the layout of our vertex buffer
gl.vertexAttribPointer(0, 3, gl.FLOAT, gl.FALSE, 3 * @sizeOf(f32), @ptrFromInt(0));
gl.enableVertexAttribArray(0);
This will provide our GPU with an array of positions that represent the 3 corners of the triangle, along with all the metadata required to interpret the buffer.
I separated things into blocks to make the dependencies, repeated code, and necessary lifetimes clearer. Ideally, around this point one would begin extracting the repeated code into functions.
However, getting the data onto the GPU isn't enough. OpenGL exposes a
programmable render pipeline, which we will hook into using programs called
shaders, written in GLSL
. The
code for compiling the shaders and linking them is as follows:
std.log.info("Compile Shaders", .{});

const vertex_shader = create: {
    const src =
        \\ #version 330 core
        \\
        \\ layout (location = 0) in vec3 vPos;
        \\
        \\ void main() {
        \\     gl_Position = vec4(vPos, 1.f);
        \\ }
    ;
    const src_ptr: [*c]const u8 = src; // Awkward ptr conversion

    const shader: gl.GLuint = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(shader, 1, &src_ptr, null);
    gl.compileShader(shader);

    var success: gl.GLint = 0;
    gl.getShaderiv(shader, gl.COMPILE_STATUS, &success);
    if (success == gl.FALSE) {
        var info: [512]u8 = [_]u8{0} ** 512;
        gl.getShaderInfoLog(shader, info.len, null, &info);
        std.log.err("vertex shader failed to compile: {s}", .{info});

        return error.ShaderFailedToCompile;
    }

    break :create shader;
};

const fragment_shader = create: {
    const src =
        \\ #version 330 core
        \\
        \\ out vec4 color;
        \\
        \\ void main() {
        \\     color = vec4(0.0f, 0.0f, 0.0f, 1.0f);
        \\ }
    ;
    const src_ptr: [*c]const u8 = src; // Awkward ptr conversion

    const shader: gl.GLuint = gl.createShader(gl.FRAGMENT_SHADER);
    gl.shaderSource(shader, 1, &src_ptr, null);
    gl.compileShader(shader);

    var success: gl.GLint = 0;
    gl.getShaderiv(shader, gl.COMPILE_STATUS, &success);
    if (success == gl.FALSE) {
        var info: [512]u8 = [_]u8{0} ** 512;
        gl.getShaderInfoLog(shader, info.len, null, &info);
        std.log.err("fragment shader failed to compile: {s}", .{info});

        return error.ShaderFailedToCompile;
    }

    break :create shader;
};

const program = gl.createProgram();
gl.attachShader(program, vertex_shader);
gl.attachShader(program, fragment_shader);
gl.linkProgram(program);

gl.deleteShader(vertex_shader);
gl.deleteShader(fragment_shader);
I like to set up the program before setting up the vertex data, but any order works.
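One check the listing above omits is the program link status. A sketch of that check, mirroring the per-shader compile-status checks, might look like this (the error name is my own invention):

```zig
// Linking can fail even when both shaders compiled cleanly,
// so query LINK_STATUS just as we queried COMPILE_STATUS above.
var success: gl.GLint = 0;
gl.getProgramiv(program, gl.LINK_STATUS, &success);
if (success == gl.FALSE) {
    var info: [512]u8 = [_]u8{0} ** 512;
    gl.getProgramInfoLog(program, info.len, null, &info);
    std.log.err("program link failed: {s}", .{info});
    return error.ProgramFailedToLink;
}
```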
Now that the GPU knows about our triangle and how to draw it, we need to
actually make the request to draw the triangle inside our main loop,
between our calls to gl.clear
and gl.flush
:
gl.clear(gl.COLOR_BUFFER_BIT);

gl.useProgram(program);
gl.bindVertexArray(triangle_object);
gl.drawArrays(gl.TRIANGLES, 0, 3);

gl.flush();
With that, you should have a black triangle on screen! We really breezed through the OpenGL-heavy part, but if you want a more detailed, step-by-step walkthrough, you can refer to the excellent Learn OpenGL site.
What's Next?
As I progress further in learning wayland, I have considered whether it would be fun to continue this series into developing our own cross-platform windowing system specifically for games.
Knowing more about how things work at a low level opens up a lot of interesting possibilities for straightforward engine integration. The way most wayland interfaces are extension based feels like it maps relatively well to an ECS architecture, and exposing the surfaces to the rendering subsystem feels like a nice idea.
At any rate, we have really only scratched the surface in an effort to get from 0 to window as quickly as possible—only about 350 lines of code, which is hard to believe! There are a lot more topics to cover, should I ever get around to them.