| title | Architecture |
|---|
Gio is based on the concept of an Immediate Mode UI. This approach can be implemented in multiple ways; the over-arching similarity is that the program:
- listens for events such as mouse or keyboard input,
- updates its internal state based on the event (e.g. sets Checked = true for a checkbox),
- runs code that re-renders the whole state.
In pseudo-code a minimal immediate mode UI can look like:

```go
// state of the program
var showlist bool
var items []string

for {
	// wait for new events
	select {
	case ev := <-eventQueue:
		clearScreen()
		// handle the checkbox
		if DoCheckbox(ev, &showlist) {
			Listbox{
				Items: items,
			}.Do(ev)
		}
	}
}
```
```go
func DoCheckbox(ev Event, checked *bool) bool {
	// see whether we need to handle the event
	if e, ok := ev.(KeyboardInput); ok {
		if e.Key == Space {
			*checked = !*checked
		}
	}
	// draw the checkbox
	if *checked {
		fmt.Println("[x]")
	} else {
		fmt.Println("[ ]")
	}
	// return whether we are checked, for convenience
	return *checked
}

type Listbox struct {
	Items []string
}

func (list Listbox) Do(ev Event) {
	for i, item := range list.Items {
		fmt.Printf("#%d: %q\n", i, item)
	}
}
```

This of course is not a very useful library, but it demonstrates the core loop of an immediate mode UI:
- get an event
- handle the widgets while updating the state and drawing the widgets
This simplicity hides a lot of trade-offs that a library has to make:
- how do you get the events,
- when do you re-render the state,
- what do the widget structures look like,
- how do you track the focus,
- how do you structure the events,
- how do you communicate with the graphics card,
- how do you communicate with the operating system,
- how do you render text,
- ...
The rest of the document tries to answer these questions.
Immediate Mode References:
- http://sol.gfxile.net/imgui/
- https://caseymuratori.com/blog_0001
- http://www.johno.se/book/imgui.html
- https://github.com/ocornut/imgui
Since a GUI library needs to talk to the operating system to display information, a Gio program starts by creating a window and listening to its events:
```go
window := app.NewWindow(app.Size(unit.Dp(800), unit.Dp(650)))
for {
	select {
	case e := <-window.Events():
		switch e := e.(type) {
		case system.DestroyEvent:
			return e.Err
		case system.FrameEvent:
			// update state based on events in e.Queue
		}
	}
}
```

app.NewWindow chooses the appropriate window handling driver depending on the OS. It might choose Wayland, WinAPI, Cocoa or one of many others.
It then wires together events coming from the OS to window.Events().
Additionally, it will initialize communication with the GPU, e.g. OpenGL, EGL or DirectX 11.
Input is delivered to the widgets via a system.FrameEvent which contains Queue.
It might contain a keyboard event, such as key.Event.
Based on these events widgets can modify the state.
There are also event-processors, such as gioui.org/gesture, that detect higher-level actions such as a "double-click" from individual click events.
TODO: describe how they work.
Since the system needs to talk to different graphics APIs, there's an abstraction over drawing: op.Ops. It records operations in a specific format that the graphics backend can decode.
As an example encoding a colored rectangle into that structure would look like:
```go
var ops op.Ops
ColorOp{Color: color.RGBA{R: 0x80, G: 0x00, B: 0x00, A: 0xFF}}.Add(&ops)
PaintOp{Rect: f32.Rectangle{Min: f32.Point{}, Max: f32.Point{X: 10, Y: 10}}}.Add(&ops)
```
gioui.org/gpu is able to decode the resulting ops.Bytes() and handle these operations for different APIs.
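To make the record-then-decode idea concrete, here is a toy operation buffer (not the real op.Ops encoding, which is internal to Gio): operations are serialized as an opcode byte followed by their arguments, and a decoder later replays them, just as a graphics backend would:

```go
package main

import "fmt"

// Toy opcodes; the real op.Ops format is internal to Gio.
const (
	opColor byte = iota // followed by 4 bytes: R, G, B, A
	opRect              // followed by 4 bytes: minX, minY, maxX, maxY
)

// Ops records operations into a flat byte buffer.
type Ops struct{ buf []byte }

func (o *Ops) Color(r, g, b, a byte) { o.buf = append(o.buf, opColor, r, g, b, a) }
func (o *Ops) Rect(minX, minY, maxX, maxY byte) {
	o.buf = append(o.buf, opRect, minX, minY, maxX, maxY)
}

// Decode replays the recorded operations, as a backend would.
func Decode(buf []byte) {
	for i := 0; i < len(buf); i += 5 {
		switch buf[i] {
		case opColor:
			fmt.Printf("set color rgba(%d,%d,%d,%d)\n", buf[i+1], buf[i+2], buf[i+3], buf[i+4])
		case opRect:
			fmt.Printf("fill rect (%d,%d)-(%d,%d)\n", buf[i+1], buf[i+2], buf[i+3], buf[i+4])
		}
	}
}

func main() {
	var ops Ops
	ops.Color(0x80, 0, 0, 0xFF)
	ops.Rect(0, 0, 10, 10)
	Decode(ops.buf)
}
```

The key property this shares with op.Ops is that recording is cheap and decoupled from rendering: the same buffer can be handed to any backend that knows how to decode it.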
There are also operations such as op.TransformOp, which offsets all subsequent drawing, and clip.Rect, which prevents drawing outside of a given boundary.
A single frame consists of getting input and rendering the new state:

```go
red := color.RGBA{R: 0xFF, G: 0x00, B: 0x00, A: 0xFF}
blue := color.RGBA{R: 0x00, G: 0x00, B: 0xFF, A: 0xFF}
for {
	e := <-w.Events()
	switch e := e.(type) {
	case system.DestroyEvent:
		return e.Err
	case system.FrameEvent:
		var ops op.Ops
		color := red
		// TODO: show how event.Queue works
		if e.Queue contains space {
			color = blue
		}
		ColorOp{Color: color}.Add(&ops)
		PaintOp{Rect: f32.Rectangle{
			Min: f32.Point{},
			Max: f32.Point{X: 10, Y: 10},
		}}.Add(&ops)
		e.Frame(&ops)
	}
}
```

Of course, writing a program in these terms would be really annoying.
To simplify writing code, there's a structure that carries both the operations and the available screen space: layout.Context. It contains the constraints describing how much space is available and where to draw.
TODO: explain constraints struct in depth
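The general idea of constraints can be sketched as follows; this is a simplified stand-in, not the actual layout.Constraints definition:

```go
package main

import "fmt"

// Constraints is a simplified stand-in for layout constraints:
// a widget must lay itself out to a size between Min and Max.
type Constraints struct {
	MinWidth, MinHeight int
	MaxWidth, MaxHeight int
}

// Constrain clamps a desired size into the allowed range.
func (c Constraints) Constrain(w, h int) (int, int) {
	return clamp(w, c.MinWidth, c.MaxWidth), clamp(h, c.MinHeight, c.MaxHeight)
}

func clamp(v, lo, hi int) int {
	if v < lo {
		return lo
	}
	if v > hi {
		return hi
	}
	return v
}

func main() {
	c := Constraints{MinWidth: 100, MinHeight: 20, MaxWidth: 400, MaxHeight: 40}
	w, h := c.Constrain(50, 600) // too narrow and too tall
	fmt.Println(w, h)            // 100 40
}
```

A parent widget narrows the constraints before laying out a child, which is how layouts such as the split view below divide up the screen.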
TODO: Describe how a simple button looks like.
As an example, to split the screen in two you could write a widget that looks like:

```go
type SplitView struct {
	Ratio float32
}

func (splitView *SplitView) Layout(gtx *layout.Context, left, right layout.Widget) {
	var stack op.StackOp
	stack.Push(gtx.Ops)
	gtx.Constraints = // TODO: constrain the `left` rendering to the left side
	left()
	stack.Pop()

	stack.Push(gtx.Ops)
	gtx.Constraints = // TODO: constrain the `right` rendering to the right side
	op.TransformOp{}.Offset(offset).Add(gtx.Ops)
	right()
	stack.Pop()
}
```

Of course, you do not need to implement such layouts yourself; there are plenty available in package layout.
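The constraint arithmetic behind such a split can be sketched on its own; `splitWidths` below is a hypothetical helper for this example, not part of the layout package:

```go
package main

import "fmt"

// splitWidths divides the available width between a left and a
// right child according to ratio (0 = all right, 1 = all left).
func splitWidths(total int, ratio float32) (left, right int) {
	left = int(float32(total) * ratio)
	right = total - left
	return left, right
}

func main() {
	l, r := splitWidths(800, 0.25)
	fmt.Println(l, r) // 200 600
}
```

Computing the right side as the remainder, rather than multiplying again, guarantees the two halves always add up to the total width.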
TODO: Extend the above with ability to change ratio by dragging
TODO: describe how shaper works
Since many widgets need different colors, it's useful to place all the relevant colors into a single Theme struct. It contains the relevant settings for a Material Design based UI.
The gioui.org/widget/material package also contains widgets that are based on Material Design.
TODO: describe how units are handled
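As a sketch of the underlying arithmetic (not the actual unit package code), converting device-independent pixels to device pixels is a multiplication by a per-display scale factor; `dpToPx` is a hypothetical helper for this example:

```go
package main

import (
	"fmt"
	"math"
)

// dpToPx converts device-independent pixels (dp) to device pixels,
// given the display's scale factor (device pixels per dp).
// The result is rounded so drawing stays on the pixel grid.
func dpToPx(dp, pxPerDp float32) int {
	return int(math.Round(float64(dp * pxPerDp)))
}

func main() {
	fmt.Println(dpToPx(8, 1.0)) // 8
	fmt.Println(dpToPx(8, 1.5)) // 12
	fmt.Println(dpToPx(8, 2.0)) // 16
}
```

Specifying sizes in dp keeps widgets the same physical size across low- and high-density displays.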