Yes, because it's an incremental immediate mode system
So that means you don't run the entire regeneration flow every frame
it is fairly optimal in that sense
It also has a texture cache for parts of the UI that might be static
I've been running makepad on a raspberry pi 4 which has about as much gpu as your thumbnail
and it works
Now do keep in mind tho that Flutter is a project built by hundreds of engineers and funded by hundreds of millions
Makepad is 2 people, and we are arguably more ambitious because we are also building an IDE and designtool
So you should keep that in mind when trying to pit us 1:1 against flutter
I cannot match them on every point
Hopefully on enough points tho to make sense
We are confident we can at least make a useful application designtool that is high performance and runs cleanly on web and native
Our unique position is that we are developing a designtool to pair with it, and it's all native Rust code. Also we have shaders woven through the entire UI stack, so you can do fast UI that is light on the CPU, which matters a lot on web
We also have a light multi-gpu-backend layer compared to heavier stacks
Our main testcase is the designtool itself and the synth and now a mobile UI
So we can do small wasm builds
And develop very tight integration with a developer environment
Flutter is built on Dart which is its own javascript like VM
So our advantage (and disadvantage) is Rust as a language and ecosystem
And have a more open path to writing GPU shaders because its fully integrated in the UI styling
Our disadvantages are that Flutter has a more complete font stack and many more people on the vector engine/graphics engine
Now do i think we can over time become competitive? I hope so and we are trying. But it's also good to keep in mind the scale differences
Flutter has hundreds of thousands of developers. If i had to answer questions from hundreds of thousands of people our project would instantly break
So in terms of 'worldview' i would say makepad definitely has a space where it can compete with flutter, and also a space where it will be much harder to impossible.
Our main 'ace' in the deck is an integrated designtool
Without that there is no real hope
But i would be careful to bet everything on makepad meeting flutter in the mobile app space as a full competitor
However it is a big world with many people and many technologies
I think there will be plenty of space to co-exist and find value
We will need to specialise to find areas where we can be competitive. Rust as a core language for the UI is the main one. All the advantages Rust has over Dart are what we get for free. Same with the disadvantages
So for us, building an IDE / designtool is one of those challenges 'in itself' that also has unique value. So optimising the developer experience for Rust itself is a value there. The nicest way to start learning Rust should be part of it
One reason i'm so extreme at pushing for complexity minimisation is also that it is one of the few areas where we can turn our disadvantage (not a lot of people/resources) into an advantage
Developers like simplicity and things being light. Wasm downloads as well. And if you want small builds, the only way to get them is to not write much code
Which is exactly what we already do because we are 2 people not 100
So that's one reason i limit piling on wgpu / other libraries when their value is minimal: it would erode one of our main advantages
So we'll have to see where we are with a designtool
In the end 2D UI is a finite complexity space as well
We might get away somewhat with having a small team.
So hope this story makes sense
sum it up
Advantages: Rust as a language and ecosystem, shaders woven through the entire UI stack (light on the CPU), a light multi-gpu-backend layer, small wasm builds, and the integrated IDE + designtool.
Disadvantages: a 2-person team versus hundreds of engineers, a less complete font stack, fewer people on the vector/graphics engine, and Rust being harder to learn than Dart.
does this make sense? Now CAN you make a pizza ordering app in makepad? Yea i think you can. But if you hired me as a tech consultant for a company doing a pizza ordering app i would not recommend it
If you are building a new CAD application, or exploring VR tooling, or doing genome modelling, or machine learning debugging? yes then i would
So adding good support for Vulkan and webGPU for instance is definitely important to us. And i intend to revisit multithreaded GPU data generation soon.
The integration of the shader language is one of the 'new' ideas in makepad. It means you write actual pixelshaders to style UI elements, as if every button is a quad with a little 'shadertoy' on it. You parameterise the shaders with values you animate, say 'hover' or 'click' (they go from 0->1), and then handle the styling in response to those numbers in pixelshaders. This means you can do a lot in terms of styling while the CPU only needs to set the 0-1 value and be done with it
this is why makepad scales to huge amounts of UI elements in a browser: the CPU only has to generate very little data
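roughly, a styled button in our DSL looks like this (a sketch in the notation used later in this chat; names illustrative):
MyButton = <Button> {
    draw_bg: {
        instance hover: 0.0 // the CPU animates this from 0->1
        fn pixel(self) -> vec4 {
            // the styling itself runs per-pixel on the GPU
            return mix(#333, #09f, self.hover)
        }
    }
}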
We have plans to build a 'flowgraph' for this shader language as well just like Unity or Unreal has for theirs
Also, yes, you can bolt a script language onto makepad; however i currently don't have any big ideas on how that should work or what it should look like. We have a WASM engine nearly done, so that could be a way to integrate a simpler language than Rust. Be aware though that building a language is a huge undertaking. Flutter is the UI framework btw, Dart is the language
Google has been building Dart for almost as long as they have been building Chrome
This is also why i am choosing to NOT do a scripting layer but stick to Rust. 1. building a language to beat 'Dart' is a life's mission: Google has had some of their smartest people working on it for 13 years now. 2. NOT using a script language also has huge benefits in terms of performance. So you get less ergonomics, but also no performance issues ever
Also if we go make a new language adoption would be terrible and take a decade
At least now we can point to 'It is Rust, go learn that'
So in terms of energy use the more you use the GPU the more battery it takes. This is why you shouldn't animate the UI permanently since it will use a similar battery range as running a game on your phone
However it also depends on what shaders you use to fill the screen. With the texture caching i added, it generally means 'just a texture copy' for at least part of the screen
Makepad has a lot of room to optimize for energy by limiting the amount of compute you do on the GPU
In general tho it appears to be fine (given you dont animate the screen all the time)
the fact it runs on very low powered devices means it is not 'that heavy'
However as a general rule i don't think we are going to win an energy efficiency battle here
we are definitely more efficient on the cpu because we allow ourselves to be less efficient on the gpu
10 years ago this was a problem, however for the last 3-4 years it appears to fall within the margin of error
So having a freeform shader language for the UI elements gives you more freedom to style things. Flutter is generally built in a vector API with gradient fills and image fills and dropshadows. Whereas makepad has the ability to be more material-like such as a game engine
The uses of that we are still exploring but for instance you can use 3D elements with light calculations on it, or otherwise fancy shaders
now do realise this use of the gpu again comes at an energy cost, so it's not free
the cheapest thing since the 90s has always been just an image
One advantage of reducing the CPU load is that you can make high-framerate applications more easily
Makepad in wasm on web, for our IDE for instance, has about a 3-4 ms per frame cpu load for a fairly complex UI, meaning that 120hz or higher is much more reachable
Simpler UI is often in the 1ms range
For webXR on an android device like the Quest, our CPU numbers were just about good enough. I wouldn't put more load on it
In terms of load moving more styling onto the gpu instead of the cpu+gpu combo with vectors means the CPU side is lighter
Rendering vectors GPU-accelerated can also be very heavy tho
Nothing stops makepad from also getting an accelerated vector stack, we have one and will integrate it for use with icons
Makepad is optimized for having lots of UI with low CPU load. For instance for designtool UI's or IDE's
Flutter is optimized for slick mobile UIs with not so many elements, but all built out of vectors
this is also why makepad is leaning more towards 'complex' webUI's or 'complex' desktop UIs
because thats where our strength is
now we CAN likely also do slick mobile UIs, we are building a prototype of that right now. However this was not a design constraint during the development
At my last startup we built 'Cloud9IDE' with a code editor and it was a nightmare with HTML. I can fairly safely say that design requirement is solved.
There is no free lunch here in a way. Google engineers are very capable, which means Flutter's rendering stack isn't stupid in some way that we can magically do better
We can only optimize for different things
For us that meant we wanted the ability to build actually complex design tooling and UIs. With Rust's performance and a half immediate mode, half retained mode model you can do this.
Dart has severe performance issues for heavy compute that needs threads or manipulates larger amounts of data
A friend of mine has been using flutter/dart and has needed to use Rust, for instance to do image manipulation and load the result inside Dart
This is where our advantage is
Also if we have a visual designtool we 'might' have a story for less heavy workloads as well. But we'll have to see
A visual designtool is also very useful for more compute heavy/complex UI applications
However if its easy enough to put things together that will offset the complexity of Rust as well
This is why i'm saying Rust is our home ground. When you need Rust's performance, a Rust-native UI stack is much preferred over a complex hybrid stack involving dart and flutter. And the key to accessing that space is our IDE+designtool
I tested the startup time of our synthesiser demo application with binary shader cache
without: ± 3 seconds, with: ± 1 second
APK size is about 1mb
oh that's what you mean
the shader cache is only like 200kb
each shader is 2-5kb in size
and we have about 30 of them
this means the overhead of our own shader compiler/DSL stack is there, but its small
in makepad we have our own draw-api that abstracts it for all our backends
you effectively generate 'instanced arrays'
we pack a lot of 'rects' into one array that way which is the only way to be fast on web
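conceptually the packing looks something like this (illustrative Rust, not the actual makepad types):
#[repr(C)]
struct RectInstance {
    pos: [f32; 2],  // rect position
    size: [f32; 2], // rect size
    hover: f32,     // per-instance value the pixelshader reads for styling
}
struct DrawCall {
    shader_id: usize,
    instances: Vec<RectInstance>, // all rects sharing one shader: one instanced draw
}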
So in makepad you can write your own vertex and pixelshaders entirely
it is up to you to adhere to the structure the UI wants in terms of input rects/positions
shaders are 'inheritable' just like in C++ or Java
so you can define a vertex() method that calls a get_something() method, and only override get_something() in the subclass
this way you can compose shaders whilst adhering to the requirements of the UI
its something that looks a bit like Rust but has GLSL semantics
its integrated in our DSL
shaders are inheritable inside our DSL
this way it is trivial to specialise a button with a custom pixelshader
its only a few lines of code
https://github.com/makepad/makepad/blob/rik/examples/ironfish/src/lib.rs#L103
here you see we have a 'text' shader which has an overrideable get_color() method and there you can write a pixelshader for the color of the text in any way you want
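as a sketch, such an override is just a few lines (illustrative, not the exact code at that link):
draw_label: {
    fn get_color(self) -> vec4 {
        // any pixelshader logic you want for the text color
        return mix(#fff, #0ff, self.hover)
    }
}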
yea we do as well
shaders are sorted by type
and if you draw a lot of, say, text (a code editor) it becomes 1 drawcall
batching does directly follow the structure of your UI though
so its mostly used for large lists of things in one container
things from multiple containers are generally different batches
there is no abstraction layer here in makepad
makepad does not 'as such' have an engine; the UI reflects almost directly into the drawtree
this is both efficient and fast, but has limitations
so far this has worked fine for us, but there are situations imaginable where it gets in the way a bit
for instance drop shadows are not as easy in makepad
you really directly are generating geometry
so dropshadows require you to inject a layer with a good looking gaussian
we'll have something there, but makepad doesn't really have an 'engine' in between that can re-sort or cut things up later
the draw pass from the UI is sent to the gpu 'as is'
we can of course change this or add this but sofar it was not needed for our UI cases
this came from our web usecase to really do the minimal amount of CPU work
yes
our focus was never minimal energy use but maximal framerate and responsiveness for complex UI
Pure minimal energy use requires texture caching everything and not redrawing
Now you can do that with makepad
i added that for our raspberry pi 4 demo because otherwise it would not run at any kind of acceptable framerate
so i sometimes look at really slow devices to optimize power usage on faster devices
however being energy optimal is not our strength
i cannot truly say how good/bad it is
it would have to be measured
it is also extremely dependent on what you do
for instance a synth with an always running 'visualiser' will not be great for battery use
but a static app with buttons that does not repaint will use no energy there
so it really depends what you do
makepad does not repaint always
it is not a game engine
it is very efficient at not wasting either cpu or gpu when it should not
however IF you redraw on the GPU with lots of shaders it is worse than really optimising GPU usage with just a few textures
however how bad that is completely depends what kind of app you build
similarly, when you look purely at scrolling text, you cannot beat the efficiency of a tile-based engine like a browser by drawing everything on the GPU without tiles
so if you truly want to equal that you will need to implement a tiled renderer
howevre it has been our impression that the cost of doing more on the GPU is decreasing over the years
so being a little wasteful is acceptable compared to years ago
So yes, i would say the Flutter render stack is more optimised for mobile rendering.
The question is whether it matters for the cases you care about
When you enable general use of shaders for styling, you lose the specially optimised shaders that graphics programmers have written to minimise battery use
hence more wasteful
But nothing stops you from also writing specialised shaders
Yea so there you need a specialised and energy optimised render engine which makepad by design is not. It is a programmer extendable render engine like a game engine in that sense
Enabling the programmer to customise the render engine easily is part of what makes makepad unique in the UI space
It depends what you do with it
You can make it fine by not using highly complex shaders and not redrawing all the time
to really bad by putting a raytracer on a button
In makepad doing the energy optimisation is more a choice left to the end programmer
You have to turn on the texture-cache for container objects yourself, and make sure you don't write too-expensive shaders
we don't have automated optimisations other than incremental UI regeneration, but that is a CPU optimisation
Makepad is not a system designed for the average mobile/web developer, but for people that want to build hard things
I always said makepad was designed to make the hard doable
not the easy easier because everyone else already does that
and its also not the problem i have had over my life with web
We aim to make the easy 'as easy as we can'
by adding a visual designer, and an IDE with code completion/info and possibly AI assistant
However Rust is also not as easy as possible. For that you want Dart or typescript or C#
If you optimise for being as easy as possible for simple applications, you have to make different design decisions
So our aim is to provide 'ergonomics' around a system that is very powerful and high performance
Absolutely
I have struggled for 10 years building applications with HTML and failed every time
Makepad is the first stack that enables me to build real applications like IDE's and designtools
We made the hard doable here
So in makepad widgets compose as well.
Composing is not super compact yet, but a lot of widgets are composed out of subwidgets
like the slider is composed out of a slider + textbox
So yes. For examples you should look here
The widget api is very new and i'm not completely happy with it yet but it does work
https://github.com/makepad/makepad/blob/rik/widgets/src/slider.rs
yes the DSL allows you to override all substructures
anywhere you want
I have not added very many programmer-ergonomic features here yet because the key will be to have a visual designer for this
i have designed the component model very much like how Figma does things
So when you see this
https://github.com/makepad/makepad/blob/rik/examples/ironfish/src/lib.rs#L68
you see that all substructures have to be IN the overriding DSL
this is because otherwise the designtool would be harder to make
maybe i'll make that more compact later
It is likely most people would not manipulate this as code
but only with dropdowns and colorpickers
yes it is optimised for that
not for ultimate pretty code
however we will soon start on that work so we can test if this was done right
things will change when we integrate the editor
i dont know yet how much but it will in places as we build the editor
i dont expect it to change a lot, but we'll see
essentially this UI design
https://github.com/makepad/makepad/blob/rik/examples/ironfish/src/lib.rs#L68
will look like a giant figma sketch with all components visible when loaded up
this file you could see as a serialised file for a designtool
like SVG
We now write it by hand tho as you have to start with the engine/definition before you can make a designtool for it
but after android works we will build a visual editor for this exact file
This is what makes us different from most other UI systems
You cannot make a visual editor (like svg editor) for an immediate mode gui that exists only in code
React is also difficult to do but some try with html/css editors
because the code infects the design
makepad has a hard separation
Google also does not ship a UI designer with Flutter, although they might in the future
But for now that is a unique thing we are building. And because we build the designer IN makepad it is an important part of our design requirements for the UI kit itself
yes
we pay for design-ability and designer freedom with programmer ergonomics
different tradeoff
We think this could be a different optimum
Atleast it is worth trying
That is why we have a team of 2 programmers and 1 designer, we aim to build a product that all of us use together
When i was young i used visual studio UI editor (like visual C++) to sketch out applications and then immediately connected it to code. We have lost this in recent years to web / everything else
I want to get that back with modern design ability
And a fast modern language and multiple platforms
And i'm sure i'm not the only one that wants this.
When i was building cloud9 we had designers make photoshop files that took our developers months to integrate
Now my designer directly changes the design of the application without me having to worry about it
All i need to do now is build him a designer-friendly UI, even though he handwrites things for now to get acquainted with the system
The utility domain of this tool is not very hard-defined. So it might be that people use it for all sorts of mobile applications, up to complex datavisualisation interfaces, code editors, or 3D designtools
or AI-integrated designtool UIs
This is why we build all sorts of things with it that we personally 'like', such as our IDE, designer, and a synthesiser UI, because that is what a close friend of ours would like to use it for and it gives us a usecase to see if it works for things other than IDEs and designtools as well
Which is why doing an android port is also fine because you cannot make a modern 'visual C++/basic' without being able to also do mobile applications
I'm leaning towards example applications that need performance, as this is our primary showcase of why to choose Rust to build a UI application.
Essentially things you cannot build easily in flutter or Dart.
Anyway i hope this explains what we aim to do and why, and also how it might or might not fit into what you aim to do.
For us mobile apps are a necessary part but not our core focus
Now if that is a smart strategy i don't know.
However if you have a designer oriented application system that is high performance you can build websites with it, or apps, or desktop applications equally
Yes i would think that what we aim to build in terms of designer could definitely be useful along the way or any kind of systems we build lower down as well
If our designer turns out to work very well its an ideal base to fork into a mobile specialised system for instance
It would offset half a decade of effort
Sorry for the pause, i had to take a bit of time off because i was overworked (worked for 6 weeks straight). Currently i'm trying to package fonts into our APK, which turns out to be harder than i thought since on android i cannot simply package files into the apk
So will need to create some kind of resource packaging tool
Or figure out the AssetManager. Will update here.
Im getting close to having the 'system' layer work with touch and audio
our final todo list is
I do dare say this is pretty great. Buildtimes are <2 seconds from codechange to application on screen. Startuptime is near instant
APK size is 1.2mb
Framerate is easily 60, and it will trivially run 120hz on phones that can do it as well
If you then add to this a stripped android buildchain that instantly installs, and some buildtooling to automate it, it'll be the cleanest android dev experience you can make
So while we may not be 100% battery-use optimized, it's still pretty great.
And the battery use optimisation you could do by building a widget set with a shaderkit that IS optimised.
Nearly completed midi+ble on android today. Tomorrow finishing + camera api
We can't bind all apis of course, but just the basic set that i support on all platforms at this moment
which is audio/video/midi
This way at least the examples work everywhere
Over time we'll get more apis
After the platform work i still need to do buildtool and pack up the sdks as well. Will keep you updated
@TigerXu So the explanation is: Makepad effectively draws its UI with quads with shaders on it. You can also draw it with vectors, but thats not really well developed yet. Point is you can make the shaders as cheap/optimised as you want for UI. You don't 'have' to use the shader infra or require designers to use it to style things. You can make a pre-fab UI kit that is as optimal as anything you want
However people would then be using that UI kit 'as-is' and not customising it with their own shaders
Our goal is to enable a really strong design angle for makepad, so here we need those shaders. But if you just build a specialised UI kit in it where people don't do that but just drag/drop buttons, it'll be competitive battery-use wise
Anyway i'll see if we can explain the whole thing a bit better in a document
currently there are no facilities for that. The live reload will use an API exposed by the application to hot reload the live design code. Nothing stops you from writing a vscode extension that does something similar tho
Or even the application itself could watch the files and reload them on change
i still have to work out the api, but its likely to be simply 'hotloading a file'. It's not hard to use other editors with it
from a technical point of view
I haven't built the hotreloading bit yet, it's on my near term todo
i've done a prototype of the hotreloading last cycle, but lots has changed since then
I'll develop it as part of the visual designer
It can redraw partially, however it depends what you resize. In the docking UI panels i redraw everything. Now do note that regenerating/redrawing the UI is extremely fast. It only takes a millisecond or two for the whole IDE
So redrawing/regenerating the whole UI on relayout isn't remotely a problem
However it is left up to the implementor how you let layout change trigger redraws
right now layout is processed 'inline' with redrawing and not a separate phase
So this means there is no need to worry about layouting in that sense
Makepad is so fast in its 'redraw' cycle that its not been a thing to worry about
Generally if you change layout parameters for something that moves/resizes subsequent areas around, they will regenerate, but you can contain it to a scroll container
We have a relatively simple layout system that should cover most cases. There is one case that's a bit annoying, and i'm still searching for a nice solution to it: the case where you want to do something like 'size all to the biggest of a list'. Since our layoutflow is immediate mode, that one is a bit tricky.
However overall it works really well
I also have minwidth/maxwidth to do still
also because our layout flow is immediate mode you can make widgets that do whatever they want in terms of positioning sub elements. It's not an 'engine' in that sense you need to manipulate to get what you want.
There is no advance knowledge needed outside of 'drawing' how big things will be
However with a visual designer the layout properties and how it works will become more apparent. It's highly similar to how figma does their 'autolayout'
Alright updates:
im guessing about one more week for this
this is going to be a big release because i added all the platforms in this cycle. Also: Linux x11, linux direct rendering, windows, and webassembly and a bunch of work on macos as well (audio in/camera)
these are all done minus a bit of work on webassembly
All in all heading for 20 platform apis in 2 months
Flexbox is kind of 'not simple' in terms of algorithmic complexity and advance knowledge. Makepad layout as it stands now is a bit simpler, as it limits the amount of advance knowledge for performance reasons
Which makes it extremely fast for lots of UI, but also gives it a few issues for other cases
The domain will show more when we have the designtooling
Since it will be easier to change the layout parameters. It is very similar to Figma's 'autolayout' right now
Yea, the DSL 'frame' syntax has the autolayout parameters
Hopefully the designtool will be flexible enough for a designer to express what they want
this is why i'm dogfooding it to our own designer and take it from there
Comment on Raph's view:
He seems to be correct here. Immediate mode has shortcomings that are more easily solved in a retained mode UI. Makepad is 'hybrid': i have an immediate mode drawflow, and a retained mode UI widget structure built on top. The immediate mode API is used by widgets to draw themselves and other widgets. The retained mode structure is used by the DSL-UI and is what the visual editor will manipulate.
This should hopefully sidestep the problems with both, and unify these paths in a new way
I can only say it works fine for us sofar. It is not as super ergonomic as egui, but it compensates with being far more designable with visual tools (soon)
What egui uses is that the only 'existence' of a widget is the code flow that also handles its events. Like: if ui.button("Hello").clicked() { ... }
In makepad our components live on structs, so they aren't magically kept in the back, and our component structures are styleable with the DSL. So we have more overhead in defining things and using them. You usually need to provide 3 points instead of one:
one to add the widget to the parent widget struct, one to handle the events, and one to draw the widget
with the DSL 'interpreter' this is simplified a lot to only be 2 places. Once in the DSL and once in the event handle code
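a rough sketch of the hard-typed 3-point flavour (illustrative names, not the exact current API):
struct MyPanel {
    ok_button: Button, // 1. the widget lives on the parent struct
}
impl MyPanel {
    fn handle_event(&mut self, cx: &mut Cx, event: &Event) {
        // 2. handle the widget's events
        if self.ok_button.handle_event(cx, event).clicked() {
            // respond to the click
        }
    }
    fn draw(&mut self, cx: &mut Cx2d) {
        // 3. draw the widget in the immediate mode drawflow
        self.ok_button.draw(cx);
    }
}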
However which way you choose to construct things is up to the developer
We have a kind of 'interpreted' UI structure where the components form generic trees (like DOM trees or widget trees), and we have a hardcoded UI structure where UI components are hard-typed onto parent structs. Both are interchangeable
You hybridise these two things. So for instance for code editors or IDE's or docking systems you use the hard-coded approach mostly. And for visually designable UI you use the DSL structure more
Things will become more clear as we integrate the designer as this will explain the 'why' of a lot of these design decisions
Our 'DOM' tree of widgets is really just a special kind of widget that knows how to draw child widgets via a 'Widget' trait. It is not hardcoded into the platform in that sense, its part of our widgets crate.
Since everything has DSL property definitions you can see this widget (it is called our Frame widget) as a little mini interpreter that knows how to spawn child components from the DSL definitions
And thats also all it is
And this Frame widget itself you can treat as a 'static' widget otherwise
so you put
struct MyApp { root_frame: Frame }
so this is how the static and dynamic UI structures blend into eachother
So it's all fairly nicely layered in that sense
The static structure is like egui, except you have to manage the storage of the child widgets yourself instead of it being handled 'magically' by the system
so if you draw say a button in the drawflow or 'do not draw it' it behaves the same as an imgui
if you don't draw it, it doesn't exist on the screen. However, since you keep the storage of the Button yourself, i can split out the event-handling flow for it. This means that in makepad the event handling flow can be vastly faster than in egui, since i don't have to generate the draw structures in the eventflow like imguis do
this is also why the layout flow is inline with the drawflow. if i didn't do that the system would get really complex
since doing an immediate mode drawflow becomes very hard then
However since the Frame component does know about its child components, its layout is more easily styleable than the true immediate mode UI bits
Certain layout flows require 'advance knowledge' of how big something is that you haven't drawn yet. And since it's immediate mode, you have to work with essentially a 'future' for a piece of layout information. This is programming work in the immediate mode flow and is handled automatically by the Frame component
I personally really like this ability to choose how you wanna build an application. If you have a UI thats highly dynamic or needs extreme performance don't use the Frame component but immediate mode flow. If you have a UI thats design heavy and more static (like apple UIs or mobile UIs) you use the DSL+Frame component
For instance the piano widget and the list widget are implemented as immediate-mode widgets with a custom 'set' of child components that are called via hard-typing. This absence of an abstraction layer means it's extremely performant in Rust
And the synth UI itself with all its components and visual design is done using the Frame component that then can integrate the list / piano widget like any other widget
Once we are doing the designer these API's will change still, but i'm fairly confident of the model we chose here
Most other UI stacks require you to 'generate' the DOM structures dynamically based on code, for instance (like React). They don't have this dual path we have
I'm very hesitant to enable this dynamic generation of the Frame structures any time soon because it will break the visual designability. However im sure something will happen there in the future
First have a designer for it all tho, then we'll develop the ergonomics more
Also, as a general statement on how Raph and others want to do things, i find that a lot of UI exploration in the Rust space is too heavily tied to Rust as a language. And this friction of tying a non-GC language to a UI shows everywhere. The moment you have to use a closure in Rust you are already setting yourself up for a lot of pain. This is why we almost never use closures
makepad doesn't have onclick: ||{ code } anywhere. It's done using querying, like so:
if ui.button(id!(path.to.my.button)).clicked() {
}
this leaves the borrowchecker free at the cost of queries in the eventflow. But those are insanely cheap
So in that sense its much more like egui. However its split up differently
anyway. wall of text, but i hope this explains things a bit more
Yes you can use widgets without the overhead of using the Frame widget /DSL. You will have to use the DSL for the shaders tho.
The DSL also has very little overhead. its extremely fast
Essentially the shader language supports inheritance provided by the DSL system.
Which is extremely important for quick styleability / style specialisation
However you can choose not to use the DSL to define UI's (ie the Frame widget)
I don't want to remove the DSL from the shader system because it's intricately interwoven with it through the inheritance system
it is all incrementally parsed
By that i mean that if you use functions in multiple shaders i only parse/typeinfer those once
so the DSL forms essentially a function/code library for the shader system
yea
live language
we're still coming up with a name for it
DSL is not a really useful way to name it
since that just means 'domain specific language'
it's something like the 'working title' of a movie :)
the live language is a fusion of a tree structure with inheritance, and it supports mixing shader code into the tree structure
yes
its transpiled to glsl, metalsl, hlsl and soon wgsl
this way we keep a very lightweight cross platform layer
yes
you should see it like this
FillerH = <Frame> {
    walk: {width: Fill}
    draw_bg: {
        fn pixel(self) -> vec4 { return #f00 }
    }
}
that 'fn pixel' is essentially a property on that tree that's named 'pixel', with an associated bit of code
so
the Live language is not really the same as HTML in the sense that i dont actually mutate it at runtime like say React
i dont 'generate' the live language with code
i can poke at it tho
since makepad has immediate mode drawing there is no real need to generate the live language programmatically
if i want to instance a piece of the DSL 10 times you can do that without having to generate it
yes
and you can instance chunks of it as many times as you want
instancing UI from the DSL is extremely fast
the reason im pushing back on generating the DSL programmatically is it breaks designability
we'll see where we stand with the designer
yes
when you modify the live language you can 'reload' the UI with it
so a designtool / code editor can modify it and tell the application to reload it
because it instances so fast that actually scales pretty far
our entire IDE takes 1 millisecond or so to spawn up
so modifying the DSL and reloading also takes 1-2 milliseconds
yea however keep in mind its not designed as a browser
i haven't taken strong security considerations into account
however the surface area should be fairly minimal as there is no actual scripting code in it
so you cant really do all that much
it might simply need a bit of constraint on what resources it can load from disk
but yea you can do that
i'm a bigger proponent tho of writing a real application that talks using JSON or something to a server
and not sending generated live language to the client
but there is no reason you can't
it would just not be the preferred way of doing an app
so
in makepad the immediate mode drawing is really not magic at all.
what happens is that as you walk your component tree you emit a draw tree
and the immediate part is that you always have to draw full branches of the tree
so say you have a list widget
you draw the items in the list like 'canvas' in a way
but its not a canvas its a piece of scenegraph / drawcall list
it feels conceptually the same but you really are generating drawcall trees underneath
and for things like mouse hovers you often don't need to regenerate that
you can poke 'hoverstate' directly into the existing drawcall tree
so for a lot of these kinds of interactions the UI is very efficient
the GPU does have to redraw the whole window, but at least you won't waste time regenerating too much data for it
this is another advantage of having separated the eventflow from the draw flow. The eventflow can poke into the drawtree without having to regenerate it
It's up to the programmer tho. If you have interactions that do regenerate, it's so fast that it's almost always fine
no, that's simply a piece of the 'engine' that has little to do with the shading language
the only overlap they have is that i have a few 'default' shaders like draw_quad draw_text and so on that have 2D geometry inputs for UI
if i override it and change some code it'll be 3D or whatever i want it to be
adhering to the 2D UI structure in your shaders is your own choice
this is also why i can switch the 'engine' to be 3D without much change
just have to plug in a different matrix
in a sense makepad is an extremely bare bones 'engine' that is more like a commandbuffer-tree
i can use that commandbuffer tree to render 2d or 3d things or custom shaders for fonts (as we do)
the engine also supports renderpasses
i use that for the font-texture or now for texture caching of bits of the UI
however you could build a game engine with it. I've kept it super light tho because this is an endless sinkhole
I don't aim to be much more general purpose than it is now otherwise i'll never get to building a UI designer
the commandbuffer tree is usually highly matched to the UI tree, but doesn't have to be per se.
and the UI tree has 'pointers' into the commandbuffer tree
these are called Area's
so with those pointers say a button can poke a 'hover' value into the commandbuffer tree
this way you can do a hover without regenerating the commandbuffer tree
technically the commandbuffer tree is a mutable system
i just wrote the UI stack to kinda treat it as an immediate mode output
but our 3D engine which i'll do some work on soon will treat it as a mutable scenegraph
The commandbuffer tree or drawcall tree is further optimised by packing 'instances' into a single drawcall
so if you draw say 100 buttons it only becomes one drawcall with a 100 item array attached
these 'Area' pointers can point into these buffers
but you can also emit 100 drawcalls
however if you do that on web you wont have great performance
btw the term inside makepad for the commandbuffer tree is 'DrawList'
a DrawList is a list of DrawItem which is either a DrawCall or another DrawList
thus forming a tree
and its executed on the gpu backends via depthfirst traversal
just 1:1
its extremely simple in a way
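as a conceptual sketch of that structure (illustrative Rust, not the literal makepad types):
struct DrawCall { /* shader id + packed instance array, as sketched earlier */ }
struct DrawList {
    items: Vec<DrawItem>, // executed depth-first by each gpu backend
}
enum DrawItem {
    Call(DrawCall), // one drawcall
    List(DrawList), // a nested sub-list, forming the tree
}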
for metal i pretty much convert it to a commandbuffer
on openGL i call it directly
for webGL i stream it out as a call messagebuffer
for direct3D 11 i also just execute the calls
however the abstraction layer is only that drawtree
i have a separate little engine 'per backend' that turns it into gpu api calls
hence why doing wgpu or webgpu is simply adding one more of those things
the shader compiler provides the other side of the coin for enabling programmable GPU UI thats not tied to a particular gpu api. So those 2 things slot together
well yes. you have a shader compiler backend and the actual 'DrawList' execution engine
that then uses the native shaders
thats what forms a complete set for say, openGL
the shader compiler is of course shared per language
so webGL and openGL use the same GLSL generator with a few extra parameters
and all the languages use the same typechecker/typeinferencer
so to add vulkan we'd have to add a spir-v generator there, and a vulkan version of the drawlist execution engine
or if we wanna add webgpu
we have to do a wgsl generator and a bunch of code to use webgpu api
all of these things are straightforward
but im already drowning in platforms so designtool first.
we can, but its not that hard. Eddy already wrote vulkan hello world and doing the spir-v codegen is easier for someone who wrote the compiler
so it won't really scale all that well to add external resources to it. but i could possibly have someone do the groundwork
yea
i wouldn't mind an 'intern' :)
desmond might be available as well for select tasks like vulkan
but it'd have to be in tight cooperation with us
yea i know some of these people
going to do a rust meetup with them with a microtalk about makepad
in 2 weeks time
but i'll ask them
just explore a bit
we definitely have an insane todolist
however makepad doesn't necessarily need to be the most amazing in everything as long as designing applications works well enough
in the end its about the designer/IDE workflow and not necessarily if i can be more impressive at multicore renderdata generation
since that's very rarely needed still, and we can do that as well
so i'm keeping myself focussed a little bit
he does, or where he doesn't i can quickly sync with him
he is busy tho so if we want to tag him in i'll need to give him advance notice
shall i start talking to him about possible work?
if i can budget it into a contract that works fine
i already floated the idea of doing the vulkan work a bit
alright
one fair warning: until we have the visual designer i'm going to change apis, possibly heavily. Only once i start building/using that will we find out if the design works, and it will likely get changes
I'm fairly confident, but i know how this goes
otherwise the system already does work
i am currently checking the boxes on all the platforms, and then i'll pack up the android SDKs
our next release 0.4.0 will build on all the supported platforms
so thats why im going a bit slower than just doing android
that is my aim yes
i was going for one week but it'll probably be two
but i'll keep everyone informed on the progress.
here.
we use the same notation for almost everything
However it does need more docs
i'm here to answer questions tho for now
The biggest difference with GLSL is that our 'variant' is type inferred
so let x = 1.0
instead of
float x = 1.0
and we use rustisms for function defs
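so a small shader function in our notation reads roughly like this (sketch):
fn pixel(self) -> vec4 {
    let x = 1.0; // type inferred, no 'float' keyword needed
    return vec4(x, 0.0, 0.0, 1.0);
}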
for example shaders you can look at our widgets or the ironfish example
if you are looking for the 'standard library' we have for shaders its here
https://github.com/makepad/makepad/tree/rik/draw/src/shader
these are the foundational shaders we have in our UI system
and this is our standard library that has the SDF objects
https://github.com/makepad/makepad/blob/rik/draw/src/shader/std.rs
fairly. as i said i'm going to build the visual designer up next, and i need to be able to change things if i run into issues there
after that i'd say i'll feel more secure at making a stability statement
but generally changes should be quite straightforward to fix up yea
we're at an endstage of all this
however i might change things such as constant name resolution, or some notation bits here and there in the shader syntax / UI syntax
The UI designer needs to feel 'right' also in terms of how it deals with files and design-sheets. so things might move around a bit
but in general im quite sure it wont change that much
however yea if you come from GLSL you can pretty much read our code as 'GLSL with type inference, modules and linear inheritance so you can override methods'
all the semantics are the same
the built-in function names as well
i did this on purpose so as not to create too big a gap with shadertoy people
'formally' specifying this language is a huge project tho
it'd be the same size as the GLSL spec and more
also very hard to actually spec because it is transpiled to metalsl / hlsl
so the general assumption is that if you stick to our SDF apis and do some straightforward shadertoy-ing it will just work
but if you do highly complex algorithms there may be issues
i personally have found very few issues with the transpiler
however it is good to remember to keep the shaders relatively simple
thats a great idea for not just the compiler but also performance reasons
however if there are issues with a certain shader and, say, windows HLSL, it's not hard to fix up
its a very straightforward compiler
so we'll simply have to see how it goes
Once we have the IDE we'll include inline errors from the shadercompiler as well
that should also help with ergonomics
making errors friendlier is a thing we should improve
makepad shading language is transpiled to all backends on the fly yes
including GLSL
the transpile cost is not very high
its there but not problematic
runtime transpiling is not a big issue for UI applications as the number of shaders is fairly limited
even for an application like the synth which uses a lot relatively
it only ends up being about 4000 lines of shadercode
compared to games with thousands of shaders thats nothing.
we had 29 shaders at last count
i can make an AOT cache you can get out of the application at build time
however the utility of this is marginal
because the UI still needs to analyse the shaders to build its interop-datastructures
the drawing API's use information from the shadercode to build Rust APIs
But its possible. Just not very useful/handy
However i see that as a 'possible future optimisation' if it turns out to be useful
yea i know
AOT is all the rage now
i just dont have that as high prio since the gain is so low
if we'd have thousands of shaders yes
but it doesn't seem to happen with UI
I already compact/minimise the shader variation quite heavily
Also it makes our live editing more complex
it's just that it has very very little performance impact at this point in time
it just makes my backend more complex
the OS already precompiles the shaders for you
and keeps the cache
so you only pay the price the first time you start the app
I just added that cache for android as well
it does that only the first time the app starts
afterwards its instant
Right now starting the app on android (after i fix a few more things) will be in the 300-500 milliseconds range on the slowest device
i mean i CAN try to make that faster but its taking longer to fade in the application than to start
So it's a bit of diminishing returns at this point. But as i said, i can revisit this later and add it in a buildcycle
yea right now i use openGL api features to create a shadercache at first startup
and then read that afterwards
the same would work for vulkan if the driver doesn't do it already
metal already solves this for you
the only place i dont have a shadercache is on web
because i cant
but there i also couldnt really precompile anyway
so yea, if i AOT-cached it at build time i could probably remove about a second of startup time 'the first time you start the app' on ios
or vulkan the same
and thats worst case slowest device
new iphones / faster androids its less
Currently i'm building our 'makepad cargo' extension that includes tooling to download and strip the Android SDK, and compile the examples
Since it has to run on windows as well i have to build a bit more than i would for only macos
Also i talked to Desmond and he's available to develop the vulkan backend in the second half of this year
So we can take that along in our planning
yea totally, it was a 5 minute example i did for someone in finance to show off performance
its not hard to fix
maybe the example should be removed entirely, it's hardly anything at this point
i'll mark that as todo
My success of the day is that i wrote my own zip file reader so i can pick files out of a zipfile selectively. It ended up being only 250 lines of code
I needed this because on windows you don't have 'unzip' on the shell
but i also didn't want to use a large rust ecosystem dependency for this
definitely needs to be more than 200 lines of code for that.
im writing a utility that downloads the android SDK's from google and then generates a stripped version of the SDK we use
so i selectively unzip particular files from the distribution
instead of unzipping all 6gb and then deleting it again
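the core trick that makes selective unzip possible is that a zipfile's index (the central directory) sits at the end of the file. A minimal sketch of locating it (not my actual code; std only, no zip64 handling):
use std::fs;

fn central_dir_offset(data: &[u8]) -> Option<(u32, u16)> {
    const EOCD_SIG: [u8; 4] = *b"PK\x05\x06"; // end-of-central-directory signature
    // scan backwards, since a variable-length comment may follow the record
    let start = data.len().checked_sub(22)?;
    for i in (0..=start).rev() {
        if data[i..i + 4] == EOCD_SIG {
            let entries = u16::from_le_bytes([data[i + 10], data[i + 11]]);
            let cd_offset = u32::from_le_bytes([
                data[i + 16], data[i + 17], data[i + 18], data[i + 19],
            ]);
            return Some((cd_offset, entries));
        }
    }
    None
}

fn main() {
    let data = fs::read("android-sdk.zip").unwrap(); // illustrative filename
    if let Some((offset, entries)) = central_dir_offset(&data) {
        println!("central directory at {offset}, {entries} entries");
        // from here you walk the central directory records, pick the files
        // you want, and read only their local entries (or range-request them)
    }
}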
if i'd be really badass i'd range-read it straight off of HTTP :)
The only sad part is that Google distributes the macos version as a 'dmg' file and not a zip file
so that trick doesn't really work everywhere.
i'll just stick to downloading it all i think
Rik Arends - 02/28/2023 2:12 PM
ah yea
yea sure i'll make it responsive
Yue Chen - 02/28/2023 2:12 PM
in a retained mode
Rik Arends - 02/28/2023 2:12 PM
the size of the view is known completely when you draw
Yue Chen - 02/28/2023 2:13 PM
So the platform layer can tell the actual window size and let the frame redraw the number grid
Rik Arends - 02/28/2023 2:13 PM
yea
in the 'redraw' function you know all the things you need to, i just need to write a few ifs
Yue Chen - 02/28/2023 2:14 PM
Can we also change the size of the number box dynamically, so we can keep the same number of number boxes regardless of the window size?
Rik Arends - 02/28/2023 2:14 PM
yea sure
i also have ellipsis in immediate mode
the … thing
our text rendering apis need some work tho. after Eddy is done with the editor i think that's his next task
Yue Chen - 02/28/2023 2:16 PM
How about a very long rolling window, like the one in a mobile shopping app?
Rik Arends - 02/28/2023 2:16 PM
yea sure
virtual viewporting is how we draw lots of things
Yue Chen - 02/28/2023 2:16 PM
Good to know.
Rik Arends - 02/28/2023 2:17 PM
in makepad virtual viewporting is so easy because of the immediate mode drawing that you often just do it that way
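conceptually you just compute the visible range in the drawflow (illustrative sketch, not the actual list widget):
// a fixed item height keeps the math trivial
struct VirtualList {
    scroll_y: f32,      // current scroll offset in pixels
    item_height: f32,
    view_height: f32,   // visible height of the container
    items: Vec<String>,
}

impl VirtualList {
    fn draw(&self) {
        let first = (self.scroll_y / self.item_height) as usize;
        let count = (self.view_height / self.item_height) as usize + 2;
        for i in first..(first + count).min(self.items.len()) {
            // only items intersecting the viewport are ever drawn or laid out
            println!("draw item {}: {}", i, self.items[i]);
        }
    }
}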
Yue Chen - 02/28/2023 2:17 PM
So what is the more challenging use case, when some neighbor layout info cannot be known ahead of time?
Rik Arends - 02/28/2023 2:17 PM
so yea, when i need to size something to, say, 'the maximum width of all the items in a list'
thats a bit shit right now
i need to come up with a nice solution there
because in immediate mode flow you only have a single pass
whilst this needs multiple passes to be resolved
so i'll think of something
the benefit of not having layout be a separate pass is that it decomplects things and makes it go very fast. the downside is that you have a few things that aren't as easy to do as in flexbox or other systems
however it is a vast space so i'll revisit it when we are doing it in the visual designer
some people make their entire career about layout systems
Yue Chen - 02/28/2023 2:21 PM
We tried to implement a set of makepad mobile widgets that is on par with the react native component library
Rik Arends - 02/28/2023 2:21 PM
okay
did that work somewhat? :) im not there yet in that sense
Edward Tan - 02/28/2023 2:22 PM
can makepad use both retained mode and immediate mode? and decide which way to go on a per case basis? (ex: if heavy layout needed then use retained mode?)
Rik Arends - 02/28/2023 2:22 PM
yea makepad can use both retained and immediate mode
the foundation of it all is immediate mode
and on top you can layer retained mode structures
I have a retained mode structure that is the Frame widget and the 'Widget' trait
It is a retained tree that has a DSL reflection
I'm still iterating the retained mode structure a bit, although i'm fairly pleased with it so far
okay
Yea it's hard for me to judge how easy/hard that would be. I'd have to make an example and see if all the building blocks are there for it
The scope of UI is so large i need to work with examples to find out what to build first
after i finish the tooling and work on the mobile UI i'll have a look
this is what im building now (cant run it yet)
cargo install cargo-makepad
cargo makepad android install-compact-sdk
cargo makepad android run -p makepad-example-ironfish --release
if you have an android plugged in and in 'dev mode' it should immediately run
Yue Chen - 02/28/2023 2:32 PM
Does the user need to have android studio / the android emulator installed before they run Makepad Android?
Rik Arends - 02/28/2023 2:32 PM
no
cargo makepad android install-compact-sdk
this part actually pulls in a local stripped version of the SDK thats only 200mb
you can also do
cargo makepad android install-full-sdk
this downloads the sdk from google and pulls the files out you need
so thats a 1.6gb download
im building the install-full-sdk part now since thats essentially how i create the compact sdk
so you can completely run with only sources directly from google/oracle
for android and openJDK
so we're not putting ourselves up as a dependency here
hence why it would be cool if google served the macos NDK as a zip file and not a dmg, because then i could range-read the files directly from the zip off of HTTP :)
since i just wrote a zipreader there is no reason i cant put that on top of a ranged http request
but.
you can partially download files from a zip file over http if you know how to read the zip fileformat
ah well. because of that macos dmg i need to at least download that one fully
and some files are tar.gz files as well
i'm pretty pleased tho with how easy android dev now is
also happy that reading zipfiles was only a few hours of work
there is a fine line between NIH and being stupid
but now i can also use it for packaged resources in an executable for instance
or atleast i have the start of that.
i also have a bunch of tooling work related to 'packaging' of applications left over
like packaging icons, managing manifests, your digital certificates, etc
yea its not bad now
i can grab a new computer
type cargo makepad android install-full-sdk
and bam. done
working android devenv
i even have a more optimal path on macos than google themselves
i use the M1 openJDK and google uses the x86 one via rosetta still
saves 10% of time on the apk signing
which is 0.5 seconds so its all not eyewatering.
but its good to have it dissected to this level
and the tooling to do the SDK stripping/combination will soon be fully automated
there is one downside to doing it this way
if you run into issues like 'i can't link with random jar file X' and google that question, you will only find answers that tell you how to do it in android studio
those issues are fairly limited, but it is something to be aware of if you go and write a bunch of java yourself
we call javac directly
Right now i'm going into the rabbit hole of https access on each platform.
I can't use 'curl' on windows to download
since its not installed there
so i have to use something we made ourselves
Rik Arends - 03/01/2023 1:29 AM
(by that i mean i have to talk to the best api on each platform, network framework on apple, something else on windows/linux)
I didn't really wanna do this but i have to. Our IDE needs to be able to talk to websockets / https as well
on the plus side i can then directly range-read the android SDK on linux and windows without having to compact it and host it
but yea this is not fun.
Rik Arends - 03/01/2023 1:58 AM
my plan at first was to use git but i dont think github will like a 400 meg uncompressed repo much
its much more scalable to read directly from google
Rik Arends - 03/01/2023 2:07 AM
so im going to give 'cargo-makepad' the ability to read directly from https
macos first then windows
Sorry for taking longer than my estimate btw. I'm putting in a lot more than i initially estimated.
Properly working cross platform android buildtooling wasn't really in my estimates, but it's kinda meaningless to do without it.
right now i compile for android with a bunch of shell scripts you can't run on windows for instance
and i suspect windows to be by far the largest group of people interested
Rik Arends - 03/01/2023 2:14 AM
you can always choose to host the stripped SDK somewhere, and that will also require https then
apple looks easy. hope its only a day or so
Edward Tan - 03/01/2023 11:42 AM
Hi Rik, can you describe how state management works in the current Makepad system?
For example, assume that we have a simple app with a search input box at the top, and below it we can display an arbitrary number of lines of text results. So widget-wise we may have an input widget and its data, plus either a text area widget or multiple Label widgets and the data for them.
When the user enters a search text, we send the input to some API (is this currently supported? or just mock data locally) to get a variable-length text response back, and display that response list below the search box. The response may either fit in the current window height or may exceed it. So how does the Makepad system handle such a use case flow? Thanks
Rik Arends - 03/04/2023 4:58 AM
so at the bottom we have the platform layer, which means executing these drawlists, and handling mouse/keyboard input / windowing / etc
then on top of that i have a crate called makepad-draw
this contains what you could see as our 2d 'engine' and the base shaders for 2D rendering
Here we define the 2D layout system and the 2D drawflow (Cx2d). This is also where i'm building a little 3D engine as well, just to show that you can build whatever you want on top of that drawlist+shader tree
It also contains the font stack, vector rendering stack
Then on top of that (makepad-draw) we get makepad-widgets
makepad-draw is a fully immediate mode drawing system that generates the drawlists
drawlists are 'retained' by the render engine, so makepad-draw 2d turns that into an immediate mode system that has to re-create the entire drawlist if you change it
However there is no hard requirement for that if you want to do something non-immediate mode
makepad-draw 3d will actually modify the drawlist tree like a scenegraph
So makepad-widgets contains our entire base widget set
including our 'retained mode widgets'
since draw-2d is immediate mode you can write entire makepad applications as immediate mode. However this is not convenient for a visual designer since it needs to manipulate data and not code. Manipulating code isn't fast enough
Also manipulating code gets very complex once you inject logic. This is why React isn't great to build visual designers for
Rik Arends — 03/04/2023 5:05 AM
So we have the Frame widget which you could see as a 'dsl interpreter' to construct UI trees out of what it reads in that DSL
This Frame widget forms what you could see as our DOM tree
And this Frame widget has similar functions to query its tree, like 'getElementById' -> find_widget
This find_widget is what the databinding state layer then connects to
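as a toy illustration of that find_widget idea (not makepad's actual types or signature, just the concept):

// Walk a widget tree and return the first node with a matching id,
// much like getElementById on a DOM tree.
struct WidgetNode {
    id: String,
    children: Vec<WidgetNode>,
}

fn find_widget<'a>(node: &'a WidgetNode, id: &str) -> Option<&'a WidgetNode> {
    if node.id == id {
        return Some(node);
    }
    node.children.iter().find_map(|child| find_widget(child, id))
}

the databinding layer can then read or write whatever widget it finds by id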
ok when drawing all this
the widget tree is traversed (either manually or via the Frame widget) and draw(..) is called on the widget. Here you call draw on the shaders, which then emit themselves into the drawtree, giving you back an Area. This is a pointer into the drawtree that is useful to 'poke' into it
One of the engineering challenges i had early on was: if you have thousands of buttons or sliders, how do you efficiently handle mouse hovers
you don't want to have to 'immediate mode' regenerate the entire UI
So because the widget has an Area pointer, makepad can just write a 'hover' value directly into the GPU data that the shader uses
and call the backend to redraw the drawlist tree
so this means it can do this with 10s of thousands of items in a browser on a slow computer
Now this of course requires that there are no other side effects to what you want; if your animation causes a re-layout for instance, you do need to do the immediate-mode redraw
but this is also extremely fast, and you would then just not do that when you have lots of items. at least we have both paths
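roughly the shape of that fast path, as a toy sketch (hypothetical types; the real Area/drawlist structures are more involved):

// Each drawn button gets back an index (its 'Area') into a flat instance
// buffer, so reacting to a hover is one float write plus a redraw request,
// with no widget-tree regeneration.
#[repr(C)]
#[derive(Clone, Copy)]
struct ButtonInstance {
    rect: [f32; 4], // x, y, w, h
    hover: f32,     // animated 0.0..=1.0, read by the pixelshader
}

struct DrawList {
    instances: Vec<ButtonInstance>,
    needs_redraw: bool, // tells the backend to re-upload and redraw
}

impl DrawList {
    fn set_hover(&mut self, area: usize, hover: f32) {
        self.instances[area].hover = hover;
        self.needs_redraw = true;
    }
}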
Rik Arends โ 03/04/2023 5:14 AM
So an important take-away i would say is that makepad kinda doesn't have a render engine. You are fairly directly manipulating the data on drawcalls. There is no layer in between what is generated by the drawflow and what the gpu executes. This is highly performant, but can also be limiting because the developer needs to be more aware of what they do
For instance turning on texture caching for a UI element is manual since it creates a complete renderpass and a texture. The Frame element has support for this
The only abstraction we have there is if you use the Frame element, but it's all exposed to the widget builder
Same with the layout system: because it's completely single pass, you can do whatever you want as it runs. There are no 'engines' to poke at
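as a toy illustration of what 'single pass' means here (borrowing the Cx2d name from above, but this is not the real api): each widget is placed the moment it draws, by advancing a cursor, with no separate measure/arrange passes

struct Cx2d {
    cursor: (f32, f32), // where the next widget lands
    row_height: f32,    // tallest widget on the current row
    max_width: f32,     // wrap boundary
}

impl Cx2d {
    // called by a widget while it draws; returns its final position
    fn place(&mut self, w: f32, h: f32) -> (f32, f32) {
        if self.cursor.0 + w > self.max_width {
            // wrap immediately; nothing gets re-measured later
            self.cursor = (0.0, self.cursor.1 + self.row_height);
            self.row_height = 0.0;
        }
        let pos = self.cursor;
        self.cursor.0 += w;
        self.row_height = self.row_height.max(h);
        pos
    }
}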
Now for the actual pixels on screen
Almost all of makepad's UI styling is done using pixelshaders
So say you define one
on a super basic button
instance hover: 0.0
fn pixel(self) -> vec4 { return mix(#f00, #0f0, self.hover) }
in the Rust code i manipulate the 'hover' value from 0 to 1 (via the animation system, but let's ignore that for now)
and thats enough to get the color to go from red to green here
we have quite a nice pixelshader API available in makepad in the form of a little 'canvas-like' api that can draw circles and round corners and all sorts of things you could do in CSS
and this has worked great for us thus far
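for example a round-cornered, hover-animated background in that canvas-like api reads roughly like this (from memory, the exact names may differ between versions):

fn pixel(self) -> vec4 {
    // Sdf2d is the little canvas-like drawing api inside the pixelshader
    let sdf = Sdf2d::viewport(self.pos * self.rect_size);
    sdf.box(0.0, 0.0, self.rect_size.x, self.rect_size.y, 2.0); // 2px corner radius
    sdf.fill(mix(#f00, #0f0, self.hover));
    return sdf.result;
}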
in the longer run we want to have a flowgraph editor for shaders as well just like unity/unreal has
Rik Arends — 03/04/2023 5:23 AM
Hope this answers at least some of the questions. it's a rather massive-scope project and we need to / intend to document it in much greater detail, but i first need to finish the android release / other work as well
And also until the visual designer is in alpha i can't commit to apis yet. We've been working towards this goal for 4 years now and the entire system is designed to do that
Edward Tan — 03/06/2023 10:33 AM
Thanks for the detailed explanation. The link to the file however seems to be broken?
Rik Arends — 03/06/2023 12:22 PM
oh i guess that's what i get when i rename things
sec
i'm fixing things up so android/wasm and native builds work
https://github.com/makepad/makepad/blob/rik/examples/ironfish/src/app.rs#L1682
its now called app.rs
few more apis and my platform work / buildtool is done. Then i'm back on UI kit and designtooling
cargo has issues trying to compile the 'same' project as a dll or a binary so i had to invent something called 'app.rs' which is that combination
the networking is the shitty one, but since we need networking in all of makepad for end users i don't mind as much. But if i skip that i might be able to make it in not too long
maybe a week
what i can also do is not push to crates
so you'd have to git clone
so
TigerXu — Yesterday at 3:14 AM
Great work.
I have another question. As we all know, on android the widgets are translated into openGL commands which are submitted to the GPU for rendering. In makepad, as you said, the widget would be translated into openGL shaders. My questions are: what do the shaders look like, and what's the difference between android's openGL commands and makepad's shaders? @Rik Arends
Rik Arends — Yesterday at 3:43 AM
ehm
so in makepad the widget isn't translated into shaders, it's just that it CAN be drawn / styled using pixelshaders
makepad builds something called 'instanced arrays'. So it builds a bunch of geometry data that contains the visual data for UI structures; a button for instance is usually a quad
The shaders for this look very normal, it has a vertex shader that has matrices and positions the geometry on screen, and a pixelshader that determines the color
they are a bit 'generated' so not super pleasant to read
the makepad shadercompiler has all sorts of facilities to make it easier for Rust to send data to the shader code
so we don't turn an entire UI into shaders. it's really just a bunch of geometry that, instead of being textured, generally uses pixelshaders to determine color
in fact ALL of the UI right now is quads
text is little quads one per character, buttons are a quad
we don't have any other geometry in use at this point
however you could use any geometry you can use in openGL as the shapes to instance
this also means you can just put the UI in '3D' by changing a few matrices
it'll look flat, but it doesn't have to be if you build the UI elements out of 3D shapes instead of quads
So in terms of openGL commands makepad pretty much uses 1 type of drawcall
gl_sys::DrawElementsInstanced(
    gl_sys::TRIANGLES,    // the instanced geometry is triangles (a quad is 2 of them)
    indices as i32,       // index count of the base geometry
    gl_sys::UNSIGNED_INT, // index type
    ptr::null(),          // start at offset 0 in the bound index buffer
    instances as i32      // one instance per UI item in the batch
);
that one
and thats it
it has one blend mode which is premultiplied alpha
and it has render to textures
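(for reference, premultiplied alpha in openGL is just the standard blend setup below; assuming the gl_sys bindings expose the usual names, like the draw call above)

gl_sys::Enable(gl_sys::BLEND);
gl_sys::BlendFunc(gl_sys::ONE, gl_sys::ONE_MINUS_SRC_ALPHA);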
Rik Arends — Yesterday at 3:51 AM
its extremely simple in what it uses for the graphics api
the entire set of openGL calls we use is something like 40 total
i've really limited our graphics apis to the simplest possible set first, we can extend later. because we have so many of them (one gpu backend per platform)
TigerXu — Yesterday at 4:21 AM
Got it, thanks.
TigerXu — Yesterday at 4:29 AM
'instanced arrays' means we can draw several instances through one invocation, right? So there should be a batch that merges the data drawn with the same invocation?
Rik Arends — Yesterday at 4:29 AM
yes
makepad batches things you draw inside one container
so for instance a text editor has all of its text in one call
and the scrollbar in another
or a long list of property items
would have the labels in one
the text input box backgrounds in another
etc.
this is behind the drawing apis
so you mostly don't have to worry about it
it's not the absolute most perfect batching but it is good enough
our entire IDE is like 60 drawcalls
which is totally acceptable
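the text case gives a good feel for the batching; a toy sketch (hypothetical types, and lookup_in_atlas is a made-up stand-in for the font atlas):

// All glyphs of a text run are appended to one instance buffer, so the
// whole run renders as a single instanced draw call.
struct GlyphInstance {
    pos: [f32; 2], // screen position of this glyph quad
    uv: [f32; 4],  // atlas rect for this character
}

// stub: a real atlas returns the glyph's uv rect and x-advance
fn lookup_in_atlas(_ch: char) -> ([f32; 4], f32) {
    ([0.0; 4], 8.0)
}

fn draw_text(batch: &mut Vec<GlyphInstance>, mut x: f32, y: f32, text: &str) {
    for ch in text.chars() {
        let (uv, advance) = lookup_in_atlas(ch);
        batch.push(GlyphInstance { pos: [x, y], uv });
        x += advance;
    }
    // one DrawElementsInstanced with batch.len() instances draws the run
}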
TigerXu — Yesterday at 4:32 AM
so this decreases draw call count?
Rik Arends — Yesterday at 4:32 AM
yea, i can't draw every character of every word or every item in a tree with its own drawcall, that would be pretty slow
but because it batches you can do things like zooming out till the characters are pixels
because drawing instances / generating the data is so fast
TigerXu — Yesterday at 4:32 AM
yes, that would be very fast
and does android or flutter do the same?
Rik Arends — Yesterday at 4:33 AM
ehh no idea tbh
im sure they do things in an optimal way
as i said before makepad has no special magical way except for using more shaders than others do
i draw the same triangles as everyone else
seems likely that flutter right now uses skia's gpu triangulator and lots of texture caching
and in the future i can see them use even more gpu accelerated vector things
which is pretty cool and all but for us it doesnt really matter all that much
makepad is already extremely efficient in many ways, now it just needs a good-enough designtool to beat web, or at least play in the same league
and then it should be good enough
also gpu accelerated vector stacks are a lot of R&D so
however if you pitch us vector for vector against skia we have no story of course
that project is an entire office for google
TigerXu — Yesterday at 4:38 AM
yes
that's a big project
Rik Arends — Yesterday at 4:38 AM
we do have a vector triangulator, i'm adding it soon
so we should be able to draw vector icons adequately
and our font renderer also uses a different kind of gpu accelerated rendering
TigerXu — Yesterday at 4:39 AM
what kind?
Rik Arends — Yesterday at 4:39 AM
we use the algorithm of 'pathfinder 2'
the mozilla R&D project
pathfinder 3 was the really fancy one that uses lots of GPU
pathfinder 2 is a very simple algorithm but it does allow you to create font atlases in a gpu accelerated way
it's not fast enough to draw vectors in realtime every frame but it's great for font atlases
TigerXu — Yesterday at 4:40 AM
Heard a little about it, and I'll go and learn it in detail
Rik Arends — Yesterday at 4:40 AM
it's also very hard to find information about
my cofounder eddy is ex mozilla and did the code
TigerXu — Yesterday at 4:41 AM
great
Rik Arends — Yesterday at 4:41 AM
so you can always ask him.
TigerXu — Yesterday at 4:42 AM
OK, would you invite him to join us?
Rik Arends — Yesterday at 4:42 AM
i recently tried to google the algorithm and failed
let me ask him
he might have a link for you
TigerXu — Yesterday at 4:43 AM
OK, for now I have no more questions due to my lack of knowledge about it. I'll ask more later.
you said makepad uses more shaders than others do, what are these shaders?
Rik Arends — Yesterday at 4:44 AM
so others tend to have shaders for one particular drawing algorithm
like a vector-rendering shader or an 'image drawing' shader
for us because we pulled the shader language all the way through from the back to the front
you write custom little shaders for every widget
nobody does that
it remains to be seen if it's a good idea, but so far i really like it for what we do
the ability to specialise shaders all over the place is a great tool to have for building high perf UI
It comes at a startup cost but as you've seen you can mitigate that by caching at the graphics API level
TigerXu — Yesterday at 4:48 AM
what changes do custom little shaders bring, fewer draw calls or less data to upload to the GPU?
why is it good for building high perf UI
Rik Arends — Yesterday at 4:49 AM
so you can parameterise little drawing programs with values for UI components
like hover
it's much more intricate to be able to animate/tweak a bit of pixelshader code in response to interaction inputs
than to have to manipulate vector data
and because there is 'nothing' going on on the CPU side here
except for storing a single float per button
it scales to 10s of thousands of UI items on a slow web device
so 1. powerful if you need it 2. extremely fast for the CPU
https://www.researchgate.net/publication/2547487_A_Fast_Trapezoidation_Technique_For_Planar_Polygons
there is part of the algorithm
TigerXu — Yesterday at 4:55 AM
I seem to get it a little bit and will go to read the article
Thank you , Rik
Rik Arends — Yesterday at 4:57 AM
once you have trapezoids you can break those down further into 2 triangles (for the edges) and 1 quad
the quads are always 100% filled and for the triangles you use a simple area calculation for each pixel
we have a shader for this
its here
https://github.com/makepad/makepad/blob/rik/draw/src/font.rs#L24
as you can see our shader compiler can also be used for slightly more complex shaders
this is not a shader as we use it in the UI
you can also use it to write shaders for 3D
it's not bolted to 2d rendering
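to picture the trapezoid decomposition from above, a rough sketch (illustrative only, assuming a trapezoid with horizontal top/bottom edges that is widest at the bottom; the real code handles all the orientations):

struct Trapezoid {
    y_top: f32, y_bot: f32,
    x_top_l: f32, x_top_r: f32, // x extents of the top edge
    x_bot_l: f32, x_bot_r: f32, // x extents of the (wider) bottom edge
}

type Tri = [[f32; 2]; 3];

// the inner quad is always 100% covered; the two slanted edges become
// triangles whose pixels get fractional coverage in the shader
fn decompose(t: &Trapezoid) -> ([f32; 4], Tri, Tri) {
    let quad = [t.x_top_l, t.y_top, t.x_top_r, t.y_bot]; // (x0, y0, x1, y1)
    let left: Tri = [[t.x_bot_l, t.y_bot], [t.x_top_l, t.y_top], [t.x_top_l, t.y_bot]];
    let right: Tri = [[t.x_bot_r, t.y_bot], [t.x_top_r, t.y_top], [t.x_top_r, t.y_bot]];
    (quad, left, right)
}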
Edward Tan — Yesterday at 10:29 AM
Hi Rik, is there an input field widget supported currently? I see there's a "TextInput" widget but not many examples
Rik Arends — Yesterday at 10:38 AM
we have a textinput yea
the only use we have for it right now is the text-input in the slider of ironfish
it might be a bit unfinished
it's pretty much complete tho
Edward Tan — Yesterday at 10:47 AM
ok, let me check it out. I put it on screen but by default it doesn't seem to show anything or interact. I'll look into the event handlers to see what else needs to be added
Rik Arends — Yesterday at 11:36 AM
so
i can try it out
let me see
it's just a widget that we don't have a separate case for yet so it's probably half configured
It seems to work
input1 = <TextInput> {
    walk: {width: 100, height: 30},
    text: "Click to count"
}
btw this widget is not mobile-ready yet. we don't have support for mobile keyboards yet, that's a project in itself.