I just can't get my head around the fact that we are building text editors inside a web browser! I get it, there are many good use cases for Electron and it's easy to get started with cross-platform support, but why is everybody going crazy about text editors in it? Because you can write plugins in JS?

Wouldn't it be better to make a native application, especially for a code editor, where developers spend most of their time and where every noticeable lag or glitch is unwelcome?

Edit: Many people here think that I am attacking this kind of web-based technology, which I am not, and sorry for not being clear enough, but why choose something so high up the stack for a dev tool?

Edit2: For non-believers in nested comments, look -> https://github.com/jhallen/joes-sandbox/tree/master/editor-p...

Yep. I also can't wrap my head around the fact that we are now constructing buttons, drop-down boxes, tagged text boxes using dozens of nested layers instead of a native widget that writes directly to the screen. My 486 rendered UIs with nearly imperceptible lag. Google Docs takes a good 2-3 seconds to spin up a UI on my i7.

> instead of a native widget that writes directly to the screen.

"Writing directly to the screen" (by which I assume you mean writing pixels one by one to the framebuffer) is a bad idea for modern graphics hardware. It was fine on the 486, but nowadays you need the ability to do global optimizations for good 2D (or 3D) graphics performance. Ironically, the Web stack is much better positioned to do this than, say, Win32, because of the declarative nature of CSS.

Besides, as some downthread have pointed out, you didn't "write directly to the screen" in Win32. You went through GDI.

It seems reasonable that this might be true, but it's not. In video games we went down the road of retained-mode graphics APIs (declarative-type things, so that they can do the kinds of 'global optimization' you mention), but we abandoned them because they are terrible. Video games all render using immediate-mode APIs; this has been true for a very long time now, and nobody is interested in going back to the awful retained-mode experiment.

You build custom retained-mode APIs on top of the immediate mode APIs—they're called game engines.
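As an illustration of that layering (a toy sketch with purely hypothetical names, not any particular engine's API): the application keeps a persistent node tree, which is the retained part, and each frame the engine walks it and turns it back into plain immediate-mode draw calls.

    // Toy sketch: a retained node tree built on top of an immediate-mode
    // draw call. All names here are hypothetical.
    #include <cstdio>
    #include <memory>
    #include <string>
    #include <vector>

    struct Node {
        std::string name;
        float x = 0, y = 0;                         // local offset
        std::vector<std::unique_ptr<Node>> children;

        Node* add(std::string n, float px, float py) {
            children.push_back(std::make_unique<Node>(Node{std::move(n), px, py}));
            return children.back().get();
        }
    };

    // Stand-in for the underlying immediate-mode call ("draw a quad now").
    void draw_quad(const std::string& name, float x, float y) {
        std::printf("draw_quad(%-8s at %5.1f, %5.1f)\n", name.c_str(), x, y);
    }

    // The retained tree only persists state; rendering is still an immediate
    // traversal that re-issues every draw call each frame.
    void render(const Node& node, float ox, float oy) {
        draw_quad(node.name, ox + node.x, oy + node.y);
        for (const auto& child : node.children)
            render(*child, ox + node.x, oy + node.y);
    }

    int main() {
        Node scene{"root"};
        Node* panel = scene.add("panel", 10, 10);
        panel->add("button", 5, 5);
        panel->add("label", 5, 35);

        render(scene, 0, 0);   // called once per frame in a real loop
    }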

What happens if you try to present an immediate-mode API for UIs is the status quo you see with APIs like Skia-GL: you frequently end up switching shaders and issuing a new draw call every time you draw a rectangle, and you draw strictly in back-to-front order, so you completely lose your Z-buffer.

Imagine if games worked like that: drawing in back to front order and switching shaders every time you drew a triangle. Your performance would be terrible. But that's the API that these '90s style UI libraries force you into. Nobody thought that state changes would be expensive or that Z-buffers could exist when Win32, GTK, etc. were designed. They strictly drew using the painter's algorithm, and they used highly specialized routines for every little widget piece because minimizing memory bandwidth was way more important than avoiding state changes. But the hardware landscape is different now. That requires a different approach instead of blindly copying what the "native" APIs did in 1995.
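To make the cost concrete, here is a small hedged sketch (plain C++, no real graphics API, all names made up) that just counts state changes: drawing back-to-front with a shader bind per rectangle versus grouping rectangles by shader first. Note that grouping only works if you can give up strict back-to-front order, which is exactly where a Z-buffer (or known-opaque content) comes in.

    // Toy comparison of per-rectangle draw calls vs. batching by shader.
    #include <cstdio>
    #include <map>
    #include <vector>

    struct Rect { float x, y, w, h; int shader_id; };

    int main() {
        std::vector<Rect> ui = {
            {0, 0, 800, 600, 0},   // window background, "solid color" shader
            {10, 10, 200, 24, 1},  // text label, "glyph atlas" shader
            {10, 40, 200, 24, 1},  // another label
            {10, 80, 80, 24, 0},   // button background
            {14, 84, 72, 16, 1},   // button caption
        };

        // Painter's-algorithm style: back-to-front, shader bind per rect,
        // one draw call per rect.
        int naive_binds = 0, naive_draws = 0, current = -1;
        for (const Rect& r : ui) {
            if (r.shader_id != current) { current = r.shader_id; ++naive_binds; }
            ++naive_draws;
        }

        // Batched style: group rects by shader, one bind and one draw per group.
        // This reorders draws, so overlap has to be resolved some other way
        // (Z-buffer, or content known to be opaque / non-overlapping).
        std::map<int, std::vector<Rect>> batches;
        for (const Rect& r : ui) batches[r.shader_id].push_back(r);
        int batched_binds = (int)batches.size();
        int batched_draws = (int)batches.size();

        std::printf("naive:   %d shader binds, %d draw calls\n", naive_binds, naive_draws);
        std::printf("batched: %d shader binds, %d draw calls\n", batched_binds, batched_draws);
    }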

Ehh, game engines are not really retained-mode in the way you mean. There isn't usually a cordoned-off piece of state that represents visuals only. Rather, much of that state is produced each frame from the mixture of state that serves all purposes (collision detection, game event logic, etc).

"What happens if you try to present an immediate mode API for UIs is the status quo with APIs like Skia-GL."

I don't know what Skia-GL is, but in games, the more experienced people tend to use immediate mode for UIs. (This trend has a name, "IMGUI". I say 'more-experienced people' because less-experienced people will do it just by copying some API that already exists, and these tend to be retained-mode because that is how UIs are usually done.) UIs are tremendously less painful when done as IMGUI, and they are also faster; at least, this is my experience. [There is another case when people use retained-mode stuff, and that's when they are using some system where content creators build a UI in Flash or something and they want to reproduce that in the game engine; thus the UI is fundamentally retained-mode in nature. I am not a super-big fan of this approach but it does happen.]
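For readers who haven't seen the pattern, here is a toy sketch of what an IMGUI call site looks like. It is not dear imgui's or Conrod's actual API, just an illustration that the UI is re-declared every frame and that widget calls do hit-testing and emit draw data on the spot.

    // Minimal toy IMGUI sketch: no retained widget tree, the draw list is
    // rebuilt from scratch every frame. Hypothetical names throughout.
    #include <cstdio>
    #include <string>
    #include <vector>

    struct Input { float mouse_x = 0, mouse_y = 0; bool mouse_down = false; };
    struct DrawRect { float x, y, w, h; std::string label; };

    struct Ui {
        Input input;
        std::vector<DrawRect> draw_list;   // rebuilt from scratch each frame

        // Returns true while the button is hovered and the mouse is down
        // (a real library would track press/release edges).
        bool button(const char* label, float x, float y, float w = 100, float h = 24) {
            bool hovered = input.mouse_x >= x && input.mouse_x <= x + w &&
                           input.mouse_y >= y && input.mouse_y <= y + h;
            draw_list.push_back({x, y, w, h, label});
            return hovered && input.mouse_down;
        }
    };

    int main() {
        Ui ui;
        ui.input = {120.0f, 50.0f, true};    // pretend the mouse is over "Save"

        for (int frame = 0; frame < 2; ++frame) {
            ui.draw_list.clear();            // nothing persists between frames
            if (ui.button("Save", 100, 40)) std::printf("frame %d: save\n", frame);
            if (ui.button("Quit", 100, 80)) std::printf("frame %d: quit\n", frame);
            // ...hand ui.draw_list to the renderer here...
        }
    }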

"and you draw strictly in back to front order so you completely lose your Z-buffer"

That sounds more like a limitation of the way the library is programmed than anything to do with retained or immediate mode. There may also be some confusion about causation here. (Keep in mind that Z buffers aren't useful in the regular way if translucency is happening, so if a UI system wants to support translucency in the general case, that alone is a reason why it might go painter's algorithm, regardless of whether it's retained or immediate).
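A quick worked example of the translucency point, assuming ordinary "over" blending: compositing the same two translucent quads in different orders gives different colors, which is why a Z-buffer alone can't sort them out and why a UI system supporting general translucency tends to draw in a defined back-to-front order.

    // "Over" blending is not order-independent.
    #include <cstdio>

    struct Color { float r, g, b; };

    // Composite a translucent source over an opaque destination.
    Color over(Color dst, Color src, float src_alpha) {
        return { dst.r * (1 - src_alpha) + src.r * src_alpha,
                 dst.g * (1 - src_alpha) + src.g * src_alpha,
                 dst.b * (1 - src_alpha) + src.b * src_alpha };
    }

    int main() {
        Color white{1, 1, 1}, red{1, 0, 0}, blue{0, 0, 1};

        Color red_then_blue = over(over(white, red, 0.5f), blue, 0.5f);
        Color blue_then_red = over(over(white, blue, 0.5f), red, 0.5f);

        // The two results differ, so translucent UI has to be drawn in a
        // well-defined (usually back-to-front) order.
        std::printf("red then blue: %.2f %.2f %.2f\n",
                    red_then_blue.r, red_then_blue.g, red_then_blue.b);
        std::printf("blue then red: %.2f %.2f %.2f\n",
                    blue_then_red.r, blue_then_red.g, blue_then_red.b);
    }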

"But that's the API that these '90s style UI libraries force you into."

90s-style UI libraries are stuff like Motif and Xlib and MFC ... all retained mode!

I don't agree that an IMGUI style forces you into any more shader switches than you would already have. It just requires you to be motivated to avoid shader switches. You could say that it mildly or moderately encourages you to have more shader switches, and I would not necessarily disagree. That said, UI rendering is usually such a light workload compared to general game rendering that we don't worry too much about its efficiency -- which is another reason game people are so flabbergasted by the modern slowness of 2D applications: they are doing almost no work in principle.

Back to the retained versus IMGUI point ... If anything, there is great potential for the retained mode version to be slower, since it will usually be navigating a tree of cache-unfriendly heap-allocated nodes many times in order to draw stuff, whereas the IMGUI version is generating data as needed so it is much easier to avoid such CPU-bottlenecking operations.

It looks like you and pcwalton are arguing about different definitions of "immediate mode API". I think the two of you agree with each other on the object-level propositions.

pcwalton seems to be presuming that part of the contract of an "immediate mode API" is that, like the old-school ones, it actually draws to the framebuffer immediately, by the end of the call.

Whereas you are talking about modern "immediate mode APIs", where the calls just add things to an internal data structure that is all drawn at once, avoiding unnecessary shader switches, etc. IIRC this is how Conrod (Rust's imgui library) and https://github.com/ocornut/imgui work, although with varying levels of caching.

One point to make about retained-mode GUIs: I remember reading an argument that immediate mode is great for visually simple UIs, such as those in video games, but isn't as good for larger-scale graphical applications and custom widgets. For example, when rendering a large text box, list or table, you don't want to recalculate the layout every frame, so you need some data structure that sticks around between frames and is specific to the widget type; that's what retained-mode APIs like Qt do for their widgets.

Sure, you can do the calculations yourself for exactly which rows of a table are currently in view and render those and the scrollbar with an immediate-mode API, but the promise of toolkits like Qt is that you don't have to write those calculations and data structures for every table.
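For the table case specifically, here is a hedged sketch of what "doing the calculations yourself" amounts to in an immediate-mode style: each frame you derive the visible row range from the scroll offset and only emit draw data for those rows. The numbers and the emit_row placeholder are hypothetical, not from any real toolkit.

    // Visible-range computation for a huge table: O(1) per frame,
    // independent of the total number of rows.
    #include <algorithm>
    #include <cstdio>

    int main() {
        const int   row_count       = 1'000'000;   // logical rows in the model
        const float row_height      = 20.0f;
        const float viewport_height = 600.0f;
        const float scroll_offset   = 123456.0f;   // pixels scrolled from the top

        int first = std::max(0, (int)(scroll_offset / row_height));
        int last  = std::min(row_count - 1,
                             (int)((scroll_offset + viewport_height) / row_height));

        std::printf("drawing rows %d..%d of %d\n", first, last, row_count);
        for (int row = first; row <= last; ++row) {
            float y = row * row_height - scroll_offset;   // viewport-relative y
            // emit_row(row, y);  // hypothetical: push quads/text for this row
            (void)y;
        }
    }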