Should we improve full-stack DX at all costs just to improve UX as well?

Ifeora Okechukwu · Published in Bootcamp · 19 min read · Nov 30, 2023


This is a question that has been on my mind for a while. By the way, DX stands for Developer Experience, while UX stands for User Experience.

Web software engineering is taking a journey back to 1997 (somewhat), when DHTML was made available by Microsoft, only now in a much more mature, scalable and cost-effective way. Friends, I can comfortably say that we are witnessing the end of the era of the thick browser client and a return to the time of the thin browser client (history is basically repeating itself). It's also apparent that the path ahead might favour the sentiment of collapsing the wall between frontend and backend roles into both-end-stack (or full-stack) roles, using fewer people per engineering or product team. Trying to improve DX this way is one thing, but improving it at all costs without considering the downsides is another.

A lot of people have viewed the segregation of frontend and backend into distinct roles as a counter-productive practice. Are they right? I don't think so. Here's why: insisting that front-end and back-end are a dichotomy is like saying the client and the server on the web are a dichotomy. The client and the server are simply two sides of the same coin; two related pieces of the same whole (the web), based on the client-server architecture.

Now, Jan Lehnardt (a renowned expert progressive web software engineer from Germany) once said that the segregation of frontend and backend roles has led to decades of duplicating most business logic and data fetching, filtering and sorting needs, and I believe he's right, partially. I do not believe that the segregation of roles is directly responsible for the duplication of effort. Rather, unproductive paradigms like the Single-page Application are responsible. Firstly, here's an article from Thoughtworks stating that arbitrarily dividing frontend and backend is an anti-pattern. On the other hand, here is another article speaking to the merits of separating both roles, and I agree 100%.

Secondly, I do believe that allowing frontend engineers to be separate from backend engineers is a very productive way to structure software teams. Why? Well, for starters, even though both frontend and backend roles do needlessly duplicate tasks like state management, input validation, data fetching and business logic design & implementation (as I said earlier, this is due to the emergence of weak and highly complex paradigms like Single-page Applications), there are still aspects of each role that don't intersect or duplicate effort at all, like animations, observability, page styling, web accessibility, database design/administration and error handling. Plus, these require immense specialization to be done properly.

For instance, performance bottlenecks (e.g. execution latency) on the backend can severely affect the critical rendering path on the frontend. However, frontend performance analysis and optimisation is a very different concept from performance analysis on the backend, and is often a case for specialization. In the same vein, state management on the frontend is mostly focused on UI (presentation layer) state (e.g. URL or HTML form state) and other derived data state, while state management on the backend is mostly focused on transactional workloads (e.g. database state). The very same applies to security analysis on both frontend and backend too.

To suggest that there be no distinction in roles between frontend and backend is to simultaneously suggest that each member of a web software team become an expert in everything frontend and backend at once. This is why I find such a suggestion a bit wild, if interesting. I do, however, encourage that before specialising in either frontend or backend, each member of a web software team be well acquainted with the web platform in general (HTML, CSS, JavaScript, Hypertext, URIs, HTTP, IP addresses, JSON, Web sockets etc.) and keep an open mind to exploring additional skills in either role. Yet, this should be done without sacrificing specialization in each role. Also, remember that frontend doesn't just refer to the browser; it refers to native mobile frontend too. It would be too much for one person to have to juggle most of mobile and browser frontend work with most of backend work.

Furthermore, I believe that the aspects of frontend and backend which are often duplicated across both, such as state management and data fetching, can be synchronised and collapsed into a less complicated form using well-designed tooling that acts as a bridge between the two roles. Thankfully, this is fast becoming a dream come true. Over the last 3-4 years, we have seen the advent and increased adoption of newer both-end-stack (or full-stack) solutions like Hotwire, Stimulus, HTMX, Livewire, SvelteKit, NextJS and Remix. These seem poised to help engineers from either role become much more productive on the job by extending into the opposite role as much as possible, offsetting unnecessary duplicate logic and enhancing the communication between client and server. In fact, with this brand of solutions, software tech start-ups (especially software shops) can begin to save a ton of time and money on hiring software engineers, reducing resource-allocation overhead and build costs in general.
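As a small illustration of that bridging idea, here is a hedged sketch using Zod (one of several isomorphic validation libraries; the schema and field names are my own, not from any framework above). The same schema validates input on the client before submission and on the server before persistence, so the rule is written exactly once:

import { z } from "zod";

// One schema, defined once, imported by both client and server bundles.
export const signupSchema = z.object({
  email: z.string().email(),
  age: z.number().int().min(18),
});

export type Signup = z.infer<typeof signupSchema>;

// Client: signupSchema.safeParse(formValues) before submitting the form.
// Server: signupSchema.safeParse(requestBody) again before touching the DB.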

The both-end-stack reset

It's no surprise, then, that these both-end (frontend and backend fused) solutions are bringing back the good old days of using primarily HTML to communicate data from the server to the client (as opposed to JSON or XML). So, instead of using HTTP/1.1 or HTTP/2 on the browser to create HTTP resource waterfalls (cascading requests for scripts and JSON data, usually from REST API endpoints, which in turn enable numerous loading spinners on a web page), data is loaded in parallel on the server and sent at once, already rendered as one block of markup (HTML/SVG), without the need for extra magic like hydration. Another feature of these solutions is making much more use of the browser platform (web standards) and its APIs (especially for preloading scripts and data and for dealing with HTML forms) than is currently obtainable with SPA alternatives. Lastly, much of the manipulation of base application state is restricted to the server side.
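To make that contrast concrete, here is a hedged, Express-style sketch; the routes and the loadNotes/loadUser helpers are illustrative stand-ins, not code from any of the frameworks mentioned. The first route feeds a client-side renderer JSON; the second gathers its data in parallel on the server and ships one finished block of markup:

import express from "express";

const app = express();

// Hypothetical data-access helpers standing in for a real database.
async function loadNotes() {
  return [{ title: "First note" }, { title: "Second note" }];
}
async function loadUser() {
  return { name: "Ada" };
}

// SPA style: the browser fetches this JSON, then renders it with JS,
// usually behind a loading spinner.
app.get("/api/notes", async (_req, res) => {
  res.json(await loadNotes());
});

// HTML-over-the-wire style: data is gathered in parallel server-side
// and the client receives render-ready markup (no hydration step).
app.get("/notes", async (_req, res) => {
  const [notes, user] = await Promise.all([loadNotes(), loadUser()]);
  res.send(`
    <main>
      <h1>${user.name}'s notes</h1>
      <ul>${notes.map((n) => `<li>${n.title}</li>`).join("")}</ul>
    </main>
  `);
});

app.listen(3000);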

After the release of Rails 7 in late 2021, DHH wrote an article that made the term “One-Person Framework” very popular. The idea is simple: one person builds a fully commercial, production-ready web application, with a frontend, a backend, SEO and deploy systems. The term has resonated with a large section of builders, managers, executives and engineers whose daily job is to improve the efficiency and speed of shipping digital software products/features with the smallest amount of effort and manpower.

We just came out of an era where component-based frontend libraries meant one thing: SPAs. That meant you needed a backend team to push out JSON (or Protobufs/XML, I guess), but now the emphasis is on the backend pushing out markup (HTML) that has been infused with “magical” interactivity commands instead of JSON. This might mean fewer frontend and backend engineers per team per product or sub-product. It might also mean frontend and backend engineers not wanting to learn more about the other side. But that seems not to matter now: we are in the era of Transitional apps! An era where both frontend and backend engineers supposedly have the power to become “10x engineers” in a relatively small timeframe, without having to involve themselves in any cross-discipline learning of either the frontend or backend.

In the same vein, nearly the entire web software industry seems focused on improving DX by putting more productivity in the hands of as many engineers as feasible. Apparently, this is because the investment will surely trickle down to better UX for web application end users (developers will be more effective at their jobs; win-win!). Developers want convenience, even if it may sometimes be to the detriment of their users. In the past, it was often said that the best UX is usually delivered by miserable web software engineers. That may have been the case until web standards became mainstream. I, for one, believe that it's possible to have great UX and DX together, but not at the cost of losing the flexibility to change software behaviour both now and in the future.

Today, everyday OSS maintainers and thought leaders are constantly seeking ways to push the limits of what's possible with web sites/apps while simultaneously proposing enhancements to the web standards. It may not be obvious, but DX has been at the heart of the web standards movement for decades anyway.

Some say DX is simply about familiarity more than it is about anything else. Well, I agree partially. DX is about familiarity, yet it's also about timeless stability, longevity, resilience and future-proof adaptability. DX is more about easeful productivity (both now and in the future) than it is about familiarity and convenience.

The web platform standards & DX

The web platform standards (browsers, HTML, CSS, ECMAScript, HTTP, TCP, servers) are the most stable, future-proof distributed software foundation that I know of. It has all been in development for at least 20 years, and while it's not yet perfect, it's all we have got. For all of its faults, it is very reliable and resilient. Websites made in 1996 or 2014 still work on modern browsers today without needing code changes, “dependency” updates, “component” upgrades or rewrites of huge parts of their pages. If that is not DX, then I don't know what is. Also, guess what? It all still loads quickly today; it's not slow.

For a very long time, companies like Facebook slowly but steadily moved away from web standards and the web platform in general while claiming to enhance what is possible on the web platform. This took center stage with ideas like GraphQL, which wholly or partially tossed aside HTTP file uploads, HTTP caching and the semantic meaning of HTTP verbs, while adding significant indirection (resolvers and the GQL parser) in the process. Furthermore, ReactJS had, and still has, synthetic events for dealing with browser events like form submissions, clicks and keypresses; these are proxies for the real browser events, yet they don't behave quite like them. ReactJS also makes use of an unnecessary level of indirection called the virtual DOM, plus extra code libraries to enable React to perform other tasks. This has caused a couple of problems down the line. For instance, most libraries need binding/glue code to work with React, e.g. redux, mobx, chartjs.

Also, imagine this: if I want to programmatically update the value of a form element (<input> or <select>) and trigger a change event on it, there's no way to do it in React without a DOM element reference, a native property descriptor and a custom event dispatch.

import React, { useRef, useEffect } from "react";

// Grab the native setters so the value change isn't swallowed by
// React's own value tracking on form elements.
const setInputValue = Object.getOwnPropertyDescriptor(
  HTMLInputElement.prototype,
  "value"
)!.set!;

const setSelectValue = Object.getOwnPropertyDescriptor(
  HTMLSelectElement.prototype,
  "value"
)!.set!;

const FormComponent = () => {
  const inputElementRef = useRef<HTMLInputElement | null>(null);
  const selectElementRef = useRef<HTMLSelectElement | null>(null);

  useEffect(() => {
    if (!inputElementRef.current || !selectElementRef.current) {
      return;
    }

    // Set the values through the native setters...
    setInputValue.call(inputElementRef.current, "23");
    setSelectValue.call(selectElementRef.current, "3");

    // ...then dispatch the events React actually listens for:
    // "input" for text inputs, "change" for selects.
    inputElementRef.current.dispatchEvent(
      new Event("input", { bubbles: true })
    );
    selectElementRef.current.dispatchEvent(
      new Event("change", { bubbles: true })
    );
  }, []);

  return (
    <form>
      <input
        type="text"
        inputMode="numeric"
        pattern="[0-9]+"
        ref={inputElementRef}
        onChange={(e) => {
          console.log("ON <input> CHANGE: ", e.target.value);
        }}
      />
      <select
        ref={selectElementRef}
        onChange={(e) => {
          console.log("ON <select> CHANGE: ", e.target.selectedIndex);
        }}
      >
        <option value="2">Hello</option>
        <option value="3">World</option>
      </select>
    </form>
  );
};

If I were writing this with vanilla JavaScript, I certainly would not need a property descriptor at all.
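For comparison, here is a minimal vanilla sketch (the element IDs are hypothetical): assign the value and dispatch the event directly; no refs, effects or property descriptors required.

// Plain DOM: no framework value-tracking to work around.
const input = document.querySelector("#quantity");
const select = document.querySelector("#word");

input.value = "23";
select.value = "3";

// Notify listeners exactly as a user edit would.
input.dispatchEvent(new Event("input", { bubbles: true }));
select.dispatchEvent(new Event("change", { bubbles: true }));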

Yet, year after year, the JavaScript bleeding-edge merchants (as I like to call them) have overwhelmed us with what they believe is the next best thing for the web and the digital products built on it. These ideas are sold to us as the missing pieces of the web platform that will improve both DX for us and UX for users, and augment the platform's current offerings. They are full of DSLs, excessive novelty, paradigms, dependencies, layers and contexts that shift and change ever so quickly, with no way to control how those shifts impact the products (web sites/apps) we ship or the stability and longevity of said products. There seems to be no guarantee that we remain productive as the layers underneath are moved around, upgraded and re-versioned.

Even more egregious is that these ideas move us farther away from utilising the web platform standards to the fullest. Until recently, examples of these ideas were found in ReactJS, AngularJS etc. It's nice that ideas like HTMX and Remix are bringing the focus back to web standards as critical to building web sites/apps that are future-proof.

This is why I will never understand the insistence on, and the hype around, NextJS's server actions specifically. I am in utter shock at their design and, even more so, their implementation. It is also surprising that the creators don't seem to realize that the implementation in its current state is a form of common coupling that trades away many good things, chief among them reasonable indirection. You see, without indirection, you limit the set of behaviours a unit of code can exhibit and perform. I do not understand who thought it was a great idea; however, I am eager to be proved wrong. Let's be clear: I do not have an issue with the concept of server actions, only with NextJS's implementation of it. Also, NextJS server actions are not as composable as server actions in Livewire, for example.
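For readers unfamiliar with the feature, here is a minimal sketch of a NextJS (App Router) server action; the file layout and field names are illustrative. The “use server” directive compiles the exported function into an RPC-style endpoint that the form posts to, and that compile-time link between client markup and server mutation is the common coupling I am questioning above:

// app/notes/actions.ts
"use server";

export async function createNote(formData: FormData) {
  const title = formData.get("title") as string;
  console.log("creating note:", title);
  // ...persist the note here, then revalidate whichever cached
  // route displays it.
}

// app/notes/page.tsx (usage, shown as a comment):
//
//   import { createNote } from "./actions";
//
//   <form action={createNote}>
//     <input name="title" />
//     <button type="submit">Add note</button>
//   </form>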

Similarly, this reminds me of a post (formerly a tweet) where someone asked if infinite scrolling could be implemented with React Server Components (RSC) only, and Dan Abramov responded in the negative: it wasn't yet possible, as the RSC would wipe out all of the loaded grid items in the DOM each time the server component re-rendered. Ultimately, someone came up with a hacky workaround using server actions.

That workaround involved using two slightly different server-action definitions, one for the initial render and another for all subsequent renders. To me, this is neither a reliable nor a stable implementation.

Similarly, React Server Components are another idea I can't quite wrap my head around. Firstly, they pack more complexity than SSR/SSG or SPAs, with little marginal performance gain. Secondly, contrary to what is obtainable for the web platform standards in terms of backward compatibility, React Server Components (RSCs) are not compatible with what came before them, as delineated in this article. Mainly, RSCs have fragmented the waiting flow (data fetching via Server Components) and the state-update flow (via Client Components) of the general rendering problem more than before (Remix does a better job of fetching in parallel and rendering at once). This is the major source of the immense complexity of using RSCs. I would prefer it if the ReactJS core team did away completely with client-side components and the “use client”/“use server” directives. In my opinion, having only server-side components (which I will explain later) is the way to go!
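To illustrate the split being criticised, here is a hedged sketch of the RSC model; the file names and the fetchNotesFromDb helper are illustrative. The async server component owns the waiting (data-fetching) flow, while anything interactive has to be pushed across the divide into a module marked “use client”:

// NoteList.tsx (a server component; the default in the App Router)
import { LikeButton } from "./LikeButton";

// Hypothetical data-access helper.
async function fetchNotesFromDb() {
  return [{ id: "1", title: "First note" }];
}

export default async function NoteList() {
  const notes = await fetchNotesFromDb();
  return (
    <ul>
      {notes.map((note) => (
        <li key={note.id}>
          {note.title}
          {/* State updates live on the other side of the divide: */}
          <LikeButton noteId={note.id} />
        </li>
      ))}
    </ul>
  );
}

// LikeButton.tsx would start with "use client", because it holds
// interactive state; it cannot render without joining the client bundle.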

(Graph: DX vs. UX for different UI rendering paradigms)

In our quest to do right by our users, we seem to be encouraging ever-increasing complexity that yields very small gains on the UX side (see the graph above 👆🏾). Hear me out, please. The problems that RSCs solve are cogent and important. However, the current model for RSCs couples the client and server so tightly that if the client changes frequently, in what data is rendered and how it is displayed, one has to redo the backend business logic (sometimes on a per-component basis) and change the data-access logic to match. Even though the business logic in ReactJS server components is highly composable, it can easily become too specific to the UI structure of the client side it caters for, so the server components cannot evolve independently of the client components.

The Haskell community has a proposal (called IHP Server-side Components) that I feel the JavaScript (read: Node/Bun) community in general would benefit from. It relocates components exclusively to the server and uses messages sent via web sockets to invoke server actions, re-rendering the views on the server, which in turn updates the entire page via DOM morphing/DOM diffing. A server-side component consists of a state (domain model) object, a set of server actions and a render function. This is also very similar to Laravel's Livewire server-side components.
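Here is a hypothetical TypeScript sketch of that shape; the names are mine, not a real IHP or Livewire API. State lives on the server, actions mutate it, and render produces the HTML that the framework morphs into the live page over a web socket:

type CounterState = { count: number };

const Counter = {
  // The domain-model state object, held server-side per connection.
  initialState: (): CounterState => ({ count: 0 }),

  // Server actions, invoked by web-socket messages from the browser.
  actions: {
    increment: (state: CounterState): CounterState => ({
      count: state.count + 1,
    }),
    reset: (): CounterState => ({ count: 0 }),
  },

  // Re-run after every action; the resulting HTML is DOM-diffed or
  // morphed into the page, so no client-side component code exists.
  render: (state: CounterState): string => `
    <div>
      <p>Count: ${state.count}</p>
      <button data-action="increment">+1</button>
      <button data-action="reset">Reset</button>
    </div>
  `,
};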

IHP components are stateful and do re-render, just like React Server Components; but unlike React Server Components, they don't render client-side components, and they don't need to, as that would add unnecessary complexity. These server-side components are composed into views which, once rendered into a browser DOM, are kept in sync throughout the life of all components composed in that view. I feel that client-side components, when mixed with server components, add unnecessary overhead to the delivery of client-side views and markup, making things slower than necessary.

Here is a talk by Rich Harris that echoes my thoughts, especially on server actions.

Now, the current crop of web-experience tooling builders keep saying that indirection isn't necessary anymore because, according to them, separation of concerns is a misleading idea in software development and not beneficial to DX. Others, like HTMX proponents, conclude that separation of concerns is in conflict with locality of behavior (which I find perplexing 😟). The argument is that making changes that are whole and concise, and tracking down definition sites in source files to make those changes, becomes more difficult over time as we encapsulate, abstract and tuck away implementation details in different files and sections of a codebase.

The current “trade-offs” debate

I previously wrote this article, where I stated that when building software there are only two important things one should consider very closely to ensure a reliable, performant, maintainable and robust software system:

  • Constraints
  • Trade-offs

Ironically, in order to improve DX (ostensibly without sacrificing UX), much of the web software industry has decided to compromise greatly on constraints and to experiment deeply and endlessly with trade-offs. This sort of thinking has given rise to ideas like ReactJS's RSCs and NextJS's server actions; in a nutshell, it over-promotes locality of behavior. More specifically, ReactJS's RSCs and NextJS's server actions blur the lines between the client and the server, which (in my opinion) leads to more foot-guns being fired off than anything else.

Now, I have no problem with experimenting deeply and endlessly with actual trade-offs, as that rightly constitutes a huge part of our job as software engineers. My issue is with compromising on necessary constraints. There's a saying: “One should put a limit on what should be done even when there seems to be no limit on what can be done.” Sometimes we get so excited and lost in the details of technology that we forget it shouldn't be about what we as engineers feel most impressed with, but about what serves people best. We forget that the technology is for and about people. Technology is not there to soothe our need to get a gold star as the most popular web technology option with the fewest foot-guns and the greatest IDE support out there.

We don't build for mobile devices and tablets! We don't build for browsers! We build for people (software engineers and software end users alike). Somehow, we have lost touch with this guiding principle in how we create technology that helps engineers build rich web experiences for end users. It's great that we want to experiment with different ideas, but at least we should experiment within a set of constraints guided by the example that web standards laid down for us many years prior, and by what they have offered and continue to offer in terms of the longevity and timeless stability of the web platform.

  • Web standards give engineers context-compatible options and don't lock people into one technology or one way or pattern of doing things.
  • Web standards are future-proof by design and can be enhanced progressively.
  • Web standards are easily interoperable with other peripheral aspects of the platform (e.g. WebVR, WebRTC).
  • Web standards make the platform more accessible to a wider range of users.
  • Web standards make it less time-consuming to update software and lower maintenance costs.
  • Web standards introduce necessary and adequate levels of indirection.

Technologies like Remix, HTMX and Livewire have the qualities of web standards. Here's an example of a progressively enhanced web link in HTMX; the link still works even when JavaScript is disabled:

<a
  href="/${note.slug}"
  hx-get="/standalone/note/${note.slug}"
  hx-target="main"
  hx-push-url="/${note.slug}"
>${note.title}</a>

Some of these technologies (HTMX and Livewire specifically) are trying to do away with the JavaScript frontend-component paradigm, while others like Remix and NextJS want to keep it and innovate around it. Trade-offs!

Trade-offs are great when you know how to wield them and understand how they impact the overall outcome. Remix and NextJS are both trying to optimize mostly around the first render on a per-route basis, as well as pushing further on locality of behaviour for everything: routes, data flow, component rendering and server-action logic. These are all good ideas; however, sending rendered HTML via SSR plus hydration is only a workaround targeting CSR. A better approach to first-render performance and better TTI (time to interactive) is resumability.
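As one hedged illustration of resumability, here is a minimal Qwik-style counter. Qwik serializes state and event wiring into the server-rendered HTML, so no hydration pass re-runs component code on load; the click handler below is only fetched when the user actually clicks:

import { component$, useSignal } from "@builder.io/qwik";

export const Counter = component$(() => {
  const count = useSignal(0);

  // onClick$ marks a lazily loadable handler; nothing executes on
  // page load, which is what keeps time-to-interactive low.
  return (
    <button onClick$={() => count.value++}>
      Clicked {count.value} times
    </button>
  );
});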

SSR isn't bad because it renders HTML on the server side. I mean, Rails, LaravelPHP and AdonisJS all render HTML on the server side too, and they aren't bad for that. In fact, rendering HTML on the server side is a great thing! SSR is bad because it requires a JS runtime to work; it is specific to a single-threaded language called JavaScript, and because of that it is limited in the amount of work it can do per unit of time (throughput). This makes SSR much less efficient than rendering an HTML template directly using a more efficient runtime.

JavaScript was never built natively for the server side. Async/await on the server side is a poor concurrency model that never scales beyond a certain point. Goroutines, for example, scale much better.

Don't get me wrong: locality of behavior is great! It makes code easy to reason about, as the cognitive effort needed to understand what the code is doing is much lower. But extreme locality of behavior (which seems like a win for DX on the surface) is a code smell.

What is happening here is a huge movement from one extreme (unnecessary indirection) to another (excessive locality of behavior), because instead of organising our source files, logic and modules by feature (co-locating code in folders that contain closely related logic, helpers and data; see here), we still (unfortunately) organize by type (co-locating code in folders by source-file kind), as sketched below.
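Here is an illustrative (hypothetical) layout contrasting the two approaches:

# By type: one feature's logic is scattered across every folder
src/
├── components/
├── controllers/
├── helpers/
└── models/

# By feature: each folder owns its whole vertical slice
src/
├── checkout/        (UI, route handler, data access for checkout)
├── invoicing/
└── notifications/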

Locality of behavior is just another fancy name for cohesion. However, as software engineers we must remember that there are two extremes we should try to avoid:

  1. Extreme cohesion (extreme locality of behavior)
  2. Extreme decoupling

The best software operates well between these two extremes, not near either of them.

Server-side components (Livewire & IHP)

Server-side (backend) components are the middle ground between the best of multi-page applications and the best of single-page applications. They are just like JS frontend components, but better, and they roll up the best parts of Remix, NextJS, SvelteKit, Hotwire and HTMX in one.

Hotwire has a problem with partial page updates when routing to other pages on a web app, similar to the GitHub back-button bug which persists to date (this time on the Issues tab UI). This issue is also eliminated with server-side components, especially with event-based communication between one or more server-side components.

One of the people behind Remix recently ran a poll about how components should receive their data.

The final result is quite compelling, and it speaks to DX more than anything else: the ease of use and composability of components in relation to data delivery. In my honest opinion, this poll would not even exist if the server side alone were solely responsible for component rendering (no need for SSR, hydration, suspense and all that super-complex stuff), sending the right set of rendered UI with the data it needs, plus subsequent UI updates, to the client using lightweight transport mechanisms like web sockets or HTTP. Then let the client continue from where the server stopped.

HTTP can sometimes bring significant overhead as a transport mechanism for communication between client and server. By opting for web sockets, we can cut down on network latency for JavaScript-powered web sites/apps and only fall back to HTTP (a degraded experience) when JavaScript is disabled in the browser. This is a better trade-off, in my opinion.
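Here is a hedged sketch of that transport strategy; the endpoint and message shape are hypothetical. The browser keeps one web-socket channel open, sends action invocations up, and morphs server-rendered fragments into the page as they arrive. With JavaScript disabled, none of this runs and plain links/forms over HTTP take over:

// Client-side wiring for a web-socket UI-update channel.
const socket = new WebSocket("wss://example.app/components");

socket.addEventListener("message", (event) => {
  const { selector, html } = JSON.parse(event.data);
  const target = document.querySelector(selector);
  // A real framework would DOM-diff/morph; this sketch just swaps.
  if (target) target.innerHTML = html;
});

// Invoke a server action without a full HTTP round-trip.
function invokeAction(component, action, payload = {}) {
  socket.send(JSON.stringify({ component, action, payload }));
}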

Conclusion

Great developer experience is one of the key requirements for delivering high-quality software. Yet the software industry today seems content with running with a one-sided definition of user experience (defined as speed-related only) and with masking complexity instead of reducing it or removing it entirely.

Most of the so-called UX improvements coming from frontend frameworks have been mostly about initial load speeds, to game Lighthouse scores, and not about the overall quality of the application with proper trade-offs as the default. User experience is only seen through the prism of initial render performance. Developers hardly even use the apps they build as users would.

Over on the client side, there's an incessant and growing over-reliance on JavaScript by most frontend frameworks, instead of on the entire web platform. Resumability is way better than hydration, but it still relies heavily on JavaScript, just like hydration!

For instance, a NextJS app will server-render, but the page is non-interactive until hydration completes a few seconds later; if a user clicks a button in that window, nothing happens. Thankfully, Remix solves this problem by making buttons and forms work without needing JavaScript at all.
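A minimal Remix sketch of that progressive enhancement (the route and field names are illustrative): with JavaScript disabled or not yet loaded, <Form> degrades to a plain HTML form POST handled by the very same action export, so the submit button works either way.

import { Form } from "@remix-run/react";
import type { ActionFunctionArgs } from "@remix-run/node";

// Runs on the server for both the no-JS form POST and the
// JS-enhanced fetch() submission.
export async function action({ request }: ActionFunctionArgs) {
  const formData = await request.formData();
  // ...persist formData.get("title"), then redirect or return data.
  return null;
}

export default function NewNote() {
  return (
    <Form method="post">
      <input name="title" />
      <button type="submit">Save</button>
    </Form>
  );
}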

On the server side, we seem to be doubling down on a scripting language (JavaScript) that wasn't built for the intricacies of server-side computation, because of capitalism and the general start-up culture of moving fast and rapid experimentation. While this is a good thing in some respects, we must recognize its limitations and act accordingly.

It is possible to have great DX and great UX for your users if one allows the server to do what it does best: track application state and send the correct set of updates to the client concurrently. Solutions like page refreshes with the custom <turbo-stream> refresh action for Hotwire (and similarly HATEOAS for HTMX, since it uses HTTP instead of web sockets) and server-side components for Livewire are really the way to go for the future of the modern web.

MVC frameworks are still relevant for the category of web sites/apps that have limited client-side interactivity. But for the other categories, the ones with a lot of client-side interactivity, say hello to server-side components!

Honestly, I do like how well Remix, SvelteKit and NextJS are innovating with Server-Side Rendering (SSR). However, I do not believe that JS frontend components will eventually win over ideas like server-side components. I believe that as server-side components gain much more mainstream adoption, we may start to see the JS frontend-component-powered frameworks and libraries fade in popularity and use. I just can't wait for a JavaScript version of Laravel's Livewire running on NodeJS or Bun.
