The absurd complexity of server-side rendering (gist.github.com)
361 points by takiwatanga on April 19, 2022 | 382 comments


There are cases when server-side rendering (without a SPA) is easier and faster. For example: documentation sites, blog-like sites, online stores, sites like Hacker News. In all these cases, you can save development time by writing just one application instead of two (server and client), and improve performance (no need to load a multi-megabyte JS application and make multiple AJAX requests to display a page).

Of course, there are cases when an SPA-style application is better, for example: (graphic/circuit/text) editors, IDEs, mobile newsfeed-based apps - everything that resembles an app rather than a page with text, menus and images. But in these cases you usually don't need server-side rendering. And sometimes you cannot even use it - for example, if your code needs to know the browser window size to arrange objects on the page.

So I think that SSR (running JS on server) is rarely useful.


> documentation sites, blog-like sites

Sites like those can often be fully static sites.

Process the docs or the blog posts into HTML. Nothing is generated at runtime. Of course you can cheat a tiny bit, like putting the date in the footer, or cheat a lot. Life's a lot simpler if you don't cheat at all.


Sites like those (e.g. Wikipedia) tend to be driven by a CMS on the backend, with content maintained by a team of people, so it's not super reasonable for the site to be regenerated and the static HTML files re-uploaded to the server on every change. Cache the body page, put the timestamped footer on it, and serve it up. Easy peasy, been doing it since 1995.


It's trivial to generate pages statically, irrespective of the update volume. Instead of updating database rows, just save S3 blobs of HTML.
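A minimal sketch of what that looks like, with made-up field names (the upload step itself would use whatever SDK your blob store provides, e.g. S3's PutObject):

```javascript
// Hedged sketch: instead of updating database rows on edit, render the
// page once and emit a static object ready for upload. Field names are
// illustrative, not any real CMS's schema.
function toStaticObject(page) {
  return {
    key: `${page.slug}/index.html`,
    contentType: 'text/html; charset=utf-8',
    body: `<!doctype html><title>${page.title}</title><article>${page.html}</article>`,
  };
}
// An upload step would push each object to the bucket on save;
// reads then never touch the database at all.
```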

The reason this is not done is an accident of history that is no longer required but has become the norm.

"Back in the day", browser compatibility issues meant that user agent strings had to be inspected and different HTML content served based on the browser. Mobile browsers especially required vastly different content compared to desktop browsers. I don't mean two different kinds of content, but dozens.

Similarly, text encoding variations were a huge pain.

With HTML5 and modern browser standards this is a thing of the past. The same static HTML can be served to everyone. Any content that needs to vary based on client-side metrics is best implemented by JavaScript or CSS, which can be responsive to display changes like phone rotation.
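For instance, the client-side part of that adaptation can be as small as a helper like this (breakpoint and class names are invented for illustration; in practice most of this is pure CSS media queries):

```javascript
// Sketch: the same static HTML adapts on the client, no server-side
// user-agent sniffing required. Pure function so the decision is testable.
function layoutFor(widthPx) {
  return widthPx <= 600 ? 'compact' : 'wide';
}

// In the browser you would wire it to real viewport changes, roughly:
//   const apply = () => { document.body.className = layoutFor(window.innerWidth); };
//   window.addEventListener('resize', apply);
//   apply();
```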

There is literally no benefit to dynamic rendering for many sites. It nearly guarantees scalability issues at the hardest part to scale -- the database. It means that every "moving part" must always be moving, or you get a guaranteed outage. It means that any caching tier like a CDN gets you right back to the pros and cons of static rendering: either you don't cache all of the site OR you have to live with the limitations of static/stale content.

CMS products like SiteCore or Wordpress are especially bad, doing everything fully dynamically. I've seen these products struggle to serve 10 requests per second on clusters of VMs that should be able to serve 10 million.

The craziest thing I saw was a plugin that did image compression optimisation dynamically, with a fixed-size cache on a per-server basis. If you had more GB of pictures than the cache size, you'd get cache thrashing and the CPU load on all nodes would spike. Scaling out the farm wouldn't help either, because each new server would have an empty cache and hence would overload its CPUs for hours.

Simply pre-optimising everything and dumping the result in something like an S3 bucket would be orders of magnitude more efficient and scalable. We're talking up to 100 Gbps egress to the Internet, no sweat.


Have you worked on one of these products before? I worked for the Wikimedia Foundation, and it's not trivial to generate the pages statically. The content itself is wikitext, not HTML. It's derived from content, templates, and meta-templates. Quite a bit of the data comes from other sources, like Wikidata or Commons. You need to display things differently based on logged-in vs logged-out status.

Through lots of layers of caching, the fact that the backend is dynamic doesn't really make things that taxing. Serving mostly static content is a pretty well-known problem-set at this point.

Wordpress is perfectly fine, btw. If you use memcache/redis, and some form of a reverse-proxy caching layer (like cloudflare/fastly), you can very easily be #1 on hackernews for the entire day, from a single t2 instance in AWS (I say this from experience, multiple times).

Yes, if you can static-generate your site, it's probably better to do so, but it's a more difficult approach, and is also limiting in terms of managing the content.


Yes, I have worked with these products at a moderately large scale. I have also developed a wiki-like CMS from scratch myself, with templating, macros, and everything. The issues I saw would occur at any scale above "tiny". Essentially as soon as scale-out is needed for a dynamic site, caching also becomes mandatory.

Caching is not well understood at all. People think they understand it, but probably don't actually know most of the pitfalls, especially in the face of failure in the general case.

With Wikimedia, many of the issues are not super important. E.g.: transaction integrity is not relevant. The occasional 404 or eventual (in)consistency problem is also not a big deal.

Similarly, the user-based rendering is also fairly easy to handle. The vast majority of the content (the rendered wiki text) is identical-ish between users. The headers, footers, CSS, etc... do change. This can be handled on the client-side in a variety of ways, typically via JavaScript. Even with mostly static content, headings can be altered on the way out "at the edge" by a CDN or CDN-like system.

This is my point: If you're going to cache things in something like a CDN, then you're 90% of the way there to static content anyway! Take the leap and go the whole way to get the benefits. Some things need to be done 100%, otherwise it's like being almost pregnant.

Some random examples of static vs dynamic problems/comparisons:

- You mentioned templating: In the static world, you can regenerate all static pages based on the new template asynchronously, at whatever slow "batch process" rate you desire. In a dynamic system, a change to a template would typically invalidate the entire cache and blow your CPU budget instantly, dragging the synchronous rendering path down into molasses. This can be managed, but it's complex and difficult. Code changes can similarly result in either mixed/corrupt cache content or instant CPU spikes. At least with static content you can manage the rollout by content instead of by server. A trick you can pull with static content is to pre-generate the updated pages side-by-side and swap instantly. This is impossible with dynamic content generation: you either get mixed content as the caches slowly expire, or an instant load spike.

- Variations such as mobile/non-mobile pages: These add to cache pressure, and can result in sudden performance cliffs where going from 90% cache utilisation to 110% can cause dramatic spikes in load. If you use static content, your "utilisation" is known in advance and changes slowly. Everything is served at the same speed, always. In fact, with systems like S3, your speed goes up as your data volume increases because of the way the sharding works.

- Memcache and the like are hilarious to me. These days the "standard" is to have layers upon layers of caching to paper over the fundamental bottleneck of the database tier. SiteCore has so many layers of caching that I lost count. Is it ten? Eleven maybe? Whatever. The point is that keeping that straight in your head as a web developer is so difficult that SiteCore keeps all caching off by default, murdering performance for most sites most of the time. It's just too "difficult" to have it on by default because devs would lose their minds. Just the access control issues of providing devs with access to shared Redis clusters or CDNs for purge operations is a task all by itself. In large enterprise, this is almost never done and then it becomes a tradeoff between cache TTLs and freshness/consistency. I've lost count of the number of times I've heard some dev tell users to "clear their browser cache" as the "fix".

- Cache purging: you can hide a fundamental performance problem under a layer of caching for years, have it grow to monumental proportions, and then blow up your production site for days while everything slowly recovers from 1,000% load. This has caused several large-scale outages that have hit headlines.

Look at it this way: Wikipedia is something like 99% read-only access and 1% write access. With a static hosting model, the VMs would only need to be scaled to handle the write throughput, not the read throughput. The content could be put on cloud storage like S3 and that's it. The whole site could be hosted off a handful of small VMs kept around just for HA/DR!


Many good points, but doing static sites the way you describe is an entire field in itself, and not very well understood either; you must also expect some resistance from people who just want to carry on doing whatever they're doing.

What you write about caching rings especially true. Static sites indeed are a very interesting solution to these problems.


The reason this approach lost to dynamic rendering is the absence of any sort of logic that would rebuild _only_ updated pages.

The job of many, many web sites is to make it easy for people to add content to them. When adding something takes many minutes, the web site is doing its job poorly.

Tools exist today that are fast and that even offer the logic I refer to above, but the ship has sailed. We largely live in a world of cached pages that are changed via a complicated database-driven CMS.


You don't need to regenerate the entire site. You only need to regenerate the pages that have changed and maybe some indexes.

Considering that SSR has to generate the pages for each view - unless they're cached, which sort of reduces to option 1 anyway - site (i.e. page) regeneration is hugely more efficient.


Heavily cached server-side rendering and statically generated sites kinda converge at some point to the same thing


Yes, but the latter can do the same thing without the complexity of the former.


App server + Cache server == Static deployment pipeline + Simple file hosting. I don't think the latter is necessarily simpler.


A purely static site is more complicated outside of the simplest use cases.


I had a website which I was rendering statically using Jekyll. Over time it grew to millions of pages; rendering was getting too time-consuming and was taking up far too much space.

I then moved on to PHP. Worked like a charm on a $15/mo server on Google Cloud. Excellent search performance as well.


Millions of pages? What was the site?


It’s not so unusual. My employer has a product line of ~40k products, in ~45 markets, with ~2 languages per market, which gets you to 3.6m product pages immediately, and that’s leaving aside any other pages.


That's fair, but I bet you're not using a static site generator to produce your PDPs. (Or, if so, I'd love to read an engineering blog post about it! Static PDPs would be a solid optimization in a lot of ways, and while I'm no longer in a role where that could be directly useful, it'd still be interesting to find out what challenges a team overcame and how in the course of getting there.)


Most content on our PDPs is statically pre-rendered, yes. Some stuff we obviously can't pre-render, of course (e.g. some of the recommendations that are based on your personal behaviour), and those tend to be mini-SPAs. We save ourselves from having to re-render everything if, say, a menu item changes, by compositing page fragments at the CDN level and caching the result for a shorter amount of time.

This setup is not particularly unusual for a microfrontends environment I don’t believe.


We did much the same, as I recall. (I didn't spend very much time close to the frontend in that role, so I may misremember, but that description has a familiar ring.)

It's probably about as far as the concept can reasonably be taken; in theory I could see something more like "true" SSR with some probably complex hydration, but the engineering time would likely make it uneconomic to pursue what would likely be marginal benefit.


It was basically a clone of https://www.gutenberg.org/.


Yeah, I don't think a site with "millions of pages" is the right fit for SSGs.


I would point out that this is not a knock on SSGs themselves but on their current build systems.

You could imagine a distributed build system that would build it really fast, at a competitive price, on any CI/lambda/batch system.

The fact that we barely hear anyone talk about this is the strange twist of history the parent mentioned.


But at that point, what exactly are you gaining?


Lower cost, better use of hardware, better scalability, easier debugging, more robustness to failure, etc etc


It's unclear that any of those are true if you need to maintain a distributed build system.


You don't have to. The tooling can be built to do it. I'd recommend the "Build Systems à la Carte" paper for a view of what already exists.

You don't maintain Jekyll today...


Plus, no way for outside actors to trigger the rendering is great for security.


I've tried this twice and it didn't work out for me.

If you want even just some features like categories and tags, you end up with a tool that has some weird metadata language you put over the actual text, the markup is obviously yet another slightly different kind of markdown, and then you learn the umpteenth way of linking to internal pages. Then you run the generator, triple-check everything is not messed up because of a misplaced space in the metadata, and then upload everything to your server.

Compare this to a simple wysiwyg editor in the browser where you hit save. I've made my choice.


I don't think I understand your point. Weird markup vs WYSIWYG authoring is orthogonal to static vs dynamic serving.


Static serving usually means you don't have anything running on the server but an actual web server, and you upload the static files to its webroot. So it's not technically possible to have any fancy editor in your browser. You have all the brains in some tool you run on your machine that processes your markup files and spits out html files.

Sure, you could throw PHP or whatnot on your server, have a nice editor, and whenever you edit or add content it statically generates all the HTML files. That would probably technically still be called a static site generator, but then I don't see why you'd want to do that instead of just going all the way: use normal blogging software and put a cache in front.
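For what it's worth, the "brains" of such a tool can be tiny. A toy sketch of the build-time render step (field names and markup made up):

```javascript
// Minimal static site generator core: turn a source record into a full
// HTML page at build time; the web server then only ever serves files.
function renderPost(post) {
  const tags = post.tags.map((t) => `<a href="/tag/${t}/">${t}</a>`).join(', ');
  return [
    '<!doctype html>',
    `<title>${post.title}</title>`,
    `<article>${post.html}</article>`,
    `<footer>Tags: ${tags}</footer>`,
  ].join('\n');
}
// A build loop would call renderPost for each source file and write
// the output into the webroot.
```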


There’s a whole category of headless content management systems, either API or git based, to give a friendly editorial experience (also for non technical editors) while maintaining the advantages of static sites.

Here’s a good starting point (not my post): https://pagepro.co/blog/jamstack-headless-cms-which-one-to-u...

Disclaimer: I work for a vendor in this space


> So I think that SSR (running JS on server) is rarely useful.

I would disagree for a couple of reasons. The team I work with at a large SV company built a very complex SSR framework for our mostly static app for 2 reasons:

- Isomorphic codebase, you can share the majority of your code with the client/server since it's Node on the backend and ES2015 on the frontend

- SEO. Google penalizes your site for slowness and more often than not it can't even render some pages the way you'd expect. Attempting to "fix" that through client-side hacks is a fool's errand and you're likely to get penalized.


> mostly static

> SSR is justified because isomorphism

> SSR is justified because SEO

I think the fundamental problem is that you have a mostly static site and yet chose to write it like a non-static SPA.

SSR now bridges the downsides of this mismatch.

But the point is that SSR is a solution to a problem introduced by some sub-optimal decision-making.

If you had a use case that necessitated a SPA, then it is understandable.

If you just built a static site, then you don’t need isomorphism and your SEO would also be straightforward.


Do you realize that neither point justifies SSR? You only need those because you introduced the SPA in the first place.

Anyway, there are webapps that fall between the highly interactive webapp examples of the GP, and static sites. With just enough interactivity that static + sprinkled JS becomes unmanageable. In that area I would really like to see a comparison between SSR (which everybody seems to jump on) and alternatives like... knockout.js? I don't know what's a modern alternative.


Modern alternative would be for example AlpineJS+HTMX, or Turbo/Hotwire.


Sorry, I’m not seeing how this overengineered solution adds any benefit…?

If it’s a mostly static site, just use Django or Rails and be done with it. SEO becomes a non-issue and your code stays in a single (server-side) language.


But you have Turbo/Hotwire in Rails so best of both worlds, no?


"We built a very complex SSR framework so we can share code between client/server". Um, how many development hours did you save here?


Honestly that's difficult to quantify, if not impossible.

The beauty of what we've done is that other microservices internal to our corp are also leveraging said SSR code and can bootstrap their own backend very quickly. We've been following the BFF pattern (Backend for frontend).


I know big SV companies have money to burn, but surely there was some project management in place to quantify the cost of building such a complex, in-house solution?


> SEO. Google penalizes your site for slowness and more often than not it can't even render some pages the way you'd expect. Attempting to "fix" that through client-side hacks is a fool's errand and you're likely to get penalized.

This is the only realistic reason I have seen to be honest. Isomorphism is a tool that you use to achieve a statically rendered site that you can give to Google, that will then become highly dynamic in the hands of the user.

But if you've got a mostly static site, what's the point of all the complexity? Why not just make a static site a more simple way, rather than using SPA tooling. Either you have highly interactive site or you don't, and if you do have an SPA style site, and need to appease Google, you might need SSR. Otherwise, use the right tool for the job and cut out the complexity is my opinion on the matter.


>SEO. Google penalizes your site for slowness and more often than not it can't even render some pages the way you'd expect. Attempting to "fix" that through client-side hacks is a fool's errand and you're likely to get penalized.

SEO aside (that is another discussion), Google "requiring" your site to be fast is a net positive for consumers/us!


It has been of benefit I will admit, but I think for SSR in particular it can make things worse since you wait until a user interacts to spin up the application so to speak. But like anything, you can do this well with enough care.


> Isomorphic codebase, you can share the majority of your code with the client/server since it's Node on the backend and ES2015 on the frontend

I've been working on this for 20 years, the last 8-9 only with Node and React, and not a single time - other than sharing some validation rules and/or for SSR - have I gotten anything useful out of the possibility of running the same code in the browser and on the server... totally different responsibilities, libraries, requirements and environments.


Does SSR need to be so complex for SEO? Can't the backend basically throw its hands up and render a trivial (e.g. "reader mode" text view) version as soon as a browser isn't detected properly, or is determined to be a known bot/crawler? E.g. you skip all interactivity, layout etc. I assume Google would have to penalize sites that render something completely different to their bot, or else bait-and-switch would be rampant?


> I assume Google would have to penalize sites that render something completely different to their bot, or else bait and switch would be rampant?

That's correct. They call that "cloaking" when you render different content for a crawler than for a normal user. Google is rewarding sites for making the UX better for the end user.

> Can't the backend basically throw its hands up and render a trivial (e.g. "reader mode" text view) version

We're actually experimenting with this approach in a couple of different ways. One approach we're testing now is using AWS Lambda to render a fairly static version of the page using puppeteer. So far the results are promising as the page is true to what the end user would see (except for client-side logic / AJAX requests) and it renders much quicker than the standard experience with all the analytics/JS libraries we use.


This is what Discourse does.


It's sad that the primary reason for making the site perform well is SEO and not the improved user experience.


Uber? :)


> sometimes you cannot even use [SSR] - for example, if your code needs to know browser windows size to arrange objects on the page

It's been a very long time since I ran into a situation like that which wasn't best solved with CSS. Describe what you want to the browser and let it handle it for you.


Back in the day, every page of an online store was rendered server-side. One day, someone needed certain elements to call database views and rewrite the DOM inside a little area when a user clicked a button. So smart people tried to create frameworks like React that would automate the DOM update and bind the DOM more closely to the data state on the server.

After many years, front end programmers forgot how to build their own hooks and came to rely on automated views. Then they began to forget how to parse models and build controllers. Why not shuffle all that off to the back end anyway?

One nice thing about rolling your own front and back ends is that you can determine when or if each individual subcomponent needs to update its visual state. Often this is very useful in managing the actual server load, but it requires understanding who needs this data immediately and who can load it lazily long after a page loads, or the other data updates server-side, not to mention whether it also has to become readable across a distributed or federated data structure.

So after all the modern conveniences, server side rendering impoverishes both front and back end coders in the name of curing a problem with data transfer and efficiency that is far less of a problem than that of overly tight bindings between front end states and back end data storage.

Anyone can write a tightly bound backend model/view/control.. it's a fun project. It's a horrible architecture.


> Back in the day, every page of an online store was rendered server-side. One day, someone needed certain elements to call database views and rewrite the dom inside a little area when a user clicked a button. So smart people tried to create frameworks like React

I think you skipped a few years there! AJAX, Mootools, prototype, JQuery, etc.


Totally ;) Between the first and second sentence. I'm still writing a lot of software in that paradigm: basic AJAX with nice TypeScript, jQuery (still super useful for having its own event loop), Underscore.js, a bit of PixiJS to replace the Flash I lost when I need something in canvas. Here's what I won't do:

* Move any logic into HTML files

* Use any nonstandard tags or inline any code to hook frameworks

* Use any frameworks that modify the DOM on the fly in any way I'm not 100% in control of.

[edit] Here's what I do: write all data types front/back in NodeJS now, or maintain a 1:1 datatype where PHP is involved that's a perfect mirror of the database table structure; strongly type everything coming in from DB calls before it hits the front end, including types that need to be dehydrated; make outbound objects rewritable to the DB and cached for writing until the time is right, depending on load; maintain high readability without forcing the backend to choke on every single long poller's data update.

But hey, take 20 years worth of kids and put them through the same academies and job interviews, eventually they'll be great at answering questions about React and suck at writing useful code.


I have a marketing site and a dashboard SaaS app. For the marketing site I decided to keep it in React for consistency, but I just kept the server rendering and removed the JS tags from the output. Now it's a fully server-rendered site with no client-side JavaScript, and it's super fast cause it's cached on the CDN.

I feel like not many people realise this is an option but it can work really nicely.


LoL. The truth is most sites don't need tons of JS, but we're in the sad state where normal websites begin with a heavy JS framework, junky cookies, and annoying notifications as a default. Then add more and more junk...


In my experience (having implemented React SSR a few times for different companies) it's usually applied halfway between the two, on simpler form-driven websites - process-driven screens, marketplaces, the kind of places where you want either the performance improvement of a prerender or the SEO friendliness.


Worth pointing out that SSR scales better for huge surface area applications on very low resource endpoints like Citrix etc. Consider LOB enterprise applications. I’ve worked on ones with 700+ logical endpoints. A few chrome or edge tabs open on a few meg of JS and the client will bite the dust hard.


Your axis here seems to be something like "unique content/personalization per client." From every client sees the same documentation content to every page view/action of an IDE is unique.


But it is trendy and fashion, so we still do it

/s


VS Code is effectively a SSR IDE. Electron and node are the 'server' side of it and are necessary for interacting with the filesystem and all the native things browsers can't access (running processes, terminals, etc.). It has a somewhat complex RPC model (inherent to all Electron apps in general) where JS code on the browser frontend side has to make requests with JS code on the node side and vice-versa, just like most Next.js style SSR apps. It's a very similar ball of complexity for better or worse as mentioned in this gist/blog post, and in general very necessary as browsers are only taking baby steps to give full access to file systems, processes, hardware, etc.


I've been saying it for years - the hard part is not "server side" vs "client side", it's making sure the state stays consistent between those 2 buckets.

If you want to remove the hell from your life, you need to be all in on one or the other.

For us, we've been keeping all state server-side using things like Blazor and hand-rolled js-over-websocket UI frameworks. We never have to worry about more than ~10k users, so this works out great for us.

If we were webscale, we'd have to consider client side options.


Even in smaller applications this is incredibly relevant.

I've recently written something that's meant to interface over a local area network, just controlling something on another computer - literally just one or two users at a time. Thing is, the precision of floating points is a big deal here, and we have to account for that. So all of the bignum stuff had to stay server-side in C++, after much messing around doing math client-side in JavaScript only to learn the hard way how completely broken that is. Fun fact: you can't maintain state when floats start to drift off the values they were supposed to land on, due to JavaScript's crappy math libs.


Javascript can get messy even if you just need integers, since even naively used integers are stored as doubles. This means that sending a random 64-bit identifier from a server to a client and just trying to read it as an int can give the wrong value if you aren't careful.
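A concrete illustration of that rounding (the id value here is just 2^53 + 1, the first integer a double can't represent):

```javascript
// A 64-bit id arriving as a string, e.g. from a JSON API response.
const raw = '9007199254740993'; // 2^53 + 1, above Number.MAX_SAFE_INTEGER

const asNumber = Number(raw); // silently rounds to the nearest double
const asBigInt = BigInt(raw); // preserves every bit

// asNumber is 9007199254740992 - off by one, with no error raised.
// asBigInt is 9007199254740993n - exact.
```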


Recently I've been dabbling in developing online multiplayer games.

After some agony trying to make the chaos go away (rewinding time, merging conflicting physics states...), I realized I could actually harness it by re-framing the product from something serious into something lighthearted (eg. making goofy physics the whole point of the game).

At worst, it lowers the expectations (eg. relative to something serious/competitive where jankiness is unacceptable), and at best may actually produce laughter. I hope.


I mean as long as you can keep it from crashing completely? Sounds fun


I'm curious, what problems did you have with JavaScript math? I had the impression it was bog standard IEEE 754 same as every other language nowadays, but I have never tried doing any serious calculation in that language.


Did you try to use javascript's BigInt data type?


I clearly stated we were working with floating points here, ones originally provided by the C++ application.

There was a thought to just make everything into big integers, but for this particular application it really was just the realization that all of this stuff was better off server-side anyway, for state-maintenance reasons.


I've never worked on a project that could be "all in" on client side state. To be all in there means your application doesn't need the network at all except as a method to distribute your application. One step away from that would be something that is completely client authoritative - I've also never worked on a project like that.

Note that even if your code has zero Javascript, you can still have state mixed between client and server: instead of living in Javascript, it lives in the html, oftentimes as values in form elements. Even though the html is generated on the server, it still lives in the client, ready for the client to do things with it - such as submit a form with information that is 8 hours out of date.

There are obviously ways around this, but it's still something one should keep in mind.
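One of those ways, sketched with made-up names: embed a version token in the rendered form, and have the server reject submissions made against state it has since moved past:

```javascript
// Hedged sketch of optimistic concurrency for server-rendered forms.
// The page embeds record.version in a hidden input; on submit, a
// mismatch means the form was rendered against stale data.
function applyUpdate(record, submitted) {
  if (Number(submitted.version) !== record.version) {
    // e.g. the form sat open for 8 hours while someone else edited
    return { ok: false, reason: 'stale' };
  }
  return {
    ok: true,
    record: { ...record, title: submitted.title, version: record.version + 1 },
  };
}
```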

Live update via websockets can kinda get around this. As long as there are no bugs and all state is properly transmitted and processed in a deterministic order. But it's still effectively a synchronization process and stuff can go wrong.


You're technically correct (the best kind!) but in practice one can largely ignore server state in some applications.

I've got a side project where the data is stored server side. In practice, that server side data might as well be a saved file on the client's hard drive.

Is it "all in" on client state? Not really. Do I ever need to worry much about synchronization? Also no. The synchronization is just "send it to the server when it changes, put an error bar up if that fails". Pretty simple.

It's a bit network heavy, but in this case there are other, heavier parts of the app. There are options if data size ever does become an issue - for now it's not.

Incidentally, I feel like this is where something like MongoDB really is a decent choice. The client would probably just preload the data anyway if it were normalized and in Postgres somewhere. API serves out JSON. Why not just... load the JSON out of the DB?


That's pretty much how I build all my websites - not with MongoDB, but another reusable generic CRUD backend. It's not much more network-heavy than a static or server-side rendered site, and the server doesn't need to keep any state other than the data itself. Any custom state that the client wants to store on the server is just saved as additional server data.


The correct answer to this is that state lives in the url, which is shared between client/server. This is a simplistic solution for many applications, but still achievable on some level.
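For simple cases this amounts to (de)serializing view state through the query string. The helpers below are an illustrative sketch (the names are made up), using the standard URLSearchParams API.

```javascript
// Sketch: the URL as the single source of truth for view state.
function stateToQuery(state) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(state)) {
    if (value !== undefined && value !== null) params.set(key, String(value));
  }
  return params.toString();
}

function queryToState(query) {
  // Note: everything comes back as a string; parse numbers etc. as needed.
  return Object.fromEntries(new URLSearchParams(query));
}
// In the browser you'd pair this with
//   history.replaceState(null, '', '?' + stateToQuery(state));
// and the server parses the same query string to render the same view.
```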


The right model for attacking this problem is multiplayer video games. That is the technical space where the questions of synchronizing a shared concept of what the universe should look like among multiple distinct nodes separated in space and time with real-time latencies has been thoroughly investigated.

It is also some of the more complicated and intricate code that you will find in the wild, so best of luck.


Blazor Server and other HTML over the wire frameworks (Livewire, Hotwire, HTMX, etc.) are the new sweet spot for a lot of LoB web applications I think.


The amount of leverage you can get out of these paths is pretty incredible. The fundamental idea is really simple too if you boil it down to the absolute minimum essence...

The following 6 lines of javascript is all it takes to connect a hot pipe from server to client and dynamically invoke whatever is required.

  function StartApplication() {
    document.ApplicationSocket = new WebSocket(...);
    document.ApplicationSocket.onmessage = function (e) {
      return new Function('"use strict";\n' + e.data)();
    }
  }
Anything that needs to talk back to the server would just send a message on the same socket.

This is pretty similar to how Blazor Server works, except we are able to bootstrap our client off <1kb of javascript+html source.
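The server half of that pipe is equally small: instead of data, it sends tiny JavaScript programs. A hedged sketch, with renderUpdate as an invented helper; JSON.stringify conveniently doubles as a JS string-literal escaper.

```javascript
// Hypothetical server-side helper: turn a DOM patch into the code the
// 6-line client above will execute on arrival.
function renderUpdate(id, html) {
  // JSON.stringify produces a valid, escaped JS string literal.
  return `document.getElementById(${JSON.stringify(id)}).innerHTML = ${JSON.stringify(html)};`;
}
// Over the socket it's just: socket.send(renderUpdate('cart', newCartHtml));
```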


I can't believe it took people 12 years since the publication of the WebSocket standard to figure this stuff out. Is everyone just running in circles?


We were already doing this kind of workflow with WebForms and Java Server Faces alongside DHTML, but naturally we weren't cool, so the kiddies took 12 years to re-discover the wheel.


The productivity boost I've gotten with Phoenix LiveView vs a typical front/backend split is simply unreal.

Seriously, if you haven't tried it, you're missing out.


> I've been saying it for years - the hard part is not "server side" vs "client side", it's making sure the state stays consistent between those 2 buckets. If you want to remove the hell from your life, you need to be all in on one or the other.

A fully rendered web page is the server state displayed on the client. Unless you remove all dynamic in-page interaction, you are ALWAYS at risk of de-synced state between client and server. There is no single-sided state, as long as someone sees your server state.


Recently I started to toy with a PHP framework that works like Blazor and has all the state on the server. It also has the virtual DOM on the server. This works great even without sockets or polling. All requests from the client to the server will have a virtual DOM diff as result.

Having the state only on the server is great.


I've been using server-side rendering for many years now and I find it to be almost zero overhead at this point. Yes, I had to do some initial work to set things up, but these days I don't even remember it's there.

Most people make an assumption that in order to have SSR you need to run JavaScript (e.g. Node most of the time) on your backend. That is not a valid assumption.

I think the title of the article could benefit from adding "using Node", because much of the complexity described is something I do not encounter at all. I'm using Clojure and ClojureScript with Rum as a React interface. Much (if not most) of my code is in cljc files, which get compiled with Clojure (for JVM on the backend) or ClojureScript (for JavaScript on the frontend). And the ClojureScript advanced compilation takes care of only including what is strictly necessary in the frontend compiled JavaScript.

I mean, sure, you need to be careful not to do stupid things like trying to read from files, but otherwise I just don't encounter much friction. And I reap the benefits of using a single language for both backend and frontend, with most of my data model (e.g. business logic) code written only once, with SSR being a nice cherry on top.


This is a really nice combo, I agree. But… I think it’s safe to say it’ll probably remain niche forever.

I wish there were more stacks like Clojure/Script.

While I’m wishing, I’d love to have a modern ML with simple tooling, a good stdlib like Go (with a high performance http server baked in), and a good full stack story that doesn’t produce a 1MB “hello world”, and doesn’t require 1K dependencies. Does anyone know what the closest thing to this is these days?


It's still a long way from being complete, but I'm working on something like that[0]. Eventual plans are to have good Rust library interop (e.g. bindings to hyper for http) while also being able to compile to Wasm (to run on an erlang-style distributed runtime / the browser). The language is currently interpreted, but once I get typechecking working, I should be able to merge in the Wasm codegen backend I'm working on (with eventual plans for LLVM). The current compiler has zero external dependencies.

Language itself could be described as a mix of OCaml, Scheme, and Lua. Currently working on the hygienic procedural macro system and system injection through algebraic effects.

[0]: https://github.com/vrtbl/passerine


> This is a really nice combo, I agree. But… I think it’s safe to say it’ll probably remain niche forever.

"I don't like fruit — they are bitter and taste bad"

"Here's a kiwi: it's sweet and tastes great!"

"I think it's safe to say it'll probably remain niche forever. I wish I could get sweet fruit!"

I don't know about others, but I don't get paid for non-nicheness of tools I use (nor, incidentally, for the small size of my "hello world"). I get paid by users who use the software that I develop, and I choose tools that allow me to effectively deliver good quality software. So I guess my decision process might be slightly different — but overall I have no reason to complain, because I'm doing great so far :-)


> While I’m wishing, I’d love to have a modern ML with simple tooling, a good stdlib like Go (with a high performance http server baked in), and a good full stack story that doesn’t produce a 1MB “hello world”, and doesn’t require 1K dependencies.

I don't have anything close, but I would like the exact same thing. Especially since OCaml and F# both have JS compilers, you could easily have a fullstack app written with this language. I think your best bet might be either OCaml or F# these days if the ML part is the most important one, or Go if the rest is the most important part.


For the next year(s), take a look at Roc (https://roc-lang.org). It is being developed by some folks from the Elm community, but it's focused on backend/CLI.


Rum is amazing. I also use it for my latest Clojure/ClojureScript projects since you can move quite effortlessly from HTML templates to a hydrated web app should the need arise.


I make a difference between website and webapp.

a website is a collection of webpages, where the main goal is to display some mostly static information (i.e. an article)

a webapp has the main goal of managing and reacting to a complex, non trivial user state.

website, go server side all in, do not add the tax of client side rendering / hydration to the frontend. if necessary embed small enclosed client side webapps on the frontend.

webapp, go client side all in, screw server side rendering.

yes, you can marry website and webapp via server side rendering / hydration. but the cost of making this really, really fast, plus the spiralling complexity and edge cases, does not make it worth it.

choose the right tool for the job, and just because it has web** in its name, does not make it automatically a job for react/angularjs/newest-coolest-framework


In principle I agree with what you say, but I'd like to make a more fine grained distinction.

If the website is just static content, go for static-site generation.

If it's a webapp, where you have some interactions but there's only a few of these and the site is not that complex, then go for server-side rendering.

If it's a web app with complex interactions, go for client-side rendering.


What's Amazon in this example? Hacker News? Google Search? All of these are problems that are still perfectly well suited to server-side HTML generation - the dominant user interaction replaces all / a significant part of the page, with relatively minimal non-database state preserved between interactions (a cart for amazon, your user name, your search term). All of that can be built perfectly simply by a traditional request-response cycle with some Javascript enhancement.

Going SPA for most "webapps" (if app is defined as something that isn't purely static) is a massive waste of time. Use Javascript on the few items that require more interactivity, but if most of the time major interactions are replacing > 50% of the UI, a SPA isn't buying you anything.


A great distinction and way to think about your work.

However, I would prefer server side rendering as the best thing to do in all cases, because, while user state may be complex in webapp, it can be easily handled securely in a server side environment.

More so, until the ecosystem around webassembly becomes more mature, the choice of languages and frameworks on the server side are far more than what is available on the client side.

Thinking and architecting apps that render on the server is much easier than client apps.

Also, client apps are slow and laggy. It takes a lot of effort to make them work well, effort that could be used to improve bottomline otherwise.


> I make a difference between website and webapp.

Everyone says this, which doesn't solve anything. 99.9% of what we build is just something in between. It is not as easy as you make it look.


Especially the web sites that get built by developers are almost by definition "something in between" or more. It's the reason they need developers in the first place. As soon as it needs anything bespoke and is updated by non-technical people, it is a web app anyway. If not, they could use a pre-built no-code solution.


There is a lot of value in rendering the shell of an app on the server and populating the rest on the client. The outer layout of a given view is mostly static. Doing this correctly gives you stable loading with no reshuffling as the rest loads.

And some views in apps can have page-y aspects like reports that might as well be static. Though this is an optimisation, and can be deferred till later.


https://htmx.org/

Having spent 36+ years in the computer industry, I consider the advent of htmx to be the first thing in web development to attempt to pull the industry's head out of its ass.

Don't forget to include a solid remake of css in your project like tailwindcss. It also makes code much more readable.


> Read the docs introduction for a more in-depth... introduction.

Heh. This reminded me of http://itre.cis.upenn.edu/~myl/languagelog/archives/000012.h... .

"PERSONNEL WHO ARE NOT AUTHORIZED TO BE IN THE HANGAR ARE NOT AUTHORIZED TO BE IN THE HANGAR"


sign seen in Seattle on a housing project in the 90s:

  ALL ILLEGAL ACTIVITIES ARE PROHIBITED


That one's not redundant; it says that illegal activities (in addition to violating the law) also violate the rules of the housing project. That kind of thing is bog-standard in all kinds of contracts, because including a clause like that makes your illegal activity grounds for terminating the contract. (In this case, the prohibition on illegal activities presumably allows the project to evict tenants who are convicted of crimes.)


I find writing isolated unit tests tough when using HTML-over-the-wire frameworks like HTMX and Hotwire. The options are to mock the server and test the page, which is not exactly a unit test.

So I am curious is there an easy way to write unit tests for htmx bits of the page?


Same. Testing is an issue. I bought into the Hotwire idea on a recent project. But I can't see how it's any different than SSR + jQuery. They have some nice ideas, but it requires every team member to know the whole thing top to bottom. Backend engineers can't always be burdened with taking care of HTML/templating/writing JS-specific functions. Feels almost wrong to me how hard it has been sold.


I have never really understood the purpose of tailwindcss. It's main benefit seems to be that it exposes CSS rules using classes. Then I have to add a mountain of classes to my html. Am I missing something?


It's for people who hate the cascading part of CSS, people who never really took the time to learn CSS, and people who got burned by an unmaintainable 5k+ line CSS file so hard that they want to jump to the other extreme.



How well does it work for complex projects/component libraries? React is a pain in the ass but there's at least some established patterns for large projects.


You will most likely be using a templating language to re-render the view with new state.

If you're lucky your chosen language enables you to do that inlined in your function along with your normal workflow. Otherwise you now have template files, templating logic and domain logic across several files and you're right back where you started (accidental complexity), minus the JS.


When fetching from the backend, the docs recommend returning snippets of html instead of JSON. Is it technically valid to return a snippet of HTML with a content type of “text/html”?

There’s no standards saying you need !DOCTYPE or a head or body?


Regardless of whether it is standard or not I've definitely worked on sites that worked this way back in the day. Smarty+PHP would generate the beef of the page, then certain components of the page would be pulled in using Ajax requests as the user did things. The browser would not care.


I still write sites like this. I realize this is not "the way" to do things today by many, but I find it works quite well and it avoids a lot of complexity. I think the "classic" web-apps were given a bit of an undeserved bad rep because a lot were written in bad PHP and bad JavaScript, and people conflated that with "this entire approach is wrong".


I guess it's not valid, although the impact might not be that big if it's only you calling your own endpoints. If you want to be really correct you could use your own content type under the vendor tree: https://www.rfc-editor.org/rfc/rfc6838#section-3.2.

E.g.

Content-type: application/vnd.yourorg.html-snippet


Some discussion here https://stackoverflow.com/questions/19303361/content-type-fo...

Basically that nowadays it’s fine to use text/html for html fragments, because the content is still html, even if it’s not a full document.
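The pattern under discussion, hand-rolled without HTMX, is roughly this. A hedged sketch with placeholder names; the Accept header is optional but makes the intent explicit.

```javascript
// Fetch a text/html fragment and swap it into a target element;
// this is the core of what HTMX attributes declare for you.
async function loadFragment(url, target) {
  const res = await fetch(url, { headers: { Accept: 'text/html' } });
  if (!res.ok) throw new Error(`fragment request failed: ${res.status}`);
  target.innerHTML = await res.text();
}
// Usage: loadFragment('/cart/summary', document.getElementById('cart'));
```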


I'm not familiar with htmx, but the approach I use in Rails with turbo is to return a full page, and then use a subset of that page for what I replace client side. This has a nice property that you can get it working as progressive enhancement, where the site still functions perfectly well without Javascript.


Don't forget Unpoly! A bit more opinionated and "batteries included" than HTMX, still same ideology. I use it every day and I love it.


Looks like yet another opinionated web library/framework.


What's the difference between this and AlpineJS?


They solve different problems.

Alpine is about defining in-browser behaviours in a nice, simple, declarative way.

HTMX is about defining data loading and brower-server interactions in a nice, simple, declarative way.

The author of HTMX has often suggested using HTMX and Alpine together for this very reason; they work very well together.


I think it's just an alternative https://www.libhunt.com/bigskysoftware/htmx


I use Unpoly (equivalent to HTMX for your question) + Alpine and they complement each other very well.

Unpoly handles everything that requires what in SPA world would be an "API call", the difference is that the response I get is just the HTML with the data already rendered and which is dynamically updated in the DOM. Also Modals and page transitions Turbolinks style are handled by Unpoly.

Alpine handles everything that doesn't require a server API call, such as form wizards steps, dropdowns, tables filtering, sidebars, buttons loading/disabled states, etc.

The end result, the application "feels" like Github.


why do you think that, after 36 years, it's htmx that is changing things?


> After a fashion, it was decided that sometimes our HTML is best rendered by JavaScript, running in a user's browser. While some would decry this new-found intimacy, the age of interactivity had begun.

I didn't get the memo when this was decided, and I'm still unconvinced.

Let me see if I get this straight:

The main reason for client-side rendering is that some web applications don't want to do a full page reload every time that there's a non-trivial state change (usually triggered by user interaction). When the inherent complexity of the application means that there are state changes pretty often, doing a page reload so often would just kill the user experience, and using jQuery or similar libraries would lead to a spaghetti.

Is that the reason for client-side rendering being the "default" in the realm of modern javascript?

If that's the actual reason, then that seems reasonable, but I think that Javascript culture (frameworks, tutorials, articles) usually erroneously assumes that every application is a complex one with tons of state changes, thus assuming that client-side rendering is the only right way to develop an application (without counting "isomorphic" SSR which is just another way to create a spaghetti).

Sure, if I don't like it I can just move over to Django, Laravel, etc. but it feels like I never get an actual explanation of why client-side rendering is the default in these js frameworks, it's just accepted as a fact of life.


I can’t speak to the decisions on the parts of whoever is building the frameworks. But I will say the dev experience working on a frontend app using React, Vue, etc. is substantially greater than using something like Django templates. And I say that as someone who has written a great deal of both.

I really wish I could be as productive in Django templates as I am with React, but alas. Once you go to component based UI development it is hard to go back. Especially when you get used to the high quality UI libraries.


100%. Also, every web app I’ve worked on has accreted UI interactivity with time, and I often end up crossing the threshold where I wish I had started with a decent client-side framework.


JSF and WebForms were already component based UIs 20 years ago, with page designers at development time, and 3rd party component vendors!


Honestly I wouldn't be surprised if web forms could also provide a faster and more user friendly experience even with all the page loads than some SPAs.


It can indeed, and now there is Blazor as its replacement.



But then maybe Django templates are not so great? You can also have UI components in the backend. Interactivity can be a little tricky though.


IMHO, this isn’t about “Modern JavaScript” or “JavaScript Culture” at all. It’s just $CURRENT_THING and the Cargo Cult is happy to oblige.

Furthermore, dev shops (like my employer) profit from selling overcomplicated solutions.

So in the end it’s just group dynamics that drive the usual hype cycle. Remember MVVM?


MVVM is pretty much alive in native frameworks, as it makes easy to write tests and share behaviours.


I’m not saying it’s bad. You just have to know when to use it. But that’s how it is with any technology/library/framework/whatever.

IIRC there was a “MVVM craze” back then, but that could’ve been my bubble.


One explanation could be that if you are a hammer everything looks like nail.

One problem with web dev in general is that we have multiple backend technologies, thus creating a complete client and server rendering framework requires you to integrate it against multiple backend technologies.

But if you assume that the best API is JSON over HTTP then you can completely ignore what makes sense from the perspective of the chosen backend technology.


> The main reason for client-side rendering is that some web applications don't want to do a full page reload every time that there's a non-trivial state change (usually triggered by user interaction).

If that is the reason (I don't know) some element other than <iframe> would be a much simpler solution.

I had a few ideas just now. It seemed the most fascinating to tie part of the query string to a div.

<humpalumpa cowabunga="foo" dartvader="foobariusmaximus"/>

Then you have your usual <div id="foobariusmaximus"></div>

And your url example.com/?foo=demo.html

And from thereon <a href="?foo=second-demo.html"> shall only update that part of the url and that part of the document.

If demo.html contains a <doctype><head><script> or <style> it is ignored if the target is a div.

But something like <a href="?mycss=banana.css&myjavascript=kiwi.js"> with appropriate targets seems fine(?)

Alternatively the querystring tie can be defined on the element.

<div id="foobariusmaximus" cowabunga="foo"></div>

Older browsers when hitting <a href="?foo=second-demo.html"> will load example.com/?foo=second-demo.html
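Most of that proposal can be approximated today with a little glue code. This is a speculative sketch of the idea, not an existing API: the pure part merges only the named query parameter, and a click handler (browser-only, shown in comments) would fetch the mapped fragment into its div.

```javascript
// Merge a link's query string into the current one, so that
// <a href="?foo=second-demo.html"> only updates the "foo" part of the url.
function mergeQuery(currentSearch, linkSearch) {
  const params = new URLSearchParams(currentSearch);
  for (const [key, value] of new URLSearchParams(linkSearch)) {
    params.set(key, value);
  }
  return '?' + params.toString();
}
// In the browser, roughly:
//   document.addEventListener('click', (e) => {
//     const a = e.target.closest('a[href^="?"]');
//     if (!a) return;
//     e.preventDefault();
//     history.pushState(null, '', mergeQuery(location.search, a.search));
//     // ...then fetch the new ?foo= target into its mapped <div>.
//   });
```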


This mapping of URL segments to areas of a page is precisely what Remix (and React Router v6) achieve with "nested routes". I used to denigrate React Router's "routes as components" approach, but completely changed my mind with the advent of nested routes as implemented by Remix; it's profoundly powerful, intuitive, performant, etc.


Basically what turbolinks did.


Or what 99% of ajax did. What I really want to see is for html to work toward what people are actually doing with web pages.

I basically want a lawnmower not a bag of parts that allows me to build my own. I know how to sort tables but I don't want to. I know how to dynamically xhr, display and insert auto complete values but I don't want to write it and I don't want the chunk of js on the page. I definitely don't want to load a library for it.


Old <frameset> pages had their charm. You can still use them with HTML5! If I remember correctly, screen readers sometimes had difficulty handling a <frameset> page, especially if there were multiple <frame>s.

<iframe> is also great, but hard to layout horizontally. And resizing can be tricky.

And nowadays you don't even need your own event layer to talk between frames, you can use postMessage.

But yes, agree, continue evolve <iframe> and such to become more usable.


One can use postMessage, but if everything lives on the same domain one can use top.frames["left"] instead of window and call functions or manipulate the DOM directly.

You had me play with framesets for a bit.

   document.write((function(){switch(window.name){

     case "main": switch(location.search){
       case "?foo": return `<base target="top"></head><body>Article text`;
       default: return `no article selected`;
     }

     case "menu": switch(location.search){
       case "?bar": return `<base target="main"></head><body>List of <a></a>'s`;
       case "?youtube": return `<base target="main"></head><body>List of youtube.com/embed/ links`;
       default: return `please make your selection on the left`;
     }

     case "left": return `<base target="menu"></head><body> list of <a href="?youtube"> links`;

     default: return `
       <frameset cols="15%,85%"> 
         <frame name="left" src="index.html" />
           <frameset rows="30%,70%">
             <frame name="menu" src="index.html" />
             <frame name="main" src="index.html" />
           </frameset>
         </frameset>`;

     }})());
I laughed so hard writing that.


Except for the confusion about terminology the article has some good points.

We had an idea a long time ago in the web dev world that we could run the same language in the browser and in the backend. Basically reuse everything and save time.

I think this is a bad assumption to begin with.

If I have a clean separation between frontend and backend, regardless of whether I'm doing mostly server rendering or client rendering, there is only one thing that makes sense to share: input validation code.

Like what is the definition of a first name, last name, age, price, email address etc. That is the one thing that need to be in agreement between the frontend and the backend to be able to exchange data.

But sharing frontend and backend technology just to get shared input validation seems like a heavy price to pay for little gain.
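For scale, the shared piece really can be this small. A hedged sketch of a validation module both sides could import; the field rules here are invented examples, not a real spec.

```javascript
// Shared field definitions: the one agreement frontend and backend need.
const validators = {
  email: (v) => typeof v === 'string' && /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(v),
  age: (v) => Number.isInteger(v) && v >= 0 && v <= 150,
};

function validate(fields) {
  const errors = {};
  for (const [name, value] of Object.entries(fields)) {
    const check = validators[name];
    if (check && !check(value)) errors[name] = `invalid ${name}`;
  }
  return errors; // empty object means the input passed
}
```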


I've been using server-side JS since before NodeJS came along, on the JVM. I would say your assessment was correct (input validation) around that time. However React, Vue, Preact, Svelte, SolidJS (not to mention web components) changed that. We now have a group of people who ONLY associate HTML rendering with those types of frameworks and components. So HTML rendering on the server side has pivoted to cater to that type of front end developer with 'hydration'. The EJS-templating type of rendering is being phased out. It is neither good nor bad; it is catering to what developers are used to.


That has not been my experience.

Input validation, or more broadly data integrity, is just the first, obvious issue. There are things that can help with that like json-schema, openapi, graphql and so on, which solve some part of this at some level. Let's go for json-schema, which is simple, focused, data-driven and pretty well designed. Great, isn't it?

But it doesn't stop there.

You now have at least two libraries in different languages with different levels of support for the spec, so you need to settle on the lowest common denominator. That's just the validation part, now you want error reporting, which means duplicating logic again. You probably want to convert the schemas or at least the data structures they validate into something that your languages work well with, that's duplicated logic again.

Then you also need to somehow pull them in, build them etc. which is duplicated tooling around your language environments. Now you have version dependencies of somewhat the same thing in two different ecosystems with different versioning issues.

Whenever I said "duplicating" I really meant "solving the same-ish problem in two different ways", so "duplicating" is a euphemism here. It's a proliferation of accidental complexity and coupling.

Isomorphic code is not just a 1:1 thing. It's not necessarily the same functionality that you maintain on both sides (like validation); it's _all_ the other stuff as well, unrelated in terms of single features.

Aside from validation and data integrity - and the whole jungle of stuff that comes with it - you also need a bunch of fundamental data structures, network I/O, possibly routing logic if you're doing anything that's dynamic on the frontend and serves UX beyond just displaying text and images in a nice way. You want quick, optimistic feedback on the client, so you're coordinating more state and data, which means more duplication again. You might want to move stuff from the server to the client and vice versa, when you discover basic optimization issues.

Isomorphic code is _leverage_ to a high degree from tooling to standard libraries to knowledge, familiarity and mastering a language and ecosystem as a programmer.

If you decide to use a heterogeneous ecosystem you better have good reasons for it such as legacy code, specific powerful libraries and integrations you need to use or high expertise in a particular language that cannot talk to both sides.


Most of these problems stem from moving too much to the frontend, making the frontend tilt over. If you keep more logic in the backend, many of these problems go away.

Problem is that every core concept moved to the frontend still needs to exist in the backend: a representation of an article needs to exist both in the frontend and the backend, same for error handling, logging, authentication etc. But they are rarely identical, even if you code them in the same language, because the frontend will always be a remote representation of what the backend implements. Thus the more you move, the more you need to duplicate.

Some views or components need to be more interactive, true, but not every view; thus you solve the interactive part of those specific views or components in a pragmatic way to minimize friction.

One pragmatic solution is to use BFF, backend for frontends, where you have a specific API, both for reading and writing, for that component or view only. Now you can adapt those together to a better fit, including error handling.

General and reusable (REST) APIs for driving a frontend are a bad idea; you should have specific APIs for each case, otherwise you end up doing asynchronous JOINs over HTTP.

The next step to realize after that is that if you already have a specific API for a component, you can skip passing JSON altogether and just return HTML, thus saving an entire JSON encode/decode roundtrip.

JSON Schema is trash btw; schemas do not fit JSON data well, structured data is better handled by something like XML. And what this shows is that using JSON for large data representations is a bad idea, but it is usually where you end up when doing heavy frontend.


Shameless plug: If you want seamless SSR with Django logic, but React templates, check out my project at https://www.reactivated.io .

It's like HTMX but uses React, and the JS bits are rendered on the server. You can then hydrate on the client.

The Reactivated docs site itself uses the project: https://github.com/silviogutierrez/reactivated/tree/main/web...


Setting up something like this myself has always been a pain and I’ve just stuck with DRF + React instead.

Will need to give this a try, thanks for sharing (and building it)!


Yea using Django only for DRF APIs always felt limited. I want Django forms and form sets.

Reach out if you have any questions! Email is on my GitHub profile.


The complexity of web development frontend itself is just absurd. The mess of dependencies, the mess of language transpiling, opaque abstract functions with unreadable call stacks, asset management, sync vs async, the random best practice of the week, etc.

I look at the state of web pages and apps and it's not even for the betterment of user experience! Hacker News and old.reddit.com still provide the smoothest, fastest experience at the cost of having to zoom in every once in a while, but the price is worth it.


My org just bought a professional bootstrap template. It takes 3 build tools and 1500 node packages just to build the SASS into a CSS bundle. It boggles my mind that this is normal in the front end world.


only 1500? you got lucky!

gatsbyjs (React framework) has so many dependencies it breaks GitHub's dependency graph

https://github.com/gatsbyjs/gatsby/network/dependencies


This is absolutely absurd. This is ammunition for others to not take the Node.js community seriously.


I ripped out one dependency, and it dropped build times in a recent project by an entire minute! Turns out that one dependency had 100's of megs in transitive dependencies...


I handwrite all my CSS directly in the browser and the only "dependency" I use is a reset.css. The zooming issues some of those old themes have can all be solved with a modern css grid template (one for smartphone, one for desktop). Needless to say sites like these can be blazing fast.

I only use js for small things like maybe hiding the header when scrolling down (no jquery, handwritten vanilla js with maybe 20 lines of code), but even that would not be strictly needed.
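For the curious, those ~20 lines look roughly like this. A sketch (the class name and behaviour are arbitrary assumptions); the direction check is kept pure so it can be reasoned about outside the browser.

```javascript
// Hide the header when scrolling down, show it again when scrolling up.
function scrollDirection(lastY, currentY) {
  return currentY > lastY ? 'down' : 'up';
}

function initHideOnScroll(header) {
  let lastY = 0;
  window.addEventListener('scroll', () => {
    // .hidden is assumed to be a CSS class that translates the header away.
    header.classList.toggle('hidden', scrollDirection(lastY, window.scrollY) === 'down');
    lastY = window.scrollY;
  }, { passive: true });
}
// Usage (browser): initHideOnScroll(document.querySelector('header'));
```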


And even CSS resets are getting close to zero these days.


That is true and very fortunate; I've noticed this becoming less and less of an issue over the past years.


I take it you haven't taken a look at backend lately. Vagrant? Or Docker? Composer? Maybe Drush. Will this plugin break? Is it even still actively maintained? Which database? Are we still on the NoSQL fad? Maria, or postgres or mysql? And does my production host support my language version, and oh god another php vulnerability


Frontend folks have an excuse. They cannot avoid using JavaScript, since they're limited by what browsers support.

Backend developers chose to write mountains of yaml


I recall a couple of years ago when Kubernetes was all the rage and jesus christ the amount of YAML was unbelievable.


I am a proponent of server-side, use Apache Tapestry for most production apps and have only one yaml file in all the projects.


I have done nothing but backend for a decade. Docker's been around for the past 6 years and still going strong.

Database? Postgres, always. You have to bend it so far until it breaks it's not even funny. If you need something else, you'll know.

Backend lang? Python. Framework? Django. Need data munging? NumPy/SciPy/Pandas/PySpark. Need a really high speed component? C will do, Rust if you feel fancy.

OS? Ubuntu, or Debian. They'll do well. Again, you need to have really, really good reasons to use anything else.

Of course there's alternatives! But there is absolutely no reason to move away from this stack. For some reason frontend developers seem to be either on a constant treadmill of moving technologies, or being stuck with subpar old tech with clear deficiencies.


So React has been around longer than Docker? Good to know that frontend is on a slower treadmill than backend, which according to this thread has moved on to alpine or htmx or something else.


React is for sure a relatively stable part of frontend, but the ecosystem around it seems to be building mountains of complexity on top of React.


React's ideas of best practices and library ecosystem have sure changed way more.


There's maybe two of those questions that you actually have to care about as an architect. Nobody's forcing you to chase fashions except you.


I dunno man, my "backend architecture" is a bunch of Rails servers (some web servers, some queue servers, but they run on the same codebase with different entrypoints), a PostgreSQL server, and a Redis server.


I wouldn't exactly call clicking on the 'reply' button, and loading a brand new webpage, typing this comment, pressing the 'reply' button again, and then getting sent back to the original webpage 'smooth'.


> I wouldn't exactly call clicking on the 'reply' button, and loading a brand new webpage, typing this comment, pressing the 'reply' button again, and then getting sent back to the original webpage 'smooth'.

OTOH, making a reply button that drops down a textarea with "save"/"cancel" buttons requires about 30 lines of Javascript.

And yet, almost all "modern" sites that have only this functionality have megabytes of dependencies.

Maybe both extremes are bad, and Javascript is like adding salt to food - too little and you'll be able to eat it, but won't enjoy it. Too much and you won't even be able to eat it.
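To make the "about 30 lines" concrete, here's a hedged sketch (the class names, data attributes, and endpoint behavior are all invented for illustration):

```javascript
// Build the drop-down reply form as an HTML string.
function replyFormHtml(commentId) {
  return `<form data-reply-to="${commentId}">
  <textarea name="text" rows="4"></textarea>
  <button type="submit">save</button>
  <button type="button" class="cancel">cancel</button>
</form>`;
}

// In the browser, one delegated click handler does the rest
// (commented out so the sketch stays self-contained):
// document.addEventListener("click", (e) => {
//   if (e.target.matches("button.reply")) {
//     e.target.insertAdjacentHTML("afterend",
//       replyFormHtml(e.target.dataset.id));
//   } else if (e.target.matches(".cancel")) {
//     e.target.closest("form").remove();
//   }
// });
// Submitting would POST the textarea value with fetch() and remove
// the form on success.
```

No build step, no dependencies, degrades gracefully if you keep the full-page fallback.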


But at the same time, if you wanted to get away from that and have replies inline, that doesn't require very much Javascript at all! It can just be incrementally added to the page you already have with little fanfare.


I gave it a shot at one startup using next.js that wanted to pre-hydrate redux state, but also use tracking with session cookies et al, the regular bag of beans. I spent maybe 6 weeks on it, and basically I grew to realize I was in a miasma of pain. I didn't last much longer. I could've written the entire thing in just vanilla js/css in probably a weekend...

So I can commiserate with this.


I'm always impressed by the labyrinthine towers of JS and frameworks people create to ship out relatively simple sites and web apps. By impressed, I mean that I look upon them both in awe and in horror. And I'm guilty of building them myself, as well.

One of the worst things is opening such a project that was written 5+ years ago by someone other than yourself.


>I'm always impressed by the labyrinthine towers of JS and frameworks people create to ship out relatively simple sites and web apps. By impressed, I mean that I look upon them both in awe and in horror.

Well, then... brace yourself for the one[1] I intend to release in a month or so. I've been working on/off it for the last three years.[2]

I'm hoping next month to actually build an application with it :-/

[1] A component library for client-side elements.

[2] Spent about 8 hours in total on it, over three years[3]. Sometimes I need a deadline to finish stuff. Don't judge me, no one's perfect :-)

[3] Still think it took fewer hours than learning React.


I love starting off projects in vanilla JS because they actually get to a workable state most of the time without a lot of effort. Most of the time when you write stuff in vanilla js/ts you can just fit that logic in to a react or vue or whatever application framework afterwards if you really need all those bells and whistles.

I think a lot of people start things off thinking they NEED to have all the authentication, cookies, etc. sorted out from the get-go, when really you just need your ideas in a working form first, and stuff like HTTP basic auth is mighty fine for development.


I opened this thinking it would lament the challenges of server side frameworks and templating.

But those things seem so, so sane and simple compared to the mess I just read about. It's worse than I could have imagined.

Who thinks that's a good workable solution? Whose idea was this?


Adtech needs a lot of JS to work. JS to determine how long they are hovering over this and that element, how long this or that ad is in their view, and so on and so forth.

The logical conclusion is that every single HTML element needs to be wrapped in a bit of JS somewhere. Nothing should happen in the user's browser that can't be monitored by JS.


This is a gross mischaracterization of what's happening.

People want to build interactive pages and products with neat little shiny user-friendly doodads. JavaScript provides that ability. SSR allows you to write JavaScript that will be rendered before the user ever sees it, so all that effort that went into making browsers really good at rendering HTML can work for your interactive application too.

I have to assume anyone who doesn't understand the goals or says some nonsense about it being about "advertisers" has spent little to no time in this space.


Angular.js came out of Google. React.js came out of Facebook.

I promise you this is not a coincidence. You can use these frameworks to build cute toys, but they were built to advance certain business interests.


So adtech is ruining the web. Everything makes sense now.


Adtech represents the ruthless financialization of human attention, so it's not just ruining the web, it's ruining society.


Very aptly put. But it's our fault, as we don't want to pay for content online.


Most online content is too small/short to be worth paying an amount of money that would be worth the transaction.


A lot of content such as clickbait/etc actually has negative value and only works because ad impressions pay before the content has been seen and can't be clawed back even if the content ends up being garbage.


If only there were a way to reward people for good content.


Always has been the case since day 1.


"At that moment the monk attained enlightenment."


Adtech wants that JavaScript. It doesn't necessarily need it. Most of the metrics generated are ultimately worthless. They exist to slap together bullshit graphs to overwhelm clients with "data".


Sure, analytics doesn't necessarily require JS, but the host client software absolutely does need it to create a gamified experience to attract and retain users over their competitors.


Call me old fashioned, but PHP still gets the job done better than just about anything else.


Already mentioned in other replies, but as a long time node/react developer, for the last year or so I've been working on a project built with Laravel (blade components) + Unpoly (for server interactions) + Alpine (client side only interactions) and it feels like a real "cheat code". Everything is super easy, although some people think the way we do it is pretty "uncool" so they look down at you.


And usually leads to a BBOM architecture


I have colleagues using Laravel, and there's nothing muddy-ball about their apps at all. Their code is tidy, modern, readable, and clearly maintainable.

In terms of practical effect, frameworks influence developers more than languages do.


When people say they use "PHP" for webdev I recall BBOMs of flat files and raw SQL. Historically speaking, "PHP" does not necessarily imply "PHP+Laravel" or any other sane framework.

OTOH if someone says they use "Python" for webdev I assume Flask/Django/etc.


I think you've summarized it well: people tend to make technical judgments based on preconceived and historical notions, in spite of evidence to the contrary. :)


This was my PHP experience with CodeIgniter 15 years ago. Nothing new under the sun.


That's good, I think? I'm not claiming that Laravel was the first or the best PHP framework, just that they were using it. (Admittedly I assume it's a reason their code is so well organized.)


Laravel inherits its organisation from the underlying Symfony 2 framework, just like Drupal nowadays.


Thanks, I didn't know that.


The idea of using a "framework" in a language that gets completely reloaded on every new request doesn't make sense - at least I thought a framework was something that wrapped your own code and presented an event loop, etc. You'd want a "library" if you just wanted to improve on the original low quality PHP database connectors and such.

But PHP developers always did seem to have inappropriate jealousy over unrelated languages like Java where there is persistence over requests.


You sound like me, but ten years ago. :) I used to have a similarly negative and narrow-minded perspective about PHP, but my colleagues and their outstanding work have enlightened me.


PHP itself is perfectly designed to run CGI scripts, and that's the correct architecture for web performance (shared-nothing). It's the third-party developers that have second-system-effect enterprise dreams.

Though maybe some of them have found simplicity again.


You're right. It was obvious when we started seeing templating engines for PHP, while PHP is itself a templating engine.


The only thing missing from PHP now is the ability to programmatically detect tainted strings, i.e. strings that are dangerous, like user input. If we had that, we could continue with the built-in templating and still be sure to escape output at the correct moment.

Tainted strings should be built into PHP (there was an RFC for it but it didn't pass). Another solution could be using operator overloading instead (that RFC didn't pass either).
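Absent language-level taint tracking, the "escape everything by default" idea can at least be approximated in userland. Here's a sketch of the concept in JS terms (since the thread keeps circling back to JS): a tagged template where literal markup passes through but every interpolated value is escaped unless you say otherwise.

```javascript
// Minimal HTML escaping for untrusted values.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Tagged template: the static parts are trusted markup, the
// interpolated parts are treated as tainted and escaped.
function html(strings, ...values) {
  return strings.reduce(
    (out, str, i) => out + escapeHtml(values[i - 1]) + str
  );
}

// Usage: the user input cannot break out of the <p> element.
const userInput = '<script>alert(1)</script>';
const safe = html`<p>${userInput}</p>`;
```

This is the same default XHP and modern template engines enforce; the PHP RFCs mentioned above would have let the engine know which strings came from the user in the first place.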


Or do what Hack did with XHP and make XML/HTML a part of the language, and escape everything else by default. That was the one Hack feature PHP should have taken, but didn't.


I am entirely unfond of PHP as a language but Laravel - and competently written apps that use it - are damn impressive nonetheless.


If you can't like current PHP, there's probably no language that will make you happy. What exactly is so bad about it?


The point I wanted to make was that Laravel is objectively impressive even to somebody who dislikes PHP, so going into details as to exactly why I dislike the language would distract from what I was trying to get across.


I'm not sure why you've been downvoted. Anyone who has ever seen a Laravel stack trace will surely have been appalled at the number of stack frames that sit below their own code, and all that machinery is loaded on every single request. It's truly shocking. PHP isn't Java and shouldn't be treated as such.


There is a very strong movement within the PHP community toward using a large framework like Laravel. The upside is that it has helped revitalize the PHP stack. The downside is that large frameworks are now considered best practice.

I wonder if OOP as a technique will eventually always end up as a Rube Goldberg machine.


I know what you're saying... but using Java as the counterexample? I'm still reading the stacktrace from a Java error I raised in 2003.


I haven't paid attention to frontend web development in like 15 years so I don't have any newer examples ;(

I'd hope new stuff is also CGI-based and restarts on every request, otherwise you're asking for security issues (like leaking another user's info Heartbleed-style).


It has somewhat become better in PHP with preloading and better caches like the inheritance cache.


This is not true. If you need 1000 lines of templates to display a page, it will be 1000 lines no matter whether you use Twig (PHP) or React (JS). And Twig in my opinion looks better than JS code mixed with HTML tags and split into 100 files 10 lines each.


For the uninitiated, BBOM = Big Ball Of Mud.


Because what we're witnessing in JS-land these days is completely sane and rational.


All successful projects end up as a BBOM. The framework or language or architecture you use has no bearing on this. It is just the thermodynamics of software development. If something is not a BBOM it just hasn't been around long enough.


BBOM > debugging SSR hydration issues.


productive ORM & simple HTML template engine > BBOM


Not my experience. Most Laravel projects I've seen are in much, much better shape than even the most minimal "microservices" I've seen around. You get so much done for you with Laravel that it makes it a bit more difficult to mess up.

I've seen microservices that had absolutely no guardrails going off the hill too many times. At my previous job, "migrations" were bash scripts with SQL in them because they "didn't like ORMs and they were slow". LOL. They were handling signup form POSTs and profile updates.


You can write BBOM in any language/platform/technology.

It is always the programmers who are the limitation, not the chosen technology. Thus you should pick technology that makes sense for the majority of programmers. PHP fits that. Just follow a guide to avoid BBOM.


> And usually leads to a BBOM architecture

Aren't "big ball of mud"s legendary for being counter-intuitively successful products?


I find this a very insightful application of the concept of colours to look at the problems of mixing code meant to run on slightly different and subtly incompatible runtimes, within the same code base, without calling out this distinction of "colour" by any particular means of syntax, or tooling, or explicit documentation.

It's a good observation that helps, at least me, shed light and give a name to this problem that many will have experienced and struggled with.


Thanks for this comment, this was exactly the point I was trying to get across. I’m glad you found it helpful.


I was never really a fan of the original 'Colors' article but I think the analogy works really well for this client-side vs server-side stuff.

Red = client

Blue = server

Purple = either

I think I will use this if I ever have to teach it.


The moment you need SSR, you'd be in a better place if you were using just plain old server-rendered (componentized) views, such as the ones you can do with Laravel's Blade system, and do your client-side interactions with Alpine and your server interactions with Unpoly/HTMX.

SSR per se is not a "big problem", especially if using something like Next. The problem comes when you have to mix in translations, i18n, data fetching from external sources, authentication, cookie forwarding, etc., etc., etc... in my opinion complexity grows so exponentially that I don't see the advantage anymore.

I'm working on a project built with Laravel + Tailwind + Unpoly + Alpine and it is such a walk in the park to implement anything thrown at us. Although it is not "cool" tech, so some people around here don't want to work on it because they only want React all the things :-s


In Javascript SSR, if you need the same logic on the client and server, define an API module + interface. Make a different API bundle on the server that calls NodeJS functions, file reads, whatever, directly. Make an API bundle on the client that makes AJAX requests to your API endpoints. Now your Javascript can call the same `api.fetchWhatever()` method, and await the result, and it works on the client and server (if you need it). It's a very nice pattern.
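A minimal sketch of that pattern (every name here is illustrative, and `db` is a placeholder for whatever data layer the server actually uses):

```javascript
// api-client.js — the browser bundle goes over HTTP:
const clientApi = {
  async fetchUser(id) {
    const res = await fetch(`/api/users/${id}`);
    return res.json();
  },
};

// api-server.js — the server bundle skips the network entirely.
// `db` stands in for your real data layer (hypothetical here).
const serverApi = (db) => ({
  async fetchUser(id) {
    return db.users.findById(id);
  },
});

// Shared rendering code is written once against the common interface,
// so it runs unchanged on either side:
async function renderUserCard(api, id) {
  const user = await api.fetchUser(id);
  return `<div class="user">${user.name}</div>`;
}
```

Your bundler then picks `api-client.js` or `api-server.js` per target, and the shared code never needs to know which one it got.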


Thus, you are confirming there is a huge level of complexity.

I want to add something else on top of what's in that gist: SSR is slow; rendering an application twice (on server and client) is just slow.

We only do that for cache and seo, every other page suffers.


I'm not sure what about two bundles causes a huge level of complexity. Granted, I use Next.js, it does all of this out of the box.

SSR is fast, it's provably faster than client side rendering for first load. You can optionally do client side hydration, so the time to meaningful content is fast from SSR, and subsequent page loads are done SPA style (if you want).

It's never a fair comparison, but Stackoverflow is entirely SSR (not Node.js) and it's very fast. Also, all forum software, which is SSR, despite being 20-30 years old, is usually faster than a SPA. The Jamstack is a recipe for loading spinners, not speed.


> Also, all forum software, which is SSR, despite being 20-30 years old, is usually faster than a SPA.

Only because crap tons of hardware got thrown at the problem.

I remember when forums were multi-seconds to load each page. It sucked.

People complain about reddit a lot, but even on mobile, the reddit website is fast. It may be janky at times, but navigating between comments on stories is under a second, loading more comments is, well, not fast, but quick.

Hacker News is notably one of the few non-crap forum sites out there. phpbb used to be everywhere, and it sucked everywhere. Up until 2012 or so, the old dial up BBSes from the mid 90s had a better user experience than the latest and greatest internet forums. (of course BBSs only had 1 client at a time, a much easier problem to solve!)

And yes, phpbb works now, because now a cheap host has an SSD with gigs of RAM. To do the equivalent of what a 486 running DOS did back in 1993.


Personally, I'd much rather wait for a 1-2 second page load than have a page load quickly and peg one of my CPUs because of some poorly written Javascript.


It's pretty hard to compare some random free PHPBB running on a underspec'd host to companies with billions in revenue to support their platform. Honestly, reddit and HN are so simplistic, there's not much added by JavaScript doing any rendering.

Reddit in particular is horrific. Their API is slow, and their JS bogs down on anything that isn't a super-fast desktop. Server pages would be able to handle most of what they do, faster and better, except for crap like live updates or typing/posting indicators that don't add much value.


> Their API is slow, and their JS bogs down on anything that isn't a super-fast desktop.

I'm on a $1000, 4-year-old laptop. Initial render on the home page is under 3 seconds. Comments load in under 2 seconds.

Hacker News is faster, but Reddit is the next best thing.

This is all w/o ad block, new reddit UI enabled.

With uBlock Origin enabled, it is marginally faster.

> It's pretty hard to compare some random free PHPBB running on a underspec'd host to companies with billions in revenue to support their platform.

My comparison was random free PHPBB to random free BBS hosted on dial up in the 90s. Up until the late 2010s, random free 90s BBS performed better.


I'm sorry, but 2 seconds to load a page with primarily text is not something you should brag about. This kind of stuff should take milliseconds to load, with the bulk of it spent waiting for the server's reply. In fact, the "old" Reddit (which is entirely SSR with minimal JS on the front-end) would load in under a second.


> I'm sorry but 2 seconds to load a page with primarily text is not something you should brag about.

Reddit's homepage is primarily video and photos.

The homepage contents are obscenely dynamic, I have no doubt they use tons of caches for everything, but the homepage is not simple at all.

Since I am logged in, there is a profile lookup, as well as a check for any notifications.

There is the fetching of stories + metadata from my personal feed, which involves custom prioritization. I doubt any of that can be trivially cached.

Reddit is not some simple static text website. Hacker news doesn't render a different page for every visitor, Reddit does. Heck HN saves on CPU by not even telling me if there are replies to my posts!


> People complain about reddit a lot, but even on mobile, the reddit website is fast. It may be janky at times, but navigating between comments on stories is under a second, loading more comments is, well, not fast, but quick.

Reddit is an example that clearly shows that the user experience of "old-style" code where interactions are primarily with the server beats out SPA.

old.reddit.com is clearly superior in pretty much every way save some pretty old CSS styling (which could easily be fixed without changing the underlying technology). The rewrite still feels buggy and slow, and the added interactivity doesn't really make a meaningful difference.


old.reddit.com is great when I am travelling and limited to 128kbit/s of data.

The new site solves the problem of not losing my place in the feed when leaving a story, which is a huge pain point of the original reddit design.


It sucked even more when forums used flat files (essentially runtime static site generation) rather than databases. We'd periodically have to prune the post history to restore site performance.


> I remember when forums were multi-seconds to load each page. It sucked.

You mean when you had the same internet speed 10 years ago, right...?


I could pull 2 megabytes per second down back in 2000.

I was at 5x that by 2010.

My internet speed didn't really improve from 2010 to 2021, (Comcastic!) then I got fiber. File downloads are faster (800mbit gets binary fast), web pages are about the same.

Servers have gotten faster, but ad tech had made crap slower.

SSR won't help solve ad tech garbage making pages janky.


You could exclude adtech from the equation using an ad blocker and theoretically make everything lightning-fast, but in reality adtech is not the only problem and non-ad JS also grew to consume all the available resources.


10 years ago in suburban Phila, I had 50/50.

Today in rural NM, I have 40/5. Many of my neighbors have 25/2.

Time is not the only thing that controls internet bandwidth.


> People complain about reddit a lot, but even on mobile, the reddit website is fast.

only on mobile is Reddit fast.


Pardon me while I load up reddit on my 4 year old $1000 Costco Special laptop.

Yup, that was fast.

OK let me disable ad block.

Under 3 seconds to first render.

Comments appear in under 2 seconds.

Reddit performs very well. The new UI is legit faster. There is a bit of stupid here and there with it, and if you aren't logged in the experience is horrible, but speed is, IMHO, not one of the site's problems.


16 core Ryzen Threadripper 2950X on a 40Mbps link with ublock origin: 4-5 seconds for first render.


What OS?

Fresh Firefox instance, Windows 10, hitting homepage without logging in. 2 seconds, if that, page is fully loaded.

Chrome, I am logged in, but still using the old UI somehow. Same 2 second page load.

Core I7-8550. 16GB of RAM.

Running on a laptop whose battery is pretty much useless and whose fans are rather upset at the moment!


I sometimes wonder if my HN refreshes go through because the site reloads instantly.


Yep. HN is fast.


Poe's law in action. This is literally "to draw an owl, start with two circles and you're pretty much there."


Double the code. Double the bugs.


SSR is the dumbest thing I’ve ever heard of. Client and server are different things, treat them that way. The whole web dev community keeps tripping all over itself to make simple things wildly complicated.


> If you're considering using one of these frameworks, I would recommend you carefully consider if the complexity is worth it, especially for less-experienced members of your team.

Far too many folks seem to believe that complexity is a binary choice or that it can be avoided wholesale by choosing frameworks (or lack-thereof) to solve their problems.

Truth is, when you're building software on the web, you're gonna have to make a series of decisions on where the complexity that you'll inevitably encounter should live.

We're all simply making tradeoffs on where that complexity lives: the browser runtime, the build system, the type system, the server, the platform, their framework, your framework.


I'm convinced SSR is only a thing because of Lighthouse scores.

Client-side rendering with client-agnostic REST APIs is a fantastic architecture.

But noooooooooo we can't have nice things


It's a nice architecture when the API devs get to throw stuff over the fence to the frontend devs. It's less nice when you are the guy throwing stuff over the fence and then running around the fence to catch the stuff you just threw over it. It makes you start to wonder if there's anything you can do about the fence.

The answer is "yes."


What an awesome metaphor, 100% agree with this. That's why I've always said that the SPA architecture with a separate backend is a way to increase the amount of work so that it can be distributed to more people/teams.

It baffles me when I see single developer or small team projects going for this.


As a developer I like doing CSR apps. As a user I like when my sites are SSRed. There are people who are trying to bridge the gap. https://dev.to/ryansolid/server-rendering-in-javascript-why-... https://dev.to/this-is-learning/conquering-javascript-hydrat...


Now that the end-user is essentially using their browser as a thick client for that RIA/SPA, and the client-server interaction primarily supplies certification and replication services, we can congratulate ourselves on having more-or-less reinvented Lotus Notes.


And a poorly implemented reinvention at that.


Ouch, that hit too close to home :/ It's too early for whiskey where I live.


because scraping and indexing the site is hard.

because getting first response on super slow speeds doesn't accomplish anything

because many js devs write buggy code that renders in their browser

etc


Fantastic if you could ever anticipate having multiple different consuming clients.

Otherwise it's fantastically over-engineered IMO.


I don't see it as anymore over-engineered than SSR.

In traditional SSR, using a template language, you craft a query and expose your variables to your template.

In SPA, you craft a query and expose your variables through an API.


In my opinion (and experience), this architecture is great as a way to scale teams, having a clear interface between them.

As soon as you have to work on both sides of the API, it is not an advantage anymore.

I think separating the backend from the frontend is a way to increase the amount of work required in order to gain a saner way to split it into chunks for different teams.

As a single person or a small team, doing an SPA and an API-only backend, if not done for learning purposes, is just either madness or CV padding, or something else... makes no sense at all to me.


Let's say you're a mobile user on a 3G network, which a lot of our users are. With a SSR app (that's cached behind Cloudflare), they can see a document in about 100ms max.

With a fully client-side rendered app, you're talking multiple, multiple seconds while they're staring at a blank white screen.

Try it in real life and you'd quickly see your idea would be a terrible user experience.


Why would it take multiple, multiple seconds to download JS bundle (cached behind Cloudflare) and then make a request for initial data (probably also cached behind Cloudflare)? Overhead of downloading most of the app is likely less than that of loading a single hero image from said website.


On 3G and even low 4G it absolutely would. I've seen it. Also, a lot of mobile devices do not have as strong processing power as you think. Parsing and executing that much JavaScript also adds time.


SSR would make HN better under load when logged in, for example. When logged in you would get the cached versions like a logged out person, and then wait for user-specific content (and if it never renders, you can always just read).


As far as I can tell, HN is rendered on the server.


I meant SSR as in what nextjs does. Render on the server using front end tech, cache for performance and then rehydrate in the front end for up to date data.

They should have called it Server Side React Initial Rendering maybe :-)
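The shape of that pattern, framework aside, is roughly the following (a sketch: the renderer and hydrate function are passed in or commented rather than assuming any particular library's API):

```javascript
// Server side: render the app to an HTML string and embed the data
// it used, so the client can hydrate without refetching first.
// `renderToString` is whatever your framework provides.
function renderPage(renderToString, App, data) {
  // Note: real code must escape "</script>" inside the JSON to
  // avoid breaking out of the script tag.
  return `<!doctype html>
<div id="root">${renderToString(App, data)}</div>
<script>window.__DATA__ = ${JSON.stringify(data)}</script>`;
}

// Client side (browser-only, commented out): the framework walks the
// server-produced markup, attaches event handlers, then refetches
// for up-to-date data.
// hydrate(App, document.getElementById("root"), window.__DATA__);
```

The cached `renderPage` output is what makes the first paint fast even for logged-in users; the user-specific bits arrive after hydration.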


SSR (2012ish, debatable) long predates Lighthouse (2018). If you mean that SSR is only a thing because people want pages to load quickly, however, then I'll completely agree with you.


In this thread: people confusing server rendering (like your old PHP) with SSR (same Javascript codepath for client and server, i.e. Next.js)

One is easy, the other is the definition of leaky and crappy abstraction. This article is about the latter technology.


Confusion is by whoever wrote the headline. Server side render is easy. Isomorphic is hard.


Yeah I was planning on changing the title, and then woke up on HN. Oh well.


It's a bit leaky, but I wouldn't call it crappy. What's crappy about it? Be specific.


This article is a good start.


I've read the article. The pain points exist, but they are not insurmountable. I'd go as far as to say they are being a bit dramatic.


Compared to traditional server side rendering, the article describes a hellscape of self-imposed complexity. Why would anyone choose this?


You choose something like react for the frontend if you need heavy interactivity, something closer to a desktop application than a traditional web app. Then you choose tools like Next.js that will do SSR for you so you can take advantage of how good browsers are at rendering HTML.


The needs you describe are incredibly rare. Facebook for example doesn’t have heavy interactivity closer to a desktop app. You can also do interactivity just fine with Ajax. These frameworks bring nothing but complexity and confusion.

Also, how good browsers are at rendering html has nothing to do with next js. Browsers don’t care where the html came from.


I think you're confused as well. Most frameworks that provide any kind of HTML rendering method can do server-side rendering. The trickiness is making them isomorphic.

> the other is the definition of leaky and crappy abstraction

Well that's just like, your opinion man.


You're not wrong, dimgl, you're just an asshole.


Clearly this generation needs a refresher on The Big Lebowski.


I already know I'm off my rocker (and my lawn with that phrasing) but I really wish the browser supported Python as a language instead of Javascript. I can pickle Python, send it to a client, and run it. Combine that with everything else Python 3 has brought to the table and I'd have a language that runs client side that I can depend on and that I love.


The browser doesn't even have modern JS. Well, I mean it does, but for a while it didn't. We consider modern history to be anything past 1815. So in web terms, if you had to section it, modern web history is like maybe 2010 onwards? In those years we transpiled new ECMAScript to old ECMAScript.

So, nothing is stopping us from transpiling Python to new or old ECMAScript.

But, I think you’ll see that it won’t solve much. You are just turning one turd into a lesser turd. The problem is fundamentally deeper. The browser and web server tech was not built for app development. That’s the bottom line.


Maybe some day WASM will become the default and you will just compile whatever you feel like to send out.


Isn't this supposedly one of the magical things WebAssembly gets us?

While I can sympathise with wanting <insert your favourite language here> in the browser, I'm not sure it's so great in reality, and I'm really scared of things like Pyodide/Blazor/<programming language to WebAssembly> from a debug perspective.

Today, the fact that it's JS everywhere client-side has, for the most part, made our lives hugely easier in a lot of regards. A world in which many languages can compile down to some WebAssembly and run will make debugging and developing web apps even more complicated than it already is...


I'm giving Haxe a try for that reason. It's not Python, however.


Brython, Pyodide, Coldbrew. You do have options. ;)


I agree with the Next.js people in that thread.

SPA+SSR has never been easier, and my experience (as of this year) is that it's as easy as static websites. Even easier in some cases: the CSS is simpler, and all the bundlers and optimizers that produce the builds are hidden.

I think people need to rethink how they design their websites, and their whole concept of a viable business.

I get the sentiment of the absurdity of web development, but it's successfully been abstracted away if you keep your app and idea simple. If you're still doing gigantic web 2.0 apps with hundreds of views, microservices everywhere and API calls going every which way, that's the problem. The stack for those is ridiculous and not even necessary anymore. You can drive [hundreds of] millions of revenue without any of that these days, without even a user account flow or state management; the market is trying to tell you something.


What's the reason behind this endless JS race that's been going on for years? I only had to use it once, and it felt like everything was prematurely over-abstracted, ending in more code doing less. It made me run back to PHP+jQuery, which is more like a car engine than a modern abstract art statue.

Is it the overabundance of programmers? The fact that it makes them look busy? People liking wasting time recompiling most of the day? Over-engineering making people feel superior? It certainly doesn't feel healthy, and it makes one question whether the whole SV startup ecosystem is a cargo cult built on shaky ground by people who are just aping each other on everything, from design to backend.


JavaScript based webdev continually makes the next thing you have to do seem easy if you just use this framework, that approach, the latest library for it, and the build tool that finally handles this case. It's a textbook example of the combination of the sunk cost fallacy and short-termism: you continually make amazing web stuff happen quickly by going into just a bit more technical debt. It's a technical debt pyramid scheme. The further you get, the more you're trapped by your previous decisions of convenience. It's an ecosystem addicted to quick fixes, no matter the cost.


Webdev shifted from making pages of content with some interactivity to web apps that live and maintain state on the client and occasionally query backend for some data. You don't need or particularly benefit from modern JS frameworks for somewhat interactive pages. But you absolutely can't do web apps with ad hoc jQuery and maintain your sanity.


Hello. I'm here from 1999 and I'd like to help:

https://htmx.org


Probably more like ~2005. I was a web developer in 2001 and XMLHttpRequest still felt like secret knowledge that you only found out about by digging deep into Microsoft documentation (and wasn't well supported if you weren't using IE). Most people I knew were making pages interactive by targeting forms to iframes.


Some of us were using Java applets to make async requests ;)


Say you're travelling back from 2099 and your comment still applies :-)


Do I live in a parallel universe where building SSR/SPA sites with Next.js is somehow incredibly complex? I know below the abstractions there is complexity, but building SSR-rendered SPAs today with something like Next.js has never been more simple. I think web development has taken a huge leap forward: we now have stable tooling, a strongly typed language with TypeScript, and top-notch boilerplate-free state management with MobX. To top it off, you can write performant backend APIs in Node.js, again with TypeScript, sharing contracts (interfaces) between backend and frontend. I for one am really happy with the current state of affairs.


It's easy to server-side render Next.js apps, but usually they still have to talk to a backend. I don't think Next.js' API routes are good for this, especially if you need message queues, cron jobs, etc. Now you have three distinct parts (client-side rendering, server-side rendering and the API), usually all communicating via JSON.

Traditionally with PHP, C#, Ruby, Elixir, etc., your backend and frontend were tightly coupled: your code had complete access to backend resources, and the view/template would map the state to an HTML document.

Now, for the sake of interactivity, the view has effectively moved from the backend to the frontend with the introduction of SSR/SPA, JSON everywhere and code duplication so that they can all talk and understand each other. It's cool when it works, but it is easily at least 3x the amount of work.

There is of course the argument of "just write both the client and server in JS/TS and use a monorepo", but that brings its own challenges. Limiting the backend stack to what browsers support is not great, especially when Node.js is single-threaded with cooperative scheduling. No, lambda functions don't solve this entirely, and they're freaking expensive for CPU time. Honestly, other ecosystems have it so much simpler IMO, even if it isn't as flashy.

I say all this as a Svelte developer, guilty.


"Limiting the backend stack to what browsers support is not great" Node.js shares the same language but that's about it. In the Node.js backend you have access to modules like lovell/sharp with optimized c code (libvips) for image processing. There really isn't much you can't do. With pm2 it is easy to spin up say 8 instances of Next.js utilizing all available cores. In my experience there is barely a distinction between client and server side rendered components. getServerSideProps() or getInitialProps(), even though not perfect allow you to either call your backend code directly or call an API with the session cookie as a parameter.


In Next.js you'd typically use getServerSideProps on a page to talk to the backend at runtime. API routes have their place of course, but I've built fairly large-ish websites that almost entirely get their data via getServerSideProps.
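Roughly what that pattern looks like, sketched without the framework (`getServerSideProps` is Next.js' actual hook name; the `getPostFromDb` helper and the simulated call are illustrative stand-ins, and the page's React component is omitted):

```javascript
// Sketch of the Next.js getServerSideProps pattern. getPostFromDb is a
// hypothetical helper standing in for direct backend access; in a real
// Next.js app this function lives in a page file and the returned `props`
// are handed to the page's React component.
async function getPostFromDb(id) {
  return { id, title: "Hello from the server" };
}

// Runs on the server for every request to this page, never in the browser.
async function getServerSideProps(context) {
  const post = await getPostFromDb(context.params.id);
  return { props: { post } };
}

// Simulated request, just to show the shape the framework works with.
getServerSideProps({ params: { id: 1 } }).then(({ props }) => {
  console.log(props.post.title);
});
```

The point is that the data fetch happens server-side at request time, so no API route is strictly required for page data.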


Next.js is easy peasy and lovely and wonderful until you need to mix in validations + translations + authentication + authorisation + calling upstream APIs with user's credentials + ...

That's when you realise using what would be a perfect framework for building landing pages might not be the best one to build a full web application.


Isn't that the part where you as a developer step in? Most of these things are solved problems with mature libraries to integrate. I'd much rather mix and match than fight some all-in-one super framework that doesn't do quite what you need.


I think my job is to ship useful, secure and robust features to my company's customers. Dealing with technology is a consequence of that, not the end goal itself, which seems to be what's most wrong about this industry.

Certainly tying together libraries (or writing your own framework) is a valid approach; it's just a lot more expensive to reach the same quality level. That's why you usually end up with half-assed solutions or never-ending projects.


The complaint in the article is technical, but my issue with the SSR + hydration approach is that it doesn't actually work for the users. Mainly because if you need a page that is indexed by Google, there is a high chance your users will browse it and open pages in new tabs. So the approach of "there is a small initial penalty, but everything is faster later" no longer applies -- you get that initial hit quite often.


What initial hit are you referring to? If it's the origin request, just chuck it behind a CDN with some Cache-Control headers and call it a day.
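For illustration, the kind of headers that make this work; the header names are standard, but the directive values here are arbitrary examples, not recommendations:

```javascript
// Illustrative Cache-Control settings for an SSR'd page behind a CDN.
// `s-maxage` controls shared caches (the CDN) without forcing browsers to
// cache the page themselves; `stale-while-revalidate` lets the CDN keep
// serving a stale copy while it refetches in the background.
function edgeCacheHeaders() {
  return { "Cache-Control": "public, s-maxage=60, stale-while-revalidate=300" };
}

// In a Node handler this would be merged into the response headers, e.g.:
// res.writeHead(200, { "Content-Type": "text/html", ...edgeCacheHeaders() });
console.log(edgeCacheHeaders()["Cache-Control"]);
```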


The initial hit to hydrate / initialize the app.


What an incredibly boring, prosaic post which is almost completely devoid of anything useful. In fact, over half of the post is just the author waxing poetic about some tangentially related things. His arguments boil down to "To me, this seems like a bad idea."

Well, OK buddy. Then don't do it. Meanwhile thousands of web developers have been doing it for years and it has never been easier.


This is a nice idea, but unfortunately my work involves helping customers with all sorts of stacks integrate our technologies.

Rather than quit my job, it’s a bit easier to document and raise awareness about these issues.

Unfortunate that you didn’t enjoy the prose, can’t please everyone.


What is meant by "server-side rendering"? Is it the same as server-side on-demand html generation?

If so, then I think the term "server-side rendering" is somewhat confusing. Your server is "generating" HTML, not "rendering" it.

Rendering, in my view, means turning code (like HTML) into visual output. So whether the server always serves the same HTML, or regenerates it for every request, or something in between, the client/browser still has to "render" that HTML into visual output.

True server-side rendering I would think should mean the server renders HTML into a gif or jpg which then gets shown in the browser.


SSR always means outputting, on the server, the first HTML the user would see once the app has loaded. A SPA usually starts with a minimal, blank HTML file and a huge JS file that mounts the DOM structure; only then can the user see something. SSR does the exact same thing, but on the server. From the user's perspective, SSR seems faster. From an SEO perspective, SSR is better because there's some initial content on the page.
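A sketch of the difference in the very first response, with `renderApp` standing in for whatever the framework's render-to-string step does (all names illustrative):

```javascript
// Stand-in for a framework render (e.g. React's renderToString).
function renderApp(state) {
  return `<h1>${state.title}</h1>`;
}

// SPA: an empty shell; the user sees nothing until bundle.js has
// downloaded, parsed, and mounted the DOM.
const spaResponse = `<div id="root"></div><script src="/bundle.js"></script>`;

// SSR: the same component tree is rendered to a string on the server,
// so the very first response already contains visible content (and text
// crawlers can index it).
const ssrResponse = `<div id="root">${renderApp({ title: "Hello" })}</div><script src="/bundle.js"></script>`;
```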


throw all that complicated stuff away & use htmx

generate html the same ol’ way you always have

safe & effective modern hypermedia


Just use PHP.


How can a small site like HN possibly work when SSR is so complicated?


SSR isn't the classic model of server rendering; that one is easy. SSR means running your JavaScript client code on the server as well, producing (possibly cached) pre-rendered pages with subsequent hydration on the client.

It's confusing I know.
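A hand-rolled sketch of that idea, assuming the common trick of embedding the initial state in the page for the client to pick up (all names here are illustrative, not any framework's real internals):

```javascript
// The same render function is shared by server and client code.
function renderApp(state) {
  return `<p>Count: ${state.count}</p>`;
}

// Server: pre-render the markup and embed the state the client will need
// to "hydrate", i.e. attach event handlers to the existing DOM instead of
// re-rendering from scratch.
function renderPage(state) {
  return `<div id="root">${renderApp(state)}</div>
<script>window.__INITIAL_STATE__ = ${JSON.stringify(state)}</script>
<script src="/client.js"></script>`;
}

// Client (inside client.js, in the browser): reuse the embedded state,
// e.g. with React 18 something like
//   ReactDOM.hydrateRoot(root, <App {...window.__INITIAL_STATE__} />)
```

The complexity the thread is arguing about lives in keeping `renderApp` runnable in both environments.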


Ryan Carniato (@RyanCarniato) has a Hacker News clone in every major framework with SSR support, mostly built live on his streams!


+1, I really enjoy watching his content. His deep dives into certain technologies are truly valuable, if a bit long-winded.


I've been testing out Alpine.js recently and find it to be very nice to use. You can have reactive state and most of the conveniences of React, but all in your HTML without a build step.

I work with SSR React professionally, and many times I yearn for being able to just use good old-fashioned templating engines and client-side JavaScript - no builds or anything.


See https://remix.run for a modern, sane, progressive approach to server-based rendering w/ as-needed hydration.


Remix has the exact same issue unfortunately: https://remix.run/docs/en/v1/guides/constraints


This actually bit me multiple times. I started leaking server-side code to the client, which completely turned me off. Also I've been in several arguments with Remix community members as to why middleware is necessary for server-side frameworks and why there needs to be a global state option.

I was going to evangelize them at my current gig... thankful that I didn't go down that route and made my own prototypes. There are solutions (such as implementing your own caching layer) but ugh, why.


I'm very interested to hear more about those arguments w/ the Remix community; got links to share?

Re: leaking server code to the client, so far I've been well-served by the recommended preventative measures: audits for inadvertent side-effects, and/or ".server.tsx" filenames as needed. But your experience here may be more extensive. Could you expand on "made my own prototypes"?


Yep. But it does a pretty good job of communicating where the boundaries are and telling you when you've crossed them. It's not even close to a deal breaker.


Unfortunately that hasn’t been my experience. I learned about this problem after helping a very confused Remix user overcome this issue.

I will give kudos to Remix for having a good docs page about this, something Next.js could do better (unless I just couldn’t find the docs).

Also just wanted to say that I love your username.


I agree that the lines are a bit more unclear in practice and in the documentation. It genuinely takes some experimentation. That said, I think Next.js has a good reputation for good reason.

And thank you!


Honest question: it's clear to me that the term SSR has been repurposed to mean this complex "generate html reusing ~same code on backend/frontend and then hydrate it" thing. What do we call the classic style where the server just renders html using a template language?


Since the post mentions Next.js, it's worth calling out two streams of ongoing work that solve major pain points of SSR:

1. A filesystem convention (`*.server.ts`) for more cleanly separating client and server component trees

2. The introduction of a Web standards runtime[1] for SSR.

If anything, we're entering the best generation of SSR ever. We'll see new generations of apps shipping less JS to the client, rendering and personalizing at the Edge, closer to the user, and the infrastructure to support this will be serverless and economically very efficient.

[1] https://nextjs.org/docs/api-reference/edge-runtime


Is it believed that you'll do less overall compute and bandwidth if you use serverless functions to execute (parts of your) client, in place of sending static bundles to the client and letting them execute it?

I don't really get SSR. Isn't it more desirable to have more of the compute required to render the client done by the client's machinery? I see this as true in both corporate and public web scenarios: surely it's cheaper to let the public render the SPA than to render it for them; surely in corporate settings this takes better advantage of already-purchased machines with already-installed programs instead of wasting developer cycles recreating it; and surely it's better to have less server load and deployment complexity in both cases.


Not really, because any JS/HTML logic that you have needs to be downloaded into memory and compiled and interpreted by each client independently. Then, once loaded, any server-side data needs to be fetched via an API request and injected into the page in the proper place. Server-side rendering saves at least one round trip by fetching the data from the database and injecting it on the server side before sending the fully compiled HTML + CSS + JS as one static file.

So you have to have some server-side logic regardless, if you need server-side data. The small amount of memory and compilation overhead by the server-side rendering function is more than made up for by the time saved on network latency. It's just not a significant amount of computation regardless.

Importantly, the server-side is not loading a full DOM into memory before sending it down the pipe. It's just interpolating the resulting HTML/CSS/JS as if it were static after fetching the prerequisite data.


> Not really, because any JS/HTML logic that you have needs to be downloaded into memory and compiled and interpreted by each client independently

I see my question was unclear. I did not mean compute/bandwidth "in totality", I meant "in costs that would be incurred by the application owner".

> Server-side rendering saves at least one round trip by fetching the data from the database and injecting it on the server side before sending the fully compiled HTML + CSS + JS as one static file.

Sure, but the first hit would've been a cache hit anyways.

> The small amount of memory and compilation overhead by the server-side rendering function is more than made up for by the time saved on network latency

I don't really believe network latency would play in here. CDN would have the static bundle "at the edge" already for you. Static content is very easy to serve and break up into cacheable components for clients too; the Edge isn't really a revolution in that space. It's also unfair to present Edge functions as always hot in terms of latency here; again, I imagine the cost incurred for such features must be high for so little gain.


> Sure, but the first hit would've been a cache hit anyways.

Similarly, a serverless function that is "warm" already has all the logic needed to inject the static template into memory. It simply needs to call out to the database, which is physically closer to the server, and always quicker, without the need for an additional HTTP handshake...and then inject the data into the template, before sending it down the wire.

> CDN would have the static bundle "at the edge" already for you.

The static bundle just saves on download time. The bundle still needs to be loaded into memory on the client to create the Document Object Model that powers every webpage. Depending on how your client-side bundle is structured, it may be 100ms or 1s before it even begins to make the outbound request back to the API to fill in the necessary data to power your BI dashboard or whatever your use case is. And that's if your BI dashboard was structured to be optimal with all requests rolled up into one (additional architectural effort, easier to satisfy on SSR with multiple database hits having less impact to performance).

Edge functions like Cloudflare Workers have 0ms cold starts.

We're talking tiny fractions of a penny per request. For example, Cloudflare Workers have 100K free requests per day, and then it's $0.15 per 1 million requests after that.

Better user experience, and it might cost you pennies more per month vs. static + round-trip.


> Similarly, a serverless function that is "warm" already has all the logic needed to inject the static template into memory

Sure, I accept this.

> It simply needs to call out to the database, which is physically closer to the server, and always quicker, without the need for an additional HTTP handshake

I contest all three parts of this.

* physically closer

How can this be guaranteed? You have no control over the closest Edge function vs Cloud Server to the user. It's just as likely your Edge function is farther away from the Cloud Server than the user is, from my view. Perhaps your cloud provider is offering you routing services you claim are superior, but given the nature of my questioning is anti-vendor-lock-in you'll understand I mostly disregard that.

* always quicker

How can this be true? The user's machine is quite nicely guaranteed to be somewhat available for the compute here - they issued the request themselves after all. Your machine, on the other hand, has a good chance of cold starting for this request or being hot because a bunch of other requests are being issued to it simultaneously. Given we have async steps in what the SSR is doing, backpressure from sharing resources with other users sounds worse than what we risk with a client machine.

* without the need for an additional HTTP handshake

Again, your Edge function seems likely to either be cold or busy, so this is only something you can speak about probabilistically

> The bundle still needs to be loaded into memory on the client to create the Document Object Model that powers every webpage

HTML still needs to render a DOM, not sure you're hitting on the right points of the parsing... Plus if this page is meant to be interactive still, you've not gained anything with the initial SSR phase

> Depending on how your client-side bundle is structured, it may be 100ms or 1s to fill in the necessary data

It's just not a big deal, and it's laughable you'd make the example a BI dashboard because you rendered something nice for them initially in maybe 0.5s faster on a median request, in exchange for literally enormous complexity. But a BI user is probably going to be in there for a long time, interacting heavily with client-side JS you shipped and they had to parse and render and then also reconcile/hydrate

> easier to satisfy on SSR with multiple database hits having less impact to performance).

How is this easier to satisfy on SSR? You have minimal advantage on your Edge worker being closer to your database than the user, unless your Edge function is actually just sitting beside your database server. It has to make the same requests and transfer the same data the client would be transferring in order to perform this SSR, there are only savings here if your Cloud provider is offering you big vendor lock-in promotions.

> Better user experience, and it might cost you pennies more per month vs. static + round-trip.

> Cloudflare loss-leader pricing

Unconvincing.


> * physically closer

> How can this be guaranteed? You have no control over the closest Edge function vs Cloud Server to the user.

Because the user doesn't get to talk directly to the database. The database could be on the second floor of the user's building, but it's still going to have to get routed via an API. If you're using serverless functions for your API (presumably so, given this conversation), you still have the multi-hop issue to fetch data from the db. And in the case of a complex BI dashboard with multiple data sources, the extra hops can add up significantly.

> Again, your Edge function seems likely to either be cold or busy, so this is only something you can speak about probabilistically

A true Edge function has nearly 0ms latency to spin up from cold. They are distinguished from Lambda functions, which would alternatively be running in a data-center colocated with your database in most cases. Lambda functions do have a small cold start issue. But whether you're using Lambdas or Edge functions or Node.js containers, this doesn't affect the core issue.

> HTML still needs to render a DOM, not sure you're hitting on the right points of the parsing... Plus if this page is meant to be interactive still, you've not gained anything with the initial SSR phase

The HTML has to load into memory and only when it hits the JS script that actually triggers the XHR request does the call out for data occur. Depending on how your JS is bundled with your static HTML, those XHR requests may fire off at sub-optimal times. That is not something just automatically handled. It's something you have to consciously optimize for.

> It's just not a big deal, and it's laughable you'd make the example a BI dashboard because you rendered something nice for them initially in maybe 0.5s faster on a median request, in exchange for literally enormous complexity. But a BI user is probably going to be in there for a long time, interacting heavily with client-side JS you shipped and they had to parse and render and then also reconcile/hydrate

It's not laughable. A BI dashboard is an excellent use case for SSR, because it has multiple independent components that draw data from multiple API endpoints. This is precisely where SSR shines. Not sure why you're laughing. It almost sounds like you're being cognitively dissonant in the face of a practical example that conflicts with your thesis.

SSR is not complex with a framework like Next.js. It's incredibly simple. It also trivially toggles between static or server rendering depending on that particular page's data access pattern.

> How is this easier to satisfy on SSR? You have minimal advantage on your Edge worker being closer to your database than the user, unless your Edge function is actually just sitting beside your database server. It has to make the same requests and transfer the same data the client would be transferring in order to perform this SSR, there are only savings here if your Cloud provider is offering you big vendor lock-in promotions.

Multiple database hits? Simple: if you have a page with 10 components that draw from different API routes/endpoints, the server can resolve all of them before sending one rendered page, instead of waiting for each of those components to load into memory on the client side with janky placeholder data while it runs 10 HTTP requests (unless you've taken on the complexity of rolling up component requests to the top level, which can be difficult if, for example, the dashboard is customizable in some way from user to user).

Firstly, Edge functions aren't a prereq for SSR. You can totally deploy a Node.js container in a particular data center right alongside the DB instance if you need. Secondly, you also have access to numerous caching solutions if you really need to keep data hot. Thirdly, if you're using Edge for SSR, you're presumably already using Edge for your API. Fourth, if you really want to have data located near your edge functions, you can achieve this too! See: https://blog.cloudflare.com/workers-kv-is-ga/

> > Cloudflare loss-leader pricing
>
> Unconvincing.

I don't know what to tell you. AWS Lambdas are $0.20 per 1M requests: https://aws.amazon.com/lambda/pricing/ and they've been around for 6 years.

At any rate, I also suggest looking at the direction that Next.js + React is going in: https://calibreapp.com/blog/nextjs-performance

"With Server Components, you’re able to opt-in which parts of your application are rendered on the server and when client-side code is required. In Next.js, a server component will be denoted by filename, e.g., page-name.server.js, whereas a client-side component will use page-name.client.js.

In the following example, we can fetch content from a CMS, import a date-time utility (date-fn) and render markdown. The resulting client-side JavaScript from this page will only include the ShareMenu, which is dynamically loaded. React will resolve <Suspense> boundaries client-side."

So the direction it's heading, you'll get the best of both worlds.


> > * physically closer
> >
> > How can this be guaranteed? You have no control over the closest Edge function vs Cloud Server to the user.

> Because the user doesn't get to talk directly to the database. The database could be on the second floor of the user's building, but it's still going to have to get routed via an API.

But the API is colocated with the database in any sane setup. You're trying to argue "the extra hops can add up significantly", when using the plural is straight up illegal - it is at most 1 hop more than via Edge - and you still never satisfied how the Edge is "closer" to the database...

> If you're using serverless functions for your API (presumably so, given this conversation)

It's like you're not even listening to me... Cloud vendor lock-in, in this thread via serverless or Edge, is worthless. It has no benefits. You've tried for 4 comments to find 1 and you haven't.

> A true Edge function has nearly 0ms latency to spin up from cold

> Lambda as a pricing comparable

You're right, I'm sure they just fixed the top issue with Lambda, did it with no extra costs, and are passing true pricing onto you as they try to aggressively grow. No risk here.

Did you know it actually costs no extra money per request to an already running server instance? Incredible.

> Trying to describe how webpage rendering works after you failed

Not interested, you completely missed the point that BI applications are going to be long-running, therefore initial startup time - the literal only gain of SSR - is virtually irrelevant, and when you look at the median request issued by their app, by virtue of it being long running, your SSR only saved you time on the initial requests out of an arbitrarily large set of them. Your average request savings are epsilon defined on session length

> A BI dashboard is an excellent use case for SSR, because it has multiple independent components that draw data from multiple API endpoints

> This is precisely where SSR shines

Again, you failed to defend how Edge / SSR are "closer to the database" (because they're not), and now you're trying to argue SSR is going to load API endpoint data faster again? Come on man

I'm done with this, you were given explicit examples to work against and you just talk past my technical objection. I'm genuinely offended you are trying to say I'm the one with "cognitive dissonance" here when you sound like a cloud spokesman and can't actually defend clear technical details. Bye, enjoy giving your money to cloud companies and your time to fitting into their lock-in architectures


Yes, it's cheaper, but it's also a worse experience. It'll take longer for a user's crappy Android phone to render your page from source than for a beefy cloud server on the edge to do the same.


> It'll take longer for a user's crappy Android phone to render your page from source than for a beefy cloud server on the edge to do the same.

Yes, if you pay more money and integrate into cloud vendor systems, you can get more compute. Besides, the person with a crappy Android phone is irrelevant to monetizing your application anyways, so their user experience is pretty irrelevant, surely? I don't really understand the idea of spending money to improve the experience for people unlikely to spend money on your thing. And, again, the "beefy cloud server" experiences load from a shared user pool, so I doubt there's just flawless performance waiting for everyone all the time; if there is, surely you see "loss leader" written all over it.


- depending on use case, you don't need to pay much to get these perf benefits (e.g. caching snapshots on CDNs)

- monetization depends on your business. How about B2B, where customers are employees using crappy corporate laptops? It's not like the only high-value customers are on high-end computers.

- and besides, maybe you have ethics and don't want to lock out people who don't have the latest high-end gear.

- flawless isn't the point; better is the point


Doing something fully on the client is a nice idea, if your users all have fast connections, high processing power, and are all visiting from a desktop instead of a mobile device.

As soon as you have anyone visiting on a mobile device, and particularly on a mobile network, it's far superior to render it on the server.


You're right, it's a far superior pattern to waste tons of dev cycles to enable your team to give money to AWS so AWS can execute the logic their device would already do, just so your initial page load is 50% faster and your app otherwise performs identically, except the huge stutter step when you try to hydrate the state to match what your server rendered, which looks like hot garbage.

HN is not full of front-end devs, I can tell that much.


Your viewpoint is valid but it's a philosophical argument, not a pragmatic one.

I'm also not sure what you mean by 'stutter step'. I would posit that 95% of visitors never even notice hydration.


> Your viewpoint is valid but it's a philosophical argument, not a pragmatic one.

I think not locking yourself to specific cloud architectures is quite pragmatic, but to each their own.

> stutter step

SSR advocates: parsing JS and interacting takes way too long, it's so noticeable and terrible!

Me: so if you serve a pre-rendered-via-SSR page AND a bundle of interactivity, there's going to be a nice set of static content that will be uninteractable during that entire terribly long parsing and loading phase?

SSR advocates: Never heard of such a thing, parsing JS and getting interactivity is instant!


If you look at it from a societal perspective SSR is better, it is easier to place servers closer to a clean energy source rather than every client.


Come on, Trisomorphic rendering or Distributed Persistent Rendering are pretty simple, basically these are just [insert joke about monoid in the category of endofunctors].

https://en.wikipedia.org/wiki/Hydration_(web_development)#Tr...

https://www.smashingmagazine.com/2022/04/jamstack-rendering-...


I figured this out several years ago when I tried to build a full-stack framework to seamlessly connect the front end and back end into a single development experience (kind of like what Meteor and Next.js turned out to be). I ended up cancelling that project, pulled out the RPC + pub/sub internals and spun it off into a separate set of libraries which became successful on their own.

I've been saying this for years. Although code reuse is possible between server and browser and sometimes it saves a lot of time, it's not common enough to be a default (as part of a monolithic framework) and there are security implications which make it highly desirable to differentiate between the two.

I do think frameworks are overused. IMO, frameworks make more sense in DevOps for infrastructure orchestration, where they provide resilience and scalability. Kubernetes, for example, makes sense as a framework: critiques of its complexity are not related to its framework-ness, and there is literally no other way to do orchestration, since orchestration is the top layer that runs everything and sits above everything else (including hosts and processes). Frameworks make much less sense in the development space IMO; they're mostly a way for companies to achieve lock-in.

Even on the front end, Google's Polymer team already proved, through their Web Components polyfills, that frameworks were not necessary to achieve reactivity. A simple, lightweight library is usually enough.


The original server side rendering (HTML rendered on the server by your choice of language) is simple, efficient and most sites on the internet still use it, including this one.

The article is talking about ‘modern’ js style server side rendering combined with client side rendering, which is of course a huge dumpster fire and encourages absurd complexity on both sides of the equation, starting with the fact the app is split in two.


It's mind boggling some people don't know you can render plain html on the server, or even submit forms without javascript.

Such a basic building block seems to be fading away nowadays, sometimes even presented as a "novel" thing or a "feature" (e.g. Remix).
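For anyone who hasn't seen it, the building block in question is just this (a hypothetical Node sketch with made-up names; any server language works the same way):

```typescript
// A plain HTML form that posts back to the server with zero client-side JS;
// the server regenerates the whole page with the new state baked in.
export const page = (greeting = ""): string => `<!doctype html>
<form method="post" action="/">
  <input name="who">
  <button>Greet</button>
</form>
${greeting ? `<p>Hello, ${greeting}!</p>` : ""}`;

// On POST, parse the urlencoded body and render the same page again.
export const handlePost = (body: string): string =>
  page(new URLSearchParams(body).get("who") ?? "");

// Wire it up with node:http's createServer: GET -> page(), POST -> handlePost(body).
```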


We've gone full circle in JS-land already, and it will keep going. In circle...


This could probably be mitigated by a combination of algebraic effect handlers and session types for effects.

Algebraic effect handlers let you dynamically (re)define how effects are handled for a given piece of code.

And session types for effects would let you specify protocols for effects. For example, an HTTP response handler would not be granted access to the socket, but would be allowed to use dedicated `statusCode`, `header`, `body` and `trailer` effects, and would not be allowed to use them in a way that violates HTTP. Session types let you check that at compile time.
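As a rough illustration of the session-typed idea (a hand-rolled TypeScript sketch with hypothetical names, not a real effect system): a phantom `state` field tracks where you are in the protocol, so writing a header after the body is a compile-time error.

```typescript
// Protocol state advances status -> headers -> body; each function only
// accepts a response in the right state and returns it in the next one.
type Res<S extends "status" | "headers" | "body"> = {
  state: S;          // phantom protocol state
  chunks: string[];  // wire-format lines accumulated so far
};

const start = (): Res<"status"> => ({ state: "status", chunks: [] });

const statusCode = (r: Res<"status">, code: number): Res<"headers"> =>
  ({ state: "headers", chunks: [...r.chunks, `HTTP/1.1 ${code}`] });

const header = (r: Res<"headers">, k: string, v: string): Res<"headers"> =>
  ({ state: "headers", chunks: [...r.chunks, `${k}: ${v}`] });

const body = (r: Res<"headers">, text: string): Res<"body"> =>
  ({ state: "body", chunks: [...r.chunks, "", text] });

const render = (r: Res<"body">): string => r.chunks.join("\r\n");

// A legal sequence compiles; header(body(...), ...) would be rejected,
// because Res<"body"> is not assignable to Res<"headers">.
const wire = render(
  body(header(statusCode(start(), 200), "Content-Type", "text/html"), "<h1>hi</h1>")
);
console.log(wire);
```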


This isn't real "server side rendering". This is just generating simpler HTML. Real rendering on the server would mean sending a canvas or an image or video, like "cloud gaming".

All this complexity seldom translates into sites that do much. There are real "applications in the browser" that let the user do something, but those are rare, and most are toys. Most of them could have been written in Flash, anyway.


SSR in terms of webdev means what this post describes: generating HTML on the server. “Rendering” is a bit of a misnomer; a more apt title would be “server side HTML generation”, but that’s a bit long.

The alternative to SSR is CSR (client side rendering). Think React SPAs.


I know, I know, the webcrap crowd uses that terminology. Nobody else does. As someone who spends most of their time right now inside of 3D game-type renderers, I have a hard time thinking of cranking out HTML as "rendering".

What does that crowd call the part of the browser that does layout and display?


Layout and paint.

I don’t think using the term “rendering” for interpolating data into a template is anything new, though, nor is it bad or even confusing that it means one thing in the context of computer graphics and another in the context of programming language templates. ¯\_(ツ)_/¯


Front end complexity - Making applications isomorphic so you can re-use code on the client and server

Hacker News commenter: Ugh, why is this problem so over-engineered. It's completely unnecessary

Back end complexity - Two hour Google article from yesterday that goes through layers of C++ and raw assembly that results in a 2-3% speed up by basically using SIMD and turning off branch prediction

Hacker News commenter: Wow! So cool!


Phoenix LiveView provides a sane solution to this problem.


What if you need optimistic updates?


You can do optimistic updates with JS


Nexus DocUI is server-side defined, but client-side rendered. It doesn't have real-time updates yet, you have to reload the page, but I'm working on a plan to implement that. Currently renders the front-end with Flutter, the back-end could theoretically be any language since it sends JSON to the front-end.

https://nexusdev.tools


The common mistake teams make when getting started with server-side rendering for React is thinking that your entire backend has to be contained in a single JavaScript bundle. Instead, use any language you want to set up your APIs and business logic. Then set up a fully independent pool of servers running Node.js whose only job is SSR. This setup skips over every problem the author mentions.
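A minimal sketch of that split (all names and the API origin are hypothetical): the render tier is a dumb Node process that fetches JSON from the API tier and turns it into HTML, nothing else.

```typescript
import type { IncomingMessage, ServerResponse } from "node:http";

// Business logic lives behind this origin, in whatever language you like.
const API_ORIGIN = process.env.API_ORIGIN ?? "http://localhost:8080";

// Pure HTML generation, trivially testable in isolation.
export const renderPage = (p: { name: string; price: number }): string =>
  `<!doctype html><title>${p.name}</title><h1>${p.name}</h1><p>$${p.price.toFixed(2)}</p>`;

// The render tier's entire job: fetch JSON from the API tier, emit HTML.
export const handler = async (req: IncomingMessage, res: ServerResponse) => {
  const apiRes = await fetch(`${API_ORIGIN}/products${req.url ?? "/"}`);
  res.setHeader("content-type", "text/html");
  res.end(renderPage(await apiRes.json()));
};

// Wire up with: createServer(handler).listen(3000)
```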


When I realized what the OP was describing, it honestly caught me off-guard. I had never heard of anyone attempting to colocate their API server and their FastBoot/Next.js server in the same Node instance. Node is single-threaded. If you try to run lots of different things within the same Node VM, a runaway CPU-bound loop in your frontend can wind up causing the backend to lock up. Bad failure mode.

If your API server is separate from your Frontend server, you don't just make maintaining it more reasonable (the "server colored functions" are now a strict subset of the "client colored functions"). You also get operational flexibility: run the Frontend server in an edge function system [1], while running the API server alongside your DBMS. Isn't that what edge functions are for?

[1]: https://news.ycombinator.com/item?id=31084301


The newest versions of Next.js have API routes that are independently deployed as serverless functions, so they do not interfere with the SSR instance.


Yeah, I have recently learned Next.js WITH that and so I haven't experienced the pain points. Also using Next.js with something like Firestore (etc.) is helpful. Next.js seems to recommend delegating in their documentation (e.g. need a scheduler? Use GitHub Actions).


SSR feels like an echo of JSF (that's JavaServer Faces for you youngins) - an exceptionally complicated way of doing simple things. IMO, SSR will follow the same arc in history - a brief period of popularity, followed by a lot of "what on earth were we thinking". Client side rendering is much simpler and cleaner.


SSR is generally for SEO and performance


If you need SEO and (to a lesser degree) performance, you're likely serving content rather than an application and likely could've done with SOR (a new moniker I've come up with, Server Only Rendering).


Except that many applications have public content that you want to be indexed and interactive for the user e.g. forums and ecommerce


I'd say if it's timely content, e.g. you have a stream of comments flowing from the server to the client on a livestream then doing it all in JS is warranted, but the content has poor long-term value and indexing is pointless. Conversely if it's comments on Reddit, you can do any number of things that stop short of rendering everything in JS and I don't think your users will even notice.

I think the overarching theme is that you end up having to think about these things: if it was clear-cut we wouldn't even be having these discussions. Do you want to think about wiring event handlers to DOM elements rendered on the server or about how your JSX renders on the server vs browser? A similar conundrum exists in mobile development, do you want to be thinking about implementing everything twice in standalone iOS and Android apps or do you want to think about how to get Xamarin/ReactNative/Flutter to do the thing you want.


I've never seen an ecommerce or forums app that wouldn't be done better by more traditional techniques vs. an SPA. Interactive doesn't need to mean "driven entirely by Javascript", and also sites that use entirely HTML generated by a server can still use Javascript (there's even frameworks for this!)


For e-commerce probably checkout and perhaps the buy button needs to be interactive. Rest of it can be normal pages.

You can use different approaches depending on what you serve.


Googlebot can render with javascript now, so why do we still need SSR for SEO?


Not reliably. From my experience, the major search engine bots only see whatever is in the DOM at the DOMContentLoaded event, or within the first X seconds, which doesn't always capture everything due to JavaScript's async nature.

You're also screwing your social media presence since most sites (Facebook, Twitter, Discord, etc.) only read the open graph tags of the initial document since they need instantaneous results.


What is absurd is doing SSR with SPA frameworks, instead of how it has always been done.


Laravel Livewire / Phoenix LiveView and a lot of new frameworks are now focusing on bridging this gap between SSR and client-side, almost trying to make it seamless.

Though only time will tell what can of worms that will open later.


To be fair, .NET has had stuff like that for ages. I've never used any of it though.


    foreach ($seo_urls as $path) {
        $hash = hash('sha256', $path);
        exec("wget http://mywebsite.com/$path -O $project_root/seo/pages/$hash");
    }

-----

    if ($agent_type == 'search_engine_bot') {
        $path = $request->path;
        if (in_array($path, $seo_urls)) {
            $hash = hash('sha256', $path);
            echo file_get_contents("$project_root/seo/pages/$hash");
            exit;
        }
    }

----

Would this work?

Edit: well, I guess not, because wget can't render JavaScript. So maybe with a headless browser?
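The serving half of that idea, sketched in Node (paths, directory, and bot list are all made up); the prerender half does indeed need a headless browser, since wget can't execute JS:

```typescript
import { createHash } from "node:crypto";

// Hypothetical location where prerendered snapshots were written.
const SNAPSHOT_DIR = "/var/www/seo/pages";

// Hash the request path the same way the prerender step did.
export const snapshotFile = (path: string): string =>
  `${SNAPSHOT_DIR}/${createHash("sha256").update(path).digest("hex")}.html`;

// Crude crawler detection by user agent (illustrative list only).
export const isBot = (userAgent: string): boolean =>
  /googlebot|bingbot|duckduckbot/i.test(userAgent);

// Prerender step (offline): drive a headless browser over each SEO URL,
// wait for the page to settle, and write the resulting HTML to snapshotFile(path).
// Serve step: if isBot(req.headers["user-agent"]) and a snapshot exists,
// stream the file; otherwise fall through to the normal SPA shell.
```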


The only thing 'absurdly complex' about this is the Javascript portion. Server side component based frameworks exist and work very well for this purpose.


The article criticizes "the absurd complexity of server-side rendering" by starting with "In the olden days, HTML was prepared by the server..."


There's a huge difference between traditional server side web applications and the modern meaning of SSR. If you don't know the difference then this article probably isn't for you.


My personal worry (could be unjustified): SSR seems much less secure compared to plain old client-side stuff. Sure, static client-side is less SEO-friendly, but it's not going to compromise your servers! Considering the fast-moving and relatively patchy security of the npm ecosystem, things like prototype pollution vulns are reported almost weekly if not daily, and you're lucky if a fix even exists that you can upgrade to!


That’s why I still use Perl CGI scripts. They just render.


Ah, someone who has discovered that writing in tiny paragraphs makes for addictive reading even if the substance is minimal.


Hum... You mean JS needs generic monads?

TypeScript supports them, doesn't it? This shouldn't be too hard to fix, but it's a matter of rewriting or encapsulating all of the core language's API.


What does any of the gist have to do with monads?


Principled, explicit handling of various function colours.

If implemented like Fantasyland, probably a terrible idea.


Which, again, has very little to do with any of the gist.


Hiya, I wrote this gist. I do actually think Monads or similarly expressive type constructs would allow us to eliminate this class of issue for good.

Hopefully the rise of Wasm makes this problem irrelevant for functional programmers, but that’s a whole other can of worms.

That said, it’s probably impractical to try and achieve that sort of type safety in JS via the Monad route, but maybe a clever TypeScript extension could be devised.


Monads are just type constructors with two operations: bind and pure. I fail to see the connection, or why you need a type constructor with bind and pure operations at all to solve SSR.


I was thinking you could use them to specialize the IO types to prevent accidentally calling the wrong type of function in the wrong place.

Say you have a `ServerIO a` and `ClientIO a` type. Browser APIs, like addEventListener, would use the ClientIO type. Server-specific APIs like fs would use the ServerIO type. That would prevent the entire issue I've described, and allow users to confidently make changes to their shared code.
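A hand-rolled TypeScript approximation of that idea (no real effect system, just a phantom type parameter; all names are made up):

```typescript
// Tag IO actions with the side they may run on.
type Side = "server" | "client";
type IO<S extends Side, A> = { side: S; run: () => A };

const io = <S extends Side, A>(side: S, run: () => A): IO<S, A> => ({ side, run });

// bind keeps a computation on one side.
const bind = <S extends Side, A, B>(m: IO<S, A>, f: (a: A) => IO<S, B>): IO<S, B> =>
  io(m.side, () => f(m.run()).run());

// Stand-ins for fs (server-only) and addEventListener (client-only):
const readConfig: IO<"server", string> = io("server", () => "db=prod");
const onClick: IO<"client", void> = io("client", () => {});

// Only a matching executor can run an action.
const runOnServer = <A>(m: IO<"server", A>): A => m.run();

const program = bind(readConfig, (cfg) => io("server", () => cfg.length));
console.log(runOnServer(program)); // 7
// runOnServer(onClick) // type error: a client action can't run on the server
```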


Sorry I'm gonna just stick with server-side rendering with Django templates. All web development 'advances' are a treadmill to nowhere, waste of time.


You're all clueless.


so clueless lol. missing the entire point



