The Dopefly Tech Blog

Join Nathan Strutz as he shoots the breeze on techie geeky web dev stuff.

React is Harmful - 25 Reasons Why

posted under category: General on January 4, 2023 by Nathan

I don't apologize for the clickbait title. I've worked on the web for over 25 years now. I've seen a lot of things come and go. I've seen good technology and bad, both win and lose.

Over the past handful of years, I've become less and less of a fan of React. The more I think about it, the more I believe that React is the worst popular framework (sorry, "LIBRARY") that exists. Let me explain why.

Prologue

React has been wildly popular ever since the Facebook engineering blog posted their first articles about it in 2013, explaining the need for state-driven interfaces and the flux data pattern. Ever since, React has been a pioneer in the lightweight JavaScript component framework space, incubating a lot of new ideas. React is so popular that it even makes job-hunting easier - when you read "React" in a job post, it signals a difference from the older generation of jQuery JavaScript apps.

I first learned React in 2016, when I was hoping to find something to replace jQuery - something that would let me add HTML templates in an integrated framework without another library like Mustache or Handlebars. Those were the best things I knew about at the time, though Angular would have probably sufficed. I was given the perfect application to branch out and try new things - an easy lay-up for a capable JavaScript framework. It seemed like everybody at the time was talking about React, so I should try it.

That's when I was blindsided by npm, node_modules, 8-trillion javascript packages that were all 2kb, and then the big ones - Babel.js and Webpack. I backed off and did it my own way for this project.

A year or two later, I found Vue.js, which actually solved all of my problems with React at the time. The following year, I was handed another project written in React. It was a SPA. I studied and learned, and I expanded it and helped stabilize what we had built, experimented with it a lot, pushed it to its limits, decided what I liked and did not like, became a company expert in React, and then realized that so much of it was too brittle to change and grow in a meaningful way, plus I... had the time... so I rewrote it in Vue - it took 7 days (even though I thought I could do it in 5). Everyone loved the result.

The next year, I found myself with another React app, this time a much larger app with a team of developers and years of history. At first I was happy to nurse it along and add features until I got reassigned somewhere else, but then I started to notice the underlying issues. This one had serious architectural problems, reactivity problems, complexity problems, and performance problems, plus new features that would not be easy to add on. We decided to rewrite it, and I pushed my preference for Vue.js on the team. It took 6-9 months, but we made something scalable and capable that is much easier to work with. I will admit that most of my interactions with React have been bad applications, so that's where my rant really begins.

1. React is bad by default. A successful React application must be backed by very experienced developers. Newcomers aren't likely to fall into the "pit of success" - it's more like mountain climbing - you have to prepare, you have to bring a guide, you have to study your approach step-by-step. If this is your first time on the mountain, you won't summit.

2. React has a doctorate-level vocabulary: "Immutability," "Memoize," "Hydrate," "Pure Component," "useEffect," "Synthetic Event."

These are words that come directly out of a DCS degree program but don't make sense to non-doctoral, even native, English speakers. It's not that these words and concepts are hard to understand, it's that they're unfriendly - hostile even.

3. The Lifecycle Event vocabulary is just stupid. I'm glad that we have mostly done away with some classics like componentWillReceiveProps, which was infinitely confusing, but shouldComponentUpdate or getDerivedStateFromProps - what are we doing here? To contrast, Vue.js simply has beforeMount and mounted, beforeUpdate and updated, and so on, while Svelte has onMount, onDestroy, beforeUpdate, and afterUpdate.

4. Hooks are yet another brainiac term that is contextually unclear. Hooks somehow simultaneously do everything and nothing. They replace state and they are the new event system and they are the easier way to cache things and they are the best way to debug everything. But how do they work? And why? Little explanation is given.

5. Why do hooks have to be the first thing in a component? This feels like a code smell that indicates poor design.

We're going to hear this a lot: Let's compare it to Vue. Vue implemented a hooks system, but they took out the odd placement requirement - you can put them anywhere in Vue.
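
For anyone who hasn't hit the rule yet, here's roughly what it looks like in practice - the component and prop names are invented for illustration:

import { useState } from "react";

function Profile({ userId }) {
  // Hooks must run unconditionally, at the top level of the component,
  // because React tracks them by call order on every render.
  const [name, setName] = useState("");

  if (!userId) {
    // Calling useState down here instead would break the "rules of hooks."
    return <p>No user selected</p>;
  }

  return <p onClick={() => setName("clicked")}>{name || "(no name)"}</p>;
}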

6. When we add Redux, we add more vocabulary problems, specifically the way Redux redefines words incorrectly, for example (a quick sketch follows this list):

  • Action - A tiny data structure that doesn't do anything (not an actual action)
  • Reducer - unintuitively, changes the state (immutably)
  • and Dispatcher - a useless convention to do a switch/case with actions for some reason
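
Here's a quick, hand-rolled sketch of those three terms in plain Redux - the store and action names are made up for illustration:

import { createStore } from "redux";

// An "action" is just a plain object describing what happened - it does nothing on its own.
const addItem = { type: "cart/addItem", payload: { sku: "ABC-123" } };

// The "reducer" is the thing that actually changes (well, replaces) the state.
function cartReducer(state = { items: [] }, action) {
  switch (action.type) {
    case "cart/addItem":
      return { ...state, items: [...state.items, action.payload] };
    default:
      return state;
  }
}

// "Dispatching" hands the action to the store, which runs it through the reducer.
const store = createStore(cartReducer);
store.dispatch(addItem);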

7. When we use Redux, or any state management library that loosely follows the Flux pattern, immutable state creates many copies of our state in memory. A lot of state changes at one time will run up our memory usage very quickly and cause additional garbage collection, which may happen at any time, and can affect the performance of our applications.

While an immutable state avoids a number of problems, and may solve some very complex concurrent data issues, a large immutable state is a problem on its own.
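
To make that concrete, here's a toy example of the kind of allocation churn I mean - not from a real app, just to show the shape of the problem:

// Every immutable "change" allocates new objects along the changed path,
// and the old copies sit around until the garbage collector gets to them.
let state = { filters: { search: "" }, rows: [] };
const history = [];

for (let i = 0; i < 10000; i++) {
  state = {
    ...state,
    filters: { ...state.filters, search: "query " + i },
  };
  // keeping every version (e.g. for time-travel debugging) means nothing can be collected
  history.push(state);
}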

8. If we added Redux to an app before 2020, we absolutely needed to have added Thunk and Saga for asynchronous operations, then Reselect to "memoize" (or, cache) the selects from the global state, plus Immer to shrink our reducers' scope. None of this is documented, of course. None of this is official. You'll only see it if you read the right blog on the right day, but this information is basically required if you want your app to work the way you expect it to.
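
For the uninitiated, that stack ends up looking something like this - the state shape and names here are invented:

import { createSelector } from "reselect";

// A "thunk" is just a function that the redux-thunk middleware knows how to run.
export const loadUsers = () => async (dispatch) => {
  const response = await fetch("/api/users");
  dispatch({ type: "users/loaded", payload: await response.json() });
};

// Reselect "memoizes" (caches) a derived value so it isn't recomputed
// on every unrelated state change.
export const selectActiveUsers = createSelector(
  (state) => state.users.list,
  (list) => list.filter((user) => user.active)
);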

9. If we added Redux after 2020, we would absolutely need Redux Toolkit (unless you're a masochist), which of course has its own additional vocabulary.

  • A Slice - sort of a module of the state with selects and reducers all-in-one
  • AsyncThunk, EntityAdapter, I'm not even sure what these are

Redux Toolkit makes Redux a lot more manageable but adds more overhead, both physically (in bytes) and mentally.
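
For what it's worth, a slice looks roughly like this (the names are invented, but the createSlice API is real):

import { createSlice } from "@reduxjs/toolkit";

// A "slice" bundles one module's state, reducers, and generated actions together.
const cartSlice = createSlice({
  name: "cart",
  initialState: { items: [] },
  reducers: {
    // Immer runs under the hood, so this "mutation" is really an immutable update.
    addItem(state, action) {
      state.items.push(action.payload);
    },
  },
});

export const { addItem } = cartSlice.actions;
export default cartSlice.reducer;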

10. Redux, and all flux-patterned global state management systems add an incredible amount of complexity.

Complexity adds bugs, so this is a simple formula: reduce complexity to remove bugs. Remove Redux to reduce complexity. Therefore, remove Redux to make better applications.

But of course we need some kind of global state management. Therefore, we should all be looking for the simplest solution to reduce our bugs. Even though Redux is the most-default-looking choice, we should shop around. Instead of Redux, try MobX, or Recoil, XState, Hookstate, Akita, ClearX, Rematch, or any of the, roughly, 50 different great choices available.

11. And that brings us to the problem of choice. React has no default path to success. There are no "easy" decisions. Every choice presents fifteen options, none of them advantageous over the others; every library needs seven more; each choice fractally branches out to infinity, with options forever.

12. React is bloated.

  • If you add React + React Router + Redux + (reselect+thunk+immer) + React DOM, you get about 300kb of framework files
  • If you do the same for Vue, which BTW is just Vue + Vue Router + Vuex, it's only 100kb
  • If you do it for Svelte, it's essentially 0kb (not in reality, but Svelte is like a kind of magic)

13. React carries around features that only Facebook wanted to add. For instance, no one else was interested in...

  • "Concurrent Mode" - billed as interruptable UI re-rendering
  • "Portals" - Which I think finally went into production; it renders a component somewhere else in the page
  • "Transitions" - Which are supposed to help with loading new content
  • and "Suspense" - Which creates a framework-preferred way of building a "loading" state for components

14. React is slower than Vue and Svelte and a number of other comparable, very capable choices. It uses more memory. It takes longer to start up. This is measurably true in Stefan Krause's js-framework-benchmark, which he publishes every month or so, in sync with Chrome updates.

Don't freak out, React is still fast and it does a lot of things well. It just isn't the fastest, most powerful, most scalable thing out there. Not by a long shot.

15. React-DOM is a separate library. Imagine me, a veteran web developer, attempting to add React to a web page. Yeah. It turns out React doesn't do anything unless it can interact with the DOM through a second library that weighs in at 20x the size of React.

I do understand why it's separate - it's because the external-facing parts can be swapped out to interact with something that's not a web page, such as a mobile app. That doesn't excuse the fact that this is confusing, unintuitive, and probably not the best solution possible.

16. I can't just add React to a web page. I would have to add React, and React-DOM, and then a way to translate JSX to HTML and React to JavaScript, so that requires Babel.js plus a huge bundle of plug-ins specific to React and JSX. They say you can do this in under 10MB but I've never seen it done in under 30MB. Imagine serving a 30MB JavaScript application only capable of outputting "Hello World!"

The reality is, we also need Webpack to bundle and serve our React app, and the easiest way to do that is through the React CLI (Create React App). To contrast this, by the way, we can add Vue to a web page with a script tag, like jQuery, and just start using it.
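
For reference, the whole Vue setup can be as small as this - a sketch in the Vue 2 style I was using at the time:

<div id="app">{{ message }}</div>

<script src="https://cdn.jsdelivr.net/npm/vue@2"></script>
<script>
  // No bundler, no build step - the template lives right in the page.
  new Vue({
    el: "#app",
    data: { message: "Hello World!" },
  });
</script>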

17. The React CLI is powerful, but limited, and breaking out of those limitations requires us to eject our applications from the CLI. Ejecting from the React CLI is a one-way operation, and it leaves us with a 2MB (or more) webpack.config file - these are usually around 1-20kb - and this 2MB size is unmaintainable and impossible to work with. While it could be a testament to how much the React CLI is doing for me under the hood, the reality is that it's inconsiderate.

18. Code-splitting is a nightmare. Who among us has attempted to split our webpack chunks with React? Code splitting is an amazing performance enhancement that can cut down the initial download and processing our users have to go through, letting them download the rest of the app as they explore it. And for some reason, this was hellacious for me, multiple times, in multiple applications, over multiple years. This caused big problems trying to make my React applications more scalable. Again compared to Vue: Vue makes this trivial.
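
For context, the standard recipe is a dynamic import plus React.lazy, roughly like this (the component name is invented); getting webpack to actually emit and serve those chunks the way you want is where the pain starts:

import React, { lazy, Suspense } from "react";

// The dynamic import() tells webpack to emit this component as a separate chunk,
// downloaded only when the user first navigates to it.
const ReportsPage = lazy(() => import("./ReportsPage"));

export default function App() {
  return (
    <Suspense fallback={<p>Loading...</p>}>
      <ReportsPage />
    </Suspense>
  );
}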

19. With React, you either choose JSX or pain. No one in their right mind would code an application with React.createElement() instead of JSX. A lot of the benefit of React is the way that JSX integrates. JSX is the default templating engine, the only templating engine, and there are no other choices. It feels like an OK thing because JSX is passable, but it also feels a little bit like vendor lock-in. Switching templating engines is not something that people do in React, however it's something we can do in Vue, and in a lot of other frameworks.

20. The reason I bring up our inability to switch from JSX is that JSX sucks. For instance (the first two are shown in a small snippet after this list):

  • To add a CSS class to an HTML element, we have to call it className="" - this is the cause of a lot of errors I've seen (and caused).
  • Similarly, the <label> tag's for attribute is illegal; we have to call it htmlFor - it's ridiculous to me that HTML is invalid by default.
  • JSX is XML. I thought we fought the good fight and squashed most of the XML in the world, but here we are again.
  • CSS with JSX is such a problem that there are 63 CSS-in-JS frameworks for React!
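
Here are the first two in a tiny snippet, for anyone who hasn't been bitten yet:

// The HTML in your head:
//   <label for="email" class="field-label">Email</label>
// What JSX insists you write instead:
export default function EmailLabel() {
  return (
    <label htmlFor="email" className="field-label">
      Email
    </label>
  );
}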

21. React state properties are not really "reactive." When we change the state through an official means, like setState or through a useState hook, React re-builds the entire component, and potentially the component tree. Of course the virtual DOM will keep the screen painting as small as possible, but component re-rendering can still be expensive. Instead of saying that properties are reactive, I would say that state changes in React are overreactive. Again, when we compare this to Vue, every state change is immediately, minimally, and truly, reactive.
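
A quick way to see that "overreactive" behavior for yourself - a toy counter, nothing more:

import React, { useState } from "react";

export default function Counter() {
  const [count, setCount] = useState(0);

  // This logs on every click: the whole component function re-runs for each
  // state change, and React then diffs the virtual DOM output it returns.
  console.log("Counter rendered");

  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}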

22. In 2017, a potential change to the React "OSS" license could have allowed Facebook to revoke the ability to use React to anyone they wanted; imagine writing 6 million lines of code in React and then being told we can't use it anymore because someone violated their policy. That's a game over. It's a bankruptcy waiting to happen. Of course FB recanted the whole idea, but there's still a strange potential for something like this, isn't there?

23. The Other Facebook Dilemma - if Facebook wants to change something in React, they just will. Companies - even huge ones like the one where I work - have no say in the matter. Conversely, if you need a feature changed in React, it's not likely to happen.

If you need a feature changed in a smaller ecosystem like Vue or Svelte, I guarantee that a healthy donation can move a lot of code.

24. I dislike how so many JavaScript programmers, especially those less experienced, treat React as if it's the only way to write JavaScript now, like there's never been another way. Does anyone remember how annoying it used to be that every JavaScript answer on StackOverflow started off by suggesting jQuery? We are close to a similar place with React today.

I do not want less experienced programmers to think React is the way JavaScript works, or the way all programming works. The world is so much bigger than React. React is not the best choice for most things, and I hate seeing people choose it without thinking through the options. You can do better. We can all do better!

25. We tell new developers that JavaScript is easy, then we give them React, which feels kind of easy, but then they get NPM, JSX, Redux, and TypeScript (which I actually do enjoy, but the ecosystem around adding a transpilation step to JavaScript is a big pill to swallow). JavaScript really should be easy, and React really is the opposite.

Epilogue

I think we all owe it to ourselves, and especially to all the new programmers out there, to stop choosing React as the default. There are plenty of occasions where it may be a good choice, but I don't think it's very often.

What do you think? I bet that if you've been a fan of React, this is pretty inflammatory, isn't it? I admit, I probably don't know as much as you do on the subject.

On the other hand, you may, like me, be sick of seeing React everywhere. In that case, you're probably thinking of a few more points I could have brought up.

Whether you're seething with rage or want to pat me on the back, feel free to drop a comment or tweet me back. I'm looking forward to hearing from you.


4 Reasons why you should write your own feature flag system

posted under category: General on January 3, 2023 by Nathan

A powerful feature flag system to supercharge your continuous deployment pipeline is easily within your reach! It's literally the simplest system to build and add to your application.

I've been over why you would want your own feature flag system when I wrote about building my own. Make sure you know the benefits so you can keep the end goals in mind.

Since you're sold on the idea (as all devs really should be), I've compiled this list of reasons why you should build, not buy (or download), your feature flag system.

  1. It's so easy

A feature flag system is ridiculously simple to get started with. It starts with an "if" statement and moves up in complexity from there. Where does it stop? It depends completely on you, your dev team, and how many features you need to create.
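
Seriously, version one can be this small - the flag name here is just an example:

// The whole system, version one: a list of enabled flag names and an "if".
const enabledFlags = ["new-dashboard"];

if (enabledFlags.includes("new-dashboard")) {
  console.log("render the new dashboard");
} else {
  console.log("render the old dashboard");
}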

  2. The devs are the experts

The developers who work on a system are the only ones who know how to modify the system correctly. A library doesn't know the best way. An external tool doesn't know how best to integrate. It's the developers who know, and the developers who will ultimately do the work.

  3. Bring your own opinions

When you build your own software, you have your own opinions about how things should work. This is ordinary software development. If your manager or product owner asked you to design a feature flag system for your application, you would have a very specific way of designing it, building it, integrating it, naming it, and controlling it.

To state the inverse, you are also rejecting outside opinions that you don't need.

  4. Build what you want

Your own feature flag system can be as small as you want it, with as little overhead as you care to create. It can be directly inline with your code, or it can stand to the side. You make the universe the way you see fit. Creating your own software gives you the freedom to do it any way you want.


I Wrote My Own Feature Flag System

posted under category: General on December 29, 2022 by Nathan
A blog series in which I confess to accidentally having written my own poor version of a solved problem

Feature flags really are a solved problem, right? I mean, there are big cloud services that will manage them for you. There are frameworks in every language, both as standalone tools, and full-fledged systems. We have feature flags in our build pipelines. Feature flags in our cloud services. I mean, this problem really seems like it's solved.

Then again, a lot of those systems, tools, libraries, and services seem like overkill. Some of them are massive. Even the smaller ones expect me to be A/B testing constantly. Maybe I'm not the FAANG company that this software is looking for. All of them seem to want to manage my users somehow - which I get, that's how you test with A/B groups.

What did I want, though? I wanted the ability to deploy, even to production, without changing the user experience. I'm working with a lot of younger and offshore developers, and I want to be able to turn off a feature if it doesn't pan out. My product manager wants to be able to specify a couple of users for a specific beta test, then eventually roll the change out to everyone. These are not complicated use cases.

Unlike a lot of these stories where I confess to writing my own frameworks because I didn't know better, this one is really because I didn't like any of the frameworks out there. Plus how hard could it be? Turns out, not very!

The very first feature that we wanted to put a flag around was something that just wasn't code-complete yet. Using Vue.js as the template engine, my "feature flag" looked like this:

<new-feature-page v-if="false" />

That's terribly simple, right? When the feature was done, we just had to take off the v-if. Don't worry. We're Agile. This first version is the skateboard, or whatever analogy you like to use that explains how this is the simplest possible version that we can ship to our users. We can ship it, unit test it, but not show it to a single user.

In version two, we still didn't have any real data, since the database team was still working on the initial build scripts. The design of this FF system is to run it from the database and populate flags in our UI dynamically. Until that's a reality, I made a simple object with some of our known flag names, then a convenience function. That grew into a Vue plugin, something like this:

// allFeatures is assumed to be a static array of known flag names, e.g. ["my-new-feature"]
export const isFeatureEnabled = (featureName) => allFeatures.includes(featureName);

// registered globally so any Vue 2 component can call this.$feature(...)
Vue.prototype.$feature = isFeatureEnabled;

So around the application, a .js file can import isFeatureEnabled, or anywhere in a Vue component, my $feature plugin is ready:

<template>
  <!-- Used directly in the template -->
  <new-feature-page v-if="$feature('my-new-feature')" />
</template>
<script>
// or used in the script area
export default {
  mounted() {
    if (this.$feature("my-new-feature")) {
      // do stuff
    }
  }
};
</script>

That's about as convenient as it can be.

This second version used a static list of feature flag names, added to a .js file in the UI application. It's nice because it's as fast as RAM, but it was not very dynamic yet.

Eventually the database was fleshed out and we created a FeatureFlag table. When a user logs in, they receive a packet of data - things like their name, their permissions, and some contextual information like what kind of airplanes they work on. I added the active feature flags to this login data packet through a very simple query like:

select FeatureName
from FeatureFlag
where Enabled = 1

So there it is, mission accomplished, right? That essentially handles it for the end-users. We have just a couple gaps left.

The next thing to add was tooling in our server-side application, like we did for our UI. Even today, we still haven't done much here, but there's an injectable FeatureFlagService that can query the data, and maybe someday we will cache those results. We actually don't use this a lot, so the server-side evolution has slowed to a crawl.

Finally, for my SQL Server database, I added a simple SQL function that checks the status of a flag and returns 1 or 0. It's easy and is used heavily, mostly to switch entire procedures based on the currently active flag. This approach leads to versioned duplication of code, so we try to keep it manageable with a good naming scheme.

Feature flag taxonomy is a tough challenge. We want the name of the feature to represent where it is, and what it does, and it needs to fall in line with other flag names so that when you look at a sorted list of them, you know what they all do. We settled on a few naming rules. Our feature flags are kebab-case (that is, lowercase with hyphens instead of spaces) and hierarchically separated with slashes, named after parts of the application. For example, "login/include-xyz-data" or "admin/feature-flag-tool".

The last piece of the puzzle was a GUI to manage the flags. Another simple query to get the list, some easy APIs to switch the flags on and off. Easy as cake.

What went wrong?

As usual, building my own frameworks has its complications and downsides. For instance, how do features migrate between environments? We have a system that involves bundling flags with SQL deployments, so anytime a database change goes out, so do the new flags. But we don't have a central location to view and manage them across all the different testing and production environments.

One point of confusion we had was that my international team didn't see our vision for targeting flags at particular users or groups, so we had the opportunity to build those parts twice. The end result, today, is called Feature Audience Groups, where we add users to a group, then enable individual flags for that group. It's a simple additive-only methodology. I developed the concept of "feature flags move forward" - which means that flags don't usually roll back; a successful flag starts in the off position, then turns on for a group, then on for everyone, then we remove the flag. This forward motion simplifies the audience group flag management because flag groups cannot turn off a flag, they can only turn one on.

What did I learn?

A feature flag system really is just as simple as an if. I would encourage absolutely everyone to build one for themselves.

Today, we can safely deploy broken and half-finished features, and beta test them right in prod. We don't have to wait until everything is 100%. This freedom is worth the effort!


I Wrote My Own Dapper, a .NET Micro-ORM

posted under category: General on October 30, 2021 by Nathan
A blog series in which I confess to accidentally having written my own poor version of a solved problem

It was my first .NET Core project ever. I knew a little ADO and ADO.NET from many years before, and I quickly learned enough about the built-in ORM, Entity Framework Core, to know there isn't a way to map SQL Server's stored procedures to EFCore entities. It's a shame.

This project had a team of data scientists writing a bunch of SQL that came out as record sets from stored procedures. Data scientists aren't necessarily great at SQL, but they managed to get it done. Questioning their methods wasn't something that was in the cards for me. Believe me, I tried. Nevertheless, they produced data, and I needed to consume it.

So the problem remained: there is no simple way in .NET Core to pull rows of data from a stored procedure into the application. So I started exploring. I have a pretty good memory, and I remembered the names of some classes in the System.Data namespace which, to my surprise, were still there. This is the fully rebuilt .NET Core and ADO.NET Core, but a bunch of the old things I knew from over a decade ago were still in there. Astonishing!

Thanks to google and stackoverflow, I managed to piece together something that queried the database. Moving forward, I could eventually read results! This was really working. I got a single stored procedure mapped to a simple class I built. Commit. Push. Deploy. Happy.

Mission accomplished, right? Hardly.

At this point, I was like 300 lines of code into something that I used to be able to do in 10. There were two problems:

  1. Building and sending the query takes multiple calls to awkward ADO APIs, and it's just too many lines of code. Sure maybe there's no better way to do it, and I appreciate all the control I get, but I'm just trying to make a little web app here. Why can't they just open and close the connections for me?

  2. Getting the data back is completely manual. I have to manually loop over the iterator that's streaming all the records from the server, set the columns by their ordinal position, and then cast types into my record object. This is a lot of lines of code.

In summary of my two problems, it's (1) the input, and (2) the output. Hah! Yeah literally the whole thing basically. sigh

So it was an obvious thing to quickly evolve it. I should be able to remove a lot of the boilerplate junk by adding some convenience features. All of a sudden, I could call the procedure in a single line of code! I added my connection to the dependency injection system so that any object in the data access layer can request it.

With a serotonin high off that success, I started to tackle the data retrieval problem. I poked around enough until I found how to get the return columns by their column name instead of their ordinal position in the recordset. But how do I know how the columns in the query compare to the properties in my data record class? The only way without a bunch of configuration is to use Reflection. I modified it to read the columns in the recordset and search through the properties in the target class to match it up. Works perfectly!

Now the API is something like this:

List<MyDataRow> data = await dataService.QueryAsync<MyDataRow>("exec usp_GetData");

Getting fancier, sometimes our front-end application needs different names for columns in the database. Also, our data science team sometimes uses illegal names, like columns with spaces and whatnot, so that's a mess. I looked around and found the column alias hints you can give to properties on EFCore entities - why not use them here? Adding those means my data-to-record mapper needs to build a dictionary between the incoming column names and the outgoing properties, with the possibility of multiple aliases or just with the property name. I suppose that's simple enough.

Then there's the problem of input parameters. Of course we need the ability to pass in parameters. ADO typically has this covered with a SqlCommand class. Naturally, I wanted to have a way to simplify even that, but it looks like it's nearly as simple as possible, so there was very little work to do here. It's actually inconvenient to create a SqlCommand on its own, so I made a convenience method on my DataService that builds one from a placeholder SqlConnection and connects it to the actual one once the request is made.

Now queries looked like this:

var cmd = dataService.CreateCommand();
cmd.CommandText = "exec usp_GetData @myParam";
cmd.Parameters.Add("@myParam", value);
var data = await dataService.QueryAsync<MyDataRow>(cmd);

It's the simplest I could make it, and it felt pretty good, so that's all I need to go to production! (I'm kidding! [kinda]). All the rest of the database connection opening and closing, data-model binding, and anything else is all handled by this DataService of mine. It could take any query, populate any matching data record class, and spit out a List<T>. And it could do it really, really fast!

But I promised this was a duct-tape-and-shoestrings thing, didn't I?

Yes, in line with the rest of my accidental frameworks that have already been invented, I had two final problems - the first, as always, being portability. Taking this tool to a second app actually worked really well, which is surprising. Well, that is, until they needed me to connect to an Oracle server, and a Teradata server. I realized quickly that the SqlConnection is really just for SQL Server.

Taking it to a second app of course is where it goes from part of a product to a product of its own. Now instead of Sql*, it's Db* classes, like DbCommand and DbConnection, but configuring it to work with the differences in data types between servers ended in some unfortunate compromises. I'm sure that would have gotten ironed out eventually, but before long I was off to another project.

The second way that my framework fell short was in data mapping. There was always a single line of code that was prone to break, right where database types were mapped and cast into C# data types. The Flux Capacitor that did it was a single line of code that I commented very well. There are just some fields that don't map correctly, and every time I ran into a new one, or tried to cast it to the wrong type, I would get runtime errors. I think the answer to this would have been a utility called AutoMapper, but by now, why even?

What did I learn?

This was genuinely a fun project. In addition to the underlying ADO roots still inside the framework, I also got to really play with the implementation side of generics for the first time. Plus the chance to make my own ORM is just something that seems like fun to me. I'm still surprised that something like this isn't just included directly into the framework. Why is EFCore so limited?

I'd seen mention of the Dapper framework, and AutoMapper, a few times on stackoverflow answers, but didn't investigate them any further. I should really follow up on those kinds of leads more often. While it sounds like there is a problem with me -- and very likely there is in this case -- there is also the fact that my company doesn't like external software. If we didn't write it, how can we trust it? More than just a not-invented-here mentality, external software always has to go through a waiver process, and even new versions always have to be re-verified. It's often more trouble than just playing with some code and writing something custom.

Dapper

On the next project, I finally followed the stackoverflow advice and looked into Dapper, which was initially written for StackOverflow.com. Surprisingly, it wasn't much different than my little framework, with similarly named methods. Plus, as usual, switching to Dapper would mean that I don't have to write my own unit tests for it. Dapper was surprisingly easy to plug in, and thanks to its use of extension methods and anonymous types, it solves a lot of the usability problems I created for myself.

I do wish Dapper had better support for cancellation tokens, especially since I know it's built into the .NET Core framework's data access methods, and is relatively easy to pass an aborted HTTP call through in order to cancel a data call.

I also struggle a little with the fact that I can't write unit tests on code that uses Dapper to talk to my database because extension methods can't be overridden. Oh well, I guess. That's the price we pay for some things.

Overall, I'm much happier using Dapper for raw data access.


I Wrote My Own Axios

posted under category: General on October 21, 2021 by Nathan
A blog series in which I confess to accidentally having written my own poor version of a solved problem

I was new to the “new” JavaScript. You know, the one we started doing when Node.js went mainstream and everybody started using NPM to launch their apps, and back when leftPad was cool. It’s fair for me to be this cranky about it; I wrote my first line of JavaScript in 1997, and I jumped in with both feet through the Prototype years, the jQuery years, and now, something new: NPM Packages and Webpack.

The JavaScript language was evolving, too. Meanwhile I was still trying to support my Internet Explorer users. That’s the corporate life. So the new ecosystem, plus the new language features, plus the new libraries, all had my head in a spin for literally years. Ours is a learning career. When you stop learning, you might as well stop your career. So, I guess the state of confusion is a good place to be.

My first Vue.js project started off on the far side of the progressive framework’s spectrum. That is, I added a script tag to my layout and wrote a new Vue for each page. Vue.js is great at easing you into this world.

My next step was to normalize the way we bring data into the UI. We needed to use a token security system, and the other developers were already trying to get their own JWT and figure out how to copy & paste that code to all the features they were working on. I needed to act fast so that this didn’t get out of hand.

A cursory glance across the JavaScript universe brought up a few contenders - Axios, the download leader, whatwg-fetch and isomorphic-fetch that gave me trouble (probably because IE wasn’t compatible), and there were a few others that just didn’t pan out. Then I thought about the trouble of adding yet another .js file to my application, the additional download size, and what we really needed for this project. Then, you guessed it, I decided to just do it myself. How hard could it be?

It turns out, not very!

The first version of this first Vue project had jQuery built in, and we weren’t to the point of ditching jQuery yet, so since it was available, I could utilize it to hide my XMLHttpRequest business. I was confident that I could replace jQuery later in this instance.

In front of jQuery’s $.ajax method, my http “class” had get/put/post/delete functions as its API - finally a central place for all the API requests to come in. We added some UI logging on it to track API timing. We added JWT management so that it would all be handled inside the black box. It was also the smart place to add global error handling. This worked. I changed all the API code throughout the application to use my method, and we went forward successfully.

By the end of my time on that project, I had figured out how to get Webpack to precompile our Vue applications - individual applications per page on the site. I brought that knowledge forward to my next project. Again, everything seemed to be copy-and-pasted, so I centralized the code and used the browser-native Fetch API with Babel targeting IE11 for compatibility and to automatically load any polyfills for me, since IE doesn’t have Fetch at all.

This next iteration was even more successful. I could finally use ESM from the start, so I exported my 4 verbs - get/put/post/delete - which called into an internal function to contact the server and return the data. It didn’t take long until I found the cracks in that system.
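
The shape of that module was roughly this - a simplified sketch from memory, not the actual code:

// One module owns every HTTP call: JWT headers, logging, and error handling go here.
async function request(method, url, body) {
  const response = await fetch(url, {
    method,
    headers: { "Content-Type": "application/json" /* plus the JWT header */ },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(method + " " + url + " failed with " + response.status);
  }
  return response.json();
}

export const get = (url) => request("GET", url);
export const post = (url, body) => request("POST", url, body);
export const put = (url, body) => request("PUT", url, body);
export const del = (url) => request("DELETE", url); // "delete" is a reserved word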

Fetch is more than happy to hand server-side errors back to the UI as if nothing went wrong - it only rejects on network failures. I was going to handle it. I wrote a bit of code before realizing that this is probably something that Axios already does. I double-checked the package size; for some reason I thought Axios was going to add like 40kb to my vendor bundle. It turns out it’s only 16kb. Not bad. Plus Axios is well tested, and I don’t want to write all the unit tests. It was time.

I switched to letting Axios handle the connections instead of Fetch. I still have my own adapter over it - not necessarily to simplify things, but more to control and standardize everything.

What did I learn?

Well, I’d like to say I learned to not sweat over a 16kb npm package, but the truth is, I still usually prefer to make my own everything until I run into trouble. I mean, isn’t that what this whole series is about? I clearly don’t learn!

Axios

What may not be evident is how Axios and the browser-native Fetch differ. Of course, Axios is essentially a library on top of XMLHttpRequest or the Fetch API. Axios simplifies the request chain a little bit, puts a slightly friendlier face on the request and response objects, and handles HTTP errors a lot more cleanly than Fetch. Also, it has a bunch of great features like a cancellation token system, which is a little tough to use but better than my own absent one.


I Wrote My Own Vue.js

posted under category: General on October 18, 2021 by Nathan
A blog series in which I confess to accidentally having written my own poor version of a solved problem

React was frustrating.

I had the pleasure of getting to rewrite our customer survey at work. It’s like “how do you like your airplane” kinds of questions. This was 2016 and I was looking forward to a 6-month task that required us to beef up the security and improve the looks for a helpful little program. Of course I chose to rewrite the whole thing, which had me shopping for front-end frameworks. Everyone was talking about React at the time, so I decided to give it a shot. What went wrong?

React basically has two modes: 1, you build the whole application in React, which requires live sacrifices made to the great and powerful NPM, forcing you to buy into the entire lifecycle of new JS development and everything that comes with it, or 2, you manually construct your html with the React.createElement() function, which is a fate worse than going back to the pre-CSS days of the internet. There’s actually a third way I found out. For the small price of a 22mb JavaScript bundle download, you could load the entire Babel transpiler into your browser! Uhhh, no thank you!

That was the state of things in 2016. I know it’s gotten a little better since then. Don’t @ me.

So React left me out in the cold. I played with it for some time and just decided to give up, however I couldn’t give up the notion of component-based development and JavaScript-powered UI templating, plus immutable-state-driven UI. It was the right choice for this interactive project, I just didn’t like how the only framework I’d heard about solved this problem.

(I have a lot of other gripes with React, maybe someday I’ll finish this rant)

Apparently that’s as far as my research got me, so I whined and moaned about it for days like the grown man I am. “Why isn’t there a framework that does what I need?” Eventually I decided to move on and just make my own with a little bit of what I knew.

Really, I built some simple bridges between two basic technologies: jQuery and Mustache, then a few convenience functions to make it all come together.

jQuery is, of course, the simplest little library for finding and manipulating elements in the DOM. I knew it like the back of my hand so I knew I could fit it in, under budget.

Mustache.js is the less popular feature of this pair. It’s the smallest client-side templating engine I could find that could still get the job done. Mustache is the little cousin of the broader Handlebars library. It does some basic templating loops and variable outputs with curly brace {{ mustache }} syntax for the magic.

Mustache doesn’t have an easy way to attach itself to the DOM, or to produce highly interactive components, nor can it load components across external files. That’s where jQuery shines. jQuery can easily mount Mustache components and keep them interactive for subsequent user events. An HTTP call with jQuery would be all that’s needed to bring in and cache my component files.
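
The glue between the two looked something like this - a simplified sketch, with the template name and data invented:

// Fetch a Mustache template over HTTP, cache it, render it with data,
// and let jQuery inject the result into the page.
var templateCache = {};

function renderComponent(name, target, data) {
  if (templateCache[name]) {
    $(target).html(Mustache.render(templateCache[name], data));
    return;
  }
  $.get("/templates/" + name + ".mustache", function (template) {
    templateCache[name] = template;
    $(target).html(Mustache.render(template, data));
  });
}

renderComponent("question", "#survey", { text: "How do you like your airplane?" });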

I also thought about the concept of state-driven UI, and I was enamored with the idea of state changes that drive what’s shown. It’s a natural fit for getting the data right and being able to visibly debug it. React uses setState() to change things - something like that would be easy enough to implement. Luckily I didn’t feel the need to write any major reactive data systems. I emulated my understanding of React’s state management through a finite state machine that controlled state changes. These state events are like nextQuestion and completeSurvey.
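
A toy version of that idea, using those two events:

// A tiny finite state machine: a current state plus the allowed transitions.
var survey = {
  state: "inProgress",
  transitions: {
    inProgress: { nextQuestion: "inProgress", completeSurvey: "complete" },
    complete: {},
  },
  send: function (event) {
    var next = this.transitions[this.state][event];
    if (!next) {
      throw new Error("Illegal event '" + event + "' in state '" + this.state + "'");
    }
    this.state = next;
    // re-render the current screen from the new state here
  },
};

survey.send("nextQuestion");
survey.send("completeSurvey"); // survey.state is now "complete"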

Did I succeed in creating a great JavaScript framework? That’s laughable. You know the answer. I did not.

But did I at least come up with something that worked better than React for my situation? You better believe it.

What did I learn?

Remember kids, you can build world-class enterprise apps with the tools you already have on hand. It helps to read a few books beforehand; I think I had just finished a couple of my favorite short reads - JavaScript: The Good Parts and Facts and Fallacies of Software Engineering - which put me in the mood to build something great.

Of course looking back now, I realize Vue.js was already 2 years old. I suppose I should have done a little more research!

Vue.js

Fast forward 2 years. I’m on another project dealing with production factory analytics. By now there was a lot of internet dev chatter about the big-3 frameworks - React, Angular, and Vue. I did my research and picked Vue at 3pm on a Thursday. My buddy Joseph and I, pair programming, included Vue.js onto the page, mounted it to a DOM element, and instantly I was able to loop and output properties. It was way easier than React, and much more powerful than Mustache. I was instantly hooked! I rolled out the feature change before I went home. The very next morning we got comments on how quickly that page seemed to be running.

That was the day I began to realize that Vue was the framework I had wanted all along, and attempted to build. Sure, I built a cheap, duct-tape-and-shoestring version, but Vue was what I was dreaming about.

As I studied the Vue.js framework a little more over the next few months, I quickly began to realize that Vue was so, so much more. Vue’s reactive data system is so far beyond the cheap state management solution I had, that I can only look back and laugh now. That simple include-the-script idea was there by design to lure folks like me into the NPM world. Tricky.


I wrote my own ORM

posted under category: General on October 16, 2021 by Nathan
A blog series in which I confess to accidentally having written my own poor version of a solved problem

I joined a new project at work. OK, joined is a polite word. A product was thrust into my lap. It had great documentation and lots of clean code - written, or maybe generated? Nevertheless, the generator was missing and so were all the previous developers. One thing it had in spades was a strong MVC N-Tier Architecture. This made it really easy to find things, change things, and understand how the system worked.

By the way - if you do this for your application, you’re doing this for the next dev that maintains your application - and we thank you!

As I maintained this application for a while, I began to notice similarities in parts of the application that really were redundant. Specifically the data access layer. It was split between data access objects (DAOs) and data gateways (DGs). While the DGs had a lot of odds and ends that would return various recordsets, the DAOs had the same system over and over. CRUD. Load a single record and populate a single object. Take a single object and perform an insert or update. Delete a single record from the database.

The only things different were the names of tables and the names of the columns. There were a couple one-off tables without a single PKID column, but those weren’t the meat of the system.

I began to literally sketch out some potential solutions. The end result looked a little bit like this:

(figure: partial ORM diagram)

I began playing with constructing the SQL statements for each table based on component metadata. Properties in my components would probably need some custom metadata, but that both helps get this job done, and self-document the system a little better. Did I mention I was using ColdFusion for this? It makes things so simple. Watch.

The user class starts off looking like this

component {
  property name="id";
  property name="name";
  property name="role";
}

Thanks to ColdFusion’s custom metadata system, I can throw anything I want on there, then pull it out when I’m building my DAO queries.

component table="user" {
  property name="id" pk="true" required="true" sequence="seq_user_id;
  property name="name" type="string" required="true";
  property name="role" type="string";
  property name="someDynamicProperty" persist="false";
}

So on one end, I used this to build my CRUD queries, then on the other side, I used the metadata to map the recordsets back into the models. It was actually pretty simple, once it all worked.

I tried it out for a few new tables as part of a new feature. That’s how you add your innovations and entertainment, by the way – you make the fun stuff a “critical” part of the less-fun stuff. Once that worked, I spread it across the rest of the system. In one day I reduced the codebase by 3,000 lines!

I took it a little further by auto-generating some basic list functions, like the neat little listByCriteria where you send in an object from the table with the properties you want to find.

var criteria = new User();
criteria.setRole("Admin");
var admins = dgo.listByCriteria(criteria);

What did I learn?

It’s a lot of work up front to generate your own queries, but a lot less work in the long run when you know you’re getting the most optimized experience you can. Sure the ORM here was simplistic, but so were the needs of the application.

When you make something that’s like a framework, but it stays as part of a single system, it tends to integrate tighter than you expect. This ORM became an integral part of the application it grew from. The downsides with that are that it would have been very difficult to replace it with a publicly available ORM, and it became harder and harder to reuse it in another system. In this ORM’s case, it never grew out of this application.

Of course, now that other ORMs exist, I don’t think that I would do this again. However… I have another one coming up that would prove me wrong. Stay tuned.


I Wrote my own Hybrid SPA+SSR Framework

posted under category: General on October 14, 2021 by Nathan
A blog series in which I confess to accidentally having written my own poor version of a solved problem or popular framework

It was 2009, and I thought to myself “jQuery is just so verbose.” I mean look at this code I have to write in order to download an HTML fragment from the server and inject it into an area on my HTML page.

$("#target-area").load("/api/users/list");

OK, Ok, ok. It’s not that bad. But imagine you did this with Prototype.js, the dominant framework before jQuery existed.

new Ajax.Request("/api/users/list", {
  onSuccess: function(response) {
    $("target-area").update(response.responseXML);
  },
});

Or imagine you started the project without a JavaScript framework

function reqListener () {
  var el = document.getElementById("target-area");
  el.innerHTML = this.responseText;
}

var oReq = new XMLHttpRequest();
oReq.addEventListener("load", reqListener);
oReq.open("GET", "/api/users/list");
oReq.send();

I was on a project for a short time that had hundreds of screens with code like this – all customized for each and every page, all repeated, with so much boilerplate bloat that I questioned the reason for software altogether. If we add input fields into that code along with form submissions, validation, error messages, and so on, you can imagine how quickly we had JavaScript files that were tens of thousands of lines long. Then came the memory leaks, name conflicts, and maintenance.

Yes, we could have done better, but I was just on loan, and I wanted to see what kinds of things we were building elsewhere in the company. The point is, this application made me afraid of what we could create if we didn’t start thinking about systems to handle bloat before we had problems with it.

I had an idea. What if we could implicitly load content based on some basic HTML, and use jQuery to sniff out what needs to be loaded? Just follow me down this trail for a minute.

What’s an ideal amount of JavaScript to write? None! Stupid question, I know! I figured that this is the perfect job for a data attribute. I only need to tell the content where to go, like so:

<a href="/api/users/list" data-target="#main">Users</a>

The first version of this HTML-powered, server-side rendered app looked something like this:

$(function(){
  $(document).on("click", "a[data-target]", function(e){
    e.preventDefault();
    $($(this).data("target")).load($(this).prop("href"));
  });
});

It ballooned up from there, into a few hundred lines of code that handled global and inline loading spinners, delete confirmations, forms, caching, and errors.

Ahh - but you must be thinking: if the server is generating those HTML fragments, what happens when I open the link in a new tab? Well, jQuery’s AJAX API sends an HTTP header (X-Requested-With: XMLHttpRequest) to let us know if we are in an AJAX request. With that header in place, the server sends an HTML fragment. When that header isn’t there, the back-end framework will wrap the fragment into the layout and send a full page.

It’s only a matter of the fragment being rendered with the full layout, or without.
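
The back end in question wasn’t Node, but in Express-flavored JavaScript the idea looks something like this - renderUserList and renderLayout are stand-ins for whatever produces the HTML:

const express = require("express");
const app = express();

// Stand-ins for the real rendering code.
const renderUserList = () => "<ul><li>User one</li></ul>";
const renderLayout = (body) => "<html><body>" + body + "</body></html>";

app.get("/api/users/list", (req, res) => {
  const fragment = renderUserList();
  // jQuery's AJAX calls send X-Requested-With: XMLHttpRequest automatically.
  if (req.get("X-Requested-With") === "XMLHttpRequest") {
    res.send(fragment); // AJAX: just the fragment
  } else {
    res.send(renderLayout(fragment)); // direct visit or new tab: the full page
  }
});

app.listen(3000);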

Does that really work? Yes! It turns out it works really well. This web app was 100% functional without JavaScript. Why? Convenience! Also, users found they could open links in new tabs without a problem.

In today’s terminology, I think we would call this a hybrid SPA/SSR. Yes the discount, dollar-store version, but still, it fits the bill. Really, it was a pretty successful project.

What did I learn?

When I attempted to adapt it to another application, I learned that I either needed to cut this ‘framework’ up into smaller, individual parts that could be used independently, or bundle it all together as some kind of super-framework. Just taking parts of it was not a portable solution.

That doesn’t mean it was a waste. Not at all. As its own glue for a single application, this framework is a really cool solution that makes that one application pretty easy to read and work on.


I wrote my own Fusebox

posted under category: General on October 13, 2021 by Nathan
A blog series in which I confess to accidentally having written my own poor version of a solved problem or popular framework

It was 1999. I worked at a small agency in Alaska, and I just learned to program in ColdFusion. I drank Mountain Dew and exclusively ate from Taco Bell. A guy at work, probably 15 years my senior and trying to escape code, told me about how to arrange an application, recommending that I make a “fusebox” - a big switch statement that would control what gets called and shown. I started piecing it together.

The project was an online storefront for a local music producer. This was my first real programming project at work, if you don’t count small JavaScript image replacement and form validation scripts, way before CSS and HTML would do these things for you! I frequently forget that I’m old until I say things like this.

So I set up a switch statement with the expression being url.action (the action property in the query string). The switch cases are includes of individual view files, or database calls with a redirect back to another action.

Really this isn’t too different from modern-day frameworks - a router, views, and room for back-end activities.

What did I learn?

It was nice to have a central place to apply security and global request filters. With all the requests coming in through this one file, it was the central hub of the application. That also opened it up to trouble. One coding mistake on the switch meant that the whole application was broken. I made a lot of coding mistakes back then, so things broke frequently.

I used an include for the HTML header and footer, so those just got included right on the switch page. Easy way to make a layout, even if it’s rather lame by today’s standards.

I initially had all the database communication right there in the switch. That really doesn’t scale since that flux capacitor there is now doing literally everything for the whole application. Pretty yucky but I didn’t know better.

Also, this being one of my first professional projects ever, I quickly realized the need for better organization by filename taxonomy.

Fusebox

The first version of Fusebox was merely a word, a convention of organization, which was really not much different than what I had built as a teenager. I’m sure it was at least a little more formal than that, but the internet was young and we didn’t exactly google for information – you had to know someone.

The second version of Fusebox had some official files - some amount of hard matter for the framework. Fusebox 3 actually set you up with structure and files and sub-folders of switches - a real framework finally.

Fusebox eventually became the gold standard for frameworks in the world of ColdFusion. It was a short-lived title, in those years when XML was cool, before object-oriented features were added.

Have you built your own framework like me?


Coding on a Chromebook

posted under category: IDEs and tools on February 13, 2021 by Nathan

I mentioned how I’m teaching a high school coding class at our home school co-op. At the beginning of the 2020/2021 school year, I specified that students need a Windows laptop, or a Mac if there was no other option. I don’t like to support Apple devices. I specified that no Chromebooks would be allowed in the classroom. It was the right choice last August, but this next school year, I’m going to let Chromebooks in.

Every week I write up a new presentation in Google Slides, and present it to the class on my Chromebook. Between Google Docs for the slides and GitHub for the files, I have access to everything I need across all of my devices. But what about coding on the Chromebook? Aren’t Chromebooks underpowered laptops with nothing but a browser? How’s the coding experience, you ask? I’m so glad you did!

First, you should know that every Chrome OS device is essentially three things:

  1. A Google Chrome web browser device - the classic foundation and namesake it’s had since 2011
  2. An Android tablet with full access to the Google Play store and most Android apps and games, in a fairly performant windowed environment, since 2016
  3. A Linux laptop with a Debian terminal that grants full access to apt-get anything you want, since 2018

There are some really great in-browser IDEs, but I like to keep things local and offline, cutting my choices down significantly. There aren’t any great Android-based IDEs that I’ve seen. But wouldn’t you just want to use everyone’s favorite coding tool? That’s right, I want VSCode on my Chromebook. And guess what? It’s become really easy to do this!

The steps have become essentially the same as they would be on any other operating system. Visit the VSCode website, click the giant download button, then double-click the installer. This was much harder only a few months ago! I was taken aback when I had the chance to install it on a new device recently. It’s seamless. I also double-checked that it added VSCode as a known repository for the integrated package manager so that upgrading can be done with sudo apt-get update && sudo apt-get upgrade -y. Or of course you can go download the new version and run the installer again. That’s not quite as seamless as it is on Windows, but it’s not bad at all.

At the start of the 2020 schoolyear, I had an outdated Acer R11 Chromebook with a flimsy Celeron CPU. It performed fine, but the lower resolution 11 inch screen was pretty small for the task at hand, and starting VSCode was a commitment.

This year I invested my incredible teaching profits (that’s a joke!) when I found that Lenovo’s Chromebook line finally includes the incredibly affordable and powerful 10th gen i3 model with 8GB of memory. It’s a steal at $440. I’m not trying to advertise, but I do have an affiliate link to look at it on Amazon because it brings me some happiness and maybe you’d like to check it out. Something amazing about this device is that it launches VSCode in about 1 second - there’s no delay. It’s faster than my i7 work laptop. It has plenty of power for this job!

VSCode in Chrome OS

So my Chromebook has VSCode. What next?

Extensions! They all work. Everything I throw at it works perfectly. I’m not missing anything in this department.

Debugging! Works perfectly. I’ve only tried debugging JavaScript, web pages, and C# code, maybe Python last year, and they are every bit as capable as anywhere else.

Coding! Duh. It definitely works.

Anything wrong?

Only one thing doesn’t work for me. It’s the standard Chromebook keyboard. It’s not even the physical keys - this Lenovo has a good feel and a quiet sound. My gripes are about the keyboard layout on Chrome OS devices, namely these complaints -

  • The lack of a 6-key insert-delete, home-end, pageup-pagedown block is annoying enough. I miss that on every notebook keyboard though. The problem is that these keys literally don’t exist. On a Windows laptop, I can at least find these keys. They’re often hidden behind a function control key, but they are there. There’s no chance to find them on Chrome hardware.
  • No delete key. There is a way to delete - alt+backspace will delete in front of the cursor, while the standard backspace key only deletes what is behind the cursor. If you ever want to delete a file, you are forced to make the two-finger-salute.
  • Alt + Click is a right-click in Chrome OS, instead of the standard multi-cursor selection combo in VSCode. I suppose this is configurable so I can change it to the Ctrl key, but it’s very annoying.

Of course all of that can be ignored if you plug in an external keyboard. I’m not carrying a keyboard around in my bag, or over to the couch, so I just have to live with the pain.

Wrap-up

Coding on Chrome OS is great with VSCode, and it’s a very workable solution. Get a powerful processor, no Celerons or Pentium chips, and get plenty of memory. If you’re settling down for a long coding session, bring an external keyboard and mouse, just like you would with any laptop. Now that VSCode works flawlessly, the gates are open wide!

Nathan is a software developer at The Boeing Company in Charleston, SC. He is essentially a big programming nerd. Really, you could say that makes him a nerd among nerds. Aside from making software for the web, he plays with tech toys and likes to think about programming's big picture while speaking at conferences and generally impressing people with massive nerdiness and straight-faced sarcastic humor. Nathan got his programming start writing batch files in DOS. It should go without saying, but these thoughts and opinions have nothing to do with Boeing in any way.
This blog is also available as an RSS 2.0 feed. Click your heels together and click here to contact Nathan.