Building a great front-end application
In this article, I will give some pointers on how to fully utilize the awesome front-end technology landscape in 2020 to create beautiful, engaging and high-performance browser-based applications while achieving great developer experience.
These are the attributes that I always want to see in a great front-end application:
- The application’s technical development should follow sound software engineering principles to ensure its correctness and robustness.
- The application should be visually polished, aesthetically pleasing and engaging to the user.
- The application should have good performance both on load and during subsequent usage, and both on mobile and desktop.
- Performance should not be thought of as a single moment in time but rather an experience that is captured by several different metrics. Their definitions can be found here.
- The application should use some or all of the tools below to ensure the depth and breadth of performance monitoring.
They can be classified into the following categories:
- Lab measurement: performance data collected within a controlled environment with predefined device and network settings.
- Real user monitoring (RUM): performance data collected from real page loads experienced by your users in the wild.
- Local: performance data on a small number of machines that you control.
- At scale: performance data on a large number of devices that you don’t control.

Here is a classification of representative tools in each category and how they are related to each other:

|          | Lab measurement       | Real user monitoring (RUM)           |
| -------- | --------------------- | ------------------------------------ |
| At scale | WebPageTest (sort of) | Chrome User Experience Report (CrUX) |
A performance budget should be established, measured and monitored from the very beginning of development. This budget should only be relaxed after careful considerations. Exceeding any budget metric should cause a build to fail.
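As a sketch of how “exceeding any budget metric should cause a build to fail” might look in practice (the asset names and byte thresholds below are illustrative, not prescribed by this article):

```typescript
// Minimal performance-budget gate: report every asset that exceeds
// its byte budget. A non-empty result should fail the build.
interface Budget {
  [assetName: string]: number; // maximum allowed size in bytes
}

function checkBudget(
  budget: Budget,
  actualSizes: Record<string, number>
): string[] {
  const violations: string[] = [];
  for (const [asset, max] of Object.entries(budget)) {
    const actual = actualSizes[asset];
    if (actual !== undefined && actual > max) {
      violations.push(`${asset}: ${actual} bytes exceeds budget of ${max}`);
    }
  }
  return violations; // non-empty => exit with a non-zero status
}
```

Bundlers offer built-in variants of this idea (for example webpack’s `performance.hints: "error"` option), but a standalone check like this also covers assets the bundler never sees.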
The application should perform the following optimizations on assets:
- Minify and compress CSS and JS code.
- Compress images with imagemin.
- Convert animated gifs to MP4 or WebM videos with FFmpeg.
- Load images lazily, either natively (Chrome 76+) or with lazysizes.
- Create multiple versions of images (retina, non-retina, WebP etc) and use the picture element and srcset to serve responsive images.
- All server responses (static and dynamic) should be compressed with brotli, falling back to gzip.
- Web fonts should be loaded asynchronously and self-hosted instead of being served from third-party services, e.g. Google Fonts. Special attention should be paid to how fonts are shown while being loaded.
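The brotli-with-gzip-fallback rule comes down to content negotiation on the request’s Accept-Encoding header. A simplified sketch (real servers must also honour q-values from the HTTP spec):

```typescript
// Pick a response encoding from the Accept-Encoding header,
// preferring brotli, then gzip, then uncompressed. Deliberately
// simplified: q-values (e.g. "gzip;q=0.5") are stripped, not ranked.
function pickEncoding(acceptEncoding: string): "br" | "gzip" | "identity" {
  const offered = acceptEncoding
    .split(",")
    .map((e) => e.split(";")[0].trim().toLowerCase());
  if (offered.includes("br")) return "br";
  if (offered.includes("gzip")) return "gzip";
  return "identity";
}
```

In practice this logic usually lives in the CDN or reverse proxy configuration rather than in application code.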
The application should be served from a single origin. All static assets should be served from a CDN.
The application should be served over HTTP/2, which necessitates HTTPS. Both technologies are very useful: HTTP/2 allows us to do server push, and HTTPS enables service workers. Because HTTPS can be tricky to set up, it should not be added close to the release date. Instead, HTTPS should be used in development from the very beginning.
The application should use resource hints, such as preload, prefetch, dns-prefetch, preconnect, to optimize the loading sequence of assets.
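Resource hints are just link tags in the document head; a tiny helper for emitting them server-side might look like this (the function name and the hint/asset pairings are illustrative):

```typescript
// Emit <link> resource-hint tags for the document head.
// `as` applies to preload; note that font preloads additionally
// need a crossorigin attribute in real markup.
type Hint = "preload" | "prefetch" | "dns-prefetch" | "preconnect";

function linkTag(rel: Hint, href: string, as?: string): string {
  const asAttr = as ? ` as="${as}"` : "";
  return `<link rel="${rel}" href="${href}"${asAttr}>`;
}
```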
The application should use <script type="module"> to distinguish modern browsers from older ones, so that we can serve modern, less-transpiled JS code to the former and legacy, more-transpiled JS code to the latter. The application should use @babel/preset-env to transpile based on target browsers instead of based on language features.
The application should utilize code-splitting and tree shaking to reduce the size of the initial payload.
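Code splitting hinges on dynamic import(). A small wrapper like the hypothetical lazyOnce below ensures that a split-off chunk is fetched only once, no matter how many call sites request it:

```typescript
// Cache a code-split chunk's loader so the underlying dynamic
// import() runs at most once; every caller shares the same promise.
function lazyOnce<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => {
    if (!cached) cached = loader();
    return cached;
  };
}
```

With webpack, `loader` would typically be `() => import("./Chart")`, which webpack detects and splits into its own chunk.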
The team should ensure that code coverage (the percentage of initially loaded JS and CSS that is actually used on the initial render), as tracked by Chrome DevTools, stays as close to 100% as possible, and revisit the code-splitting configuration and the output of the webpack-bundle-analyzer plugin when this figure decreases unexpectedly.
Side-effectful imports (such as ‘import “someLib”’ instead of ‘import someLib from “someLib”’) should be tracked closely because they cannot be easily tree-shaken.
Before adopting a third-party library for use in the front-end, the team should weigh its usefulness against how much it will add to the size of the application. bundlephobia can serve as a rough back-of-the-envelope estimate.
The interactivity of app shell UI elements should be implemented with pure CSS to the extent possible, complemented by a minimal amount of JS for state management.
Critical CSS (those used to render above-the-fold content) should be extracted out of all the application CSS and inlined into the initial HTML. The extraction can be done with the server-side rendering feature of CSS-in-JS libraries like styled-components.
To ensure the fastest possible initial paint, the combination of the very initial HTML, inlined CSS and inlined JS should weigh less than 14KB due to TCP slow start.
Select performance metrics should be displayed in a dashboard on a large monitor visible to everyone to encourage a performance-centric culture. One example is SpeedCurve’s TV mode.
The application should remain usable (minus data) and show a “you’re offline” status message when the user’s device is offline, by using service workers and the Cache Storage API.
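The offline behaviour boils down to a caching strategy inside the service worker. Below is a network-first sketch with the network and cache injected so the logic can be unit-tested outside a real service worker; the CacheLike interface is an assumption of this sketch, not a platform API. In production, `network` would be fetch() and `cache` would wrap caches.open()/match()/put():

```typescript
// Network-first with cache fallback: fresh data when online,
// the last cached copy when offline.
interface CacheLike {
  get(url: string): Promise<string | undefined>;
  put(url: string, body: string): Promise<void>;
}

async function networkFirst(
  url: string,
  network: (url: string) => Promise<string>,
  cache: CacheLike
): Promise<string> {
  try {
    const body = await network(url);
    await cache.put(url, body); // refresh the offline copy
    return body;
  } catch {
    const cached = await cache.get(url);
    if (cached !== undefined) return cached; // offline: serve stale data
    throw new Error(`offline and no cached copy of ${url}`);
  }
}
```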
The application should use a component-based UI architecture with unidirectional data flow e.g. React.
The application should consider the near certainty that it will be modified substantially or even completely over its long lifetime and make design choices that accommodate these modifications. For example, a component-based architecture with a strong type system allows various parts of the application to be modified independently without causing breakage. Good integration and end-to-end test coverage allows large parts of the application or the entire application to be rewritten.
The application should be developed using a language with a good static type system. TypeScript is the standard (and pragmatic) choice, while more adventurous (and more dogmatic) choices include ReasonML, Elm or PureScript.
The application should aim for zero run-time errors and use defence in depth against bugs and errors to achieve this goal:
- The TypeScript compiler gives immediate feedback on obvious but tedious errors that could have been caught by humans (but can be done better by the machine).
- Linters can catch more framework- or language-specific errors. For example, eslint-plugin-react enforces React’s “rules of hooks.” stylelint catches CSS errors.
- Autoformatters (prettier) enforce a common coding style without antagonizing team members who hold contrarian views. This common style helps reduce the cognitive load when reading code, potentially leading to easier detection of bugs.
- Webpack compilation errors can point to missing non-JS assets (images, text files etc).
- Unit tests can catch errors in logic-heavy parts of the code base.
- Visual regression and end-to-end tests protect against inadvertent changes in areas of the application that are not targeted by a feature under development. This becomes more and more important as the application grows in size and complexity.
- Human testing is sometimes required for features that cannot be machine tested due to technical limitations e.g. features that require WebGL. The number of human tests should be kept to a minimum and these tests should be performed before major releases.
- When all else fails, client-side monitoring services like Sentry inform the team of run-time errors experienced by the user so that they can be caught early and fixed before these errors affect a large number of users.
The codebase should be set up so that a production build can be made with a single command-line instruction (make my-task etc). The same requirement applies to getting a local development environment up and running. This requirement does not include prerequisite setup steps, e.g. installing NodeJS or Docker.
All commit messages should be well-written and follow the conventional commits standard. Commits should be kept atomic.
The application should follow the trunk-based development model (in contrast to the Git flow model). There’s only a single long-lived branch (master) and all new features branch off of master and are developed behind feature flags. Every feature branch will have its own “staging” environment. The only difference between any staging environment and production is the feature being developed on that branch.
The application should have a fixed release cadence instead of only being released when certain major features are ready. This release cycle works very well with trunk-based development.
The application should follow the Lerna monorepo model. The core visualizations and components are separated into their own mini packages that can be tested and released independently. These core components should be built in a way that allows them to be consumed by a variety of bundlers/environments (webpack, rollup, pika, parcel, in-browser) and released on both github and npm registries. There are several benefits to this approach:
- It will make the whole application more robust because its constituent components can be tested individually, reducing the scope of integration and end-to-end tests and making them more effective.
- It will facilitate the reuse of these components in other subsequent projects.
The application should use a package manager (yarn/npm) with the appropriate configurations (yarn.lock/package-lock.json) to ensure reproducible dependency trees.
The application should pin exact versions of all dependencies at installation to avoid inadvertent breakages. Not all package authors observe semantic versioning.
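Pinning can be enforced mechanically. A hypothetical check that flags any dependency range which is not an exact version (caret and tilde ranges let installs drift to newer releases; exact versions do not):

```typescript
// Return every dependency whose declared range is not an exact
// x.y.z pin (e.g. "^1.2.3" or "~1.2.3" would be flagged).
// Simplified: ignores prerelease/build suffixes like "1.2.3-rc.1".
function looseRanges(deps: Record<string, string>): string[] {
  return Object.entries(deps)
    .filter(([, version]) => !/^\d+\.\d+\.\d+$/.test(version))
    .map(([name, version]) => `${name}@${version}`);
}
```

The same effect can be had at install time by configuring the package manager to save exact versions by default.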
Parts of the code base that are released as library-like packages should follow semantic versioning and use a tool like semantic-release, which will parse commit messages to automatically figure out whether a major, minor or patch release is appropriate. On the other hand, the setting of the main front end’s version can be manual. In either case, each release should have a corresponding git tag.
All dependencies should be audited for security vulnerabilities regularly by using the appropriate “audit” flags of the package manager. Any vulnerabilities should be taken seriously and the offending package upgraded/removed after testing. After a good test base line has been established, consider enabling greenkeeper/dependabot to receive automatic pull requests when new versions of dependencies are released.
The application should be SEO-friendly. This can be accomplished with server-side rendering with progressive hydration.
The application should follow the RAIL performance model:
- Response: the application should provide feedback to user interactions within 100ms after the initial input. Ideally, the feedback would show the desired state. But if it’s going to take a long time, then a loading indicator or coloring for the active state will do. The main thing is to acknowledge the input.
- Animation: each frame of an animation should take less than 16ms to give users a smooth 60fps experience.
- Idle: in order to respond to any user input within 100ms, long-running tasks should yield the main thread every 50ms, either by chunking work with requestIdleCallback or by moving it off the main thread entirely with web workers.
- Load: the app should aim to deliver the first meaningful paint (FMP) in under 1 second. Once that’s delivered, the app must remain responsive; the user mustn’t encounter trouble when scrolling, tapping or watching animations.
The application should attempt to use off-main thread rendering of visualizations if feasible using OffscreenCanvas and fall back to main thread rendering if not available.
The application should have fine-grained error handling using React error boundaries. Each reasonably complex component (e.g. a visualization/graph component) should be able to handle errors that originate within its component tree. This requirement can be easily accomplished if the code base is a monorepo, in which case each package only needs to know about the possible errors that originate from within it.
Those parts of the application that perform asynchronous activities should try to make those activities injectable for testing purposes.
The application should start using service workers early in development so that the team has solid experience with them by the time of public release. Messing up service workers can really break the application for users.
Parts of the code base that are logic-heavy, utility-like or library-like should aim to achieve 100% code coverage in testing. The more UI-like part can have lower code coverage.
The application should use the forking git workflow from the beginning in anticipation of collaboration from external contributors.
The application should consider the possibility that important parts of it, such as the visualizations, may be embedded in other websites e.g. in a scientific web page.
The application should prevent common security problems like XSS by sanitizing and validating user inputs, even if only used within the front-end.
All text-heavy pages and as many visualization pages as possible should be accessible using a combination of semantic HTML, proper ARIA attributes and good focus management. This is a good starting resource.
All text-heavy pages and as many visualization pages as possible should be usable on mobile devices.
Decisions made during the application’s development should not preclude the ability to add languages other than English at a later point. This requires adhering to a few principles:
- All user-facing texts for any language should be separated from the business logic code in a way that makes these texts easily swappable based on an environment variable specifying the target language.
- The complexity of these user-facing texts in one language should not influence the complexity of the corresponding texts in other languages. This is called asymmetric localization, exemplified by Fluent.
- Language-specific inflections (capitalization, possessive, ordinal number etc) should not leak into the business logic code.
- Although API requests can include language requirements, the structure of API responses should be language-agnostic.
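A minimal sketch of the first principle, with the catalogs, keys and language codes as illustrative stand-ins for a real localization system like Fluent:

```typescript
// User-facing strings live in per-language catalogs, outside the
// business logic; the target language comes from configuration
// (e.g. an environment variable). Falls back to English, then
// to the key itself so missing strings are visible, not fatal.
const catalogs: Record<string, Record<string, string>> = {
  en: { greeting: "Welcome back" },
  de: { greeting: "Willkommen zurück" },
};

function t(key: string, lang: string): string {
  return catalogs[lang]?.[key] ?? catalogs.en[key] ?? key;
}
```

A real asymmetric system goes further: each language’s catalog can encode its own plural and gender rules without the English source constraining them.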
The application should be very careful about what UI states/controls are exposed to the user through URL paths/parameters because once exposed, they cannot be easily taken back without adversely affecting already-shared links. This becomes a major problem when the application evolves over time. However, the problem of allowing users to share a specific state of the application with others still has to be solved. A solution is to give users “opaque” sharing links. This is how it works:
- The user hits the “share” button to obtain a link they can share with friends e.g. over social media.
- The application sends relevant application state to the server, which stores these states in a database (with versioned schema) and returns a UUID, which is displayed by the front-end to the user.
- When the recipient goes to that shared link, the server inlines the previously stored state into the response, thereby “hydrating” the application into the state that the sender saw.
- With new application versions, we need to be mindful of whether the current state schema still makes sense and will need to upgrade the database accordingly. We’ll also have to batch-invalidate all known URLs with Facebook/Twitter crawlers when the new application version causes the social media text/image to change.
- One downside of this approach is the lack of (natural) canonical links.
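The save/load cycle behind opaque sharing links can be sketched as follows. The in-memory Map and the random id are stand-ins for a real database and proper UUIDs; the schema version is stored alongside the state, as the steps above require:

```typescript
// Opaque sharing-link store: application state goes in, an opaque
// id comes out; the id later rehydrates the state for the recipient.
class ShareStore<S> {
  private states = new Map<string, { version: number; state: S }>();

  save(state: S, version: number): string {
    // Stand-in for a UUID; production code should use a real UUID.
    const id = Math.random().toString(36).slice(2, 10);
    this.states.set(id, { version, state });
    return id;
  }

  load(id: string): { version: number; state: S } | undefined {
    return this.states.get(id);
  }
}
```

Because the id carries no meaning, the application is free to evolve its state schema; only the stored records (and their version field) need migrating.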
“App tour guide” features should be considered early in the development process to help users learn how to use the application.
The application should provide users with a way to submit bug reports from within the UI. These reports should contain the application state and screenshot at the time of the bug.
The application should provide dynamic social media share images for visualization pages. This requires that the visualizations be renderable both server-side and client-side.
The application should consider having a dark mode that can be switched on/off manually or change with the user’s operating system’s settings.
The application should consider using a first-party analytics framework instead of Google Analytics to avoid inadvertent blocking by ad blockers.
The application’s release should be accompanied by a series of blog posts explaining its underlying technologies and technical achievements.