June 22nd, 2024

Exposition of Front End Build Systems

Frontend build systems are crucial in web development, covering three main steps: transpilation, bundling, and minification. Tools like Babel and Webpack optimize code for performance and developer experience. Popular bundlers, including Webpack, Rollup, Parcel, esbuild, and Turbopack, are compared on features and performance.

The article discusses the importance of frontend build systems in modern web development. It explains the three main build steps: transpilation, bundling, and minification. Transpilation converts modern JavaScript into older syntax for wider browser compatibility. Bundling combines many JavaScript files into a single bundle to reduce network requests. Minification shrinks file sizes by removing whitespace, comments, and other unnecessary elements. Tools like Babel, Webpack, and Terser are commonly used for these tasks. Code splitting divides bundles for better load performance, while tree shaking reduces bundle sizes by removing unused exports. Static assets like CSS and images are folded into the distributable during bundling. Meta-frameworks and build tools streamline the build process and improve developer experience by providing preconfigured systems. The article also compares popular bundlers, including Webpack, Rollup, Parcel, esbuild, and Turbopack, highlighting their features and performance differences.
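
To make these steps concrete, here is a small before/after sketch (illustrative only: real Babel and Terser output differs in detail, and bundling would additionally merge this module and its imports into one file):

    // Source: modern syntax (optional chaining, nullish coalescing, arrow function)
    const getName = (user) => user?.profile?.name ?? 'anonymous';

    // After transpilation to ES5 (sketch of what a tool like Babel emits):
    // same behavior, older syntax that legacy browsers understand
    var getName = function (user) {
      var name = user && user.profile ? user.profile.name : undefined;
      return name !== null && name !== undefined ? name : 'anonymous';
    };

    // After minification (sketch of what a tool like Terser emits):
    // whitespace stripped and locals renamed; behavior is unchanged
    var getName=function(n){var r=n&&n.profile?n.profile.name:void 0;return null!=r?r:"anonymous"};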

Related

AI-powered conversion from Enzyme to React Testing Library

Slack engineers transitioned from Enzyme to React Testing Library due to React 18 compatibility issues. They used AST transformations and LLMs for automated conversion, achieving an 80% success rate.

The demise of the mildly dynamic website (2022)

Websites evolved from hand-crafted HTML to PHP, which enabled dynamic web apps with simple deployment. PHP's decline led static site generators to replace mildly dynamic sites, with features like comments shifting to JavaScript.

Avoiding Emacs Bankruptcy

Avoid "Emacs bankruptcy" by choosing efficient packages, deleting unnecessary configurations, and focusing on Emacs's core benefits. Prioritize power-to-weight ratio to prevent slowdowns and maintenance issues. Regularly reassess for a streamlined setup.

Software Engineering Practices (2022)

Gergely Orosz sparked a Twitter discussion on software engineering practices. Simon Willison elaborated on key practices in a blog post, emphasizing documentation, test data creation, database migrations, templates, code formatting, environment setup automation, and preview environments. Willison highlights the productivity and quality benefits of investing in these practices and recommends tools like Docker, Gitpod, and Codespaces for implementation.

New JavaScript Set Methods

New JavaScript Set methods, now shipping in major browsers (e.g., Firefox 127), offer efficient set operations without polyfills. These methods simplify tasks like computing intersections, unions, and subset checks, making it easier to work with unique collections.
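
For example (a quick sketch; assumes a runtime that ships these methods, such as Firefox 127+):

    const a = new Set([1, 2, 3, 4]);
    const b = new Set([3, 4, 5]);

    // Each method returns a new Set (or a boolean) without mutating its inputs
    a.intersection(b);              // Set {3, 4}
    a.union(b);                     // Set {1, 2, 3, 4, 5}
    a.difference(b);                // Set {1, 2}
    a.symmetricDifference(b);       // Set {1, 2, 5}
    a.isSupersetOf(b);              // false (a lacks 5)
    new Set([3, 4]).isSubsetOf(a);  // true
    a.isDisjointFrom(new Set([9])); // true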

1 comment
By @SahAssar - 4 months
> The browser must request each JavaScript file individually. In a large codebase, this can result in thousands of HTTP requests in order to render a single page. In the past, prior to HTTP/2, this would also result in thousands of TLS handshakes.

> In addition, several sequential network round trips may be needed before all the JavaScript is loaded. For example, if index.js imports page.js and page.js imports button.js, three sequential network round trips are necessary to fully load the JavaScript. This is called the waterfall problem.

In general, the number of requests is not the problem anymore (with HTTP/2 and HTTP/3). With HTTP/1.1, browsers limited concurrent requests to six per domain, so if you had more than six resources you had to wait for a free slot (this is head-of-line blocking). HTTP pipelining was supposed to fix this within HTTP/1.x, but it never took off and support is basically non-existent. Even so, a properly configured server (with keep-alive) would not incur additional TCP/TLS handshakes.
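
To illustrate the connection reuse (a hand-written sketch of an HTTP/1.1 exchange, not a real capture):

    GET /a.js HTTP/1.1
    Host: example.com
    Connection: keep-alive

    HTTP/1.1 200 OK
    Connection: keep-alive
    Content-Length: 1234

    GET /b.js HTTP/1.1        <- sent on the same TCP/TLS connection,
    Host: example.com            so no new handshake is needed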

With HTTP/2 and HTTP/3, head-of-line blocking is fixed by multiplexing all requests over one connection, so the real problems are discovery (the waterfall) and compression rather than the number of requests.

Discovery/waterfall problem: if file A loads file B, which loads file C, the browser cannot know to start loading C until A has loaded, then B, and then C (and this chain will probably be a lot deeper in most modern frontend apps). The proposed fixes were Server Push (which is effectively deprecated and removed) and HTTP 103 Early Hints (which lets you send preload hints before the actual server response). If file A loads B and C directly, the impact is much lower.
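
To make the waterfall concrete, here is a minimal sketch: three ES modules forming a dependency chain, followed by a hypothetical 103 Early Hints response (file names are illustrative, and Link-header support for modulepreload varies):

    // index.js -- the browser only discovers page.js after parsing this file
    import { renderPage } from './page.js';
    renderPage(document.body);

    // page.js -- and button.js only after page.js arrives (round trip #2)
    import { makeButton } from './button.js';
    export function renderPage(root) { root.append(makeButton()); }

    // button.js -- fetched on round trip #3
    export function makeButton() { return document.createElement('button'); }

    HTTP/1.1 103 Early Hints
    Link: </page.js>; rel=modulepreload
    Link: </button.js>; rel=modulepreload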

Compression: if file B and file C contain similar code (as frontend components often do), the result of compressing B and C individually is a lot larger than keeping them in the same file and compressing the combination. SDCH (Shared Dictionary Compression for HTTP) was supposed to fix this by allowing a compression dictionary to be shared across many files, but it was only implemented in Chrome (which has since removed it) and was essentially only used by LinkedIn. A similar idea exists in Brotli, which ships a single standardized shared dictionary based on common text from the web, and there has been some talk of reviving the approach for zstd on the web.
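
A quick way to see the effect, using Node's built-in zlib (a sketch; the component strings are made up for illustration):

    // compare-compression.js -- run with: node compare-compression.js
    const zlib = require('node:zlib');

    // Two "components" that share most of their text, as frontend components often do
    const componentB = 'export function Button(props) { return h("button", props); }\n'.repeat(50);
    const componentC = 'export function Banner(props) { return h("banner", props); }\n'.repeat(50);

    const separate = zlib.gzipSync(componentB).length + zlib.gzipSync(componentC).length;
    const combined = zlib.gzipSync(componentB + componentC).length;

    // The combined file compresses better because shared patterns are
    // encoded once and then back-referenced within one compression window.
    console.log({ separate, combined }); // combined is noticeably smaller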

So the number of requests is usually not the issue, and if you used Server Push or Early Hints combined with SDCH you could get pretty close to the performance of bundling without build systems. But that ship sailed long ago with the deprecation and removal of Push and SDCH.