Developer News

Selecting a Date Range in CSS

Css Tricks - Thu, 04/09/2026 - 3:52am

A date range selector lets users pick a time frame between a start and end date, which is useful in booking trips, sorting info by date blocks, picking time slots, and planning schedules.

Example pulled from Airbnb

I’m going to show you an example where, even though JavaScript is involved, the bulk of the work is handled by the “n of selector(s)” syntax of the CSS :nth-child selector, making it easy to build the range selection.

CodePen Embed Fallback

The “n of selector” syntax

This syntax of the :nth-child selector first filters a parent’s children by a given selector, then applies the counting order to only those matches.

<p>The reclamation of land...</p>
<p>The first reclamations can be traced...</p>
<p class="accent">By 1996, a total of...</p>
<p>Much reclamation has taken...</p>
<p class="accent">Hong Kong legislators...</p>

.accent { color: red; }
.accent:nth-child(2) { font-weight: bold; /* does not work */ }
:nth-child(2 of .accent) { text-decoration: underline; }

CodePen Embed Fallback

There are two .accent-ed paragraphs with red text. As we try to target the second accented paragraph, .accent:nth-child(2) fails to select it because it’s trying to find an .accent element that’s the second child of its parent.

:nth-child(2 of .accent), on the other hand, succeeds in selecting and styling the second accented paragraph because it only looks for the second element among the .accent elements rather than the second among all of the children.
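To make the counting rule concrete, here’s a small plain-JavaScript model of what :nth-child(n of S) does. The array and matcher are purely illustrative (not a real DOM API): filter the children by the selector first, then count among the matches only.

```javascript
// Conceptual model of :nth-child(n of S): filter the parent's children by
// the selector S first, then apply the 1-based count to the matches only.
function nthChildOf(children, n, matches) {
  const filtered = children.filter(matches);
  return filtered[n - 1] ?? null;
}

// Mirrors the markup above: paragraphs 3 and 5 carry the .accent class.
const children = [
  { tag: 'p', className: '' },
  { tag: 'p', className: '' },
  { tag: 'p', className: 'accent' },
  { tag: 'p', className: '' },
  { tag: 'p', className: 'accent' },
];

const isAccent = el => el.className === 'accent';

// :nth-child(2 of .accent) → the fifth child overall (index 4)
console.log(children.indexOf(nthChildOf(children, 2, isAccent))); // 4

// .accent:nth-child(2) would instead require an .accent that is literally
// the second child, and no such element exists here.
console.log(children.filter((el, i) => isAccent(el) && i === 1).length); // 0
```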

The Layout

Moving onto our main example, let’s put together a month layout. It only takes a few lines of CSS.

<ul id="calendar">
  <li class="day">Mon</li>
  <li class="day">Tue</li>
  <!-- up to Sat -->
  <li class="date">01<input type="checkbox" value="01"></li>
  <li class="date">02<input type="checkbox" value="02"></li>
  <!-- up to 31 -->
</ul>

#calendar {
  display: grid;
  grid-template-columns: repeat(7, 1fr); /* 7 for no. of days in a week */
}

CodePen Embed Fallback

Choose Only Two Dates

Now is when we reach for JavaScript since we can’t check/uncheck a control in CSS. But even here the “n of selector” syntax can be very useful.

When we pick two dates to create a range, clicking on a third date will update the range and remove one of the earlier dates.

You can set up the range re-adjustment logic in any way you like. I’m using this approach:

  • If the third date is either earlier or later than the last return date, it becomes the new return date, and the old one is unselected.
  • If the third date is earlier than the last onward date, it becomes the new onward date, and the old one is unselected.
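Decoupled from the DOM, that logic boils down to a small pure function. This is just a sketch of the decision rules, with made-up names and day numbers standing in for dates:

```javascript
// Sketch of the range re-adjustment rules: given the current [onward, ret]
// range and a newly picked day, return the updated range.
function adjustRange([onward, ret], picked) {
  if (picked < onward) return [picked, ret]; // onward date moved earlier
  if (picked > ret) return [onward, picked]; // return date moved later
  return [onward, picked];                   // return date moved earlier
}

console.log(adjustRange([10, 20], 5));  // [5, 20]
console.log(adjustRange([10, 20], 25)); // [10, 25]
console.log(adjustRange([10, 20], 15)); // [10, 15]
```

The actual implementation below expresses the same three cases as a switch over the new date’s position among the checked ones.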

const CAL = document.getElementById('calendar');
const DT = Array.from(CAL.getElementsByClassName('date'));

CAL.addEventListener('change', e => {
  if (!CAL.querySelector(':checked')) return;

  /* When there are two checked boxes, calendar gets 'isRangeSelected' class */
  CAL.className = CAL.querySelector(':nth-child(2 of :has(:checked))') ? 'isRangeSelected' : '';

  /* When there are three checked boxes */
  if (CAL.querySelector(':nth-child(3 of :has(:checked))')) {
    switch (DT.indexOf(e.target.parentElement)) {
      /* If the newly checked date is first among the checked ones, the second
         checked is unchecked. Onward date moved earlier. */
      case DT.indexOf(CAL.querySelector(':nth-child(1 of :has(:checked))')):
        CAL.querySelector(':nth-child(2 of :has(:checked)) input').checked = 0;
        break;
      /* If the newly checked date is second among the checked ones, the third
         checked is unchecked. Return date moved earlier. */
      case DT.indexOf(CAL.querySelector(':nth-child(2 of :has(:checked))')):
        CAL.querySelector(':nth-child(3 of :has(:checked)) input').checked = 0;
        break;
      /* If the newly checked date is third among the checked ones, the second
         checked is unchecked. Return date moved later. */
      case DT.indexOf(CAL.querySelector(':nth-child(3 of :has(:checked))')):
        CAL.querySelector(':nth-child(2 of :has(:checked)) input').checked = 0;
        break;
    }
  }
});

First, we get the index of the current checked date (DT.indexOf(e.target.parentElement)), then we see if that’s the same as the first checked among all the checked ones (:nth-child(1 of :has(:checked))), second (:nth-child(2 of :has(:checked))), or third (:nth-child(3 of :has(:checked))). Given that, we then uncheck the relevant box to revise the date range.

You’ll notice that by using the “n of selector” syntax, targeting the :checked box we want by its position among all checked ones is made much simpler — instead of indexing through a list of checked dates in JavaScript for this, we can directly select it.

Styling the range is even easier than this.

Styling the Range

/* When two dates are selected */
.isRangeSelected {
  /* Dates following the first but not the second of selected */
  :nth-child(1 of :has(:checked)) ~ :not(:nth-child(2 of :has(:checked)) ~ .date) {
    /* Range color */
    background-color: rgb(228 239 253);
  }
}

When there are two dates chosen, the dates between the first (1 of :has(:checked)) and second (2 of :has(:checked)) are colored pale blue, creating a visual range for that block of dates in the month.

The color is declared inside a compound selector that selects dates (.date) following the first of all checked dates (:nth-child(1 of :has(:checked))), but not those following the second (:not(:nth-child(2 of :has(:checked)) ~ .date)).

Here’s the full example once again:

CodePen Embed Fallback

Selecting a Date Range in CSS originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Should Designers "Code"?

LukeW - Tue, 04/07/2026 - 4:00am

There's a question that never goes away in design: should designers code? My answer has always been yes. But for a decade or so, the complexity of front-end development made it impractical for most. Thankfully, AI coding agents have reopened the door.

Just like a sculptor needs to know how marble chisels, breaks, and buffs, a Web designer should know how CSS, HTML, and JavaScript construct interfaces within a Web browser. You need to be intimate with your medium to know what it can and cannot do. Whether Web apps, iOS native apps, AI apps...

For years, many designers did exactly that. Looking at my personal GitHub history tells a story familiar to many. Steady coding until about 2014. Then almost nothing for a decade. Why?

Before 2014, a designer could build a lot with HTML, CSS, and JavaScript. Then React and Angular gained traction, and "web app" went from "pages with some interactivity" to single-page applications with state management, routing, and build pipelines. The gap between "I can code a website" and "I can code in my team's dev environment" widened fast.

Tooling got heavier and frameworks churned constantly. Deployment went from dragging files to a server to CI/CD and cloud infrastructure. Little wonder, then, that a designer who coded comfortably in 2012 could look at the 2015 landscape and reasonably decide to go back to Sketch (dated reference, I know).

Thankfully technology never sits still and AI coding agents are now collapsing the gap between designing and building. Zooming in to the last few years of my GitHub history tells that story well.

For years, it was faster to mock up software than to ship it. Designers stayed "ahead" of engineering with prototypes. Now AI coding agents make development so much faster that the loop has flipped. Henry Modisett described this new state as "prototype to productize" rather than "design to build," and that sounds right to me.

Designers can now work iteratively with production code, not just prototypes. This kind of hands-on work creates better designers, ones who work through issues that previously got left for developers to figure out.

As always, designing software is better when you work in the medium, not a level or two abstracted away. AI tools make that possible again.

Alternatives to the !important Keyword

Css Tricks - Tue, 04/07/2026 - 3:54am

Every now and then, I stumble onto an old project of mine, or worse, someone else’s, and I’m reminded just how chaotic CSS can get over time. In most of these cases, the !important keyword seems to be involved in one way or another. And it’s easy to understand why developers rely on it. It provides an immediate fix and forces a rule to take precedence in the cascade.

That’s not to say !important doesn’t have its place. The problem is that once you start using it, you’re no longer working with the cascade; you’re bypassing it. This can quickly get out of hand in larger projects with multiple people working on them, where each new override makes the next one harder.

Cascade layers, specificity tricks, smarter ordering, and even some clever selector hacks can often replace !important with something cleaner, more predictable, and far less embarrassing to explain to your future self.

Let’s talk about those alternatives.

Specificity and !important

Selector specificity is a deep rabbit hole, and not the goal of this discussion. That said, to understand why !important exists, we need to look at how CSS decides which rules apply in the first place. I wrote a brief overview on specificity that serves as a good starting point. Chris also has a concise piece on it. And if you really want to go deep into all the edge cases, Frontend Masters has a thorough breakdown.

In short, CSS gives each selector a kind of “weight.” When two rules target the same element, the rule with higher specificity wins. If the specificity is equal, the rule declared later in the stylesheet takes precedence.

  • Inline styles (style="...") are the heaviest.
  • ID selectors (#header) are stronger than classes or type selectors.
  • Class, attribute, and pseudo-class selectors (.btn, [type="text"], :hover) carry medium weight.
  • Type selectors and pseudo-elements (div, p, ::before) have the lowest weight, although the * selector is even lower, with a specificity of 0-0-0 compared to 0-0-1 for type selectors.
/* Low specificity (0,0,1) */
p { color: gray; }

/* Medium specificity (0,1,0) */
.button { color: blue; }

/* High specificity (1,1,0) */
#header .button { color: red; }

<!-- Inline style (1,0,0) -->
<p style="color: green;">Hello</p>

Inline styles being the heaviest also explains why they’re often frowned upon and not considered “clean” CSS since they bypass most of the normal structure we try to maintain.
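Those weight triples can be compared mechanically. A quick sanity check in JavaScript, using the (ids, classes, types) counts from the rules above; this is only an illustration, not how browsers actually store specificity:

```javascript
// Compare (ids, classes, types) triples lexicographically, the way the
// cascade ranks selector specificity; a tie falls back to source order.
const compareSpecificity = (a, b) =>
  a[0] - b[0] || a[1] - b[1] || a[2] - b[2];

console.log(compareSpecificity([0, 1, 0], [0, 0, 1]) > 0);   // true: .button beats p
console.log(compareSpecificity([1, 1, 0], [0, 1, 0]) > 0);   // true: #header .button beats .button
console.log(compareSpecificity([0, 1, 0], [0, 1, 0]) === 0); // true: tie → the later rule wins
```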

!important changes this behavior. It skips normal specificity and source order, pushing that declaration to the top within its origin and cascade layer:

p { color: red !important; }
#main p { color: blue; }

Even though #main p is more specific, the paragraph will appear red because the !important declaration overrides it.

Why !important can be problematic

Here’s the typical lifecycle of !important in a project involving multiple developers:

“Why isn’t this working? Add !important. Okay, fixed.”

Then someone else comes along and tries to change that same component. Their rule doesn’t apply, and after some digging, they find the !important. Now they have a choice:

  • remove it and risk breaking something else,
  • or add another !important to override it.

And since no one is completely sure why the first one was added, the safer move often feels like adding another one. This can quickly spiral out of control in larger projects.

On a more technical note, the fundamental problem with !important is that it breaks the intended order of the cascade. CSS is designed to resolve conflicts predictably through specificity and source order. Later rules override earlier ones, and more specific selectors override less specific ones.

A common place where this becomes obvious is theme switching. Consider the example below:

.button { color: red !important; }
.dark .button { color: white; }

Even inside a dark theme, the button stays red. This results in the stylesheet becoming harder to reason about, because the cascade is no longer predictable.

This makes maintenance and debugging harder, especially in large teams. None of this means !important should never be used. There are legitimate cases for it, especially in utility classes, accessibility overrides, or user stylesheets. But if you’re using it as your go-to method for resolving a selector or styling conflict, it’s usually a sign that something else in the cascade needs attention.

Let’s look at alternatives.

Cascade layers

Cascade layers are a more advanced feature of CSS, and there’s a lot of theory on them. For the purposes of this discussion, we’ll focus on how they help you avoid !important. If you want to learn more, Miriam Suzanne wrote a complete guide on CSS Cascade Layers that goes into considerable detail.

In short, cascade layers let you define explicit priority groups in your CSS. Instead of relying on selector specificity, you decide in advance which category of styles should take precedence. You can define your layer order up front:

@layer reset, defaults, components, utilities;

This establishes priority from lowest to highest. Now you can add styles into those layers:

@layer defaults {
  a:any-link { color: maroon; }
}

@layer utilities {
  [data-color='brand'] { color: green; }
}

Even though [data-color='brand'] has lower specificity than a:any-link, the utilities layer takes precedence because it was defined later in the layer stack.

It’s worth noting that specificity still works inside a layer. But between layers, layer order is given priority.
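As a mental model (not an actual browser API), between-layer resolution can be sketched like this: among competing declarations from different layers, the one from the layer declared latest wins, regardless of selector specificity.

```javascript
// Toy model of between-layer resolution: the declared layer order decides;
// selector specificity would only matter within a single layer.
function winningDeclaration(decls, layerOrder) {
  return decls.reduce((win, d) =>
    layerOrder.indexOf(d.layer) >= layerOrder.indexOf(win.layer) ? d : win);
}

const order = ['reset', 'defaults', 'components', 'utilities'];
const decls = [
  { layer: 'defaults', selector: 'a:any-link', color: 'maroon' },          // (0,1,1)
  { layer: 'utilities', selector: "[data-color='brand']", color: 'green' } // (0,1,0)
];

console.log(winningDeclaration(decls, order).color); // 'green'
```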

With cascade layers, you can prioritize entire categories of styles instead of individual rules. For example, your “overrides” layer always takes precedence over your “base” layer. This sort of architectural thinking, instead of reactive fixing, saves a lot of headaches down the line.

One very common example is integrating third-party CSS. If a framework ships with highly specific selectors, you can do this:

@layer framework, components;
@import url('framework.css') layer(framework);

@layer components {
  .card { padding: 2rem; }
}

Now your component styles automatically override the framework styles, regardless of their selector specificity, as long as the framework isn’t using !important.

And while we’re talking about it, it’s worth noting that !important behaves counterintuitively with cascade layers: it reverses the layer order. It is no longer a quick way to jump to the top of the priorities, but an integrated part of cascade layering; a way for lower layers to insist that some of their styles are essential.

So, if we were to order a set of layers like this:

  1. utilities (most powerful)
  2. components
  3. defaults (least powerful)

Using !important flips things on their head:

  1. !important defaults (most powerful)
  2. !important components
  3. !important utilities
  4. normal utilities
  5. normal components
  6. normal defaults (least powerful)

Notice what happens there: it generates three new, reversed important layers that supersede the original three layers while reversing the entire order.
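That flipped list can be generated mechanically, which makes the rule easy to remember: for normal declarations the last declared layer is strongest, while for !important declarations the first is. A sketch:

```javascript
// Given layers declared lowest → highest priority (as in `@layer a, b, c;`),
// list the effective precedence, strongest first: !important declarations
// keep the declared order, normal declarations use the reversed order.
function effectivePrecedence(declaredLayers) {
  const important = declaredLayers.map(l => `!important ${l}`);
  const normal = [...declaredLayers].reverse().map(l => `normal ${l}`);
  return [...important, ...normal];
}

console.log(effectivePrecedence(['defaults', 'components', 'utilities']));
// [ '!important defaults', '!important components', '!important utilities',
//   'normal utilities', 'normal components', 'normal defaults' ]
```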

The :is() pseudo

The :is() pseudo-class is interesting because it takes the specificity of its most specific argument. Say you have a component that needs to match the weight of a more specific selector elsewhere in the codebase:

/* somewhere in your styles */
#sidebar a { color: gray; }

/* your component */
.nav-link { color: blue; }

Rather than using !important, you can bump .nav-link up by wrapping it in :is() with a more specific argument:

:is(#some_id, .nav-link) { color: blue; }

Now this has id-level specificity while matching only .nav-link. It’s worth noting that the selector inside :is() doesn’t have to match an actual element. We’re using #some_id purely to increase specificity in this case.

Note: If #some_id actually exists in your markup, this selector would also match that element. So it’s best to use an ID that isn’t present anywhere to avoid side effects.

On the flip side, :where() does the opposite. It always resolves to a specificity of (0,0,0), no matter what’s inside it. This is handy for reset or base styles where you want anything downstream to override easily.
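In specificity terms, the difference between the two pseudo-classes is easy to state: :is() contributes the specificity of its most specific argument, while :where() contributes zero. A toy model using (ids, classes, types) triples, for illustration only:

```javascript
// :is(...) takes the specificity of its most specific argument;
// :where(...) always resolves to (0,0,0).
const compare = (a, b) => a[0] - b[0] || a[1] - b[1] || a[2] - b[2];

const isSpecificity = args =>
  args.reduce((max, s) => (compare(s, max) > 0 ? s : max));

const whereSpecificity = () => [0, 0, 0];

// :is(#some_id, .nav-link) → id-level weight
console.log(isSpecificity([[1, 0, 0], [0, 1, 0]])); // [1, 0, 0]
console.log(whereSpecificity());                    // [0, 0, 0]
```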

Doubling up a selector

A pretty straightforward way of increasing a selector’s specificity is repeating the selector. This is usually done with classes. For example:

.button { color: blue; }
.button.button { color: red; /* higher specificity */ }

You would generally not want to do this too often as it can become a readability nightmare.

Reordering

CSS resolves ties in specificity by source order, so a rule that comes later is prioritized. This is easy to overlook, especially in larger stylesheets where styles are spread across multiple files.

If a more generic rule keeps overriding a more targeted one and the specificity is the same, check whether the generic rule is being loaded after yours. Flipping the order can fix the conflict without needing to increase specificity.

This is also why it’s worth thinking about stylesheet organization from the start. A common pattern is to go from generic to specific (resets and base styles first, then layout, then components, then utilities).

When using !important does make sense

After all that, it’s worth being clear: !important does have legitimate use cases. Chris discussed this a while back, and the comments are worth a read too.

The most common case is utility classes. For example, the whole point of classes like .visually-hidden is that they do one thing, everywhere. In these cases, you don’t want a more specific selector quietly undoing them somewhere else. The same is true for state classes like .disabled or generic component styles like .button.

.visually-hidden {
  position: absolute !important;
  width: 1px !important;
  height: 1px !important;
  overflow: hidden !important;
  clip-path: inset(50%) !important;
}

Third-party overrides are another common scenario. !important can be used here to either override inline styles being set in JavaScript or normal styles in a stylesheet that you can’t edit.

From an accessibility point of view, !important is irreplaceable for user stylesheets. Since these are applied on all webpages and there’s virtually no way to guarantee that a user stylesheet’s selectors will always have the highest specificity, !important is basically the only reliable way to make sure those styles always take precedence.

Another good example is when it comes to respecting a user’s browser preferences, such as reducing motion:

@media screen and (prefers-reduced-motion: reduce) {
  * {
    animation-duration: 0.001ms !important;
    animation-iteration-count: 1 !important;
    transition-duration: 0.001ms !important;
  }
}

Wrapping up

The difference between good and bad use of !important really comes down to intent. Are you using it because you understand the CSS Cascade and have made a call that this declaration should always apply? Or are you using it as a band-aid? The latter will inevitably cause issues down the line.


Alternatives to the !important Keyword originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Looking at New CSS Multi-Column Layout Wrapping Features

Css Tricks - Mon, 04/06/2026 - 3:55am

Multi-column layouts have not been used to their full potential, mostly because once content exceeded a limit, multi-column would force a horizontal scroll. It’s unintuitive and a UX no-no, especially on the modern web where the default scroll is vertical.

Take the following case for example:

CodePen Embed Fallback

The CSS code for that might look something like this:

body {
  max-width: 700px;
}

.article {
  column-gap: 10px;
  column-count: 3;
  height: 350px;
}

When the content size exceeds the body container, multi-column creates additional columns and a horizontal scroll. However, tools that “fix” this without resorting to trickier solutions have recently landed in Chrome.

Chrome 145 introduces the column-height and column-wrap properties, enabling us to wrap the additional content into a new row below, creating a vertical scroll instead of a horizontal scroll. 

So, now we can do something like this in Chrome 145+:

body {
  max-width: 700px;
}

.article {
  column-gap: 10px;
  column-count: 3;
  column-wrap: wrap;
  height: 350px;
}

And we get this nice multi-column layout that maintains the column-count:

This effectively transforms multi-column layouts into 2D flows, helping us create a more web-appropriate scroll.

⚠️ Browser Support: As of April 2026, column-wrap and column-height are available in Chrome 145+. Firefox, Safari, and Edge do not yet support these properties.

What this actually solves

The new properties can be genuinely useful in several cases:

Fixed-height content blocks

This is probably one of the most useful use cases for these properties. If you’re working with content that has predictable or capped heights, like card grids where each card has a max-height, then this works beautifully. 

Toggle between column-wrap: wrap and column-wrap: nowrap in the following demo (Chrome 145+ needed) to check the difference.

CodePen Embed Fallback

In case you’re checking this in an unsupported browser, this is the nowrap layout:

And this is the wrap layout:

Wrapping creates a much more seamless flow. 

However, in case the content-per-card is unbalanced, then even with wrapping, it can lead to unbalanced layouts:

CodePen Embed Fallback

Newspaper-style and Magazine-style layouts

Another real life use case is when designing newspaper-style layouts or sections where you’re willing to set explicit container and column heights. As can be seen in the earlier example, the combination of column-height and column-wrap helps make the layout responsive for different screen sizes, while retaining a more intuitive flow of information. 

Block-direction carousels

This is my personal favorite use case of the column-wrap feature! By setting the column height to match the viewport (e.g., 100dvh), you can essentially treat the multi-column flow as a pagination system, where your content fills the height of the screen and then “wraps” vertically. When combined with scroll-snap-type: y mandatory, you get a fluid, vertical page-flipping experience that handles content fragmentation without any manual clipping or JavaScript calculation.

Play around with the following demo and check it out for yourself. Unless you’re on Chrome 145+ you’ll get a horizontal scroll instead of the intended vertical.

CodePen Embed Fallback

There is a bit of a drawback to this though: If the content on a slide is too long, column-wrap will make it flow vertically, but the flow feels interrupted by that imbalance. 

What they don’t solve

While these properties are genuinely helpful, they are not one-stop solutions for all multi-column designs. Here are a few situations where they might not be the “right” approach.

Truly dynamic content

If the content height is unknown or unpredictable in advance (e.g., user-generated content, CMS-driven pages), then these properties are of little use. The design can still be wrapped vertically with the column-wrap property, however, the layout would remain unpredictable without a fixed column height.

It can lead to over-estimating the column height, leaving awkward gaps in the layout. Similarly, it can lead you to under-estimate the height, resulting in unbalanced columns. The fix here is then to use JS to calculate heights, which defeats the idea of a CSS-native solution.

Media-query-free responsiveness

For a truly “responsive” layout, we still need media queries to adjust column-count and column-height for different viewport sizes. While wrapping helps and is an incremental step toward a CSS-native solution, it only adjusts the overflow behavior. Hence, the dependency on media queries persists when supporting varying screen sizes.

Complex alignment needs

If you need precise control over where items sit in relation to each other, CSS Grid is still a better option. While multi-column with wrapping gives you flow, it still lacks positioning control.

Comparing alternatives

Let’s see how the multi-column approach compares with existing alternatives like CSS Grid, Flexbox, and the evolving CSS Masonry, which offer similar layouts.

One key difference is that while grid and flexbox manage distinct containers, multi-column is the only system that can fragment a single continuous stream of content across multiple columns and rows. This makes it the best fit for presenting long-form content, like we saw in the newspaper layout example.

CSS Grid lets us control placement via the grid structure, making it great for complex layouts requiring precise positioning or following asymmetric designs, like dashboards or responsive image galleries that need to auto-fit according to the screen size.

Flexbox with wrapping is great for creating standard UI components like navigation bars and tag clouds that should wrap around on smaller screen sizes.

Note: Chrome is also experimenting with a new flex-wrap: balance keyword that could provide more wrapping control as well.

CSS Grid and Flexbox with wrapping are both good fits for layouts where each item is independent. They work well with content of dynamic heights and provide better alignment control compared to a multi-column approach. However, multi-column with the updated properties has an edge when it comes to fragmentation-aware layouts as we’ve seen.

CSS Masonry, on the other hand, will be useful for interlocking items with varying heights. This makes it perfect for creating style boards (like Pinterest) that pack items with varying heights in an efficient and aesthetic manner. Another good use case is e-commerce websites that use a masonry grid for product displays because descriptions and images can lead to differing card heights.

Conclusion

The new column-wrap and column-height properties supported in Chrome 145 could significantly increase the usability of multi-column layouts. By enabling 2D flows, we have a way to fragment content without losing the vertical scrolling experience.

That said, these features will not be a replacement for the structural precision of CSS Grid or the item-based flexibility of Flexbox. But they will fill a unique niche. As browser support continues to expand, the best way to approach multi-column layout is with an understanding of both its advantages and limitations. They won’t solve dynamic height issues or eliminate the need for media queries, but will allow flowing continuous content in a 2D space.

Looking at New CSS Multi-Column Layout Wrapping Features originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Making Complex CSS Shapes Using shape()

Css Tricks - Thu, 04/02/2026 - 3:58am

Creating rectangles, circles, and rounded rectangles is CSS at its most basic. Creating more complex shapes such as triangles, hexagons, stars, and hearts is more challenging, but still a simple task if we rely on modern features.

But what about those shapes having a bit of randomness and many curves?

A lot of names may apply here: random wavy, wiggly, blob, squiggly, ragged, torn, etc. Whatever you call them, we all agree that they are not trivial to create, and they generally belong to the SVG world or are created with tools and used as images. Thanks to the new shape() function, we can now build them using CSS.

I won’t tell you they are easy to create. They are indeed a bit tricky as they require a lot of math and calculation. For this reason, I built a few generators from which you can easily grab the code for the different shapes.

All you have to do is adjust the settings and get the code in no time. As simple as that!

While most of you may be tempted to bookmark the CSS generators and leave this article, I advise you to continue reading. Having the generators is good, but understanding the logic behind them is even better. You may want to manually tweak the code to create more shape variations. We will also see a few interesting examples, so stay until the end!

Notice: If you are new to shape(), I highly recommend reading my four-part series where I explain the basics. It will help you better understand what we are doing here.

How does it work?

While many of the shapes you can create with my generators look different, all of them rely on the same technique: a lot of curve commands. The main trick is to ensure two adjacent curves create a smooth curvature so that the full shape appears as one continuous curve.

Here is a figure of what one curve command can draw. I will be using only one control point:

Now, let’s put two curves next to each other:

The ending point of the first curve, E1, is the starting point of the second curve, S2. That point is placed within the segment formed by both the control points C1 and C2. That’s the criterion for having an overall smooth curve. If we don’t have that, we get a discontinued “bad” curve.

All we have to do is to randomly generate different curves while respecting the previous criterion between two consecutive curves. For the sake of simplicity, I will consider the common point between two curves to be the midpoint of the control points to have less randomness to deal with.

Creating the shapes

Let’s start with the easiest shape: a random wavy divider, with a random curve along one side.

Two variables will control the shape: the granularity and the size. The granularity defines how many curves we will have (it will be an integer). The size defines the space where the curves will be drawn.

The first step is to create N points and evenly place them at the bottom of the element (N is the granularity).

Then, we randomly offset the vertical position of the points using the size variable. Each point will have an offset equal to a random value within the range [0 size].

From there, we take two adjacent points and define their midpoint. We get more points.

Do you start to see the idea? A first set of points is randomly placed while a second set is placed in a way that meets the criterion we defined previously. From there, we draw all the curves, and we get our shape.

The CSS code will look like this:

.shape {
  clip-path: shape(
    from Px1 Py1,
    curve to Px2 Py2 with Cx1 Cy1,
    curve to Px3 Py3 with Cx2 Cy2,
    /* ... */
    curve to Pxi Pyi with Cx(i-1) Cy(i-1)
    /* ... */
  )
}

The Ci are the points we randomly place (the control points) and Pi are the midpoints.
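As a rough sketch of what a generator like this computes (function name and sample numbers are illustrative), here’s how the control points could be turned into a list of curve commands; every endpoint is the midpoint of two consecutive control points, which guarantees the smooth joins described earlier:

```javascript
// Sketch of the wavy-divider math: control points are evenly spaced
// horizontally with (random) vertical offsets; each curve ends at the
// midpoint of two consecutive control points, so all joins are smooth.
function wavyCurves(controlYs, width) {
  const n = controlYs.length;
  const xs = controlYs.map((_, i) => (i * width) / (n - 1));
  const cmds = [];
  for (let i = 0; i < n - 1; i++) {
    const mx = (xs[i] + xs[i + 1]) / 2; // midpoint = curve endpoint
    const my = (controlYs[i] + controlYs[i + 1]) / 2;
    cmds.push(`curve to ${mx} ${my} with ${xs[i]} ${controlYs[i]}`);
  }
  return cmds.join(', ');
}

console.log(wavyCurves([10, 40, 20], 100));
// curve to 25 25 with 0 10, curve to 75 30 with 50 40
```

A real generator would pick the controlYs randomly within [0 size]; here they are fixed so the output is reproducible.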

From there, we apply the same logic to the different sides to get different variations (bottom, top, bottom-top, all sides, etc.).

As for the blob, the logic is slightly different. Instead of considering a rectangular shape and straight lines, we use a circle.

We evenly place the points around the circle (the one formed by the element if it has border-radius: 50%). Then, we randomly offset them closer to the center. Finally, we add the midpoints and draw the shape.
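The blob variant only changes how the control points are placed. A sketch in the same spirit (illustrative names, with offsets supplied instead of randomized so the result is reproducible):

```javascript
// Blob control points: evenly spaced around a circle, each pulled toward
// the center by its offset; midpoints between consecutive points would
// then serve as the smooth curve endpoints, as with the wavy divider.
function blobControlPoints(radius, offsets) {
  const n = offsets.length;
  return offsets.map((off, i) => {
    const angle = (2 * Math.PI * i) / n;
    const r = radius - off; // random offset pulls the point inward
    return [r * Math.cos(angle), r * Math.sin(angle)];
  });
}

// Four points at 0°, 90°, 180°, 270°, pulled in by 0, 10, 20, 30
console.log(blobControlPoints(100, [0, 10, 20, 30]));
```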

We can still go fancier and combine the first technique with the circular one to consider a rectangle with rounded corners.

This was the trickiest one to implement as I had to deal with each corner, each side, and work with different granularities. However, the result was quite satisfying as it allows us to create a lot of fancy frames!

Show me the cool demos!

Enough theory, let’s see some cool examples and how to simply use the generators to create complex-looking shapes and animations.

We start with a classic layout featuring numerous wavy dividers!

CodePen Embed Fallback

We have four shapes in that demo, and all of them are a simple copy/paste from the wavy divider generator. The header uses the bottom configuration, the footer uses the top configuration and the other elements use the top + bottom configuration.

Let’s get fancy and add some animation.

CodePen Embed Fallback

Each element will have the following code:

@media screen and (prefers-reduced-motion: no-preference) {
  .element {
    --s1: shape( ... );
    --s2: shape( ... );
    animation: dance linear 1.6s infinite alternate;
  }
  @keyframes dance {
    0% { clip-path: var(--s1); }
    to { clip-path: var(--s2); }
  }
}

From the generator, you fix the granularity and size, then you generate two different shapes for each one of the variables (--s1 and --s2). The number of curves will be the same, which means the browser can have an interpolation between both shapes, hence we get a nice animation!
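If you build the two shapes by hand instead of with the generator, it's worth checking that constraint yourself. A tiny hypothetical helper:

```javascript
// Interpolation between two shape() values only works when both have the
// same number (and order) of commands, so a quick count is a useful sanity check.
const curveCount = (shape) => shape.split('curve to').length - 1;

const s1 = 'shape(from 0% 0%, curve to 50% 10% with 25% 20%, curve to 100% 0% with 75% 20%)';
const s2 = 'shape(from 0% 0%, curve to 50% 30% with 25% 0%, curve to 100% 0% with 75% 40%)';
console.log(curveCount(s1) === curveCount(s2)); // true: safe to animate between them
```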

And what about introducing scroll-driven animation to have the animation based on the scroll? All you have to do is add animation-timeline: scroll() and it’s done.

CodePen Embed Fallback

Here is the same effect with a sticky header.

CodePen Embed Fallback

For this one, you play with the size. You fix the granularity and the shape ID, then you use a size equal to 0 for the initial shape (a rectangle) and a size different from 0 for the wavy one. Then you let the browser animate between both.

Do you see all the possibilities we have? You can either use the shapes as static decorations or create fancy animations between two (or more) by using the same granularity and adjusting the other settings (size and shape ID).

What cool demo can you create using those tricks? Share it in the comment section.

I will leave you with more examples you can use as inspiration.

A bouncing hover effect with blob shapes:

CodePen Embed Fallback CodePen Embed Fallback

A squishy button with a hover and click effect:

CodePen Embed Fallback

A wobbling frame animation:

CodePen Embed Fallback

A liquid reveal effect:

CodePen Embed Fallback

And a set of fancy CSS loaders you can find at my site.

Conclusion

Do you see all the potential of the new shape() function? We now have the opportunity to create complex-looking shapes without resorting to SVG or images. In addition to that, we can easily have nice transition/animation.

Don’t forget to bookmark my CSS Generators website, from where you can get the code of the shapes we studied and more. I also have the CSS Shape website which I will soon update to utilize the new shape() for most of the shapes and optimize a lot of old code!

What about you? Can you think about a complex shape we can create using shape()? Perhaps you can give me the idea for my next generator!

Making Complex CSS Shapes Using shape() originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Front-End Fools: Top 10 April Fools’ UI Pranks of All Time

Css Tricks - Wed, 04/01/2026 - 4:00am

April Fools’ Day pranks on the web imply that we’re not trying to fool each other every day in web design anyway. Indeed, one of my favorite comments I received on an article was, “I can’t believe my eyes!” You shouldn’t, since web design relies on fooling the user’s brain by manipulating the way we process visual information via Gestalt laws, which make a website feel real.

April Fools’ Day on the web exemplifies what philosopher Jean Baudrillard called a deterrence machine — a single day on the calendar to celebrate funny fake news is like a theme park designed to make the fake constructs beyond its gates seem real by comparison. And oftentimes, the online pranks on April 1st are indistinguishable from the bizarreness that ensues all year round in the “real” virtual world.

Real things that looked like April Fools’ pranks

Tech has a history of April Fools’ Day announcements that remind me of what Philip K. Dick called “fake fakes,” emerging every year like real animals surreptitiously replacing the fake ones at Disneyland.

For instance, in 2004, people famously thought Gmail was an April Fools’ joke since it was announced on April 1st.

And on April Fools’ Day in 2013, long before the current generation of AI, Tom Murphy announced an AI that learns to play NES games. It was the real deal, even though he published the research paper and source code on “SIGBOVIK 2013, an April 1st conference that usually publishes fake research. Mine is real!” In Tom’s demo, the AI even devised the strategy of indefinitely pausing Tetris, because in that game on NES, “The only way to win is not to play.”

To give a more personal example of real tech that could be mistaken for an April Fools’ joke, my article on pure CSS collision detection was published on April 1st, 2025, my local time. I was amused when someone commented that using min to detect if a paddle was in range of a ball seemed like a clever hack that “brings up the question: Should game logic be done in CSS?” Of course it shouldn’t! I wasn’t seriously proposing this as the future of web game development.

I replied that if the commenter can take the idea seriously for a minute, it’s a testament to how far CSS has come as a language. It seems even funnier in hindsight, now that the range syntax has come to style queries, meaning we no longer need the min hack. So, maybe everyone should make games in CSS now, if the min hack was the only deal breaker (I kid because I love).

My CSS collision detection demo had a resurgence in popularity recently, when Chris Coyier chose it as a picked Pen. And in that CodePen, a comment again made me laugh: “Can it be multiplayer/online?” Yet, once I stopped laughing, I found myself trying to get a multiplayer mode working. Whether I can or not, I guess the joke’s on me for taking CSS hacking too seriously.

The thing is, much of what we have on the web this year seemed unthinkable last year.

Even the story of the origin of April Fools’ Day sounds like a geeky April Fools’ joke — the leading theory is that the 15th-century equivalent of the Y2K bug had some foolish people incorrectly celebrating the new year on April 1st when the Pope changed the calendars in France from the Julian Calendar to the Gregorian Calendar. And — April Fools’ again! — that’s a legend nobody has been able to prove happened.

But whichever way you feel about the constant disruptions at the heart of the evolution of tech, the disruptions work like pranks by flipping common narratives on their heads in the same way April Fools’ Day does. With that in mind, let’s go through history with an eye for exploring the core of truth inside the jokes of April Fools’ Days past.

Note: These are the historical pranks I consider the top 10 most noteworthy, rather than the “best.” You’ll see that some of them crossed the line and/or backfired.

Google April Fools’ games

Google is famous for its April Fools’ pranks, but they’ve also historically blurred the line between pranks and features. For example, on April 1st 2019, Google introduced a temporary easter egg that transformed Google Calendar into a Space Invaders game. It was such a cool “joke” that nowadays, there’s a Chrome extension that offers a similar experience, turning your Google Calendar into a Breakout game. This extension also offers the option to actually delete items that your ball hit from your calendar at the end of a game.

On April Fools’ Day the same year as the original calendar game, Google also released a feature that allowed Google Maps users to play Snake on maps.

Personal Sidenote: The Google gag inspired an unreleased game I once made with an overworld that’s a gamified calendar, in which your character is trying to avoid an abusive partner by creating excuses not to be at home at the same time as their partner, but that’s a little dark for April Fools’.

Prank npm packages

In March 2016, a legit — if arguably trivial — eleven-line package was deleted from the npm registry after its creator decided to boycott npm. Turns out that deletion disrupted big companies whose code relied on the left-pad package and this prompted npm to change its policies on which packages can be deleted. I mention this because the humour of the npm packages released as jokes often revolves around poking fun at JavaScript developers’ overuse of dependencies that might not be needed.

Here is a 0kb npm package called vanilla-javascript and a page for the Vanilla JS “framework” that is always 0kb, no matter which features you add to the “bundle.” It lists all the JavaScript frameworks as “plugins.” Some of the dependent packages for vanilla-javascript are quite funny. I like false-js, which ensures true and false are defined properly. The library can be initialized with the settings disableAprilFoolsSideEffects, definitelyDisableAprilFoolsSideEffects, and strictDisableAprilFoolsSideEffectsCheck. If you read the source code, there is a comment saying, “Haha, this code is obfuscated, you’ll never figure out what happens on April Fools.”

There is also this useless library to get the current day. It seems plausible till you look carefully at the website and the description: “This package is ephemeral for April Fools’ Day and will be removed at some point.“ The testimonials from fictional time-traveling characters are also a bit of a giveaway, and you have to love that he updated it every day for months, “because… why not? 🤷‍♂️”

More “terrible npm packages” for April Fools’ are here.

aprilFools.css

There’s another category of dependencies that are functional but used for playing April Fools pranks. For instance, aprilFools.css by Wes Bos, which has a comment at the top saying:

/*
  I assume no responsibility for angry co-workers or lost productivity.
  Put these CSS definitions into your co-worker's Custom.css file.
  They will be applied to every website they visit as well as their developer tools.
*/

It does things like use CSS transforms to turn the page upside down.

It strikes me that following the advice in the comments could be a slippery slope to a dark place of workplace bullying, if you were to try it on the wrong coworker, just because they left their computer unlocked. As Chris Coyier pointed out in his post on practical jokes in the browser:

“Fair warning on this stuff… you gotta be tasteful. Putting someone’s stapler in the jello is pretty hilarious unless it’s somehow a family heirloom, or it’s someone who’s been the target of a little too much office prankery to the point it isn’t funny anymore.”

April Fools’ pranks using VS Code Extensions

While we’re on the topic of behavior that blurs the line between pranks and workplace bullying, let’s talk about this list of VS Code Extensions that could be used to prank a coworker by causing their code editor UI to behave unexpectedly. Most of the examples sound funny and harmless, like having the IDE intermittently pop up “Dad Jokes” or make funny sounds when typing. Changing the code editor to resemble Slack using a theme is also funny.

Then there’s the last example that made me do a double-take: “Imagine hitting CTRL + S to save your work and then it gets erased!” Yeah, if I were interviewing someone and they mentioned they consider this a funny joke, I would end the interview there. And if anyone ever does this to me, I’m going to HR.

Pranks by the W3C

I don’t think of the W3C as having a sense of humor, although I guess getting me excited about HTML imports back in the day, only to discontinue them, was funny in hindsight, if you have a dark sense of humor. Nevertheless, they have posted pranks on their official website, such as restyling to make their page look like a nineties GeoCities website in 2012, or claiming they were reviving the <blink> tag in 2021. There’s a theme of playing on the nostalgia of people my age who want these things to be real.

Sidenote: If you want more Nineties internet experiences, the game Hypnospace Outlaw, set on a retro internet in an alternative 1999, might be up your alley.

Other sites over the years have played a similar joke, which can never fail to charm an old-timer like me who remembers using a web like this at the public library, back when the internet was too expensive for my family to afford at home.

StackOverflow retro restyle

I can’t get enough of these nostalgia trips, so here’s what StackOverflow looked like on April Fools’ Day in 2019. They turned the site “full GeoCities” for fun. Yet everything comes full circle. Now StackOverflow itself seems destined to become as fossilized as GeoCities. Even so, the site is currently attempting a new, real redesign to survive rather than for fun. It’s sobering to consider that maybe the only StackOverflow experience for the next generation of coders will be if ChatGPT gets a StackOverflow restyle on a future April Fools’.

Stack Egg

While we’re on the topic of StackOverflow, their Stack Egg prank from 2015 was very cool, but it might win my award for the most over-engineered April Fools’ prank that caused the most serious problems for a website. The premise was another Nineties throwback, this time to the nineties Tamagotchi craze.

The idea, as the creator describes it, was that every site on the Stack Exchange network would have its own “Stack Egg,” representing that site. The goal was to collaboratively keep your metaphorical “site” alive using hypothetical actions named after real actions on the site, such as upvotes to feed the Tamagotchi, and review actions to clean up the poop so the Tamagotchi doesn’t get sick.

It was a nifty concept, although like Google’s April Fools’ games, it’s more neat than laugh-out-loud funny. The part that does make me laugh — I don’t feel too guilty saying it since it was more than a decade ago — was that this is a game about keeping the websites alive, and it inadvertently DDoS-ed its own websites and took down the whole StackExchange network.

And yet, the creators thought the fact that they had the foresight to implement a feature flag that allowed switching it off meant this was a case study in “Operational Excellence in AFPs (April Fools’ Pranks).” Yep, that is an actual article published in a peer-reviewed journal. According to the article, the engineers involved pushed a fix about two hours later to salvage the prank. Code Golf was the winner of the game, in case you’re wondering. According to the same post that announced the winner, “it’s by no means designed to withstand exploits,” and in the two days the feature was live, users discovered a vulnerability that was “close to voting fraud.”

I mentioned the over-engineering, so here’s the part that makes the unintentional punchline even funnier: rather than investing more time guarding against the basics, such as not bringing down the website and considering security, the creator spent time making his own Turing-complete language to handle the LCD-style animations, “because I wanted to! Creating a programming language is fun.”

That’s such a classically geeky way to prioritize!

Google Mic Drop

If Stack Egg created the most issues I’ve ever heard of for a website that created the prank, the most mean-spirited high-profile UI prank — which caused the most problems for users — has to be Google Mic Drop. It dropped (pun intended) on April Fools’ Day 2016, shortly after Google changed its motto from “don’t be evil” to “do the right thing.” Then, they promptly redefined the “right thing” as sabotaging people’s professional reputations with a minion GIF.

Google added a button, nice and close to the regular “Send” button in Gmail, that would send a farewell message to the recipient with an animated Minion dropping a mic, then block all emails from that recipient permanently, without prompting the sender to confirm first. Better still, there was a bug that meant the recipient could receive that “GIF of death” and the block, even if the sender managed to press the correct “Send” button in the confusing new UI.

The “hilarity” that ensued included:

  • A funeral home accidentally sent a mic drop and block to a grieving family.
  • A man posted on the Gmail help forum, “Thanks to Mic Drop, I just lost my job.”

Google disabled the feature before the end of April Fools’ Day and issued an apology saying, “It looks like we pranked ourselves this year.” I am not sure how the joke was on Google, so much as the people whose livelihoods and relationships were destroyed.

Remember when I said in the intro that April Fools’ is a distraction from how the joke is on us for believing that the web is what it seems? This Google prank was a reminder that if you believe an advertising company masquerading as a search company has the judgment and ethics to prioritize your interests, when they hoard your personal data and don’t actually care if you can find anything, the real mic drop moment is when you realize that your career and relationships are a data point in Google’s next A/B test.

Prank UI/UX research articles

The funniest part of these April Fools’ UI/UX advice articles is that they’re published by a serious, high-profile consultancy and research group, so the authors work hard to make it obvious these are April Fools’ hoaxes. In each article, “APRIL FOOLS” is in the title in ALL CAPS. And in the first paragraph of the newer hoax articles: “This article was published as an April Fool’s hoax and does not contain real recommendations.” I like to imagine the marketing department thought this was a great idea, and then the authors of the articles tried their best not to make fools of themselves. I noticed the group stopped posting hoax content after 2022.

Sidenote: Educational resources people rely on as a source may not be the best place for prank posts. It reminds me of this peer-reviewed radiology website that on April Fools’ Day 2015 posted a hoax X-ray image under the title “Ectopia cordis interna – Tin(Man) syndrome.” Over the years, medical professionals circulated the image unaware it was a hoax, and then, in 2025, six medical journal case studies involving the made-up condition had to be retracted.

Actually, the hoax UI/UX articles are educational, in a UI antipatterns kind of way, such as “Users Love Change: Combatting a UX Myth,” which advocates redesigning the UI as often as possible for the heck of it — except I can’t help but feel JIRA took that advice literally. The “Canine UX” article teaches ideas of user personas and design in a fun way. And “The User Experience of Public Bathrooms” reads as if George Costanza from Seinfeld turned his toilet obsession into a lesson in usability.

DigitalOcean buys codepen.io

Regular readers of CSS-Tricks know that the founder, Chris Coyier, really did decide in 2022 to sell the website to our current stewards, DigitalOcean, so that he could focus on his other projects, such as CodePen. Therefore, the announcement on CodePen that DigitalOcean was also buying that website seemed maddeningly plausible. The level of detail in the hoax announcement increased its verisimilitude, such as the claim that users could use custom domain names on CodePen for free, as long as the domain was DigitalOcean-hosted. In fact, the only sign it was a prank is that nobody anywhere announced anything like this, unless you count me posting it today on a DigitalOcean-owned website.

Happy April Fools’ Day, everyone!

Front-End Fools: Top 10 April Fools’ UI Pranks of All Time originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Sniffing Out the CSS Olfactive API

Css Tricks - Wed, 04/01/2026 - 3:54am

A lot has happened in CSS in the last few years, but there’s nothing we needed less than the upcoming Olfactive API. Now, I know what you’re going to say, expanding the web in a more immersive way is a good thing, and in general I’d agree, but there’s no generalized hardware support for this yet and, in my opinion, it’s too much, too early.

First let’s look at the hardware. Disney World and other theme parks have done some niche so-called 4D movies (which is nonsense since there isn’t a fourth dimensional aspect, and if you consider time the fourth dimension then every movie is fourth dimensional). And a few startups have tried to bring olfactory senses into the modern day, but as of this writing, the hardware isn’t consumer-ready yet. That said, it’s in active development and one startup assured me the technology would be available within the year. (And startups never, ever lie about when their products will launch, right?)

Even if it does come out within the year, would we even want this? I mean Smell-O-Vision totally caught on, right? It’s definitely not considered one of the worst inventions of all time… But, alas, no one cares about the ravings of a madman, at least, not this madman, so the API rolls on.

Alright, I’m going to step off my soap box now and try to focus on the technology and how it works.

Smell Tech

One of the fights currently going on in the CSS Working Group is whether we should limit smells to those considered pleasing by the perfume industry or whether to open websites to a much wider variety. For instance, while everyone’s olfactory sense is different, the perfume industry has centered on a selection of fragrances that will be pleasing to a wide swath of people.

That said, there are a large number of pleasing fragrances that would not be included in this, such as food-based smells: fresh-baked bread, etc. Fragrances that the Big Food Lobby is itching to include in their advertisements. As of now the CSS Olfactive API only includes the fifteen general categories used by the perfume industry, but just like there are ways to expand the color gamut, the system is built to allow for expanded smells in the future should the number of available fragrance fragments increase.

Smelly Families

You don’t have to look far online to find something called the Scent Wheel (alternately called the Fragrance Wheel or the Wheel of Smell-Tune, but that last one is only used by me). There are four larger families of smell:

  • Floral
  • Amber (previously called Oriental)
  • Woody
  • Fresh

These four are each subdivided into additional categories, though there are overlaps between where one of the larger families begins/ends and the sub-families begin/end:

  • Floral:
    • Floral (fl)
    • Soft Floral (sf)
    • Floral Amber (fa)
  • Amber:
    • Soft Amber (sa)
    • Amber (am)
    • Woody Amber (wa)
  • Woody:
    • Woods (wo)
    • Mossy Woods (mw)
    • Dry Woods (dw)
  • Fresh (fr):
    • Aromatic (ar)
    • Citrus (ct)
    • Water (ho)
    • Green (gr)
    • Fruity (fu)

It’s from these fifteen fragrance categories that a scent can be made by mixing different amounts using the two-letter identifiers. (We’ll talk about this when we discuss the scent() function later on.) Note that “Fresh” is the only large family with its own identifier (fr), as the other larger families are duplicated in the sub-families.

Implementation

First of all, it’s implemented (wisely) in HTML in much the same way video and audio are, with the addition of the <scent> element, and <source> was again used to give the browser different options for wafting the scent toward your sniffer. Three competing file formats are being developed: .smll, .arma, and, I kid you not, .smly. One by Google, one by Mozilla, and one, again, not kidding, by Frank’s Fine Fragrances, who intends to jump on this “fourth dimension of the web.”

<scent controls autosmell="none">
  <source src="mossywoods.smll" type="scent/smll">
  <source src="mossywoods.arma" type="scent/arma">
  <source src="mossywoods.smly" type="scent/smly">
  <a href="mossywoods.smll">Smell our Mossy Woods scent</a>
</scent>

For accessibility, be sure that you set the autosmell attribute to none. In theory, this isn’t required, but some of the current hardware has a bug that turns on the wafter even if a smell hasn’t been activated.

However, similar to how you can use an image or video in the background of an element, you can also attach a scent profile to an element using the new scent-profile property.

scent-profile can take one of three things.

The keyword none (default):

scent-profile: none;

A url() function and the path to a file e.g.:

scent-profile: url(mossywoods.smll);

Or a set of aromatic identifiers using the scent() function:

scent-profile: scent(wo, ho, fu);

This produces a scent that has notes of woody, water, and fruity which was described to me as “an orchard in the rain” but to me smelled more like “a wooden bowl of watered-down applesauce.” Please take that with a grain of salt, though, as I have been told I have “the nasal palette of a dead fish.”

You can add up to five scent sub-families at once. This is an arbitrary limit, but more than that would likely muddle the scent. Equal amounts of each will be used, but you can use the new whf unit to adjust how much of each is used. 100whf is the most potent an aroma can be. Unlike most units, your implementation must add up to 100whf or less. If your numbers add up to more than 100, the browser will take the first 100whfs it gets and ignore everything afterward.

scent-profile: scent(wo 20whf, ho 13whf, fu 67whf);

…or you could reduce the overall scent by choosing whfs less than 100:

scent-profile: scent(wo 5whf, ho 2whf, fu 14whf);

In the future, should other fragrances be allowed, they would simply need to add some new fragrance fragments from which to construct the aromatic air.

Sniffing Out Limitations

One large concern for the working group was that some developer would go crazy placing scent-profiles on every single element, both overwhelming the user and muddling each scent used.

As such, it was decided that the browser will only allow one scent-profile to be set per parent element’s subtree. This basically means that once you set a scent-profile on a particular element, you cannot add a scent profile to any of its descendants, nor can you add one to any of its siblings. In this way, a scent profile set on a hungry selector (e.g., * or div) will create a fraction of the scent profiles that might otherwise be created. While there are clearly easy ways to maliciously get around this limitation, it was thought that this should at least prevent a developer from accidentally overwhelming the user.

Aromatic Accessibility

Since aromas can be overpowering they’ve also added a media-query:

.reeks {
  scent-profile: scent(fl, fa, fu);
}

@media (prefers-reduced-pungency: reduce) {
  .reeks {
    scent-profile: scent(fl 10whf, fa 10whf, fu 10whf);
  }
}

@media (prefers-reduced-pungency: remove) {
  .reeks {
    scent-profile: none;
  }
}

Browser Support

Surprisingly, despite Chrome Canary literally being named after a bird who would smell gas in the mine, Chrome has not yet begun experimenting with it. The only browser you can test things out on, as of this writing, is the KaiOS Browser.

Conclusion

There you have it. I still don’t think we need this, but with the continuing march of technology it’s probably not something we can stop. So let’s make an agreement between you reading this and me here writing this that you’ll always use your new-found olfactory powers for good… and that you won’t ever say this article stinks.

Learn more about the CSS Olfactive API.

Sniffing Out the CSS Olfactive API originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Form Automation Tips for Happier Users and Clients

Css Tricks - Mon, 03/30/2026 - 4:12am

I deployed a contact form last month that, in my opinion, was well executed. It had all the right semantics, seamless validation, and great keyboard support. You know, all of the features you’d want in your portfolio.

But… a mere two weeks after deployment, my client called: “We lost a referral because it was sitting in your inbox over the weekend.”

The form worked perfectly. The workflow didn’t.

The Problem Nobody Talks About

That gap between “the form works” and “the business works” is something we don’t really tend to discuss much as front-enders. We focus a great deal on user experience, validation methods, and accessibility, yet we overlook what the data does once it leaves our control. That is exactly where things start to fall apart in the real world.

Here’s what I learned from that experience that would have made for a much better form component.

Why “Send Email on Submit” Fails

The pattern we all use looks something like this:

fetch('/api/contact', {
  method: 'POST',
  body: JSON.stringify(formData)
});
// Email gets sent and we call it done

I have seen duplicate submissions cause confusion, specifically when working with CRM systems like Salesforce. I have encountered inconsistent formatting that hinders automated imports. I have seen weekend inquiries that were overlooked until Monday morning. I have debugged cases where copying and pasting lost decimal places in quotes. There have also been “required” fields for which “required” was simply a misleading label.

I had an epiphany: a working form was just the starting line, not the finish. The fact is that the email is not a notification; rather, it’s a handoff. Treat it merely as a notification, and the inbox becomes a bottleneck for our own code. In fact, Litmus, in their 2025 State of Email Marketing Report (sign-up required), found inbox-based workflows result in lagging follow-ups, particularly with sales teams that rely on lead generation.

Designing Forms for Automation

The bottom line is that front-end decisions directly influence back-end automation. In recent research from HubSpot, data at the front-end stage (i.e., the user interaction) makes or breaks what is coming next.

These are the practical design decisions that changed how I build forms:

Required vs. Optional Fields

Ask yourself: What does the business rely on the data for? Are phone calls the primary method for following up with a new lead? Then let’s make that field required. Is the lead’s professional title a crucial context for following up? If not, make it optional. This takes some interpersonal collaboration before we even begin marking up code.

For example, I made an incorrect assumption that a phone number field was an optional piece of information, but the CRM required it. The result? My submissions were invalidated and the CRM flat-out rejected them.

Now I know to drive my coding decisions from a business process perspective, not just my assumptions about what the user experience ought to be.

Normalize Data Early

Does the data need to be formatted in a specific way once it’s submitted? It’s a good idea to ensure that some data, like phone numbers, is formatted consistently so that the person on the receiving end has an easier time scanning the information. The same goes for trimming whitespace and title casing.

Why? Downstream tools are dumb. They are utterly unable to make the correlation that “John Wick” and “john wick” are related submissions. I once watched a client manually clean up 200 CRM entries because inconsistent casing had created duplicate records. That’s the kind of pain that five minutes of front-end code prevents.

Prevent Duplicate Entries From the Front End

Something as simple as disabling the Submit button on click can save the headache of sifting through duplicate submissions. Show clear submission states, like a loading indicator, so the user knows an action is being processed. Store a flag that a submission is in progress.

Why? Duplicate CRM entries cost real money to clean up. Impatient users on slow networks will absolutely click that button multiple times if you let them.

Success and Error States That Matter

What should the user know once the form is submitted? I think it’s super common to do some sort of default “Thanks!” on a successful submission, but how much context does that really provide? Where did the submission go? When will the team follow up? Are there resources to check out in the meantime? That’s all valuable context that not only sets expectations for the lead, but gives the team a leg up when following up.

Error messages should help the business, too. Like, if we’re dealing with a duplicate submission, it’s way more helpful to say something like, “This email is already in our system” than some generic “Something went wrong” message.
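As a sketch of that idea (the error codes here are hypothetical; map them to whatever your API actually returns):

```javascript
// Map known failure cases to messages that help the business,
// instead of a generic "Something went wrong".
function submissionMessage(result) {
  if (result.ok) {
    // Success message sets expectations instead of a bare "Thanks!"
    return 'Thanks! Your message went to our sales team. Expect a reply within one business day.';
  }
  switch (result.code) {
    case 'duplicate':
      return 'This email is already in our system. We will follow up on your earlier message.';
    case 'invalid_phone':
      return 'That phone number does not look right. Please double-check it.';
    default:
      return 'Something went wrong on our end. Please email us directly instead.';
  }
}
```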

A Better Workflow

So, how exactly would I approach form automation next time? Here are the crucial things I missed last time that I’ll be sure to hit in the future.

Better Validation Before Submission

Instead of simply checking if fields exist:

const isValid = email && name && message;

Check if they’re actually usable:

function validateForAutomation(data) {
  return {
    email: /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(data.email),
    name: data.name.trim().length >= 2,
    phone: !data.phone || /^\d{10,}$/.test(data.phone.replace(/\D/g, ''))
  };
}

Why this matters: CRMs will reject malformed emails. Your error handling should catch this before the user clicks submit, not after they’ve waited two seconds for a server response.

At the same time, it’s worth noting that the phone validation here covers common cases, but is not bulletproof for things like international formats. For production use, consider a library like libphonenumber for comprehensive validation.

Consistent Formatting

Format data before the form sends it rather than assuming formatting will be handled on the back end:

function normalizeFormData(data) {
  return {
    name: data.name.trim()
      .split(' ')
      .map(word => word.charAt(0).toUpperCase() + word.slice(1).toLowerCase())
      .join(' '),
    email: data.email.trim().toLowerCase(),
    phone: data.phone.replace(/\D/g, ''), // Strip to digits
    message: data.message.trim()
  };
}

Why I do this: Again, I’ve seen a client manually fix over 200 CRM entries because “JOHN SMITH” and “john smith” created duplicate records. Fixing this takes five minutes to write and saves hours downstream.

There’s a caveat to this specific approach. This name-splitting logic will trip up on single names, hyphenated surnames, and edge cases like “McDonald” or names with multiple spaces. If you need rock-solid name handling, consider asking for separate first name and last name fields instead.
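To make the caveat concrete, here is the same word-by-word title-casing logic run against a few of those edge cases:

```javascript
// The naive word-by-word title-casing from normalizeFormData,
// shown mangling names it wasn't designed for.
function titleCaseName(name) {
  return name.trim()
    .split(' ')
    .map(word => word.charAt(0).toUpperCase() + word.slice(1).toLowerCase())
    .join(' ');
}

console.log(titleCaseName('john smith'));         // "John Smith" (fine)
console.log(titleCaseName('Ronald McDonald'));    // "Ronald Mcdonald" (oops)
console.log(titleCaseName("anne-marie o'neil"));  // "Anne-marie O'neil" (oops)
```

Separate first/last name fields sidestep most of this, and for the casing itself, the safest option is often to leave the user’s capitalization alone.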

Prevent Double Submissions

We can do that by disabling the Submit button on click:

let submitting = false;

async function handleSubmit(e) {
  e.preventDefault();
  if (submitting) return;

  submitting = true;
  const button = e.target.querySelector('button[type="submit"]');
  button.disabled = true;
  button.textContent = 'Sending...';

  try {
    await sendFormData();
    // Success handling
  } catch (error) {
    submitting = false; // Allow retry on error
    button.disabled = false;
    button.textContent = 'Send Message';
  }
}

Why this pattern works: Impatient users double-click. Slow networks make them click again. Without this guard, you’re creating duplicate leads that cost real money to clean up.

Structuring Data for Automation

Instead of this:

const formData = new FormData(form);

Be sure to structure the data:

const structuredData = {
  contact: {
    firstName: formData.get('name').split(' ')[0],
    lastName: formData.get('name').split(' ').slice(1).join(' '),
    email: formData.get('email'),
    phone: formData.get('phone')
  },
  inquiry: {
    message: formData.get('message'),
    source: 'website_contact_form',
    timestamp: new Date().toISOString(),
    urgency: formData.get('urgent') ? 'high' : 'normal'
  }
};

Why structured data matters: Tools like Zapier, Make, and even custom webhooks expect it. When you send a flat object, someone has to write logic to parse it. When you send it pre-structured, automation “just works.” This mirrors Zapier’s own recommendations for building more reliable, maintainable workflows rather than fragile single-step “simple zaps.”

Watch How Zapier Works (YouTube) to see what happens after your form submits.

Care About What Happens After Submit

An ideal flow would be:

  1. User submits form 
  2. Data arrives at your endpoint (or form service) 
  3. Automatically creates CRM contact 
  4. A Slack/Discord notification is sent to the sales team 
  5. A follow-up sequence is triggered 
  6. Data is logged in a spreadsheet for reporting

Your choices for the front end make this possible:

  • Consistency in formatting = Successful imports in CRM 
  • Structured data = Can be automatically populated using automation tools 
  • De-duplication = No messy cleanup tasks required 
  • Validation = Fewer “invalid entry” errors

Actual experience from my own work: After re-structuring a lead quote form, my client’s automated quote success rate increased from 60% to 98%. The change? Instead of sending { "amount": "$1,500.00"}, I now send { "amount": 1500}. Their Zapier integration couldn’t parse the currency symbol.
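That specific fix generalizes into a small helper that strips currency formatting before the payload goes out. This sketch assumes US-style strings with a dollar sign and comma separators:

```javascript
// Sketch: turn "$1,500.00" into the number 1500 so downstream
// automation can parse it. Assumes US-style currency strings.
function toPlainAmount(value) {
  const numeric = Number(String(value).replace(/[$,\s]/g, ''));
  if (Number.isNaN(numeric)) {
    throw new Error(`Not a parseable amount: ${value}`);
  }
  return numeric;
}

console.log(toPlainAmount('$1,500.00')); // 1500
console.log(toPlainAmount('250'));       // 250
```

Throwing on junk input is deliberate: better to surface a bad amount at submit time than to ship NaN into the CRM.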

My Set of Best Practices for Form Submissions

These lessons have taught me the following about form design:

  1. Ask about the workflow early. “What happens after someone fills this out?” should be the very first question you ask. It surfaces exactly what needs to go where, which fields need a specific format, and which integrations are in play. 
  2. Test with real data. I fill out my own forms with extraneous spaces, strange characters, unformatted phone numbers, and inconsistent casing. You might be surprised by the number of edge cases that surface when you input “JOHN SMITH ” instead of “John Smith.” 
  3. Add a timestamp and source. Design them into the system even when they don’t seem necessary. Six months from now, it will be helpful to know when and where a submission came in. 
  4. Make it redundant. Trigger an email and a webhook. Email delivery often fails silently, and you won’t realize it until someone asks, “Did you get that message we sent you?”
  5. Over-communicate success. Setting the lead’s expectations is crucial to a more delightful experience. “Your message has been sent. Sarah from sales will answer within 24 hours.” is much better than a plain old “Success!”
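Point 4, redundancy, can be sketched as firing both channels and tolerating a partial failure. The two sender functions passed in here are hypothetical placeholders for your real webhook and email calls:

```javascript
// Sketch: send the same payload to a webhook AND an email relay,
// succeeding if at least one channel goes through.
// sendToWebhook/sendToEmail are hypothetical placeholders.
async function deliverRedundantly(payload, sendToWebhook, sendToEmail) {
  const results = await Promise.allSettled([
    sendToWebhook(payload),
    sendToEmail(payload),
  ]);
  const delivered = results.filter(r => r.status === 'fulfilled').length;
  if (delivered === 0) {
    throw new Error('All delivery channels failed');
  }
  return { delivered, total: results.length };
}

// Usage with stubbed channels: email fails, webhook succeeds.
deliverRedundantly(
  { email: 'lead@example.com' },
  async () => 'webhook ok',
  async () => { throw new Error('SMTP down'); }
).then(r => console.log(r.delivered)); // 1
```

Promise.allSettled (rather than Promise.all) is the important choice here: one dead channel shouldn’t take the whole delivery down with it.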

The Real Finish Line

This is what I now advise other developers: “Your job doesn’t stop when a form posts without errors. Your job doesn’t stop until you have confidence that your business can act upon this form submission.”

That means:

  • No “copy paste” allowed 
  • No “I’ll check my email later” 
  • No duplicate entries to clean up 
  • No formatting fixes needed

The code itself is not all that difficult. The switch in attitude comes from understanding that a form is actually part of a larger system and not a standalone object. Once you think about forms this way, you think differently about them in terms of planning, validation, and data.

The next time you’re putting together a form, ask yourself: What happens when this data goes out of my hands? Answering that question makes you a better front-end developer.

The following CodePen demo is a side-by-side comparison of a standard form versus an automation-ready form. Both look identical to users, but the console output shows the dramatic difference in data quality.

CodePen Embed Fallback

References & Further Reading

Form Automation Tips for Happier Users and Clients originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Generative UI Notes

Css Tricks - Thu, 03/26/2026 - 4:59am

I’m really interested in this emerging idea that the future of web design is Generative UI Design. We see hints of this already in products, like Figma Sites, that tout being able to create websites on the fly with prompts.

Putting aside the clear downsides of shipping half-baked technology as a production-ready product (which is hard to do), the angle I’m particularly looking at is research aimed at using Generative AI (or GenAI) to output personalized interfaces. It’s wild because it completely flips the way we think about UI design on its head. Rather than anticipating user needs and designing around them, GenAI sees the user needs and produces an interface custom-tailored to them. In a sense, a website becomes a snowflake where no two experiences with it are the same.

Again, it’s wild. I’m not here to speculate, opine, or preach on Generative UI Design (let’s call it GenUI for now). Just loose notes that I’ll update as I continue learning about it.

Defining GenUI

Google Research (PDF):

Generative UI is a new modality where the AI model generates not only content, but the entire user experience. This results in custom interactive experiences, including rich formatting, images, maps, audio and even simulations and games, in response to any prompt (instead of the widely adopted “walls-of-text”).

NN/Group:

generative UI (genUI) is a user interface that is dynamically generated in real time by artificial intelligence to provide an experience customized to fit the user’s needs and context.

UX Collective:

A Generative User Interface (GenUI) is an interface that adapts to, or processes, context such as inputs, instructions, behaviors, and preferences through the use of generative AI models (e.g. LLMs) in order to enhance the user experience.

Put simply, a GenUI interface displays different components, information, layouts, or styles, based on who’s using it and what they need at that moment.

Credit: UX Collective

Generative vs. Predictive AI

It’s easy to dump “AI” into one big bucket, but it’s often distinguished as two different types: predictive and generative.

|          | Predictive AI | Generative AI |
| -------- | ------------- | ------------- |
| Inputs   | Uses smaller, more targeted datasets as input data. (Smashing Magazine) | Trained on large datasets containing millions of content samples. (U.S. Congress, PDF) |
| Outputs  | Forecasts future events and outcomes. (IBM) | New content, including audio, code, images, text, simulations, and videos. (McKinsey) |
| Examples | ChatGPT, Claude | Sora, Suno, Cursor |

So, when we’re talking about GenAI, we’re talking about the ability to create new materials trained on existing materials. And when we’re talking specifically about GenUI, it’s about generating a user interface based on what the AI knows about the user.

Accessibility

And I should note that what I’m talking about here is not strictly GenUI in how we’ve defined it so far as UI output that adapts to individual user experiences, but rather “developing” generated interfaces. These so-called AI website builders do not adapt to the individual user, but it’s easy to see it heading in that direction.

The thing I’m most interested in — concerned with, frankly — is to what extent GenUI can reliably output experiences that cater to all users, regardless of impairment, be it aural, visual, physical, etc. There are a lot of different inputs to consider here, and we’ve seen just how awful the early results have been.

That last link is a big poke at Figma Sites. They’re easy to poke because they made the largest commercial push into GenUI-based web development. To their credit (perhaps?), they received severe pushback and decided to do something about it, announcing updates and publishing a guide for improving accessibility on Figma-generated sites. But even those have limitations that make the effort and advice seem less useful and more about saving face.

Anyway. There are plenty of other players jumping into the game, notably WordPress, but also others like Vercel, Squarespace, Wix, GoDaddy, Lovable, and Reeady.

Some folks are more optimistic than others that GenUI is not only capable of producing accessible experiences, but will replace accessibility practitioners altogether as the technology evolves. Jakob Nielsen famously made that claim in 2024 which drew fierce criticism from the community. Nielsen walked that back a year later, but not much.

I’m not even remotely qualified to offer best practices, opine on the future of accessibility practice, or speculate on future developments and capabilities. But as I look at Google’s People + AI Guidebook, I see no mention at all of accessibility despite dripping with “human-centered” design principles.

Accessibility is a lagging consideration to the hype, at least to me. That has to change if GenUI is truly the “future” of web design and development.

Examples & Resources

Google has a repository of examples showing how user input can be used to render a variety of interfaces. Going a step further is Google’s Project Genie that bills itself as creating “interactive worlds” that are “generated in real-time.” I couldn’t get an invite to try it out, but maybe you can.

In addition to that, Google has a GenUI SDK designed to integrate into Flutter apps. So, yeah. Connect to your LLM provider and let it rip to create adaptive interfaces.

Thesys is another one in the adaptive GenUI space. Copilot, too.

References

Generative UI Notes originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Experimenting With Scroll-Driven corner-shape Animations

Css Tricks - Mon, 03/23/2026 - 3:51am

Over the last few years, there’s been a lot of talk about and experimentation with scroll-driven animations. It’s a very shiny feature for sure, and as soon as it’s supported in Firefox (without a flag), it’ll be baseline. It’s part of Interop 2026, so that should be relatively soon. Essentially, scroll-driven animations tie an animation timeline’s position to a scroll position, so if you were 50% scrolled then you’d also be 50% into the animation, and they’re surprisingly easy to set up too.

I’ve been seeing significant interest in the new CSS corner-shape property as well, even though it only works in Chrome for now. This enables us to create corners that aren’t as rounded, or aren’t even rounded at all, allowing for some intriguing shapes that take little-to-no effort to create. What’s even more intriguing though is that corner-shape is mathematical, so it’s easily animated.

Hence, say hello to scroll-driven corner-shape animations (requires Chrome 139+ to work fully):

CodePen Embed Fallback

corner-shape in a nutshell

Real quick — the different values for corner-shape:

| corner-shape keyword | superellipse() equivalent |
| -------------------- | ------------------------- |
| square   | superellipse(infinity)  |
| squircle | superellipse(2)         |
| round    | superellipse(1)         |
| bevel    | superellipse(0)         |
| scoop    | superellipse(-1)        |
| notch    | superellipse(-infinity) |

CodePen Embed Fallback

But what’s this superellipse() function all about? Well, basically, these keyword values are the result of this function. For example, superellipse(2) creates corners that aren’t quite squared but aren’t quite rounded either (the “squircle”). Whether you use a keyword or the superellipse() function directly, a mathematical equation is used either way, which is what makes it animatable. With that in mind, let’s dive into that demo above.
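For a feel of the math, here’s a sketch of the curve behind those keywords. It assumes the mapping described in the CSS draft spec, where superellipse(k) traces |x|^(2^k) + |y|^(2^k) = 1 through the corner, so k = 1 gives a quarter circle and k = 0 a straight bevel; treat the exact mapping as an assumption rather than gospel:

```javascript
// Sketch of the superellipse curve behind corner-shape keywords.
// Assumption: superellipse(k) uses the curve |x|^(2^k) + |y|^(2^k) = 1,
// so k = 1 is a quarter circle (round) and k = 0 a straight line (bevel).
// (Negative k, i.e. scoop/notch, needs a different parameterization.)
function superellipsePoint(k, t) {
  const exponent = Math.pow(2, k);
  // Parameterize the first-quadrant arc with t in [0, 1]
  const x = Math.pow(Math.cos((t * Math.PI) / 2), 2 / exponent);
  const y = Math.pow(Math.sin((t * Math.PI) / 2), 2 / exponent);
  return { x, y };
}

// k = 1 (round): the midpoint of the arc lies on the unit circle
const p = superellipsePoint(1, 0.5);
console.log(Math.hypot(p.x, p.y).toFixed(4)); // "1.0000"
```

Because the corner is just this one continuous equation, interpolating k smoothly morphs one corner style into another, which is exactly what makes corner-shape animatable.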

Animating corner-shape

The demo isn’t too complicated, so I’ll start off by dropping the CSS here, and then I’ll explain how it works line-by-line:

@keyframes bend-it-like-beckham {
  from {
    corner-shape: notch;
    /* or */
    corner-shape: superellipse(-infinity);
  }
  to {
    corner-shape: square;
    /* or */
    corner-shape: superellipse(infinity);
  }
}

body::before {
  /* Fill viewport */
  content: "";
  position: fixed;
  inset: 0;

  /* Enable click-through */
  pointer-events: none;

  /* Invert underlying layer */
  mix-blend-mode: difference;
  background: white; /* Don’t forget this! */
  border-bottom-left-radius: 100%;

  /* Animation settings */
  animation: bend-it-like-beckham;
  animation-timeline: scroll();
}

/* Added to cards */
.no-filter {
  isolation: isolate;
}

CodePen Embed Fallback

In the code snippet above, body::before combined with content: "" creates a pseudo-element of the <body> with no content that is then fixed to every edge of the viewport. Also, since this animating shape will be on top of the content, pointer-events: none ensures that we can still interact with said content.

For the shape’s color I’m using mix-blend-mode: difference with background: white, which inverts the underlying layer, a trendy effect that to some degree only maintains the same level of color contrast. You won’t want to apply this effect to everything, so here’s a utility class to exclude the effect as needed:

/* Added to cards */
.no-filter {
  isolation: isolate;
}

A comparison:

Left: Full application of blend mode. Right: Blend mode excluded from cards.

You’ll need to combine corner-shape with border-radius, which defaults to corner-shape: round. Yes, that’s right: border-radius doesn’t actually round corners; corner-shape: round does that under the hood. Rather, border-radius handles the x-axis and y-axis coordinates to draw from:

/* Syntax */
border-bottom-left-radius: <x-axis-coord> <y-axis-coord>;

/* Usage */
border-bottom-left-radius: 50% 50%;
/* Or */
border-bottom-left-radius: 50%;

In our case, we’re using border-bottom-left-radius: 100% to slide those coordinates to the opposite end of their respective axes. However, we’ll be overwriting the implied corner-shape: round in our @keyframe animation, so we refer to that with animation: bend-it-like-beckham. There’s no need to specify a duration because it’s a scroll-driven animation, as defined by animation-timeline: scroll().

In the @keyframes animation, we’re animating from corner-shape: notch, which is like an inset square. It’s equivalent to corner-shape: superellipse(-infinity), so it’s not actually squared, but it’s so aggressively sharp that it looks squared. This animates to corner-shape: square (an outset square), or corner-shape: superellipse(infinity).

Animating corner-shape… revisited

The demo above is actually a bit different to the one that I originally shared in the intro. It has one minor flaw, and I’ll show you how to fix it, but more importantly, you’ll learn more about an intricate detail of corner-shape.

The flaw: at the beginning and end of the animation, the curvature looks quite harsh because we’re animating from notch and square, right? It also looks like the shape is being sucked into the corners. Finally, the shape being stuck to the sides of the viewport makes the whole thing feel too contained.

The solution is simple:

/* Change this... */
inset: 0;

/* ...to this */
inset: -1rem;

This stretches the shape beyond the viewport, and even though this makes the animation appear to start late and finish early, we can fix that by not animating from/to -infinity/infinity:

@keyframes bend-it-like-beckham {
  from { corner-shape: superellipse(-6); }
  to { corner-shape: superellipse(6); }
}

Sure, this means that part of the shape is always visible, but we can fiddle with the superellipse() value to ensure that it stays outside of the viewport. Here’s a side-by-side comparison:

And the original demo (which is where we’re at now):

CodePen Embed Fallback

Adding more scroll features

Scroll-driven animations work very well with other scroll features, including scroll snapping, scroll buttons, scroll markers, simple text fragments, and simple JavaScript methods such as scrollTo()/scroll(), scrollBy(), and scrollIntoView().

For example, we only have to add the following CSS snippet to introduce scroll snapping that works right alongside the scroll-driven corner-shape animation that we’ve already set up:

:root {
  /* Snap vertically */
  scroll-snap-type: y;

  section {
    /* Snap to section start */
    scroll-snap-align: start;
  }
}

CodePen Embed Fallback

“Masking” with corner-shape

In the example below, I’ve essentially created a border around the viewport and then a notched shape (corner-shape: notch) on top of it that’s the same color as the background (background: inherit). This shape completely covers the border at first, but then animates to reveal it (or in this case, the four corners of it):

CodePen Embed Fallback

If I make the shape a bit more visible, it’s easier to see what’s happening here, which is that I’m rotating this shape as well (rotate: 5deg), making the shape even more interesting.

This time around we’re animating border-radius, not corner-shape. When we animate to border-radius: 20vw / 20vh, 20vw and 20vh refer to the x-axis and y-axis of each corner, respectively, meaning that 20% of the border is revealed as we scroll.

The only other thing worth mentioning here is that we need to mess around with z-index to ensure that the content is higher up in the stacking context than the border and shape. Other than that, this example simply demonstrates another fun way to use corner-shape:

@keyframes tech-corners {
  from { border-radius: 0; }
  to { border-radius: 20vw / 20vh; }
}

/* Border */
body::before {
  /* Fill (- 1rem) */
  content: "";
  position: fixed;
  inset: 1rem;
  border: 1rem solid black;
}

/* Notch */
body::after {
  /* Fill (+ 3rem) */
  content: "";
  position: fixed;
  inset: -3rem;

  /* Rotated shape */
  background: inherit;
  rotate: 5deg;
  corner-shape: notch;

  /* Animation settings */
  animation: tech-corners;
  animation-timeline: scroll();
}

main {
  /* Stacking fix */
  position: relative;
  z-index: 1;
}

Animating multiple corner-shape elements

In this example, we have multiple nested diamond shapes thanks to corner-shape: bevel, all leveraging the same scroll-driven animation where the diamonds increase in size, using padding:

CodePen Embed Fallback

<div id="diamonds">
  <div>
    <div>
      <div>
        <div>
          <div>
            <div>
              <div>
                <div>
                  <div>
                    <div></div>
                  </div>
                </div>
              </div>
            </div>
          </div>
        </div>
      </div>
    </div>
  </div>
</div>
<main>
  <!-- Content -->
</main>

@keyframes diamonds-are-forever {
  from { padding: 7rem; }
  to { padding: 14rem; }
}

#diamonds {
  /* Center them */
  position: fixed;
  inset: 50% auto auto 50%;
  translate: -50% -50%;

  /* #diamonds, the <div>s within */
  &, div {
    corner-shape: bevel;
    border-radius: 100%;
    animation: diamonds-are-forever;
    animation-timeline: scroll();
    border: 0.0625rem solid #00000030;
  }
}

main {
  /* Stacking fix */
  position: relative;
  z-index: 1;
}

That’s a wrap

We just explored animating from one custom superellipse() value to another, using corner-shape as a mask to create new shapes (again, while animating it), and animating multiple corner-shape elements at once. There are so many ways to animate corner-shape other than from one keyword to another, and if we make them scroll-driven animations, we can create some really interesting effects (although, they’d also look awesome if they were static).

Experimenting With Scroll-Driven corner-shape Animations originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

JavaScript for Everyone: Destructuring

Css Tricks - Thu, 03/19/2026 - 3:06am

Editor’s note: Mat Marquis and Andy Bell have released JavaScript for Everyone, an online course offered exclusively at Piccalilli. This post is an excerpt from the course taken specifically from a chapter all about JavaScript destructuring. We’re publishing it here because we believe in this material and want to encourage folks like yourself to sign up for the course. So, please enjoy this break from our regular broadcasting to get a small taste of what you can expect from enrolling in the full JavaScript for Everyone course.

I’ve been writing about JavaScript for long enough that I wouldn’t rule out a hubris-related curse of some kind. I wrote JavaScript for Web Designers more than a decade ago now, back in the era when packs of feral var still roamed the Earth. The fundamentals are sound, but the advice is a little dated now, for sure. Still, despite being a web development antique, one part of the book has aged particularly well, to my constant frustration.

An entire programming language seemed like too much to ever fully understand, and I was certain that I wasn’t tuned for it. I was a developer, sure, but I wasn’t a developer-developer. I didn’t have the requisite robot brain; I just put borders on things for a living.

JavaScript for Web Designers

I still hear this sentiment from incredibly talented designers and highly technical CSS experts that somehow can’t fathom calling themselves “JavaScript developers,” as though they were tragically born without whatever gland produces the chemicals that make a person innately understand the concept of variable hoisting and could never possibly qualify — this despite the fact that many of them write JavaScript as part of their day-to-day work. While I may not stand by the use of alert() in some of my examples (again, long time ago), the spirit of JavaScript for Web Designers holds every bit as true today as it did back then: type a semicolon and you’re writing JavaScript. Write JavaScript and you’re a JavaScript developer, full stop.

Now, sooner or later, you do run into the catch: nobody is born thinking like JavaScript, but to get really good at JavaScript, you will need to learn how. In order to know why JavaScript works the way it does, why sometimes things that feel like they should work don’t, and why things that feel like they shouldn’t work sometimes do, you need to go one step beyond the code you’re writing or even the result of running it — you need to get inside JavaScript’s head. You need to learn to interact with the language on its own terms.

That deep-magic knowledge is the goal of JavaScript for Everyone, a course designed to help you get from junior- to senior developer. In JavaScript for Everyone, my aim is to help you make sense of the more arcane rules of JavaScript as-it-is-played — not just teach you the how but the why, using the syntaxes you’re most likely to encounter in your day-to-day work. If you’re brand new to the language, you’ll walk away from this course with a foundational understanding of JavaScript worth hundreds of hours of trial-and-error; if you’re a junior developer, you’ll finish this course with a depth of knowledge to rival any senior.

Thanks to our friends here at CSS-Tricks, I’m able to share the entire lesson on destructuring assignment. These are some of my favorite JavaScript syntaxes, which I’m sure we can all agree are normal and in fact very cool things to have. They’re as powerful as they are terse, each doing a lot of work with only a few characters. The downside of that terseness is that it makes these syntaxes a little more opaque than most, especially when you’re armed only with a browser tab open to MDN and a gleam in your eye. We got this, though — by the time you’ve reached the end of this lesson, you’ll be unpacking complex nested data structures with the best of them.

And if you missed it before, there’s another excerpt from the JavaScript for Everyone course covering JavaScript Expressions available here on CSS-Tricks.

Destructuring Assignment

When you’re working with a data structure like an array or object literal, you’ll frequently find yourself in a situation where you want to grab some or all of the values that structure contains and use them to initialize discrete variables. That makes those values easier to work with, but historically speaking, it can lead to pretty wordy code:

const theArray = [ false, true, false ];
const firstElement = theArray[0];
const secondElement = theArray[1];
const thirdElement = theArray[2];

This is fine! I mean, it works; it has for thirty years now. But as of 2015’s ES6, we’ve had a much more elegant option: destructuring.

Destructuring allows you to extract individual values from an array or object and assign them to a set of identifiers without needing to access the keys and/or values one at a time. In its most simple form — called binding pattern destructuring — each value is unpacked from the array or object literal and assigned to a corresponding identifier, all of which are declared with a single let or const (or var, technically, yes, fine). Brace yourself, because this is a strange one:

const theArray = [ false, true, false ];
const [ firstElement, secondElement, thirdElement ] = theArray;

console.log( firstElement ); // Result: false
console.log( secondElement ); // Result: true
console.log( thirdElement ); // Result: false

That’s the good stuff, even if it is a little weird to see brackets on that side of an assignment operator. That one binding covers all the same territory as the much more verbose snippet above it.

When working with an array, the individual identifiers are wrapped in a pair of array-style brackets, and each comma separated identifier you specify within those brackets will be initialized with the value in the corresponding element in the source Array. You’ll sometimes see destructuring referred to as unpacking a data structure, but despite how that and “destructuring” both sound, the original array or object isn’t modified by the process.
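Since “unpacking” sounds like it should empty something out, it’s worth confirming for yourself that the source array survives intact:

```javascript
// Destructuring reads values out of the array; it never modifies it.
const theArray = [ false, true, false ];
const [ firstElement, secondElement ] = theArray;

console.log( firstElement ); // false
console.log( secondElement ); // true
console.log( theArray.length ); // 3 (the source array is untouched)
```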

Elements can be skipped over by omitting an identifier between commas, the way you’d leave out a value when creating a sparse array:

const theArray = [ true, false, true ];
const [ firstElement, , thirdElement ] = theArray;

console.log( firstElement ); // Result: true
console.log( thirdElement ); // Result: true

There are a couple of differences in how you destructure an object using binding pattern destructuring. The identifiers are wrapped in a pair of curly braces rather than brackets; sensible enough, considering we’re dealing with objects. In the simplest version of this syntax, the identifiers you use have to correspond to the property keys:

const theObject = { "theProperty" : true, "theOtherProperty" : false };
const { theProperty, theOtherProperty } = theObject;

console.log( theProperty ); // result: true
console.log( theOtherProperty ); // result: false

An array is an indexed collection, and indexed collections are intended to be used in ways where the specific iteration order matters — for example, with destructuring here, where we can assume that the identifiers we specify will correspond to the elements in the array, in sequential order.

That’s not the case with an object, which is a keyed collection — in strict technical terms, just a big ol’ pile of properties that are intended to be defined and accessed in whatever order, based on their keys. No big deal in practice, though; odds are, you’d want to use the property keys’ identifier names (or something very similar) as your identifiers anyway. Simple and effective, but the drawback is that it assumes a given… well, structure to the object being destructured.

This brings us to the alternate syntax, which looks absolutely wild, at least to me. The syntax is object literal shaped, but very, very different — so before you look at this, briefly forget everything you know about object literals:

const theObject = { "theProperty" : true, "theOtherProperty" : false };
const { theProperty : theIdentifier, theOtherProperty : theOtherIdentifier } = theObject;

console.log( theIdentifier ); // result: true
console.log( theOtherIdentifier ); // result: false

You’re still not thinking about object literal notation, right? Because if you were, wow would that syntax look strange. I mean, a reference to the property to be destructured where a key would be and identifiers where the values would be?

Fortunately, we’re not thinking about object literal notation even a little bit right now, so I don’t have to write that previous paragraph in the first place. Instead, we can frame it like this: within the curly braces, zero or more comma-separated instances of the property key with the value we want, followed by a colon, followed by the identifier we want that property’s value assigned to. After the curly braces, an assignment operator (=) and the object to be destructured. That’s all a lot in print, I know, but you’ll get a feel for it after using it a few times.

The second approach to destructuring is assignment pattern destructuring. With assignment patterns, the value of each destructured property is assigned to a specific target — like a variable we declared with let (or, technically, var), a property of another object, or an element in an array.

When working with arrays and variables declared with let, assignment pattern destructuring really just adds a step where you declare the variables that will end up containing the destructured values:

const theArray = [ true, false ];
let theFirstIdentifier;
let theSecondIdentifier;

[ theFirstIdentifier, theSecondIdentifier ] = theArray;

console.log( theFirstIdentifier ); // true
console.log( theSecondIdentifier ); // false

This gives you the same end result as you’d get using binding pattern destructuring, like so:

const theArray = [ true, false ];
let [ theFirstIdentifier, theSecondIdentifier ] = theArray;
console.log( theFirstIdentifier ); // true
console.log( theSecondIdentifier ); // false

Binding pattern destructuring will allow you to use const from the jump, though:

const theArray = [ true, false ];
const [ theFirstIdentifier, theSecondIdentifier ] = theArray;
console.log( theFirstIdentifier ); // true
console.log( theSecondIdentifier ); // false

Now, if you wanted to use those destructured values to populate another array or the properties of an object, you would hit a predictable double-declaration wall when using binding pattern destructuring:

// Error
const theArray = [ true, false ];
let theResultArray = [];
let [ theResultArray[1], theResultArray[0] ] = theArray;
// Uncaught SyntaxError: redeclaration of let theResultArray

We can’t make let/const/var do anything but create variables; that’s their entire deal. In the example above, the first part of the line is interpreted as let theResultArray, and we get an error: theResultArray was already declared.

No such issue when we’re using assignment pattern destructuring:

const theArray = [ true, false ];
let theResultArray = [];
[ theResultArray[1], theResultArray[0] ] = theArray;
console.log( theResultArray ); // result: Array [ false, true ]

Once again, this syntax applies to objects as well, with a few little catches:

const theObject = { "theProperty" : true, "theOtherProperty" : false }; let theProperty; let theOtherProperty; ({ theProperty, theOtherProperty } = theObject ); console.log( theProperty ); // true console.log( theOtherProperty ); // false

You’ll notice a pair of disambiguating parentheses around the line where we’re doing the destructuring. You’ve seen this before: without the grouping operator, a pair of curly braces in a context where a statement is expected is assumed to be a block statement, and you get a syntax error:

// Error
const theObject = { "theProperty" : true, "theOtherProperty" : false };
let theProperty;
let theOtherProperty;
{ theProperty, theOtherProperty } = theObject;
// Uncaught SyntaxError: expected expression, got '='

So far this isn’t doing anything that binding pattern destructuring couldn’t. We’re using identifiers that match the property keys, but any identifier will do, if we use the alternate object destructuring syntax:

const theObject = { "theProperty" : true, "theOtherProperty" : false }; let theFirstIdentifier; let theSecondIdentifier; ({ theProperty: theFirstIdentifier, theOtherProperty: theSecondIdentifier } = theObject ); console.log( theFirstIdentifier ); // true console.log( theSecondIdentifier ); // false

Once again, nothing binding pattern destructuring couldn’t do. But unlike binding pattern destructuring, any kind of assignment target will work with assignment pattern destructuring:

const theObject = { "theProperty" : true, "theOtherProperty" : false }; let resultObject = {}; ({ theProperty : resultObject.resultProp, theOtherProperty : resultObject.otherResultProp } = theObject ); console.log( resultObject ); // result: Object { resultProp: true, otherResultProp: false }

With either syntax, you can set “default” values that will be used if an element or property isn’t present at all, or it contains an explicit undefined value:

const theArray = [ true, undefined ];
const [ firstElement, secondElement = "A string.", thirdElement = 100 ] = theArray;
console.log( firstElement ); // Result: true
console.log( secondElement ); // Result: A string.
console.log( thirdElement ); // Result: 100

const theObject = { "theProperty" : true, "theOtherProperty" : undefined };
const { theProperty, theOtherProperty = "A string.", aThirdProperty = 100 } = theObject;
console.log( theProperty ); // Result: true
console.log( theOtherProperty ); // Result: A string.
console.log( aThirdProperty ); // Result: 100
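One detail worth adding here (my aside, not from the article): defaults combine with the renaming syntax, with the default following the new identifier.

```javascript
// Combining renaming and defaults: "key : identifier = default".
// Illustrative sketch only; these names are made up for the example.
const theObject = { "theProperty" : true };
const { theProperty : theIdentifier, theMissingProperty : theOtherIdentifier = "A string." } = theObject;
console.log( theIdentifier ); // Result: true
console.log( theOtherIdentifier ); // Result: A string.
```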

Snazzy stuff for sure, but where this syntax really shines is when you’re unpacking nested arrays and objects. Naturally, there’s nothing stopping you from unpacking an object that contains an object as a property value, then unpacking that inner object separately:

const theObject = { "theProperty" : true, "theNestedObject" : { "anotherProperty" : true, "stillOneMoreProp" : "A string." } }; const { theProperty, theNestedObject } = theObject; const { anotherProperty, stillOneMoreProp = "Default string." } = theNestedObject; console.log( stillOneMoreProp ); // Result: A string.

But we can make this way more concise. We don’t have to unpack the nested object separately — we can unpack it as part of the same binding:

const theObject = { "theProperty" : true, "theNestedObject" : { "anotherProperty" : true, "stillOneMoreProp" : "A string." } }; const { theProperty, theNestedObject : { anotherProperty, stillOneMoreProp } } = theObject; console.log( stillOneMoreProp ); // Result: A string.

From an object within an object to three easy-to-use constants in a single line of code.

We can unpack mixed data structures just as succinctly:

const theObject = [{ "aProperty" : true },{ "anotherProperty" : "A string." }];
const [{ aProperty }, { anotherProperty }] = theObject;
console.log( anotherProperty ); // Result: A string.

A dense syntax, there’s no question of that — bordering on “opaque,” even. It might take a little experimentation to get the hang of this one, but once it clicks, destructuring assignment gives you an incredibly quick and convenient way to break down complex data structures without spinning up a bunch of intermediate data structures and values.
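One more place this syntax pays off, which the examples above don't touch on: destructuring also works directly in a function's parameter list, a common way to accept an "options object." A minimal sketch (the function and property names are my own, not from the article):

```javascript
// Destructuring an "options object" right in the parameter list,
// with a default for the optional property. Illustrative sketch only.
function greet({ name, punctuation = "!" }) {
  return `Hello, ${ name }${ punctuation }`;
}

console.log( greet({ name: "Mat" }) ); // Result: Hello, Mat!
console.log( greet({ name: "Mat", punctuation: "?" }) ); // Result: Hello, Mat?
```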

Rest Properties

In all the examples above we’ve been working with known quantities: “turn these X properties or elements into Y variables.” That doesn’t match the reality of breaking down a huge, tangled object, jam-packed array, or both.

In the context of a destructuring assignment, an ellipsis (that’s ..., not …, for my fellow Unicode enthusiasts) followed by an identifier (to the tune of ...theIdentifier) represents a rest property — an identifier that will represent the rest of the array or object being unpacked. This rest property will contain all the remaining elements or properties beyond the ones we’ve explicitly unpacked to their own identifiers, all bundled up in the same kind of data structure as the one we’re unpacking:

const theArray = [ false, true, false, true, true, false ];
const [ firstElement, secondElement, ...remainingElements ] = theArray;
console.log( remainingElements ); // Result: Array(4) [ false, true, true, false ]
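The same rest syntax applies to objects: everything you don't explicitly unpack lands in a new object. A quick sketch to mirror the array example (property names are my own):

```javascript
// Rest properties with an object: the properties not explicitly unpacked
// are bundled into remainingProps. Illustrative sketch only.
const theObject = { "theProperty" : true, "theOtherProperty" : false, "aThirdProperty" : "A string." };
const { theProperty, ...remainingProps } = theObject;
console.log( remainingProps ); // Result: Object { theOtherProperty: false, aThirdProperty: "A string." }
```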

Generally, I try to avoid examples that veer too close to real-world use, since they can get a little convoluted and I don't want to distract from the core ideas. In this case, though, "convoluted" is exactly what we're looking to work around. So let's use an object near and dear to my heart: (part of) the data representing the very first newsletter I sent out back when I started writing this course.

const firstPost = {
  "id": "mat-update-1.md",
  "slug": "mat-update-1",
  "body": "Hey, great to meet you, everybody. I'm Mat — \"Wilto\" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.\n\nWell, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.",
  "collection": "emails",
  "data": {
    "title": "Meet your Instructor",
    "pubDate": "2025-05-08T09:55:00.630Z",
    "headingSize": "large",
    "showUnsubscribeLink": true,
    "stream": "javascript-for-everyone"
  }
};

Quite a bit going on in there. For purposes of this exercise, assume this is coming in from an external API the way it is over on my website — this isn’t an object we control. Sure, we can work with that object directly, but that’s a little unwieldy when all we need is, for example, the newsletter title and body:

const firstPost = {
  "id": "mat-update-1.md",
  "slug": "mat-update-1",
  "body": "Hey, great to meet you, everybody. I'm Mat — \"Wilto\" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.\n\nWell, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.",
  "data": {
    "title": "Meet your Instructor",
    "pubDate": "2025-05-08T09:55:00.630Z",
    "headingSize": "large",
    "showUnsubscribeLink": true,
    "stream": "javascript-for-everyone"
  }
};

const { data : { title }, body } = firstPost;
console.log( title ); // Result: Meet your Instructor
console.log( body );
/* Result:
Hey, great to meet you, everybody. I'm Mat — "Wilto" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.

Well, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.
*/

That’s tidy; a couple dozen characters and we have exactly what we need from that tangle. I know I’m not going to need those id or slug properties to publish it on my own website, so I omit those altogether — but that inner data object has a conspicuous ring to it, like maybe one could expect it to contain other properties associated with future posts. I don’t know what those properties will be, but I know I’ll want them all packaged up in a way where I can easily make use of them. I want the firstPost.data.title property in isolation, but I also want an object containing all the rest of the firstPost.data properties, whatever they end up being:

const firstPost = {
  "id": "mat-update-1.md",
  "slug": "mat-update-1",
  "body": "Hey, great to meet you, everybody. I'm Mat — \"Wilto\" is good too — and I'm here to teach you JavaScript. Not just what JavaScript is or what JavaScript does, but the *how* and the *why* of JavaScript. The weird stuff. The *deep magic_.\n\nWell, okay, I'm not *currently* here to teach you JavaScript, but I will be soon. Right now I'm just getting things in order for the course — planning, outlining, polishing the fancy semicolons that I only take out when I'm having company over, writing like 5,000 words about `this` as a warm-up that completely got away from me, that kind of thing.",
  "data": {
    "title": "Meet your Instructor",
    "pubDate": "2025-05-08T09:55:00.630Z",
    "headingSize": "large",
    "showUnsubscribeLink": true,
    "stream": "javascript-for-everyone"
  }
};

const { data : { title, ...metaData }, body } = firstPost;
console.log( title ); // Result: Meet your Instructor
console.log( metaData ); // Result: Object { pubDate: "2025-05-08T09:55:00.630Z", headingSize: "large", showUnsubscribeLink: true, stream: "javascript-for-everyone" }

Now we’re talking. Now we have a metaData object containing anything and everything else in the data property of the object we’ve been handed.

Listen. If you’re anything like me, even if you haven’t quite gotten your head around the syntax itself, you’ll find that there’s something viscerally satisfying about the binding in the snippet above. All that work done in a single line of code. It’s terse, it’s elegant — it takes the complex and makes it simple. That’s the good stuff.

And yet: maybe you can hear it too, ever-so-faintly? A quiet voice, way down in the back of your mind, that asks “I wonder if there’s an even better way.” For what we’re doing here, in isolation, this solution is about as good as it gets — but as far as the wide world of JavaScript goes: there’s always a better way. If you can’t hear it just yet, I bet you will by the end of the course.

Anyone who writes JavaScript is a JavaScript developer; there are no two ways about that. But the satisfaction of creating order from chaos in just a few keystrokes, and the drive to find even better ways to do it? Those are the makings of a JavaScript developer to be reckoned with.

You can do more than just “get by” with JavaScript; I know you can. You can understand JavaScript, all the way down to the mechanisms that power the language — the gears and springs that move the entire “interactive” layer of the web. To really understand JavaScript is to understand the boundaries of how users interact with the things we’re building, and broadening our understanding of the medium we work with every day sharpens all of our skills, from layout to accessibility to front-end performance to typography. Understanding JavaScript means less “I wonder if it’s possible to…” and “I guess we have to…” in your day-to-day decision making, even if you’re not the one tasked with writing it. Expanding our skillsets will always make us better — and more valued, professionally — no matter our roles.

JavaScript is a tricky thing to learn; I know that all too well — that’s why I wrote JavaScript for Everyone. You can do this, and I’m here to help.

I hope to see you there.

Check out the course

JavaScript for Everyone: Destructuring originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Consistent Character Maker Update

LukeW - Tue, 03/17/2026 - 4:00am

A couple months ago, I wrote about how design tools are the new design deliverables and built the LukeW Character Maker to illustrate the idea. Since then, people have made over 4,500 characters and I regularly get asked how it stays consistent. I recently updated the image model, error-checking, and prompts, so here's what changed and why.

New Image Model

Google recently released a new version of their image generation model (Nano Banana 2) and I put it to the test on my Character Maker. The results are noticeably more dynamic and three-dimensional than the previous version. Characters have more depth, better lighting, and more active poses. So I'm now using it as the default model (until Reve 1.5 is available as an API).

One of the ways I originally reinforced consistency in my character maker was by checking whether an image generation model's API returned images with the same dimensions as the reference images I sent it. If the dimensions didn't match, I knew the model had ignored the visual reference so I forced it to try again. In my testing, this was needed about 1 in every 30-40 images. A very simple check, but it worked well.
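The check described above is simple enough to sketch in a few lines of JavaScript. This is a hypothetical reconstruction of the idea, not the actual Character Maker code; the function names, image shape, and retry count are all my assumptions:

```javascript
// Hypothetical sketch of the "dimensions match" consistency check.
// All names here are assumptions, not the actual Character Maker code.
function matchesReference(generated, reference) {
  // If the sizes differ, the model likely ignored the visual reference.
  return generated.width === reference.width && generated.height === reference.height;
}

async function generateWithRetry(generateImage, reference, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const image = await generateImage();
    if (matchesReference(image, reference)) return image;
    // Otherwise, loop around and force the model to try again.
  }
  throw new Error(`No size-matched image after ${ maxAttempts } attempts`);
}
```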

A week into using Nano Banana 2, that sizing check started throwing errors. Generated images were no longer coming back with the exact dimensions of my reference images, breaking my verification loop. I had to resize the reference images to match Google's default 1K image size (1365px by 768px). But that took away my consistency check, so I had to reinforce my prompt rewriter to make up for it.

Update: A day after publishing this overview, Google quietly changed the image format their API returns (from PNG to WEBP). This made image dimensions read incorrectly, causing every generation attempt to fail. I had to implement a fix that works regardless of what format Google decides to send back.

Prompt Rewriter Iteration

This is where most of the ongoing work happens. As real people used the tool, edge cases piled up and the first step of my pipeline (prompt rewriting) had to evolve. For example, my character is supposed to be faceless (no eyes, no mouth, no hair). This had to be reinforced progressively over several iterations. Turns out image models really want to put a face on things.

For color accuracy, I shifted from named colors like "lime-green" that relied on the reference images for accuracy to explicitly adding both HEX codes and RGB values. Getting the exact greens to reproduce consistently required that level of specificity. I also added default outfit color rules for when people try to request color changes.

Content moderation expanded steadily as people found creative ways to push boundaries. I blocked categories like gore, inappropriate clothing, and full body color changes, while loosening rejection criteria from blocking any "appearance changes" to only rejecting clearly inappropriate inputs. The goal: allow creative freedom while preventing abuse.

The overall approach was: start broad, then iteratively tighten character consistency while expanding content moderation guardrails as real usage revealed what was needed.

At this point, my character comes back consistent almost every time. About 1 in 50 generations still produces an extra arm or a mouth (he's faceless, remember?). I've tested checking each image with a vision model then sending it back for regeneration if something is off (examples above). But given how rarely this happens and how much latency and cost it would add to auto-check every image, it's currently not worth the tradeoff for me. For other use cases, it might be.

If you haven't already, try the LukeW Character Maker yourself. Though I might have to revisit the pipeline again if you get too creative.

What’s !important #7: random(), Folded Corners, Anchored Container Queries, and More

Css Tricks - Mon, 03/16/2026 - 5:06am

For this issue of What’s !important, we have a healthy balance of old CSS that you might’ve missed and new CSS that you don’t want to miss. This includes random(), random-item(), folded corners using clip-path, backdrop-filter, font-variant-numeric: tabular-nums, the Popover API, anchored container queries, anchor positioning in general, DOOM in CSS, customizable <select>, :open, scroll-triggered animations, <toolbar>, and somehow, more.

Let’s dig in.

Understanding random() and random-item()

Alvaro Montoro explains how the random() and random-item() CSS functions work. As it turns out, they’re actually quite complex:

width: random(--w element-shared, 1rem, 2rem);
color: random-item(--c, red, orange, yellow, darkkhaki);

Creating folded corners using clip-path

My first solution to folded corners involved actual images. Not a great solution, but that was the way to do it in the noughties. Since then we’ve been able to do it with box-shadow, but Kitty Giraudel has come up with a CSS clip-path solution that clips a custom shape (hover the kitty to see it in action):

CodePen Embed Fallback

Revisiting backdrop-filter and font-variant-numeric: tabular-nums

Stuart Robson talks about backdrop-filter. It’s not a new CSS property, but it’s very useful and hardly ever talked about. In fact, up until now, I thought that it was for the ::backdrop pseudo-element, but we can actually use it to create all kinds of background effects for all kinds of elements, like this:

CodePen Embed Fallback

font-variant-numeric: tabular-nums is another one. This property and value prevents layout shift when numbers change dynamically, as they do with live clocks, counters, timers, financial tables, and so on. Amit Merchant walks you through it with this demo:

CodePen Embed Fallback

Getting started with the Popover API

Godstime Aburu does a deep dive on the Popover API, a new(ish) but everyday web platform feature that simplifies tooltip and tooltip-like UI patterns, but isn’t without its nuances.

Unraveling yet another anchor positioning quirk

Just another anchor positioning quirk, this time from Chris Coyier. These quirks have been piling up for a while now. We’ve talked about them time and time again, but the thing is, they’re not bugs. Anchor positioning works in a way that isn’t commonly understood, so Chris’ article is definitely worth a read, as are the articles that he references.

Building dynamic toggletips using anchored container queries

In this walkthrough, I demonstrate how to build dynamic toggletips using anchored container queries. Also, I ran into an anchor positioning quirk, so if you’re looking to solidify your understanding of all that, I think the walkthrough will help with that too.

Demo (full effect requires Chrome 143+):

CodePen Embed Fallback

DOOM in CSS

DOOM in CSS. DOOM. In CSS.

DOOM fully rendered in CSS. Every surface is a <div> that has a background image, with a clipping path with 3D transforms applied. Of course CSS does not have a movable camera, so we rotate and translate the scene around the user.

[image or embed]

— Niels Leenheer (@html5test.com) Mar 13, 2026 at 20:32

Safari updates, Chrome updates, and Quick Hits you missed

In addition, Chrome will ship every two weeks starting September.

From the Quick Hits reel, you might’ve missed that Font Awesome launched a Kickstarter campaign to transform Eleventy into Build Awesome, cancelled it because their emails failed to send (despite meeting their goal!), and vowed to try again. You can subscribe to the relaunch notification.

Also, <toolbar> is coming along according to Luke Warlow. This is akin to <focusgroup>, which we can actually test in Chrome 146 with the “Experimental Web Platform features” flag enabled.

Right, I’m off to slay some demons in DOOM. Until next time!

P.S. Congratulations to Kevin Powell for making it to 1 million YouTube subs!

What’s !important #7: random(), Folded Corners, Anchored Container Queries, and More originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

4 Reasons That Make Tailwind Great for Building Layouts

Css Tricks - Mon, 03/16/2026 - 4:01am

When I talk about layouts, I’m referring to how you place items on a page. The CSS properties that are widely used here include:

  • display — often grid or flex nowadays
  • margin
  • padding
  • width
  • height
  • position
  • top, left, bottom, right

I often include border-width as a minor item in this list as well.

At this point, there’s only one thing I’d like to say.

Tailwind is really great for making layouts.

There are many reasons why.

First: Layout styles are highly dependent on the HTML structure

When we shift layouts into CSS, we lose the mental structure and it takes effort to re-establish them. Imagine the following three-column grid in HTML and CSS:

<div class="grid"> <div class="grid-item"></div> <div class="grid-item"></div> </div> .grid { display: grid; grid-template-columns: 2fr 1fr; .grid-item:first-child { grid-column: span 2 } .grid-item:last-child { grid-column: span 1 } }

Now cover the HTML structure and just read the CSS. As you do that, notice you need to exert effort to imagine the HTML structure that this applies to.

Now imagine the same, but built with Tailwind utilities:

<div class="grid grid-cols-3"> <div class="col-span-2"></div> <div class="col-span-1"></div> </div>

You might almost begin to see the layout manifest in your eyes without seeing the actual output. It’s pretty clear: A three-column grid, first item spans two columns while the second one spans one column.

But grid-cols-3 and col-span-2 are kinda weird and foreign-looking because we’re trying to parse Tailwind’s method of writing CSS.

Now, watch what happens when we shift the syntax out of the way and use CSS variables to define the layout instead. The layout becomes crystal clear immediately:

<div class="grid-simple [--cols:3]"> <div class="[--span:2]"> ... </div> <div class="[--span:1]"> ... </div> </div>

Same three-column layout.

But it makes the layout much easier to write, read, and visualize. It also has other benefits, but I’ll let you explore its documentation instead of explaining it here.

For now, let’s move on.

Why not use 2fr 1fr?

It makes sense to write 2fr 1fr for a three-column grid, doesn’t it?

.grid {
  display: grid;
  grid-template-columns: 2fr 1fr;
}

Unfortunately, it won’t work. This is because fr is calculated based on the available space after subtracting away the grid’s gutters (or gap).

Since 2fr 1fr only contains two columns, the output from 2fr 1fr will be different from a standard three-column grid.

Alright. Let’s continue with the reasons that make Tailwind great for building layouts.

Second: No need to name layouts

I think layouts are the hardest things to name. I rarely come up with better names than:

  • Number + Columns, e.g. .two-columns
  • Semantic names, e.g. .content-sidebar

But these names don’t do the layout justice. You can’t really tell what’s going on, even if you see .two-columns, because .two-columns can mean a variety of things:

  • Two equal columns
  • Two columns with 1fr auto
  • Two columns with auto 1fr
  • Two columns that span a total of 7 “columns”, where the first object takes up 4 columns while the second takes up 3…

You can already see me tripping up when I try to explain that last one there…

Instead of forcing ourselves to name the layout, we can let the numbers do the talking — then the whole structure becomes very clear.

<div class="grid-simple [--cols:7]"> <div class="[--span:4]"> ... </div> <div class="[--span:3]"> ... </div> </div>

The variables paint a picture.

Third: Layout requirements can change depending on context

A “two-column” layout might have different properties when used in different contexts. Here’s an example.

In this example, you can see that:

  • A larger gap is used between the I and J groups.
  • A smaller gap is used within the I and J groups.

The difference in gap sizes is subtle, but used to show that the items are of separate groups.

Here’s an example where this concept is used in a real project. You can see the difference between the gap used within the newsletter container and the gap used between the newsletter and quote containers.

If this sort of layout is only used in one place, we don’t have to create a modifier class just to change the gap value. We can change it directly.

<div class="grid-simple [--cols:2] gap-8"> <div class="grid-simple gap-4 [--cols:2]"> ... </div> <div class="grid-simple gap-4 [--cols:2]"> ... </div> </div> Another common example

Let’s say you have a heading for a marketing section. The heading would look nicer if you are able to vary its max-width so the text isn’t orphaned.

text-balance might work here, but this is often nicer with manual positioning.

Without Tailwind, you might write an inline style for it.

<h2 class="h2" style="max-width: 12em;"> Your subscription has been confirmed </h2>

With Tailwind, you can specify the max-width in a more terse way:

<h2 class="h2 max-w-[12em]"> Your subscription has been confirmed </h2> Fourth: Responsive variants can be created on the fly

“At which breakpoint would you change your layouts?” is another factor you’d want to consider when designing your layouts. I shall term this the responsive factor for this section.

Most likely, similar layouts should have the same responsive factor. In that case, it makes sense to group the layouts together into a named layout.

.two-column {
  @apply grid-simple; /* --cols: 1 is the default */
  @media (width >= 800px) {
    --cols: 2;
  }
}

However, you may have layouts where you want two-column grids on mobile and a much larger column count on tablets and desktops. This layout style is commonly used in a site footer component.

Since the footer grid is unique, we can add Tailwind’s responsive variants and change the layout on the fly.

<div class="grid-simple [--cols:2] md:[--cols:5]"> <!-- span set to 1 by default so there's no need to specify them --> <div> ... </div> <div> ... </div> <div> ... </div> <div> ... </div> <div> ... </div> <div> ... </div> </div>

Again, we get to create a new layout on the fly without creating an additional modifier class — this keeps our CSS clean and focused.

How to best use Tailwind

This article is a sample lesson from my course, Unorthodox Tailwind, where I show you how to use Tailwind and CSS synergistically.

Personally, I think the best way to use Tailwind is not to litter your HTML with Tailwind utilities, but to create utilities that let you create layouts and styles easily.

I cover much more of that in the course if you're interested in finding out more!

4 Reasons That Make Tailwind Great for Building Layouts originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Durable Patterns in AI Product Design

LukeW - Thu, 03/12/2026 - 4:00am

In my recent Designing AI Products talk, I outlined several of the lessons we've learned building AI-native companies over the past four years. Specifically the patterns that keep proving durable as we speed-run through this evolution of what AI products will ultimately become.

I opened by framing something I think is really important: every time there's a major technology platform shift, almost everything about what an "application" is changes. From mainframes to personal computers, from desktop software to web apps, from web to mobile, the way we build, deliver, and experience software transforms completely each time.

There's always this awkward period where we try to cram the old paradigm into the new one. I dug up an old deck from when we were redesigning Yahoo, and even two years after the iPhone launched, we were still just trying to port the Yahoo webpage into a native iOS app. The same thing is happening now with AI. The difference is this evolution is moving really, really fast.

From there, I walked through the stages of AI product evolution as I've experienced them.

The first stage is AI working behind the scenes. Back in 2016, Google Translate was "completely reinvented," but the interface itself changed not at all. What actually happened was they replaced all these separate translation systems with a single neural network that could translate between language pairs it was never explicitly trained on. YouTube made a similar move with deep learning for video recommendations. The UIs stayed the same; everything transformative was happening under the hood.

I remember being at Google for years where the conversation was always about how to make machine learning more of a core part of the experience, but it never really got to the point where people were explicitly interacting with an AI model.

That changed with the explosion of chat. ChatGPT and everything that looks exactly like it made direct conversation with AI models the dominant pattern, and chat got bolted onto nearly every software product in a very short time. I illustrated this with Ask LukeW, a system I built almost three years ago that lets people talk to my body of work in natural language. It seems pretty simple now, but building and testing it surfaced a few patterns that have carried over into everything we've done since.

One is suggested questions. When you ask something, the system shows follow-up suggestions tied to your question and the broader corpus. When we tested this, we found these did an enormous amount of heavy lifting. They helped people understand what the system could do and how to use it.

A huge percentage of all interactions kicked off from one of these suggestions. And they've only gotten better with stronger models. In our newer products like Rev (for creatives) and Intent (for developers), the suggestions have become so relevant that people often just pick them with keyboard shortcuts instead of typing anything at all.

Another pattern is citation. Even just seeing where information comes from gives people a real trust boost. In Ask LukeW, you could hover over a citation and it would take you to the specific part of a document or video. This was an early example, but as AI systems gain access to more tools and can do much more than look up information, the question of how to represent what they did and why in the interface becomes increasingly important.

And the third is what I call the walls of text problem. Because so much of this is built on large language models, people are often left staring at big blocks of text they have to parse and interpret. We found that bringing back multimedia, like responding with images alongside text, or using diagrams and interactive elements, helped a lot.

Through that walkthrough of what now seems like a pretty simple AI application, I'd actually touched on what I think are the three core issues that remain with us today: capability awareness (what can I do here?), context awareness (what is the system looking at?), and the walls of text problem (too much output to process).

The next major stage is things becoming agentic. When AI models can use tools, make plans, configure those tools, analyze results, think in between steps, and fire off more tools based on what they find, the complexity of what to show in the UI explodes. And this compounds when you remember that most of this is getting bolted into side panels of existing software. I showed a developer tool where a single request to an agent produced this enormous thread of tool calls, model responses, more tool calls, and on and on. It's just a lot to take in.

A common reaction is to just show less of it, collapse it, or hide it entirely. And some AI products do that. But what I've seen consistently is that users fall into two groups. One group really wants to see what the system is thinking and doing and why. The other group just wants to let it rip and see what comes out. I originally thought this was a new-versus-experienced user thing, but it honestly feels more like two distinct mindsets.

We've tried many different approaches. In Bench, a workspace for knowledge work, we showed all tool calls on the left, let you click into each one to see what it did, and expand the thinking steps between them. You could even open individual tool calls and see their internal steps. That was a lot.

As we iterated, we moved from highlighting every tool call to condensing them, surfacing just what they were doing, and eventually showing processes inline as single lines you could expand if you wanted. The pattern we've landed on in Intent is collapsed single-line entries for each action. If you really want to, you can pop one open and see what happened inside, but for the most part, collapsing these things (and even finding ways to collapse collapses of these things) is where we are now.

We also experimented with separating process from results entirely. In ChatDB, when you ask a question, the thinking steps appear on the left while results show up on the right. You can scroll through results independently while keeping the summary visible, or open up the thought process to see why it did what it did. Changing the layout to give actual results more prominence while still making the reasoning accessible has worked well.

On the capability awareness front, I showed several approaches we've explored. One is prompt enhancement, where you type something simple and the model rewrites it into a much more detailed, context-aware instruction. This gets really interesting when the system can automatically search a codebase (like our product Augment does) to find relevant patterns and write better instructions that account for them.

Another approach was Bench's visual task builder, where you compose compound sentences from columns of capabilities: "I want to... search... Notion for... a topic... and create a PowerPoint summarizing the findings." This gives people tremendous visibility into what the system can do while also helping them point it in the right direction.

And then there's onboarding. Designers are familiar with the empty screen problem, and the usual advice is to throw tooltips or tutorials at it. But it turns out we can have the AI model handle all of this instead. In ChatDB, when you drag a spreadsheet onto the page, the system picks a color, picks an icon, names the dashboard, starts running analysis, and generates charts for you. You learn what it does by watching it do things, rather than trying to figure out what you can tell it to do.

For context awareness, I showed how products like Reve let you spatially tell the model what to pay attention to. You can highlight an object in an image, drag in reference art, move elements around, and then apply all those changes. You're being very explicit through the interface about what the model should focus on. I also showed context panels where you can attach files, select text, or point the model at specific folders.

The final stage I explored is agents orchestrating other agents. In Intent, there's an agent orchestration mode where a coordinator agent figures out the plan, shows it to you for review, and then kicks off a bunch of sub-agents to execute different parts of the work in parallel. You can watch each agent working on its piece. I think there's a big open question here about where the line is.

How much can people actually process and manage? If you use the metaphor of being a manager or a CEO, can you be a CEO of CEOs? I don't think we know yet, but this is clearly where the evolution is heading.

The throughline of the whole talk was that while the final form of AI applications hasn't been figured out, certain patterns keep proving their value at each stage. Those durable patterns, the ones that hang around and sometimes become even more important as things evolve, are the ones worth paying close attention to.

Finding the Role of Humans in AI Products

LukeW - Tue, 03/10/2026 - 4:00am

As AI products have evolved from models behind the scenes to chat interfaces to agentic systems to agents coordinating other agents, the design question has begun to shift. It used to be about how people interact with AI. Now it's about where and how people fit in.

The clearest example of this is in software development. In Anthropic's 2025 data, software developers made up 3% of U.S. workers but nearly 40% of all Claude conversations. A year later, their 2026 Measuring Agent Autonomy report showed software engineering accounting for roughly 50% of AI agent deployments. Whatever developers are doing with AI now, other domains are likely to follow suit.

And what developers have been doing is watching their role abstract upward at a pace that's hard to overstate.

  • First, humans wrote code. You typed, the computer did what you said.
  • Then machines started suggesting. GitHub Copilot's early form was essentially AI behind the scenes, offering inline completions. You picked which suggestions to use. Still very much in the driver's seat.
  • Then humans started talking to AI directly. The chat era. You could describe what you wanted in natural language, paste in a broken function, brainstorm architecture. The model became a collaborator.
  • Then agents got tools. The model doesn't just respond with text anymore. It searches files, calls APIs, writes code, checks its own work, and decides what to do next based on the results. You're no longer directing each step.
  • Then came orchestration. A coordinator agent receives your request, builds a plan, and delegates to specialized sub-agents. You review and approve the plan, but execution fans out across multiple autonomous workers.

To make this more tangible, our developer workspace, Intent, makes use of agent orchestration where a coordinator agent analyzes what needs to happen, searches across relevant resources, and generates a plan. Once you approve that plan, the coordinator kicks off specialized agents to do the work: one handling the design system, another building out navigation, another coordinating their outputs. Your role is to review, approve, and steer.

Stack that one more level and you've got machines running machines running machines. At which point: where exactly does the human sit?

To use a metaphor we're all familiar with: a manager keeps tabs on a handful of direct reports. A director manages managers. A CEO manages directors. At each layer, the person at the top trades direct understanding for leverage. They see less of the actual work and more of the summaries, status updates, and roll-ups.

But being an effective CEO is extraordinarily rare. Not just thinking you can do it, but actually doing it well. And a CEO of CEOs? The number of people who have operated at that scale is vanishingly small.

Which raises two questions. First, how far up the stack can humans actually go? Agent orchestration? Orchestration of orchestration? Where does it break down? Second, at whatever level we land on, what skills do people need to operate there?

The durable skills may turn out to be steering, delegation, and awareness: knowing what to ask for, how much autonomy to grant, and when to look under the hood. These aren't programming skills. They're closer to the skills of a good leader who knows when to let the team run and when to step in.

We used to design how people interact with software. Now we're designing how much they need to.

The Value of z-index

Css Tricks - Mon, 03/09/2026 - 4:20am

The z-index property is one of the most important tools any UI developer has at their disposal, as it allows you to control the stacking order of elements on a webpage. Modals, toasts, popups, dropdowns, tooltips, and many other common elements rely on it to ensure they appear above other content.

While most resources focus on the technical details or the common pitfalls of the Stacking Context (we’ll get to that in a moment…), I think they miss one of the most important and potentially chaotic aspects of z-index: the value.

In most projects, once you hit a certain size, the z-index values become a mess of “magic numbers”: a chaotic battlefield where every team tries to outdo the others with higher and higher numbers.

How This Idea Started

I saw this line on a pull request a few years ago:

z-index: 10001;

I thought to myself, “Wow, that’s a big number! I wonder why they chose that specific value?” When I asked the author, they said: “Well, I just wanted to make sure it was above all the other elements on the page, so I chose a high number.”

This got me thinking about how we look at the stacking order of our projects, how we choose z-index values, and more importantly, the implications of those choices.

The Fear of Being Hidden

The core issue isn’t a technical one, but a lack of visibility. In a large project with multiple teams, you don’t always know what else is floating on the screen. There might be a toast notification from Team A, a cookie banner from Team B, or a modal from the marketing SDK.

The developer’s logic was simple in this case: “If I use a really high number, surely it will be on top.”

This is how we end up with magic numbers, these arbitrary values that aren’t connected to the rest of the application. They are guesses made in isolation, hoping to win the “arms race” of z-index values.

We’re Not Talking About Stacking Context… But…

As I mentioned at the beginning, there are many resources that cover z-index in the context of the Stacking Context. In this article, we won’t cover that topic. However, it’s impossible to talk about z-index values without at least mentioning it, as it’s a crucial concept to understand.

Essentially, elements with a higher z-index value will be displayed in front of those with a lower value as long as they are in the same Stacking Context.

If they aren’t, then even if you set a massive z-index value on an element in a “lower” stack, elements in a “higher” stack will stay on top of it, even if they have a very low z-index value. This means that sometimes, even if you give an element the maximum possible value, it can still end up being hidden behind something else.
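To make the trap concrete, here’s a minimal sketch (the class names and markup are hypothetical): a child carrying the maximum possible z-index still paints behind a sibling of its parent’s stacking context.

```css
/* Assumed markup: <header class="header">…</header> followed by
   <main class="content"><nav class="menu">…</nav></main> */
.header {
  position: relative;
  z-index: 2; /* creates a stacking context at level 2 */
}
.content {
  position: relative;
  z-index: 1; /* creates a sibling stacking context at level 1 */
}
.menu {
  position: absolute;
  z-index: 2147483647; /* maxed out, yet still paints behind .header */
}
```

Because .content sits at level 1, everything inside it is confined to level 1. No value on .menu can lift it above .header.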

CodePen Embed Fallback CodePen Embed Fallback

Now let’s get back to the values.

💡 Did you know? The maximum value for z-index is 2147483647. Why this specific number? It’s the maximum value for a 32-bit signed integer. If you try to go any higher, most browsers will simply clamp it to this limit.

The Problem With “Magic Numbers”

Using arbitrary high values for z-index can lead to several issues:

  1. Lack of maintainability: When you see a z-index value like 10001, it doesn’t tell you anything about its relationship to other elements. It’s just a number that was chosen without any context.
  2. Potential for conflicts: If multiple teams or developers are using high z-index values, they might end up conflicting with each other, leading to unexpected behavior where some elements are hidden behind others.
  3. Difficult to debug: When something goes wrong with the stacking order, it can be challenging to figure out why, especially if there are many elements with high z-index values.

A Better Approach

I’ve encountered this “arms race” in almost every large project I’ve been a part of. The moment you have multiple teams working in the same codebase without a standardized system, chaos eventually takes over.

The solution is actually quite simple: tokenization of z-index values.

Now, wait, stay with me! I know that the moment someone mentions “tokens”, some developers might roll their eyes or shake their heads, but this approach actually works. Most of the major (and better-designed) design systems include z-index tokens for a reason. Teams that adopt them swear by them and never look back.

By using tokens, you gain:

  • Simple and easy maintenance: You manage values in one place.
  • Conflict prevention: No more guessing if 100 is higher than whatever Team B is using.
  • Easier debugging: You can see exactly which “layer” an element belongs to.
  • Better Stacking Context management: It forces you to think about layers systematically rather than as random numbers.
A Practical Example

Let’s look at how this works in practice. I’ve prepared a simple demo where we manage our layers through a central set of tokens in the :root:

:root { --z-base: 0; --z-toast: 100; --z-popup: 200; --z-overlay: 300; } CodePen Embed Fallback

This setup is incredibly convenient. If you need to add a new popup or a toast, you know exactly which z-index to use. If you want to change the order — for example, to place toasts above the overlay — you don’t need to hunt through dozens of files. You just change the values in the :root, and everything updates accordingly in one place.
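Components then consume the tokens instead of hard-coding numbers (the class names here are illustrative):

```css
.toast   { z-index: var(--z-toast); }
.popup   { z-index: var(--z-popup); }
.overlay { z-index: var(--z-overlay); }
```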

Handling New Elements

The real power of this system shines when your requirements change. Suppose you need to add a new sidebar and place it specifically between the base content and the toasts.

In a traditional setup, you’d be checking every existing element to see what numbers they use. With tokens, we simply insert a new token and adjust the scale:

:root { --z-base: 0; --z-sidebar: 100; --z-toast: 200; --z-popup: 300; --z-overlay: 400; } CodePen Embed Fallback

You don’t have to touch a single existing component with this setup. You update the tokens and you’re good to go. The logic of your application remains consistent, and you’re no longer guessing which number is “high enough”.

The Power of Relative Layering

We sometimes want to “lock” specific layers relative to each other. A great example of this is a background element for a modal or an overlay. Instead of creating a separate token for the background, we can calculate its position relative to the main layer.

Using calc() allows us to maintain a strict relationship between elements that always belong together:

.overlay-background { z-index: calc(var(--z-overlay) - 1); }

This ensures that the background will always stay exactly one step behind the overlay, no matter what value we assign to the --z-overlay token.

CodePen Embed Fallback Managing Internal Layers

Up until now, we’ve focused on the main, global layers of the application. But what happens inside those layers?

The tokens we created for the main layers (like 100, 200, etc.) are not suitable for managing internal elements. This is because most of these main components create their own Stacking Context. Inside a popup that has z-index: 300, a value of 301 is functionally identical to 1. Using large global tokens for internal positioning is confusing and unnecessary.

Note: For these local tokens to work as expected, you must ensure the container creates a Stacking Context. If you’re working on a component that doesn’t already have one (e.g., it doesn’t have a z-index set), you can create one explicitly using isolation: isolate.

To solve this, we can introduce a pair of “local” tokens specifically for internal use:

:root { /* ... global tokens ... */ --z-bottom: -10; --z-top: 10; }

This allows us to handle internal positioning with precision. If you need a floating action button inside a popup to stay on top, or a decorative icon on a toast to sit behind the main content, you can use these local anchors:

.popup-close-button { z-index: var(--z-top); } .toast-decorative-icon { z-index: var(--z-bottom); }

For even more complex internal layouts, you can still use calc() with these local tokens. If you have multiple elements stacking within a component, calc(var(--z-top) + 1) (or - 1) gives you that extra bit of precision without ever needing to look at global values.

CodePen Embed Fallback

This keeps our logic consistent: we think about layers and positions systematically, rather than throwing random numbers at the problem and hoping for the best.

Versatile Components: The Tooltip Case

One of the biggest headaches in CSS is managing components that can appear anywhere, like a tooltip.

Traditionally, developers give tooltips a massive z-index (like 9999) because they might appear over a modal. But if the tooltip is physically inside the modal’s DOM structure, its z-index is only relative to that modal anyway.

A tooltip simply needs to be above the content it’s attached to. By using our local tokens, we can stop the guessing game:

.tooltip { z-index: var(--z-top); }

Whether the tooltip is on a button in the main content, an icon inside a toast, or a link within a popup, it will always appear correctly above its immediate surroundings. It doesn’t need to know about the global “arms race” because it’s already standing on the “stable floor” provided by its parent layer’s token.

CodePen Embed Fallback Negative Values Can Be Good

Negative values often scare developers. We worry that an element with z-index: -1 will disappear behind the page background or some distant parent.

However, within our systematic approach, negative values are a powerful tool for internal decorations. When a component creates its own Stacking Context, the z-index is confined to that component. And z-index: var(--z-bottom) simply means “place this behind the default content of this specific container”.

This is perfect for:

  • Component backgrounds: Subtle patterns or gradients that shouldn’t interfere with text.
  • Shadow simulations: When you need more control than box-shadow provides.
  • Inner glows or borders: Elements that should sit “under” the main UI.
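As a sketch (assuming the --z-bottom token from earlier and a hypothetical .card component), a decorative layer can sit behind the component’s content without ever falling behind the page itself:

```css
.card {
  position: relative;
  isolation: isolate; /* confines z-index values to this component */
}
.card-decoration {
  position: absolute;
  inset: 0;
  z-index: var(--z-bottom); /* behind .card's content, but never behind the page */
}
```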
Conclusion: The z-index Manifesto

With just a few CSS variables, we’ve built a complete management system for z-index. It’s a simple yet powerful way to ensure that managing layers never feels like a guessing game again.

To maintain a clean and scalable codebase, here are the golden rules for working with z-index:

  1. No magic numbers: Never use arbitrary values like 999 or 10001. If a number isn’t tied to a system, it’s a bug waiting to happen.
  2. Tokens are mandatory: Every z-index in your CSS should come from a token, either a global layer token or a local positioning token.
  3. It’s rarely the value: If an element isn’t appearing on top despite a “high” value, the problem is almost certainly its Stacking Context, not the number itself.
  4. Think in layers: Stop asking “how high should this be?” and start asking “which layer does this belong to?”
  5. Calc for connection: Use calc() to bind related elements together (like an overlay and its background) rather than giving them separate, unrelated tokens.
  6. Local contexts for local problems: Use local tokens (--z-top, --z-bottom) and internal stacking contexts to manage complexity within components.

By following these rules, you turn z-index from a chaotic source of bugs into a predictable, manageable part of your design system. The value of z-index isn’t in how high the number is, but in the system that defines it.

Bonus: Enforcing a Clean System

A system is only as good as its enforcement. In a deadline-driven environment, it’s easy for a developer to slip in a quick z-index: 999 to “make it just work”. Without automation, your beautiful token system will eventually erode back into chaos.

To prevent this, I developed a library specifically designed to enforce this exact system: z-index-token-enforcer.

npm install z-index-token-enforcer --save-dev

It provides a unified set of tools to automatically flag any literal z-index values and require developers to use your predefined tokens:

  • Stylelint plugin: For standard CSS/SCSS enforcement
  • ESLint plugin: To catch literal values in CSS-in-JS and React inline styles
  • CLI scanner: A standalone script that can quickly scan files directly or be integrated into your CI/CD pipelines

By using these tools, you turn the “Golden Rules” from a recommendation into a hard requirement, ensuring that your codebase stays clean, scalable, and, most importantly, predictable.

The Value of z-index originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Popover API or Dialog API: Which to Choose?

Css Tricks - Mon, 03/02/2026 - 5:10am

Choosing between Popover API and Dialog API is difficult because they seem to do the same job, but they don’t!

After lots of research, I discovered that the Popover API and the Dialog API are wildly different in terms of accessibility. So, if you’re trying to decide whether to use the Popover API or the Dialog API, I recommend you:

  • Use the Popover API for most popovers.
  • Use the Dialog API only for modal dialogs.
Popovers vs. Dialogs

The relationship between Popovers and Dialogs is confusing to most developers, but it’s actually quite simple.

Dialogs are simply subsets of popovers. And modal dialogs are subsets of dialogs. Read this article if you want to understand the rationale behind this relationship.

This is why you could use the Popover API even on a <dialog> element.

<!-- Using popover on a dialog element --> <dialog popover>...</dialog>

Stylistically, the difference between popovers and modals is even clearer:

  • Modals should show a backdrop.
  • Popovers should not.

Therefore, you should never style a popover’s ::backdrop element. Doing so will simply indicate that the popover is a dialog — which creates a whole can of problems.

You should only style a modal’s ::backdrop element.

Popover API and its accessibility

Building a popover with the Popover API is relatively easy. You specify three things:

  • a popovertarget attribute on the popover trigger,
  • an id on the popover, and
  • a popover attribute on the popover.

The popovertarget must match the id.

<button popovertarget="the-popover"> ... </button> <dialog popover id="the-popover"> The Popover Content </dialog>

Notice that I’m using the <dialog> element to create a dialog role. This is optional, but recommended. I do this because dialog is a great default role, since most popovers are simply dialogs.

These two lines of code come with a ton of accessibility features already built in for you:

  • Automatic focus management
    • Focus goes to the popover when opening.
    • Focus goes back to the trigger when closing.
  • Automatic aria connection
    • No need to write aria-expanded, aria-haspopup, and aria-controls. Browsers handle those natively. Woo!
  • Automatic light dismiss
    • Popover closes when user clicks outside.
    • Popover closes when they press the Esc key.

Now, without additional styling, the popover looks kinda meh. Styling is a whole ‘nother issue, so we’ll tackle that in a future article. Geoff has a few notes you can review in the meantime.

CodePen Embed Fallback Dialog API and its accessibility

Unlike the Popover API, the Dialog API doesn’t have many built-in features by default:

  • No automatic focus management
  • No automatic ARIA connection
  • No automatic light dismiss

So, we have to build them ourselves with JavaScript. This is why the Popover API is superior to the Dialog API in almost every aspect — except for one: when modals are involved.

The Dialog API has a showModal method. When showModal is used, the Dialog API creates a modal. It:

  1. automatically inerts other elements,
  2. prevents users from tabbing into other elements, and
  3. prevents screen readers from reaching other elements.

It does this so effectively, we no longer need to trap focus within the modal.

But we gotta take care of the focus and ARIA stuff when we use the Dialog API, so let’s tackle the bare minimum code you need for a functioning dialog.

We’ll begin by building the HTML scaffold:

<button class="modal-invoker" data-target="the-modal" aria-haspopup="dialog" >...</button> <dialog id="the-modal">The Popover Content</dialog>

Notice I did not add any aria-expanded in the HTML. I do this for a variety of reasons:

  1. This reduces the complexity of the HTML.
  2. We can write aria-expanded, aria-controls, and the focus stuff directly in JavaScript – since these won’t work without JavaScript.
  3. Doing so makes this HTML very reusable.
Setting up

I’m going to write about a vanilla JavaScript implementation here. If you’re using a framework, like React or Svelte, you will have to make a couple of changes — but I hope that it’s gonna be straightforward for you.

First thing to do is to loop through all dialog-invokers and set aria-expanded to false. This creates the initial state.

We will also set aria-controls to the <dialog> element. We’ll do this even though aria-controls is poop, ’cause there’s no better way to connect these elements (and there’s no harm connecting them) as far as I know.

const modalInvokers = Array.from(document.querySelectorAll('.modal-invoker')) modalInvokers.forEach(invoker => { const dialogId = invoker.dataset.target const dialog = document.querySelector(`#${dialogId}`) invoker.setAttribute('aria-expanded', false) invoker.setAttribute('aria-controls', dialogId) }) Opening the modal

When the invoker/trigger is clicked, we gotta:

  1. change aria-expanded from false to true so assistive tech users know the modal is open, and
  2. use the showModal function to open the modal.

We don’t have to write any code to hide the modal in this click handler because users will never get to click on the invoker when the dialog is opened.

modalInvokers.forEach(invoker => { // ... // Opens the modal invoker.addEventListener('click', event => { invoker.setAttribute('aria-expanded', true) dialog.showModal() }) }) CodePen Embed Fallback

Great. The modal is open. Now we gotta write code to close the modal.

Closing the modal

By default, showModal doesn’t have automatic light dismiss, so users can’t close the modal by clicking on the overlay, or by hitting the Esc key. This means we have to add another button that closes the modal. This must be placed within the modal content.

<dialog id="the-modal"> <button class="modal-closer">X</button> <!-- Other modal content --> </dialog>

When users click the close button, we have to:

  1. set aria-expanded on the opening invoker to false,
  2. close the modal with the close method, and
  3. bring focus back to the opening invoker element.
modalInvokers.forEach(invoker => { // ... // Opens the modal invoker.addEventListener('click', event => { invoker.setAttribute('aria-expanded', true) dialog.showModal() }) }) const modalClosers = Array.from(document.querySelectorAll('.modal-closer')) modalClosers.forEach(closer => { const dialog = closer.closest('dialog') const dialogId = dialog.id const invoker = document.querySelector(`[data-target="${dialogId}"]`) closer.addEventListener('click', event => { dialog.close() invoker.setAttribute('aria-expanded', false) invoker.focus() }) })

Phew, with this, we’re done with the basic implementation.

CodePen Embed Fallback

Of course, there’s advanced work like light dismiss and styling… which we can tackle in a future article.

Can you use the Popover API to create modals?

Yeah, you can.

But you will have to handle these on your own:

  1. Inerting other elements
  2. Trapping focus
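As a rough sketch of what the inerting step entails (the element IDs are hypothetical, and a production version would also need to restore everything and manage focus when the popover closes):

```js
// Hypothetical sketch: inert everything outside the popover while it's open
const popover = document.querySelector('#the-popover')

for (const el of document.body.children) {
  // Skip the popover's own ancestor chain so it stays interactive
  if (!el.contains(popover)) el.inert = true
}

popover.showPopover()
```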

I think what we did earlier (setting aria-expanded, aria-controls, and focus) is easier compared to inerting elements and trapping focus.

The Dialog API might become much easier to use in the future

A proposal about invoker commands has been created so that the Dialog API can include popovertarget like the Popover API.

This is on the way, so we might be able to make modals even simpler with the Dialog API in the future. In the meantime, we gotta do the necessary work to patch accessibility stuff.

Deep dive into building workable popovers and modals

We’ve only begun to scratch the surface of building working popovers and modals with the code above — they’re barebones versions that are accessible, but they definitely don’t look nice and can’t be used for professional purposes yet.

To make the process of building popovers and modals easier, we will dive deeper into the implementation details for a professional-grade popover and a professional-grade modal in future articles.

In the meantime, I hope these give you some ideas on when to choose the Popover API and the Dialog API!

Remember, there’s no need to use both. One will do.

Popover API or Dialog API: Which to Choose? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Small Teams Win, Again

LukeW - Sun, 03/01/2026 - 4:00am

I’ve always believed in the power of small teams. The start-ups I co-founded never exceeded five employees, yet achieved a lot. With today's technology, even more companies can remain extremely small and be extremely effective. And that's awesome.

When Twitter acquired Bagcheck in 2011, Sam (CTO) and I were shipping multiple times a day. We started with a command line interface that let us figure out what objects and actions we needed before ever building any UI. When we did, we used logic-less templates so I could iterate on the front-end quickly while Sam managed the back-end code.

The point was to move fast and learn. With just two people building the product, we never got bottlenecked on decision-making or coordination. While conventional wisdom says "add more resources" to go faster, it rarely works out that way. Most companies go slow because of plodding decision making and opaque alignment. Smaller teams naturally don't have this problem.

But small teams can only do so much, right? That's why every team in a big company is always asking for more resources. Not anymore.

Armed with highly capable AI systems, everyone (designer, developer, etc.) on a team can get more done. In big teams, though, these new capabilities smack head first into the decision-making and alignment problems that have always been there. In small teams, they don't.

So how small? Surely we need at least 100? 50? Bagcheck never crossed four employees, and when Google acquired my next company, Polar, in 2014 there were five of us. These companies pre-dated AI coding agents and large language models. With today's AI capabilities, the number of people you need to get a lot done fast is probably a lot smaller than you think.

What’s !important #6: :heading, border-shape, Truncating Text From the Middle, and More

Css Tricks - Fri, 02/27/2026 - 6:30am

Despite what’s been a sleepy couple of weeks for new Web Platform Features, we have an issue of What’s !important that’s prrrretty jam-packed. The web community had a lot to say, it seems, so fasten your seatbelts!

@keyframes animations can be strings

Peter Kröner shared an interesting fact about @keyframes animations — that they can be strings:

@keyframes "@animation" {
  /* ... */
}

#animate-this {
  animation: "@animation";
}

Yo dawg, time for a #CSS fun fact: keyframe names can be strings. Why? Well, in case you want your keyframes to be named “@keyframes,” obviously! #webdev


— Peter Kröner (@sirpepe.bsky.social) Feb 18, 2026 at 10:33

I don’t know why you’d want to do that, but it’s certainly an interesting thing to learn about @keyframes after 11 years of cross-browser support!

: vs. = in style queries

Another hidden trick, this one from Temani Afif, has revealed that we can replace the colon in a style query with an equals symbol. Temani does a great job at explaining the difference, but here’s a quick code snippet to sum it up:

.Jay-Z {
  --Problems: calc(98 + 1);

  /* Evaluates as calc(98 + 1), color is blueivy */
  color: if(style(--Problems: 99): red; else: blueivy);

  /* Evaluates as 99, color is red */
  color: if(style(--Problems = 99): red; else: blueivy);
}

In short, = evaluates --Problems differently from :, even though Jay-Z undoubtedly has 99 of them (he said so himself).

Declarative <dialog>s (and an updated .visually-hidden)

David Bushell demonstrated how to create <dialog>s declaratively using invoker commands, a useful feature that allows us to skip some JavaScript in favor of HTML, and which recently became supported in all major browsers.

Also, thanks to an inquisitive question from Ana Tudor, the article spawned a spin-off about the minimum number of styles needed for a visually-hidden utility class. Is it still seven?

Maybe not…
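For reference, the declarative pattern looks something like this (David's article has the full details; the element ID here is mine):

```html
<!-- Opens and closes the dialog with zero JavaScript;
     "my-dialog" is a made-up ID for illustration -->
<button commandfor="my-dialog" command="show-modal">Open dialog</button>

<dialog id="my-dialog">
  <p>Hello from a declarative dialog!</p>
  <button commandfor="my-dialog" command="close">Close</button>
</dialog>
```

The command attribute tells the browser what to do, and commandfor points at the element to do it to, so the open/close wiring comes for free.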

How to truncate text from the middle

Wes Bos shared a clever trick for truncating text from the middle using only CSS:

Someone on reddit posted a demo where CSS truncates text from the middle. They didn't post the code, so here is my shot at it with Flexbox


— Wes Bos (@wesbos.com) Feb 9, 2026 at 17:31

Donnie D’Amato attempted a more-native solution using ::highlight(), but ::highlight() has some limitations, unfortunately. As Henry Wilkinson mentioned, Hazel Bachrach’s 2019 call for a native solution is still an open ticket, so fingers crossed!
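Wes's exact code is in the embed, but the general idea goes something like this: split the string into two elements inside a flex container, let the first half truncate, and keep the tail rigid (the class names and markup split are my own sketch, not Wes's code):

```html
<!-- The string is split into two spans; the first shrinks and
     truncates, the second refuses to shrink, so the "…" lands
     in the middle -->
<div class="middle-truncate">
  <span class="start">a-very-long-filename-that-keeps-go</span><span class="end">ing.pdf</span>
</div>

<style>
  .middle-truncate {
    display: flex;
    white-space: nowrap;
  }
  .middle-truncate .start {
    overflow: hidden;
    text-overflow: ellipsis; /* the ellipsis appears mid-string */
  }
  .middle-truncate .end {
    flex-shrink: 0; /* the tail stays fully visible */
  }
</style>
```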

How to manage color variables with relative color syntax

Theo Soti demonstrated how to manage color variables with relative color syntax. While not a new feature or concept, it’s frankly the best and most comprehensive walkthrough I’ve ever read that addresses these complexities.
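The core idea is that one base color can spawn a whole family of variants, each defined in terms of the base rather than hard-coded. A minimal sketch (the color values and variable names are mine):

```css
:root {
  --brand: oklch(65% 0.15 250);

  /* Darken by lowering lightness, keeping chroma and hue */
  --brand-dark: oklch(from var(--brand) calc(l - 0.2) c h);

  /* Wash it out by halving chroma */
  --brand-muted: oklch(from var(--brand) l calc(c * 0.5) h);

  /* Same color at 50% opacity */
  --brand-faded: oklch(from var(--brand) l c h / 50%);
}
```

Change --brand once and every derived variant follows along.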

How to customize lists (the modern way)

In a similar article for Piccalilli, Richard Rutter comprehensively showed us how to customize lists, although this one has some nuggets of what I can only assume is modern CSS. What’s symbols()? What’s @counter-style and extends? Richard walks you through everything.

Source: Piccalilli.

Can’t get enough on counters? Juan Diego put together a comprehensive guide right here on CSS-Tricks.
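To give you a taste of the features mentioned above, here's a small sketch of both (the style names are mine):

```css
/* A named counter style: decimal numbering, zero-padded to
   two digits (01, 02, ...), with a trailing dot */
@counter-style padded {
  system: extends decimal;
  pad: 2 "0";
  suffix: ". ";
}

ol {
  list-style: padded;
}

/* symbols() skips the at-rule entirely for simple cases */
ul {
  list-style: symbols(cyclic "–");
}
```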

How to create typescales using :heading

Safari Technology Preview 237 recently began trialing :heading/:heading(), as Stuart Robson explains. The follow-up is even better though, as it shows us how pow() can be used to write cleaner typescale logic, although I ultimately settled on the old-school <h1>–<h6> elements with a simpler implementation of :heading and no sibling-index():

:root {
  --font-size-base: 16px;
  --font-size-scale: 1.5;
}

:heading {
  /* Other heading styles */
}

/* Assuming only base/h3/h2/h1 */
body {
  font-size: var(--font-size-base);
}

h3 {
  font-size: calc(var(--font-size-base) * var(--font-size-scale));
}

h2 {
  font-size: calc(var(--font-size-base) * pow(var(--font-size-scale), 2));
}

h1 {
  font-size: calc(var(--font-size-base) * pow(var(--font-size-scale), 3));
}

Una Kravets introduced border-shape

Speaking of new features, border-shape came as a surprise to me considering that we already have — or will have — corner-shape. However, border-shape is different, as Una explains. It addresses the issues with borders (because it is the border), allows for more shapes and even the shape() function, and overall it works differently behind the scenes.

Source: Una Kravets.

modern.css wants you to stop writing CSS like it’s 2015

It’s time to start using all of that modern CSS, and that’s exactly what modern.css wants to help you do. All of those awesome features that weren’t supported when you first read about them, that you forgot about? Or the ones that you missed or skipped completely? Well, modern.css has 75 code snippets and counting, and all you have to do is copy ‘em.

Kevin Powell also has some CSS snippets for you

And the commenters? They have some too!

Honestly, Kevin is the only web dev talker that I actually follow on YouTube, and he’s so close to a million subscribers right now, so make sure to hit ol’ K-Po’s “Subscribe” button.

In case you missed it

Actually, you didn’t miss that much! Firefox 148 released the shape() function, which was being held captive by a flag, but is now a baseline feature. Safari Technology Preview 237 became the first to trial :heading. Those are all we’ve seen from our beloved browsers in the last couple of weeks (not counting the usual flurry of smaller updates, of course).

That being said, Chrome, Safari, and Firefox announced their targets for Interop 2026, revealing which Web Platform Features they intend to make consistent across all web browsers this year, which more than makes up for the lack of shiny features this week.

Also coming up (but testable in Chrome Canary now, just like border-shape) is the scrolled keyword for scroll-state container queries. Bramus talks about scrolled scroll-state queries here.

Remember, if you don’t want to miss anything, you can catch these Quick Hits as the news breaks in the sidebar of css-tricks.com.

See you in a fortnight!

What’s !important #6: :heading, border-shape, Truncating Text From the Middle, and More originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.
