Web Standards

Scroll-Driven Animations Inside a CSS Carousel

CSS-Tricks - Thu, 05/15/2025 - 2:30am

I was reflecting on what I learned about CSS Carousels recently. There’s a lot they can do right out of the box (and some things they don’t) once you define a scroll container and hide the overflow.

Hey, isn’t there another fairly new CSS feature that works with scroll regions? Oh yes, that’s Scroll-Driven Animations. Shouldn’t that mean we can trigger an animation while scrolling through the items in a CSS carousel?

Why yes, that’s exactly what it means. At least in Chrome at the time I’m playing with this:

CodePen Embed Fallback

It’s as straightforward as you might expect: define your keyframes and apply them on the carousel items:

@keyframes foo {
  from {
    height: 0;
  }
  to {
    height: 100%;
    font-size: calc(2vw + 1em);
  }
}

.carousel li {
  animation: foo linear both;
  animation-timeline: scroll(inline);
}

There are more clever ways to animate these things of course. But what’s interesting to me is that this demo now combines CSS Carousels with Scroll-Driven Animations. The only rub is that the demo also slaps CSS Scroll Snapping in there with smooth scrolling, which is effectively wiped out when applying the scroll animation.

I thought I might work around that with a view() timeline instead. That certainly makes for a smoother animation that is applied to each carousel item as it scrolls into view, but no dice on smooth scrolling.
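For reference, the view() variant differs from the scroll() version in a single declaration. This is a minimal sketch (the selector and keyframe name are assumed to match the earlier demo):

```css
/* With view(inline), each item animates against its own visibility
   within the scrollport, rather than the carousel's overall
   scroll position. */
.carousel li {
  animation: foo linear both;
  animation-timeline: view(inline);
}
```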

CodePen Embed Fallback

Scroll-Driven Animations Inside a CSS Carousel originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

This Isn’t Supposed to Happen: Troubleshooting the Impossible

CSS-Tricks - Wed, 05/14/2025 - 4:01am

I recently rebuilt my portfolio (johnrhea.com). After days and days of troubleshooting and fixing little problems on my local laptop, I uploaded my shiny new portfolio to the server — and triumphantly watched it not work at all…

The browser parses and runs JavaScript, right? Maybe Chrome will handle something a little different from Firefox, but if the same code is on two different servers it should work the same in Chrome (or Firefox) no matter which server you look at, right? Right?

First, the dynamically generated stars wouldn’t appear, and when you tried to play the game mode, it was just blank. No really terrible website enemies appeared, nor could they shoot any bad experience missiles at you, at least not in the game mode, though I guess my buggy website literally sent a bad experience missile at you. Over on the page showing my work, little cars were supposed to zoom down the street, but they didn’t show up, either.

Let me tell you, there was no crying or tears of any kind. I was very strong and thrilled, just thrilled, to accept the challenge of figuring out what was going on. I frantically googled things like “What could cause JavaScript to act differently on two servers?”, “Why would a server change how JavaScript works?”, and “Why does everyone think I’m crying when I’m clearly not?” But to no avail.

There were some errors in the console, but not ones that made sense. I had an SVG element that we’ll call car (because that’s what I named it). I created it in vanilla JavaScript, added it to the page, and zoomed it down the gray strip approximating a street. (It’s a space theme where you can explore planets. It’s really cool. I swear.) I was setting transforms on car using car.style.transform and it was erroring out. car.style was undefined.

I went back to my code on my laptop. Executes flawlessly. No errors.

To get past the initial error, I switched from car.style to using setAttribute, e.g. car.setAttribute('style', 'transform: translate(100px, 200px)');. This just got me to the next error: car was also erroring out on some data-* attributes I was using to hold information about the car, e.g. car.dataset.xspeed would come back undefined when I tried to access it. The dataset API has been supported on SVG elements since 2015, yet it wasn’t working on the server while working fine locally. What the Hoobastank could be happening? (Yes, I’m referencing the 1990s band and, no, they have nothing to do with the issue. I just like saying… errr… writing… their name.)

With search engines not being much help (mostly because the problem isn’t supposed to exist), I contacted my host thinking maybe some kind of server configuration was the issue. The very polite tech tried to help, checking for server errors and other simple misconfigurations, but there were no issues he could find. After reluctantly serving as my coding therapist and listening to my (tearless) bemoaning of ever starting a career in web development, he basically said they support JavaScript, but can’t really go into custom code, so best of luck. Well, thanks for nothing, person whom I will call Truckson! (That’s not his real name, but I thought “Carson” was too on the nose.)

Next, and still without tears, I tried to explain my troubles to ChatGPT with the initial prompt: “Why would JavaScript on two different web servers act differently?” It was actually kind of helpful with a bunch of answers that turned out to be very wrong.

  • Maybe there was an inline SVG vs SVG in an img issue? That wasn’t it.
  • Could the browser be interpreting the page as plain text instead of HTML through some misconfiguration? Nope, it was pulling down HTML, and the headers were correct.
  • Maybe the browser is in quirks mode? It wasn’t.
  • Could the SVG element be created incorrectly? You can’t create an SVG element in HTML using document.createElement('svg') because SVG actually has a different namespace. Instead, you have to use document.createElementNS("http://www.w3.org/2000/svg", 'svg'); because SVG and HTML use similar, but very different, standards. Nope, I’d used the createElementNS function and the correct namespace.

Sidenote: At several points during the chat session, ChatGPT started replies with, “Ah, now we’re getting spicy 🔥” as well as, “Ah, this is a juicy one. 🍇” (emojis included). It also used the word “bulletproof” a few times in what felt like a tech-bro kind of way. Plus there was a “BOOM. 💥 That’s the smoking gun right there”, as well as an “Ahhh okay, sounds like there’s still a small gremlin in the works.” I can’t decide whether I find this awesome, annoying, horrible, or scary. Maybe all four?

Next, desperate, I gave our current/future robot overlord some of my code to give it context and show it that none of these were the issue. It still harped on the misconfiguration and kept having me output things to check if the car element was an SVG element. Again, locally it was an SVG element, but on the server it came back that it wasn’t.

  • Maybe using innerHTML to add some SVG elements to the car element garbled the car element into not being an SVG element? ChatGPT volunteered to rewrite a portion of code to fix this. I put the new code into my system. It worked locally! Then I uploaded it to the server and… no dice. Same error was still happening.

I wept openly. I mean… I swallowed my emotions in a totally healthy and very manly way. And that’s the end of the article, no redemption, no solution, no answer. Just a broken website and the loud sobs of a man who doesn’t cry… ever…

…You still here?

Okay, you’re right. You know I wouldn’t leave you hanging like that. After the non-existent sob session, I complained to ChatGPT, and it again gave me some console logs, including having the car element print out its namespace. That’s when the answer came to me. You see, the namespace for an SVG is this:

http://www.w3.org/2000/svg

What it actually printed was this:

https://www.w3.org/2000/svg

One letter. That’s the difference.

Normally you want everything to be secure, but that’s not really how namespaces work. And while the difference between these two strings is minimal, I might as well have written document.createElementNS("Gimme-them-SVGzers", "svg");. Hey, W3C, can I be on the namespace committee?

But why was it different? You’d be really mad if you read this far and it was just a typo in my code. Right?

You’ve invested some time into this article, and I already did the fake-out of having no answer. So, having a code typo would probably lead to riots in the streets and hordes of bad reviews.

Don’t worry. The namespace was correct in my code, so where was that errant “s” coming from?

I remembered turning on a feature in my host’s optimization plugin: automatically fix insecure pages. It goes through and changes insecure links to secure ones. In 99% of cases, it’s the right choice. But apparently it also changes namespace URLs in JavaScript code.
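If turning the feature off isn’t an option, one hypothetical defense (my own workaround sketch, not something from any plugin’s docs) is to assemble the namespace string at runtime so a blanket rewriter can’t pattern-match the full URL in your bundled source:

```javascript
// Hypothetical guard: build the namespace from pieces so an optimizer
// that rewrites "http://" literals to "https://" can't match it.
const SVG_NS = 'http' + '://www.w3.org/2000/svg';

// The namespace must be byte-for-byte exact; with "https" the browser
// creates a generic element with no SVG interface (no .style, no
// .dataset), which is exactly the bug described above. In the browser:
//   const car = document.createElementNS(SVG_NS, 'svg');
console.log(SVG_NS); // "http://www.w3.org/2000/svg"
```

Whether a given optimizer can still catch the concatenated form depends on how aggressively it rewrites, so treat this as a mitigation, not a guarantee.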

I turned that feature off and suddenly I was traversing the galaxy, exploring planets with cars zooming down gray pseudo-elements, and firing lasers at really terrible websites instead of having a really terrible website. There were no tears (joyful or otherwise) nor were there celebratory and wildly embarrassing dance moves that followed.

Have a similar crazy troubleshooting issue? Have you solved an impossible problem? Let me know in the comments.

This Isn’t Supposed to Happen: Troubleshooting the Impossible originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Using Pages CMS for Static Site Content Management

CSS-Tricks - Mon, 05/12/2025 - 2:42am

Friends, I’ve been on the hunt for a decent content management system for static sites for… well, about as long as we’ve all been calling them “static sites,” honestly.

I know, I know: there are a ton of content management system options available, and while I’ve tested several, none have really been the one, y’know? Weird pricing models, difficult customization, some even end up becoming a whole ‘nother thing to manage.

Also, I really enjoy building with site generators such as Astro or Eleventy, but pitching Markdown as the means of managing content is less-than-ideal for many “non-techie” folks.

A few expectations for content management systems might include:

  • Easy to use: The most important feature, why you might opt to use a content management system in the first place.
  • Minimal Requirements: Look, I’m just trying to update some HTML, I don’t want to think too much about database tables.
  • Collaboration: CMS tools work best when multiple contributors work together, contributors who probably don’t know Markdown or what GitHub is.
  • Customizable: No website is the same, so we’ll need to be able to make custom fields for different types of content.

Not a terribly long list of demands, I’d say; fairly reasonable, even. That’s why I was happy to discover Pages CMS.

According to its own home page, Pages CMS is “The No-Hassle CMS for Static Site Generators,” and I can attest to that. Pages CMS has largely been developed by a single developer, Ronan Berder, but is open source and accepting pull requests over on GitHub.

Pages CMS takes a lot of the “good parts” found in other CMS tools and combines them with a single configuration file and a sleek user interface.

Pages CMS includes lots of options for customization: you can upload media, make editable files, and create entire collections of content. Content can also have all sorts of different fields, including completely custom ones; check the docs for the full list of supported types.

There isn’t really a “back end” to worry about, as content is stored as flat files inside your git repository. Pages CMS provides folks the ability to manage the content within the repo, without needing to actually know how to use Git, and I think that’s neat.

User authentication works two ways: contributors can log in using GitHub accounts, or they can be invited by email, where they’ll receive a password-less “magic link” login URL. This is nice, as GitHub accounts are less common outside the dev world. Shocking, I know.

Oh, and Pages CMS has a very cheap barrier for entry, as it’s free to use.

Pages CMS and Astro content collections

I’ve created a repository on GitHub with Astro and Pages CMS using Astro’s default blog starter, and made it available publicly, so feel free to clone and follow along.

I’ve been a fan of Astro for a while, and Pages CMS works well alongside Astro’s content collection feature. Content collections make globs of data easily available throughout Astro, so you can hydrate content inside Astro pages. These globs of data can come from different sources, such as third-party APIs, but are most commonly directories of Markdown files. Guess what Pages CMS is really good at? Managing directories of Markdown files!

Content collections are set up by a collections configuration file. Check out the src/content.config.ts file in the project, where we define a content collection named blog:

import { glob } from 'astro/loaders';
import { defineCollection, z } from 'astro:content';

const blog = defineCollection({
  // Load Markdown in the `src/content/blog/` directory.
  loader: glob({ base: './src/content/blog', pattern: '**/*.md' }),
  // Type-check frontmatter using a schema
  schema: z.object({
    title: z.string(),
    description: z.string(),
    // Transform string to Date object
    pubDate: z.coerce.date(),
    updatedDate: z.coerce.date().optional(),
    heroImage: z.string().optional(),
  }),
});

export const collections = { blog };

The blog content collection checks the /src/content/blog directory for files matching the **/*.md glob pattern, i.e. Markdown files. The schema property is optional; however, Astro provides helpful type-checking functionality with Zod, ensuring data saved by Pages CMS works as expected in your Astro site.

Pages CMS Configuration

Alright, now that Astro knows where to look for blog content, let’s take a look at the Pages CMS configuration file, .pages.config.yml:

content:
  - name: blog
    label: Blog
    path: src/content/blog
    filename: '{year}-{month}-{day}-{fields.title}.md'
    type: collection
    view:
      fields: [heroImage, title, pubDate]
    fields:
      - name: title
        label: Title
        type: string
      - name: description
        label: Description
        type: text
      - name: pubDate
        label: Publication Date
        type: date
        options:
          format: MM/dd/yyyy
      - name: updatedDate
        label: Last Updated Date
        type: date
        options:
          format: MM/dd/yyyy
      - name: heroImage
        label: Hero Image
        type: image
      - name: body
        label: Body
        type: rich-text
  - name: site-settings
    label: Site Settings
    path: src/config/site.json
    type: file
    fields:
      - name: title
        label: Website title
        type: string
      - name: description
        label: Website description
        type: string
        description: Will be used for any page with no description.
      - name: url
        label: Website URL
        type: string
        pattern: ^(https?:\/\/)?(www\.)?[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}(\/[^\s]*)?$
      - name: cover
        label: Preview image
        type: image
        description: Image used in the social preview on social networks (e.g. Facebook, Twitter...)

media:
  input: public/media
  output: /media

There is a lot going on in there, but inside the content section, let’s zoom in on the blog object.

- name: blog
  label: Blog
  path: src/content/blog
  filename: '{year}-{month}-{day}-{fields.title}.md'
  type: collection
  view:
    fields: [heroImage, title, pubDate]
  fields:
    - name: title
      label: Title
      type: string
    - name: description
      label: Description
      type: text
    - name: pubDate
      label: Publication Date
      type: date
      options:
        format: MM/dd/yyyy
    - name: updatedDate
      label: Last Updated Date
      type: date
      options:
        format: MM/dd/yyyy
    - name: heroImage
      label: Hero Image
      type: image
    - name: body
      label: Body
      type: rich-text

Using the path property, we can point Pages CMS to the directory where we want Markdown files saved, matching it up to the /src/content/blog/ location where Astro looks for content.

path: src/content/blog

For the filename, we can provide a pattern template for Pages CMS to use when saving a file to the content collection directory. In this case, it uses the date’s year, month, and day, as well as the blog item’s title, referencing the title field with fields.title. The filename can be customized in many different ways to fit your scenario.

filename: '{year}-{month}-{day}-{fields.title}.md'
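As a rough illustration (my own sketch, not Pages CMS’s actual implementation), the pattern expands something like this for a post titled “hello-world” saved on May 3, 2025:

```javascript
// Hypothetical expansion of a Pages CMS filename pattern.
function expandPattern(pattern, date, fields) {
  const pad = (n) => String(n).padStart(2, '0');
  return pattern
    .replace('{year}', date.getFullYear())
    .replace('{month}', pad(date.getMonth() + 1)) // getMonth() is 0-based
    .replace('{day}', pad(date.getDate()))
    .replace('{fields.title}', fields.title);
}

const name = expandPattern(
  '{year}-{month}-{day}-{fields.title}.md',
  new Date(2025, 4, 3), // May 3, 2025
  { title: 'hello-world' }
);
console.log(name); // "2025-05-03-hello-world.md"
```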

The type property tells Pages CMS that this is a collection of files, rather than a single editable file (we’ll get to that in a moment).

type: collection

In our Astro content collection configuration, we defined our blog collection with the expectation that each file will contain a few bits of metadata: title, description, pubDate, and a few more properties.

We can mirror those requirements in our Pages CMS blog collection as fields. Each field can be customized for the type of data you’re looking to collect. Here, I’ve matched these fields up with the default Markdown frontmatter found in the Astro blog starter.

fields:
  - name: title
    label: Title
    type: string
  - name: description
    label: Description
    type: text
  - name: pubDate
    label: Publication Date
    type: date
    options:
      format: MM/dd/yyyy
  - name: updatedDate
    label: Last Updated Date
    type: date
    options:
      format: MM/dd/yyyy
  - name: heroImage
    label: Hero Image
    type: image
  - name: body
    label: Body
    type: rich-text

Now, every time we create a new blog item in Pages CMS, we’ll be able to fill out each of these fields, matching the expected schema for Astro.

Aside from collections of content, Pages CMS also lets you manage editable files, which is useful for a variety of things: site wide variables, feature flags, or even editable navigations.

Take a look at the site-settings object, where we set the type to file and include the filename site.json in the path.

- name: site-settings
  label: Site Settings
  path: src/config/site.json
  type: file
  fields:
    - name: title
      label: Website title
      type: string
    - name: description
      label: Website description
      type: string
      description: Will be used for any page with no description.
    - name: url
      label: Website URL
      type: string
      pattern: ^(https?:\/\/)?(www\.)?[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}(\/[^\s]*)?$
    - name: cover
      label: Preview image
      type: image
      description: Image used in the social preview on social networks (e.g. Facebook, Twitter...)

The fields I’ve included are common site-wide settings, such as the site’s title, description, url, and cover image.

Speaking of images, we can tell Pages CMS where to store media such as images and video.

media:
  input: public/media
  output: /media

The input property explains where to store the files, in the /public/media directory within our project.

The output property is a helpful little feature that conveniently rewrites the file path for tools that require specific configuration. For example, Astro uses Vite under the hood, and Vite already knows about the public directory and complains if it’s included in file paths. Instead, we can set the output property so Pages CMS points image paths at the inner /media directory.
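In other words, the stored path and the referenced path differ only by the public prefix. A tiny sketch of the assumed mapping:

```javascript
// Sketch of the assumed input/output mapping: files live under
// public/media on disk, but content references them as /media/...
function toUrlPath(diskPath) {
  // Strip the leading "public" segment that Vite serves implicitly.
  return diskPath.replace(/^public/, '');
}

console.log(toUrlPath('public/media/blog-placeholder-1.jpg'));
// "/media/blog-placeholder-1.jpg"
```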

To see what I mean, check out the test post in the src/content/blog/ folder:

---
title: 'Test Post'
description: 'Here is a sample of some basic Markdown syntax that can be used when writing Markdown content in Astro.'
pubDate: 05/03/2025
heroImage: '/media/blog-placeholder-1.jpg'
---

The heroImage property now properly points to /media/... instead of /public/media/....

As far as configurations are concerned, Pages CMS can be as simple or as complex as necessary. You can add as many collections or editable files as needed, as well as customize the fields for each type of content. This gives you a lot of flexibility to create sites!

Connecting to Pages CMS

Now that we have our Astro site set up, and a .pages.config.yml file, we can connect our site to the Pages CMS online app. As the developer who controls the repository, browse to https://app.pagescms.org/ and sign in using your GitHub account.

You should be presented with some questions about permissions; you may need to choose between giving access to all repositories or specific ones. Personally, I chose to only give access to a single repository, which in this case is my astro-pages-cms-template repo.

After providing access to the repo, head on back to the Pages CMS application, where you’ll see your project listed under the “Open a Project” headline.

Clicking the open link will take you into the website’s dashboard, where we’ll be able to make updates to our site.

Creating content

Taking a look at our site’s dashboard, we’ll see a navigation on the left side, with some familiar things.

  • Blog is the collection we set up inside the .pages.config.yml file; this is where we can add new entries to the blog.
  • Site Settings is the editable file we are using to make changes to site-wide variables.
  • Media is where our images and other content will live.
  • Settings is a spot where we’ll be able to edit our .pages.config.yml file directly.
  • Collaborators allows us to invite other folks to contribute content to the site.

We can create a new blog post by clicking the Add Entry button in the top right.

Here we can fill out all the fields for our blog content, then hit the Save button.

After saving, Pages CMS will create the Markdown file, store the file in the proper directory, and automatically commit the changes to our repository. This is how Pages CMS helps us manage our content without needing to use git directly.

Automatically deploying

The only thing left to do is set up automated deployments through the service provider of your choice. Astro has integrations with providers like Netlify, Cloudflare Pages, and Vercel, but can be hosted anywhere you can run node applications.

Astro is typically very fast to build (thanks to Vite), so while site updates won’t be instant, they will still be fairly quick to deploy. If your site is set up to use Astro’s server-side rendering capabilities, rather than a completely static site, the changes might be much faster to deploy.

Wrapping up

Using a template as reference, we checked out how Astro content collections work alongside Pages CMS. We also learned how to connect our project repository to the Pages CMS app, and how to make content updates through the dashboard. Finally, if you are able, don’t forget to set up an automated deployment, so content publishes quickly.

Using Pages CMS for Static Site Content Management originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

UXPA: Using AI to Streamline Persona & Journey Map Creation

LukeW - Thu, 05/08/2025 - 2:00pm

In her Using AI to Streamline Personas and Journey Map Creation talk at UXPA Boston, Kyle Soucy shared how UX researchers can effectively use AI for personas and journey maps while maintaining research integrity. Here are my notes from her talk:

  • Proto-personas help teams align on assumptions before research. Calling them "assumptions-based personas" helps teams understand research is still needed
  • For proto-personas, use documented assumptions, anecdotal evidence, and market research
  • Research-based personas are based on actual ethnographic research and insights from transcripts, surveys, analytics, etc.
  • Decide on persona sections yourself - this is the researcher's job, not AI's. Every element should have a purpose and be relevant to understanding the user
  • Upload data to your Gen AI tool - most tools accept various file formats
  • Different AI tools have different security levels. Be aware of your organization's stance on data privacy
  • Use behavior prompts to get richer information about users, such as "When users encounter X, what do they typically do?"
  • For proto-personas: Ask AI to generate research questions to validate assumptions
  • For research-based personas: Request day-in-the-life narratives
  • Every element on a persona should have a purpose. If it's not helping your design team understand or empathize with users better, it doesn't belong
  • Researchers determine journey map elements (stages, information needed)
  • AI helps fill in the content based on research data
  • Include clear definitions of terms in your prompts (e.g., "jobs to be done")
  • Ask AI to label assumptions when data is incomplete to identify research gaps
  • Don't rely on AI for generating opportunities - this requires team effort
  • AI is a tool for efficiency, not a replacement for UX researchers. The only way to keep AI from taking your job is to use it to do your job better
  • Garbage in, garbage out - biases in your data will be amplified
  • AI tools hallucinate information - know your data well enough to spot inaccuracies
  • Don't use AI for generating opportunities or solutions - this requires team expertise

UXPA: Designing Humane Experiences

LukeW - Thu, 05/08/2025 - 2:00pm

In his Designing Humane Experiences: 5 Lessons from History's Greatest Innovation talk at UXPA Boston, Darrell Penta explored how the Korean alphabet (Hangul), created by King Sejong 600 years ago, exemplifies humane, user-centered design principles that remain relevant today. Here are my notes from his talk:

  • Humane design shows compassion, kindness, and a concern for the suffering or well-being of others, even when such behavior is neither required nor expected
  • When we approach design with compassion and concern for others' well-being, we unlock our ability to create innovative experiences
  • In 15th century Korea (and most historical societies), literacy was restricted to elites
  • Learning to read and write Chinese characters (used in Korea at that time) took years of dedicated study, something common people couldn't afford
  • King Sejong created an entirely new alphabet rather than adapting an existing one. There have been only four instances in history where writing systems were invented independently; most are adaptations of existing systems
Korean Alphabet Innovations
  • Letters use basic geometric forms (lines, circles, squares) making them visually distinct and easier to learn
  • Consonants and vowels have clearly different visual treatments, unlike in English where nothing in the letter shapes indicates their class
  • The shapes of consonants reflect how the mouth forms those sounds: the shape of closed lips, the tongue position behind teeth, etc.
  • Sound features are mapped to visual features in a consistent way: base shapes represent basic sounds, and additional strokes represent additional sound features
  • Letters are arranged in syllable blocks, making the syllable count visible
  • Alphabet was designed for the technology of the time (brush and ink)
  • Provided comprehensive documentation explaining the system
  • Created with flexibility to be written in multiple directions (horizontally or vertically)
  • 5 Lessons for Designers
    1. Be Principled and Predictable: Develop clear, consistent design principles and apply them systematically
    2. Prioritize Information Architecture: Don't treat it as an afterthought
    3. Embrace Constraints: View limitations as opportunities for innovation
    4. Design with Compassion: Consider the broader social impact of your design
    5. Empower Users: Create solutions that provide access and opportunity

UXPA: Bridging AI and Human Expertise

LukeW - Thu, 05/08/2025 - 2:00pm

In his presentation Bridging AI and Human Expertise at UXPA Boston 2025, Stewart Smith shared insights on designing expert systems that effectively bridge artificial intelligence and human expertise. Here are my notes from his talk:

  • Expert systems simulate human expert decision-making to solve complex problems like GPS routing and supply chain planning
  • Key components include knowledge base, inference engine, user interface, explanation facility, and knowledge acquisition
  • Traditional systems were rule-based, but AI is transforming them with machine learning for pattern recognition
  • The explanation facility justifies conclusions by answering "why" and "how" questions
  • Trust is the cornerstone of system adoption. If people don't trust your system, they won't use it
  • Explainability must be designed into the system from the beginning to trace key decisions
  • The "black box problem" occurs when you know inputs and outputs but can't see inner workings
  • High-stakes domains like finance or healthcare require greater explainability
  • Aim for balance between under-reliance (missed opportunities) and over-reliance (atrophied skills) on AI
  • Over-reliance creates false security when users habitually approve system recommendations
  • Human experts remain essential for catching bad data feeds or biased data
  • Present AI as augmentation to decision-making, not replacement
  • Provide confidence scores or indicators of the system's certainty level
  • Ensure users can adjust and override AI recommendations where necessary
  • Present AI insights within existing workflows that match expert mental models
  • Clearly differentiate between human and AI-generated insights
  • Training significantly increases AI literacy—people who haven't used AI often underestimate it
  • Highlight success stories and provide social proof of AI's benefits
  • Focus on automating routine decisions to give people more time for complex tasks
  • Trust is the foundation of AI adoption.
  • Explainability is a spectrum and must be balanced with performance.
  • UX plays a critical role in bridging AI capabilities and human expertise.

Orbital Mechanics (or How I Optimized a CSS Keyframes Animation)

CSS-Tricks - Thu, 05/08/2025 - 2:33am

I recently updated my portfolio at johnrhea.com. (If you’re looking to add a CSS or front-end engineer with storytelling and animation skills to your team, I’m your guy.) I liked the look of a series of planets I’d created for another personal project and decided to reuse them on my new site. Part of that was also reusing an animation I’d built circa 2019, where a moon orbited around the planet.

Initially, I just plopped the animations into the new site, only changing the units (em units to viewport units using some complicated math that I was very, very proud of) so that they would scale properly because I’m… efficient with my time. However, on mobile, the planet would move up a few pixels and down a few pixels as the moons orbited around it. I suspected the plopped-in animation was the culprit (it wasn’t, but at least I got some optimized animation and an article out of the deal).

Here’s the original animation:

CodePen Embed Fallback

My initial animation for the moon ran for 60 seconds. I’m folding it inside a disclosure widget because, at 141 lines, it’s stupid long (and, as we’ll see, emphasis on the stupid). Here it is in all its “glory”:

Open code

#moon1 {
  animation: moon-one 60s infinite;
}

@keyframes moon-one {
  0% {
    transform: translate(0, 0) scale(1);
    z-index: 2;
    animation-timing-function: ease-in;
  }
  5% {
    transform: translate(-3.51217391vw, 3.50608696vw) scale(1.5);
    z-index: 2;
    animation-timing-function: ease-out;
  }
  9.9% {
    z-index: 2;
  }
  10% {
    transform: translate(-5.01043478vw, 6.511304348vw) scale(1);
    z-index: -1;
    animation-timing-function: ease-in;
  }
  15% {
    transform: translate(1.003478261vw, 2.50608696vw) scale(0.25);
    z-index: -1;
    animation-timing-function: ease-out;
  }
  19.9% {
    z-index: -1;
  }
  20% {
    transform: translate(0, 0) scale(1);
    z-index: 2;
    animation-timing-function: ease-in;
  }
  25% {
    transform: translate(-3.51217391vw, 3.50608696vw) scale(1.5);
    z-index: 2;
    animation-timing-function: ease-out;
  }
  29.9% {
    z-index: 2;
  }
  30% {
    transform: translate(-5.01043478vw, 6.511304348vw) scale(1);
    z-index: -1;
    animation-timing-function: ease-in;
  }
  35% {
    transform: translate(1.003478261vw, 2.50608696vw) scale(0.25);
    z-index: -1;
    animation-timing-function: ease-out;
  }
  39.9% {
    z-index: -1;
  }
  40% {
    transform: translate(0, 0) scale(1);
    z-index: 2;
    animation-timing-function: ease-in;
  }
  45% {
    transform: translate(-3.51217391vw, 3.50608696vw) scale(1.5);
    z-index: 2;
    animation-timing-function: ease-out;
  }
  49.9% {
    z-index: 2;
  }
  50% {
    transform: translate(-5.01043478vw, 6.511304348vw) scale(1);
    z-index: -1;
    animation-timing-function: ease-in;
  }
  55% {
    transform: translate(1.003478261vw, 2.50608696vw) scale(0.25);
    z-index: -1;
    animation-timing-function: ease-out;
  }
  59.9% {
    z-index: -1;
  }
  60% {
    transform: translate(0, 0) scale(1);
    z-index: 2;
    animation-timing-function: ease-in;
  }
  65% {
    transform: translate(-3.51217391vw, 3.50608696vw) scale(1.5);
    z-index: 2;
    animation-timing-function: ease-out;
  }
  69.9% {
    z-index: 2;
  }
  70% {
    transform: translate(-5.01043478vw, 6.511304348vw) scale(1);
    z-index: -1;
    animation-timing-function: ease-in;
  }
  75% {
    transform: translate(1.003478261vw, 2.50608696vw) scale(0.25);
    z-index: -1;
    animation-timing-function: ease-out;
  }
  79.9% {
    z-index: -1;
  }
  80% {
    transform: translate(0, 0) scale(1);
    z-index: 2;
    animation-timing-function: ease-in;
  }
  85% {
    transform: translate(-3.51217391vw, 3.50608696vw) scale(1.5);
    z-index: 2;
    animation-timing-function: ease-out;
  }
  89.9% {
    z-index: 2;
  }
  90% {
    transform: translate(-5.01043478vw, 6.511304348vw) scale(1);
    z-index: -1;
    animation-timing-function: ease-in;
  }
  95% {
    transform: translate(1.003478261vw, 2.50608696vw) scale(0.25);
    z-index: -1;
    animation-timing-function: ease-out;
  }
  99.9% {
    z-index: -1;
  }
  100% {
    transform: translate(0, 0) scale(1);
    z-index: 2;
    animation-timing-function: ease-in;
  }
}

If you look at the keyframes in that code, you’ll notice that the 0% to 20% keyframes are exactly the same as 20% to 40% and so on up through 100%. Why I decided to repeat the keyframes five times infinitely instead of just repeating one set infinitely is a decision lost to antiquity, like six years ago in web time. We can also drop the duration to 12 seconds (one-fifth of sixty) if we’re doing our due diligence.

I could thus delete everything from 20% on, instantly dropping the code down to 36 lines. And yes, I realize gains like this are unlikely to be possible on most sites, but this is the first step for optimizing things.

#moon1 {
  animation: moon-one 12s infinite;
}

@keyframes moon-one {
  0% { transform: translate(0, 0) scale(1); z-index: 2; animation-timing-function: ease-in; }
  5% { transform: translate(-3.51217391vw, 3.50608696vw) scale(1.5); z-index: 2; animation-timing-function: ease-out; }
  9.9% { z-index: 2; }
  10% { transform: translate(-5.01043478vw, 6.511304348vw) scale(1); z-index: -1; animation-timing-function: ease-in; }
  15% { transform: translate(1.003478261vw, 2.50608696vw) scale(0.25); z-index: -1; animation-timing-function: ease-out; }
  19.9% { z-index: -1; }
  20% { transform: translate(0, 0) scale(1); z-index: 2; animation-timing-function: ease-in; }
}

Now that we’ve gotten rid of 80% of the overwhelming bits, we can see that there are five main keyframes and two additional ones that set the z-index close to the middle and end of the animation (these prevent the moon from dropping behind the planet or popping out from behind the planet too early). We can change these five points from 0%, 5%, 10%, 15%, and 20% to 0%, 25%, 50%, 75%, and 100% (and since the 0% and the former 20% are the same, we can remove that one, too). Also, since the 10% keyframe above is switching to 50%, the 9.9% keyframe can move to 49.9%, and the 19.9% keyframe can switch to 99.9%, giving us this:

#moon1 {
  animation: moon-one 12s infinite;
}

@keyframes moon-one {
  0%, 100% { transform: translate(0, 0) scale(1); z-index: 2; animation-timing-function: ease-in; }
  25% { transform: translate(-3.51217391vw, 3.50608696vw) scale(1.5); z-index: 2; animation-timing-function: ease-out; }
  49.9% { z-index: 2; }
  50% { transform: translate(-5.01043478vw, 6.511304348vw) scale(1); z-index: -1; animation-timing-function: ease-in; }
  75% { transform: translate(1.003478261vw, 2.50608696vw) scale(0.25); z-index: -1; animation-timing-function: ease-out; }
  99.9% { z-index: -1; }
}

Though I was very proud of myself for my math wrangling, numbers like -3.51217391vw are really, really unnecessary. If a screen was one thousand pixels wide, -3.51217391vw would be 35.1217391 pixels. No one ever needs to go down to the precision of a ten-millionth of a pixel. So, let’s round everything to the tenth place (and if it’s a 0, we’ll just drop it). We can also skip z-index in the 75% and 25% keyframes since it doesn’t change.

Here’s where that gets us in the code:

#moon1 {
  animation: moon-one 12s infinite;
}

@keyframes moon-one {
  0%, 100% { transform: translate(0, 0) scale(1); z-index: 2; animation-timing-function: ease-in; }
  25% { transform: translate(-3.5vw, 3.5vw) scale(1.5); animation-timing-function: ease-out; }
  49.9% { z-index: 2; }
  50% { transform: translate(-5vw, 6.5vw) scale(1); z-index: -1; animation-timing-function: ease-in; }
  75% { transform: translate(1vw, 2.5vw) scale(0.25); animation-timing-function: ease-out; }
  99.9% { z-index: -1; }
}

After all our changes, the animation still looks pretty close to what it was before, only way less code:

CodePen Embed Fallback

One of the things I don’t like about this animation is that the moon kind of turns at its zenith when it crosses the planet. It would be much better if it traveled in a straight line from the upper right to the lower left. However, we also need it to get a little larger, as if the moon is coming closer to us in its orbit. Because both translation and scaling were done in the transform property, I can’t translate and scale the moon independently.

If we skip either one in the transform property, it resets the one we skipped, so I’m forced to guess where the mid-point should be so that I can set the scale I need. One way I’ve solved this in the past is to add a wrapping element, then apply scale to one element and translate to the other. However, now that we have individual scale and translate properties, a better way is to separate them from the transform property and use them as separate properties. Separating out the translation and scaling shouldn’t change anything, unless the original order they were declared on the transform property was different than the order of the singular properties.

#moon1 {
  animation: moon-one 12s infinite;
}

@keyframes moon-one {
  0%, 100% { translate: 0 0; scale: 1; z-index: 2; animation-timing-function: ease-in; }
  25% { translate: -3.5vw 3.5vw; scale: 1.5; animation-timing-function: ease-out; }
  49.9% { z-index: 2; }
  50% { translate: -5vw 6.5vw; scale: 1; z-index: -1; animation-timing-function: ease-in; }
  75% { translate: 1vw 2.5vw; scale: 0.25; animation-timing-function: ease-out; }
  99.9% { z-index: -1; }
}

Now that we can separate the scale and translate properties and use them independently, we can drop the translate property in the 25% and 75% keyframes because we don’t want them placed precisely in that keyframe. We want the browser’s interpolation to take care of that for us so that it translates smoothly while scaling.

#moon1 {
  animation: moon-one 12s infinite;
}

@keyframes moon-one {
  0%, 100% { translate: 0 0; scale: 1; z-index: 2; animation-timing-function: ease-in; }
  25% { scale: 1.5; animation-timing-function: ease-out; }
  49.9% { z-index: 2; }
  50% { translate: -5vw 6.5vw; scale: 1; z-index: -1; animation-timing-function: ease-in; }
  75% { scale: 0.25; animation-timing-function: ease-out; }
  99.9% { z-index: -1; }
}

CodePen Embed Fallback

Lastly, those different timing functions don’t make a lot of sense anymore because we’ve got the browser working for us, and if we use an ease-in-out timing function on everything, then it should do exactly what we want.

#moon1 {
  animation: moon-one 12s infinite ease-in-out;
}

@keyframes moon-one {
  0%, 100% { translate: 0 0; scale: 1; z-index: 2; }
  25% { scale: 1.5; }
  49.9% { z-index: 2; }
  50% { translate: -5vw 6.5vw; scale: 1; z-index: -1; }
  75% { scale: 0.25; }
  99.9% { z-index: -1; }
}

CodePen Embed Fallback

And there you go: 141 lines down to 28, and I think the animation looks even better than before. It will certainly be easier to maintain, that’s for sure.

But what do you think? Was there an optimization step I missed? Let me know in the comments.

Orbital Mechanics (or How I Optimized a CSS Keyframes Animation) originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Why is Nobody Using the hwb() Color Function?

Css Tricks - Wed, 05/07/2025 - 2:25am

Okay, nobody is an exaggeration, but have you seen the stats for hwb()? They show a steep decline, and after working a lot on color in the CSS-Tricks almanac, I’ve just been wondering why that is.

hwb() is a color function in the sRGB color space, which is the same color space used by rgb(), hsl(), and the older hexadecimal color format (e.g. #f8a100). hwb() is supposed to be more intuitive and easier to work with than hsl(). I kinda get why it’s considered “easier” since you specify how much black or white you want to add to a given color. But, how is hwb() more intuitive than hsl()?

hwb() accepts three values, and similar to hsl(), the first value specifies the color’s hue (between 0deg–360deg), while the second and third values add whiteness (0 – 100) and blackness (0 – 100) to the mix, respectively.

According to Google, the term “intuitive” means “what one feels to be true even without conscious reasoning; instinctive.” By that definition, hwb() does seem more intuitive than hsl(), but only slightly so, as we’ll see.

Let’s consider an example with a color. We’ll declare light orange in both hsl() and hwb():

/* light orange in hsl() */
.element-1 {
  color: hsl(30deg 100% 75%);
}

/* light orange in hwb() */
.element-2 {
  color: hwb(30deg 50% 0%);
}

These two functions produce the exact same color, but while hwb() spreads lightness across two arguments (whiteness and blackness), hsl() handles it with just one, leaving another argument to control the saturation. By comparison, hwb() provides no clear, intuitive way to set just the saturation. I’d argue that makes the hwb() function less intuitive than hsl().
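To illustrate, here’s a sketch (the hwb() values are my own conversions, not from the article) showing how a simple saturation tweak is one argument in hsl() but requires moving both the whiteness and blackness in hwb():

```css
/* Desaturating in hsl(): change one argument */
.swatch-1 { color: hsl(30deg 100% 50%); } /* fully saturated orange */
.swatch-2 { color: hsl(30deg 50% 50%); }  /* half the saturation */

/* The same two colors in hwb(): desaturating means
   raising whiteness *and* blackness together */
.swatch-1 { color: hwb(30deg 0% 0%); }
.swatch-2 { color: hwb(30deg 25% 25%); }
```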

I think another reason that hsl() is generally more intuitive than hwb() is that HSL as a color model was created in the 1970s while HWB as a color model was created in 1996. We’ve had much more time to get acquainted with hsl() than we have hwb(). hsl() was implemented by browsers as far back as 2008, Safari being the first and other browsers following suit. Meanwhile, hwb() gained support as recently as 2021! That’s more than a 10-year gap between functions when it comes to using them and being familiar with them.

There’s also the fact that other color functions that are used to represent colors in other color spaces — such as lab(), lch(), oklab(), and oklch() — offer more advantages, such as access to more colors in the color gamut and perceptual uniformity. So, maybe being intuitive is coming at the expense of having a more robust feature set, which could explain why you might go with a less intuitive function that doesn’t use sRGB.

Look, I can get behind the idea of controlling how white or black you want a color to look based on personal preferences, and for designers, it’s maybe easier to mix colors that way. But I honestly would not opt for this as my go-to color function in the sRGB color space because hsl() does something similar using the same hue, but with saturation and lightness as the parameters, which is far more intuitive than what hwb() offers.

I see our web friend, Stefan Judis, preferring hsl() over hwb() in his article on hwb().

Lea Verou even brought up the idea of removing hwb() from the spec in 2022, but a decision was made to leave it as it was since browsers were already implementing the function. And although I was initially pained by the idea of keeping hwb() around, I also quite understand the feeling of working on something and then seeing it thrown in the bin. Once we’ve introduced something, it’s always tough to walk it back, especially when it comes to maintaining backwards compatibility, which is a core tenet of the web.

I would like to say something though: lab(), lch(), oklab(), oklch() are already here and are better color functions than hwb(). I, for one, would encourage using them over hwb() because they support so many more colors that are simply missing from the hsl() and hwb() functions.

I’ve been exploring colors for quite some time now, so any input would be extremely helpful. What color functions are you using in your everyday website or web application, and why?


Why is Nobody Using the hwb() Color Function? originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

GSAP is Now Completely Free, Even for Commercial Use!

Css Tricks - Tue, 05/06/2025 - 4:14am

Back in October, the folks behind the GreenSock Animation Platform (GSAP) joined forces with Webflow, the visual website builder. Now, the team’s back with another announcement: Along with the version 3.13 release, GSAP, and all its awesome plugins, are now freely available to everyone.

Thanks to Webflow, GSAP is now 100% free, including all of the bonus plugins like SplitText, MorphSVG, and all the others that were exclusively available to Club GSAP members. That’s right, the entire GSAP toolset is free, even for commercial use! 🤯

Webflow is celebrating over on their blog as well:

With Webflow’s support, the GSAP team can continue to lead the charge in product and industry innovation while allowing even more developers the opportunity to harness the full breadth of GSAP-powered motion.

Check out the GSAP blog to read more about the announcement, then go animate something awesome and share it with us!

GSAP is Now Completely Free, Even for Commercial Use! originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Modern Scroll Shadows Using Scroll-Driven Animations

Css Tricks - Mon, 05/05/2025 - 3:01am

Using scroll shadows, especially for mobile devices, is a subtle bit of UX that Chris has covered before (indeed, it’s one of his all-time favorite CSS tricks). By layering background gradients with different attachments, we can get shadows that are covered up when you’ve scrolled to the limits of the element.

Geoff covered a newer approach that uses the animation-timeline property. Using animation-timeline, we can tie CSS animation to the scroll position. His example uses pseudo-elements to render the scroll shadows, and animation-range to animate the opacity of the pseudo-elements based on scroll.

Here’s yet another way. Instead of using shadows, let’s use a CSS mask to fade out the edges of the scrollable element. This is a slightly different visual metaphor that works great for horizontally scrollable elements — places where your scrollable element doesn’t have a distinct border of its own. This approach still uses animation-timeline, but we’ll use custom properties instead of pseudo-elements. Since we’re fading, the effect also works regardless of whether we’re on a dark or light background.

Getting started with a scrollable element

First, we’ll define our scrollable element with a mask that fades out the start and end of the container. For this example, let’s consider the infamous table that can’t be responsive and has to be horizontally scrollable on mobile.

Let’s add the mask. We can use the shorthand and define the mask as a linear gradient that fades out on either end. A mask lets the table fade into the background instead of overlaying a shadow, but you could use the same technique for shadows.

CodePen Embed Fallback

.scrollable {
  mask: linear-gradient(to right, #0000, #ffff 3rem calc(100% - 3rem), #0000);
}

Defining the custom properties and animation

Next, we need to define our custom properties and the animation. We’ll define two separate properties, --left-fade and --right-fade, using @property. Using @property is necessary here to specify the syntax of the properties so that browsers can animate the property’s values.

@property --left-fade {
  syntax: "<length>";
  inherits: false;
  initial-value: 0;
}

@property --right-fade {
  syntax: "<length>";
  inherits: false;
  initial-value: 0;
}

@keyframes scrollfade {
  0% { --left-fade: 0; }
  10%, 100% { --left-fade: 3rem; }
  0%, 90% { --right-fade: 3rem; }
  100% { --right-fade: 0; }
}

Instead of using multiple animations or animation-range, we can define a single animation where --left-fade animates from 0 to 3rem between 0-10%, and --right-fade animates from 3rem to 0 between 90-100%. Now we update our mask to use our custom properties and tie the scroll-timeline of our element to its own animation-timeline.

Putting it all together

Putting it all together, we have the effect we’re after:

CodePen Embed Fallback
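Pieced together from the snippets above, the combined rules might look something like this. Treat it as a sketch rather than the demo’s exact code; the 3rem fade distance and the scrollfade keyframes carry over from the earlier steps:

```css
.scrollable {
  overflow-x: auto;
  /* the fade distances are now driven by the animated custom properties */
  mask: linear-gradient(
    to right,
    #0000,
    #ffff var(--left-fade) calc(100% - var(--right-fade)),
    #0000
  );
  animation: scrollfade linear;
  /* tie the animation's progress to this element's own horizontal scroll */
  animation-timeline: scroll(self x);
}
```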

We’re still waiting for some browsers (Safari) to support animation-timeline, but this gracefully degrades to simply not fading the element at all.

Wrapping up

I like this implementation because it combines two newer bits of CSS — animating custom properties and animation-timeline — to achieve a practical effect that’s more than just decoration. The technique can even be used with scroll-snap-based carousels or cards:

CodePen Embed Fallback

It works regardless of content or background and doesn’t require JavaScript. It exemplifies just how far CSS has come lately.

Modern Scroll Shadows Using Scroll-Driven Animations originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Make the AI Models do the Prompting

LukeW - Sun, 05/04/2025 - 2:00pm

Despite all the mind-blowing advances in AI models over the past few years, they still face a massive obstacle to achieving their potential: people don't know what AI can do nor how to guide it. One of the ways we've been addressing this is by having LLMs rewrite people's prompts.

Prompt Writing & Editing

The preview release of Reve's (our AI for creative tooling company) text to image model helps people get better image generation results by re-writing their prompts in several ways.

Reve's enhance feature (on by default) takes someone's image prompt and re-writes it in a way that not only optimizes for a better result but also teaches people about the image model's capabilities. Reve is especially strong at adhering to very detailed prompts, but many people's initial instructions are short and vague. To get to a better result, the enhance feature drafts a much more comprehensive prompt, which both makes Reve's strengths clear and teaches people how to get the most out of the model.

The enhance feature also harmonizes prompts when someone makes changes. For instance, if the prompt includes several mentions of the main subject, like a horse, and you change one of them to a cow, the enhance feature will make sure to harmonize all the "horse" mentions to "cow" for you.

But aren't these long prompts too complicated for most people to edit? This is why the default mode in Reve is instruct and prompt editing is one click away. Through natural language instructions, people can edit any image they create without having to dig through a wall of prompt text.

Even better, though, is starting an image generation with an image. In this approach you simply upload an image and Reve writes a comprehensive prompt for it. From there you can either use the instruct mode to make changes or dive into the full prompt to make edits.

Plan Creation & Tool Use

As if it wasn't hard enough to prompt an AI model to do what you want, things get even harder with agentic interfaces. When AI models can make use of tools to get things done in addition to using their own built-in capabilities, people now have to know not only what AI models can do but what the tools they have access to can do as well.

In response to an instruction in Bench (our AI for knowledge work company), the system uses an AI model to plan an appropriate set of actions in response. This plan includes not only the tools (search, browse, fact check, create PowerPoint, etc.) that make the most sense to complete the task but also their settings. Since people don't know what tools Bench can use nor what parameters the tools accept, once again an AI model rewrites people's prompts for them into something much more effective.

For instance, when using the search tool, Bench will not only decide on and execute the most relevant search queries but also set parameters like date range or site-specific constraints. In most cases, people don't need to worry about these parameters. In fact, we put them all behind a little settings icon so people can focus on the results of their task and let Bench do the thinking. But in cases where people want to make modifications to the choices Bench made, they can.

Behind the scenes in Bench, the system not only re-writes people's instructions to pick and make effective use of tools but it also decides which AI models to call and when. How much of that should be exposed to people so they can both modify it if needed and understand how things work has been a topic of debate. There's clearly a tradeoff with doing everything for people automatically and giving them more explicit (but more complicated) controls.

At a high level, though, AI models are much better at writing prompts for AI models than most people are. So the approach we've continued to take is letting the AI models rewrite and optimize people's initial prompts for the best possible outcome.

CSS shape() Commands

Css Tricks - Fri, 05/02/2025 - 2:36am

The CSS shape() function recently gained support in both Chromium and WebKit browsers. It’s a way of drawing complex shapes when clipping elements with the clip-path property. We’ve had the ability to draw basic shapes for years — think circle(), ellipse(), and polygon() — but no “easy” way to draw more complex shapes.

Well, that’s not entirely true. It’s true there was no “easy” way to draw shapes, but we’ve had the path() function for some time, which we can use to draw shapes using SVG commands directly in the function’s arguments. This is an example of an SVG path pulled straight from WebKit’s blog post linked above:

<svg viewBox="0 0 150 100" xmlns="http://www.w3.org/2000/svg">
  <path fill="black" d="M0 0 L 100 0 L 150 50 L 100 100 L 0 100 Q 50 50 0 0 z" />
</svg>

Which means we can yank those <path> coordinates and drop them into the path() function in CSS when clipping a shape out of an element:

.clipped {
  clip-path: path("M0 0 L 100 0 L 150 50 L 100 100 L 0 100 Q 50 50 0 0 z");
}

I totally understand what all of those letters and numbers are doing. Just kidding, I’d have to read up on that somewhere, like Myriam Frisano’s more recent “Useful Recipes For Writing Vectors By Hand” article. There’s a steep learning curve to all that, and not everyone — including me — is going down that nerdy, albeit interesting, road. Writing SVG by hand is a niche specialty, not something you’d expect the average front-ender to know. I doubt I’m alone in saying I’d rather draw those vectors in something like Figma first, export the SVG code, and copy-paste the resulting paths where I need them.

The shape() function is designed to be more, let’s say, CSS-y. We get new commands that tell the browser where to draw lines, arcs, and curves, just like path(), but we get to use plain English and native CSS units rather than unreadable letters and coordinates. That opens us up to even using CSS calc()-ulations in our drawings!

Here’s a fairly simple drawing I made from a couple of elements. You’ll want to view the demo in either Chrome 135+ or Safari 18.4+ to see what’s up.

CodePen Embed Fallback

So, instead of all those wonky coordinates we saw in path(), we get new terminology. This post is really me trying to wrap my head around what those new terms are and how they’re used.

In short, you start by telling shape() where the starting point should be when drawing. For example, we can say “from top left” using directional keywords to set the origin at the top-left corner of the element. We can also use CSS units to set that position, so “from 0 0” works as well. Once we establish that starting point, we get a set of commands we can use for drawing lines, arcs, and curves.

I figured a table would help.

line: A line that is drawn using a coordinate pair.
  • The by keyword sets a coordinate pair used to determine the length of the line.
  Example: line by -2px 3px

vline: A vertical line.
  • The to keyword indicates where the line should end, based on the current starting point.
  • The by keyword sets a coordinate pair used to determine the length of the line.
  Example: vline to 50px

hline: A horizontal line.
  • The to keyword indicates where the line should end, based on the current starting point.
  • The by keyword sets a coordinate pair used to determine the length of the line.
  Example: hline to 95%

arc: An arc (oh, really?!). An elliptical one, that is, sort of like the rounded edges of a heart shape.
  • The to keyword indicates where the arc should end.
  • The with keyword sets a pair of coordinates that tells the arc how far right and down it should slope.
  • The of keyword specifies the size of the ellipse that the arc is taken from. The first value provides the horizontal radius of the ellipse, and the second provides the vertical radius. I’m a little unclear on this one, even after playing with it.
  Example: arc to 10% 50% of 1%

curve: A curved line.
  • The to keyword indicates where the curved line should end.
  • The with keyword sets “control points” that affect the shape of the curve, making it deep or shallow.
  Example: curve to 0% 100% with 50% 0%

smooth: Adds a smooth Bézier curve command to the list of path data commands.
  • The to keyword indicates where the curve should end.
  • The by keyword sets a coordinate pair used to determine the length of the curve.
  • The with keyword specifies control points for the curve.
  Example: smooth by 50% 50% with 50% 5%

The spec is dense, as you might expect with a lot of moving pieces like this. Again, these are just my notes, but let me know if there’s additional nuance you think would be handy to include in the table.
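To see a few of those commands together, here’s a quick, hypothetical clip-path, not taken from the demo; the coordinates are made up purely to show the syntax:

```css
/* A sketch: clip a shape with a rounded top-left and bottom-right corner
   using commands from the table above. */
.clipped {
  clip-path: shape(
    from 0% 20%,
    curve to 20% 0% with 0% 0%, /* rounded top-left corner */
    hline to 100%,              /* straight across the top */
    vline to 80%,               /* down the right edge */
    arc to 80% 100% of 20%,     /* rounded bottom-right corner */
    hline to 0%,                /* back along the bottom */
    close
  );
}
```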

Oh, another fun thing: you can adjust the shape() on hover/focus. The only thing is that I was unable to transition or animate it, at least in the current implementation.

CodePen Embed Fallback

CSS shape() Commands originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

State of Devs: A Survey for Every Developer

Css Tricks - Thu, 05/01/2025 - 2:34am

I don’t know if I should say this on a website devoted to programming, but I sometimes feel like *lowers voice* coding is actually the least interesting part of our lives.

After all, last time I got excited meeting someone at a conference it was because we were both into bouldering, not because we both use React. And The Social Network won an Oscar for the way it displayed interpersonal drama, not for its depiction of Mark Zuckerberg’s PHP code. 

Yet for the past couple years, I’ve been running developer surveys (such as the State of JS and State of CSS) that only ask about code. It was time to fix that. 

A new kind of survey

The State of Devs survey is now open to participation, and unlike previous surveys it covers everything except code: career, workplace, but also health, hobbies, and more. 

I’m hoping to answer questions such as:

  • What are developers’ favorite recent movies and video games?
  • What kind of physical activity do developers practice?
  • How much sleep are we all getting?

But also address more serious topics, including:

  • What do developers like about their workplace?
  • What factors lead to workplace discrimination?
  • What global issues are developers most concerned with?
Reaching out to new audiences

Another benefit from branching out into new topics is the chance to reach out to new audiences.

It’s no secret that people who don’t fit the mold of the average developer (whether because of their gender, race, age, disabilities, or a myriad of other factors) often have a harder time getting involved in the community, and this also shows up in our data. 

In the past, we’ve tried various outreach strategies to help address these imbalances in survey participation, but the results haven’t always been as effective as we’d hoped. 

So this time, I thought I’d try something different and have the survey itself include more questions relevant to under-represented groups, asking about workplace discrimination:

As well as actions taken in response to said discrimination:

Yet while obtaining a more representative data sample as a result of this new focus would be ideal, it isn’t the only benefit. 

The most vulnerable among us are often the proverbial canaries in the coal mine, suffering first from issues or policies that will eventually affect the rest of the community as well, if left unchecked. 

So, facing these issues head-on is especially valuable now, at a time when “DEI” is becoming a new taboo, and a lot of the important work that has been done to make things slightly better over the past decade is at risk of being reversed.

The big questions

Finally, the survey also tries to go beyond work and daily life to address the broader questions that keep us up at night:

There’s been talk in recent years about keeping the workplace free of politics. And while I can certainly see the appeal in that, in 2025 it feels harder than ever to achieve that ideal. At a time when people are losing rights and governments are sliding towards authoritarianism, should we still pretend that everything is fine? Especially when you factor in the fact that the tech community is now a major political player in its own right…

So while I didn’t push too far in that direction for this first edition of the survey, one of my goals for the future is to get a better grasp of where exactly developers stand in terms of ideology and worldview. Is this a good idea, or should I keep my distance from any hot-button issues? Don’t hesitate to let me know what you think, or suggest any other topic I should be asking about next time. 

In the meantime, go take the survey, and help us get a better picture of who exactly we all are!

State of Devs: A Survey for Every Developer originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Revisiting Image Maps

Css Tricks - Wed, 04/30/2025 - 2:12am

I mentioned last time that I’ve been working on a new website for Emmy-award-winning game composer Mike Worth. He hired me to create a highly graphical design that showcases his work.

Mike loves ’90s animation, particularly Disney’s Duck Tales and other animated series. He challenged me to find a way to incorporate their retro ’90s style into his design without making it a pastiche. But that wasn’t my only challenge. I also needed to achieve that ’90s feel by using up-to-the-minute code to maintain accessibility, performance, responsiveness, and semantics.

Designing for Mike was like a trip back to when mainstream website design seemed more spontaneous and less governed by conventions and best practices. Some people describe these designs as “whimsical”:

adjective

  1. spontaneously fanciful or playful
  2. given to whims; capricious
  3. quaint, unusual, or fantastic

Collins English Dictionary

But I’m not so sure that’s entirely accurate. “Playful?” Definitely. “Fanciful?” Possibly. But “fantastic?” That depends. “Whimsy” sounds superfluous, so I call it “expressive” instead.

Studying design from way back, I remembered how websites often included graphics that combined branding, content, and navigation. Pretty much every reference to web design in the ’90s — when I designed my first website — talks about Warner Brothers’ Space Jam from 1996.

Warner Brothers’ Space Jam (1996)

So, I’m not going to do that.

Brands like Nintendo used their home pages to direct people to their content while making branded visual statements. Cheestrings combined graphics with navigation, making me wonder why we don’t see designs like this today. Goosebumps typified this approach, combining cartoon illustrations with brightly colored shapes into a functional and visually rich banner, proving that being useful doesn’t mean being boring.

Left to right: Nintendo, Cheestrings, Goosebumps.

In the ’90s, when I developed graphics for websites like these, I either sliced them up and put their parts in tables or used mostly forgotten image maps.

A brief overview of properties and values

Let’s run through a quick refresher. Image maps date all the way back to HTML 3.2, where, first, server-side maps and then client-side maps defined clickable regions over an image using map and area elements. They were popular for graphics, maps, and navigation, but their use declined with the rise of CSS, SVG, and JavaScript.

<map> adds clickable areas to a bitmap or vector image.

<map name="projects"> ... </map>

That <map> is linked to an image using the usemap attribute:

<img usemap="#projects" ...>

Those elements can have separate href and alt attributes and can be enhanced with ARIA to improve accessibility:

<map name="projects">
  <area href="" alt="" … />
  ...
</map>

The shape attribute specifies an area’s shape. It can be a primitive circle or rect or a polygon defined by a set of absolute x and y coordinates:

<area shape="circle" coords="..." ... />
<area shape="rect" coords="..." ... />
<area shape="poly" coords="..." ... />
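For illustration, the geometry behind those shapes is simple. Here’s a rough JavaScript sketch — the function names are my own, not any browser API — of how a click at (x, y) could be tested against circle and rect coords:

```javascript
// Hypothetical hit tests mirroring how <area> shapes resolve clicks.
// coords follow the HTML format: circle = "cx,cy,r", rect = "x1,y1,x2,y2".

function hitCircle(coords, x, y) {
  const [cx, cy, r] = coords.split(",").map(Number);
  // Inside if the squared distance to the center is within the radius
  return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2;
}

function hitRect(coords, x, y) {
  const [x1, y1, x2, y2] = coords.split(",").map(Number);
  return x >= x1 && y >= y1 && x <= x2 && y <= y2;
}

console.log(hitCircle("35,35,35", 35, 10)); // true — inside the circle
console.log(hitRect("0,0,100,50", 120, 25)); // false — past the right edge
```

Polygons work the same way, just with a more involved point-in-polygon test over the coordinate pairs.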

Despite their age, image maps still offer plenty of benefits. They’re lightweight and need (almost) no JavaScript. More on that in just a minute. They’re accessible and semantic when used with alt, ARIA, and title attributes. Despite being from a different era, even modern mobile browsers support image maps.

Design by Andy Clarke, Stuff & Nonsense. Mike Worth’s website will launch in April 2025, but you can see examples from this article on CodePen.

My design for Mike Worth includes several graphic navigation elements, which made me wonder if image maps might still be an appropriate solution.

Image maps in action

Mike wants his website to showcase his past work and the projects he’d like to do. To make this aspect of his design discoverable and fun, I created a map for people to explore by pressing on areas of the map to open modals. This map contains numbered circles, and pressing one pops up its modal.

My first thought was to embed anchors into the external map SVG:

<img src="projects.svg" alt="Projects">

<svg ...>
  ...
  <a href="...">
    <circle cx="35" cy="35" r="35" fill="#941B2F"/>
    <path fill="#FFF" d="..."/>
  </a>
</svg>

This approach is problematic. Those anchors are only active when SVG is inline and don’t work with an <img> element. But image maps work perfectly, even though specifying their coordinates can be laborious. Fortunately, plenty of tools are available, which make defining coordinates less tedious. Upload an image, choose shape types, draw the shapes, and copy the markup:

<img id="projects" src="projects.svg" usemap="#projects-map">

<map name="projects-map">
  <area href="" alt="" coords="..." shape="circle">
  <area href="" alt="" coords="..." shape="circle">
  ...
</map>

Image maps work well when images are fixed sizes, but flexible images present a problem because map coordinates are absolute, not relative to an image’s dimensions. Making image maps responsive needs a little JavaScript to recalculate those coordinates when the image changes size:

function resizeMap() {
  const image = document.getElementById("projects");
  const map = document.querySelector("map[name='projects-map']");
  if (!image || !map || !image.naturalWidth) return;

  const scale = image.clientWidth / image.naturalWidth;

  map.querySelectorAll("area").forEach(area => {
    if (!area.dataset.originalCoords) {
      area.dataset.originalCoords = area.getAttribute("coords");
    }
    const scaledCoords = area.dataset.originalCoords
      .split(",")
      .map(coord => Math.round(coord * scale))
      .join(",");
    area.setAttribute("coords", scaledCoords);
  });
}

["load", "resize"].forEach(event =>
  window.addEventListener(event, resizeMap)
);
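The heart of that function is plain arithmetic: multiply every coordinate by the ratio of the image’s rendered width to its natural width. As a standalone sketch (scaleCoords is a helper name of my own, not part of the script above):

```javascript
// Scale a comma-separated coords string by the ratio of the image's
// rendered (client) width to its natural width.
function scaleCoords(coords, clientWidth, naturalWidth) {
  const scale = clientWidth / naturalWidth;
  return coords
    .split(",")
    .map(coord => Math.round(coord * scale))
    .join(",");
}

// A 1024px-wide image rendered at 512px halves every coordinate:
console.log(scaleCoords("100,200,300,400", 512, 1024)); // "50,100,150,200"
```

Stashing the originals in data-original-coords, as resizeMap does, matters: scaling already-scaled values on every resize event would compound rounding errors.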

I still wasn’t happy with this implementation as I wanted someone to be able to press on much larger map areas, not just the numbered circles.

Every <path> has coordinates which define how it’s drawn, and they’re relative to the SVG viewBox:

<svg width="1024" height="1024"> <path fill="#BFBFBF" d="…"/> </svg>

On the other hand, a map’s <area> coordinates are absolute to the top-left of an image, so <path> values need to be converted. Fortunately, Raphael Monnerat has written PathToPoints, a tool which does precisely that. Upload an SVG, choose the point frequency, copy the coordinates for each path, and add them to a map area’s coords:

<map>
  <area href="" shape="poly" coords="...">
  <area href="" shape="poly" coords="...">
  <area href="" shape="poly" coords="...">
  ...
</map>

More issues with image maps

Image maps are hard-coded and time-consuming to create without tools. Even with tools for generating image maps, converting paths to points, and then recalculating them using JavaScript, they could be challenging to maintain at scale.

<area> elements aren’t visible, and except for a change in the cursor, they provide no visual feedback when someone hovers over or presses a link. Plus, there’s no easy way to add animations or interaction effects.

But the deal-breaker for me was that an image map’s pixel-based values are unresponsive by default. So, what might be an alternative solution for implementing my map using CSS, HTML, and SVG?

Anchors positioned absolutely over my map wouldn’t solve the pixel-based positioning problem or give me the irregular-shaped clickable areas I wanted. Anchors within an external SVG wouldn’t work either.

But the solution was staring me in the face. I realized I needed to:

  1. Create a new SVG path for each clickable area.
  2. Make those paths invisible.
  3. Wrap each path inside an anchor.
  4. Place the anchors below other elements at the end of my SVG source.
  5. Replace that external file with inline SVG.

I created a set of six much larger paths which define the clickable areas, each with its own fill to match its numbered circle. I placed each anchor at the end of my SVG source:

<svg … viewBox="0 0 1024 1024">
  <!-- Visible content -->
  <g>...</g>

  <!-- Clickable areas -->
  <g id="links">
    <a href="..."><path fill="#B48F4C" d="..."/></a>
    <a href="..."><path fill="#6FA676" d="..."/></a>
    <a href="..."><path fill="#30201D" d="..."/></a>
    ...
  </g>
</svg>

Then, I reduced those anchors’ opacity to 0 and added a short transition to their full-opacity hover state:

#links a {
  opacity: 0;
  transition: all .25s ease-in-out;
}

#links a:hover {
  opacity: 1;
}

While using an image map’s <area> sadly provides no visual feedback, embedded anchors and their content can respond to someone’s action, hint at what’s to come, and add detail and depth to a design.

I might add gloss to those numbered circles to be consistent with the branding I’ve designed for Mike. Or, I could include images, titles, or other content to preview the pop-up modals:

<g id="links">
  <a href="…">
    <path fill="#B48F4C" d="..."/>
    <image href="..." ... />
  </a>
</g>

Try it for yourself:

CodePen Embed Fallback

Expressive design, modern techniques

Designing Mike Worth’s website gave me a chance to blend expressive design with modern development techniques, and revisiting image maps reminded me just how important a tool image maps were during the period Mike loves so much.

Ultimately, image maps weren’t the right tool for Mike’s website. But exploring them helped me understand what I really needed: a way to recapture the expressiveness and personality of ’90s website design using modern techniques that are accessible, lightweight, responsive, and semantic. That’s what design’s about: choosing the right tool for a job, even if that sometimes means looking back to move forward.

Biography: Andy Clarke

Often referred to as one of the pioneers of web design, Andy Clarke has been instrumental in pushing the boundaries of web design and is known for his creative and visually stunning designs. His work has inspired countless designers to explore the full potential of product and website design.

Andy’s written several industry-leading books, including Transcending CSS, Hardboiled Web Design, and Art Direction for the Web. He’s also worked with businesses of all sizes and industries to achieve their goals through design.

Visit Andy’s studio, Stuff & Nonsense, and check out his Contract Killer, the popular web design contract template trusted by thousands of web designers and developers.

Revisiting Image Maps originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Open Up With Brad Frost, Episode 2

Css Tricks - Tue, 04/29/2025 - 4:27am

Brad Frost is running this new little podcast called Open Up. Folks write in with questions about the “other” side of web design and front-end development — not so much about tools and best practices as it is about the things that surround the work we do, like what happens if you get laid off, or AI takes your job, or something along those lines. You know, the human side of what we do in web design and development.

Well, it just so happens that I’m co-hosting the show. In other words, I get to sprinkle in a little advice on top of the wonderful insights that Brad expertly doles out to audience questions.

Our second episode just published, and I thought I’d share it. We’re finding our sea legs with this whole thing and figuring things out as we go. We’ve opened things up (get it?!) to a live audience and even pulled in one of Brad’s friends at the end to talk about the changing nature of working on a team and what it looks like to collaborate in a remote-first world.

https://www.youtube.com/watch?v=bquVF5Cibaw

Open Up With Brad Frost, Episode 2 originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Anchor Positioning Just Don’t Care About Source Order

Css Tricks - Mon, 04/28/2025 - 2:43am

Ten divs walk into a bar:

<div>1</div>
<div>2</div>
<div>3</div>
<div>4</div>
<div>5</div>
<div>6</div>
<div>7</div>
<div>8</div>
<div>9</div>
<div>10</div>

There’s not enough chairs for them to all sit at the bar, so you need the tenth div to sit on the lap of one of the other divs, say the second one. We can visually cover the second div with the tenth div but have to make sure they are sitting next to each other in the HTML as well. The order matters.

<div>1</div>
<div>2</div>
<div>10</div><!-- Sitting next to Div #2 -->
<div>3</div>
<div>4</div>
<div>5</div>
<div>6</div>
<div>7</div>
<div>8</div>
<div>9</div>

The tenth div needs to sit on the second div’s lap rather than next to it. So, perhaps we redefine the relationship between them and make this a parent-child sorta thing.

<div>1</div>
<div class="parent">
  2
  <div class="child">10</div><!-- Sitting in Div #2's lap -->
</div>
<div>3</div>
<div>4</div>
<div>5</div>
<div>6</div>
<div>7</div>
<div>8</div>
<div>9</div>

Now we can do a little tricky positioning dance to contain the tenth div inside the second div in the CSS:

.parent {
  position: relative; /* Contains Div #10 */
}

.child {
  position: absolute;
}

We can inset the child’s position so it is pinned to the parent’s top-left edge:

.child {
  position: absolute;
  inset-block-start: 0;
  inset-inline-start: 0;
}

And we can set the child’s width to 100% of the parent’s size so that it is fully covering the parent’s lap and completely obscuring it.

.child {
  position: absolute;
  inset-block-start: 0;
  inset-inline-start: 0;
  width: 100%;
}

Cool, it works!

CodePen Embed Fallback

Anchor positioning simplifies this process a heckuva lot because it just doesn’t care where the tenth div is in the HTML. Instead, we can work with our initial markup containing 10 individuals exactly as they entered the bar. You’re going to want to follow along in the latest version of Chrome since anchor positioning is only supported there by default at the time I’m writing this.

<div>1</div>
<div class="parent">2</div>
<div>3</div>
<div>4</div>
<div>5</div>
<div>6</div>
<div>7</div>
<div>8</div>
<div>9</div>
<div class="child">10</div>

Instead, we define the second div as an anchor element using the anchor-name property. I’m going to continue using the .parent and .child classes to keep things clear.

.parent {
  anchor-name: --anchor; /* this can be any name formatted as a dashed ident */
}

Then we connect the child to the parent by way of the position-anchor property:

.child {
  position-anchor: --anchor; /* has to match the `anchor-name` */
}

The last thing we have to do is position the child so that it covers the parent’s lap. We have the position-area property that allows us to center the element over the parent:

.child {
  position-anchor: --anchor;
  position-area: center;
}

If we want to completely cover the parent’s lap, we can set the child’s size to match that of the parent using the anchor-size() function:

.child {
  position-anchor: --anchor;
  position-area: center;
  width: anchor-size(width);
}

CodePen Embed Fallback

No punchline — just one of the things that makes anchor positioning something I’m so excited about. The fact that it eschews HTML source order is so CSS-y because it’s another separation of concerns between content and presentation.

Anchor Positioning Just Don’t Care About Source Order originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

The Evolution of AI Products

LukeW - Sun, 04/27/2025 - 2:00pm

At this point, the use of artificial intelligence and machine learning models in software has a long history. But the past three years really accelerated the evolution of "AI products". From behind the scenes models to chat to agents, here's how I've seen things evolve for the AI-first companies we've built during this period.

Anthropic, one of the world's leading AI labs, recently released data on what kinds of jobs make the most use of their foundation model, Claude. Computer and math use outpaced other jobs by a very wide margin, which matches up with AI adoption by software engineers. To date, they've been the most open to not only trying AI but applying it to their daily tasks.

As such, the evolution of AI products is currently most clear in AI for coding companies like Augment. When Augment started over two years ago, they used AI models to power code completions in existing developer tools. A short time later, they launched a chat interface where developers could interact directly with AI models. Last month, they launched Augment Agent which pairs AI models with tools to get more things done. Their transition isn't an isolated example.

Machine Learning Behind the Scenes

Before everyone was creating chat interfaces and agents, large-scale machine learning systems were powering software interfaces behind the scenes. Back in 2016, Google Translate announced the use of deep learning to enable better translations across more languages. YouTube's video recommendations also improved dramatically the same year thanks to deep learning techniques.

Although machine-learning and AI models were responsible for key parts of these products' overall experience, they remained in the background, providing critical functionality indirectly.

Chat Interfaces to AI Models

The practice of directly interacting with AI models was mostly limited to research efforts until the launch of ChatGPT. All of a sudden, millions of people were directly interacting with an AI model and the information found in its weights (think of a fuzzy database that accesses its information through complex predictive techniques instead of simple look-ups).

ChatGPT was exactly that: one could chat with the GPT model trained by OpenAI. This brought AI models from the background of products to the foreground and led to an explosion of chat interfaces to text, image, video, and 3D models of various sizes.

Retrieval Augmented Products

Pretty quickly companies realized that AI models provided much better results if they were given more context. At first, this meant people writing prompts (or instructions for AI models) with more explicit intent and often increasing length. To scale this approach beyond prompting, retrieval-augmented-generation (RAG) products began to emerge.

My personal AI system, Ask LukeW, makes extensive use of indexing, retrieval, and re-ranking systems to create a product that serves as a natural language interface to my nearly 30 years of writings and talks. ChatGPT has also become a retrieval-augmented product, as it regularly makes use of Web search instead of just its weights when it responds to user instructions.

Tool Use & Foreground Agents

Though it can significantly improve AI products, information retrieval is only one tool that AI systems can now (with a few of the most recent foundation models) make use of. When AI models have access to a number of different tools and can plan which ones to use and how, things become agentic.

For instance, our AI-powered workspace, Bench, has many tools it can use to retrieve information but also tools to fact-check data, do data analysis, generate PowerPoint decks, create images, and much more. In this type of product experience, people give AI models instructions. Then the models make plans, pick tools, configure them, and make use of the results to move on to the next step or not. People can steer or refine this process with user interface controls or, more commonly, further instructions.

Bench allows people to interrupt agentic processes with natural language, to configure tool parameters and rerun them, to select models to use with different tools, and much more. But in the vast majority of cases, the system evaluates its options and makes these decisions itself to give people the best possible outcome.

Background Agents

When people first begin using agentic AI products, they tend to monitor and steer the system to make sure it's doing the things they asked for correctly. After a while though, confidence sets in and the work of monitoring AI models as they execute multi-step processes becomes a chore. You quickly get to wanting multiple processes to run in the background and only bother you when they are done or need help. Enter... background agents.

AI products that make use of background agents allow people to run multiple processes in parallel, across devices, and even schedule them to run at specific times or with particular triggers. In these products, the interface needs to support monitoring and managing lots of agentic workflows concurrently instead of guiding one at a time.

Agent to Agent

So what's next? Once AI products can run multiple tasks themselves remotely, it feels like the inevitable next step is for these products to begin to collaborate and interact with each other. Google's recently announced Agent to Agent protocol is specifically designed to enable a "multi-agent ecosystem across siloed data systems and applications." Does this result in a very different product and UI experience? Probably. What does it look like? I don't know yet.

AI Product Evolution To Date

It's highly unlikely that the pace of change in AI products will slow down anytime soon. The evolution of AI products I outlined is a timestamp of where we are now. In fact, I put it all into one image for just that reason: to reference the "current" state of things. I'm pretty confident that I'll have to revisit this in the not-too-distant future...

The Lost CSS Tricks of Cohost.org

Css Tricks - Thu, 04/24/2025 - 2:49am

You would be forgiven if you’ve never heard of Cohost.org. The bespoke, Tumblr-like social media website came and went in a flash. Going public in June 2022 with invite-only registrations, Cohost’s peach and maroon landing page promised that it would be “posting, but better.” Just over two years later, in September 2024, the site announced its shutdown, its creators citing burnout and funding problems. Today, its servers are gone for good. Any link to cohost.org redirects to the Wayback Machine’s slow but comprehensive archive.

The landing page for Cohost.org, featuring our beloved eggbug.

Despite its short lifetime, I am confident in saying that Cohost delivered on its promise. This is in no small part due to its user base, consisting mostly of niche internet creatives and their friends — many of whom already considered “posting” to be an art form. These users were attracted to Cohost’s opinionated, anti-capitalist design that set it apart from the mainstream alternatives. The site was free of advertisements and follower counts, all feeds were purely chronological, and the posting interface even supported a subset of HTML.

It was this latter feature that conjured a community of its own. For security reasons, any post using HTML was passed through a sanitizer to remove any malicious or malformed elements. But unlike most websites, Cohost’s sanitizer was remarkably permissive. The vast majority of tags and attributes were allowed — most notably inline CSS styles on arbitrary elements.

Users didn’t take long to grasp the creative opportunities lurking within Cohost’s unassuming “new post” modal. Within 48 hours of going public, the fledgling community had figured out how to post poetry using the <details> tag, port the Apple homepage from 1999, and reimplement a quick-time WarioWare game. We called posts like these “CSS Crimes,” and the people who made them “CSS Criminals.” Without even intending to, the developers of Cohost had created an environment for a CSS community to thrive.

In this post, I’ll show you a few of the hacks we found while trying to push the limits of Cohost’s HTML support. Use these if you dare, lest you too get labelled a CSS criminal.

Width-hacking

Many of the CSS crimes of Cohost were powered by a technique that user @corncycle dubbed “width-hacking.” Using a combination of the <details> element and the CSS calc() function, we can get some pretty wild functionality: combination locks, tile-matching games, Zelda-style top-down movement; the list goes on.

If you’ve been around the CSS world for a while, there’s a good chance you’ve been exposed to the old checkbox hack. By combining a checkbox, a label, and creative use of CSS selectors, you can use the toggle functionality of the checkbox to implement all sorts of things. Tabbed areas, push toggles, dropdown menus, etc.

However, because this hack requires CSS selectors, that meant we couldn’t use it on Cohost — remember, we only had inline styles. Instead, we used the relatively new elements <details> and <summary>. These elements provide the same visibility-toggling logic, but now directly in HTML. No weird CSS needed.

CodePen Embed Fallback

These elements work like so: All children of the <details> element are hidden by default, except for the <summary> element. When the summary is clicked, it “opens” the parent details element, causing its children to become visible.

We can add all sorts of styles to these elements to make this example more interesting. Below, I have styled the constituent elements to create the effect of a button that lights up when you click on it.

CodePen Embed Fallback

This is achieved by giving the <summary> element a fixed position and size, a grey background color, and an outset border to make it look like a button. When it’s clicked, a sibling <div> is revealed that covers the <summary> with its own red background and border. Normally, this <div> would block further click events, but I’ve given it the declaration pointer-events: none. Now all clicks pass right on through to the <summary> element underneath, allowing you to turn the button back off.

This is all pretty nifty, but it’s ultimately the same logic as before: something is toggled either on or off. These are only two states. If we want to make games and other gizmos, we might want to represent hundreds to thousands of states.

Width-hacking gives us exactly that. Consider the following example:

CodePen Embed Fallback

In this example, three <details> elements live together in an inline-flex container. Because all the <summary> elements are absolutely-positioned, the width of their respective <details> elements are all zero when they’re closed.

Now, each of these three <details> has a small <div> inside. The first has a child with a width of 1px, the second a child with a width of 2px, and the third a width of 4px. When a <details> element is opened, it reveals its hidden <div>, causing its own width to increase. This increases the width of the inline-flex container. Because the width of the container is the sum of its children, this means its width directly corresponds to the specific <details> elements that are open.

For example, if just the first and third <details> are open, the inline-flex container will have the width 1px + 4px = 5px. Conversely, if the inline-flex container is 2px wide, we can infer that the only open <details> element is the second one. With this trick, we’ve managed to encode all eight states of the three <details> into the width of the container element.
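In other words, the three hidden divs act like binary digits, and the container’s width is the number they spell out. Here’s a quick JavaScript sketch of that bookkeeping — the helper functions are my own illustration, not code from any of the demos:

```javascript
// Widths of the hidden <div>s inside each <details>, in pixels.
// Powers of two guarantee every open/closed combination sums to a unique width.
const widths = [1, 2, 4];

// Encode: the container's width is the sum of the open items' widths.
function containerWidth(open) {
  return widths.reduce((sum, w, i) => sum + (open[i] ? w : 0), 0);
}

// Decode: recover which <details> are open from the container width.
function openFromWidth(width) {
  return widths.map(w => (width & w) !== 0);
}

console.log(containerWidth([true, false, true])); // 5
console.log(openFromWidth(2)); // [false, true, false] (only the second is open)
```

On Cohost there was no JavaScript to do the decoding, of course; that job fell to calc() expressions reading the container’s width, as shown next.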

This is pretty cool. Maybe we could use this as an element of some kind of puzzle game? We could show a secret message if the right combination of buttons is checked. But how do we do that? How do we only show the secret message for a specific width of that container div?

CodePen Embed Fallback

In the preceding CodePen, I’ve added a secret message as two nested divs. Currently, this message is always visible — complete with a TODO reminding us to implement the logic to hide it unless the correct combination is set.

You may wonder why we’re using two nested divs for such a simple message. This is because we’ll be hiding the message using a peculiar method: We will make the width of the parent div.secret be zero. Because the overflow: hidden property is used, the child div.message will be clipped, and thus invisible.

Now we’re ready to implement our secret message logic. Thanks to the fact that percentage sizes are relative to the parent, we can use 100% as a stand-in for the parent’s width. We can then construct a complicated CSS calc() formula that is 350px if the container div is our target size, and 0px otherwise. With that, our secret message will be visible only when the center button is active and the others are inactive. Give it a try!

CodePen Embed Fallback

This complicated calc() function that’s controlling the secret div’s width has the following graph:

You can see that it’s a piecewise linear curve, constructed from multiple pieces using min/max. These pieces are placed in just the right spots so that the function maxes out when the container div is 2px, which we’ve established is precisely when only the second button is active.
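To make the shape of that curve concrete, here’s the same idea in plain arithmetic. This is a hedged sketch rather than the exact formula from the demo: a spike that returns the full 350px only at the target width and falls to 0px within 1px on either side:

```javascript
// A piecewise-linear "spike", mirroring the min/max pieces in the calc():
// 350 at w === target, 0 everywhere at least 1px away.
function secretWidth(w, target = 2) {
  return Math.max(0, 350 - 350 * Math.abs(w - target));
}

console.log(secretWidth(2)); // 350 — secret message fully visible
console.log(secretWidth(1)); // 0
console.log(secretWidth(5)); // 0 — Math.max clamps the negative values
```

In CSS terms, w corresponds to the 100% parent-relative width, and the abs() is built out of a max() of the two opposing slopes, since calc() has no absolute-value function of its own.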

A surprising variety of games can be implemented using variations on this technique. Here is a Tower of Hanoi game I made that uses both width and height to track the game’s state.

SVG animation

So far, we’ve seen some basic functionality for implementing a game. But what if we want our games to look good? What if we want to add ✨animations?✨ Believe it or not, this is actually possible entirely within inline CSS using the power of SVG.

SVG (Scalable Vector Graphics) is an XML-based image format for storing vector images. It enjoys broad support on the web — you can use it in <img> elements or as the URL of a background-image property, among other things.

Like HTML, an SVG file is a collection of elements. For SVG, these elements are things like <rect>, <circle>, and <text>, to name a few. These elements can have all sorts of properties defined, such as fill color, stroke width, and font family.

A lesser-known feature of SVG is that it can contain <style> blocks for configuring the properties of these elements. In the example below, an SVG is used as the background for a div. Inside that SVG is a <style> block that sets the fill color of its <circle> to red.

CodePen Embed Fallback

An even lesser-known feature of SVG is that its styles can use media queries. The size used by those queries is the size of the div it is a background of.

In the following example, we have a resizable <div> with an SVG background. Inside this SVG is a media query which will change the fill color of its <circle> to blue when the width exceeds 100px. Grab the resize handle in its bottom right corner and drag until the circle turns blue.

CodePen Embed Fallback

Because resize handles don’t quite work on mobile, unfortunately, this and the next couple of CodePens are best experienced on desktop.

This is an extremely powerful technique. By mixing it with width-hacking, we could encode the state of a game or gizmo in the width of an SVG background image. This SVG can then show or hide specific elements depending on the corresponding game state via media queries.

But I promised you animations. So, how is that done? Turns out you can use CSS animations within SVGs. By using the CSS transition property, we can make the color of our circle smoothly transition from red to blue.

CodePen Embed Fallback

Amazing! But before you try this yourself, be sure to look at the source code carefully. You’ll notice that I’ve had to add a 1×1px, off-screen element with the ID #hack. This element has a very simple (and nearly unnoticeable) continuous animation applied. A “dummy animation” like this is necessary to get around some web browsers’ buggy detection of SVG animation. Without that hack, our transition property wouldn’t work consistently.

For the fun of it, let’s combine this tech with our previous secret message example. Instead of toggling the secret message’s width between the values of 0px and 350px, I’ve adjusted the calc formula so that the secret message div is normally 350px, and becomes 351px if the right combination is set.

Instead of HTML/CSS, the secret message is now just an SVG background with a <text> element that says “secret message.” Using media queries, we change the transform scale of this <text> to be zero unless the div is 351px. With the transition property applied, we get a smooth transition between these two states.

Click the center button to activate the secret message:

CodePen Embed Fallback

The first cohost user to discover the use of media queries within SVG backgrounds was @ticky for this post. I don’t recall who figured out they could animate, but I used the tech quite extensively for this quiz that tells you what kind of soil you’d like if you were a worm.

Wrapping up

And that’s all for now. There are a number of techniques I haven’t touched on — namely the fun antics one can get up to with the resize property. If you’d like to explore the world of CSS crimes further, I’d recommend this great linkdump by YellowAfterlife, or this video retrospective by rebane2001.

It will always hurt to describe Cohost in the past tense. It truly was a magical place, and I don’t think I’ll be able to properly convey what it was like to be there at its peak. The best I can do is share the hacks we came up with: the lost CSS tricks we invented while “posting, but better.”

The Lost CSS Tricks of Cohost.org originally published on CSS-Tricks, which is part of the DigitalOcean family. You should get the newsletter.

Designing Perplexity

LukeW - Wed, 04/23/2025 - 2:00pm

In his AI Speaker Series presentation at Sutter Hill Ventures, Henry Modisett, Head of Design at Perplexity, shared insights on designing AI products and the evolving role of designers in this new landscape. Here's my notes from his talk:

  • Technological innovation is outpacing our ability to thoughtfully apply it
  • We're experiencing a "macro novelty effect" where people are either experiencing AI for the first time or rejecting it based on preconceptions
  • Most software will evolve to contain AI components, similar to how most software now has internet connectivity
  • New product paradigms are emerging that don't fit traditional software design wisdom
  • There's a significant amount of relearning required for engineers and designers in the AI era
  • The industry is experiencing rapid change with companies only being "two or three weeks ahead of each other"
  • AI products that defy conventional wisdom are gaining daily usage
  • Successful AI products often "boil the ocean" by building everything at once, contrary to traditional startup advice

Design Challenges Before AI
  • Before AI, two of the hardest design problems were complexity management (organizing many features) and dynamic experiences (like email or ranked feeds)
  • Complexity Management: Designing interfaces that remain intuitive despite growing feature sets
  • Dynamic Experiences: Creating systems where every user has a different experience (like Gmail)
  • Machine Learning Interfaces: Designing for recommendation systems where the UI primarily exists to collect signals for ranking
New Design Challenges with AI
  • Designing based on trajectory: creating experiences that anticipate how technology will improve. Many AI projects begin without knowing if they'll work technically
  • Speed is the most important facet of user experience, but many AI products work slowly
  • Building AI products is comparable to urban planning, with unpredictability from both users and the AI itself
  • Designing for non-deterministic outcomes from both users and AI
  • Deciding when to anthropomorphize AI and when to treat it as a tool. "If your fork said 'bon appétit' every time you picked it up, people would get sick of that."
  • Traditional PRD > Design > Engineering > Ship process no longer works
  • New approach: Strategic conversation > Get anything working > Prune possibilities > Design > Ship > Observe
  • "Prototype to productize" rather than "design to build"
  • Designers need to work directly with the actual product, not just mockups. At Perplexity, designers and engineers collaborate directly on prompting as a programming language.
  • Product mechanics (how it works) matter more than UI aesthetics. This comes from game design thinking: mechanics > dynamics > aesthetics
  • AI allows for abstracting complexity away from users, providing power through simple interfaces
  • Natural language interfaces can make powerful capabilities accessible
  • But natural language isn't always the most efficient input method, since it can lack precision
  • Discoverability: How do users know what the product can do?
  • Make opinionated products that clearly communicate their value. The best software comes when people with strong opinions on how it should work are working directly on the code.

Just in Time Content

LukeW - Sat, 04/19/2025 - 2:00pm

Jensen Huang (NVIDIA's CEO) famously declared that every pixel will be generated, not rendered. While for some types of media that vision is further out, for written content this proclamation has already come to pass. We’re in an age of just in time content.

Traditionally, if you wanted to produce a piece of written content on a topic, you had two choices. Do the research yourself, write a draft, edit, refine, and finally publish. Or get someone else to do that process for you, either directly by hiring them or indirectly by getting content they wrote for a publisher.

Today written content is generated in real-time for anyone on anything. That’s a pretty broad statement to make so let me make it more concrete. I’ve written 3 books, thousands of articles, and given hundreds of talks on digital product design. The generative AI feature on my Website, Ask LukeW, searches all this content, finds, ranks, and re-ranks it in order to answer people’s questions on the topics I’ve written about.
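The find, rank, and re-rank flow can be sketched in miniature. The snippet below is a generic retrieve-then-rerank pipeline, not the actual Ask LukeW implementation (which isn't public); the word-overlap scorer stands in for real embedding and reranking models, and the corpus strings are invented:

```python
# Generic sketch of a retrieve-then-rerank pipeline over a content
# corpus. Real systems use embedding models and learned rerankers;
# word overlap stands in for a similarity score here.

def score(query, chunk):
    """Crude relevance: fraction of query words present in the chunk."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q)

def retrieve(query, corpus, k=3):
    """First pass: rank every chunk and keep the top k candidates."""
    ranked = sorted(corpus, key=lambda ch: score(query, ch), reverse=True)
    return ranked[:k]

def rerank(query, candidates):
    """Second pass: a finer-grained re-scoring of the candidates
    (here the same scorer again, purely for illustration)."""
    return sorted(candidates, key=lambda ch: score(query, ch), reverse=True)

# Invented corpus chunks standing in for atomic units of content.
corpus = [
    "Touch targets should be at least 44 pixels for reliable tapping.",
    "Dropdown menus are often overused in web form design.",
    "Mobile first design forces you to prioritize core content.",
]
top = rerank("mobile first design", retrieve("mobile first design", corpus))
print(top[0])
```

In a production system the top-ranked chunks would then be handed to a language model to compose the answer, which is what makes each response a one-off composition.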

Because all my content has been broken down into almost atomic units, there’s an endless number of recombinations possible. Way more than I could have possibly ever written myself. For instance, if someone asks:

Each corresponding answer is a unique composition of content that did not exist before. Every response is created for a specific person with a specific need at a specific time. After that, it’s no longer relevant. That may sound extreme but I’ve long contended that as soon as something is published, especially news and non-fiction, it’s out of date. That’s why project sites within companies are never up to date and why news articles just keep coming.

But if you keep adding bits of additional content to an overall corpus for generative AI to draw from, the responses can remain timely and relevant. That’s what I’ve been doing with the content corpus Ask LukeW draws from. While I’ve written 89 publicly visible blog posts over the past two years, I added over 500 bits of content behind the scenes that the Ask LukeW feature can draw from. Most of it driven by questions people asked that Ask LukeW wasn’t able to answer well but should have given the information I have in my head.

For me this feels like the new way of publishing. I'm building a corpus with infinite malleability instead of a more limited number of discrete artifacts.

Two years ago, I had to build a system to power the content corpus indexing, retrieval, and ranking that makes Ask LukeW work. Today people can do this on the fly. For instance in this video example using Bench, I make use of a PDF of my book and Web search results to expand on a topic in my tone and voice with citations across both sources. The end result is written content assembled from multiple corpuses: my book and the Web.

It’s not just PDFs and Web pages though, nearly anything can serve as a content corpus for generative publishing. In this example from Bench, I use a massive JSON file to create a comprehensive write-up about the water levels in Lake Almanor, CA. The end result combines data from the file with AI model weights to produce a complete analysis of the lake’s changing water levels over the years alongside charts and insights about changing patterns.
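That kind of analysis is easy to picture in code. Below is a hypothetical sketch of reducing a water-level time series in JSON to yearly averages; the field names and numbers are invented for illustration, not taken from the actual Lake Almanor file:

```python
# Hypothetical sketch: reduce a time-series JSON file (water levels
# by date) to yearly summary statistics. Field names are invented.
import json
from collections import defaultdict
from statistics import mean

records = json.loads("""[
  {"date": "2021-06-01", "level_ft": 4490.2},
  {"date": "2021-07-01", "level_ft": 4488.7},
  {"date": "2022-06-01", "level_ft": 4485.1},
  {"date": "2022-07-01", "level_ft": 4483.9}
]""")

# Group readings by year (first four characters of the ISO date).
by_year = defaultdict(list)
for r in records:
    by_year[r["date"][:4]].append(r["level_ft"])

# Average each year's readings to expose the overall trend.
summary = {year: mean(levels) for year, levels in sorted(by_year.items())}
print(summary)
```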

As these examples illustrate, publishing has changed. Content is now generated just in time for anyone on anything. And as the capabilities of AI models and tools keep advancing, we’re going to see publishing change even more.
