Developer News

Interactive Rebase: Clean up your Commit History

Css Tricks - Fri, 11/12/2021 - 5:16am

This article is part of our “Advanced Git” series. Be sure to follow Tower on Twitter or sign up for their newsletter to hear about the next articles.

Interactive Rebase is the Swiss Army knife of Git commands: lots of use cases and lots of possibilities! It’s really a great addition to any developer’s tool chain, because it lets you revise your local commit history—before you share your work with the rest of the team.

Let’s see what you can do with an interactive rebase and then look at some practical examples.

Advanced Git series:
  1. Part 1: Creating the Perfect Commit in Git
  2. Part 2: Branching Strategies in Git
  3. Part 3: Better Collaboration With Pull Requests
  4. Part 4: Merge Conflicts
  5. Part 5: Rebase vs. Merge
  6. Part 6: Interactive Rebase (You are here!)
  7. Part 7: Cherry-Picking Commits in Git
  8. Part 8: Using the Reflog to Restore Lost Commits (Coming soon!)

Rewriting your commit history

In short, interactive rebase allows you to manipulate your commit history. It’s meant for optimizing and cleaning up. You can…

  • change commit messages
  • combine multiple commits
  • split and edit existing commits
  • reorder commits
  • delete commits

Keep in mind that an interactive rebase rewrites your commit history: all of the involved commits get a new hash ID. Also, a quick reminder: commit IDs are there to identify commits—they are SHA-1 checksums. So, by changing that hash, you technically create completely new commits. This means that you shouldn’t use an interactive rebase on stuff that you’ve already pushed to a shared remote repository. Your colleagues might have based their work on these commits—and when you use interactive rebase to rewrite commit history, you are changing these base commits.

All of this means that an interactive rebase is meant to help you clean up and optimize your own local commit history before you merge (and possibly push) it back into a shared team branch.

Interactive rebase workflow

Before we take interactive rebase for a test drive, let’s look at the general workflow. This is always the same, no matter what exactly we’re doing—deleting a commit, changing a commit message, combining commits… the steps are identical.

The first step is to determine the range of commits you want to manipulate. How far back in time do you want to go? Once you have the answer, you can start your interactive rebase session. Here, you have the chance to edit your commit history. For example, you can manipulate the selected commits by reordering, deleting, combining them, and so on.

Before you can determine that range, you need to look at the current state of the commit history. You can use the git log command to examine a project’s history and show the commit log.

Here’s the little example repository we’re going to use throughout this article:

Note that I’m using the Tower Git desktop GUI in some of my screenshots for easier visualization.

After you’ve examined the list, it’s time to start the work. Let’s do this step-by-step. In the examples of this article, we will do the following things:

  • First, we change an old commit’s message.
  • Secondly, we combine two old commits.
  • After that, we split one commit.
  • Finally, we delete a commit.

Change a commit message

In many cases, you’ll want to change the most recent commit. Keep in mind that there’s a shortcut for this scenario which doesn’t involve interactive rebase:

$ git commit --amend

This command can modify both the content and the message of the most recent commit, and it opens your default text editor. Here you can make your changes, save them, and quit the editor. This will not only update the commit message, but will effectively change the commit itself and write a new one.

Again, please be careful and don’t amend your last commit if you’ve already pushed it to the remote repository!

For any other commit (anything older than the most recent one), you have to perform an interactive rebase. To run git rebase interactively, add the -i option. 

The first step is to determine the base commit: the parent commit of the one you want to change. You can achieve this by using the commit’s hash ID or by doing a little bit of counting. To change the last three commit messages (or at least one of them), you can define the parent commit like this:

$ git rebase -i HEAD~3

An editor window opens and you can see all three commits you selected (and by “selected” I mean the range of commits after the base: from HEAD back to, but not including, HEAD~3). Please notice the reverse order: unlike git log, this editor shows the oldest commit at the top and the newest at the bottom.

In this window you don’t actually change the commit message. You only tell Git what kind of manipulation you want to perform. Git offers a series of keywords for this—in our case, we change the word pick to reword which allows us to change the commit messages. After saving and closing the editor, Git will show the actual commit message and you can change it. Save and exit again, that’s it!
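With the example repository’s commits, the todo list might look something like this after marking the commit whose message we want to change (the hashes here are just the sample repo’s; yours will differ):

pick 0023cdd Add simple robots.txt
reword 2b504be Change headlines for about and imprint
pick 6bcf266 Optimize markup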

Combining two commits

In this next example, we’ll combine the two commits—“7b2317cf Change the page structure” and “6bcf266 Optimize markup”—so that they become one single commit. Again, as a first step you need to determine the base commit. And again, we have to go back to at least the parent commit:

$ git rebase -i HEAD~3

The editor window opens again, but instead of reword, we’ll enter squash. To be exact, we replace pick with squash in line 2 to combine it with line 1. This is an important bit to keep in mind: the squash keyword combines the line you mark up with the line above it!
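Using the example commits, the todo list might look something like this (again, an illustration based on the sample repo), with squash on line 2 folding that commit into line 1:

pick 7b2317cf Change the page structure
squash 6bcf266 Optimize markup
pick 2b504be Change headlines for about and imprint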

After saving the changes and closing the window, a new editor window pops up. Why’s that? By combining two commits we are creating… well… a new commit! And this new commit wants a commit message. Enter the message, save and close the window… and you’ve successfully combined the two commits. Powerful stuff!

Finally a little “pro tip” for those of you working with the “Tower” Git desktop GUI: to perform a squash, you can simply drag and drop commits onto each other, right in the commits view. And if you want to change a commit message, simply right click the commit in question and select “Edit commit message” from the contextual menu.

Deleting a commit

We’re bringing in the big guns for our final example: we are going to delete a revision from our commit history! To do this, we’re using the drop keyword to mark up the commit we want to get rid of:

drop 0023cdd Add simple robots.txt
pick 2b504be Change headlines for about and imprint
pick 6bcf266 Optimizes markup structure in index page

This is probably a good moment to answer a question you might have had for some time now: what can you do if you’re in the middle of a rebase operation and think, “Oh, no, this wasn’t such a good idea after all”? No problem—you can always abort! Just enter the following command to turn back to the state your repository was in before you initiated the rebase:

$ git rebase --abort

Changing the past

These were just a few examples of what an interactive rebase can do. There are plenty of other possibilities to control and revise your local commit history.

If you want to dive deeper into advanced Git tools, feel free to check out my (free!) “Advanced Git Kit”: it’s a collection of short videos about topics like branching strategies, Interactive Rebase, Reflog, Submodules and much more.

Happy rebasing and hacking—and see you soon for the next part in our series on “Advanced Git”!

Advanced Git series:
  1. Part 1: Creating the Perfect Commit in Git
  2. Part 2: Branching Strategies in Git
  3. Part 3: Better Collaboration With Pull Requests
  4. Part 4: Merge Conflicts
  5. Part 5: Rebase vs. Merge
  6. Part 6: Interactive Rebase (You are here!)
  7. Part 7: Cherry-Picking Commits in Git
  8. Part 8: Using the Reflog to Restore Lost Commits (Coming soon!)


Semantic menu context

Css Tricks - Thu, 11/11/2021 - 10:07am

Scott digs into the history of the <menu> element. He traces it as far back as HTML 2 (!) in a 1994 changelog. The vibe then, it seems, was to mark up a list. I would suspect the intention was much like <nav> is today, but I really don’t know.

Short story: HTML 4 deprecated it, HTML 5 revived it—this time as a “group of commands”—and then HTML 5.2 deprecated it again. Kind of a bummer since it has some clear use cases.

So, it’s been quite the roller coaster for ol’ <menu>! There never seems to be any easy wins for HTML evolution. As of now, it’s in “don’t bother” territory:

I really wrote this post as a sort of counterpoint to the often uttered phrase “use semantic HTML and you get accessibility for free!” That statement, on its surface, is largely true. And you should use semantic HTML wherever its use is appropriate. <menu>, unfortunately, doesn’t really give us all that much, even though it has clearly defined semantics. Its intended semantics and what we actually need in reality are better served by either just using the more robust <ul> element, or creating your own role=toolbar, menubar, etc. Using this semantic element, for semantics’ sake, is just that.



Easy Dark Mode (and Multiple Color Themes!) in React

Css Tricks - Thu, 11/11/2021 - 5:28am

I was working on a large React application for a startup, and aside from just wanting some good strategies to keep our styles organized, I wanted to give this whole “dark mode” thing a shot. With the huge ecosystem around React, you might think that there would be a go-to solution for style themes, but a little web searching shows that really isn’t the case.

There are plenty of different options out there, but many of them tie into very specific CSS strategies, like using CSS Modules, some form of CSS-in-JS, etc. I also found tools specific to certain frameworks, like Gatsby, but not a generic React project. What I was looking for was a basic system that’s easy to set up and work with without jumping through a ton of hoops; something fast, something easy to get a whole team of front-end and full-stack developers onboarded with quickly.

The existing solution I liked the best centered around using CSS variables and data attributes, found in this StackOverflow answer. But that also relied on some useRef stuff that felt hack-y. As they say in every infomercial ever, there’s got to be a better way!

Fortunately, there is. By combining that general CSS variable strategy with the beautiful useLocalStorage hook, we have a powerful, easy-to-use theming system. I’m going to walk through setting this thing up and running it, starting from a brand new React app. And if you stick around to the end, I also show you how to integrate it with react-scoped-css, which is what makes this my absolutely preferred way to work with CSS in React.

Project setup

Let’s pick this up at a very good place to start: the beginning.

This guide assumes a basic familiarity with CSS, JavaScript, and React.

First, make sure you have a recent version of Node and npm installed. Then navigate to whatever folder you want your project to live in, run git bash there (or your preferred command line tool), then run:

npx create-react-app easy-react-themes --template typescript

Swap out easy-react-themes with the name of your project, and feel free to leave off the --template typescript if you’d rather work in JavaScript. I happen to like TypeScript but it genuinely makes no difference for this guide, other than files ending in .ts/.tsx vs .js/.jsx.

Now we’ll open up our brand new project in a code editor. I’m using VS Code for this example, and if you are too, then you can run these commands:

cd easy-react-themes
code .

Not much to look at yet, but we’ll change that!

Running npm start next starts your development server, and produces this in a new browser window:

And, finally, go ahead and install the use-local-storage package with:

npm i use-local-storage

And that’s it for the initial setup of the project!

Code setup

Open the App.tsx file and get rid of the stuff we don’t need.

We want to go from this… …to this.
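The before and after shots are screenshots in the original post; stripped down, App.tsx ends up as roughly this minimal sketch:

function App() {
  return <div className="App"></div>;
}

export default App;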

Delete the entire content in App.css:

Woot! Now let’s create our themes! Open up the index.css file and add this to it:

:root {
  --background: white;
  --text-primary: black;
  --text-secondary: royalblue;
  --accent: purple;
}

[data-theme='dark'] {
  --background: black;
  --text-primary: white;
  --text-secondary: grey;
  --accent: darkred;
}

Here’s what we have so far:

See what we just did there? If you’re unfamiliar with CSS Custom Properties (also known as CSS variables), they allow us to define a value to be used elsewhere in our stylesheets, with the pattern being --key: value. In this case, we’re only defining a few colors and applying them to the :root element so they can be used wherever else we need them across the whole React project.

The second part, starting with [data-theme='dark'], is where things get interesting. HTML (and JSX, which we’re using to create HTML in React) allows us to set completely arbitrary properties for our HTML elements with the data-* attribute. In this case, we are giving the outermost <div> element of our application a data-theme attribute and toggling its value between light and dark. When it’s dark, the [data-theme='dark'] section of the CSS overrides the variables we defined in :root, so any styling which relies on those variables is toggled as well.

Let’s put that into practice. Back in App.tsx, let’s give React a way to track the theme state. We’d normally use something like useState for local state, or Redux for global state management, but we also want the user’s theme selection to stick around if they leave our app and come back later. While we could use Redux and redux-persist, that’s way overkill for our needs.

Instead, we’re using the useLocalStorage hook we installed earlier. It gives us a way to store things in local storage, as you might expect, but as a React hook, it maintains stateful knowledge of what it’s doing with localStorage, making our lives easy.

Some of you might be thinking, “Oh no, what if the page renders before our JavaScript checks in with localStorage and we get the dreaded ‘flash of wrong theme’?” But you don’t have to worry about that here since our React app is completely rendered client-side; the initial HTML file is basically a skeleton with a single <div> that React attaches the app to. All of the final HTML elements are generated by JavaScript after checking localStorage.

So, first, import the hook at the top of App.tsx with:

import useLocalStorage from 'use-local-storage'

Then, inside our App component, we use it with:

const defaultDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
const [theme, setTheme] = useLocalStorage('theme', defaultDark ? 'dark' : 'light');

This does a few things for us. First, we’re checking if the user has set a theme preference in their browser settings. Then we’re creating a stateful theme variable that is tied to localStorage and the setTheme function to update theme. useLocalStorage adds a key:value pair to localStorage if it doesn’t already exist, which defaults to theme: "light", unless our matchMedia check comes back as true, in which case it’s theme: "dark". That way, we’re gracefully handling both possibilities of keeping the theme settings for a returning user, or respecting their browser settings by default if we’re working with new users.

Next, we add a tiny bit of content to the App component so we have some elements to style, along with a button and function to actually allow us to toggle the theme.

The finished App.tsx file
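The finished file appears as a screenshot in the original post; a rough reconstruction (the exact markup may differ) looks like this:

import useLocalStorage from 'use-local-storage';
import './App.css';

function App() {
  const defaultDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
  const [theme, setTheme] = useLocalStorage('theme', defaultDark ? 'dark' : 'light');

  const switchTheme = () => {
    setTheme(theme === 'light' ? 'dark' : 'light');
  };

  return (
    <div className="App" data-theme={theme}>
      <button onClick={switchTheme}>Switch to {theme === 'light' ? 'dark' : 'light'} theme</button>
      <h1>Easy React themes!</h1>
    </div>
  );
}

export default App;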

The secret sauce is on line 14 where we’ve added data-theme={theme} to our top-level <div>. Now, by switching the value of theme, we are choosing whether or not to override the CSS variables in :root with the ones in the data-theme='dark' section of the index.css file.

The last thing we need to do is add some styling that uses those CSS variables we made earlier, and it’ll up and running! Open App.css and drop this CSS in there:

.App {
  color: var(--text-primary);
  background-color: var(--background);
  font-size: large;
  font-weight: bold;
  padding: 20px;
  height: calc(100vh - 40px);
  transition: all .5s;
}

button {
  color: var(--text-primary);
  background-color: var(--background);
  border: 2px var(--text-primary) solid;
  float: right;
  transition: all .5s;
}

Now the background and text for the main <div>, and the background, text, and outline of the <button>, rely on the CSS variables. That means when the theme changes, everything that depends on those variables updates as well. Also note that we added transition: all .5s to both the App and <button> for a smooth transition between color themes.

Now, head back to the browser that’s running the app, and here’s what we get:

Tada! Let’s add another component just to show how the system works if we’re building out a real app. We’ll add a /components folder in /src, put a /square folder in /components, and add a Square.tsx and square.css, like so:
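The files show up as a screenshot in the original post; as a sketch (class names illustrative), the component just leans on the same CSS variables:

// Square.tsx
import './square.css';

export default function Square() {
  return <div className="square">Hey, I’m a square!</div>;
}

/* square.css */
.square {
  background-color: var(--accent);
  color: var(--text-primary);
  padding: 20px;
}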

Let’s import it back into App.tsx, like so:
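Again, a screenshot in the original; the import itself is just the usual, assuming the folder structure above:

import Square from './components/square/Square';

…and then <Square /> gets rendered inside the App component’s JSX.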

Here’s what we have now as a result:

And there we go! Obviously, this is a pretty basic case where we’re only using a default (light) theme, and a secondary (dark) theme. But if your application calls for it, this system could be used to implement multiple theme options. Personally, I’m thinking of giving my next project options for light, dark, chocolate, and strawberry—go nuts!

Bonus: Integrating with React Scoped CSS:

Using React Scoped CSS is my favorite way to keep each component’s CSS encapsulated to prevent name collision messiness and unintended style inheritance. My previous go-to for this was CSS Modules, but that has the downside of making the in-browser DOM look like a robot wrote all of the class names… because that’s exactly the case. This lack of human-readability makes debugging far more annoying than it has to be. Enter React Scoped CSS. We get to keep writing CSS (or Sass) exactly the way we have been, and the output looks like a human wrote it.

Seeing as the React Scoped CSS repo provides full and detailed installation instructions, I’ll merely summarize them here.

First, install and configure Create React App Configuration Override (CRACO) according to their instructions. CRACO is a tool that lets us override some of the default webpack configuration that’s bundled into create-react-app (CRA). Normally, if you want to adjust webpack in a CRA project, you first have to “eject” the project, which is an irreversible operation and makes you fully responsible for all of the dependencies that are normally handled for you. You usually want to avoid ejecting unless you really, really know what you’re doing and have a good reason to go down that road. Instead, CRACO lets us make some minor adjustments to our webpack config without things getting messy.

Once that’s done, install the React Scoped CSS package:

npm i craco-plugin-scoped-css

(The README instructions use yarn for installation instead of npm, but either is fine.) Now that it’s installed, simply rename the CSS files by adding .scoped before the .css, like so:

app.css -> app.scoped.css

And we need to make sure we’re using a new name when importing that CSS into a component:

import './app.css'; -> import './app.scoped.css';

Now all of the CSS is encapsulated so that it only applies to the components they’re imported into. It works by using data-* properties, much like our theme system, so when a scoped CSS file is imported into a component, all of that component’s elements are labeled with a property, like data-v-46ef2374, and the styles from that file are wrapped so that they only apply to elements with that exact data property.

That’s all wonderful, but the little trick to making that work with this theming system is that we explicitly don’t want the CSS variables encapsulated; we want them applied to the whole project. So, we simply don’t change index.css to have scoped in it… in other words, we can leave that CSS file alone. That’s it! Now we have a powerful theming system working in harmony with scoped CSS—we’re living the dream!

GitHub Repo · Live Demo

Thank you so much for taking a read through this guide, and if it helped you build something awesome, I would love to know about it!


Quickly Get Alerted to Front-End Errors and Performance Issues

Css Tricks - Thu, 11/11/2021 - 5:22am

(This is a sponsored post.)

Measuring things is great. They say you only fix what you measure. Raygun is great at measuring websites: measuring performance, measuring errors and crashes, measuring code problems.

You know what’s even better than measuring? Having a system in place to notify you when anything significant happens with those measurements. That’s why Raygun now has powerful alerting.

Let’s look at some of the possibilities of alerts you can set up on your website so that you’re alerted when things go wrong.

Alert 1) Spike in Errors

In my experience, when you see a spike in errors being thrown in your app, it’s likely because a new release has gone to production, and it’s not behaving how you expected it to.

You need to know now, because errors like this can be tricky. Maybe it worked just fine in development, so you need as much time as you can get to root out what the problem is.

Creating a customized alert situation like this in Raygun is very straightforward! Here’s a quick video:

Alert 2) Critical Error

You likely want to be keeping an eye on all errors, but some errors are more critical than others. If a user throws an error trying to update their biography to a string that contains an emoji, well that’s unfortunate and you want to know about it so you can fix it. But if they can’t sign up, add to cart, or check out — well, that’s extra bad, and you need to know about it instantly so you can fix it as immediately as possible. If your users can’t do the main thing they are on your website to do, you’re seriously jeopardizing your business.

With Raygun Alerting, there are actually a couple ways to set this up.

  1. Set up the alert to watch for an Error Message containing any particular text
  2. (and/or) Set up the alert to watch for a particular tag

Error Message text is a nice catch-all, as you should be able to catch anything with that. But tagging is more targeted. These tags are of your own design, as you send them over yourself from your own app. For example, in JavaScript, say you performed some mission-critical operation in a try/catch block. Should the catch happen, you could send Raygun an event like:

rg4js('send', {
  error: e,
  tags: ['signup', 'mission_critical']
});

Then create alerts based on those tags as needed.

Alert 3) Slow Load Time

I’m not sure most people think about website performance tracking as something you tie real-time alerting to, but you should! There is no reason a website’s load time would all of a sudden nosedive (e.g., change from, say, 2 seconds to 5 seconds) unless something has changed. So if it does nosedive, you should be alerted right away, so you can examine recent changes and fix it.

With Raygun, an alert like this is extremely simple to set up. Here’s an example alert set up to watch for a certain load time threshold and email if there is ever a 10 minute time period in which loading times exceed that.

Setting up the alert in Raygun
Email notification of slowness

If you don’t want to be that aggressive to start with loading time, try 4 seconds. That’s the industry standard for slow loading. If you never get any alerts, slowly notch it down over time, giving you and your team progressively more impressive loading times to stay vigilant about.

Aside from alerts, you’ll also get weekly emails giving you an overview of performance issues.

Alert 4) Core Web Vitals

The new gold-standard web performance metrics are Core Web Vitals (CWV), which we’ve previously written about in terms of how Raygun helps with them. They measure things that really matter to users and are an SEO ranking factor for Google. Those are two big reasons to be extra careful with them and set up alerts if your website breaks acceptable thresholds you set up.

For example, CLS is Cumulative Layout Shift. Google tells us a CLS under 0.1 is good and above 0.25 is bad. So why don’t we shoot for staying under 0.1?

Here we’ve got an alert where if the CLS creeps up over 0.1, we’ll be alerted. Maybe we accidentally added some new content to the site (ads?) that arrives after the page loads and pushes content around. Perhaps we’ve adjusted a layout in a way that makes things more shifty than they were. Perhaps we’ve updated our custom fonts such that when they load, they cause shifting. If we’re alerted, we can fix it the moment we’re aware of it so the negative consequences don’t stick around.

Conclusion

For literally everything that you measure that you know is important to you, there should be an alerting mechanism in place. For anything website performance or error tracking related, Raygun has a perfect solution.


Don’t Snore on CORS

Css Tricks - Wed, 11/10/2021 - 12:38pm

Whatever, I just needed a title. Everyone’s favorite web security feature has crossed my desk a bunch of times lately and I always feel like that is a sign I should write something because that’s what blogging is.

The main problem with CORS is that developers don’t understand CORS. The basic concept of it is supposed to be easy: don’t run code across origins. Meaning if I, at css-tricks.com, try to fetch some JavaScript from an external URL, like any-other-website.com, the browser will just stop it by default. You’ll see an error in the console. Not allowed.

Unless, that is, the other website sends a header that specifically allows this. My domain can be whitelisted or there could be a wildcard that allows it. There is way more detail here (like preflighting and credentials) and, as ever, the MDN article does a good job on that front.
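For instance—my own sketch, not from the post—an Express server using the cors middleware (the same package that comes up below) could allow a single origin like this:

// Sketch: allow cross-origin requests from one specific origin.
const express = require('express');
const cors = require('cors');

const app = express();

// Responses now include: Access-Control-Allow-Origin: https://css-tricks.com
app.use(cors({ origin: 'https://css-tricks.com' }));

app.get('/script.js', (req, res) => {
  res.type('application/javascript').send('console.log("hello from another origin");');
});

app.listen(3000);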

What have traditionally been hair-pulling moments for me are when CORS seems to behave inconsistently. Two requests will go through and a third will fail, which seems inexplicable, but was reproducible. (Perhaps there was a load balancer involved with half-cached headers? Who knows.) Or I’m trying to use a proxy and the proxy stops working. I can’t even remember all the examples, but I bet I’ve been in meetings trying to debug CORS issues over 100 times in my life.

Anyway, those times where CORS have crossed my desk recently:

  • This video, Learn CORS In 6 Minutes, has 10,000 likes and seems to have struck a chord with folks. A non-ironic npm install cors was the solution here.
  • You have to literally tell servers to have the correct headers. So, similar to the video above, I had to do that in a video about Cloudflare Workers, where I used cross-origin (but you don’t have to, which is actually a very cool feature of Cloudflare Workers).
  • Jake’s article “How to win at CORS” which includes a playground.
  • There are browser extensions (like ones for Firefox and Chrome) that yank in CORS headers for you, which feels like a questionable workaround, but I wouldn’t blame anybody for using one in development.
  • I wrote about how easy it is to proxy… anything, including a third-party JavaScript file and make it first-party. Plenty of people pointed out in the comments that doing that totally removes the protection you get from CORS, which is danger-danger. Agreed, unless you 100% control that third-party, it’s quite dangerous.


Quick and Dirty Bootstrap Overrides at Runtime

Css Tricks - Wed, 11/10/2021 - 5:00am

Oh, Bootstrap, that old standard web library that either you hate or you spend all your time defending as “it’s fine, it’s not that bad.” Regardless of what side you fall on, it’s a powerful UI framework that’s everywhere, most people know the basics of it, and it gives you extremely predictable results.

For better or worse, Bootstrap is opinionated. It wants you to construct your HTML a certain way, it wants you to override styles a certain way, it wants to be built from core files a certain way, and it wants to be included in websites a certain way. Most of the time, unless you have a coworker who writes Bootstrap badly, this is fine, but it doesn’t cover all use cases.

Bootstrap wants to be generated server-side and it does not like having its styles overridden at runtime. If you’re in a situation where you want some sort of visual theme feature in your application, what Bootstrap wants you to do is generate separate stylesheets for each theme and swap out stylesheets as you need. This is a great way to do it if you have pre-defined themes you’re offering to users. But what if you want user-defined themes? You could set up your app to run Sass and compile new stylesheets and save them to the server, but that’s a lot of work—plus you have to go talk to the back-end guys and DevOps which is a bunch of hassle if you only want to, say, swap out primary and secondary colors, for example.

So this is where I was.

I’m building a multi-user SaaS app using Django and Vue with a fixed layout, but also a requirement to be able to change the branding colors for each user account with an automatic default color theme. There is another requirement that we don’t re-deploy the app every time a new user is added. And, finally, every single back-end and DevOps dev is currently swamped with other projects, so I have to solve this problem on my own.

Since I really don’t want to compile Sass at runtime, I could just create stylesheets and inject them into pages, but this is a bad solution since we’re focusing on colors. Compiled Bootstrap stylesheets render out the color values as explicit hex values, and (I just checked) there are 23 different instances of primary blue in my stylesheet. I would need to override every instance of that just for primary colors, then do it again for secondary, warning, danger, and all the other conventions and color standardizations we want to change. It’s complicated and a lot of work. I don’t want to do that.

Luckily, this new app doesn’t have a requirement to support Internet Explorer 11, so that means I have CSS variables at my disposal. They’re great, too, and they can be defined after loading a stylesheet, flowing in every direction and changing all the colors I want, right? And Bootstrap generates that big list of variables in the :root element, so this should be simple.

This is when I learned that Bootstrap only renders some of its values as variables in the stylesheet, and that this list of variables is intended entirely for end-user consumption. Most of the variables in that list are not referenced in the rest of the stylesheet, so redefining them does nothing. (However, it’s worth a note that better variable support at runtime may be coming in the future.)

So what I want is my Bootstrap stylesheet to render with CSS variables that I can manipulate on the server side instead of static color values, and strictly speaking, that’s not possible. Sass won’t compile if you set color variables as CSS variables. There are a couple of clever tricks available to make Sass do this (here’s one, and another), but they require branching Bootstrap, and branching away from the upgrade path introduces a bit of brittleness to my app that I’m unwilling to add. And if I’m perfectly honest, the real reason I didn’t implement those solutions was that I couldn’t figure out how to make any of them work with my Sass compiler. But you might have better luck.

This is where I think it’s worth explaining my preferred workflow. I prefer to run Sass locally on my dev machine to build stylesheets and commit the compiled stylesheets to the repo. Best practices would suggest the stylesheets should be compiled during deployment, and that’s correct, but I work for a growing, perpetually understaffed startup. I work with Sass because I like it, but in what is clearly a theme for my job, I don’t have the time, power or spiritual fortitude to integrate my Sass build with our various deployment pipelines.

It’s also a bit of lawful evil self-defense: I don’t want our full-stack developers to get their mitts on my finely-crafted styles and start writing whatever they want; and I’ve discovered that for some reason they have a terrible time getting Node installed on their laptops. Alas! They’re just stuck asking me to do it, and that’s exactly how I want things.

All of which is to say: if I can’t get the stylesheets to render with the variables in it, there’s nothing stopping me from injecting the variables into the stylesheet after it’s been compiled.

Behold the power of find and replace!

What we do is go into Bootstrap and find the colors we want to replace, conveniently found at the top of your compiled stylesheet in the :root style:

:root {
  --bs-blue: #002E6D;
  --bs-indigo: #6610F2;
  --bs-purple: #6F42C1;
  --bs-pink: #E83E8C;
  --bs-red: #DC3545;
  --bs-orange: #F2581C;
  --bs-yellow: #FFC107;
  --bs-green: #28A745;
  --bs-teal: #0C717A;
  --bs-cyan: #007DBC;
  --bs-white: #fff;
  --bs-gray: #6c757d;
  --bs-gray-dark: #343a40;
  --bs-gray-100: #f8f9fa;
  --bs-gray-200: #e9ecef;
  --bs-gray-300: #dee2e6;
  --bs-gray-400: #ced4da;
  --bs-gray-500: #adb5bd;
  --bs-gray-600: #6c757d;
  --bs-gray-700: #495057;
  --bs-gray-800: #343a40;
  --bs-gray-900: #212529;
  --bs-primary: #002E6D;
  --bs-brand: #DC3545;
  --bs-secondary: #495057;
  --bs-success: #28A745;
  --bs-danger: #DC3545;
  --bs-warning: #FFC107;
  --bs-info: #007DBC;
  --bs-light: #fff;
  --bs-dark: #212529;
  --bs-background-color: #e9ecef;
  --bs-bg-light: #f8f9fa;
  --bs-primary-rgb: 13, 110, 253;
  --bs-secondary-rgb: 108, 117, 125;
  --bs-success-rgb: 25, 135, 84;
  --bs-info-rgb: 13, 202, 240;
  --bs-warning-rgb: 255, 193, 7;
  --bs-danger-rgb: 220, 53, 69;
  --bs-light-rgb: 248, 249, 250;
  --bs-dark-rgb: 33, 37, 41;
  --bs-white-rgb: 255, 255, 255;
  --bs-black-rgb: 0, 0, 0;
  --bs-body-rgb: 33, 37, 41;
  --bs-font-sans-serif: system-ui, -apple-system, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans, Liberation Sans, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol, Noto Color Emoji;
  --bs-font-monospace: SFMono-Regular, Menlo, Monaco, Consolas, Liberation Mono, Courier New, monospace;
  --bs-gradient: linear-gradient(180deg, rgba(255, 255, 255, 0.15), rgba(255, 255, 255, 0));
  --bs-body-font-family: Source Sans Pro;
  --bs-body-font-size: 1rem;
  --bs-body-font-weight: 400;
  --bs-body-line-height: 1.5;
  --bs-body-color: #212529;
  --bs-body-bg: #e9ecef;
}

Grab the value for, say, --bs-primary, the good ol’ Bootstrap blue. I use Gulp to compile my stylesheets, so let’s take a look at the Sass task function for that in the gulpfile.js:

var gulp = require('gulp');
var sass = require('gulp-sass')(require('sass'));
var sourcemaps = require('gulp-sourcemaps');

function sassCompile() {
  return gulp.src('static/sass/project.scss')
    .pipe(sourcemaps.init())
    .pipe(sass({outputStyle: 'expanded'}))
    .pipe(sourcemaps.write('.'))
    .pipe(gulp.dest('/static/css/'));
}
exports.sass = sassCompile;

I want to copy and replace this color throughout my entire stylesheet with a CSS variable, so I installed gulp-replace to do that. We want our find-and-replace to happen at the very end of the process, after the stylesheet is compiled but before it’s saved. That means we ought to put the pipe at the end of the sequence, like so:

var gulp = require('gulp');
var sass = require('gulp-sass')(require('sass'));
var sourcemaps = require('gulp-sourcemaps');
var gulpreplace = require('gulp-replace');

function sassCompile() {
  return gulp.src('static/sass/project.scss')
    .pipe(sourcemaps.init())
    .pipe(sass({outputStyle: 'expanded'}))
    .pipe(sourcemaps.write('.'))
    .pipe(gulpreplace(/#002E6D/ig, 'var(--ct-primary)'))
    .pipe(gulp.dest('static/css/'));
}
exports.sass = sassCompile;

Compile the stylesheet, and check it out.

:root {
  --bs-blue: var(--ct-primary);
  --bs-indigo: #6610F2;
  --bs-purple: #6F42C1;
  --bs-pink: #E83E8C;
  --bs-red: #DC3545;
  --bs-orange: #F2581C;
  --bs-yellow: #FFC107;
  --bs-green: #28A745;
  --bs-teal: #0C717A;
  --bs-cyan: #007DBC;
  --bs-white: #fff;
  --bs-gray: #6c757d;
  --bs-gray-dark: #343a40;
  --bs-gray-100: #f8f9fa;
  --bs-gray-200: #e9ecef;
  --bs-gray-300: #dee2e6;
  --bs-gray-400: #ced4da;
  --bs-gray-500: #adb5bd;
  --bs-gray-600: #6c757d;
  --bs-gray-700: #495057;
  --bs-gray-800: #343a40;
  --bs-gray-900: #212529;
  --bs-primary: var(--ct-primary);
  --bs-brand: #DC3545;
  --bs-secondary: #495057;
  --bs-success: #28A745;
  --bs-danger: #DC3545;
  --bs-warning: #FFC107;
  --bs-info: #007DBC;
  --bs-light: #fff;
  --bs-dark: #212529;
  --bs-background-color: #e9ecef;
  --bs-bg-light: #f8f9fa;
  --bs-primary-rgb: 13, 110, 253;
  --bs-secondary-rgb: 108, 117, 125;
  --bs-success-rgb: 25, 135, 84;
  --bs-info-rgb: 13, 202, 240;
  --bs-warning-rgb: 255, 193, 7;
  --bs-danger-rgb: 220, 53, 69;
  --bs-light-rgb: 248, 249, 250;
  --bs-dark-rgb: 33, 37, 41;
  --bs-white-rgb: 255, 255, 255;
  --bs-black-rgb: 0, 0, 0;
  --bs-body-rgb: 33, 37, 41;
  --bs-font-sans-serif: system-ui, -apple-system, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans, Liberation Sans, sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol, Noto Color Emoji;
  --bs-font-monospace: SFMono-Regular, Menlo, Monaco, Consolas, Liberation Mono, Courier New, monospace;
  --bs-gradient: linear-gradient(180deg, rgba(255, 255, 255, 0.15), rgba(255, 255, 255, 0));
  --bs-body-font-family: Source Sans Pro;
  --bs-body-font-size: 1rem;
  --bs-body-font-weight: 400;
  --bs-body-line-height: 1.5;
  --bs-body-color: #212529;
  --bs-body-bg: #e9ecef;
}

Cool, OK, we now have an entire stylesheet that wants a variable value for blue. Notice it changed both the primary color and the “blue” color. This isn’t a subtle technique. I call it quick-and-dirty for a reason, but it’s fairly easy to get more fine-grained control of your color replacements if you need them. For instance, if you want to keep “blue” and “primary” as separate values, go into your Sass and redefine the $blue and $primary Sass variables into different values, and then you can separately find-and-replace them as needed.

Next, we need to define our new default variable value in the app. It’s as simple as doing this in the HTML head:

<link href="/static/css/project.css" rel="stylesheet">
<style>
  :root {
    --ct-primary: #002E6D;
  }
</style>

Run that and everything shows up. Everything that needs to be blue is blue. Repeat this process a few times, and you suddenly have lots of control over the colors in your Bootstrap stylesheet. These are the variables I’ve chosen to make available to users, along with their default color values:

--ct-primary: #002E6D;
--ct-primary-hover: #00275d;
--ct-secondary: #495057;
--ct-secondary-hover: #3e444a;
--ct-success: #28A745;
--ct-success-hover: #48b461;
--ct-danger: #DC3545;
--ct-danger-hover: #bb2d3b;
--ct-warning: #FFC107;
--ct-warning-hover: #ffca2c;
--ct-info: #007DBC;
--ct-info-hover: #006aa0;
--ct-dark: #212529;
--ct-background-color: #e9ecef;
--ct-bg-light: #f8f9fa;
--bs-primary-rgb: 0, 46, 109;
--bs-secondary-rgb: 73, 80, 87;
--bs-success-rgb: 40, 167, 69;
--bs-info-rgb: 0, 125, 188;
--bs-warning-rgb: 255, 193, 7;
--bs-danger-rgb: 220, 53, 69;
--bs-light-rgb: 248, 249, 250;
--bs-dark-rgb: 33, 37, 41;
--bs-white-rgb: 255, 255, 255;
--bs-black-rgb: 0, 0, 0;
--bs-body-rgb: 33, 37, 41;

Now the fun begins! From here, you can directly manipulate these defaults if you like, or add a second :root style below the defaults to override only the colors you want. Or do what I do, and put a text field in the user profile that outputs a :root style into your header overriding whatever you need. Voilà, you can now override Bootstrap at runtime without recompiling the stylesheet or losing your mind.
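For example, if a user picked purple as their brand color, the injected override could be as simple as this (hypothetical values):

<style>
  :root {
    --ct-primary: #6F42C1;
    --ct-primary-hover: #5a32a3;
  }
</style>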

This isn’t an elegant solution, certainly, but it solves a very specific use case that developers have been trying to solve for years now. And until Bootstrap decides it wants to let us easily override variables at runtime, this has proven to be a very effective solution for me.


cleanup.pictures

Css Tricks - Wed, 11/10/2021 - 4:55am

Nice domain, eh? Does just what it says on the tin: cleans up pictures. You draw over areas of the image you want cleaned up, and it does its best using weird science. It’s like Photoshop’s Spot Healing Brush, only a single-use free website. Much like the amazing remove.bg which is an equally amazing single-use website (and domain name).



Detecting Specific Text Input with HTML and CSS

Css Tricks - Tue, 11/09/2021 - 12:53pm

Louis Lazaris breaks down some bonafide CSS trickery from Jane. The Pen shows off interactivity where:

  1. You have to press a special combination of keys on a keyboard.
  2. Then type a secret password.

From there, a special message pops up on the screen. Easily JavaScript territory, but no, this is done here entirely in HTML and CSS, which is wild.

(Embedded CodePen demo.)

A lot of little-known features and tricks are combined here to pull this off, like HTML’s accesskey and pattern attributes, as well as :not(), :placeholder-shown, and :valid in CSS—not to mention the custom property toggle trick.

That’s… wow. And yet, look how very little code it is.
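To give a taste of just the pattern and :valid part—my own minimal sketch, not Jane’s actual code—an input can reveal a message only once a specific string has been typed:

<input type="text" pattern="opensesame" placeholder="Password">
<p class="message">You found the secret!</p>

.message {
  display: none;
}

/* Reveal only when the field is non-empty AND matches the pattern */
input:valid:not(:placeholder-shown) ~ .message {
  display: block;
}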



VideoPress for WordPress

Css Tricks - Tue, 11/09/2021 - 12:53pm

(This is a sponsored post.)

The lead here is that VideoPress makes video on WordPress way better. VideoPress is a part of Jetpack. And now, if VideoPress is the only thing you care about from the Jetpack world, you can pay for it à la carte for as low as $4.77/month. Or, get it included in the Jetpack Complete plan.

Lemme get into it, so you can see all of what VideoPress does for you.

Optimized, CDN-Hosted Video

When you drag-and-drop a video file onto the WordPress editor (even without VideoPress), it will upload and display just like an image will. Video files are generally much larger than image files, so right off the bat, you might run into file size limits. Your WordPress host likely limits upload size to somewhere between 4 and 128 MB. Video can easily be bigger than that. With VideoPress you’ve got up to 5 GB per file to work with (although they recommend 1 GB or lower for the best uploading success). You get 1 TB of total storage.

Even if you manage to host hundreds-of-megabyte video files yourself, that’s a heck of a lot of bandwidth for your own servers to be serving. Video is really meant to be served from servers tuned for video distribution, which is exactly what you get on VideoPress. It’s kind of like having your own personal YouTube or Vimeo, where the videos become embeds that come from a host service rather than hosted yourself, which is particularly ideal for video.

For performance reasons alone, VideoPress is worth it. You likely know that images are hard. Between WordPress and Jetpack, image handling on your website is extremely good (images are optimized, CDN-hosted, served with srcset/sizes, lazy-loaded, etc.). VideoPress makes video handling extremely good with the same features as well as some features really unique to video, like streaming video with adaptive bitrate streaming optimized for mobile.

Feature-Rich, Ad-Free, Customizable Player

So in a way, yes, you get a video player that is like what you’d get with YouTube. You get additional features like playback speed control, picture-in-picture, full-screen, volume control, etc.

That’s a lot better than a native <video> element that you get by default. But unlike a YouTube player, there are no ads potentially showing things you don’t want your visitors seeing on your site.

Here’s an example

In an extremely meta move, here’s an embedded VideoPress video of Dave and me talking… about VideoPress:

That video is just over 1 GB as I uploaded it!

Mobile “Posters”

With the native HTML <video> tag, on mobile browsers, you see literally nothing in the space the video renders unless you provide an image poster attribute on the video. That’s… fine, but it’s an awful lot of work to have to hand-craft an image for every video you ever post anywhere. I’d much rather have the video automatically show the first frame or some computer-chosen frame. You get that with VideoPress, so your videos on mobile look much nicer without having to do any work.

Getting all meta again, showing a screenshot of this very post in order to show off the mobile posters.

This seems like a tiny thing, but to me it’s not. I really like having a zero-effort way to make videos on mobile look good by default. The play button is maybe a little enormous but I’ll live.

There is nothing to learn

You just flip the switch to turn it on.

Flipping this switch in your Jetpack Settings is all that is required to use VideoPress.

Then with videos you upload, VideoPress takes over and does its thing. Your videos are uploaded to the VideoPress cloud for serving from there, but also uploaded to your WordPress media library, so you’ll always have the canonical version.

Here’s how it works:

Shortcodes

Another neat tidbit: once the video uploads, VideoPress generates a shortcode you can use to plop the video anywhere.
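A VideoPress shortcode references the video by its GUID, something like this (hypothetical ID):

[videopress AbCdE123]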

It’s available with a Jetpack Complete plan or à la carte.

You can start using VideoPress right away if you already have a Jetpack Complete plan. Or, it’s available as an à la carte offering at just $4.77 per month.

If you go for the Jetpack Complete plan you’ll also gain access to a ton more goodies, like real-time site backups, automated security scans, a complete CRM, spam protection, and Jetpack Search — all of which we use right here on CSS-Tricks.

Get VideoPress


React Suspense: Lessons Learned While Loading Data

Css Tricks - Tue, 11/09/2021 - 5:20am

Suspense is React’s forthcoming feature that helps coordinate asynchronous actions—like data loading—allowing you to easily prevent inconsistent state in your UI. I’ll provide a better explanation of what exactly that means, along with a quick introduction of Suspense, and then go over a somewhat realistic use case, and cover some lessons learned.

The features I’m covering are still in the alpha stage, and should by no means be used in production. This post is for folks who want to take a sneak peek at what’s coming, and see what the future looks like.

A Suspense primer

One of the more challenging parts of application development is coordinating application state and how data loads. It’s common for a state change to trigger new data loads in multiple locations. Typically, each piece of data would have its own loading UI (like a “spinner”), roughly where that data lives in the application. The asynchronous nature of data loading means each of these requests can be returned in any order. As a result, not only will your app have a bunch of different spinners popping in and out, but worse, your application might display inconsistent data. If two out of three of your data loads have completed, you’ll have a loading spinner sitting on top of that third location, still displaying the old, now outdated data.

I know that was a lot. If you find any of that baffling, you might be interested in a prior post I wrote about Suspense. That goes into much more detail on what Suspense is and what it accomplishes. Just note that a few minor pieces of it are now outdated, namely, the useTransition hook no longer takes a timeoutMs value, and waits as long as needed instead.

Now let’s do a quick walkthrough of the details, then get into a specific use case, which has a few lurking gotchas.

How does Suspense work?

Fortunately, the React team was smart enough to not limit these efforts to just loading data. Suspense works via low-level primitives, which you can apply to just about anything. Let’s take a quick look at these primitives.

First up is the <Suspense> boundary, which takes a fallback prop:

<Suspense fallback={<Fallback />}>

Whenever any child under this component suspends, it renders the fallback. No matter how many children are suspending, for whatever reason, the fallback is what shows. This is one way React ensures a consistent UI—it won’t render anything, until everything is ready.

But what about after things have rendered, initially, and now the user changes state, and loads new data. We certainly don’t want our existing UI to vanish and display our fallback; that would be a poor UX. Instead, we probably want to show one loading spinner, until all data are ready, and then show the new UI.

The useTransition hook accomplishes this. This hook returns a function and a boolean value. We call the function and wrap our state changes. Now things get interesting. React attempts to apply our state change. If anything suspends, React sets that boolean to true, then waits for the suspension to end. When it does, it’ll try to apply the state change again. Maybe it’ll succeed this time, or maybe something else suspends instead. Whatever the case, the boolean flag stays true until everything is ready, and then, and only then, does the state change complete and get reflected in the UI.

Lastly, how do we suspend? We suspend by throwing a promise. If data is requested, and we need to fetch, then we fetch—and throw a promise that’s tied to that fetch. The suspension mechanism being at a low level like this means we can use it with anything. The React.lazy utility for lazy loading components works with Suspense already, and I’ve previously written about using Suspense to wait until images are loaded before displaying a UI in order to prevent content from shifting.

Don’t worry, we’ll get into all this.

What we’re building

We’ll build something slightly different than the examples of many other posts like this. Remember, Suspense is still in alpha, so your favorite data loading utility probably doesn’t have Suspense support just yet. But that doesn’t mean we can’t fake a few things and get an idea of how Suspense works.

Let’s build an infinite loading list that displays some data, combined with some Suspense-based preloaded images. We’ll display our data, along with a button to load more. As data renders, we’ll preload the associated image, and Suspend until it’s ready.

This use case is based on actual work I’ve done on my side project (again, don’t use Suspense in production—but side projects are fair game). I was using my own GraphQL client, and this post is motivated by some of the difficulties I ran into. We’ll just fake the data loading in order to keep things simple and focus on Suspense itself, rather than any individual data loading utility.

Let’s build!

Here’s the sandbox for our initial attempt. We’re going to use it to walk through everything, so don’t feel pressured to understand all the code right now.

Our root App component renders a Suspense boundary like this:

<Suspense fallback={<Fallback />}>

Whenever anything suspends (unless the state change happened in a useTransition call), the fallback is what renders. To make things easier to follow, I made this Fallback component turn the entire UI pink, that way it’s tough to miss; our goal is to understand Suspense, not to build a quality UI.

We’re loading the current chunk of data inside of our DataList component:

const newData = useQuery(param);

Our useQuery hook is hardcoded to return fake data, including a timeout that simulates a network request. It handles caching the results and throws a promise if the data is not yet cached.
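The hook itself isn’t shown in the post (it lives in the sandbox), but the general shape of a Suspense-compatible cache looks something like this sketch, where getFakeDataForPage is a hypothetical stand-in for the fake data generator:

const cache = new Map();

function useQuery(page) {
  const entry = cache.get(page);
  if (entry === undefined) {
    // First request for this page: start the fake "network" call and
    // cache the pending promise so re-renders re-throw the same one.
    const promise = new Promise((resolve) => {
      setTimeout(() => {
        cache.set(page, { data: getFakeDataForPage(page) }); // hypothetical helper
        resolve();
      }, 1000);
    });
    cache.set(page, { promise });
    throw promise;
  }
  if (entry.promise) {
    throw entry.promise; // still loading—suspend again
  }
  return entry.data;
}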

We’re keeping (at least for now) the master list of data we’re displaying in state:

const [data, setData] = useState([]);

As new data comes in from our hook, we append it to our master list:

useEffect(() => {
  setData((d) => d.concat(newData));
}, [newData]);

Lastly, when the user wants more data, they click the button, which calls this:

function loadMore() {
  startTransition(() => {
    setParam((x) => x + 1);
  });
}

Finally, note that I’m using a SuspenseImg component to handle preloading the image I’m displaying with each piece of data. There are only five random images being displayed, but I’m adding a query string to ensure a fresh load for each new piece of data we encounter.
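The SuspenseImg component is covered in that prior post; the gist is a sketch like this—cache preloads by URL, and throw the pending promise until the browser has the image:

const imgCache = new Map();

function preloadImage(src) {
  if (!imgCache.has(src)) {
    const promise = new Promise((resolve) => {
      const img = new Image();
      img.onload = () => {
        imgCache.set(src, { loaded: true });
        resolve();
      };
      img.src = src;
    });
    imgCache.set(src, { promise });
  }
  return imgCache.get(src);
}

const SuspenseImg = ({ src, ...rest }) => {
  const entry = preloadImage(src);
  if (!entry.loaded) {
    throw entry.promise; // suspend until the image has loaded
  }
  return <img src={src} {...rest} />;
};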

Recap

To summarize where we are at this point, we have a hook that loads the current data. The hook obeys Suspense mechanics, and throws a promise while loading is happening. Whenever that data changes, the running total list of items is updated and appended with the new items. This happens in useEffect. Each item renders an image, and we use a SuspenseImg component to preload the image, and suspend until it’s ready. If you’re curious how some of that code works, check out my prior post on preloading images with Suspense.

Let’s test

This would be a pretty boring blog post if everything worked, and don’t worry, it doesn’t. Notice how, on the initial load, the pink fallback screen shows and then quickly hides, but then is redisplayed.

When we click the button that loads more data, we see the inline loading indicator (controlled by the useTransition hook) flip to true. Then we see it flip to false, before our original pink fallback shows. We were expecting to never see that pink screen again after the initial load; the inline loading indicator was supposed to show until everything was ready. What’s going on?

The problem

It’s been hiding right here in plain sight the entire time:

useEffect(() => {
  setData((d) => d.concat(newData));
}, [newData]);

useEffect runs when a state change is complete, i.e., a state change has finished suspending and has been applied to the DOM. That part, “has finished suspending,” is key here. We can set state in here if we’d like, but if that state change suspends, again, that is a brand new suspension. That’s why we saw the pink flash on initial load, as well as on subsequent loads when the data finished loading. In both cases, the data loading was finished, and then we set state in an effect which caused that new data to actually render, and suspend again, because of the image preloads.

So, how do we fix this? On one level, the solution is simple: stop setting state in the effect. But that’s easier said than done. How do we update our running list of entries to append new results as they come in, without using an effect? You might think we could track things with a ref.

Unfortunately, Suspense comes with some new rules about refs, namely, we can’t set refs inside of a render. If you’re wondering why, remember that Suspense is all about React attempting to run a render, seeing that promise get thrown, and then discarding that render midway through. If we mutated a ref before that render was cancelled and discarded, the ref would still have that changed, but invalid value. The render function needs to be pure, without side effects. This has always been a rule with React, but it matters more now.

Re-thinking our data loading

Here’s the solution, which we’ll go over, piece by piece.

First, instead of storing our master list of data in state, let’s do something different: let’s store a list of pages we’re viewing. We can store the most recent page in a ref (we won’t write to it in render, though), and we’ll store an array of all currently-loaded pages in state.

const currentPage = useRef(0);
const [pages, setPages] = useState([currentPage.current]);

In order to load more data, we’ll update accordingly:

function loadMore() {
  startTransition(() => {
    currentPage.current = currentPage.current + 1;
    setPages((pages) => pages.concat(currentPage.current));
  });
}

The tricky part, however, is turning those page numbers into actual data. What we certainly cannot do is loop over those pages and call our useQuery hook; hooks cannot be called in a loop. What we need is a new, non-hook-based data API. Based on a very unofficial convention I’ve seen in past Suspense demos, I’ll name this method read(). It is not going to be a hook. It returns the requested data if it’s cached, or throws a promise otherwise. For our fake data loading hook, no real changes were necessary; I simply copied and pasted the hook, then renamed it. But for an actual data loading utility library, authors will likely need to do some work to expose both options as part of their public API. In my GraphQL client referenced earlier, there is indeed both a useSuspenseQuery hook, and also a read() method on the client object.
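As a sketch—per the above, essentially the earlier useQuery sketch copied and renamed, same cache and same throw-a-promise contract (getFakeDataForPage remains a hypothetical helper), just not a hook, so it’s safe to call in a loop:

function read(page) {
  const entry = cache.get(page);
  if (entry === undefined) {
    const promise = new Promise((resolve) => {
      setTimeout(() => {
        cache.set(page, { data: getFakeDataForPage(page) }); // hypothetical helper
        resolve();
      }, 1000);
    });
    cache.set(page, { promise });
    throw promise;
  }
  if (entry.promise) {
    throw entry.promise;
  }
  return entry.data;
}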

With this new read() method in place, the final piece of our code is trivial:

const data = pages.flatMap((page) => read(page));

We’re taking each page, and requesting the corresponding data with our read() method. If any of the pages are uncached (which really should only be the last page in the list) then a promise is thrown, and React suspends for us. When the promise resolves, React attempts the prior state change again, and this code runs again.

Don’t let the flatMap call confuse you. That does the exact same thing as map except it takes each result in the new array and, if it itself is an array, “flattens” it.

The result

With these changes in place, everything works as we expected it to when we started. Our pink loading screen shows once on the initial load; then, on subsequent loads, the inline loading state shows until everything is ready.

Parting thoughts

Suspense is an exciting update that’s coming to React. It’s still in the alpha stages, so don’t try to use it anywhere that matters. But if you’re the kind of developer who enjoys taking a sneak peek at upcoming things, then I hope this post provided you some good context and info that’s useful when this releases.

The post React Suspense: Lessons Learned While Loading Data appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

CSS Grid Can Do Auto Height Transitions

Css Tricks - Mon, 11/08/2021 - 9:01am

Bonafide CSS trick alert! Nelson Menezes figured out a new way (that only works in Firefox for now) that is awfully clever.

Perhaps you know that CSS cannot animate to auto dimensions, which is super unfortunate. Animating from zero to “whatever is necessary” would be very helpful very often. We’ve documented the available techniques. They boil down to:

  • Animate the max-height to some larger-than-necessary value, which makes the timing and easing imprecise and janky.
  • Use JavaScript to measure the final size and animate to that, which means… using JavaScript.

Nelson’s technique is neither of those, nor some transform-based way with visual awkwardness. This technique uses CSS Grid at its core…

.expander {
  display: grid;
  grid-template-rows: 0fr;
  transition: grid-template-rows 1s;
}

.expander.expanded {
  grid-template-rows: 1fr;
}

Unbelievably, in Firefox, that transitions content inside that area between 0 and the natural height of the content. There is only a little more to it, like hiding overflow and visibility to make it look right while maintaining accessibility:

CodePen Embed Fallback
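If you want to experiment outside the embed, here's a rough sketch of the full pattern as I understand it; the class names are mine, and the details may differ from Nelson's demo:

<div class="expander">
  <div class="expander-content">Content of any natural height goes here.</div>
</div>

.expander-content {
  overflow: hidden;   /* clips content; also zeroes the row's automatic minimum size */
  visibility: hidden; /* keeps collapsed content away from screen readers and tabbing */
  transition: visibility 1s;
}

.expander.expanded .expander-content {
  visibility: visible;
}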

That’s wonderful. Let’s get some stars on this issue and maybe Chrome will pick it up. But of course, even better would be if auto height transitions just started working. I can’t imagine that’s totally outside the realm of possibility.

The post CSS Grid Can Do Auto Height Transitions appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Icon Glassmorphism Effect in CSS

Css Tricks - Mon, 11/08/2021 - 4:57am

I recently came across a cool effect known as glassmorphism in a Dribbble shot. My first thought was I could quickly recreate it in a few minutes if I just use some emojis for the icons without wasting time on SVG-ing them.

The effect we’re after.

I couldn’t have been more wrong about those “few minutes” — they ended up being days of furiously and frustratingly scratching this itch!

It turns out that, while there are resources on how to CSS such an effect, they all assume the very simple case where the overlay is rectangular or at most a rectangle with border-radius. However, getting a glassmorphism effect for irregular shapes like icons, whether these icons are emojis or proper SVGs, is a lot more complicated than I expected, so I thought it would be worth sharing the process, the traps I fell into and the things I learned along the way. And also the things I still don’t understand.

Why emojis?

Short answer: because SVG takes too much time. Long answer: because I lack the artistic sense to just draw them in an image editor, but I'm familiar enough with the syntax that I can often compact ready-made SVGs I find online to less than 10% of their original size. So, I cannot just use them as I find them on the internet — I have to redo the code to make it super clean and compact. And this takes time. A lot of time because it's detail work.

And if all I want is to quickly code a menu concept with icons, I resort to using emojis, applying a filter on them in order to make them match the theme and that’s it! It’s what I did for this liquid tab bar interaction demo — those icons are all emojis! The smooth valley effect makes use of the mask compositing technique.

Liquid navigation.

Alright, so this is going to be our starting point: using emojis for the icons.

The initial idea

My first thought was to stack the two pseudos (with emoji content) of the navigation links, slightly offset and rotate the bottom one with a transform so that they only partly overlap. Then, I’d make the top one semitransparent with an opacity value smaller than 1, set backdrop-filter: blur() on it, and that should be just about enough.

Now, having read the intro, you’ve probably figured out that didn’t go as planned, but let’s see what it looks like in code and what issues there are with it.

We generate the nav bar with the following Pug:

- let data = {
-   home: { ico: '&#x1f3e0;', hue: 200 },
-   notes: { ico: '&#x1f5d2;️', hue: 260 },
-   activity: { ico: '&#x1f514;', hue: 320 },
-   discovery: { ico: '&#x1f9ed;', hue: 30 }
- };
- let e = Object.entries(data);
- let n = e.length;

nav
  - for(let i = 0; i < n; i++)
    a(href='#' data-ico=e[i][1].ico style=`--hue: ${e[i][1].hue}deg`) #{e[i][0]}

Which compiles to the HTML below:

<nav>
  <a href='#' data-ico='&#x1f3e0;' style='--hue: 200deg'>home</a>
  <a href='#' data-ico='&#x1f5d2;️' style='--hue: 260deg'>notes</a>
  <a href='#' data-ico='&#x1f514;' style='--hue: 320deg'>activity</a>
  <a href='#' data-ico='&#x1f9ed;' style='--hue: 30deg'>discovery</a>
</nav>

We start with layout, making our elements grid items. We place the nav in the middle, give links explicit widths, put both pseudos for each link in the top cell (which pushes the link text content to the bottom cell) and middle-align the link text and pseudos.

body, nav, a { display: grid; }

body {
  margin: 0;
  height: 100vh;
}

nav {
  grid-auto-flow: column;
  place-self: center;
  padding: .75em 0 .375em;
}

a {
  width: 5em;
  text-align: center;

  &::before, &::after {
    grid-area: 1/ 1;
    content: attr(data-ico);
  }
}

Firefox screenshot of the result after we got layout basics sorted.

Note that the look of the emojis is going to be different depending on the browser you're using to view the demos.

We pick a legible font, bump up its size, make the icons even bigger, set backgrounds, and a nicer color for each of the links (based on the --hue custom property in the style attribute of each):

body {
  /* same as before */
  background: #333;
}

nav {
  /* same as before */
  background: #fff;
  font: clamp(.625em, 5vw, 1.25em)/ 1.25 ubuntu, sans-serif;
}

a {
  /* same as before */
  color: hsl(var(--hue), 100%, 50%);
  text-decoration: none;

  &::before, &::after {
    /* same as before */
    font-size: 2.5em;
  }
}

Chrome screenshot of the result (live demo) after prettifying things a bit.

Here’s where things start to get interesting because we start differentiating between the two emoji layers created with the link pseudos. We slightly move and rotate the ::before pseudo, make it monochrome with a sepia(1) filter, get it to the right hue, and bump up its contrast() — an oldie but goldie technique from Lea Verou. We also apply a filter: grayscale(1) on the ::after pseudo and make it semitransparent because, otherwise, we wouldn’t be able to see the other pseudo through it.

a {
  /* same as before */

  &::before {
    transform: translate(.375em, -.25em) rotate(22.5deg);
    filter: sepia(1) hue-rotate(calc(var(--hue) - 50deg)) saturate(3);
  }

  &::after {
    opacity: .5;
    filter: grayscale(1);
  }
}

Chrome screenshot of the result (live demo) after differentiating between the two icon layers.

Hitting a wall

So far, so good… so what? The next step, which I foolishly thought would be the last when I got the idea to code this, involves setting a backdrop-filter: blur(5px) on the top (::after) layer.

Note that Firefox still needs the gfx.webrender.all and layout.css.backdrop-filter.enabled flags set to true in about:config in order for the backdrop-filter property to work.

The flags that are still required in Firefox for backdrop-filter to work.

Sadly, the result looks nothing like what I expected. We get a sort of overlay the size of the entire top icon bounding box, but the bottom icon isn’t really blurred.

Chrome (top) and Firefox (bottom) screenshots of the result (live demo) after applying backdrop-filter.

However, I’m pretty sure I’ve played with backdrop-filter: blur() before and it worked, so what the hairy heck is going on here?

Working glassmorphism effect (live demo) in an older demo I coded.

Getting to the root of the problem

Well, when you have no idea whatsoever why something doesn’t work, all you can do is take another working example, start adapting it to try to get the result you want… and see where it breaks!

So let’s see a simplified version of my older working demo. The HTML is just an article in a section. In the CSS, we first set some dimensions, then we set an image background on the section, and a semitransparent one on the article. Finally, we set the backdrop-filter property on the article.

section { background: url(cake.jpg) 50%/ cover; }

article {
  margin: 25vmin;
  height: 40vh;
  background: hsla(0, 0%, 97%, .25);
  backdrop-filter: blur(5px);
}

Working glassmorphism effect (live demo) in a simplified test.

This works, but we don’t want our two layers nested in one another; we want them to be siblings. So, let’s make both layers article siblings, make them partly overlap and see if our glassmorphism effect still works.

<article class='base'></article>
<article class='grey'></article>

article {
  width: 66%;
  height: 40vh;
}

.base { background: url(cake.jpg) 50%/ cover; }

.grey {
  margin: -50% 0 0 33%;
  background: hsla(0, 0%, 97%, .25);
  backdrop-filter: blur(5px);
}

Chrome (top) and Firefox (bottom) screenshots of the result (live demo) when the two layers are siblings.

Everything still seems fine in Chrome and, for the most part, Firefox too. It’s just that the way blur() is handled around the edges in Firefox looks awkward and not what we want. And, based on the few images in the spec, I believe the Firefox result is also incorrect?

I suppose one fix for the Firefox problem in the case where our two layers sit on a solid background (white in this particular case) is to give the bottom layer (.base) a box-shadow with no offsets, no blur, and a spread radius that’s twice the blur radius we use for the backdrop-filter applied on the top layer (.grey). Sure enough, this fix seems to work in our particular case.
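In code, that fix would look something like this; a sketch that assumes the 5px blur radius and the solid white background from the demo:

.base {
  /* same as before */
  /* no offsets, no blur, spread = 2 × the 5px backdrop-filter blur radius,
     in the same white as the solid background behind both layers */
  box-shadow: 0 0 0 10px #fff;
}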

Things get a lot hairier if our two layers sit on an element with an image background that’s not fixed (in which case, we could use a layered backgrounds approach to solve the Firefox issue), but that’s not the case here, so we won’t get into it.

Still, let’s move on to the next step. We don’t want our two layers to be two square boxes, we want then to be emojis, which means we cannot ensure semitransparency for the top one using a hsla() background — we need to use opacity.

.grey {
  /* same as before */
  opacity: .25;
  background: hsl(0, 0%, 97%);
}

The result (live demo) when the top layer is made semitransparent using opacity instead of a hsla() background.

It looks like we found the problem! For some reason, making the top layer semitransparent using opacity breaks the backdrop-filter effect in both Chrome and Firefox. Is that a bug? Is that what’s supposed to happen?

Bug or not?

MDN says the following in the very first paragraph on the backdrop-filter page:

Because it applies to everything behind the element, to see the effect you must make the element or its background at least partially transparent.

Unless I don’t understand the above sentence, this appears to suggest that opacity shouldn’t break the effect, even though it does in both Chrome and Firefox.

What about the spec? Well, the spec is a huge wall of text without many illustrations or interactive demos, written in a language that makes reading it about as appealing as sniffing a skunk’s scent glands. It contains this part, which I have a feeling might be relevant, but I’m unsure that I understand what it’s trying to say — that the opacity set on the top element that we also have the backdrop-filter on also gets applied on the sibling underneath it? If that’s the intended result, it surely isn’t happening in practice.

The effect of the backdrop-filter will not be visible unless some portion of element B is semi-transparent. Also note that any opacity applied to element B will be applied to the filtered backdrop image as well.

Trying random things

Whatever the spec may be saying, the fact remains: making the top layer semitransparent with the opacity property breaks the glassmorphism effect in both Chrome and Firefox. Is there any other way to make an emoji semitransparent? Well, we could try filter: opacity()!

At this point, I should probably be reporting whether this alternative works or not, but the reality is… I have no idea! I spent a couple of days on this step and checked the test countless times in the meantime. Sometimes it works, sometimes it doesn't, in the exact same browsers, with different results depending on the time of day. I also asked on Twitter and got mixed answers. Just one of those moments when you can't help but wonder whether some Halloween ghost isn't haunting, scaring, and scarring your code. For eternity!

It looks like all hope is gone, but let’s try just one more thing: replacing the rectangles with text, the top one being semitransparent with color: hsla(). We may be unable to get the cool emoji glassmorphism effect we were after, but maybe we can get such a result for plain text.

So we add text content to our article elements, drop their explicit sizing, bump up their font-size, adjust the margin that gives us partial overlap and, most importantly, replace the background declarations in the last working version with color ones. For accessibility reasons, we also set aria-hidden='true' on the bottom one.

<article class='base' aria-hidden='true'>Lion &#x1f9e1;</article>
<article class='grey'>Lion &#x1f5a4;</article>

article { font: 900 21vw/ 1 cursive; }

.base { color: #ff7a18; }

.grey {
  margin: -.75em 0 0 .5em;
  color: hsla(0, 0%, 50%, .25);
  backdrop-filter: blur(5px);
}

Chrome (top) and Firefox (bottom) screenshots of the result (live demo) when we have two text layers.

There are a couple of things to note here.

First, setting the color property to a value with a subunitary alpha also makes emojis semitransparent, not just plain text, both in Chrome and in Firefox! This is something I never knew before and I find absolutely mindblowing, given the other channels don’t influence emojis in any way.

Second, both Chrome and Firefox are blurring the entire area of the orange text and emoji that’s found underneath the bounding box of the top semitransparent grey layer, instead of just blurring what’s underneath the actual text. In Firefox, things look even worse due to that awkward sharp edge effect.

Even though the box blur is not what we want, I can’t help but think it does make sense since the spec does say the following:

[…] to create a “transparent” element that allows the full filtered backdrop image to be seen, you can use “background-color: transparent;”.

So let’s make a test to check what happens when the top layer is another non-rectangular shape that’s not text, but instead obtained with a background gradient, a clip-path or a mask!

Chrome (top) and Firefox (bottom) screenshots of the result (live demo) when the top layer is a non-rectangular shape.

In both Chrome and Firefox, the area underneath the entire box of the top layer gets blurred when the shape is obtained with background: gradient() which, as mentioned in the text case before, makes sense per the spec. However, Chrome respects the clip-path and mask shapes, while Firefox doesn’t. And, in this case, I really don’t know which is correct, though the Chrome result does make more sense to me.

Moving towards a Chrome solution

This result and a Twitter suggestion I got when I asked how to make the blur respect the text edges and not those of its bounding box led me to the next step for Chrome: applying a mask clipped to the text on the top layer (.grey). This solution doesn’t work in Firefox for two reasons: one, text is sadly a non-standard mask-clip value that only works in WebKit browsers and, two, as shown by the test above, masking doesn’t restrict the blur area to the shape created by the mask in Firefox anyway.

/* same as before */

.grey {
  /* same as before */
  -webkit-mask: linear-gradient(red, red) text; /* only works in WebKit browsers */
}

Chrome screenshot of the result (live demo) when the top layer has a mask restricted to the text area.

Alright, this actually looks like what we want, so we can say we’re heading in the right direction! However, here we’ve used an orange heart emoji for the bottom layer and a black heart emoji for the top semitransparent layer. Other generic emojis don’t have black and white versions, so my next idea was to initially make the two layers identical, then make the top one semitransparent and use filter: grayscale(1) on it.

article {
  color: hsla(25, 100%, 55%, var(--a, 1));
  font: 900 21vw/ 1.25 cursive;
}

.grey {
  --a: .25;
  margin: -1em 0 0 .5em;
  filter: grayscale(1);
  backdrop-filter: blur(5px);
  -webkit-mask: linear-gradient(red, red) text;
}

Chrome screenshot of the result (live demo) when the top layer gets a grayscale(1) filter.

Well, that certainly had the effect we wanted on the top layer. Unfortunately, for some weird reason, it seems to have also affected the blurred area of the layer underneath. This is the moment to briefly consider throwing the laptop out the window… before getting the idea of adding yet another layer.

It would go like this: we have the base layer, just like we have so far, slightly offset from the other two above it. The middle layer is a “ghost” (transparent) one that has the backdrop-filter applied. And finally, the top one is semitransparent and gets the grayscale(1) filter.

body { display: grid; }

article {
  grid-area: 1/ 1;
  place-self: center;
  padding: .25em;
  color: hsla(25, 100%, 55%, var(--a, 1));
  font: 900 21vw/ 1.25 pacifico, z003, segoe script, comic sans ms, cursive;
}

.base { margin: -.5em 0 0 -.5em; }

.midl {
  --a: 0;
  backdrop-filter: blur(5px);
  -webkit-mask: linear-gradient(red, red) text;
}

.grey { filter: grayscale(1) opacity(.25); }

Chrome screenshot of the result (live demo) with three layers.

Now we’re getting somewhere! There’s just one more thing left to do: make the base layer monochrome!

/* same as before */

.base {
  margin: -.5em 0 0 -.5em;
  filter: sepia(1) hue-rotate(165deg) contrast(1.5);
}

Chrome screenshot of the result (live demo) we were after.

Alright, this is the effect we want!

Getting to a Firefox solution

While coding the Chrome solution, I couldn’t help but think we may be able to pull off the same result in Firefox since Firefox is the only browser that supports the element() function. This function allows us to take an element and use it as a background for another element.

The idea is that the .base and .grey layers will have the same styles as in the Chrome version, while the middle layer will have a background that’s (via the element() function) a blurred version of our layers.

To make things easier, we start with just this blurred version and the middle layer.

<article id='blur' aria-hidden='true'>Lion &#x1f981;</article>
<article class='midl'>Lion &#x1f981;</article>

We absolutely position the blurred version (still keeping it in sight for now), make it monochrome and blur it and then use it as a background for .midl.

#blur {
  position: absolute;
  top: 2em;
  right: 0;
  margin: -.5em 0 0 -.5em;
  filter: sepia(1) hue-rotate(165deg) contrast(1.5) blur(5px);
}

.midl {
  --a: .5;
  background: -moz-element(#blur);
}

We’ve also made the text on the .midl element semitransparent so we can see the background through it. We’ll make it fully transparent eventually, but for now, we still want to see its position relative to the background.

Firefox screenshot of the result (live demo) when using the blurred element #blur as a background.

We notice one issue right away: while margin works to offset the actual #blur element, it does nothing to shift its position as a background. In order to get such an effect, we need to use the transform property. This can also help us if we want a rotation or any other transform, as can be seen below, where we've replaced the margin with transform: rotate(-9deg).

Firefox screenshot of the result (live demo) when using transform: rotate() instead of margin on the #blur element.

Alright, but we’re still sticking to just a translation for now:

#blur {
  /* same as before */
  transform: translate(-.25em, -.25em); /* replaced margin */
}

Firefox screenshot of the result (live demo) when using transform: translate() instead of margin on the #blur element.

One thing to note here is that a bit of the blurred background gets cut off as it goes outside the limits of the middle layer’s padding-box. That doesn’t matter at this step anyway since our next move is to clip the background to the text area, but it’s good to just have that space since the .base layer is going to get translated just as far.

Firefox screenshot highlighting how the translated #blur background exceeds the limits of the padding-box on the .midl element.

So, we’re going to bump up the padding by a little bit, even if, at this point, it makes absolutely no difference visually as we’re also setting background-clip: text on our .midl element.

article {
  /* same as before */
  padding: .5em;
}

#blur {
  position: absolute;
  bottom: 100vh;
  transform: translate(-.25em, -.25em);
  filter: sepia(1) hue-rotate(165deg) contrast(1.5) blur(5px);
}

.midl {
  --a: .1;
  background: -moz-element(#blur);
  background-clip: text;
}

We’ve also moved the #blur element out of sight and further reduced the alpha of the .midl element’s color, as we want a better view at the background through the text. We’re not making it fully transparent, but still keeping it visible for now just so we know what area it covers.

Firefox screenshot of the result (live demo) after clipping the .midl element’s background to text.

The next step is to add the .base element with pretty much the same styles as it had in the Chrome case, only replacing the margin with a transform.

<article id='blur' aria-hidden='true'>Lion &#x1f981;</article>
<article class='base' aria-hidden='true'>Lion &#x1f981;</article>
<article class='midl'>Lion &#x1f981;</article>

#blur {
  position: absolute;
  bottom: 100vh;
  transform: translate(-.25em, -.25em);
  filter: sepia(1) hue-rotate(165deg) contrast(1.5) blur(5px);
}

.base {
  transform: translate(-.25em, -.25em);
  filter: sepia(1) hue-rotate(165deg) contrast(1.5);
}

Since some of these styles are shared, we can also add the .base class to our blurred element #blur in order to avoid duplication and reduce the amount of code we write.

<article id='blur' class='base' aria-hidden='true'>Lion &#x1f981;</article>
<article class='base' aria-hidden='true'>Lion &#x1f981;</article>
<article class='midl'>Lion &#x1f981;</article>

#blur {
  --r: 5px;
  position: absolute;
  bottom: 100vh;
}

.base {
  transform: translate(-.25em, -.25em);
  filter: sepia(1) hue-rotate(165deg) contrast(1.5) blur(var(--r, 0));
}

Firefox screenshot of the result (live demo) after adding the .base layer.

We have a different problem here. Since the .base layer has a transform, it’s now on top of the .midl layer in spite of DOM order. The simplest fix? Add z-index: 2 on the .midl element!

Firefox screenshot of the result (live demo) after fixing the layer order such that .base is underneath .midl.

We still have another, slightly more subtle problem: the .base element is still visible underneath the semitransparent parts of the blurred background we've set on the .midl element. We don't want to see the sharp edges of the .base layer text underneath, but we do, because blurring causes pixels close to the edge to become semitransparent.

The blur effect around the edges.

Depending on what kind of background we have on the parent of our text layers, this is a problem that can be solved with a little or a lot of effort.

If we only have a solid background, the problem gets solved by setting the background-color on our .midl element to that same value. Fortunately, this happens to be our case, so we won’t go into discussing the other scenario. Maybe in another article.

.midl {
  /* same as before */
  background: -moz-element(#blur) #fff;
  background-clip: text;
}

Firefox screenshot of the result (live demo) after ensuring the .base layer isn't visible through the background of the .midl one.

We’re getting close to a nice result in Firefox! All that’s left to do is add the top .grey layer with the exact same styles as in the Chrome version!

.grey { filter: grayscale(1) opacity(.25); }

Sadly, doing this doesn’t produce the result we want, which is something that’s really obvious if we also make the middle layer text fully transparent (by zeroing its alpha --a: 0) so that we only see its background (which uses the blurred element #blur on top of solid white) clipped to the text area:

Firefox screenshot of the result (live demo) after adding the top .grey layer.

The problem is we cannot see the .grey layer! Due to setting z-index: 2 on it, the middle layer .midl is now above what should be the top layer (the .grey one), in spite of the DOM order. The fix? Set z-index: 3 on the .grey layer!

.grey {
  z-index: 3;
  filter: grayscale(1) opacity(.25);
}

I’m not really fond of giving out z-index layer after layer, but hey, it’s low effort and it works! We now have a nice Firefox solution:

Firefox screenshot of the result (live demo) we were after.

Combining our solutions into a cross-browser one

We start with the Firefox code because there’s just more of it:

<article id='blur' class='base' aria-hidden='true'>Lion &#x1f981;</article>
<article class='base' aria-hidden='true'>Lion &#x1f981;</article>
<article class='midl' aria-hidden='true'>Lion &#x1f981;</article>
<article class='grey'>Lion &#x1f981;</article>

body { display: grid; }

article {
  grid-area: 1/ 1;
  place-self: center;
  padding: .5em;
  color: hsla(25, 100%, 55%, var(--a, 1));
  font: 900 21vw/ 1.25 cursive;
}

#blur {
  --r: 5px;
  position: absolute;
  bottom: 100vh;
}

.base {
  transform: translate(-.25em, -.25em);
  filter: sepia(1) hue-rotate(165deg) contrast(1.5) blur(var(--r, 0));
}

.midl {
  --a: 0;
  z-index: 2;
  background: -moz-element(#blur) #fff;
  background-clip: text;
}

.grey {
  z-index: 3;
  filter: grayscale(1) opacity(.25);
}

The extra z-index declarations don’t impact the result in Chrome and neither does the out-of-sight #blur element. The only things that this is missing in order for this to work in Chrome are the backdrop-filter and the mask declarations on the .midl element:

backdrop-filter: blur(5px);
-webkit-mask: linear-gradient(red, red) text;

Since we don’t want the backdrop-filter to get applied in Firefox, nor do we want the background to get applied in Chrome, we use @supports:

$r: 5px;

/* same as before */

#blur {
  /* same as before */
  --r: #{$r};
}

.midl {
  --a: 0;
  z-index: 2;
  /* need to reset inside @supports so it doesn't get applied in Firefox */
  backdrop-filter: blur($r);
  /* invalid value in Firefox, not applied anyway, no need to reset */
  -webkit-mask: linear-gradient(red, red) text;

  @supports (background: -moz-element(#blur)) { /* for Firefox */
    background: -moz-element(#blur) #fff;
    background-clip: text;
    backdrop-filter: none;
  }
}

This gives us a cross-browser solution!

Chrome (top) and Firefox (bottom) screenshots of the result (live demo) we were after.

While the result isn’t the same in the two browsers, it’s still pretty similar and good enough for me.

What about one-elementing our solution?

Sadly, that’s impossible.

First off, the Firefox solution requires us to have at least two elements since we use one (referenced by its id) as a background for another.

Second, while the first thought with the remaining three layers (which are the only ones we need for the Chrome solution anyway) is that one of them could be the actual element and the other two its pseudos, it’s not so simple in this particular case.

For the Chrome solution, each of the layers has at least one property that also irreversibly impacts any children and any pseudos it may have. For the .base and .grey layers, that’s the filter property. For the middle layer, that’s the mask property.

So while it’s not pretty to have all those elements, it looks like we don’t have a better solution if we want the glassmorphism effect to work on emojis too.

If we only want the glassmorphism effect on plain text — no emojis in the picture — this can be achieved with just two elements, out of which only one is needed for the Chrome solution. The other one is the #blur element, which we only need in Firefox.

<article id='blur'>Blood</article>
<article class='text' aria-hidden='true' data-text='Blood'></article>

We use the two pseudos of the .text element to create the base layer (with the ::before) and a combination of the other two layers (with the ::after). What helps us here is that, with emojis out of the picture, we don’t need filter: grayscale(1), but instead we can control the saturation component of the color value.

These two pseudos are stacked one on top of the other, with the bottom one (::before) offset by the same amount and having the same color as the #blur element. This color value depends on a flag, --f, that helps us control both the saturation and the alpha. For both the #blur element and the ::before pseudo (--f: 1), the saturation is 100% and the alpha is 1. For the ::after pseudo (--f: 0), the saturation is 0% and the alpha is .25.

$r: 5px;

%text { // used by #blur and both .text pseudos
  --f: 1;
  grid-area: 1/ 1; // stack pseudos, ignored for absolutely positioned #base
  padding: .5em;
  color: hsla(345, calc(var(--f)*100%), 55%, calc(.25 + .75*var(--f)));
  content: attr(data-text);
}

article { font: 900 21vw/ 1.25 cursive }

#blur {
  position: absolute;
  bottom: 100vh;
  filter: blur($r);
}

#blur, .text::before {
  transform: translate(-.125em, -.125em);
  @extend %text;
}

.text {
  display: grid;

  &::after {
    --f: 0;
    @extend %text;
    z-index: 2;
    backdrop-filter: blur($r);
    -webkit-mask: linear-gradient(red, red) text;

    @supports (background: -moz-element(#blur)) {
      background: -moz-element(#blur) #fff;
      background-clip: text;
      backdrop-filter: none;
    }
  }
}

CodePen Embed Fallback

Applying the cross-browser solution to our use case

The good news here is our particular use case where we only have the glassmorphism effect on the link icon (not on the entire link including the text) actually simplifies things a tiny little bit.

We use the following Pug to generate the structure:

- let data = {
-   home: { ico: '&#x1f3e0;', hue: 200 },
-   notes: { ico: '&#x1f5d2;️', hue: 260 },
-   activity: { ico: '&#x1f514;', hue: 320 },
-   discovery: { ico: '&#x1f9ed;', hue: 30 }
- };
- let e = Object.entries(data);
- let n = e.length;

nav
  - for(let i = 0; i < n; i++)
    - let ico = e[i][1].ico;
    a.item(href='#' style=`--hue: ${e[i][1].hue}deg`)
      span.icon.tint(id=`blur${i}` aria-hidden='true') #{ico}
      span.icon.tint(aria-hidden='true') #{ico}
      span.icon.midl(aria-hidden='true' style=`background-image: -moz-element(#blur${i})`) #{ico}
      span.icon.grey(aria-hidden='true') #{ico}
      | #{e[i][0]}

Which produces an HTML structure like the one below:

<nav>
  <a class='item' href='#' style='--hue: 200deg'>
    <span class='icon tint' id='blur0' aria-hidden='true'>&#x1f3e0;</span>
    <span class='icon tint' aria-hidden='true'>&#x1f3e0;</span>
    <span class='icon midl' aria-hidden='true' style='background-image: -moz-element(#blur0)'>&#x1f3e0;</span>
    <span class='icon grey' aria-hidden='true'>&#x1f3e0;</span>
    home
  </a>
  <!-- the other nav items -->
</nav>

We could probably replace a part of those spans with pseudos, but I feel it’s more consistent and easier like this, so a span sandwich it is!

One very important thing to notice is that we have a different blurred icon layer for each of the items (because each and every item has its own icon), so we set the background of the .midl element to it in the style attribute. Doing things this way allows us to avoid making any changes to the CSS file if we add or remove entries from the data object (thus changing the number of menu items).

We have almost the same layout and prettified styles we had when we first CSS-ed the nav bar. The only difference is that now we don’t have pseudos in the top cell of an item’s grid; we have the spans:

span {
  grid-area: 1/ 1; /* stack all emojis on top of one another */
  font-size: 4em; /* bump up emoji size */
}

For the emoji icon layers themselves, we also don't need to make many changes from the cross-browser version we got a bit earlier, though there are a few little ones.

First off, we use the transform and filter chains we picked initially when we were using the link pseudos instead of spans. We also don’t need the color: hsla() declaration on the span layers any more since, given that we only have emojis here, it’s only the alpha channel that matters. The default, which is preserved for the .base and .grey layers, is 1. So, instead of setting a color value where only the alpha, --a, channel matters and we change that to 0 on the .midl layer, we directly set color: transparent there. We also only need to set the background-color on the .midl element in the Firefox case as we’ve already set the background-image in the style attribute. This leads to the following adaptation of the solution:

.base { /* mono emoji version */
  transform: translate(.375em, -.25em) rotate(22.5deg);
  filter: sepia(1) hue-rotate(var(--hue)) saturate(3) blur(var(--r, 0));
}

.midl { /* middle, transparent emoji version */
  color: transparent; /* so it's not visible */
  backdrop-filter: blur(5px);
  -webkit-mask: linear-gradient(red 0 0) text;

  @supports (background: -moz-element(#b)) {
    background-color: #fff;
    background-clip: text;
    backdrop-filter: none;
  }
}

And that’s it — we have a nice icon glassmorphism effect for this nav bar!

Chrome (top) and Firefox (bottom) screenshots of the desired emoji glassmorphism effect (live demo).

There’s just one more thing to take care of — we don’t want this effect at all times; only on :hover or :focus states. So, we’re going to use a flag, --hl, which is 0 in the normal state, and 1 in the :hover or :focus state in order to control the opacity and transform values of the .base spans. This is a technique I’ve detailed in an earlier article.

$t: .3s;

a {
  /* same as before */
  --hl: 0;
  color: hsl(var(--hue), calc(var(--hl)*100%), 65%);
  transition: color $t;

  &:hover, &:focus { --hl: 1; }
}

.base {
  transform:
    translate(calc(var(--hl)*.375em), calc(var(--hl)*-.25em))
    rotate(calc(var(--hl)*22.5deg));
  opacity: var(--hl);
  transition: transform $t, opacity $t;
}

The result can be seen in the interactive demo below when the icons are hovered or focused.

CodePen Embed Fallback

What about using SVG icons?

I naturally asked myself this question after all it took to get the CSS emoji version working. Wouldn’t the plain SVG way make more sense than a span sandwich, and wouldn’t it be simpler? Well, while it does make more sense, especially since we don’t have emojis for everything, it’s sadly not less code and it’s not any simpler either.

But we’ll get into details about that in another article!

The post Icon Glassmorphism Effect in CSS appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Does the Next Generation of Static Site Generators Make Building Sites Better?

Css Tricks - Mon, 11/08/2021 - 4:56am

Just ran across îles, a new static site generator mostly centered around Vue. The world has no particular shortage of static site generators, but it’s interesting to see what this “next generation” of SSGs seem to focus on or try to solve.

îles looks to take a heaping spoonful of inspiration from Astro. If we consider them together, along with other emerging and quickly-evolving SSGs, there are some similarities:

  • Ship zero JavaScript by default. Interactive bits are opt-in — that’s what the islands metaphor is all about. Astro and îles do it at the per-component level and SvelteKit prefers it at the page level.
  • Additional fanciness around controls for when hydration happens, like “when the browser is idle,” or “when the component is visible.”
  • Use a fast build tool, like Vite, which uses the Go-based esbuild under the hood, or the Rust-based swc in the case of Next 12.
  • Support multiple JavaScript frameworks for componentry. Astro and îles do this out of the box, and another example is how Slinkity brings that to Eleventy.
  • File-system based routing.
  • Assumption that Markdown is used for content.

When you compare these to first-cohort SSGs, like Jekyll, I get a few feelings:

  1. These really aren’t that much different. The feature set is largely the same.
  2. The biggest change is probably that far more of them are JavaScript-library-based. Turns out JavaScript libraries are really what people wanted out of HTML preprocessors, perhaps because of the strong focus on components.
  3. They are incrementally better. They are faster, the live reloading is better, the common needs have been ironed out.

The post Does the Next Generation of Static Site Generators Make Building Sites Better? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Favicons: How to Make Sure Browsers Only Download the SVG Version

Css Tricks - Fri, 11/05/2021 - 10:47am

Šime Vidas DM’d me the other day about this thread from subzey on Twitter. My HTML for favicons was like this:

<!-- Warning! Typo! -->
<link rel="icon" href="/favicon.ico" size="any">
<link rel="icon" href="/favicon.svg" type="image/svg+xml">

The attribute size is a typo there, and should be sizes. Like this:

<!-- Correct -->
<link rel="icon" href="/favicon.ico" sizes="any">
<link rel="icon" href="/favicon.svg" type="image/svg+xml">

And with that, Chrome no longer double-downloads both icons, and instead uses the SVG alone (as it should). Just something to watch out for. My ICO file is 5.8kb, so that's 5.8kb saved on every single uncached page load, which feels non-trivial to me.

Šime noted this in Web Platform News #42:

SVG favicons are supported in all modern browsers except Safari. If your website declares both an ICO (fallback) and SVG icon, make sure to add the sizes=any attribute to the ICO <link> to prevent Chrome from downloading and using the ICO icon instead of the SVG icon (see Chrome bug 1162276 for more info). CSS-Tricks is an example of a website that has the optimal icon markup in its <head> (three <link> elements, one each for favicon.ico, favicon.svg, and apple-touch-icon.png).

That note about CSS-Tricks is a bit generous in that it’s only correct because my incorrectness was pointed out ahead of time. I think the root of my typo was Andrey’s article, but that’s been fixed. Andrey’s article is still likely the best reference for the most practical favicon markup.

The post Favicons: How to Make Sure Browsers Only Download the SVG Version appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Yes, Design Systems Do Improve Developer Efficiency and Design Consistency

Css Tricks - Fri, 11/05/2021 - 9:58am

One of the toughest things about being someone who cares deeply about design systems is making the case for a dedicated design system. Folks in leadership will often ask you to prove the value of it. Why should we care about good front-end development and consistency? Sure, sure, sure, they say—everyone wants a flashy design system—but is it worth the cost?

That question is tough because developer productivity, front-end quality, and even accessibility to some extent, are all such nebulous things. In contrast, this is one of the smartest things about Google’s Core Web Vitals because it puts a number on the problem and provides very actionable things to do next.

When it comes to design systems, we don’t really have metrics that we can point to and say “Ah, yes, I need to put folks on the design systems team so that we can push our design system up from a bad score of 60/100.” It would be neat if we did, but I don’t think we ever will.

Enter Sparkbox. They wanted to fix this with a little test of how much faster their eight developers were when using a design system. They had the devs build a form by hand, and then do it again using IBM's Carbon design system, which they'd never used before.

The results are super interesting:

Using a design system made a simple form page 47% faster to develop versus coding it from scratch. The median time for the scratch submissions was 4.2 hours compared to the 2 hour median time for Carbon submissions. The Carbon timing included the time the developers spent familiarizing themselves with the design system.

Now imagine if those devs were familiar with Carbon’s design system! If that was the case, I imagine the time to build those forms would be way, way faster than those initial results.


The post Yes, Design Systems Do Improve Developer Efficiency and Design Consistency appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

How to Create an Animated Chart of Nested Squares Using Masks

Css Tricks - Fri, 11/05/2021 - 4:45am

We have many well-known chart types: bar, donut, line, pie, you name it. All popular chart libraries support these. Then there are the chart types that do not even have a name. Check out this dreamt-up chart with stacked (nested) squares that can help visualize relative sizes, or how different values compare to one another:

What we’re making

Without any interactivity, creating this design is fairly straightforward. One way to do it is to stack elements (e.g., SVG <rect> elements, or even HTML divs) in decreasing sizes, where all of their bottom-left corners touch the same point.

But things get trickier once we introduce some interactivity. Here’s how it should be: When we move our mouse over one of the shapes, we want the others to fade out and move away.

We’ll create these irregular shapes using rectangles and masks — literal <svg> with <rect> and <mask> elements. If you are entirely new to masks, you are in the right place. This is an introductory-level article. If you are more seasoned, then perhaps this cut-out effect is a trick that you can take with you.

Now, before we begin, you may wonder if a better alternative to SVG masks is using custom shapes. That's definitely a possibility! But drawing shapes with a <path> can be intimidating, or even get messy. So, we're working with "easier" elements to get the same shapes and effects.

For example, here’s how we would have to represent the largest blue shape using a <path>.

<svg viewBox="0 0 320 320" width="320" height="320"> <path d="M320 0H0V56H264V320H320V0Z" fill="#264653"/> </svg>

If the 0H0V56… does not make any sense to you, check out “The SVG path Syntax: An Illustrated Guide” for a thorough explanation of the syntax.

The basics of the chart

Given a data set like this:

type DataSetEntry = {
  label: string;
  value: number;
};

type DataSet = DataSetEntry[];

const rawDataSet: DataSet = [
  { label: 'Bad', value: 1231 },
  { label: 'Beginning', value: 6321 },
  { label: 'Developing', value: 10028 },
  { label: 'Accomplished', value: 12123 },
  { label: 'Exemplary', value: 2120 }
];

…we want to end up with an SVG like this:

<svg viewBox="0 0 320 320" width="320" height="320"> <rect width="320" height="320" y="0" fill="..."></rect> <rect width="264" height="264" y="56" fill="..."></rect> <rect width="167" height="167" y="153" fill="..."></rect> <rect width="56" height="56" y="264" fill="..."></rect> <rect width="32" height="32" y="288" fill="..."></rect> </svg> Determining the highest value

It will become apparent in a moment why we need the highest value. We can use Math.max() to get it. It accepts any number of arguments and returns the highest value in a set.

const dataSetHighestValue: number = Math.max(
  ...rawDataSet.map((entry: DataSetEntry) => entry.value)
);

Since we have a small dataset, we can just tell that we will get 12123.

Calculating the dimension of the rectangles

If we look at the design, the rectangle representing the highest value (12123) covers the entire area of the chart.

We arbitrarily picked 320 for the SVG dimensions. Since our rectangles are squares, the width and height are equal. How can we make 12123 equal to 320? How about the less “special” values? How big is the 6321 rectangle?

Asked another way, how do we map a number from one range ([0, 12123]) to another one ([0, 320])? Or, in more math-y terms, how do we scale a variable to an interval of [a, b]?

For our purposes, we are going to implement the function like this:

const remapValue = (
  value: number,
  fromMin: number,
  fromMax: number,
  toMin: number,
  toMax: number
): number => {
  return ((value - fromMin) / (fromMax - fromMin)) * (toMax - toMin) + toMin;
};

remapValue(1231, 0, 12123, 0, 320); // 32
remapValue(6321, 0, 12123, 0, 320); // 167
remapValue(12123, 0, 12123, 0, 320); // 320

Since we map values to the same range in our code, instead of passing the minimums and maximums over and over, we can create a wrapper function:

const valueRemapper = (
  fromMin: number,
  fromMax: number,
  toMin: number,
  toMax: number
) => {
  return (value: number): number => {
    return remapValue(value, fromMin, fromMax, toMin, toMax);
  };
};

const remapDataSetValueToSvgDimension = valueRemapper(
  0,
  dataSetHighestValue,
  0,
  svgDimension
);

We can use it like this:

remapDataSetValueToSvgDimension(1231); // 32
remapDataSetValueToSvgDimension(6321); // 167
remapDataSetValueToSvgDimension(12123); // 320

Creating and inserting the DOM elements

What remains has to do with DOM manipulation. We have to create the <svg> and the five <rect> elements, set their attributes, and append them to the DOM. We can do all this with the basic createElementNS, setAttribute, and the appendChild functions.

Notice that we are using the createElementNS instead of the more common createElement. This is because we are working with an SVG. HTML and SVG elements have different specs, so they fall under a different namespace URI. It just happens that the createElement conveniently uses the HTML namespace! So, to create an SVG, we have to be this verbose:

document.createElementNS('http://www.w3.org/2000/svg', 'svg') as SVGSVGElement;

Surely, we can create another helper function:

const createSvgNSElement = (element: string): SVGElement => {
  return document.createElementNS('http://www.w3.org/2000/svg', element);
};

When we are appending the rectangles to the DOM, we have to pay attention to their order. Otherwise, we would have to specify the z-index explicitly. The first rectangle has to be the largest, and the last rectangle has to be the smallest. Best to sort the data before the loop.

const data = rawDataSet.sort(
  (a: DataSetEntry, b: DataSetEntry) => b.value - a.value
);

data.forEach((d: DataSetEntry, index: number) => {
  const rect: SVGRectElement = createSvgNSElement('rect') as SVGRectElement;
  const rectDimension: number = remapDataSetValueToSvgDimension(d.value);

  rect.setAttribute('width', `${rectDimension}`);
  rect.setAttribute('height', `${rectDimension}`);
  rect.setAttribute('y', `${svgDimension - rectDimension}`);

  svg.appendChild(rect);
});

The coordinate system starts from the top-left; that’s where the [0, 0] is. We are always going to draw the rectangles from the left side. The x attribute, which controls the horizontal position, defaults to 0, so we don’t have to set it. The y attribute controls the vertical position.

To give the visual impression that all of the rectangles originate from the same point that touches their bottom-left corners, we have to push the rectangles down so to speak. By how much? The exact amount that the rectangle does not fill. And that value is the difference between the dimension of the chart and the particular rectangle. If we put all the bits together, we end up with this:

CodePen Embed Fallback
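As a quick sanity check with the numbers from our dataset: the 6321 entry maps to a 167 square, so its y attribute works out to 320 - 167 = 153, which is exactly the value on the corresponding rectangle in the target markup we sketched at the start.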

We already added the code for the animation to this demo using CSS.

Cutout rectangles

We have to turn our rectangles into irregular shapes that sort of look like the number seven, or the letter L rotated 180 degrees.

If we focus on the "missing parts," then we can see they are cutouts of the same rectangles we're already working with.

We want to hide those cutouts. That’s how we are going to end up with the L-shapes we want.

Masking 101

A mask is something you define and later apply to an element. Typically, the mask is inlined in the <svg> element it belongs to. And, generally, it should have a unique id because we have to reference it in order to apply the mask to an element.

<svg>
  <mask id="...">
    <!-- ... -->
  </mask>
</svg>

In the <mask> tag, we put the shapes that serve as the actual masks. We also apply the mask attribute to the elements.

<svg> <mask id="myCleverlyNamedMask"> <!-- ... --> </mask> <rect mask="url(#myCleverlyNamedMask)"></rect> </svg>

That’s not the only way to define or apply a mask, but it’s the most straightforward way for this demo. Let’s do a bit of experimentation before writing any code to generate the masks.

We said that we want to cover the cutout areas that match the sizes of the existing rectangles. If we take the largest element and we apply the previous rectangle as a mask, we end up with this code:

<svg viewBox="0 0 320 320" width="320" height="320"> <mask id="theMask"> <rect width="264" height="264" y="56" fill=""></rect> </mask> <rect width="320" height="320" y="0" fill="#264653" mask="url(#theMask)"></rect> </svg>

The element inside the mask needs a fill value. What should that be? We’ll see entirely different results based on the fill value (color) we choose.

The white fill

If we use a white value for the fill, then we get this:

Now, our large rectangle is the same dimension as the masking rectangle. Not exactly what we wanted.

The black fill

If we use a black value instead, then it looks like this:

We don’t see anything. That’s because what is filled with black is what becomes invisible. We control the visibility of masks using white and black fills. The dashed lines are there as a visual aid to reference the dimensions of the invisible area.

The gray fill

Now let’s use something in-between white and black, say gray:

It’s neither fully opaque or solid; it’s transparent. So, now we know we can control the “degree of visibility” here by using something different than white and black values which is a good trick to keep in our back pockets.

The last bit

Here’s what we’ve covered and learned about masks so far:

  • The element inside the <mask> controls the dimension of the masked area.
  • We can make the contents of the masked area visible, invisible, or transparent.

We have only used one shape for the mask, but as with any general purpose HTML tag, we can nest as many child elements in there as we want. In fact, the trick to achieve what we want is using two SVG <rect> elements. We have to stack them one on top of the other:

<svg viewBox="0 0 320 320" width="320" height="320"> <mask id="maskW320"> <rect width="320" height="320" y="0" fill="???"></rect> <rect width="264" height="264" y="56" fill="???"></rect> </mask> <rect width="320" height="320" y="0" fill="#264653" mask="url(#maskW320)"></rect> </svg>

One of our masking rectangles is filled with white; the other is filled with black. Even if we know the rules, let’s try out the possibilities.

<mask id="maskW320"> <rect width="320" height="320" y="0" fill="black"></rect> <rect width="264" height="264" y="56" fill="white"></rect> </mask>

The <mask> is the dimension of the largest element and the largest element is filled with black. That means everything under that area is invisible. And everything under the smaller rectangle is visible.

Now let’s do flip things where the black rectangle is on top:

<mask id="maskW320"> <rect width="320" height="320" y="0" fill="white"></rect> <rect width="264" height="264" y="56" fill="black"></rect> </mask>

This is what we want!

Everything under the largest white-filled rectangle is visible, but the smaller black rectangle is on top of it (closer to us on the z-axis), masking that part.

Generating the masks

Now that we know what we have to do, we can create the masks with relative ease. It’s similar to how we generated the colored rectangles in the first place — we create a secondary loop where we create the mask and the two rects.

This time, instead of appending the rects directly to the SVG, we append it to the mask:

data.forEach((d: DataSetEntry, index: number) => {
  const mask: SVGMaskElement = createSvgNSElement('mask') as SVGMaskElement;
  const rectDimension: number = remapDataSetValueToSvgDimension(d.value);
  const rect: SVGRectElement = createSvgNSElement('rect') as SVGRectElement;

  rect.setAttribute('width', `${rectDimension}`);
  // ...setting the rest of the attributes...

  mask.setAttribute('id', `maskW${rectDimension.toFixed()}`);
  mask.appendChild(rect);

  // ...creating and setting the attributes for the smaller rectangle...

  svg.appendChild(mask);
});

data.forEach((d: DataSetEntry, index: number) => {
  // ...our code to generate the colored rectangles...
});

We could use the index as the mask’s ID, but this seems a more readable option, at least to me:

mask.setAttribute('id', `maskW${rectDimension.toFixed()}`); // maskW320, maskW240, ...

As for adding the smaller rectangle in the mask, we have easy access the value we need because we previously ordered the rectangle values from highest to lowest. That means the next element in the loop is the smaller rectangle, the one we should reference. And we can do that by its index.

// ...previous part where we created the mask and the rectangle...

const smallerRectIndex = index + 1;

// there's no next one when we are on the smallest
if (data[smallerRectIndex] !== undefined) {
  const smallerRectDimension: number = remapDataSetValueToSvgDimension(
    data[smallerRectIndex].value
  );
  const smallerRect: SVGRectElement = createSvgNSElement(
    'rect'
  ) as SVGRectElement;

  // ...setting the rectangle attributes...

  mask.appendChild(smallerRect);
}

svg.appendChild(mask);

What is left is to add the mask attribute to the colored rectangle in our original loop. It should match the format we chose:

rect.setAttribute('mask', `url(#maskW${rectDimension.toFixed()})`); // maskW320, maskW240, ...

The final result

And we are done! We’ve successfully made a chart that’s made out of nested squares. It even comes apart on mouse hover. And all it took was some SVG using the <mask> element to draw the cutout area of each square.

CodePen Embed Fallback

The post How to Create an Animated Chart of Nested Squares Using Masks appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Introducing Svelte, and Comparing Svelte with React and Vue

Css Tricks - Thu, 11/04/2021 - 11:20am

Josh Collingsworth is clearly a big fan of Svelte, so while this is a fun and useful comparison article, it’s here to crown Svelte the winner all the way through.

A few things I find compelling:

One of the things I like most about Svelte is its HTML-first philosophy. With few exceptions, Svelte code is entirely browser-readable HTML and JavaScript. In fact, technically, you could call Svelte code a small superset of HTML.

And:

Svelte is reactive by default. This means when a variable is reassigned, every place it’s used or referenced also updates automatically. (React and Vue both require you to explicitly initialize reactive variables.)

I do find the component format nice to look at, like how you just write HTML. You don’t even need a <template> around it, or to return anything. I imagine Astro took inspiration from this in how you can also just chuck a <style> tag in there and scope styles if you want. But I think I prefer how the “fenced” JavaScript at the top only runs during the build by default.
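As a rough illustration of both points, here's a made-up counter component (mine, not from Josh's article): the markup is plain HTML, the script block is plain JavaScript, and reassigning count is all it takes to update the view:

<script>
  let count = 0; // reactive by default: reassignments update the view
</script>

<button on:click={() => count += 1}>
  Clicked {count} {count === 1 ? 'time' : 'times'}
</button>

<style>
  button { font-size: 1.25em; } /* scoped to this component */
</style>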

P.S. I really like Josh’s header/footer random square motif so I tried to reverse engineer it:

CodePen Embed Fallback



Fixing the Drift in Shape Rotations

Css Tricks - Thu, 11/04/2021 - 7:52am

Steve Ruiz calls this post an “extra-obscure edition of design tool micro-UX,” but I find it fascinating! If you select a bunch of elements in a design tool, rotate them, then later select those same elements and try to rotate them back, you’ll find they have “drifted” a bit from the original location.

It’s because the selection of elements needs to rotate around a center (the transform-origin, in CSS parlance), but where that center is located is calculated differently post-rotation. The trick, if any particular design tool cares to fix it:

[…] here’s the fix: once a user starts a rotation, we hold onto the center point; if the user rotates again, we re-use that same point; and we only give it up once the user makes a new selection.
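In pseudo-JavaScript, that bookkeeping might look something like the sketch below; computeSelectionCenter is a hypothetical helper standing in for however a given tool derives a selection’s bounding-box center:

// Illustrative sketch of the fix: cache the rotation center per selection.
let rotationCenter = null;

function onRotationStart(selection) {
  // The first rotation computes the center; later rotations re-use it,
  // so rotating back lands exactly where the shapes started.
  if (rotationCenter === null) {
    rotationCenter = computeSelectionCenter(selection); // hypothetical helper
  }
  return rotationCenter;
}

function onSelectionChange() {
  rotationCenter = null; // only give the cached center up on a new selection
}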

There’s a related tweet thread.



Scroll-Linked Animations With the Web Animations API (WAAPI) and ScrollTimeline

Css Tricks - Thu, 11/04/2021 - 4:26am

The Scroll-linked Animations specification is an upcoming and experimental addition that allows us to link animation-progress to scroll-progress: as you scroll up and down a scroll container, a linked animation also advances or rewinds accordingly.

We covered some use cases in a previous piece here on CSS-Tricks, all driven by the CSS @scroll-timeline at-rule and animation-timeline property the specification provides — yes, that’s correct: all those use cases were built using only HTML and CSS. No JavaScript.

Apart from the CSS interface we get with the Scroll-linked Animations specification, it also describes a JavaScript interface to implement scroll-linked animations. Let’s take a look at the ScrollTimeline class and how to use it with the Web Animations API.

Web Animations API: A quick recap

The Web Animations API (WAAPI) has been covered here on CSS-Tricks before. As a small recap, the API lets us construct animations and control their playback with JavaScript.
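As a quick taste of the “control their playback” part — which the recap example below doesn’t dwell on — here’s an illustrative sketch; the .progressbar selector is just a stand-in element:

// Construct an Animation, then drive its playback from code.
const animation = new Animation(
  new KeyframeEffect(
    document.querySelector('.progressbar'), // stand-in element
    { opacity: [0, 1] },
    { duration: 1000, fill: 'forwards' }
  )
);

animation.play();            // start it
animation.pause();           // freeze it
animation.currentTime = 500; // scrub to the halfway point (milliseconds)
animation.reverse();         // or run it backwards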

Take the following CSS animation, for example, where a bar sits at the top of the page, and:

  1. animates from red to darkred, then
  2. animates from zero width to full-width (by scaling the x-axis).
CodePen Embed Fallback

Translating the CSS animation to its WAAPI counterpart, the code becomes this:

new Animation(
  new KeyframeEffect(
    document.querySelector('.progressbar'),
    {
      backgroundColor: ['red', 'darkred'],
      transform: ['scaleX(0)', 'scaleX(1)'],
    },
    {
      duration: 2500,
      fill: 'forwards',
      easing: 'linear',
    }
  )
).play();

CodePen Embed Fallback

Or alternatively, using a shorter syntax with Element.animate():

document.querySelector('.progressbar').animate(
  {
    backgroundColor: ['red', 'darkred'],
    transform: ['scaleX(0)', 'scaleX(1)'],
  },
  {
    duration: 2500,
    fill: 'forwards',
    easing: 'linear',
  }
);

CodePen Embed Fallback

In those last two JavaScript examples, we can distinguish two things. First, a keyframes object that describes which properties to animate:

{
  backgroundColor: ['red', 'darkred'],
  transform: ['scaleX(0)', 'scaleX(1)'],
}

Second is an options Object that configures the animation duration, easing, etc.:

{
  duration: 2500,
  fill: 'forwards',
  easing: 'linear',
}

Creating and attaching a scroll timeline

To have our animation be driven by scroll — instead of the monotonic tick of a clock — we can keep our existing WAAPI code, but need to extend it by attaching a ScrollTimeline instance to it.

This ScrollTimeline class allows us to describe an AnimationTimeline whose time values are determined not by wall-clock time, but by the scrolling progress in a scroll container. It can be configured with a few options:

  • source: The scrollable element whose scrolling triggers the activation and drives the progress of the timeline. By default, this is document.scrollingElement (i.e. the scroll container that scrolls the entire document).
  • orientation: Determines the direction of scrolling, which triggers the activation and drives the progress of the timeline. By default, this is vertical (or block as a logical value).
  • scrollOffsets: These determine the effective scroll offsets, moving in the direction specified by the orientation value. They constitute equally-distanced progress intervals in which the timeline is active.

These options get passed into the constructor. For example:

const myScrollTimeline = new ScrollTimeline({
  source: document.scrollingElement,
  orientation: 'block',
  scrollOffsets: [
    new CSSUnitValue(0, 'percent'),
    new CSSUnitValue(100, 'percent'),
  ],
});

It’s not a coincidence that these options are exactly the same as the CSS @scroll-timeline descriptors. Both approaches let you achieve the same result with the only difference being the language you use to define them.

To attach our newly-created ScrollTimeline instance to an animation, we pass it as the second argument into the Animation constructor:

new Animation(
  new KeyframeEffect(
    document.querySelector('#progress'),
    {
      transform: ['scaleX(0)', 'scaleX(1)'],
    },
    { duration: 1, fill: 'forwards' }
  ),
  myScrollTimeline
).play();

CodePen Embed Fallback

When using the Element.animate() syntax, set it as the timeline option in the options object:

document.querySelector("#progress").animate( { transform: ["scaleX(0)", "scaleX(1)"] }, { duration: 1, fill: "forwards", timeline: myScrollTimeline } ); CodePen Embed Fallback

With this code in place, the animation is driven by our ScrollTimeline instance instead of the default DocumentTimeline.

The current experimental implementation in Chromium uses scrollSource instead of source. That’s the reason you see both source and scrollSource in the code examples.
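Until implementations and the specification agree, one pragmatic (if slightly inelegant) workaround is to pass both spellings and let each side pick the member it understands — this sketch assumes, as with WebIDL dictionaries generally, that unrecognized members are simply ignored:

// Illustrative sketch: cover both the spec's `source` and
// Chromium's current `scrollSource` in one options object.
const compatTimeline = new ScrollTimeline({
  source: document.scrollingElement,       // per the specification
  scrollSource: document.scrollingElement, // per current Chromium
  orientation: 'block',
  scrollOffsets: [
    new CSSUnitValue(0, 'percent'),
    new CSSUnitValue(100, 'percent'),
  ],
});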

A word on browser compatibility

At the time of writing, only Chromium browsers support the ScrollTimeline class, behind a feature flag. Thankfully there’s the Scroll-Timeline Polyfill by Robert Flack that we can use to fill the unsupported gaps in all other browsers. In fact, all of the demos embedded in this article include it.

The polyfill is available as a module and registers itself if no support is detected. To include it, add the following import statement to your JavaScript code:

import 'https://flackr.github.io/scroll-timeline/dist/scroll-timeline.js';

The polyfill also registers the required CSS Typed Object Model classes, should the browser not support them. (👀 Looking at you, Safari.)
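If you want to check support yourself before loading anything, the detection boils down to two globals — roughly the same checks the polyfill runs before registering its implementations:

// Illustrative feature checks:
if (!('ScrollTimeline' in window)) {
  // no native ScrollTimeline — the polyfill will register its own
}
if (typeof CSSUnitValue === 'undefined') {
  // no CSS Typed OM — the polyfill registers those classes as well
}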

Advanced scroll timelines

Apart from absolute offsets, scroll-linked animations can also work with element-based offsets:

With this type of Scroll Offsets the animation is based on the location of an element within the scroll-container.

Typically this is used to animate an element as it comes into the scrollport until it has left the scrollport; e.g. while it is intersecting.

An element-based offset consists of three parts that describe it:

  1. target: The tracked DOM element.
  2. edge: This is what the ScrollTimeline’s source watches for the target to cross.
  3. threshold: A number ranging from 0.0 to 1.0 that indicates how much of the target is visible in the scroll port at the edge. (You might know this from IntersectionObserver.)

Here’s a visualization:

CodePen Embed Fallback

If you want to know more about element-based offsets, including how they work, and examples of commonly used offsets, check out this article.

Element-based offsets are also supported by the JS ScrollTimeline interface. To define one, use a regular object:

{
  target: document.querySelector('#targetEl'),
  edge: 'end',
  threshold: 0.5,
}

Typically, you pass two of these objects into the scrollOffsets property.

const $image = document.querySelector('#myImage');

$image.animate(
  {
    opacity: [0, 1],
    clipPath: ['inset(45% 20% 45% 20%)', 'inset(0% 0% 0% 0%)'],
  },
  {
    duration: 1,
    fill: "both",
    timeline: new ScrollTimeline({
      scrollSource: document.scrollingElement,
      timeRange: 1,
      scrollOffsets: [
        { target: $image, edge: 'end', threshold: 0.5 },
        { target: $image, edge: 'end', threshold: 1 },
      ],
    }),
  }
);

This code is used in the demo below. It’s a JavaScript remake of the effect I covered last time: as an image scrolls into the viewport, it fades in and becomes unmasked.

CodePen Embed Fallback

More examples

Here are a few more examples I cooked up.

Horizontal scroll section

This is based on a demo by Cameron Knight, which features a horizontal scroll section. It behaves similarly, but uses ScrollTimeline instead of GSAP’s ScrollTrigger.

CodePen Embed Fallback

For more on how this code works and to see a pure CSS version, please refer to this write-up.

CoverFlow

Remember CoverFlow from iTunes? Well, here’s a version built with ScrollTimeline:

CodePen Embed Fallback

This demo does not behave 100% as expected in Chromium due to a bug. The problem is that the start and end positions are incorrectly calculated. You can find an explanation (with videos) in this Twitter thread.

More information on this demo can be found in this article.

CSS or JavaScript?

There’s no real difference between using CSS or JavaScript for scroll-linked animations, other than the language you write them in: both rely on the same concepts and constructs. In the true spirit of progressive enhancement, I would reach for CSS for these kinds of effects.

However, as we covered earlier, support for the CSS-based implementation is fairly poor at the time of writing.

Because of that poor support, you’ll certainly get further with JavaScript at this very moment. Just make sure your site can also be viewed and consumed when JavaScript is disabled. 😉


Chapter 10: Browser Wars

Css Tricks - Wed, 11/03/2021 - 5:09am

In June of 1995, representatives from Microsoft arrived at the Netscape offices. The stated goal was to find ways to work together—Netscape as the single dominant force in the browser market and Microsoft as a tech giant just beginning to consider the implications of the Internet. Both groups, however, were suspicious of ulterior motives.

Marc Andreessen was there. He was already something of a web celebrity. Newly appointed Netscape CEO James Barksdale also came. On the Microsoft side was a contingent of product managers and engineers hoping to push Microsoft into the Internet market.

The meeting began friendly enough, as the delegation from Microsoft shared what they were working on in the latest version of their operating system, Windows 95. Then, things began to sour.

According to accounts from Netscape, “Microsoft offered to make an investment in Netscape and give Netscape’s software developers crucial technical information about the Windows operating system if Netscape would agree not to make a browser for [the] Windows 95 operating system.” If that was to be believed, Microsoft would have tiptoed over the line of what is legal. The company would be threatening to use its monopoly to squash competition.

Andreessen, no stranger to dramatic flair, would later dress the meeting up with a nod to The Godfather in his deposition to the Department of Justice: “I expected to find a bloody computer monitor in my bed the next day.”

Microsoft claimed the meeting was a “setup,” initiated by Netscape to bait them into a compromising situation they could turn to their advantage later.

There are a few different places to mark the beginning of the browser wars. The release of Internet Explorer 1, for instance (late summer, 1995). Or the day Andreessen called out Microsoft as nothing but a “poorly debugged set of device drivers” (early 1995). But June 21, 1995—when Microsoft and Netscape came to a meeting as conceivable friends and left as bitter foes—may be the most definitive.

Andreessen called it “free, but not free.”

Here’s how it worked. When the Netscape browser was released, it came with a fee of $39 per copy. That was the official price, anyway. But fully functional Netscape beta versions were free to download from their website. And universities and non-profits could easily get zero-cost licenses.

For the upstarts of the web revolution and the open source tradition, Netscape was free enough. Buttoned-up corporations buying in bulk with specific contractual needs could license the software for a reasonable fee. Free, but not free. “It looks free optically, but it is not,” a Netscape employee would later describe it. “Corporations have to pay for it. Maintenance has to be paid.”

“It’s basically a Microsoft lesson, right?” was how Andreessen framed it. “If you get ubiquity, you have a lot of options, a lot of ways to benefit from that.” If people didn’t have a way to get quick and easy access to Netscape, it would never spread. It was a lesson Andreessen had learned behind his computer terminal at the NCSA research lab at the University of Illinois. Just a year prior, he and his friends built the wildly successful, cross-platform Mosaic browser.

Andreessen worked on Mosaic for several years in the early ’90s. But he began to feel cramped by increasing demands from higher-ups at NCSA hoping to capitalize on the browser’s success. At the end of 1993, Andreessen headed west to stake his claim in Silicon Valley. That’s where he met James Clark.

Netscape Communications Corporation co-founders Jim Clark, left, and Marc Andreessen (AP Photo/HO)

Clark had just cut ties with Silicon Graphics, the company he created. A legend in the Bay Area, Clark was well known in the valley. When he saw the web for the first time, someone suggested he meet with Andreessen. So he did. The two hit it off immediately.

Clark—with his newly retired time and fortune—brought an inner circle of tech visionaries together for regular meetings. “For the invitees, it seemed like a wonderful opportunity to talk about ideas, technologies, strategies,” one account would later put it. “For Clark, it was the first step toward building a team of talented like-minded people who populate his new company.” Andreessen, still very much the emphatic and relentless advocate of the web, increasingly moved to the center of this circle.

The duo considered several ideas. None stuck. But they kept coming back to one. Building the world’s first commercial browser.

And so, on a snowy day in mid-April 1994, Andreessen and Clark took a flight out to Illinois. They were there with a single goal: Hire the members of the original Mosaic team still working at the NCSA lab for their new company. They went straight to the lobby of a hotel just outside the university. One by one, Clark met with five of the people who had helped create Mosaic (plus Lou Montulli, creator of Lynx and a student at University of Kansas) and offered them a job.

Right in a hotel room, Clark printed out contracts with lucrative salaries and stock options. Then he told them the mission of his new company. “Its mandate—Beat Mosaic!—was clear,” one employee recalled. By the time Andreessen and Clark flew back to California the next day, they’d have the six new employees of the soon-to-be-named Netscape.

Within six months they would release their first browser—Netscape Navigator. Six months after that, the easy-to-use, easy-to-install browser would overrun the market and bring millions of users online for the first time.

Clark, speaking to the chaotic energy of the browser team and the speed at which they built software that changed the world, would later say Netscape gave “anarchy credibility.” Writer John Cassidy puts that into context. “Anarchy in the post-Netscape sense meant that a group of college kids could meet up with a rich eccentric, raise some money from a venture capitalist, and build a billion-dollar company in eighteen months,” adding, “Anarchy was capitalism as personal liberation.”

Inside of Microsoft were a few restless souls.

The Internet, and the web, was passing the tech giant by. Windows was the most popular operating system in the world—a virtual monopoly. But that didn’t mean they weren’t vulnerable.

As early as 1993, three employees at Microsoft—Steven Sinofsky, J. Allard, and Benjamin Slivka—began to sound the alarms. Their uphill battle to make Microsoft realize the promise of the Internet is documented in the “Inside Microsoft” profile penned by Kathy Rebello, published in Bloomberg in 1996. “I dragged people into my office kicking and screaming,” Sinofsky told Rebello, “I got people excited about this stuff.”

Some employees believed Microsoft was distracted by a need to control the network. Investment poured into a proprietary network, like CompuServe or Prodigy, called the Microsoft Network (or MSN). Microsoft wanted to control the entire networked experience. But MSN would ultimately be a huge failure.

Slivka and Allard believed Microsoft was better positioned to build with the Internet rather than compete against it. “Microsoft needs to ensure that we ride the success of the Web, instead of getting drowned by it,” wrote Slivka in some of his internal communication.

Allard went a step further, drafting an internal memo named “Windows: The Next Killer Application for the Internet.” Allard’s approach, laid out in the document, would soon be the cornerstone of Microsoft’s Internet strategy. It consisted of three parts. First, embrace the open standards of the web. Second, extend its technology to the Microsoft ecosystem. Finally (and often forgotten), innovate and improve web technologies.

After a failed bid to acquire BookLink’s InternetWorks browser in 1994—AOL swooped in and outbid them—Microsoft finally got serious about the web. And their meeting with Netscape didn’t yield any results. Instead, they negotiated a deal with NCSA’s commercial partner Spyglass to license Mosaic for the first Microsoft browser.

In August of 1995, Microsoft released Internet Explorer version 1.0. It wasn’t very original, based on code that Spyglass had licensed to dozens of other partners. Shipped as part of an Internet Jumpstart add-on, the browser was bare-bones, clunkier and harder to use than what Netscape offered.

Source: Web Design Museum

On December 7th, Bill Gates hosted a large press conference on the anniversary of Pearl Harbor. He opened with news about the Microsoft Network, the star of the show. But he also demoed Internet Explorer, borrowing language directly from Allard’s proposal. “So the Internet, the competition will be kind of, once again, embrace and extend,” Gates announced, “And we will embrace all the popular Internet protocols… We will do some extensions to those things.”

Microsoft had entered the market.

Like many of her peers, Rosanne Siino began learning the world of personal computing on her own. After studying English in college—with an eye towards journalism—Siino found herself at a PR firm with clients like Dell and Seagate. Siino was naturally curious and resourceful, and read trade magazines and talked to engineers to learn what she could about personal computing in the information age.

She developed a special talent for taking the language and stories of engineers and translating them into bold visions of the future. Friendly, and always engaging, Siino built up a Rolodex of trade publication and general media contacts along the way.

After landing a job at Silicon Graphics, Siino worked closely with James Clark (he would later remark she was “one of the best PR managers at SGI”). She identified with Clark’s restlessness when he made plans to leave the company—an exit she helped coordinate—and decided if the opportunity came to join his new venture, she’d jump ship.

A few months later, she did. Siino was employee number 19 at Netscape; its first public relations hire.

When Siino arrived at the brand new Netscape offices in Mountain View, the first thing she did was sit down and talk to each one of the engineers. She wanted to hear—straight from the source—what the vision of Netscape was. She heard a few things. Netscape was building a “killer application,” one that would make other browsers irrelevant. They had code that was better, faster, and easier to use than anything out there.

Siino knew she couldn’t sell good code. But a young and hard working group of fresh-out-of-college transplants from rural America making a run at entrenched Silicon Valley; that was something she could sell. “We had this twenty-two-year-old kid who was pretty damn interesting and I thought, ‘There’s a story right there,'” she later said in an interview for the book Architects of the Web, “‘And we had this crew of kids who had come out from Illinois and I thought, ‘There’s a story there too.'”

Inside of Netscape, some executives and members of the board had been talking about an IPO. With Microsoft hot on their heels, and competitor Spyglass launching a successful IPO of their own, timing was critical. “Before very long, Microsoft was sure to attack the Web browser market in a more serious manner,” writer John Cassidy explains, “If Netscape was going to issue stock, it made sense to do so while the competition was sparse.” Not to mention, a big, flashy IPO was just what the company needed to make headlines all around the country.

In the months leading up to the IPO, Siino crafted a calculated image of Andreessen for the press. She positioned him as a leader of the software generation, an answer to the now-stodgy, silicon-driven hardware generation of the ’60s and ’70s. In interviews and profiles, Siino made sure Andreessen came off as a whip-smart visionary ready to tear down the old ways of doing things; the “new Bill Gates.”

That required a fair bit of cooperation from Andreessen. “My other real challenge was to build up Marc as a persona,” she would later say. Sometimes, Andreessen would complain about the interviews, “but I’d be like, ‘Look, we really need to do this.’ And he’s savvy in that way. He caught on.” Soon, it was almost natural, and as Andreessen traveled around with CEO James Barksdale to talk to potential investors ahead of their IPO, Netscape hype continued to inflate.

August 9, 1995, was the day of the Netscape IPO. Employees buzzed around the Mountain View offices, too nervous to watch the financial news beaming from their screens or the TV. “It was like saying don’t notice the pink elephant dancing in your living room,” Siino said later. They shouldn’t have worried. In its first day of trading, the Netscape stock price rose 108%. It was the best opening day for a stock on Wall Street. Some of the founding employees went to bed that night millionaires.

Not long after, Netscape released version 2 of their browser. It was their most ambitious release to date. Bundled in the software were tools for checking email, talking with friends, and writing documents. It was sleek and fast. The Netscape homepage that booted up each time the software started sported all sorts of nifty and well-known web adventures.

Not to mention JavaScript. Netscape 2 was the first version to ship with Java applets, small applications run directly in the browser. With Java, Netscape aimed to compete directly with Microsoft and their operating system.

To accompany the release, Netscape recruited young programmer Brendan Eich to work on a scripting language that riffed on Java. The result was JavaScript. Eich created the first version in 10 days as a way for developers to make pages more interactive and dynamic. It was primitive, but easy to grasp, and powerful. Since then, it has become one of the most popular programming languages in the world.

Microsoft wasn’t far behind. But Netscape felt confident. They had pulled off the most ambitious product the web had ever seen. “In a fight between a bear and an alligator, what determines the victor is the terrain,” Andreessen said in an interview from the early days of Netscape. “What Microsoft just did was move into our terrain.”

There’s an old adage at Microsoft, that it never gets something right until version 3.0. It was true even of their flagship product, Windows, and has notoriously been true of its most famous applications.

The first version of Internet Explorer was a rushed port of the Mosaic code that acted as little more than a public statement that Microsoft was going into the browser business. The second version, released just after Netscape’s IPO in late 1995, saw rapid iteration but lagged far behind. With Internet Explorer 3, Microsoft began to get the browser right.

Microsoft’s big, showy press conference hyped Internet Explorer as a true market challenger. Behind the scenes, it operated more like a skunkworks experiment. Six people were on the original product team. In a company of tens of thousands. “A bit like the original Mac team, the IE team felt like the vanguard of Microsoft,” one-time Internet Explorer lead Brad Silverberg would later say, “the vanguard of the industry, fighting for its life.”

That changed quickly. Once Microsoft recognized the potential of the web, they shifted their weight to it. In Speeding the Net, a comprehensive account of the rise of Netscape and its fall at the hands of Microsoft, authors Josh Quittner and Michelle Slatalla describe the Microsoft strategy. “In a way, the quality of it didn’t really matter. If the first generation flopped, Gates could assign a team of his best and brightest programmers to write an improved model. If that one failed too, he could hire even better programmers and try again. And again. And again. He had nearly unlimited resources.”

By version 3, the Internet Explorer team had a hundred people on it (including Chris Wilson of the original NCSA Mosaic team). That number would reach the thousands in a few short years. The software rapidly closed the gap. Internet Explorer introduced features that had given Netscape an edge—and even introduced their own HTML extensions, dynamic animation tools for developers, and rudimentary support of CSS.

In the summer of 1996, Walt Mossberg talked up Microsoft’s browser. Only months prior, he had labeled Netscape Navigator the “clear victor.” But he was beginning to change his mind. “I give the edge, however, to Internet Explorer 3.0,” he wrote of Microsoft’s version 3. “It’s a better browser than Navigator 3.0 because it is easier to use and has a cleaner, more flexible user interface.”

Microsoft Internet Explorer 3.0.01152
Netscape Navigator 3.04

Still, most Microsoft executives knew that competing on features would never be enough. In December of 1996, senior VP James Allchin emailed his boss, Paul Maritz. He laid out the current strategy, an endless chase after Netscape’s feature set. “I don’t understand how IE is going to win,” Allchin conceded, “My conclusion is that we must leverage Windows more.” In the same email, he added, “We should think first about an integrated solution — that is our strength.” Microsoft was not about to simply lie down and allow themselves to be beaten. They focused on two things: integration with Windows and wider distribution.

When it was released, Internet Explorer 4 was more tightly integrated with the operating system than any previous version; an almost inseparable part of the Windows package. It could be used to browse files and folders. Its “push” technology let you stream the web, even when you weren’t actively using the software. It used internal APIs that were unavailable to outside developers to make the browser faster, smoother, and readily available.

And then there was distribution. Days after Netscape and AOL shook on a deal to include their browser on the AOL platform, AOL abruptly changed their mind and went with Internet Explorer instead. It would later be revealed that Microsoft had made them, as one writer put it (extending The Godfather metaphor once more), an “offer they couldn’t refuse.” Microsoft had dropped their prices down to the floor and—more importantly—promised AOL precious real estate pre-loaded on the desktop of every copy of the next Windows release.

Microsoft fired their second salvo with Compaq. Up to that point, all Compaq computers had shipped with Netscape pre-installed on Windows. When Microsoft threatened to suspend Compaq’s license to use Windows at all (a threat revealed later in court documents), that changed to Internet Explorer too.

By the time Windows 98 was released, Internet Explorer 4 came already installed, free for every user, and impossible to remove.

“Mozilla!” interjected Jamie Zawinski. He was in a meeting at the time, which now rang in deafening silence for just a moment. Heads turned. Then, they kept going.

This was early days at Netscape. A few employees from engineering and marketing huddled together to try to come up with a name for the thing. One employee suggested they were going to crush Mosaic, like a bug. Zawinski—with a dry, biting humor he was well known for—thought Mozilla, “as in Mosaic meets Godzilla.”

Eventually, marketer Greg Sands settled on Netscape. But around the office, the browser was, from then on, nicknamed Mozilla. Early marketing materials on the web even featured a Mozilla-inspired mascot, a green lizard with a know-it-all smirk, before they shelved it for something more professional.

Credit: Dave Titus

It would be years before the name would come back in any public way; and Zawinski would have a hand in that too.

Zawinski had been with Netscape since almost the beginning. He was employee number 20, brought in right after Rosanne Siino, to take over the work Andreessen had done at NCSA by building the flagship version of Netscape for X-Windows. By the time he joined, he already had something of a reputation for solving complex technical challenges.

Jamie Zawinski

Zawinski’s earliest memories of programming date back to eighth grade. In high school, he was a terrible student. But he still managed to get a job after school as a programmer, working on the one thing that managed to keep him interested: code. After that, he started work for the startup Lucid, Inc., which boasted a strong pedigree of programming legends at its helm. Zawinski worked on the Common Lisp programming language and the popular IDE Emacs; technologies revered in the still small programming community. By virtue of his work on the projects, Zawinski had instant credibility among the tech elite.

At Netscape, the engineering team was central to the way things worked. It was why Siino had chosen to meet with members of that team as soon as she began, and why she crafted the story of Netscape around the way they operated. The result was a high-pressure, high-intensity atmosphere so indispensable to the company that it would become part of the company’s mythology. They moved so quickly that many began to call such a rapid pace of development “Netscape Time.”

“It was really a great environment. I really enjoyed it,” Zawinski would later recall. “Because everyone was so sure they were right, we fought constantly but it allowed us to communicate fast.” But tempers did flare (one article details a time when he threw a chair against the wall and left abruptly for two weeks after his computer crashed), and many engineers would later reflect on the toxic workplace. Zawinski once put it simply: “It wasn’t healthy.”

Still, engineers had a lot of sway at the organization. Many of them, Zawinski included, were advocates of free software. “I guess you can say I’ve been doing free software since I’ve been doing software,” he would later say in an interview. For Zawinski, software was meant to be free. From his earliest days on the Netscape project, he advocated for a more free version of the browser. He and others on the engineering team were at least partly responsible for the creative licensing that went into the company’s “free, but not free” business model.

In 1997, technical manager Frank Hecker breathed new life into the free software paradigm. He wrote a 30-page whitepaper proposing what several engineers had wanted for years—to release the entire source of the browser for free. “The key point I tried to make in the document,” Hecker asserted, “was that in order to compete effectively Netscape needed more people and companies working with Netscape and invested in Netscape’s success.”

With the help of CTO Eric Hahn, Hecker and Zawinski made their case all the way to the top. By the time they got in the room with James Barksdale, most of the company had already come around to the idea. Much to everyone’s surprise, Barksdale agreed.

On January 23, 1998, Netscape made two announcements. The first everyone expected. Netscape had been struggling to compete with Microsoft for nearly a year. The most recent release of Internet Explorer version 4, bundled directly into the Windows operating system for free, was capturing ever larger portions of their market share. So Netscape announced it would be giving its browser away for free too.

The next announcement came as a shock. Netscape was going open source. The browser’s entire source code—millions of lines of code—would be released to the public and open to contributions from anybody in the world. Led by Netscape veterans like Michael Toy, Tara Hernandez, Scott Collins, and Jamie Zawinski, the team would have three months to clean up the code base and get it ready for public distribution. The effort had a name too: Mozilla.

Firefox 1.0 (Credit: Web Design Museum)

On the surface, Netscape looked calm and poised to take on Microsoft with the force of the open source community at their wings. Inside the company, things looked much different. The three months that followed were filled with frenetic energy, close calls, and unparalleled pace. Recapturing the spirit of the earliest days of innovation at Netscape, engineers worked frantically to patch bugs and get the code ready to be released to the world. In the end, they did it, but only by the skin of their teeth.

In the process, the project spun out into an independent organization under the domain Mozilla.org. It was staffed entirely by Netscape engineers, but Mozilla was not technically a part of Netscape. When Mozilla held a launch party in April of 1998, just months after their public announcement, it didn’t just have Netscape members in attendance.

Zawinski had organized the party, and he insisted that a now growing community of people outside the company who had contributed to the project be a part of it. “We’re giving away the code. We’re sharing responsibility for development of our flagship product with the whole net, so we should invite them to the party as well,” he said, adding, “It’s a new world.”

On the day of his testimony in November of 1998, Steve McGeady sat, as one writer described, “motionless in the witness box.” He had been waiting for this moment for a long time; the moment when he could finally reveal, in his view, the nefarious and monopolist strain that coursed through Microsoft.

The Department of Justice had several key witnesses in their antitrust case against Microsoft, but McGeady was a linchpin. As Vice President at Intel, McGeady had regular dealings with Microsoft, and his company stood outside of the Netscape and Microsoft conflict. There was an extra layer of tension to his particular testimony, though. “The drama was heightened immeasurably by one stark reality,” one journalist’s account of the trial noted: “nobody—literally, nobody—knew what McGeady was going to say.”

When he got his chance to speak, McGeady testified that high-ranking Microsoft executives had told him that their goal was to “cut off Netscape’s air supply.” Using their monopoly position in the operating system market, Microsoft threatened computer manufacturers—many of whom Intel had regular dealings with—to ship their computers with Internet Explorer or face having their Windows licenses revoked entirely.

Drawing on the language Bill Gates used in his announcement of Internet Explorer, McGeady claimed that one executive had laid out their strategy: “embrace, extend and extinguish.” According to his allegations, Microsoft never intended to enter into a competition with Netscape. They were ready to use every aggressive tactic and walk the line of legality to crush them. It was a major turning point for the case and a massive win for the DOJ.

The case against Microsoft, however, had begun years earlier, when Netscape retained a team from the antitrust law firm Wilson Sonsini Goodrich & Rosati in the summer of 1995. The legal team included outspoken anti-Microsoft crusader Gary Reback, as well as Susan Creighton. Reback would be the most public member of the firm in the coming half-decade, but it would be Creighton’s contributions that would ultimately turn the attention of the DOJ. Creighton began her career as a clerk for Supreme Court Justice Sandra Day O’Connor. She quickly developed a reputation for precision and thoroughness. Her deliberate, methodical approach made her a perfect fit for a full and complete breakdown of Microsoft’s anti-competitive strategy.

Susan Creighton (Credit: Wilson Sonsini Goodrich & Rosati)

Creighton’s work with Netscape led her to write a two-hundred-and-twenty-two-page document detailing the anti-competitive practices of Microsoft. She laid out her case plainly and simply. “It is about a monopolist (Microsoft) that has maintained its monopoly (desktop operating systems) for more than ten years. That monopoly is threatened by the introduction of a new technology (Web software)…”

The document was originally planned as a book, but Netscape feared that if the public knew just how much danger they were in from Microsoft, their stock price would plummet. Instead, Creighton and Netscape handed it off to the Department of Justice.

Inside the DOJ, it would trigger a renewed interest in ongoing antitrust investigations of Microsoft. Years of subpoenas, information gathering, and lengthy depositions would follow. After almost three years, in May of 1998, the Department of Justice and 20 state attorneys general filed an antitrust suit against Microsoft, a company which had only just then crossed over a 50% share of the browser market.

“No firm should be permitted to use its monopoly power to develop a chokehold on the browser software needed to access the Internet,” announced Janet Reno—the prosecuting attorney general under President Clinton—when charges were brought against Microsoft.

At the center of the trial was not necessarily the stranglehold Microsoft had on the software of personal computers—not technically an illegal practice. It was the way they used their monopoly to directly counter competition in other markets. For instance, the practice of threatening to revoke the licenses of manufacturers that packaged computers with Netscape. Netscape’s account of the June 1995 meeting factored in as well (when Andreessen was asked why he had taken such detailed notes on the meeting, he replied, “I thought that it might be a topic of discussion at some point with the US government on antitrust issues.”)

Throughout the trial, both publicly and privately, Microsoft reacted to scrutiny poorly. They insisted that they were right; that they were doing what was best for the customers. In interviews and depositions, Bill Gates would often come off as curt and dismissive, unable or unwilling to cede any power. The company insisted that the browser and operating system were co-existent, one could not live without the other—a fact handily refuted by the judge when he noted that he had managed to uninstall Internet Explorer from Windows in “less than 90 seconds.” The trial became a national sensation as tech enthusiasts and news junkies waited with bated breath for each new revelation.

Microsoft President Bill Gates, left, testifies on Capitol Hill, Tuesday, March 3, 1998. (Credit: Ken Cedeno/AP file photo)

In November of 1999, the presiding judge issued his ruling. Microsoft had, in fact, used its monopoly power and violated antitrust laws. That was followed in the summer of 2000 by a proposed remedy: Microsoft was to be broken up into two separate companies, one to handle its operating software, and the other its applications. “When Microsoft has to compete by innovating rather than reaching for its crutch of the monopoly, it will innovate more; it will have to innovate more. And the others will be free to innovate,” Iowa State Attorney General Tom Miller said after the judge’s ruling was announced.

That never happened. An appeal in 2002 resulted in a reversal of the ruling and the Department of Justice agreed to a lighter consent decree. By then, Internet Explorer’s market share stood at around 90%. The browser wars were, effectively, over.

“Are you looking for an alternative to Netscape and Microsoft Explorer? Do you like the idea of having an MDI user interface and being able to browse in multiple windows?… Is your browser slow? Try Opera.”

That short message announced Opera to the world for the first time in April of 1995, posted by the browser’s creators to a Usenet forum about Windows. The tone of the message—technically meticulous, a little pointed, yet genuinely idealistic—reflected the philosophy of Opera’s creators, Jon Stephenson von Tetzchner and Geir Ivarsøy. Opera, they claimed, was well-aligned with the ideology of the web.

Opera began as a project run out of the Norwegian telecommunications firm Telenor. Once it became stable, von Tetzchner and Ivarsøy rented space at Telenor to spin it out into an independent company. Not long after, they posted that announcement and released the first version of the Opera web browser.

The team at Opera was small, but focused and effective, loyal to the open web. “Browsers are in our blood,” von Tetzchner would later say. Time and time again, the Opera team would prove that. They were staffed by the web’s true believers, and have often prided themselves on leading the development of web standards and an accessible web.

In the mid-to-late 90’s, Geir Ivarsøy was the first person to implement the CSS standard in any browser, in Opera 3.5. That would prove more than enough to convince the creator of CSS, Håkon Wium Lie, to join the company as CTO. Ian Hickson worked at Opera during the time he developed the CSS Acid Test at the W3C.

The original CSS Acid Test (Credit: Eric Meyer)

The company began developing a version of their browser for low-powered mobile devices in developing nations as early as 1998. They have often tried to push the entire web community towards web standards, leading when possible by example.

Years after the antitrust lawsuit of Microsoft, and resulting reversal in the appeal, Opera would find themselves embroiled in a conflict on a different front of the browser wars.

In 2007, Opera filed a complaint with the European Commission. Much like the case made by Creighton and Netscape, Opera alleged that Microsoft was abusing its monopoly position by bundling new versions of Internet Explorer with Windows. The EU had begun to look into allegations against Microsoft almost as soon as the Department of Justice had, but the Opera complaint added a substantial and recent area of inquiry. Opera claimed that Microsoft was limiting user choice by making additional browser options opaque. “You could add more browsers, to give consumers a real choice between browsers, you put them in front of their eyeballs,” Lie said at the time of the complaint.

In Opera’s summary of their complaint, they painted a picture of a free and open web. Opera, they argued, were advocates of the web as it was intended—accessible, universal, and egalitarian. Once again citing the language of “embrace, extend, and extinguish,” the company also called out Microsoft for trying to take control over the web standards process. “The complaint calls on Microsoft to adhere to its own public pronouncements to support these standards, instead of stifling them with its notorious ‘Embrace, Extend and Extinguish’ strategy,” it read.

The browser “ballot box” (Credit: Ars Technica)

In 2010, the European Commission issued a ruling, forcing Microsoft to show a so-called “ballot box” to European users of Windows—a website users could see the first time they accessed the Internet that listed twelve alternative browsers to download, including Opera and Mozilla. Microsoft included this website in their European Windows installs for five years, until their obligation lapsed.

Netscape Navigator 5 never shipped. It echoes, unreleased, in the halls of software’s most public and recognized vaporware.

After Netscape open-sourced their browser as part of the Mozilla project, the focus of the company split. Between being acquired by AOL and continuing pressure from Microsoft, Netscape was on its last legs. The public trial of Microsoft brought some respite, but too little, too late. “It’s one of the great ironies here,” Netscape lawyer Gary Reback would later say, “after years of effort to get the government to do something, by [1998] Netscape’s body is already in the morgue.” Meanwhile, management inside of Netscape couldn’t decide how best to integrate with the Mozilla team. Rather than work alongside the open-source project, they continued to maintain a version of Netscape separate and apart from the public project.

In October of 1998, Brendan Eich—who was part of the core Mozilla team—published a post to the Mozilla blog. “It’s time to stop banging our heads on the old layout and FE codebase,” he wrote. “We’ve pulled more useful miles out of those vehicles than anyone rightly expected. We now have a great new layout engine that can view hundreds of top websites.”

Many Mozilla contributors agreed with the sentiment, but the rewrite Eich proposed would spell the project’s initial downfall. While Mozilla tinkered away on a new rendering engine for the browser—which would soon be known as Gecko—Netscape scrapped its planned version 5.

Progress ground to a halt. Zawinski, one of the Mozilla team members opposed to the rewrite, would later describe his frustration when he resigned from Netscape in 1999. “It constituted an almost-total rewrite of the browser, throwing us back six to 10 months. Now we had to rewrite the entire user interface from scratch before anyone could even browse the Web, or add a bookmark.” Scott Collins, one of the original Netscape programmers, would put it less diplomatically: “You can’t put 50 pounds of crap in a ten pound bag, it took two years. And we didn’t get out a 5.0, and that cost us everything, it was the biggest mistake ever.”

The result was a world-class browser with great standards support and a fast-running browser engine. But it wasn’t ready until April of 2000, when Netscape 6 was finally released. By then, Microsoft had eclipsed Netscape, owning 80% of the browser market. It would never be enough to take back a significant portion of that browser share.

“I really think the browser wars are over,” said one IT exec after the release of Netscape 6. He was right. Netscape would sputter out for years. As for Mozilla, that would soon be reborn as something else entirely.

