Developer News

2021 Design Systems (Survey/Courses)

Css Tricks - Thu, 05/13/2021 - 12:55pm

My friends at Sparkbox are doing a survey on design systems, as they do each year. Go ahead and fill it out if you please. Here are the results from last year. In both 2019 and 2020, the vibe was that design systems (both as an idea and as a real thing that real companies really use) are maturing. But still, it was only a quarter of folks who said their actual design system was mature. I wonder if that’ll go up this year.

In my circles, “design system” isn’t the buzzword it was a few years ago, but that doesn’t mean they’re less popular. If anything, they are more popular, almost entering the territory of assumed, like responsive design is. I do feel like if you’re building a website from components, well, you’ve got a component library at least, which is how I think of design systems (as a dude who pretty much exclusively works on websites).

I’d better be careful though. I know design systems mean different things to different people. Speaking of which, I’d be remiss if I didn’t also shout out the fact that Ethan has a handful of totally free courses he’s created on design systems.

As you might have guessed from the titles, we’ve broadly organized the courses around roles: whether you’re a designer, a developer, or a product manager, we’ve got something for you. Each course focuses on what I think are the fundamentals of design systems work, so I’ve designed them to be both high-level and packed with information.

The post 2021 Design Systems (Survey/Courses) appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.


SVGator 3.0 Reshapes the Way You Create and Animate SVG With Extensive New Features

Css Tricks - Thu, 05/13/2021 - 4:24am

Building animations can get complicated, particularly compelling animations where you aren’t just rolling a ball across the screen, but building a scene where many things are moving and changing in concert. When compelling animation is the goal, a timeline GUI interface is a long-proven way to get there. That was true in the days of Flash (RIP), and still true today in every serious video editing tool.

But what do we have on the web for animating vector graphics? We have SVGator, an innovative and unique SVG animation creator that doesn’t require any coding skills! With the new SVGator 3.0 release it becomes a ground-breaking tool for building web animations in a visually intuitive way.

It’s worth just looking at the homepage to see all the amazing animations built using the tool itself.

A powerful tool right at your fingertips

I love a good browser-based design tool. All my projects in the cloud, waiting for me no matter where I roam. Here’s my SVGator project as I was working on a basic animation myself:

Users of any design tool should feel at home here. It comes with an intuitive interface that rises to the demands of a professional workflow with options that allow more control over your workspace. You will find a list of your elements and groups on the left side, along with the main editing tools right at the top. All the properties can be found on the right side and you can bring any elements to life by setting up keyframes on the timeline.

I’ll be honest: I’ve never really used SVGator before, I read zero documentation, and I was able to fiddle together these animated CSS-Tricks stars in just a few minutes:

See that animation above?

  • It was output as a single animation-stars.svg document I was able to upload to this blog post without any problem.
  • Uses only CSS inside the SVG for animation. There is a JavaScript-animation output option too for greater cross-browser compatibility and extra features (e.g. start animation only when it enters the view or start the animation on click!)
  • The final file is just 5 KB. Tiny!
  • The SVG, including the CSS animation bits, is pre-optimized and compressed.
  • And again, it only took me a couple minutes to figure out.

Imagine what you could do if you actually had design and animation talent.

Intuitive and useful export options

Optimized SVG output

SVGator is also an outstanding vector editor

With the 3.0 release, SVGator is no longer just an animation tool, but a full-featured SVG creator! This means that there’s no need for a third-party vector editing tool; you can start from scratch in SVGator or edit any pre-existing SVGs that you’ve ever created. A Pen tool with fast editing options, easily editable shapes, compound options, and lots of other amazing features will assist you in your work.

That’s particularly useful with morphing tweens, which SVGator totally supports!

Here’s my one word review:

The all-new interface in SVGator has a sidebar of collapsible panels for controlling all aspects of the design. See here how easy this animation was to make by controlling a line of text, the transforms, and the filters, and then choosing keyframes for those things below.

Suitable plans for everyone

If you want to use SVGator as a vector editing tool, that’s absolutely free, which means that you can create and export an endless number of static vector graphics, regardless of your subscription plan. In fact, even on the Free plan, you can export three animations per month. It’s just when you want more advanced animation tooling or unlimited animated exports per month that you have to upgrade to a paid plan.

Go Try SVGator

The post SVGator 3.0 Reshapes the Way You Create and Animate SVG With Extensive New Features appeared first on CSS-Tricks.


Making Disabled Buttons More Inclusive

Css Tricks - Wed, 05/12/2021 - 4:31am

Let’s talk about disabled buttons. Specifically, let’s get into why we use them and how we can do better than the traditional disabled attribute in HTML (e.g., <button disabled>) to mark a button as disabled.

There are lots of use cases where a disabled button makes a lot of sense, and we’ll get to those reasons in just a moment. But throughout this article, we’ll be looking at a form that allows us to add a number of tickets to a shopping cart.

This is a good baseline example because there’s a clear situation for disabling the “Add to cart” button: when there are no tickets to add to the cart.

But first, why disabled buttons?

Preventing people from doing an invalid or unavailable action is the most common reason we might reach for a disabled button. In the demo below, we can only add tickets if the number of tickets being added to the cart is greater than zero. Give it a try:

CodePen Embed Fallback

Allow me to skip the code explanation in this demo and focus our attention on what’s important: the “Add to cart” button.

<button type="submit" disabled="disabled">
  Add to cart
</button>

This button is disabled by the disabled attribute. (Note that this is a boolean attribute, which means that it can be written as disabled or disabled="disabled".)
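Because presence alone is what matters for a boolean attribute, all of these forms are equivalent:

```html
<!-- All three mark the button as disabled; the value is ignored -->
<button disabled>Add to cart</button>
<button disabled="disabled">Add to cart</button>
<button disabled="">Add to cart</button>
```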

Everything seems fine… so what’s wrong with it?

Well, to be honest, I could end the article right here asking you to not use disabled buttons because they suck, and instead use better patterns. But let’s be realistic: sometimes disabling a button is the solution that makes the most sense.

With that being said, for the purpose of this demo, we’ll pretend that disabling the “Add to cart” button is the best solution (spoiler alert: it’s not). We can still use it to learn how it works and improve its usability along the way.

Types of interactions

I’d like to clarify what I mean by disabled buttons being usable. You may think, If the button is disabled, it shouldn’t be usable, so… what’s the catch? Bear with me.

On the web, there are multiple ways to interact with a page. Using a mouse is one of the most common, but there are others, like sighted people who use the keyboard to navigate because of a motor impairment.

Try to navigate the demo above using only the Tab key to go forward and Shift + Tab to go backward. You’ll notice how the disabled button is skipped. The focus goes directly from the ticket input to the “dummy terms” link.

Using the Tab key, it changes the focus from the input to the link, skipping the “Add to cart” button.

Let’s pause for a second and recap the reason that led us to disable the button in the first place versus what we actually accomplished.

It’s common to associate “interacting” with “clicking” but they are two different concepts. Yes, click is a type of interaction, but it’s only one among others, like hover and focus.

In other words…

All clicks are interactions, but not all interactions are clicks.

Our goal is to prevent the click, but by using disabled, we are preventing not only the click, but also the focus, which means we might be doing as much harm as good. Sure, this behavior might seem harmless, but it causes confusion. People with a cognitive disability may struggle to understand why they are unable to focus the button.

In the following demo, we tweaked the layout a little. If you use a mouse, and hover over the submit button, a tooltip is shown explaining why the button is disabled. That’s great!

But if you use only the keyboard, there’s no way of seeing that tooltip because the button cannot be focused with disabled. The same thing happens in touch devices too. Ouch!

CodePen Embed Fallback

Using the mouse, the tooltip on the “Add to cart” button is visible on hover. But the tooltip is missing when using the Tab key.

Allow me to once again skip past the code explanation. I highly recommend reading “Inclusive Tooltips” by Heydon Pickering and “Tooltips in the time of WCAG 2.1” by Sarah Higley to fully understand the tooltip pattern.

ARIA to the rescue

The disabled attribute in a <button> is doing more than necessary. This is one of the few cases I know of where a native HTML attribute can do more harm than good. Using an ARIA attribute can do a better job, allowing us to add not only focus on the button, but do so consistently to create an inclusive experience for more people and use cases.

The disabled attribute correlates to aria-disabled="true". Give the following demo a try, again, using only the keyboard. Notice how the button, although marked disabled, is still accessible by focus and triggers the tooltip!
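In the markup, it’s a one-attribute swap. Here’s a minimal sketch; the aria-describedby tooltip wiring is illustrative rather than copied from the demo:

```html
<!-- aria-disabled keeps the button focusable and announced as disabled.
     The tooltip hookup below is an illustrative pattern, not the demo's exact code. -->
<button type="submit" aria-disabled="true" aria-describedby="cart-tip">
  Add to cart
</button>
<span id="cart-tip" role="tooltip">Add at least one ticket first</span>
```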

CodePen Embed Fallback

Using the Tab key, the “Add to cart” button is focused and it shows the tooltip.

Cool, huh? Such a tiny tweak for a big improvement!

But we’re not done quite yet. The caveat here is that we still need to prevent the click programmatically, using JavaScript.

elForm.addEventListener('submit', function (event) {
  event.preventDefault(); /* prevent native form submit */

  const isDisabled = elButtonSubmit.getAttribute('aria-disabled') === 'true';

  if (isDisabled || isSubmitting) {
    // return early to prevent the ticket from being added
    return;
  }

  isSubmitting = true;
  // ... code to add to cart...
  isSubmitting = false;
});

You might be familiar with this pattern as a way to prevent double clicks from submitting a form twice. If you were using the disabled attribute for that reason, I’d suggest not doing that, because it causes a temporary loss of keyboard focus while the form is submitting.
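The guard inside the submit handler above boils down to a tiny pure function. This is my own extraction for illustration (the names are mine, not from the demo), but it shows the decision logic on its own:

```javascript
// Hypothetical helper: should a submit handler proceed?
// Mirrors the guard in the handler above: block when the button is
// semantically disabled (aria-disabled="true") or a submit is in flight.
function shouldSubmit(ariaDisabledValue, isSubmitting) {
  const isDisabled = ariaDisabledValue === 'true';
  return !isDisabled && !isSubmitting;
}

console.log(shouldSubmit('true', false));  // false: button is semantically disabled
console.log(shouldSubmit('false', false)); // true: safe to submit
console.log(shouldSubmit('false', true));  // false: a submit is already in flight
```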

The difference between disabled and aria-disabled

You might ask: if aria-disabled doesn’t prevent the click by default, what’s the point of using it? To answer that, we need to understand the difference between both attributes:

| Feature / Attribute | disabled | aria-disabled="true" |
|---------------------|----------|----------------------|
| Prevent click       | ✅       | ❌                   |
| Prevent hover       | ❌       | ❌                   |
| Prevent focus       | ✅       | ❌                   |
| Default CSS styles  | ✅       | ❌                   |
| Semantics           | ✅       | ✅                   |

The only overlap between the two is semantics. Both attributes will announce that the button is indeed disabled, and that’s a good thing.

Contrary to the disabled attribute, aria-disabled is all about semantics. ARIA attributes never change the application behavior or styles by default. Their only purpose is to help assistive technologies (e.g. screen readers) to announce the page content in a more meaningful and robust way.
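Because aria-disabled brings no default styling, we have to recreate the “disabled” look ourselves in CSS. A minimal sketch (the exact values are a matter of taste):

```css
/* aria-disabled ships with no default styles, unlike the disabled
   attribute, so recreate the dimmed, non-interactive look by hand */
button[aria-disabled="true"] {
  opacity: 0.6;
  cursor: not-allowed;
}
```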

So far, we’ve talked about two types of people, those who click and those who Tab. Now let’s talk about another type: those with visual impairments (e.g. blindness, low vision) who use screen readers to navigate the web.

People who use screen readers often prefer to navigate form fields using the Tab key. Now look at how VoiceOver on macOS completely skips the disabled button.

The VoiceOver screen reader skips the “Add to cart” button when using the Tab key.

Once again, this is a very minimal form. In a longer one, looking for a submit button that isn’t there right away can be annoying. Imagine a form where the submit button is hidden and only visible when you completely fill out the form. That’s what some people feel like when the disabled attribute is used.

Fortunately, buttons with disabled are not totally unreachable by screen readers. You can still navigate each element individually, one by one, and eventually you’ll find the button.

The VoiceOver screen reader is able to find and announce the “Add to cart” button.

Although possible, this is an annoying experience. On the other hand, with aria-disabled, the screen reader will focus the button normally and properly announce its status. Note that the announcement is slightly different between screen readers. For example, NVDA and JAWS say “button, unavailable” but VoiceOver says “button, dimmed.”

The VoiceOver screen reader can find the “Add to cart” button using Tab key because of aria-disabled.

I’ve mapped out how both attributes create different user experiences based on the tools we just used:

| Tool / Attribute | disabled | aria-disabled="true" |
|------------------|----------|----------------------|
| Mouse or tap     | Prevents a button click. | Requires JS to prevent the click. |
| Tab              | Unable to focus the button. | Able to focus the button. |
| Screen reader    | Button is difficult to locate. | Button is easily located. |

So, the main differences between both attributes are:

  • disabled might skip the button when using the Tab key, leading to confusion.
  • aria-disabled will still focus the button and announce that it exists, but that it isn’t enabled yet; the same way you might perceive it visually.

This is the case where it’s important to acknowledge the subtle difference between accessibility and usability. Accessibility is a measure of someone being able to use something. Usability is a measure of how easy something is to use.

Given that, is disabled accessible? Yes. Does it have good usability? I don’t think so.

Can we do better?

I wouldn’t feel good with myself if I finished this article without showing you the real inclusive solution for our ticket form example. Whenever possible, don’t use disabled buttons. Let people click the button at any time and, if necessary, show an error message as feedback. This approach also solves other problems:

  • Less cognitive friction: Allow people to submit the form at any time. This removes the uncertainty of whether the button is even disabled in the first place.
  • Color contrast: Although a disabled button doesn’t need to meet the WCAG 1.4.3 color contrast, I believe we should guarantee that an element is always properly visible regardless of its state. But that’s something we don’t have to worry about now because the button isn’t disabled anymore.
CodePen Embed Fallback

Final thoughts

The disabled attribute in <button> is a peculiar case where, although highly known by the community, it might not be the best approach to solve a particular problem. Don’t get me wrong because I’m not saying disabled is always bad. There are still some cases where it makes sense to use it (e.g. pagination).

To be honest, I don’t see the disabled attribute exactly as an accessibility issue. What concerns me is more of a usability issue. By swapping the disabled attribute with aria-disabled, we can make someone’s experience much more enjoyable.

This is yet one more step into my journey on web accessibility. Over the years, I’ve discovered that accessibility is much more than complying with web standards. Dealing with user experiences is tricky and most situations require making trade-offs and compromises in how we approach a solution. There’s no silver bullet for perfect accessibility.

Our duty as web creators is to look for and understand the different solutions that are available. Only then can we make the best possible choice. There’s no sense in pretending the problems don’t exist.

At the end of the day, remember that there’s nothing preventing you from making the web a more inclusive place.

Bonus

Still there? Let me mention two last things about this demo that I think are worth your attention:

1. Live Regions will announce dynamic content

In the demo, two parts of the content changed dynamically: the form submit button and the success confirmation (“Added [X] tickets!”).

These changes are visually perceived, however, for people with vision impairments using screen readers, that just ain’t the reality. To solve it, we need to turn those messages into Live Regions. Those allow assistive technologies to listen for changes and announce the updated messages when they happen.

There is a .sr-only class in the demo that hides a <span> containing a loading message, but allows it to be announced by screen readers. In this case, aria-live="assertive" is applied to the <span>, and it holds a meaningful message while the form is submitting and loading. This way, we can announce to the user that the form is working and to be patient as it loads. Additionally, we do the same to the form feedback element.
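The demo’s exact rules may differ, but a visually-hidden utility class along these commonly used lines is the usual pattern:

```css
/* A common "visually hidden but screen-reader accessible" utility;
   the demo's own .sr-only implementation may vary slightly */
.sr-only {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}
```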

<button type="submit" aria-disabled="true">
  Add to cart
  <span aria-live="assertive" class="sr-only js-loadingMsg">
    <!-- Use JavaScript to inject the loading message -->
  </span>
</button>

<p aria-live="assertive" class="formStatus">
  <!-- Use JavaScript to inject the success message -->
</p>

Note that the aria-live attribute must be present in the DOM right from the beginning, even if the element doesn’t hold any message yet; otherwise, assistive technologies may not work properly.

Form submit feedback message being announced by the screen reader.

There’s much more to tell you about this little aria-live attribute and the big things it does. There are gotchas as well. For example, if it is applied incorrectly, the attribute can do more harm than good. It’s worth reading “Using aria-live” by Ire Aderinokun and Adrian Roselli’s “Loading Skeletons” to better understand how it works and how to use it.

2. Do not use pointer-events to prevent the click

This is an alternative (and incorrect) implementation that I’ve seen around the web. This uses pointer-events: none; in CSS to prevent the click (without any HTML attribute). Please, do not do this. Here’s an ugly Pen that will hopefully demonstrate why. I repeat, do not do this.

Although that CSS does indeed prevent a mouse click, remember that it won’t prevent focus and keyboard navigation, which can lead to unexpected outcomes or, even worse, bugs.

In other words, using this CSS rule as a strategy to prevent a click, is pointless (get it?). ;)

The post Making Disabled Buttons More Inclusive appeared first on CSS-Tricks.


I could build this during the weekend

Css Tricks - Tue, 05/11/2021 - 11:04am

How many times have you heard that (or even uttered it under your own breath)? I know I’ve heard it in conversations. I also know I’ve wondered the same thing about a product or two — hey, the idea here is super simple, let’s get a couple buddies together and make the same thing, only better!

I like João’s take here. He reminds us that the core use case of an app or SaaS is often as easy as it sounds. But it’s the lack of “second-order thinking” that prevents us from understanding the complexities behind the scenes:

  • Was the team short-staffed and forced to make concessions?
  • Was the project managed in a waterfall, preventing some ideas from making it into the release?
  • Was there a time constraint that influenced the direction of the project?
  • Was the budget just not there to afford a specific feature?
  • Was there disharmony on the team?

There’s so much that we don’t see behind the product. João articulates this so clearly when he explains why a company like Uber needs hundreds of mobile app developers. They’re not there to support the initial use case; they’re charged with solving second-order factors and hopefully in a way that keeps complexity at a minimum while scaling with the rest of the system.

The world is messy. As software is more ubiquitous, we’re encoding this chaos in 1’s and 0’s. It’s more than that. Some scenarios are more difficult to encode in software than their pre-digital counterparts. A physical taxi queue at the airport is quite simple to understand. There’s no GPS technology involved, no geofencing. A person and a car can only be in one place at a time. In the digital world, things get messier.

I’m reminded of a post that Chris wrote up a while back where he harps on a seemingly simple Twitter feature request:

Why can’t I edit my tweets?! Twitter should allow that.

It’s the same deal. Features are complicated. Products are complicated. Yes, it would be awesome if this app had one particular feature or used some slick framework to speed things up — but there’s always context and second-order thinking to factor in before going straight into solution mode. Again, João says it much better than I’m able to:

It’s easy to oversimplify problems and try new, leaner technologies that optimize for our use cases. However, when scaling it to the rest of the organization, we start to see the dragons. People that didn’t build their software the right way. Entire tech stacks depending on libraries that teams can’t update due to reasons™. Quickly, we start realizing that our lean way of doing things may not serve most situations.

Speaking for myself at least, it’s tempting to jump straight into a solution or some conclusion. We’re problem solvers, right? That’s what we’re paid to do! But that’s where the dragons João describes come into view. There’s always more to the question (and the answer to it). Unless we slay those dragons, we risk making false assumptions and, ultimately, incorrect answers to seemingly simple things.

As always, front-end developers are aware. That includes being aware of second-order considerations.

Direct Link to ArticlePermalink

The post I could build this during the weekend appeared first on CSS-Tricks.


Next Gen CSS: @container

Css Tricks - Tue, 05/11/2021 - 4:36am

Chrome is experimenting with @container, an at-rule within the CSS Working Group Containment Level 3 spec being championed by Miriam Suzanne of Oddbird and a group of engineers across the web platform. @container brings us the ability to style elements based on the size of their parent container.

The @container API is not stable, and is subject to syntax changes. If you try it out on your own, you may encounter a few bugs. Please report those bugs to the appropriate browser engine!

Bugs: Chrome | Firefox | Safari

You can think of these like a media query (@media), but instead of relying on the viewport to adjust styles, the parent container of the element you’re targeting can adjust those styles.

Container queries will be the single biggest change in web styling since CSS3, altering our perspective of what “responsive design” means.

No longer will the viewport and user agent be the only targets we have to create responsive layout and UI styles. With container queries, elements will be able to target their own parents and apply their own styles accordingly. This means that the same element that lives in the sidebar, body, or hero could look completely different based on its available size and dynamics.

@container in action

CodePen Embed Fallback

In this example, I’m using two cards within a parent with the following markup:

<div class="card-container">
  <div class="card">
    <figure> ... </figure>
    <div>
      <div class="meta">
        <h2>...</h2>
        <span class="time">...</span>
      </div>
      <div class="notes">
        <p class="desc">...</p>
        <div class="links">...</div>
      </div>
      <button>...</button>
    </div>
  </div>
</div>

Then, I’m setting containment (the contain property) on the parent on which I’ll be querying the container styles (.card-container). I’m also setting a relative grid layout on the parent of .card-container, so its inline-size will change based on that grid. This is what I’m querying for with @container:

.card-container {
  contain: layout inline-size;
  width: 100%;
}

Now, I can query for container styles to adjust styles! This is very similar to how you would set styles using width-based media queries, using max-width to set styles when an element is smaller than a certain size, and min-width when it is larger.

/* when the parent container is smaller than 850px, remove the .links div
   and decrease the font size on the episode time marker */
@container (max-width: 850px) {
  .links { display: none; }
  .time { font-size: 1.25rem; }
  /* ... */
}

/* when the parent container is smaller than 650px, decrease the .card
   element's grid gap to 1rem */
@container (max-width: 650px) {
  .card { gap: 1rem; }
  /* ... */
}

Container Queries + Media Queries

One of the best features of container queries is the ability to separate micro layouts from macro layouts. You can style individual elements with container queries, creating nuanced micro layouts, and style entire page layouts with media queries, the macro layout. This creates a new level of control that enables even more responsive interfaces.

Here’s another example that shows the power of using media queries for macro layout (i.e. the calendar going from single-panel to multi-panel), and micro layout (i.e. the date layout/size and event margins/size shifting), to create a beautiful orchestra of queries.

CodePen Embed Fallback

Container Queries + CSS Grid

One of my personal favorite ways to see the impact of container queries is to see how they work within a grid. Take the following example of a plant commerce UI:

No media queries are used on this website at all. Instead, we are only using container queries along with CSS grid to display the shopping card component in different views.

In the product grid, the layout is created with grid-template-columns: repeat(auto-fit, minmax(230px, 1fr));. This creates a layout that tells the cards to take up the available fractional space until they hit 230px in size, and then to flow to the next row. Check out more grid tricks at 1linelayouts.com.
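That one-liner is the whole macro layout. Spelled out as a rule (the .product-grid class name is my own placeholder, not from the demo):

```css
/* Cards share the row in equal fractions until each would drop
   below 230px wide, then they wrap to the next row */
.product-grid {
  display: grid;
  grid-template-columns: repeat(auto-fit, minmax(230px, 1fr));
}
```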

Then, we have a container query that styles the cards to take on a vertical block layout when they are less than 350px wide, and shifts to a horizontal inline layout by applying display: flex (which has an inline flow by default).

@container (min-width: 350px) {
  .product-container {
    padding: 0.5rem 0 0;
    display: flex;
  }
  /* ... */
}

This means that each card owns its own responsive styling. This is yet another example of where you can create a macro layout with the product grid, and a micro layout with the product cards. Pretty cool!

Usage

In order to use @container, you first need to create a parent element that has containment. To do so, set contain: layout inline-size on the parent. We use inline-size since container queries can currently only be applied along the inline axis. This prevents your layout from breaking in the block direction.

Setting contain: layout inline-size creates a new containing block and new block formatting context, letting the browser separate it from the rest of the layout. Now, we can query!

Limitations

Currently, you cannot use height-based container queries (i.e. queries along the block axis); only the inline axis is supported. In order to make grid children work with @container, you’ll need to add a wrapper element. Despite this, adding a wrapper still lets you get the effects you want.

Try it out

You can experiment with @container in Chromium today by navigating to chrome://flags in Chrome Canary and turning on the #experimental-container-queries flag.

The post Next Gen CSS: @container appeared first on CSS-Tricks.


Distributed Persistent Rendering (DPR)

Css Tricks - Tue, 05/11/2021 - 4:33am

Like Jamstack, Netlify is coining this term.

If your reaction is: great, a new thing I need to know about and learn, know that while Distributed Persistent Rendering (DPR) does involve some new things, this is actually a push toward simplification and leverages ideas as old as the web is, just like Jamstack.

It’s probably helpful to hear it right from Matt Biilmann, CEO of Netlify:

In that short video, he makes the point that React started out very simple and solved a lot of clear problems with JavaScript architecture and, as time goes on and it tries to solve more use-cases, it’s getting a lot more complicated and risks losing the appeal it once had in its simplicity.

Jamstack, too, faces this problem. The original simplicity of it was extremely appealing, but as it grows to accommodate more use-cases, things get complicated.

One of those complications is sites with many thousands of pages. Sites like that can have really slow build times. It’s nice to see frameworks tackle that (Google “Incremental Builds {Your Favorite Framework}”), but heck, if you change one link in a site footer, you’re re-building the whole site based on that one change.

So instead of building many thousands of pages during a build, say you just… didn’t. Until that page is requested once, anyway. That’s DPR.

Here’s Zach Leatherman doing that. He finds a spot on his site that generates some 400 pages on each build and tells Eleventy that instead of building it during the normal build process, defer it to the cloud (literally a lambda will run and build the page when needed).

Deferring those 400 pages saves seven seconds in the build. Say your site is more dramatic, like 16,000 pages. Scratch pad math says you are saving four minutes there. It’s not just time either, although that’s a biggie. I think of all the electricity and long-term storage you save building this way.

Here’s the Netlify blog post:

Just like coining the term “Jamstack” didn’t mean inventing an entirely new architecture from scratch, naming this concept of “Distributed Persistent Rendering” doesn’t mean we’re creating a brand new solution.

The term “DPR” is new to us, but in a lot of ways, we’re taking inspiration from solutions that have worked in the past. We’re simply reworking them to fit with modern Jamstack best practices.

I like that it’s not like this entirely new thing. I’m sure Netlify’s implementation of it is no joke, but for us, it’s very easy to think about:

  • Some pages are pre-built as usual
  • Some pages are not built (deferred)
  • When the non-built pages are requested for the first time, then they are built and cached so they don’t need to be built again.

That’s it, really.
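In plain JavaScript terms, the whole pattern is just “build on first request, then serve from cache.” Here’s a toy sketch of the idea (my own made-up names, not Netlify’s actual implementation, which lives behind their On-Demand Builders):

```javascript
// Toy sketch of the DPR idea: a page is built the first time it is
// requested, then served from a cache on every request after that.

const cache = new Map();
let buildCount = 0;

// Stand-in for an expensive page build (templating, data fetching, etc.)
function buildPage(path) {
  buildCount++;
  return `<html><body>Page for ${path}</body></html>`;
}

function handleRequest(path) {
  if (!cache.has(path)) {
    cache.set(path, buildPage(path)); // built once, on demand
  }
  return cache.get(path); // every later request is a cache hit
}

function getBuildCount() {
  return buildCount;
}
```

The first request for a deferred page pays the build cost; every request after that is as fast as a pre-built page.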

It reminds me of how old WordPress caching plugins used to work. When a page was requested for the first time it would run the PHP and MySQL queries and all that, then save the result as an .html file to the disk to serve subsequent requests. Not new, but still efficient.

The trick to a DPR architecture on Netlify is using their (beta) On-Demand Builders, so here’s the blog post that explains everything and will get you to the docs and such.

The post Distributed Persistent Rendering (DPR) appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Topframe

Css Tricks - Mon, 05/10/2021 - 11:14am

This is extremely fun. Jeff Lindsay has created Topframe, and writes:

Anybody that knows how to mess around with HTML can now mess around with their desktop computing experience. Topframe is an open source tool that lets you customize your desktop screen using HTML/CSS/JavaScript.

The default screen after installing has some hardcore weirdweb energy.

Mac-only I think for now.

But it’s super easy to customize. It’s just an index.html file at ~/.topframe/index.html. It auto-refreshes when you save the file.

My first thought was to fill my desktop with everything Dave likes from his RSS feed. But I tried dropping that JavaScript in and it didn’t work. I couldn’t debug it because there is no console to look at or anything. Oh well. I imagine practical use-cases include building little widgets with live data you want to keep an eye on at all times.

The other hiccup I had is that it messes with taking window screenshots. You know how you can go Command + Shift + 4 then Space on macOS to select just one window to screenshot? You can’t do that with this running because it’s always the top window. Hence “top frame” (get it?). I think it would be more fun to be “bottom frame” in that it would act more like a background instead of being always on top.

Apparently, Windows could do this a long time ago, proving (yet again) that there is nothing new under the sun.

The post Topframe appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Links on Typography

Css Tricks - Fri, 05/07/2021 - 3:54am

A few links that I’ve been holding onto:

  • “How to pick a Typeface for User Interface and App Design?” by Oliver Schöndorfer. I like the term “functional text” for everything that isn’t display or body type. Look for clearly distinct letters, open shapes, and little contrast. This reminds me of how we have the charmap screen on the Coding Fonts site, but still need to re-shoot most of the screenshots so they all have it.

  • Uniwidth typefaces for interface design by Lisa Staudinger. “Uniwidth typefaces, on the other hand, are proportionally-spaced typefaces, but every character occupies the same space across different cuts or weights.” So you can change the font-weight but the box the type occupies doesn’t change. Nice for menus! This is a different concept, but it reminds me of the Operator typeface (as opposed to Operator Mono) which “is a natural width family, its characters differing in proportion according to their weight and underlying design.”

  • “Should we standardize the naming of font weights?” by Pedro Mascarenhas. As in, the literal names as opposed to font-weight in CSS where we already have names and numeric values but are at the mercy of the font. The image comparing how dramatically different two same-named weights can be, say Gilroy Heavy and Avenir Heavy, makes the point.

  • “About Legibility and Readability” by Bruno Maag. “Functional accessibility” is another good term. We can create heuristics like specific font-sizes that make for good accessibility, but all nuance is lost there. Good typography involves making type readable and legible. Generally, anyway. I realize typography is a broad world and you might be designing a grungy skateboard that is intentionally neither readable nor legible. But if you do achieve readability and legibility, it has all sorts of benefits, like aesthetics and me-taking-you-seriously, but even better: accessibility.

  • “Font size is useless; let’s fix it” by Nikita Prokopov. “Useless” is maybe strong since it, ya know, controls the font size. But this graphic does make the point. I found myself making that same point recently. Across typefaces, an identical font-size can feel dramatically different.

  • “The sans selection” by Tejas Bhatt. A journey from a huge selection of fonts for a long-form journalism platform down to just a few, then finally lands on Söhne. I enjoyed all of the very practical considerations like (yet again) a tall x-height, not-too-heavy, and even price (although the final selection was among the most costly of the bunch).

  • Plymouth Press” by James Brocklehurst. You don’t see many “SVG fonts” these days, even though the idea (any SVG can be a character) is ridiculously cool. This one, being all grungy, has far too many vector points to be practical on the web, but that isn’t a big factor for local design software use.

  • “Beyond Calibri: Finding Microsoft’s next default font” (I guess nobody wanted that byline). I’m so turned off by the sample graphics they chose for the blog post that I can’t bring myself to care, even though this should be super interesting to follow because of the scale of use here. The tweet is slightly better.

  • “Why you should Self-Host Google Fonts in 2021” by Gijo Varghese. I am aware of “Cache Partitioning” (my site can’t use cached fonts from your site, even if they both come from Google) but I could have seen myself trotting out the other two arguments in a discussion about this and it’s interesting to see them debunked here.

The post Links on Typography appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Can I Email…

Css Tricks - Thu, 05/06/2021 - 4:38pm

While I’m 85% certain you’ve seen and used Can I Use…, I bet there is only a 13% chance that you’ve seen and used Can I Email…. It’s the same vibe—detailed support information for web platform features—except instead of support information for web browsers, it is email clients. Campaign Monitor has maintained a guide for a long time as well, but I admit I like the design style pioneered by Can I Use….

HTML email is often joked about in how you have to code for it in such an antiquated way (<table> tags galore!) but that’s perhaps not a fair shake. Kevin Mandeville talked about how he used CSS grid (not kidding) in an email back in 2017:

Our Apple Mail audience at Litmus is approximately 30%, so a good portion of our subscriber base is able to see the grid desktop layout.

Where CSS Grid isn’t supported (and for device/window widths of less than 850 pixels), we fell back to a one-column layout.

Just like websites, right? They don’t have to look the same everywhere, as long as the experience is acceptable everywhere.

Me, I don’t do nearly as much HTML email work as I do for-web-browsers work, but I do some! For example, the newsletter for CSS-Tricks is an RSS feed that feeds a MailChimp template. The email we send out to announce new shows for ShopTalk is similar. Both of those are pretty simple MailChimp templates that are customized with a bit of CSS.

But the most direct CSSin’ I do with HTML email is the templates for CodePen emails. We have all sorts of them, from totally custom templates, to special templates for The Spark and Challenges and, of course, transactional emails.

Those are all entirely from-scratch email templates. It’s very nice to know what kind of CSS features I can count on using. For example, I was surprised by how well flexbox is supported in email.

It’s always worth thinking about fallbacks. There is nothing stopping you from creating an email that is completely laid out with CSS grid. Some email clients will support it, and some won’t. When it is supported, a fancy layout happens. When it is not supported, assuming you leave the source order intelligible, you just get a column of blocks (which is what most emails are anyway), and that should be perfectly workable.
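As a rough sketch of that approach (hypothetical class names and content, not Litmus’s actual markup): the default is a plain stack of blocks any client can render, and grid is layered on only where it’s supported.

```html
<!-- Default: a single column of stacked blocks, which every client can render -->
<style>
  .stories { max-width: 850px; }
  .story { display: block; }

  /* Enhancement: clients that understand grid get the fancy layout */
  @supports (display: grid) {
    .stories {
      display: grid;
      grid-template-columns: 2fr 1fr;
      grid-gap: 16px;
    }
  }
</style>

<div class="stories">
  <div class="story">Feature story</div>
  <div class="story">Sidebar story</div>
</div>
```

Clients that strip or ignore the grid rules simply render the source order as a column, so the email stays readable everywhere.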

Progressive enhancement is almost more straightforward in email where there is rarely any functionality to be concerned with.

Direct Link to Article

The post Can I Email… appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Apparently, You Can Use Route53 as a Blazingly Fast Database

Css Tricks - Thu, 05/06/2021 - 10:03am

Route53 is a DNS management service by AWS. DNS is absolutely not a database, and yet here’s Nicholas Martin writing up some very clever trickery originally done by Corey Quinn:

When you think about it, DNS configuration is actually a very rudimentary NoSQL database. You can view and modify it at any time quite easily through your domain provider’s website, and you can view each “record” just like a row in a database table.

Many services use DNS TXT records to verify domain ownership. You would essentially add or modify a TXT record to store a key/value pair, which the service will then query.

Why? It’s super fast and costs $0.50 (per hosted zone, per month) + $0.40 per million queries.

There are even libraries (ten34, diggydb) to help do it. I wouldn’t do it just because I’d be scared Amazon wouldn’t like it and cut it off. Plus, ya know, there isn’t exactly auth.
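Conceptually it really is just a key/value store. Here’s a toy model of the idea, with a Map standing in for the hosted zone (real code would upsert records through the AWS SDK and read them back with DNS TXT lookups; the names here are made up):

```javascript
// Toy model: treating DNS TXT records as a key/value database.
// A Map stands in for the hosted zone; values are JSON-encoded into
// the TXT record, the way the trick stores structured data.

const zone = new Map();

// "Write": upsert a TXT record at <key>.example.com
function put(key, value) {
  zone.set(`${key}.example.com`, {
    type: "TXT",
    value: JSON.stringify(value),
  });
}

// "Read": resolve the TXT record and parse the payload
function get(key) {
  const record = zone.get(`${key}.example.com`);
  return record ? JSON.parse(record.value) : null;
}

put("user-42", { name: "Ada", plan: "pro" });
```

Each “row” is just a record name; each “column” is whatever you can fit into the TXT payload.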

Direct Link to Article

The post Apparently, You Can Use Route53 as a Blazingly Fast Database appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

What Google’s New Page Experience Update Means for Images on Your Website

Css Tricks - Thu, 05/06/2021 - 9:57am

It’s easy to forget that, as a search engine, Google doesn’t just compare keywords to generate search results. Google knows that if people don’t enjoy their experience on a web page, they won’t stay on the page long enough to consume the content — no matter how relevant it is.

As a result, Google has been experimenting with ways to analyze the user experience of web pages using quantifiable metrics. By factoring these into its search engine rankings, Google hopes to provide users not only with great, relevant content, but with awesome user experiences as well.

Google’s soon-to-be-launched page experience update is a major step in this direction. Website owners with image-heavy websites need to be particularly vigilant to adapt to these changes or risk falling by the wayside. In this article, we’ll talk about everything you need to know regarding this update, and how you can take full advantage of it.

Note: Google introduced its plans for Page Experience in May 2020 and announced in November 2020 that the update would begin rolling out in May 2021. However, Google has since pushed this back to a gradual rollout starting in mid-June 2021. This was done in order to give website admins time to deal with the shifting conditions brought about by the COVID-19 pandemic first.

Some Background

Before we get into the latest iteration of changes to how Google factors user experience metrics into search engine rankings, let’s get some context. In April 2020, Google made its most pivotal move in this direction yet by introducing a new initiative: core web vitals.

Core web vitals (CWV) were introduced to help web developers deal with the challenges of trying to optimize for search engine rankings using testable metrics – something that’s difficult to do with a highly subjective thing like user experience.

To do this, Google identified three key metrics (what it calls “user-centric performance metrics”). These are:

  1. LCP (Largest Contentful Paint): The largest element above the fold when a web page initially loads. Typically, this is a large featured image or header. How fast the largest content element loads plays a huge role in how fast the user perceives the overall loading speed of the page.
  2. FID (First Input Delay): The time it takes between when a user first interacts with the page and when the main thread is free for the browser to process the event. This can be clicking/tapping a button, link, or interacting with any other dynamic element. Delays when interacting with a page can obviously be frustrating to users which is why keeping FID low is crucial.
  3. CLS (Cumulative Layout Shift): This calculates the visual stability of a page when it first loads. The algorithm takes the size of the elements and the distance they move relative to the viewport into account. Pages that load with high instability can cause miscues by users, also leading to frustrating situations.

These metrics have evolved from more rudimentary ones that have been in use for some time, such as SI (Speed Index), FCP (First Contentful Paint), TTI (Time-to-interactive), etc.

The reason this is important is that images can play a significant role in how your website’s CWVs score. For example, the LCP is more often than not an above-the-fold image or, at the very least, will have to compete with an image to be loaded first. Images that aren’t correctly used can also negatively impact CLS. Slow-loading images can also impact the FID by adding further delays to the overall rendering of the page.

What’s more, this came on the back of Google’s renewed focus on mobile-first indexing. So, not only are these metrics important for your website, but you have to ensure that your pages score well on mobile devices as well.

What’s Going to Change? Page Experience for Google Search

It’s clear that, in general, Google is increasingly prioritizing user experience when it comes to search engine rankings. Which brings us to the latest update – Google now plans to incorporate page experience as a ranking factor, starting with an early rollout in mid-June 2021.


So, what is page experience? In short, it’s a ranking signal that combines data from a number of metrics to try and determine how good or bad the user experience of a web page is. It consists of the following factors:

  • Core Web Vitals: Using the same, unchanged, core web vitals. Google has established guidelines and recommended rankings that you can find here. You need an overall “good” CWV rating to qualify for a “good” page experience score.
  • Mobile Usability: A URL must have no mobile usability errors to qualify for a “good” page experience score.
  • Security Issues: Any flagged security issues will disqualify websites.
  • HTTPS: Pages must be served via HTTPS to qualify.
  • Ad Experience: Measures to what degree ads negatively affect the user experience on your web page, for example, by being intrusive, distracting, etc.

As part of this change, Google announced its intention to include a visual indicator, or badge, that highlights web pages that have passed its page experience criteria. This will be similar to previous badges the search engine has used to promote AMP (Accelerated Mobile Pages) or mobile-friendly pages.

This official recognition will give high-performing web pages a massive advantage in the highly competitive arena that is Google’s SERPs. This visual cue will undoubtedly boost CTRs and organic traffic, especially for sites that already rank well. This feature may drop as soon as May if it passes its current trial phase.

Another bit of good news for non-AMP users is that all pages will now become eligible for Top Stories in both the browser and Google News app. Although Google states that pages can qualify for Top Stories “irrespective of its Core Web Vitals score or page experience status,” it’s hard to imagine this not playing a role for eligibility now or down the line.

Key Takeaway: What Does This Mean For Images on Your Website?

Google noted a 70% surge in consumer usage of their Lighthouse and PageSpeed Insights tools, showing that website owners are catching up on the importance of optimizing their pages. This means that standards will only become higher and higher when competing with other websites for search engine rankings.

Google has reaffirmed that, despite these changes, content is still king. However, content is more than just the text on your pages, and truly engaging and user-friendly content also consists of thoughtfully used media, the majority of which will likely be images.

With the proposed page experience badges and Top Stories eligibility up for grabs, the stakes have never been higher to rank highly with the Google Search algorithm. You need every advantage that you can get. And, as I’m about to show, optimizing your image assets can have a tangible effect on scoring well according to these metrics.

What Can You Do To Keep Up?

Before I propose my solution to help you optimize image assets for core web vitals, let’s look at why images are often detrimental to performance:

  • Images bloat the overall size of your website pages, especially if the images are unoptimized (i.e. uncompressed, not properly sized, etc.)
  • Images need to be responsive to different devices. You need much smaller image sizes to maintain the same visual quality on smaller screens.
  • Different contexts (browsers, OSs, etc.) have different formats for optimally rendering images. However, most images are still used in .JPG/.PNG format.
  • Website owners don’t always know about the best practices associated with using images on website pages, such as always explicitly specifying width/height attributes.
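For illustration, those last two bullets map to markup along these lines (a generic sketch with made-up file names, not taken from the article):

```html
<!-- width/height let the browser reserve space before the image loads,
     which protects CLS; srcset/sizes serve smaller files to smaller
     screens; <picture> offers a next-gen format with a safe fallback -->
<picture>
  <source type="image/webp"
          srcset="hero-800.webp 800w, hero-1600.webp 1600w"
          sizes="(max-width: 850px) 100vw, 850px">
  <img src="hero-1600.jpg"
       srcset="hero-800.jpg 800w, hero-1600.jpg 1600w"
       sizes="(max-width: 850px) 100vw, 850px"
       width="1600" height="900"
       loading="lazy"
       alt="Hero image">
</picture>
```

Writing and maintaining that syntax by hand for every image is exactly the scalability problem described next.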

Using conventional methods, it can take a lot of blood, sweat, and tears to tackle these issues. Most solutions, such as manually editing images and hard-coding responsive syntax, have inherent problems with scalability and with easily updating/adjusting to changes, and they bloat your development pipeline.

To optimize your image assets, particularly with a focus on improving CWVs, you need to:

  • Reduce image payloads
  • Implement effective caching
  • Speed up delivery
  • Transform images into optimal next-gen formats
  • Ensure images are responsive
  • Implement run-time logic to apply the optimal setting in different contexts

Luckily, there is a class of tools designed specifically to solve these challenges and provide these solutions — image CDNs. Particularly, I want to focus on ImageEngine which has consistently outperformed other CDNs on page performance tests I’ve conducted.

ImageEngine is an intelligent, device-aware image CDN that you can use to serve your website images (including GIFs). ImageEngine uses WURFL device detection to analyze the context your website pages are accessed from (device, screen size, DPI, OS, browser, etc.) and optimize your image assets accordingly. Based on these criteria, it can optimize images by intelligently resizing, reformatting, and compressing them.

It’s a completely automatic, set-it-and-forget-it solution that requires little to no intervention once it’s set up. The CDN has over 20 global PoPs with the logic built into the edge servers for faster delivery across different regions. ImageEngine claims to achieve cache-hit ratios as high as 98%+ as well as reduce image payloads by 75%+.

Step-by-Step Test + How to Use ImageEngine to Improve Page Experience

To illustrate the difference using an image CDN like ImageEngine can make, I’ll show you a practical test.

First, let’s take a look at how a page with a massive amount of image content scores using PageSpeed Insights. It’s a simple page, but consists of a large number of high-quality images with some interactive elements, such as buttons and links as well as text.

FID is unique because it relies on data from real-world interactions users have with your website. As a result, FID can only be collected “in the field.” If you have a website with enough traffic, you can get the FID by generating a Page Experience Report in the Google Console.

However, for lab results, from tools like Lighthouse or PageSpeed Insights, we can surmise the impact of blocking resources by looking at TTI and TBT.

Oh, yes, and I’ll also be focusing on the results of a mobile audit for a number of reasons:

  1. Google themselves are prioritizing mobile signals and mobile-first indexing
  2. Optimizing web pages and images assets are often most challenging for mobile devices/general responsiveness
  3. It provides the opportunity to show the maximum improvement an image CDN can provide

With that in mind, here are the results for our page:

So, as you can see, we have some issues. Helpfully, PageSpeed Insights flags the two CWVs present, LCP and CLS. As you can see, because of the huge image payload (roughly 35 MB), we have a ridiculous LCP of nearly 1 minute.

Because of the straightforward layout and the fact that I did explicitly give images width and height attributes, our page happened to be stable with a 0 CLS. However, it’s important to realize that slow loading images can also impact the perceived stability of your pages. So, even if you can’t directly improve on CLS, the faster sizable elements such as images load, the better the overall experience for real-world users.

TTI and TBT were also sub-par. It will take at least two seconds from the moment the first element appears on the page until the page can start to become interactive.

As you can see from the opportunities for improvement and diagnostics, almost all issues were image-related:

Setting Up ImageEngine and Testing the Results

Ok, so now it’s time to add ImageEngine into the mix and see how it improves performance metrics on the same page.

Setting up ImageEngine on nearly any website is relatively straightforward. First, go to ImageEngine.io and sign up for a free trial. Just follow the simple three-step signup process where you will need to:

  1. provide the website you want to optimize, 
  2. the web location where your images are stored, and then 
  3. copy the delivery address ImageEngine assigns to you.

The latter will be in the format of {random string}.cdn.imgeng.in but can be updated from within the ImageEngine dashboard.

To serve images via this domain, simply go back to your website markup and update the <img> src attributes. For example:

From:

<img src="mywebsite.com/images/header-image.jpg"/>

To:

<img src="myimages.cdn.imgeng.in/images/header-image.jpg"/>


That’s all you need to do. ImageEngine will now automatically pull your images and dynamically optimize them for best results when visitors view your website pages. You can check the official integration guides in the documentation on how to use ImageEngine with Magento, Shopify, Drupal, and more. There is also an official WordPress plugin.

Here are the results for my ImageEngine test page once it’s set up:

As you can see, the results are nearly flawless. All metrics were improved, scoring in the green – even Speed Index and LCP which were significantly affected by the large images.

As a result, there were no more opportunities for improvement. And, as you can see, ImageEngine reduced the total page payload to 968 kB, cutting down image content by roughly 90%:

Conclusion

To some extent, it’s more of the same from Google who has consistently been moving in a mobile direction and employing a growing list of metrics to hone in on the best possible “page experience” for its search engine users. Along with reaffirming their move in this direction, Google stated that they will continue to test and revise their list of signals.

Other metrics that can be surfaced in their tools, such as TTFB, TTI, FCP, TBT, or possibly entirely new metrics may play even larger roles in future updates.

Finding solutions that help you score highly for these metrics now and in the future is key to staying ahead in this highly competitive environment. While image optimization is just one facet, it can have major implications, especially for image-heavy sites.

An image CDN like ImageEngine can solve almost all issues related to image content with minimal time and effort, and help future-proof your website against coming updates.

The post What Google’s New Page Experience Update Means for Images on Your Website appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Custom State Pseudo-Classes in Chrome

Css Tricks - Thu, 05/06/2021 - 4:28am

There is an increasing number of “custom” features on the web platform. We have custom properties (--my-property), custom elements (<my-element>), and custom events (new CustomEvent('myEvent')). At one point, we might even get custom media queries (@media (--my-media)).

But that’s not all! You might have missed it because it wasn’t mentioned in Google’s “New in Chrome 90” article (to be fair, declarative shadow DOM stole the show in this release), but Chrome just added support for yet another “custom” feature: custom state pseudo-classes (:--my-state).

Built-in states

Before talking about custom states, let’s take a quick look at the built-in states that are defined for built-in HTML elements. The CSS Selectors module and the “Pseudo-classes” section of the HTML Standard specify a number of pseudo-classes that can be used to match elements in different states. The following pseudo-classes are all widely supported in today’s browsers:

User action

  • :hover: the mouse cursor hovers over the element
  • :active: the element is being activated by the user
  • :focus: the element has the focus
  • :focus-within: the element has or contains the focus

Location

  • :visited: the link has been visited by the user
  • :target: the element is targeted by the page URL’s fragment

Input

  • :disabled: the form element is disabled
  • :placeholder-shown: the input element is showing placeholder text
  • :checked: the checkbox or radio button is selected
  • :invalid: the form element’s value is invalid
  • :out-of-range: the input element’s value is outside the specified range
  • :-webkit-autofill: the input element has been autofilled by the browser

Other

  • :defined: the custom element has been registered

Note: For brevity, some pseudo-classes have been omitted, and some descriptions don’t mention every possible use-case.

Custom states

Like built-in elements, custom elements can have different states. A web page that uses a custom element may want to style these states. The custom element could expose its states via CSS classes (class attribute) on its host element, but that’s considered an anti-pattern.

Chrome now supports an API for adding internal states to custom elements. These custom states are exposed to the outer page via custom state pseudo-classes. For example, a page that uses a <live-score> element can declare styles for that element’s custom --loading state.

live-score {
  /* default styles for this element */
}

live-score:--loading {
  /* styles for when new content is loading */
}

Let’s add a --checked state to a <labeled-checkbox> element

The Custom State Pseudo Class specification contains a complete code example, which I will use to explain the API. The JavaScript portion of this feature is located in the custom element‘s class definition. In the constructor, an “element internals” object is created for the custom element. Then, custom states can be set and unset on the internal states object.

Note that the ElementInternals API ensures that the custom states are read-only to the outside. In other words, the outer page cannot modify the custom element’s internal states.

class LabeledCheckbox extends HTMLElement {
  constructor() {
    super();
    // 1. instantiate the element’s “internals”
    this._internals = this.attachInternals();
    // (other code)
  }

  // 2. toggle a custom state
  set checked(flag) {
    if (flag) {
      this._internals.states.add("--checked");
    } else {
      this._internals.states.delete("--checked");
    }
  }

  // (other code)
}

The web page can now style the custom element’s internal states via custom pseudo-classes of the same name. In our example, the --checked state is exposed via the :--checked pseudo-class.

labeled-checkbox {
  /* styles for the default state */
}

labeled-checkbox:--checked {
  /* styles for the --checked state */
}

Try the demo in Chrome

This feature is not (yet) a standard

Browser vendors have been debating for the past three years how to expose the internal states of custom elements via custom pseudo-classes. Google’s Custom State Pseudo Class specification remains an “unofficial draft” hosted by WICG. The feature underwent a design review at the W3C TAG and has been handed over to the CSS Working Group. In Chrome’s “intent to ship” discussion, Mounir Lamouri wrote this:

It looks like this feature has good support. It sounds that it may be hard for web developers to benefit from it as long as it’s not widely shipped, but hopefully Firefox and Safari will follow and implement it too. Someone has to implement it first, and given that there are no foreseeable backward incompatible changes, it sounds safe to go first.

We now have to wait for the implementations in Firefox and Safari. The browser bugs have been filed (Mozilla #1588763 and WebKit #215911) but have not received much attention yet.

The post Custom State Pseudo-Classes in Chrome appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Progress Delayed Is Progress Denied

Css Tricks - Wed, 05/05/2021 - 6:51am

The bombshell article of the week is from Alex Russell of Google/Chrome. Alex has long been super critical of Apple, particularly about how there is literally no option to run any other browser than Safari on iOS. This article isn’t just fist-waving about that, but a dissertation accusing Apple of real harm to the web platform by sluggish progress on Safari and a no-web-apps App Store.

Apple’s iOS browser (Safari) and engine (WebKit) are uniquely under-powered. Consistent delays in the delivery of important features ensure the web can never be a credible alternative to its proprietary tools and App Store.

I appreciate Alex’s take here. It gives credit where credit is due, it places blame where it feels most fair to place it, and brings data to a complex conversation that deserves it. It’s hard not to get through the article and think that the web would be in a better place should Apple…

  1. Allow alternative browsers on iOS
  2. Allow web apps in the App Store
  3. Move faster with web platform features in Safari

Taking them one at a time…

  1. Do I want this? Yes. It seems reasonable that my $1,000 extremely powerful computer phone should be able to run whatever browser I want, particularly one from a company that makes a really good one and very much wants to ship it on my phone. In reality, I’m sure the complications around this are far beyond my understanding. I always think about that Chrome update that literally broke macOS. Could that happen on iOS? While lack of features might abstractly make for unhappy customers, a bricked phone very directly makes for unhappy customers. I suspect it more boils down to the fact that Google is an advertising company that innovates on tracking technology and Apple is a hardware company that innovates on privacy. That’s a rock-and-hard-place situation and this browser situation is one of the consequences.
  2. Do I want this? Yes. I don’t even know how to make native apps aside from software that turns web apps into native apps with magic. If web apps could go in the Apple App Store, it opens the door for people like me (and there are a lot of me). In reality, I’m sure the complications around this are far beyond my understanding. Is quality control harder? I gotta imagine it is. Is security harder to lock down? I gotta imagine it is. Are customers clamoring for it? I’m not sure I hear them very loudly. People do have a choice, as well: iOS is only 15% of the phone market. If you want an alternative browser and an alternative app store, you can have it.
  3. Do I want this? Yes. Heck, we celebrate little wins that Safari ships. I certainly don’t want to wait years for every clearly-useful feature. It will be interesting to measure the time for contain and container queries. That one looms large for me as I want to use it as soon as possible, without polyfills, once it stabilizes. I know the joke goes that “Safari is the new IE” but I don’t tend to feel that day-to-day in my typical web dev work. I feel like I can ship extremely capable websites to all browsers, including Safari, and not worry terribly much about Safari being a second-class experience. (I don’t make games or VR/AR experiences, though.) I’m honestly more worried about Firefox here. Apple and Google have more money than God. It’s Mozilla I worry about being DDoS’d with web-feature onslaught, although to be fair, they seem to be doing fine at the moment.

As far as Safari-behind-ness goes, I think more about the DevTools than I do web platform features.

There is the Apple-is-restrictive angle (fair enough), but also the Apple-is-slow angle here. Slow is a relative term. Slow compared to what? Slow compared to Chrome. Which begs the question: why does Chrome get to dictate the exact speed of the web? I always think of Dave’s “Slow, like brisket.”

There’s a lot of value in slow thinking. You use the non-lizard side of your brain. You make more deliberate decisions. You prioritize design over instant gratification. You can check your gut instincts and validate your hypothesis before incurring mountains of technical debt.

I think just enough iteration before a release produces better products. Because once it’s out, it’s out. There’s no turning back or major API changes. 

Maybe a slower-moving web is frustrating sometimes, but does it mean we get a better one in the end? My baby bear brain tells me there is a just right somewhere in the middle.

Direct Link to Article

The post Progress Delayed Is Progress Denied appeared first on CSS-Tricks.


Jetpack Backup: Roll Back Your WooCommerce Site Without Losing Orders

Css Tricks - Tue, 05/04/2021 - 11:15am

Here’s a dilemma: what happens if your WooCommerce site has a problem and the quickest and best way to fix it is to roll back to a previous version? The dilemma is, if you roll back the database, you would lose literal customer orders that are stored in the database. That’s not acceptable for an eCommerce store.

Good news: Jetpack Backup won’t do you wrong like that. You can still restore to a prior point, but you won’t lose any customer order or product data.

Do you lose all the orders that came in since that last backup? Nope.

Will the inventory get all screwed up? Nope.

What about the new products that were added after the restore point? Still there.

All that data is treated as immutable. The way it plays out: the database is restored to that point (along with everything else), and all the new product and order data that changed since then is replayed on top of the database after the restore.

With Jetpack Backup, there’s absolutely no guesswork. Its real-time backups include a unique safeguard that protects WooCommerce customer and product data when rolling things back, so you get the best-case scenario: readily available backups that preserve customer orders and product information.

That’s just one of the magical benefits you get from such a deep integration between Jetpack and WooCommerce. There are others, of course! But you can imagine just what a big deal this specific integration is for any WooCommerce-powered store.

And, hey, Jetpack Backup is sold à la carte. So, if backups are all you want from Jetpack, then you can get just that and that alone.

Get Jetpack Backup

The post Jetpack Backup: Roll Back Your WooCommerce Site Without Losing Orders appeared first on CSS-Tricks.


Let’s use (X, X, X, X) for talking about specificity

Css Tricks - Tue, 05/04/2021 - 4:19am

I was just chatting with Eric Meyer the other day and I remembered an Eric Meyer story from my formative years. I wrote a blog post about CSS specificity, and Eric took the time to point out the misleading nature of it (I remember scurrying to update it). What was so misleading? The way I was portraying specificity as a base-10 number system.

Say you select an element with ul.nav. I insinuated in the post that the specificity of that selector was 0011 (eleven, essentially), a number in a base-10 system. So I was saying tags = 0, classes = 10, IDs = 100, and a style attribute = 1000. If specificity were calculated in a base-10 number system like that, a selector like ul.nav.nav.nav.nav.nav.nav.nav.nav.nav.nav.nav (11 class names) would have a specificity of 0111, which would be the same as ul#nav.top. That’s not true. The reality is that it would be (0, 0, 11, 1) vs. (0, 1, 1, 1), with the latter easily winning.
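To make the comparison concrete, here’s that same face-off written out as annotated CSS (the selectors are hypothetical, just for illustration):

```css
/* Specificity annotated as (inline style, IDs, classes, elements) */

/* Eleven repeated classes: lots of classes, zero IDs */
ul.nav.nav.nav.nav.nav.nav.nav.nav.nav.nav.nav { }  /* (0, 0, 11, 1) */

/* One ID, one class: the ID column is compared first, so this wins */
ul#nav.top { }  /* (0, 1, 1, 1) */
```

Comparison proceeds column by column from the left, so no amount of classes can ever overtake a single ID.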

That comma-separated syntax like I just used solves two problems:

  1. It doesn’t insinuate a base-10 number system (or any number system)
  2. It has a distinct and readable look

I like the (X, X, X, X) look. I could see limiting it to (X, X, X) since a style attribute isn’t exactly a selector and usually isn’t talked about in the same kind of conversations. The parens make it clearer to me, but I could also see an X-X-X (dash-separated) syntax that wouldn’t need them, or a (X / X / X) syntax that probably would benefit from the parens.

Selectors Level 3 uses dashes briefly. Level 2 used both dashes and commas in different places.

Anyway, apparently I get the bug to mention this every half-decade or so.

The post Let’s use (X, X, X, X) for talking about specificity appeared first on CSS-Tricks.


Creating Colorful, Smart Shadows

Css Tricks - Tue, 05/04/2021 - 4:19am

A bona fide CSS trick from Kirupa Chinnathambi here. To match a colored shadow with the colors in the background-image of an element, you inherit the background in a pseudo-element, kick it behind the original, then blur and filter it.

.colorfulShadow {
  position: relative;
}

.colorfulShadow::after {
  content: "";
  width: 100%;
  height: 100%;
  position: absolute;
  background: inherit;
  background-position: center center;
  filter: drop-shadow(0px 0px 10px rgba(0, 0, 0, 0.5)) blur(20px);
  z-index: -1;
}

Negative z-index is always a yellow flag for me as that only works if there are no intermediary backgrounds. But the trick holds. There would always be some other way to layer the backgrounds (like a <span> or whatever).
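As a sketch of that span-based layering (the class names here are made up, not from Kirupa’s demo), you can stack the real content above the blurred copy without any negative z-index:

```css
/* Hypothetical markup:
   <div class="colorfulShadow"><span class="content">…</span></div> */
.colorfulShadow { position: relative; }

.colorfulShadow::before {
  content: "";
  position: absolute;
  inset: 0;
  background: inherit;
  background-position: center center;
  filter: drop-shadow(0px 0px 10px rgba(0, 0, 0, 0.5)) blur(20px);
}

/* Positioning the content promotes it above the ::before in paint order */
.colorfulShadow .content { position: relative; }
```

Since both the pseudo-element and the content are positioned with auto z-index, they paint in source order, and the content comes later.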

For some reason this made me think of a demo I saw (I can’t remember who to credit!). Emojis had text-shadow on them, which really made them pop. And those shadows could also be colorized to a similar effect.


Direct Link to Article

The post Creating Colorful, Smart Shadows appeared first on CSS-Tricks.


Chapter 8: CSS

Css Tricks - Mon, 05/03/2021 - 11:52am

In June of 2006, web developers and designers from around the world came to London for the second annual @media conference. The first had been a huge success, and @media 2006 had even more promise. Its speaker lineup was pulled from some of the most exciting and energetic voices in the web design and browser community.

Chris Wilson was there to announce the first major release to Microsoft’s Internet Explorer in nearly half a decade. Rachel Andrew and Dave Shea were swapping practical tips about CSS and project management. Tantek Çelik was sharing some of his recent work on microformats. Molly Holzschlag, Web Standards Project lead at the time, prepared an illuminating talk on internationalization and planned to join a panel about the latest developments of CSS.

The conference kicked off on Thursday with a keynote talk by Eric Meyer, a pioneer and early adopter of CSS. The keynote’s title slide read “A Decade of Style.” In a captivating and personal talk, Meyer recounted the now decade-long history of Cascading Style Sheets, or CSS. His own professional history intertwined and inseparable from that of CSS, Meyer used his time on the stage to look at the language’s roots and understand better the decisions and compromises that had led to the present day.

At the center of his talk, Meyer unveiled the secret to the success of CSS: “Never underestimate the effect of a small, select group of passionate experts.” CSS, the open and accessible design language of the Web, thrived not because of the technology itself, but because of people—the people who built it (and built with it) and what they shared as they learned along the way. The history of CSS, Meyer concluded, is the history of the people who made it.

Fifteen years after that talk, and nearly three decades after its creation, that is still true.

On Thursday morning, October 20th, 1994, attendees of another conference, the Second International WWW Conference, shuffled into a room on the second floor of the Ramada Hotel in Chicago. It was called the Gold Room. The Grand Hall across the way was quite a bit larger—reserved for the keynote presentations on the day—but the Gold Room would work just fine for the relatively smaller group that had managed to make the early morning 8:30 a.m. panel.

Most in attendance that morning would have been exhausted and bleary-eyed, tired from late-night networking events that had spanned the previous three nights. Thursday was Developer Day, the final day of the conference.

The Chicago conference had been preceded six months earlier by the first WWW conference in Geneva. The contrast would have been immediately apparent. Rather than breakout sessions focused on standards and specs, the halls buzzed with industry insiders and commercial upstarts selling their wares. In a short amount of time, the Web had gone mainstream. The conference in Chicago reflected that shift in tone: it was an industry event, with representatives from Microsoft, HP, Silicon Graphics, and many more.

The theme of the conference was “Mosaic and the Web,” and the site of Mosaic’s creation, NCSA, had helped to organize the event. It was a fact made more dramatic by a press release from Netscape, a company mostly staffed by former NCSA employees, just days earlier. The first version of their browser—dramatically billed as “Mosaic killer”—was not only in beta, but would be free upon release (a decision that would later be reversed). Most members of the Netscape team were in attendance, in commercial opposition of their former employer and biggest rival.

The grand intrigue of commercial clashes somewhat overshadowed the first morning session on the last day of the conference, “HTML and SGML: A Technical Presentation.” This, in spite of the fact that the Web’s creator, Sir Tim Berners-Lee, was leading the panel. The final presenter was Håkon Wium Lie, who worked with Berners-Lee and Robert Cailliau at CERN. His presentation described a new proposal for a design language that Lie was calling Cascading HTML Style Sheets, or CHSS for short.

The proposal had come together in a hurry. A conversation with standards editor Dave Raggett helped convince Lie of the urgency. Running right up to the deadline, Lie had posted the first draft of his proposal ten days before the conference.

Lie had come to the Web early and enthusiastically. Early enough to have used Nicola Pellow’s line-mode browser to telnet into the very first website. And enthusiastic enough to join Berners-Lee and the web team at CERN shortly after graduating from the MIT media lab in 1992. “I heard the big bang and came running,” is how Lie puts it.

Håkon Wium Lie (Credit: Heinrich-Böll-Stiftung)

Not long after he began at CERN, the language of the web shifted. Realizing that the web’s audience could not stare at black text on a white background all day, the makers of Mosaic introduced a tag that let website creators add inline images to their website. Once the gate was open, more features rushed out. Mosaic added even more tags for colors and fonts and layout. Lie, and the team at CERN, could only sit on the sidelines and watch, a fact Lie would later comment on, saying, “It was like: ‘Darn, we need something quick, otherwise they’re going to destroy the HTML language.’”

The impending release of Netscape in 1994 offered no relief. Marc Andreessen and his team at Netscape promised a consumer-focused web browser. Berners-Lee had developed HTML—the singular language of the web—to describe documents, not to design them. To fill that gap, browsers stuffed the language of HTML with tags to allow designers to create dynamic and stylized websites.

The problem was, there was not yet a standard way of doing this. So each browser added what they felt was necessary and others were forced to either follow suit or go their own way. “As soon as images were allowed inline in HTML documents, the web became a new graphical design medium,” programmer and soon-to-be W3C member Chris Lilley posted to www-talk around that time, “If style sheets or similar information are not added to HTML, the inevitable price will be documents that only look good on a particular browser.”

Lie’s proposal—which he began working on almost as soon as he joined up at CERN—was for a second language. CHSS used style sheets: separate documents that described the visual design of HTML without affecting its structure. So you could change your HTML and your style sheet stayed the same. Change the style sheet and HTML stayed the same. Content lived in one place, and presentation in another.

There were other style sheet proposals. Rob Raisch from O’Reilly and Viola creator Pei-Yuan Wei each had their own spin. Working at CERN, where the web had been created, helped boost the profile of CHSS. Its relative simplicity also made it appealing to browser makers. The cascade in Cascading HTML Style Sheets, however, set it apart.

Each person experiences the web through a prism of their own experience. It is viewed through different devices, under different conditions. On screen readers and phones and on big screen TVs. One’s perception of how a page should look based on their situation runs in stark contrast to both the intent of the website’s author and the limitations and capabilities of browsers. The web, therefore, is chaotic. Multiple sources mingle and compete to decide the way each webpage is perceived.

The cascade brings order to the web. Through a simple set of rules, multiple parties—the browser, the user, and the website author—can define the presentation of HTML in separate style sheets. As rules flow from one style sheet to the next, the cascade balances one rule against another and determines the winner. It keeps design for the web simple, inheritable, and embraces its natural unstable state. It has changed over time, but the cascade has made the web adaptable to new computing environments.
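A minimal sketch of that balancing act (the selectors and values are invented for illustration): three style sheets can all target the same element, and the cascade sorts out a winner per property:

```css
/* Browser (user-agent) style sheet: the baseline */
p { color: black; font-size: 16px; }

/* User style sheet: a reader who needs larger text */
p { font-size: 20px !important; }

/* Author style sheet: the site's design */
p { color: navy; }

/* Result for a <p>: color comes from the author rule (navy),
   font-size from the user's !important rule (20px),
   and everything else falls back to the browser defaults. */
```

An author rule normally outranks a user rule, but an !important user rule outranks them both, which is exactly the kind of negotiation between author intent and user preference the cascade was designed for.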

After Lie gave his presentation on the second floor of the Ramada Hotel in Chicago, it was the cascade that monopolized discussions. The makers of the web used the CHSS proposal as a springboard for a much wider conversation about author intent and user preferences. In what situation, in other words, the author of a website’s design should override the preference of a user or the determination of a browser. Productive debate spilled outside of the room and onto the www-talk mailing list, where it was picked up by Bert Bos.

Bert Bos (Credit: dotConferences)

Bos was a Dutch engineer, studying mathematics at the University of Groningen in the Netherlands. Before he graduated, he created a browser called Argo, a well-known and useful tool for several of the University’s departments. Argo was notable for two reasons. The first was that it included an early iteration of what would later be known as applets. The second was that it included Bos’ own style sheet implementation, one that was not too unlike CHSS. He recognized an opportunity.

“Most of the content of CSS1 was discussed on the whiteboard in Sophia-Antipolis in July 1995… Whenever I encounter difficult technical problems, I think of Bert and that whiteboard.”

Håkon Wium Lie

Lie and Bos began working together, merging their proposals into something more refined. The following year, in the spring of 1995, the third WWW conference was held in Darmstadt, Germany. Netscape, having just been released six months earlier, was already coasting on a new wave of popularity led by their new CEO Jim Barksdale. A few months away from the most successful IPO in history, Netscape would soon launch itself into the stratosphere, with the web riding shotgun, still adding new, non-standard HTML features whenever they could.

Lie and Bos had only ever communicated remotely. In Germany, they met in person for the first time and gave a joint presentation on a new proposal for Cascading Style Sheets, CSS (the H dropped by then).

It stood in contrast to what was available at the time. With only HTML at their disposal, web designers were forced to create “page layout via tables and Netscapisms like FONT SIZE,” as one Suck columnist wrote at the time, later quoted in a dissertation written by Lie. Table-bloated webpages were slow to load, and difficult to understand by accessible devices like screen readers. CSS solved those issues. That same writer, though not believing in its longevity, praised CSS for its “simple elegance, but also… its superfluousness and redundancy.”

Shortly after the conference, Bos joined Lie at the W3C. They began drafting a specification that summer. Lie recalls the frenzied and productive work they did fondly. “Most of the content of CSS1 was discussed on the whiteboard in Sophia-Antipolis in July 1995… Whenever I encounter difficult technical problems, I think of Bert and that whiteboard.”

Chris Wilson, in 1995, was already something of an expert in browsers. He had worked at NCSA on the Mosaic team, one of two programmers who created the Windows version. In the basement of the NCSA lab, Wilson was an eager participant in the conversations that helped define the early web.

Most of his colleagues at NCSA packed up and moved to Silicon Valley to work on Netscape’s Mosaic killer. Wilson chose something different. He settled farther north, in Seattle. His first job was with Spry, working on a Mosaic-licensed browser for their Internet In a Box package. However, as an engineer it was hard for Wilson to avoid the draw of Microsoft in Seattle. By 1995, he worked there as a software developer, and by 1996, he was moved to the Internet Explorer team just ahead of the browser’s version 2 release.

Internet Explorer was Microsoft’s late entry to the browser market. Bill Gates had notoriously sidestepped the Internet and the web for years, before completely reversing his company’s position. In that time, Netscape had captured a swiftly expanding market that didn’t exist when they started. They had released two wildly successful versions of their user-friendly, cross-platform browser. Their window to the web was adorned with built-in email, an easy install process, and a new language called JavaScript that let developers add lively animations to a web that had been previously inert.

Microsoft offered comparatively little. Internet Explorer began as a port of Mosaic, but by the time Wilson signed on, it rested on a rewritten codebase. Besides a few built-in native Microsoft features that appealed to the enterprise market, Internet Explorer had been unable to set themselves apart from the sharp focus and pace of Netscape.

Microsoft needed a differentiator. Wilson thought he had one. “There’s this thing called style sheets,” Wilson recalls telling his boss at the time, “it lets you control the fonts and you get to make really pretty looking pages. Netscape isn’t even looking at this stuff.” Wilson got approval to begin working on CSS on the spot.

At the time, the CSS specification wasn’t yet complete. To bridge the gap of how things were supposed to work, Wilson met regularly with Lie, Bos, and other members of the W3C. They would make edits to their draft specification, and Wilson would try it out in his browser. Rinse and repeat. Later, they even brought Vidur Apparao from Netscape into their discussions, which became more formal. Eventually, they became the CSS Working Group.

Internet Explorer 3 was released in August of 1996. It was the first browser to have any support for CSS, a language that hadn’t yet been formally recommended by the W3C. Later, that would become an issue. “There are still a lot of IE3s out there,” Lie would later say a few years after its initial release, “and since they don’t conform to the specification, it’s very hard to write a style sheet that will work well with IE3 while also working well with later browsers.”

Internet Explorer 3 (Credit: My Internet Explorer)

At the time, however, it was eminently necessary. A working version of CSS, powered by a browser at the largest tech company in the world, lent stability. Table-based layouts and Netscape-only tags were still more widely adopted, but CSS now stood a chance.

By 1997, the W3C split the HTML working group into three parts, with CSS getting its own dedicated group formed from the ad-hoc Internet Explorer 3 team. It would be chaired by Chris Lilley, who came to the web as a computer graphics specialist. Lilley had pointed out years earlier the need for a standardized web technology for design. At the W3C, he would lead the effort to do just that.

The first formal Recommendation of CSS was published in December of 1997. Six months later, CSS version 2 was released.

As chair of the working group, Lilley was active on the www-talk mailing list. He’d often solicit advice or answer questions from developers. On one such exchange, he received an email from one Eric Meyer. “Hey, I threw together these test pages, I don’t know if you’d be interested in them,” was how Meyer remembers the message, adding that he didn’t realize that “there was nothing else quite like it in existence.”

Eric Meyer was at the web conference in Chicago where Håkon Lie first demoed CSS, though not at the session. He didn’t get a chance to actually see CSS until a few years later, at the fifth annual Web Conference in Paris. He was there to present a paper on web technology he had developed while working as the Case Western webmaster. His real purpose there, however, was to discover the probable future of the web.

He attended one panel featuring Håkon Lie and Bert Bos, alongside Dave Raggett. They each spoke to the capabilities of CSS as part of the W3C specification. Chris Wilson was there too, nursing a bit of a cold but nevertheless emphatically demoing a working version of CSS in Internet Explorer 3. “I’d never even heard of CSS before, but by the time that panel was over, the top of my head felt like it had blown off,” Meyer would later say, “I was instantly sold. It just felt right.”

Eric A. Meyer (Credit meyerweb.com)

Meyer got home and began experimenting with CSS. But he quickly hit a wall. He had a little more than a spec to go off of—there wasn’t such a thing as formal documentation or CSS tutorials—but something felt off. He’d code a bit of CSS and expect it to work one way, and it’d work another.

That’s when he began to pull together test pages. Meyer would isolate his code to a single feature of CSS. Then he’d test that across browsers, and document their inconsistencies, alongside how he thought they should work. “I think it was mostly the sheer joy of crawling through a new system, pulling it apart, figuring out how it worked, and documenting what worked and what didn’t. I don’t know exactly why those kinds of things excite me, but they do.” Over the years, Meyer has built a career on top of this type of experimentation.

Those test pages—posted to Meyer’s website and later to other blogs—carefully arranged and unknowingly documented the proper implementation of CSS according to its specification. Once Chris Lilley got a hold of them, the CSS Working Group helped Meyer transform them into the official W3C CSS Test Suite, an important tool to assist browsers working to introduce CSS.

Test pages and tutorials on Meyer’s personal site soon became regular columns on popular blogs. Then O’Reilly approached him about writing a book, which eventually became CSS: The Definitive Guide. Research for the book connected Meyer to the people that were building CSS inside of the W3C and browsers. He, in turn, shared what he learned with the web development community. Before long, Meyer had cemented a legacy as a central figure in the history of CSS.

His work continued. When the Web Standards Project reached out to programmer John Allsopp to form a committee dedicated to CSS, he immediately thought of Meyer. Meyer was joined by Allsopp and several others: Sue Sims, Ian Hickson, David Baron, Roland Eriksson, Ken Gunderson, Braden McDaniel, Liam Quinn and Todd Fahrner. Collectively, their official title was the CSS Action Committee, but they often went by CSS Samurai.

CSS was a properly standardized design language. If done right, it could shake loose the Netscape-only features and table-based layouts of the past. But browsers were not catching up to CSS quick enough for some developers. And when they did, it was frequently an afterthought. “You really can’t imagine, unless you lived through it, just how buggy and inconsistent and frustrating browser support for CSS was,” Meyer would later recall. The goal of the CSS Samurai was to fix that.

The committee took a familiar Web Standards Project approach, publishing public reports about lack of browser support on the one hand, and privately meeting with browser makers to discuss changes on the other. A third objective of the committee was to speak to developers directly. Grassroots education became a central goal to the work of the CSS Samurai, an effective instrument of change from the ground up.

Netscape provided the greatest hurdle. Wholly dependent on JavaScript, Netscape used a non-standard version of CSS known as JSSS, a language which has since been largely forgotten. The browser processed style sheets dynamically, using JavaScript to render the page, which made its support uneven and often slow to load. It was not until the release of the Gecko rendering engine in the early 2000s that JSSS was removed. As Netscape transformed into Mozilla in the wake of that change, it would finally come around to a functional CSS implementation.

But with other browsers, particularly with versions of Internet Explorer that were capturing larger segments of the market, WaSP proved successful. The hearts and minds of developers were with them, as they entered a new era of styling on the web.

There was at least one conversation over coffee that saved CSS. There may have been more, but the conversation in question happened in 1999, between Todd Fahrner and Tantek Çelik. Fahrner was a member of the Web Standards Project and a CSS Samurai, often on the front-lines of change. Among untold work with and for the web, he helped Meyer with the CSS Test Suite and developed a practical litmus test for CSS support known as the Acid Test.

Çelik worked at Microsoft. He was largely responsible for bringing web standards support into Internet Explorer for Mac, years before other major browsers would do the same. Çelik would have a long and lasting impact on the development of CSS. He would soon join the Web Standards Project Steering Committee. Later, as a member of the CSS Working Group, he would contribute and help edit several specifications.

On that particular day, over coffee, the topic of conversation was the web’s existential crisis. For years, browsers had added ad-hoc, uneven and incompatible versions of CSS. With a formalized Recommendation from the W3C, there was finally an objectively correct way of doing things. But if browsers took the new, correct rules from the W3C and applied them to all of the sites that had relied on the old, incorrect rules from before, they would suddenly look broken.

What they needed was a toggle. Some sort of switch that developers could turn on to signal that they wanted the new, correct rules. That day, Fahrner proposed using the doctype declaration. It’s a bit of text at the top of the HTML page that specifies a document type definition (the one Dan Connolly had spent years at the W3C standardizing). The practice became known as doctype switching. It meant that new sites could code CSS the right way, and old sites would continue to work just fine.
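In practice, the heuristic looked roughly like this (this example shows the idea, not any one browser’s exact rules):

```html
<!-- A complete doctype with a DTD URL: render with the new, correct CSS rules -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
  "http://www.w3.org/TR/html4/strict.dtd">

<!-- No doctype at all: assume a legacy page and emulate the old,
     incorrect behavior (what came to be called "quirks mode") -->
```

Old pages, which almost never carried a strict doctype, kept their familiar rendering, while new pages could opt in to standards mode simply by declaring themselves.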

When Internet Explorer for Mac version 5 was released, it included doctype switching. Before long, all the browsers did. That swung the door open for standards-compliant CSS in browsers.

“We have not learned to design the Web.” So read the first line of the introduction of Molly Holzschlag’s 2003 book Cascading Style Sheets: The Designer’s Edge. It was a bold statement, not the first or the last from Holzschlag, who has had a profound and lasting impact on the evolution of the web. Throughout her career, Holzschlag has been a restless advocate for the people who use the web, even when that has clashed with the makers of web technology. Her decades-long history with the web has spanned well beyond CSS, to almost every aspect of its development and evolution.

Holzschlag goes on. “To get to this point in the web’s history, we’ve had to borrow guidelines from other media, hack and workaround our way through browser inconsistencies, and bend markup so far out of its normal shape that we’ve broken it.”

Molly Holzschlag

At the end of 2000, Netscape released the sixth version of their browser. Internet Explorer 6 came out not long after. The style sheets for these browsers were far more capable than any that had come before. But Microsoft wouldn’t release another browser for five years. Netscape, all but defeated by Microsoft, would take years to regroup and reform as the more capable and standards-compliant Firefox.

The work of the Web Standards Project and the W3C had brought a working version of CSS to the web. But it was incomplete, and often difficult to understand. And developers had to take older browsers into account, which many people still used.

In the early 2000s, creators of the web were caught between a past riddled with inconsistency and a future that captured their imagination. “Designers and developers were pushing the bounds of what browsers were capable of,” web developer Eevee recalls about using CSS at the time, “Browsers were handling it all somewhat poorly. All the fixes and workarounds and libraries were arcane, brittle, error-prone, and/or heavy.”

Most web designers continued to rely on a combination of HTML table hacks and Netscape-specific tags to create advanced designs. Level two of CSS offered even more possibilities, but designers were hesitant to go all in and risk a bad experience for Netscape users. “Netscape Navigator 4 was holding everyone back,” developer Dave Shea would later say, “It just barely supported CSS, and certainly not in any capacity that we could start building completely table-less sites. And the business case for continued support was too strong to ignore.”

Beneath the surface, however, a vibrant and influential community spread new ideas through blogs and mailing lists and books. That community introduced clever solutions with equally clever names. The “Holly Hack” and “clearfix” came from Position is Everything, maintained by Holly Bergevin and John Gallant. Douglas Bowman’s “Sliding Doors of CSS,” Dan Webb and Patrick Griffiths’ “Suckerfish Dropdowns,” and Dan Cederholm’s “Faux Columns” all came from Jeffrey Zeldman’s A List Apart blog. Meyer and Allsopp, meanwhile, created the css-discuss mailing list as a workshop for innovative ideas and practice.

“It’s going to be the people using CSS in the next few years who will come up with the innovative design ideas we need to help drive the potential of the Web in general.”

Molly Holzschlag

And yet, so much of the energy of that community was spent on hacks and workarounds and creative solutions. The most interesting design ideas always came attached with a caveat, a bit of code to make it work in this browser or that. The first edition of The CSS Anthology by Rachel Andrew, which became a handbook for many CSS developers, featured an entire chapter on what to do about Netscape 4.

The innovators of CSS—beset by disparities difficult to explain—were forced to pick apart the language and find a way through to their designs. In the wake of that newness came a creative surge. Some of the most expressive and shrewd designs in the web’s history came out of this era.

That very same community, however, often fell to a collective preoccupation with what they could make CSS do. A culture that, at times, overvalued hacks and workarounds. Largely out of necessity, shared education focused on the how rather than the why. Too-clever techniques that sometimes outpaced their usefulness.

That would begin to change. Holzschlag ended the introduction to her book on CSS with a nod to the future. “It’s going to be the people using CSS in the next few years who will come up with the innovative design ideas we need to help drive the potential of the Web in general.”

Dave Shea was an ideological disciple of the Web Standards Project, an active member of a growing CSS community. He agreed with Holzschlag. “We entered a period where individuals could help shape the future of the web,” he would later describe the moment. Like others, he was frustrated with the limitations of browsers without CSS support.

The antidote to this type of frustration was often to have a bit of fun. Though getting larger by the day, the web design community was small and familiar. For some, it became a hobby to disseminate inspiration. Domino Shriver compiled a list of CSS designs on his site, WebNoveau, later maintained by Meryl Evans. Each day, new web pages designed with CSS would be posted to its homepage. Chris Casciano’s Daily CSS Fun amended that approach. Each day he’d post a new style sheet for the same HTML file, capturing the wide range of designs CSS made possible.

In May of 2003, Shea produced his own take on the format when he launched the CSS Zen Garden. The project rested on a simple premise. Each page used exactly the same HTML file with exactly the same content. The only thing that changed was the page’s style sheet, the CSS applied to that HTML. Rather than create them himself, Shea solicited style sheets from developers all over the world to create a digital gallery of CSS inspiration. Designs ranged from constructed minimalism to astonishingly baroque. It was a playground to explore what was possible.

At once a source of influence, a practical demonstration of CSS advantages, and a showcase of great web design, the Zen Garden spread to the far ends of the web. What began with five designs soon turned into a website filled with dozens of different designs. And then more. “Hundreds of designers have made their mark—and sometimes their reputations—by creating Zen Garden layouts,” author Jeffrey Zeldman would later say in his book Designing with Web Standards, “and tens of thousands all over the world have learned to love CSS because of it.”

Though Zen Garden would become the most well-known, it was only one contribution to a growing oeuvre of inspiration projects on the web. Web creators wanted to look to the future.

In 2005, Shea published a book based on the project with Molly Holzschlag called The Zen of CSS Design. By then, CSS had web designers’ full attention.

In 1998, in an attempt to keep pace with Microsoft, Netscape made the decision to release their browser for free and to open-source its code under a newly formed umbrella project known as Mozilla, which would ultimately lead to the release of the Firefox browser in 2004.

David Baron and Ian Hickson both began their careers at Mozilla in the late 1990s as volunteers, and later interns, on the Mozilla Quality Assurance team, identifying standards-compliance bugs. It was through the course of that work that they became deeply familiar not just with how CSS was supposed to work, but how, in practice, it was being used inside a standards-driven browser. During that time, Hickson and Baron became an integral part of a growing CSS community, and joined the CSS Samurai. They helped write and run the tests for the CSS Test Suite. They became active participants in the www-style mailing list, and later, the CSS Working Group itself.

While Meyer was writing his first book, CSS: The Definitive Guide, he recalls asking Baron and Hickson for help in understanding how some parts of CSS worked. “I doubt that I will ever stop owing them for their dedication to getting me through the wilderness of my own misunderstandings,” he would later say. It was their attention to detail that would soon make them an incredible asset.

Browsers understand style sheets, the language of CSS, based on the words of the specifications at the W3C. If the language is not specific enough, or if not every edge case or feature combination has been considered, this can lead to incompatibilities among browsers. While working at the W3C, Hickson and Baron helped bring the vague language of its technical specifications into clearer focus. They made the definition of CSS more precise, consistent, and easier to implement correctly.

Their work, alongside Bert Bos, Tantek Çelik, Håkon Lie and others, led to a substantial revision of the second version of CSS, what CSS Working Group member Elika Etemad would later describe as “a long process of plugging the holes, fixing errors, and building test suites for the core CSS standard.” It was tireless work, as much about conversation with browser programmers as actual technical work and writing.

It was also a job nobody thought would take very long. Two versions of CSS had been released within a few years; a minor revision was expected to take a fraction of the time. At a conference a few months in, several CSS editors joked that if they stayed up late that night, they might finish the work by morning. Instead, it would take nearly a decade.

For years, Elika Etemad, then known only as ‘fantasai’, had been an active member of the www-style mailing list and Mozilla bug tracker. It had put her in conversations with browser makers, and members of the W3C. Though she had spoken with many different members of the CSS Working Group over the years, some of her most engaged and frequent discussions were with David Baron and Ian Hickson. Like Hickson and Baron, ‘fantasai’ was uncovering bugs and spec errors that no one else had noticed—and happily reporting what she found.

Elika Etemad (Credit: Web Conferences Amsterdam)

That work earned her an invite to the W3C Technical Plenary in 2004. Each year, members of the W3C working groups travel to shifting locations (2020 was the first year it was held virtually) for the event. W3C discussions are mostly done through emails and conference calls and editorial comments. For some members, the plenary is the only time they see each other face to face all year. In 2004, it was held in the south of France, in a town called Mandelieu-la-Napoule, overlooking the Bay of Cannes. It was there that Etemad met Baron and Hickson in person for the first time.

The CSS Working Group, several years into their work on CSS 2.1, invited Etemad to join them. Microsoft had all but pulled back from the standards process after the release of Internet Explorer 6 in 2001. The working group had to work with actively developed browsers like Mozilla and Opera while constrained by the stagnant IE6. They spent years ironing out the details, always feeling on the verge of completion. “We’re almost out of issues, and the new issues we are getting are usually minor stuff like typo fixes and so forth,” Hickson posted in 2006, still years away from a final specification.

During this time, the CSS Working Group was also working on something new. Hickson and Baron had learned from CSS 2.1, an exhaustive but monolithic specification. “We succeeded,” Hickson would later comment, “but boy are they insanely complicated. What we should have done instead is just break the constraints and come up with something simpler, ideally something that more closely matched what browsers implemented at the time.” Over time, the CSS Working Group began to shift their approach. Specifications would no longer be single, immutable documents. They would change over time to accommodate real-world browser implementations.

Beginning with CSS3, the specification also transitioned to a new format to cover a wider set of features and keep pace with browser development. CSS3 consists of a number of modules, each addressing a single area of functionality—including color, fonts, text, and more advanced concepts like media queries. “Some of the CSS3 modules out there are ‘concept albums,’” ‘fantasai’ describes, “specs that are sketching out the future of CSS.” These “concepts” are developed independently and at a variable pace. Each CSS3 module has its own editors. Collectively, they have contributed to a bolder vision of CSS. Individually, they are developed alongside real-world browser implementations and, on their own, can more deftly adapt to change.

The modular approach to CSS3 would prove effective. The second decade of CSS would be different than the first, introducing sweeping changes and refreshing new features. New features would lead to new designs, and eventually, a new web.

The post Chapter 8: CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Swipey Image Grids

Css Tricks - Mon, 05/03/2021 - 4:35am

I hope people think of SVG as a vector format that is good for drawing things. There is plenty more to know, but here’s one more: SVG is good for composition. You draw things at very specific coordinates in SVG and, while they can scale, they tend to stay put. And while SVG is a vector format, you can place raster images onto it. That’s my favorite part of Cassie’s “Swipey image grids” post. The swipey part is cool, but the composition is even cooler.

<svg viewBox="0 0 100 100">
  <rect x="30" y="0" width="70" height="50" fill="blue"/>
  <rect x="60" y="60" width="40" height="40" fill="green"/>
  <rect x="0" y="30" width="50" height="70" fill="pink"/>
  <image x="30" y="0" width="70" height="50" href="https://place-puppy.com/300x300"/>
  <image x="60" y="60" width="40" height="40" href="https://place-puppy.com/700x300"/>
  <image x="0" y="30" width="50" height="70" href="https://place-puppy.com/800x500"/>
</svg>

You’ll need to check this out in Chrome, Edge or Firefox:


Don’t miss Cassie’s interactive examples explaining preserveAspectRatio. That’s a thing I normally think of on the <svg> itself, but is used to great effect on the <image> elements themselves here. It’s like a more powerful object-fit and object-position.
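As a rough sketch of that idea (the puppy-image URL is just a stand-in), setting preserveAspectRatio on an individual <image> controls how the photo fills its x/y/width/height box. The "slice" keyword scales the photo up until it covers the box and crops the overflow, much like object-fit: cover with object-position: center:

```html
<svg viewBox="0 0 100 100">
  <!-- "xMidYMid slice" scales the photo to cover the 70x50 box,
       centered, and crops whatever spills outside it -->
  <image x="30" y="0" width="70" height="50"
         preserveAspectRatio="xMidYMid slice"
         href="https://place-puppy.com/300x300"/>
</svg>
```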

Direct Link to ArticlePermalink

The post Swipey Image Grids appeared first on CSS-Tricks.


Safari 14.1 Adds Support for Flexbox Gaps

Css Tricks - Fri, 04/30/2021 - 11:51am

Yay, it’s here! Safari 14.1 reportedly adds support for the gap property in flexbox layouts. We’ve had grid-gap support for some time, but true to its name, it’s limited to grid layouts. Now we can use gap in either type of layout:

.container {
  display: flex;
  flex-flow: row wrap;
  gap: 1.5rem;
}
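One caveat worth knowing, and it’s not specific to Safari: @supports can only test whether a browser parses the gap property at all, and browsers that only honor gap in grid layouts still parse it, so there’s no reliable way to feature-detect flexbox gap with @supports alone:

```css
/* This query matches in any browser that understands the gap property,
   including ones that only apply it in grid layouts — so it can't tell
   you whether gap will actually work in a flexbox container. */
@supports (gap: 1.5rem) {
  .container {
    gap: 1.5rem;
  }
}
```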

Apple’s been rather quiet about the update. I didn’t even hear about it until Eric Meyer brought it up in an email, and he only heard about it when Jen Simmons tweeted it out.

Did you see, Safari 14.1 shipped yesterday… including support for Gaps in Flexbox.

(*Can I Use & MDN still need to be updated, so they don't show support yet… but it's there! Try it out.)

— Jen Simmons (@jensimmons) April 27, 2021

I checked the WebKit CSS Feature Status and, sure enough, it’s supported. It just wasn’t called out in the release notes. Nothing is posted to Safari’s release notes archive just yet, so maybe we’ll hear about it officially there at some point. For now, it seems that Maximiliano Firtman‘s overview of iOS 14.5 is the deepest look at what’s new.

And how timely is it that Eric recently covered how to use the gap property, not only in grid and flexbox layouts, but multi-column as well.
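As a quick sketch of that range (the class names here are just for illustration), the same spacing idea carries across layout modes, with column-gap covering multi-column:

```css
/* Hypothetical class names; gap behaves consistently across layout modes */
.grid    { display: grid; grid-template-columns: repeat(3, 1fr); gap: 1rem; }
.flex    { display: flex; flex-wrap: wrap; gap: 1rem; }
.columns { columns: 3; column-gap: 1rem; /* multi-column spacing */ }
```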

The post Safari 14.1 Adds Support for Flexbox Gaps appeared first on CSS-Tricks.


©2003 - Present Akamai Design & Development.