Developer News

How to Use Tailwind on a Svelte Site

Css Tricks - Fri, 03/12/2021 - 8:53am

Let’s spin up a basic Svelte site and integrate Tailwind into it for styling. One advantage of working with Tailwind is that there isn’t any context switching going back and forth between HTML and CSS, since you’re applying styles as classes right on the HTML. It’s all in the same file in Svelte anyway, but still, this way you don’t even need a <style> section in your .svelte files.

If you are a Svelte developer or enthusiast and you’d like to use Tailwind CSS in your Svelte app, this article looks at the easiest, most straightforward way to install Tailwind in your app and hit the ground running creating a unique, modern UI.

If you’d like to just see a working example, here’s a working GitHub repo.

Why Svelte?

Performance-wise, Svelte is widely considered to be one of the top JavaScript frameworks on the market right now. Created by Rich Harris in 2016, it has been growing rapidly and becoming popular in the developer community. This is mainly because, while very similar to React (and Vue), Svelte is much faster. When you create an app with React, the final code at build time is a mixture of React and vanilla JavaScript. But browsers only understand vanilla JavaScript. So when a user loads your app in a browser (at runtime), the browser has to download React’s library to help generate the app’s UI. This slows down the process of loading the app significantly.

How’s Svelte different? It comes with a compiler that compiles all your app code into vanilla JavaScript at build time. No Svelte code makes it into the final bundle. In this instance, when a user loads your app, their browser downloads only vanilla JavaScript files, which are lighter. No framework UI library is needed. This significantly speeds up the process of loading your app. For this reason, Svelte applications are usually very small and lightning fast.

The only downside Svelte currently faces is that, being still so new, it doesn’t have the kind of ecosystem and community backing that more established frameworks like React enjoy.

Why Tailwind?

Tailwind CSS is a CSS framework. It’s somewhat similar to popular frameworks, like Bootstrap and Materialize, in that you apply classes to elements and it styles them. But it is also atomic CSS in that one class name does one thing. While Tailwind does have Tailwind UI for pre-built componentry, generally you customize Tailwind to look how you want it to look, so there is less risk of “looking like a Bootstrap site” (or whatever other framework that is less commonly customized).

For example, rather than give you a generic header component that comes with some default font sizes, margins, paddings, and other styling, Tailwind provides you with utility classes for different font sizes, margins, and paddings. You can pick the specific ones you want and create a unique looking header with them.

Tailwind has other advantages as well:

  • It saves you the time and stress of writing custom CSS yourself. With Tailwind, you get thousands of out-of-the-box CSS classes that you just need to apply to your HTML elements.
  • One thing most users of Tailwind appreciate is the naming convention of the utility classes. The names are simple and they do a good job of telling you what their functions are. For example, text-sm gives your text a small font size. This is a breath of fresh air for people who struggle with naming custom CSS classes.
  • By utilizing a mobile-first approach, responsiveness is at the heart of Tailwind’s design. Making use of the sm, md, and lg prefixes to specify breakpoints, you can control the way styles are rendered across different screen sizes. For example, if you use the md prefix on a style, that style will only be applied to medium-sized screens and larger. Small screens will not be affected. (There’s a short sketch of this just after the list.)
  • It prioritizes making your application lightweight by making PurgeCSS easy to set up in your app. PurgeCSS is a tool that runs through your application and optimizes it by removing all unused CSS classes, significantly reducing the size of your style file. We’ll use PurgeCSS in our practice project.
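For instance, here’s a small illustrative sketch of those responsive prefixes (the classes are examples, not part of the project we’ll build): a heading that is 1.5rem by default and bumps up to 2.25rem on medium screens and larger.

<!-- text-2xl applies everywhere; md:text-4xl overrides it from the md breakpoint up -->
<h1 class="text-2xl md:text-4xl">Responsive heading</h1>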

All this said, Tailwind might not be your cup of tea. Some people believe that adding lots of CSS classes to your HTML elements makes your HTML code difficult to read. Some developers even think it’s bad practice and makes your code ugly. It’s worth noting that this problem can easily be solved by abstracting many classes into one using the @apply directive, and applying that one class to your HTML, instead of the many.
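As a quick, hypothetical sketch of that approach (the .btn-primary name and the exact utilities here are made up for illustration), a pile of button utilities could be collapsed into a single class:

/* Processed by Tailwind, @apply inlines these utilities into one class */
.btn-primary {
  @apply bg-blue-900 text-white font-bold py-2 px-4 rounded;
}

The HTML then only needs <button class="btn-primary">Save</button> instead of all five utility classes.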

Tailwind might also not be for you if you are someone who prefers ready-made components to avoid stress and save time, or you are working on a project with a short deadline.

Step 1: Scaffold a new Svelte site

Svelte provides us with a starter template we can use. You can get it by either cloning the Svelte GitHub repo, or by using degit. Using degit provides us with certain advantages, like helping us make a copy of the starter template repository without downloading its entire Git history (unlike git clone). This makes the process faster. Note that degit requires Node 8 and above.

Run the following command to clone the starter app template with degit:

npx degit sveltejs/template project-name

Navigate into the directory of the starter project so we can start making changes to it:

cd project-name

The template is mostly empty right now, so we’ll need to install some required npm packages:

npm install

Now that you have your Svelte app ready, you can proceed to combining it with Tailwind CSS to create a fast, light, unique web app.

Step 2: Adding Tailwind CSS

Let’s proceed to adding Tailwind CSS to our Svelte app, along with some dev dependencies that will help with its setup.

npm install tailwindcss@npm:@tailwindcss/postcss7-compat postcss@^7 autoprefixer@^9

# or, with Yarn:
yarn add tailwindcss@npm:@tailwindcss/postcss7-compat postcss@^7 autoprefixer@^9

The three tools we are downloading with the command above:

  1. Tailwind
  2. PostCSS
  3. Autoprefixer

PostCSS is a tool that uses JavaScript to transform and improve CSS. It comes with a bunch of plugins that perform different functions like polyfilling future CSS features, highlighting errors in your CSS code, controlling the scope of CSS class names, etc.

Autoprefixer is a PostCSS plugin that goes through your code adding vendor prefixes to your CSS rules (Tailwind does not do this automatically), using caniuse as reference. While browsers rely on prefixed CSS properties less than they did in years past, some older browsers still depend on them. Autoprefixer helps with that backwards compatibility, while also supporting future compatibility for browsers that might apply a prefix to a property prior to it becoming a standard.
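For instance, given a property that still needs prefixes in some older browsers, Autoprefixer might expand it roughly like this (the exact output depends on your browserslist configuration, so treat this as a sketch):

/* Input */
.box {
  user-select: none;
}

/* Output (approximate) */
.box {
  -webkit-user-select: none;
  -moz-user-select: none;
  -ms-user-select: none;
  user-select: none;
}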

For now, Svelte works with an older version of PostCSS. PostCSS’s latest version, PostCSS 8, was released in September 2020. So, to avoid getting any version-related errors, our command above specifies PostCSS 7 instead of 8. A PostCSS 7 compatibility build of Tailwind is made available under the compat channel on npm.

Step 3: Configuring Tailwind

Now that we have Tailwind installed, let’s create the configuration file needed and do the necessary setup. In the root directory of your project, run this to create a tailwind.config.js file:

npx tailwindcss init tailwind.config.js

Being a highly customizable framework, Tailwind allows us to easily override its default configurations with custom configurations inside this tailwind.config.js file. This is where we can easily customize things like spacing, colors, fonts, etc.

The tailwind.config.js file is provided to prevent ‘fighting the framework’ which is common with other CSS libraries. Rather than struggling to reverse the effect of certain classes, you come here and specify what you want. It’s in this file that we also define the PostCSS plugins used in the project.

The file comes with some default code. Open it in your text editor and add this compatibility code to it:

future: {
  purgeLayersByDefault: true,
  removeDeprecatedGapUtilities: true,
},

In Tailwind 2.0 (the latest version), all layers (e.g., base, components, and utilities) are purged by default. In previous versions, however, just the utilities layer is purged. We can manually configure Tailwind to purge all layers by setting the purgeLayersByDefault flag to true.

Tailwind 2.0 also removes some gap utilities, replacing them with new ones. We can manually remove them from our code by setting removeDeprecatedGapUtilities to true.

These will help you handle deprecations and breaking changes from future updates.

PurgeCSS

The several thousand utility classes that come with Tailwind are added to your project by default. So, even if you don’t use a single Tailwind class in your HTML, your project still carries the entire library, making it rather bulky. We’ll want our files to be as small as possible in production, so we can use purge to remove all of the unused utility classes from our project before pushing the code to production.

Since this is mainly a production problem, we specify that purge should only be enabled in production.

purge: {
  content: [
    "./src/**/*.svelte",
  ],
  enabled: production // disable purge in dev
},

Now, your tailwind.config.js should look like this:

const production = !process.env.ROLLUP_WATCH;

module.exports = {
  future: {
    purgeLayersByDefault: true,
    removeDeprecatedGapUtilities: true,
  },
  plugins: [],
  purge: {
    content: [
      "./src/**/*.svelte",
    ],
    enabled: production // disable purge in dev
  },
};

Rollup.js

Our Svelte app uses Rollup.js, a JavaScript module bundler made by Rich Harris, the creator of Svelte, that is used for compiling multiple source files into one single bundle (similar to webpack). In our app, Rollup performs its function inside a configuration file called rollup.config.js.

With Rollup, we can freely break our project up into small, individual files to make development easier. Rollup also helps to lint, prettify, and syntax-check our source code during bundling.

Step 4: Making Tailwind compatible with Svelte

Navigate to rollup.config.js and import the sveltePreprocess package. This package helps us handle all the CSS processing required with PostCSS and Tailwind.

import sveltePreprocess from "svelte-preprocess";

Under plugins, add sveltePreprocess and require Tailwind and Autoprefixer, as Autoprefixer will be processing the CSS generated by these tools.

preprocess: sveltePreprocess({
  sourceMap: !production,
  postcss: {
    plugins: [
      require("tailwindcss"),
      require("autoprefixer"),
    ],
  },
}),

Since PostCSS is an external tool with a syntax that’s different from Svelte’s framework, we need a preprocessor to process it and make it compatible with our Svelte code. That’s where the sveltePreprocess package comes in. It provides support for PostCSS and its plugins. We specify to the sveltePreprocess package that we are going to require two external plugins from PostCSS, Tailwind and Autoprefixer. sveltePreprocess runs the foreign code from these two plugins through Babel and converts them to code supported by the Svelte compiler (ES6+). Rollup eventually bundles all of the code together.

The next step is to inject Tailwind’s styles into our app using the @tailwind directive. You can think of @tailwind loosely as a function that helps import and access the files containing Tailwind’s styles. We need to import three sets of styles.

The first set of styles is @tailwind base. This injects Tailwind’s base styles—mostly pulled straight from Normalize.css—into our CSS. Think of the styles you commonly see at the top of stylesheets. Tailwind calls these Preflight styles. They are provided to help solve cross-browser inconsistencies. In other words, they remove all the styles that come with different browsers, ensuring that only the styles you employ are rendered. Preflight helps remove default margins, make headings and lists unstyled by default, and a host of other things. Here’s a complete reference of all the Preflight styles.

The second set of styles is @tailwind components. While Tailwind is a utility-first library created to prevent generic designs, it’s almost impossible to not reuse some designs (or components) when working on a large project. Think about it. The fact that you want a unique-looking website doesn’t mean that all the buttons on a page should be designed differently from each other. You’ll likely use a button style throughout the app.

Follow this thought process. We avoid frameworks, like Bootstrap, to prevent using the same kind of button that everyone else uses. Instead, we use Tailwind to create our own unique button. Great! But we might want to use this nice-looking button we just created on different pages. In this case, it should become a component. Same goes for forms, cards, badges etc.

All the components you create will eventually be injected into the position that @tailwind components occupies. Unlike other frameworks, Tailwind doesn’t come with lots of predefined components, but there are a few. If you aren’t creating components and plan to only use the utility styles, then there’s no need to add this directive.

And, lastly, there’s @tailwind utilities. Tailwind’s utility classes are injected here, along with the ones you create.

Step 5: Injecting Tailwind Styles into Your Site

It’s best to inject all of the above into a high-level component so they’re accessible on every page. You can inject them in the App.svelte file:

<style global lang="postcss">
  @tailwind base;
  @tailwind components;
  @tailwind utilities;
</style>

Now that we have Tailwind set up, let’s create a website header to see how Tailwind works with Svelte. We’ll create it in App.svelte, inside the main tag.

Step 6: Creating A Website Header

Starting with some basic markup:

<nav>
  <div>
    <div>
      <a href="#">APP LOGO</a>
      <!-- Menus -->
      <div>
        <ul>
          <li>
            <a href="#">About</a>
          </li>
          <li>
            <a href="#">Services</a>
          </li>
          <li>
            <a href="#">Blog</a>
          </li>
          <li>
            <a href="#">Contact</a>
          </li>
        </ul>
      </div>
    </div>
  </div>
</nav>

This is the header HTML without any Tailwind CSS styling. Pretty standard stuff. We’ll wind up moving the “APP LOGO” to the left side, and the four navigation links on the right side of it.

Now let’s add some Tailwind CSS to it:

<nav class="bg-blue-900 shadow-lg"> <div class="container mx-auto"> <div class="sm:flex"> <a href="#" class="text-white text-3xl font-bold p-3">APP LOGO</a> <!-- Menus --> <div class="ml-55 mt-4"> <ul class="text-white sm:self-center text-xl"> <li class="sm:inline-block"> <a href="#" class="p-3 hover:text-red-900">About</a> </li> <li class="sm:inline-block"> <a href="#" class="p-3 hover:text-red-900">Services</a> </li> <li class="sm:inline-block"> <a href="#" class="p-3 hover:text-red-900">Blog</a> </li> <li class="sm:inline-block"> <a href="#" class="p-3 hover:text-red-900">Contact</a> </li> </ul> </div> </div> </div> </nav>

OK, let’s break down all those classes we just added to the HTML. First, let’s look at the <nav> element:

<nav class="bg-blue-900 shadow-lg">

The bg-blue-900 class gives our header a blue background with a shade of 900, which is dark. The shadow-lg class applies a large outer box shadow. The shadow effect this class creates will be 0px at the top, 10px on the right, 15px at the bottom, and -3px on the left.

Next is the first div, our container for the logo and navigation links:

<div class="container mx-auto">

To center it and our navigation links, we use the mx-auto class. It’s equivalent to setting margin-left: auto and margin-right: auto, horizontally centering an element within its container.

Onto the next div:

<div class="sm:flex">

By default, a div is a block-level element. We use the sm:flex class to make our header a block-level flex container, so as to make its children responsive (to enable them to shrink and expand easily). We use the sm prefix to ensure that the style is applied to all screen sizes (small and above).

Alright, the logo:

<a href="#" class="text-white text-3xl font-bold p-3">APP LOGO</a>

The text-white class, true to its name, makes the text of the logo white. The text-3xl class sets the font size of our logo (which is configured to 1.875rem) and its line height (configured to 2.25rem). From there, p-3 sets a padding of 0.75rem on all sides of the logo.

That takes us to:

<div class="ml-55 mt-4">

We’re giving the navigation links a left margin of 55% to move them to the right. However, there’s no Tailwind class for this, so we’ve created a custom style called ml-55, a name that’s totally made up but stands for “margin-left 55%.”

It’s one thing to name a custom class. We also have to add it to our style tags:

.ml-55 {
  margin-left: 55%;
}

There’s one more class in there: mt-4. Can you guess what it does? If you guessed that it sets a top margin, then you are correct! In this case, it’s configured to 1rem for our navigation links.

Next up, the navigation links are wrapped in an unordered list tag that contains a few classes:

<ul class="text-white sm:self-center text-xl">

We’re using the text-white class again, followed by sm:self-center to center the list—again, we use the sm prefix to ensure that the style is applied to all screen sizes (small and above). Then there’s text-xl which is the extra-large configured font size.

For each list item:

<li class="sm:inline-block">

The sm:inline-block class sets each list item as an inline block-level element, bringing them side-by-side.

And, lastly, the link inside each list item:

<a href="#" class="p-3 hover:text-red-900">

We use the utility class hover:text-red-900 to make each link red on hover.

Let’s run our app in the command line:

npm run dev

This is what we should get:

And that is how we used Tailwind CSS with Svelte in six little steps!

Conclusion

My hope is that you now know how to integrate Tailwind CSS into a Svelte app and configure it. We covered some pretty basic styling, but there’s always more to learn! Here’s an idea: Try improving the project we worked on by adding a sign-up form and a footer to the page. Tailwind provides comprehensive documentation on all its utility classes. Go through it and familiarize yourself with the classes.

Do you learn better with video? Here are a couple of excellent videos that also go into the process of integrating Tailwind CSS with Svelte.


Platform News: Defaulting to Logical CSS, Fugu APIs, Custom Media Queries, and WordPress vs. Italics

Css Tricks - Fri, 03/12/2021 - 5:51am

Looks like 2021 is the time to start using CSS Logical Properties! Plus, Chrome recently shipped a few APIs that have raised eyebrows, SVG allows us to disable its aspect ratio, WordPress focuses on the accessibility of its typography, and there’s still no update (or progress) on the development of CSS custom media queries.

Let’s jump right into the news…

Logical CSS could soon become the new default

Six years after Mozilla shipped the first bits of CSS Logical Properties in Firefox, this feature is now on a path to full browser support in 2021. The categories of logical properties and values listed in the table below are already supported in Firefox, Chrome, and the latest Safari Preview.

CSS property or value → The logical equivalent

margin-top → margin-block-start
text-align: right → text-align: end
bottom → inset-block-end
border-left → border-inline-start
(n/a) → margin-inline

Logical CSS also introduces a few useful shorthands for tasks that in the past required multiple declarations. For example, margin-inline sets the margin-left and margin-right properties, while inset sets the top, right, bottom and left properties.

/* BEFORE */
main {
  margin-left: auto;
  margin-right: auto;
}

/* AFTER */
main {
  margin-inline: auto;
}
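The inset shorthand mentioned above works the same way. A quick sketch:

/* BEFORE */
.overlay {
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
}

/* AFTER */
.overlay {
  inset: 0;
}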

A website can add support for an RTL (right-to-left) layout by replacing all instances of left and right with their logical counterparts in the site’s CSS code. Switching to logical CSS makes sense for all websites because the user may translate the site to a language that is written right-to-left using a machine translation service. The biggest languages with RTL scripts are Arabic (310 million native speakers), Persian (70 million), and Urdu (70 million).

/* Switch to RTL when Google translates the page to an RTL language */
.translated-rtl {
  direction: rtl;
}

David Bushell’s personal website now uses logical CSS and relies on Google’s translated-rtl class to toggle the site’s inline base direction. Try translating David’s website to an RTL language in Chrome and compare the RTL layout with the site’s default LTR layout.

Chrome ships three controversial Fugu APIs

Last week Chrome shipped three web APIs for “advanced hardware interactions”: the WebHID and Web Serial APIs on desktop, and Web NFC on Android. All three APIs are part of Google’s capabilities project, also known as Project Fugu, and were developed in W3C community groups (though they’re not web standards).

  • The WebHID API allows web apps to connect to old and uncommon human interface devices that don’t have a compatible device driver for the operating system (e.g., Nintendo’s Wii Remote).
  • The Web Serial API allows web apps to communicate (“byte by byte”) with peripheral devices, such as microcontrollers (e.g., the Arduino DHT11 temperature/humidity sensor) and 3D printers, through an emulated serial connection.
  • Web NFC allows web apps to wirelessly read from and write to NFC tags at short distances (less than 10 cm).

Apple and Mozilla, the developers of the other two major browser engines, are currently opposed to these APIs. Apple has decided to “not yet implement due to fingerprinting, security, and other concerns.” Mozilla’s concerns are summarized on the Mozilla Specification Positions page.

Source: webapicontroversy.com

Stretching SVG with preserveAspectRatio=none

By default, an SVG scales to fit the <svg> element’s content box, while maintaining the aspect ratio defined by the viewBox attribute. In some cases, the author may want to stretch the SVG so that it completely fills the content box on both axes. This can be achieved by setting the preserveAspectRatio attribute to none on the <svg> element.
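Here’s a minimal sketch (the path and dimensions are made up for illustration). Without preserveAspectRatio="none" the wave would keep its 100:20 aspect ratio; with it, the graphic stretches to fill the element on both axes:

<svg viewBox="0 0 100 20" preserveAspectRatio="none" width="100%" height="60">
  <path d="M0 10 Q 25 0, 50 10 T 100 10 V 20 H 0 Z" fill="currentColor" />
</svg>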

View demo

Distorting SVG in this manner may seem counterintuitive, but disabling aspect ratio via the preserveAspectRatio=none value can make sense for simple, decorative SVG graphics on a responsive web page:

This value can be useful when you are using a path for a border or to add a little effect on a section (like a diagonal [line]), and you want the path to fill the space.

WordPress tones down the use of italics

An italic font can be used to highlight important words (e.g., the <em> element), titles of creative works (<cite>), technical terms, foreign phrases (<i>), and more. Italics are helpful when used discreetly in this manner, but long sections of italic text are considered an accessibility issue and should be avoided.

Italicized text can be difficult to read for some people with dyslexia or related forms of reading disorders.

Putting the entire help text in italics is not recommended

WordPress 5.7, which was released earlier this week, removed italics on descriptions, help text, labels, error details text, and other places in the WordPress admin to “improve accessibility and readability.”

In related news, WordPress 5.7 also dropped custom web fonts, opting for system fonts instead.

Still no progress on CSS custom media queries

The CSS Media Queries Level 5 module specifies a @custom-media rule for defining custom media queries. This proposed feature was originally added to the CSS spec almost seven years ago (in June 2014), and since then it has neither been further developed nor received any interest from browser vendors.

@custom-media --narrow-window (max-width: 30em);

@media (--narrow-window) {
  /* narrow window styles */
}

A media query used in multiple places can instead be assigned to a custom media query, which can be used everywhere, and editing the media query requires touching only one line of code.

Custom media queries may not ship in browsers for quite some time, but websites can start using this feature today via the official PostCSS plugin (or PostCSS Preset Env) to reduce code repetition and make media queries more readable.
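A minimal sketch of that setup, assuming the postcss-custom-media plugin is installed as a dev dependency:

// postcss.config.js
module.exports = {
  plugins: [
    // Compiles @custom-media rules away into plain media queries
    require("postcss-custom-media")()
  ]
};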

On a related note, there is also the idea of author-defined environment variables, which (unlike custom properties) could be used in media queries, but this potential feature has not yet been fully fleshed out in the CSS spec.

@media (max-width: env(--narrow-window)) { /* narrow window styles */ }


Table of Contents with IntersectionObserver

Css Tricks - Thu, 03/11/2021 - 11:00am

If you have a table of contents on a long-scrolling page, thanks to, say, position: fixed; or position: sticky;, the IntersectionObserver API in JavaScript is the perfect companion to highlight items in the table of contents when corresponding content is in view.
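Here’s a minimal sketch of the general idea (the .toc selector and the active class are placeholder names, not from any particular implementation):

// Grab every link in the table of contents
const tocLinks = document.querySelectorAll(".toc a");

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    // Highlight the TOC link that points at the heading now in view
    const link = document.querySelector(`.toc a[href="#${entry.target.id}"]`);
    if (!link) return;
    tocLinks.forEach((l) => l.classList.remove("active"));
    link.classList.add("active");
  });
});

// Watch every heading the table of contents links to
document.querySelectorAll("h2[id], h3[id]").forEach((heading) => observer.observe(heading));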

Ben Frain has a post all about this:

Thanks to IntersectionObserver we have a small but very efficient bit of code to create our table of contents, provide quick links to jump around the document and update readers on where they are in a document as they read.

Compared to older techniques that need to bind to scroll events and perform their own math, this code is shorter, faster, and more logical. If you’re looking for the demo on Ben’s site, the article is the demo. And here’s a video on it:

I’ve mentioned this stuff before, but here’s a Bramus Van Damme version:

[CodePen embed]

And here’s a version from Hakim el Hattab that is just begging for someone to port it to IntersectionObserver because the UI is so cool:

[CodePen embed]



Chapter 7: Standards

Css Tricks - Thu, 03/11/2021 - 6:14am

It was the year 1994 that the web came out of the shadow of academia and onto everyone’s screens. In particular, it was the second half of the second week of December 1994 that capped off the year with three eventful days.

Members of the World Wide Web Consortium huddled around a table at MIT on Wednesday, December 14th. About two dozen people made it to the meeting, representatives from major tech companies, browser makers, and web-based startups. They were there to discuss open standards for the web.

When done properly, standards set a technical lodestar. Companies with competing interests and priorities can orient themselves around a common set of agreed upon documentation about how a technology should work. Consensus on shared standards creates interoperability; competition happens through user experience instead of technical infrastructure.

The World Wide Web Consortium, or W3C as it is more commonly referred to, had been on the mind of the web’s creator, Sir Tim Berners-Lee, as early as 1992. He had spoken with a rotating roster of experts and advisors about an official standards body for web technologies. The MIT Laboratory for Computer Science soon became his most enthusiastic ally. After years of work, Berners-Lee left his job at CERN in October of 1994 to run the consortium at MIT. He had no intention of being a dictator. He had strong opinions about the direction of the web, but he still preferred to listen.

W3C, 1994

On the agenda — after the table had been cleared with some basic introductions — was a long list of administrative details that needed to be worked out. The role of the consortium, the way it conducted itself, and its responsibilities to the wider web was little more than sketched out at the beginning of the meeting. Little by little, the 25 or so members walked through the list. By the end of the meeting, the group felt confident that the future of web standards was clear.

The next day, December 15th, Jim Clark and Marc Andreessen announced the recently renamed Netscape Navigator version 1.0. It had been out for several months in beta, but that Thursday marked a wider release. In a bid for a growing market, it was initially given away for free. Several months later, after the release of version 1.1, Netscape would be forced to walk that back. In either case, the browser was a commercial and technical success, improving on the speed, usability, and features of browsers that had come before it.

On Friday, December 16th, the W3C experienced its first setback. Berners-Lee never meant for MIT to be the exclusive site of the consortium. He planned for CERN, the birthplace of the web and home to some of its greatest advocates, to be a European host for the organization. On December 16th, however, CERN approved a massive budget for its Large Hadron Collider, forcing them to shift priorities. A refocused budget left little room for hypertext Internet experiments not directly contributing to the central project of particle physics.

CERN would no longer be the European host of the W3C. All was not lost. Months later, the W3C set up at France’s National Institute for Research in Computer Science and Control, or INRIA. By 1996, a third site at Japan’s Keio University would also be established.

Far from an outlier, this would not be the last setback the W3C ever faced, or that it would overcome.

In 1999, Berners-Lee published an autobiographical account of the web’s creation in a book entitled Weaving the Web. It is a concise and even history, a brisk walk through the major milestones of the web’s first decade. Throughout the book, he often returns to the subject of the W3C.

He frames the web consortium, first and foremost, as a matter of compromise. “It was becoming clear to me that running the consortium would always be a balancing act, between taking the time to stay as open as possible and advancing at the speed demanded by the onrush of technology.” Striking a balance between shared compatibility and shorter and shorter browser release cycles would become a primary objective of the W3C.

Web standards, he concedes, thrives through tension. Standards are developed amidst disagreement and hard-won bargains. Recalling a time just before the W3C’s creation, Berners-Lee notes how the standards process reflects the structure of the web. “It struck me that these tensions would make the consortium a proving ground for the relative merits of weblike and treelike societal structures,” he wrote, “I was eager to start the experiment.” A web consortium born of compromise and defined by tension, however, was not Berners-Lee’s first plan.

In March of 1992, Berners-Lee flew to San Diego to attend a meeting of the Internet Engineering Task Force, or IETF. Created in 1986, the IETF develops standards for the Internet, ranging from networking to routing to DNS. IETF standards are unenforceable and entirely voluntarily. They are not sanctioned by any world government or subject to any regulations. No entity is obligated to use them. Instead, the IETF relies on a simple conceit: interoperability helps everyone. It has been enough to sustain the organization for decades.

Because everything is voluntary, the IETF is managed by a labyrinthine set of rules and ritualistic processes that can be difficult to understand. There is no formal membership, though anyone can join (in its own words it has “no members and no dues”). Everyone is a volunteer, no one is paid. The group meets in person three times a year at shifting locations.

The IETF operates on a principle known as rough consensus (and, often times, running code). Rather than a formal voting process, disputed proposals need to come to some agreement where most, if not all, of the members in a technology working group agree. Working group members decide when rough consensus has been met, and its criteria shifts from year to year and group to group. In some cases, the IETF has turned to humming to take the temperature of a room. “When, for example, we have face-to-face meetings… instead of a show of hands, sometimes the chair will ask for each side to hum on a particular question, either ‘for’ or ‘against’.”

It is against the backdrop of these idiosyncratic rules that Berners-Lee first came to the IETF in March of 1992. He hoped to set up a working group for each of the primary technologies of the web: HTTP, HTML, and the URI (which would later be renamed to URL through the IETF). In March he was told he would need another meeting, this one in June, to formally propose the working groups. Somewhere close to the end of 1993, a year and a half after he began, he had persuaded the IETF to set up all three.

The process of rough consensus can be slow. The web, by contrast, had redefined what fast could look like. New generations of browsers were coming out in months, not years. And this was before Netscape and Microsoft got involved.

The development of the web had spiraled outside Berners-Lee’s sphere of influence. Inline images — a feature maybe most responsible for the web’s success — was a product of a late night brainstorming session over snacks and soda in the basement of a university lab. Berners-Lee learned about it when everyone else did, when Marc Andreessen posted it to the www-talk mailing list.

Tension. Berners-Lee knew that it would come. He had hoped, for instance, that images might be treated differently (“Tim bawled me out in the summer of ’93 for adding images to the thing,” Andreessen would later say), but the web was not his. It was not anybody’s. He had designed it that way.

With all of its rules and rituals, the IETF did not seem like the right fit for web standards. In private discussions at universities and research labs, Berners-Lee had begun to explore a new path. Something like a consortium of stakeholders in the web — a collection of companies that create browsers and websites and software — that can come together to agree upon a rough consensus for themselves. By the end of 1993, his work on the W3C had already begun.

Dave Raggett, a seasoned researcher at Hewlett-Packard, had a different view of the web. He wasn’t from academia, and he wasn’t working on a browser (not yet anyway). He understood almost instinctively the utility of the web as commercial software. Something less like a digital phonebook and more like Apple’s wildly successful Hypercard application.

Unable to convince his bosses of the web’s promise, Raggett used the ten percent of time HP allowed for its employees to pursue independent research to begin working with the web. He anchored himself to the community, an active member of the www-talk mailing list and a regular presence at IETF meetings. In the fall of 1992, he had a chance to visit with Berners-Lee at CERN.

Yuri Rubinsky

It was around this time that he met Yuri Rubinsky, an enthusiastic advocate for Standard Generalized Markup Language, or SGML, the language that HTML was originally based on. Rubinsky believed that the limitations of HTML could be solved by a stricter adherence to the SGML standard. He had begun a campaign to bring SGML to the web. Raggett agreed — but to a point. He was not yet ready to sever ties with HTML.

Each time Mosaic shipped a new version, or a new browser was released, the gap between the original HTML specification and the real world web widened. Raggett believed that a more comprehensive record of HTML was required. He began working on an enhanced version of HTML, and a browser to demo its capabilities. Its working title was HTML+.

Raggett’s work soon began to spill over to his home life. He’d spend most nights “at a large computer that occupied a fair portion of the dining room table, sharing its slightly sticky surface with paper, crayons, Lego bricks and bits of half-eaten cookies left by the children.” After a year of around-the-clock work, Raggett had a version of HTML+ ready to go in November of 1993. His improvements to the language were far from superficial. He had managed to add all of the little things that had made their way into browsers: tables, images with captions and figures, and advanced forms.

Several months later, in May of 1994, developers and web enthusiasts traveled from all over the world to come to what some attendees would half-jokingly refer to as the “Woodstock of the Web,” the first official web conference organized by CERN employee and web pioneer Robert Cailliau. Of the 800 people clamoring to come, the space in Geneva could hold only 350. Many were meeting for the first time. “Everyone was milling about the lobby,” web historian Marc Weber would later describe, “electrified by the same sensation of meeting face-to-face actual people who had been just names on an email or on the www-talk [sic] mailing list.”

Members of the first conference

It came at a moment when the web stood on the precipice of ubiquity. Nobody from the Mosaic team had managed to make it (they had their own competing conference set for just a few months later), but there were already rumors about Mosaic alum Marc Andreessen’s new commercial browser that would later be called Netscape Navigator. Mosaic, meanwhile, had begun to license their browser for commercial use. An early version of Yahoo! was growing exponentially as more and more publications, like GNN, Wired, The New York Times, and The Wall Street Journal, came online.

Progress at the IETF, on the other hand, had been slow. It was too meticulous, too precise. In the meantime, browsers like Mosaic had begun to add whatever they wanted — particularly to HTML. Tags supported by Mosaic couldn’t be found anywhere else, and website creators were forced to choose between cutting-edge technology and compatibility with other browsers. Many were choosing the former.

HTML+ was the biggest topic of conversation at the conference. But another highlight was when Dan Connolly — a young, “red-haired, navy-cut Texan” who worked at the supercomputer manufacturer Convex — took the stage. He gave a talk called “Interoperability: Why Everyone Wins.” Later, and largely because of that talk, Connolly would be made chair of the IETF HTML Working Group.

In a prescient moment capturing the spirit of the room, Connolly described a future when the language of HTML fractured. When each browser implemented their own set of HTML tags in an effort to edge out the competition. The solution, he concluded, was an HTML standard that was able to evolve at the pace of browser development.

Raggett’s HTML+ made a strong case for becoming that standard. It was exhaustive, describing the new HTML used in browsers like Mosaic in near-perfect detail. “I was always the minimalist, you know, you can get it done without that,” Connolly later said, “Raggett, on the other hand, wanted to expand everything.” The two struck an agreement. Raggett would continue to work through HTML+ while Connolly focused on a more narrow upgrade.

Connolly’s version would soon become HTML 2, and after a year of back and forth and rough consensus building at the IETF, it became an official standard. It didn’t have nearly the detail of HTML+, but Connolly was able to officially document features that browsers had been supporting for years.

Raggett’s proposal, renamed to HTML 3, was stuck. In an effort to accommodate an expanding web, it continued to grow in size. “To get consensus on a draft 150 pages long and about which everyone wanted to voice an opinion was optimistic – to say the least,” Raggett would later put it, rather bluntly. But by then, Raggett was already working at the W3C, where HTML 3 would soon become a reality.

Berners-Lee also spoke at the first web conference in Geneva, closing it out with a keynote address. He didn’t specifically mention the W3C. Instead, he focused on the role of web. “The people present were the ones now creating the Web,” he would later write of his speech, “and therefore were the only ones who could be sure that what the systems produced would be appropriate to a reasonable and fair society.”

In October of 1994, he embarked on his own part in making a more equitable and accessible future for the web. The World Wide Web Consortium was officially announced. Berners-Lee was joined by a handful of employees — a list that included both Dave Raggett and Dan Connolly. Two months later, in the second half of the second week of December of 1994, the members of the W3C met for the first time.

Before the meeting, Berners-Lee had a rough sketch of how the W3C would work. Any company or organization could join given that they pay the membership fee, a tiered pricing structure tied to the size of that company. Member organizations would send representatives to W3C meetings, to provide input into the process of creating standards. By limiting W3C proceedings to paying members, Berners-Lee hoped to focus and scope the conversations to real world implementations of web technologies.

Yet despite a closed membership, the W3C operates in the open whenever possible. Meeting notes and documentation are open to anybody in the public. Any code written as part of experiments in new standards is freely downloadable.

Gathered at MIT, the W3C members had to next decide how its standards would work. They decided on a process that stops just short of rough consensus. Though they are often called standards, the W3C does not create official standards for the web. The technical specifications created at the W3C are known, in their final form, as recommendations.

They are, in effect, proposals. They outline, in great detail, how exactly a technology works. But they leave enough open that it is up to browsers to figure out exactly how the implementation works. “The goal of the W3C is to ensure interoperability of the Web, and in the long range that’s realistic,” former head of communications at the W3C Sally Khudairi once described it, “but in the short range we’re not going to play Web cops for compliance… we can’t force members to implement things.”

Initial drafts create a feedback loop between the W3C and its members. They provide guidance on web technologies, but even as specifications are in the process of being drafted, browsers begin to introduce them and developers are encouraged to experiment with them. Each time issues are found, the draft is revised, until enough consensus has been reached. At that point, a draft becomes a recommendation.

There would always be tension, and Berners-Lee knew that well. The trick was not to try to resist it, but to create a process where it becomes an asset. Such was the intended effect of recommendations.

At the end of 1995, the IETF HTML working group was replaced by a newly created W3C HTML Editorial Review Board. HTML 3.2 would be the first HTML version released entirely by the W3C, based largely on Raggett’s HTML+.

There was a year in web development, 1997, when browsers broke away from the still-new recommendations of the W3C. Microsoft and Netscape began to release a new set of features separate and apart from agreed upon standards. They even had a name for them. They called them Dynamic HTML, or DHTML. And they almost split the web in two.

DHTML was originally celebrated. Dynamic meant fluid. A natural evolution from HTML’s initial inert state. The web, in other words, came alive.

Touting its capabilities, a feature in Wired in 1997 referred to DHTML as the “magic wand Web wizards have long sought.” In its enthusiasm for the new technology, it makes a small note that “Microsoft and Netscape, to their credit, have worked with the standards bodies,” specifically on the introduction of Cascading Style Sheets, or CSS, but that most features were being added “without much regard for compatibility.”

The truth on the ground was that using DHTML required targeting one browser or another, Netscape or Internet Explorer. Some developers chose to simply choose a path, slapping a banner at the bottom of their site that displayed “Best Viewed In…” one browser or another. Others ignored the technology entirely, hoping to avoid its tangled complexity.

Browsers had their reasons, of course. Developers and users were asking for things not included in the official HTML specification. As one Microsoft representative put it, “In order to drive new technologies into the standards bodies, you have to continue innovating… I’m responsible to my customers and so are the Netscape folks.”

A more dynamic web was not a bad thing, but a splintered web was untenable. For some developers, it would prove to be the final straw.

Following the release of HTML 3.2, and with the rapid advancement of browsers, the HTML Editorial Review Board was divided into three parts. Each was given a separate area of responsibility to make progress on, independent of the others.

Dr. Lauren Wood (Photo: XML Summer School)

Dr. Lauren Wood became chair of the Document Object Model Working Group. A former theoretical nuclear physicist, Wood was the Director of Product Technology at SoftQuad, a company founded by SGML advocate Yuri Rubinsky. While there, she helped work on the HoTMetaL HTML editor. The DOM spec created a standardized way for browsers to implement Dynamic HTML. “You need a way to tie your data and your programs together,” was how Wood described it, “and the Document Object Model is that glue.” Her work on the Document Object Model, and later XML, would have a long-lasting influence on the web.

The Cascading Style Sheets Working Group was chaired by Chris Lilley. Lilley’s background was in computer graphics, as a teacher and specialist in the Computer Graphics Unit at the University of Manchester. Lilley had worked at the IETF on the HTML 2 spec, as well as a specification for Portable Network Graphics (PNG), but this would mark his first time as a working group chair.

CSS was still a relative newcomer in 1997. It had been in the works for years, but had yet to have a major release. Lilley would work alongside the creators of CSS — Håkon Lie and Bert Bos — to create the first CSS standard.

The final working group was for HTML, left under the auspices of Dan Connolly, continuing his position from the IETF. Connolly had been around the web almost as long as Berners-Lee had. He was one of the people watching back in October of 1991, when Berners-Lee demoed the web for a small group of unimpressed people at a hypertext conference in San Antonio. In fact, it was at that conference that he first met the woman that would later become his wife.

After he returned home, he experimented with the web. He messaged Berners-Lee a month later. It was only four words: “You need a DTD.”

When Berners-Lee developed the language of HTML, he borrowed its convention from a predecessor, SGML. IBM developed Generalized Markup Language (GML) in the early 1970s to make it easier for typists to create formatted books and reports. However, it quickly got out of control, as people would take shortcuts and use whatever version of the tags that they wanted.

That’s when they developed the Document Type Definition, or as Connolly called it, a DTD. DTDs are what added the “S” (Standard) to GML. Using SGML, you can create a standardized set of instructions for your data, its scheme and its structure, to help computers understand how to interpret it. These instructions are a document type definition.

Beginning with version 2, Connolly added a type definition to HTML. It limited the language to a smaller set of agreed-upon tags. In practice, browsers treated this more as a loose definition, continuing to implement their own DHTML features and tags. But it was a first step.
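That’s why, for example, an HTML 2.0 document announced which DTD it conformed to with a doctype declaration at the very top of the file:

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">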

In 1997, the HTML Working Group, now inside of the W3C, began to work on the fourth iteration of HTML. It expanded the language, adding to the specification far more advanced features, complex tables and forms, better accessibility, and a more defined relationship with CSS. But it also split HTML from a single schema into three different document type definitions for browsers to adopt.

The first, Frameset, was not typically used. The second, Transitional, was there to include the mistakes of the past. It expanded a larger subset of HTML that included non-standard, presentational HTML that browsers had used for years, such as <font> and <center>. This was set as a default for browsers.

The third DTD was called Strict. Under the Strict definition, HTML was pared down to only its standard, non-presentational features. It removed all of the unique tags introduced by Netscape and Microsoft, leaving only structured elements. If you use HTML today, it likely draws on the same base of tags.

The Strict definition drew a line in the sand. It said, this is HTML. And it finally gave a way for developers to code once for every browser.
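Pages opted into one of the three definitions with a doctype declaration at the top of the document; the Strict one (shown here in its later HTML 4.01 form) looked like this:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">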

In the August 1998 issue of Computerworld — tucked between large features on the impending doom of Y2K, the bristling potential of billing on the World Wide Web, and antitrust concerns about Microsoft — was a small announcement. Its headline read, “Browser standards targeted.” It was about the creation of a new grassroots organization of web developers aimed at bringing web standards support to browsers. It was called the Web Standards Project.

Glenn Davis, co-creator of the project, was quoted in the announcement. “The problem is, with each generation of the browser, the browser manufacturers diverge farther from standards support.” Developers, forced to write different code for different browsers for years, had simply had enough. A few off-hand conversations in mailing lists had spiraled into a fully grown movement. At launch, 450 developers and designers had already signed up.

Davis was not new to the web, and he understood its challenges. His first experience on the web dated all the way back to 1994, just after Mosaic had first introduced inline images, when he created the gallery site Cool Site of the Day. Each day, he would feature a single homepage from an interesting or edgy or experimental site. For a still small community of web designers, it was an instant hit.

There were no criteria other than sites that Davis thought were worth featuring. “I was always looking for things that push the limits,” was how he would later define it. Davis helped to redefine the expectations of the early web, using the moniker cool as a shorthand to encompass many possibilities. Dot-com Design author and media professor Megan Ankerson points out that “this ecosystem of cool sites gestured towards the sheer range of things the web could be: its temporal and spatial dislocations, its distinction from and extension of mainstream media, its promise as a vehicle for self-publishing, and the incredible blend of personal, mundane, and extraordinary.” For a time on the web, Davis was the arbiter of cool.

As time went on Davis transformed his site into Project Cool, a resource for creating websites. In the days of DHTML, Davis’ Project Cool tutorials provided constructive and practical techniques for making the most out of the web. And a good amount of his writing was devoted to explaining how to write code that was usable in both Netscape Navigator and Microsoft’s Internet Explorer. He eventually reached a breaking point, along with many others. At the end of 1997, Netscape and Microsoft both released their 4.0 browsers with spotty standards support. It was already clear that upcoming 5.0 releases were planning to lean even further into uneven and contradictory DHTML extensions.

Running out of patience, Davis helped set up a mailing list with George Olsen and Jeffrey Zeldman. The list started with two dozen people, but it gathered support quickly. The Web Standards Project, known as WaSP, officially launched from that list in August of 1998. It began with a few hundred members and announcement in magazines like Computer World. Within a few months, it would have tens of thousands of members.

The strategy for WaSP was to push browsers — publicly and privately — into web standards support. WaSP was not meant to be a hyperbolic name. “The W3C recommends standards. It cannot enforce them,” Zeldman once said of the organization’s strategy, “and it certainly is not about to throw public tantrums over non-compliance. So we do that job.”

A prominent designer and standards advocate, Zeldman would have an enduring influence on makers of the web. He would later run WaSP during some of its most influential years. His website and mailing list, A List Apart, would become a gathering place for designers who cared about web standards and using the latest web technologies.

WaSP would change focus several times during their decade and a half tenure. They pushed browsers to make better use of HTML and CSS. They taught developers how to write standards-based code. They advocated for greater accessibility and tools that supported standards out of the box.

But their mission, published to their website on the first day of launch, would never falter. “Our goal is to support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all.”

WaSP succeeded in their mission on a few occasions early on. Some browsers, notably Opera, had standards baked in at the beginning; their efforts were praised by WaSP. But the two browsers that collectively made up a majority of web use — Internet Explorer and Netscape Navigator — would need some work.

A four billion dollar sale to AOL in 1998 was not enough for Netscape to compete with Microsoft. After the release of Netscape 4.0, they doubled down on a bold strategy, choosing to release the entire browser’s code as open source under the Mozilla project. Everyday consumers could download it for free; coders were encouraged to contribute directly.

Members of the community soon noticed something in Mozilla. It had a new rendering engine, often referred to as Gecko. Unlike planned releases of Netscape 5, which had patchy standards support at best, Gecko supported a fairly complete version of HTML 4 and CSS.

WaSP diverted their formidable membership to the task of pushing Netscape to include Gecko in its next major release. One familiar WaSP tactic was known as roadblocking. Some of its members worked at publications like HotWired and CNet. WaSP would coordinate articles across several outlets all at once criticizing, for instance, Netscape’s neglect of standards in the face of a perfectly reasonable solution in Gecko. By doing so, they were often able to capture the attention of at least one news cycle.

WaSP also took more direct action. Members were asked to send emails to browsers, or sign petitions showing widespread support for standards. Overwhelming pressure from developers was occasionally enough to push browsers in the right direction.

In part because of WaSP, Netscape agreed to make Gecko part of version 5.0. Beta versions of Netscape 5 would indeed have standards-compliant HTML and CSS, but it was beset with issues elsewhere. It would take years for a release. By then, Microsoft’s dominion over the browser market would be near complete.

As one of the largest tech companies in the world, Microsoft was more insulated from grassroots pressure. The on-the-ground tactics of WaSP proved less successful when turned against the tech giant.

But inside the walls of Microsoft, WaSP had at least one faithful follower, developer Tantek Çelik. Çelik has fought tirelessly on the side of web standards for as long as his web career stretches. He would later become a member of the WaSP Steering Committee and a representative for a number of working groups at the W3C working directly on the development of standards.

Tantek Çelik (Photo: Tantek.com)

Çelik ran a team inside of Internet Explorer for Mac. Though it shared a name, branding, and general features with its far more ubiquitous Windows counterpart, IE for Mac ran on a separate codebase. Çelik’s team was largely left to its own devices in a colossal organization with other priorities working on a browser that not many people were using.

With the direction of the browser largely left up to him, Çelik began to reach out to web designers in San Francisco at the cutting edge of web technology. Through a stroke of luck he was connected to several members of the Web Standards Project. He’d visit with them and ask what they wanted to see in the Mac IE browser. “The answer: better standards support.”

They helped Çelik realize that his work on a smaller browser could be impactful. If he was able to support standards, as they were defined by the W3C, it could serve as a baseline for the code that the designers were writing. They had enough to worry about with buggy standards in IE for Windows and Netscape, in other words. They didn’t need to also worry about IE for Mac.

That was all that Çelik needed to hear. When Internet Explorer 5.0 for Mac launched in 2000, it had across-the-board support for web standards: HTML, PNG images, and, most impressively, one of the most ambitious implementations of the new Cascading Style Sheets (CSS) specification.

It would take years for the Windows version to get anywhere close to the same kind of support. Even half a decade later, after Çelik left to work at the search engine Technorati, they were still playing catch-up.

Towards the end of the millennium, the W3C found themselves at a fork in the road. They looked to their still-recent past and saw it filled with contentious support for standards — incompatible browsers with their own priorities. Then they looked the other way, to their towering future. They saw a web that was already evolving beyond the confines of personal computers. One that would soon exist on TVs and in cell phones, and on devices that hadn’t been dreamed up yet, in paradigms yet to be invented. Their past and their future were incompatible. And so, they reacted.

Yuri Rubinsky had an unusual talent for making connections. In his time as a standards advocate, developer, and executive at a major software company, he had managed to find time to connect some of the web’s most influential proponents. Sadly, Rubinsky died suddenly and at a young age in 1996, but his influence would not soon be forgotten. He carried with him an infectious energy and a knack for persuasion. His friend and colleague Peter Sharpe would say upon his death that in “talking to the people from all walks of life who knew Yuri, there was a common theme: Yuri had entered their lives and changed them forever.”

Rubinsky devoted his career to making technology more accessible. He believed that without equitable access, technology was not worth building. It motivated all of the work he did, including his longstanding advocacy of SGML.

SGML is a meta-language and “you use it to build your own computer languages for your own purposes.” If you hand a document over to a computer, SGML is how you can give that computer instructions on how to understand it. It provides a standardized way to describe the structure of data — the tags that it uses and the order it is expected in. The ownership of data, therefore, is not locked up and defined at some unknown level; it is given to everybody.

Rubinsky believed in that kind of universal access, a world in which machines talked to each other in perfect harmony, passing sets of data between them, structured, ordered, and formatted for its users. His company, SoftQuad, built software for SGML. He organized and spoke at conferences about it. He created SGML Open, a consortium not unlike the W3C. “SGML provides an internationally standardized, vendor-supported, multi-purpose, independent way of doing business,” was how he once described it, “If you aren’t using it today, you will be next year.” He was almost right.

He had a mission on the web as well. HTML is actually based on SGML, though it uses only a small part of it. Rubinsky was beginning to have conversations with members of the W3C, like Berners-Lee and Raggett, about bringing a more comprehensive version of SGML to the web. He was even writing a book called SGML on the Web before his death.

In the hallways of conferences and in threaded mailing lists, Rubinsky used his unique propensity for persuasion to bring several people together on the subject, including Dan Connolly, Lauren Wood, Jon Bosak, James Clark, Tim Bray, and others. Eventually, those conversations moved into the W3C. A formal working group was created and, in November of 1996, eXtensible Markup Language (XML) was formally announced, and then adopted as a W3C Recommendation. The announcement took place at an annual SGML conference in Boston, run by an organization where Rubinsky sat on the Board of Directors.

XML is SGML, minus a few things, renamed and repackaged as a web language. That means it goes far beyond the capabilities of HTML, giving developers a way to define their own structured data with completely unique tags (e.g., an <ingredients> tag in a recipe, or an <author> tag in an article). Over the years, XML has become the backbone of widely used technologies, like RSS and MathML, as well as server-level APIs.

XML was appealing to the maintainers of HTML, a language that was beginning to feel somewhat complete. “When we published HTML 4, the group was then basically closed,” Steve Pemberton, chair of the HTML working group at the time, described the situation. “Six months later, though, when XML was up and running, people came up with the idea that maybe there should be an XML version of HTML.” The merging of HTML and XML became known as XHTML. Within a year, it was the W3C’s main focus.

The first iterations of XHTML, drafted in 1998, were not that different from what already existed in the HTML specifications. The only real difference was that it had stricter rules for authors to follow. But that small constraint opened up new possibilities for the future, and XHTML was initially celebrated. The Web Standards Project issued a press release on the day XHTML was published, lauding its capabilities, and developers began to make use of the stricter markup rules, in line with the work Connolly had already done with Document Type Definitions.

XHTML represented a web with deeper meaning. Data would be owned by the web’s creators. Together, computers and programmers could create a more connected and understandable web. That meaning was labeled semantics. The Semantic Web would become the W3C’s greatest ambition, and they would chase it for close to a decade.

W3C, 2000

Subsequent versions of XHTML would introduce even stricter rules, leaning harder into the structure of XML. Drafted in 2002, the XHTML 2.0 specification became a harbinger of the language’s decline. It removed backwards compatibility with older versions of HTML, even as Microsoft’s Internet Explorer — the leading browser by a wide margin at this point — refused to support it. “XHTML 2 was a beautiful specification of philosophical purity that had absolutely no resemblance to the real world,” said Bruce Lawson, an HTML evangelist for Opera at the time.

Rather than uniting standards under a common banner, XHTML, and the refusal of major browsers to fully implement it, threatened to split the web apart permanently. It would take something bold to push web standards in a new direction. But that was still years away.


How I Built my SaaS MVP With Fauna ($150 in revenue so far)

Css Tricks - Thu, 03/11/2021 - 6:07am

Are you a beginner coder trying to build and launch your MVP? I’ve just finished my MVP of ReviewBolt.com, a competitor analysis tool built using React + Fauna + Next.js. It’s my first paid SaaS tool, so earning $150 is a big accomplishment for me.

In this post you’ll see why I chose Fauna as ReviewBolt’s primary database and how you can implement a similar setup. Fauna easily stores massive amounts of data and gets it to me fast. By the end of this article, you’ll be able to decide whether you also want to create your own serverless website with Fauna as your back end.

What is ReviewBolt?

The website allows you to search any website and get a detailed review of a company’s ad strategies, tech stack, and user experiences.

ReviewBolt currently pulls data from seven different sources to give you an analysis of any website in the world. It will estimate Facebook spend, Google spend, yearly revenue, traffic growth metrics, user reviews, and more!

Why did I build it?

I’ve dabbled in entrepreneurship and I’m always scouting for new opportunities. I thought building ReviewBolt would help me (1) determine how big a company is… and (2) determine its primary distribution channel. This is super important because if you can’t get new users then your business is pretty much dead.

Some other cool tidbits about it:

  • You get a large overview of everything that’s going on with a website.
  • What’s more, every search you make on the website creates a page that gets saved and indexed. So ReviewBolt grows a tiny bit bigger with every user search.

So far, it’s earned $150, gained 50 users, analyzed over 3,000 websites, and helped 5,000+ people with their research. That’s a good start for a solo dev indie-hacker like myself.

It was featured on Betalist and it’s quite popular in entrepreneur circles. You can see my real-time statistics here: reviewbolt.com/stats

I’m not a coder… all self-taught

Building it so far was no easy feat! Originally, I graduated as an English major from McGill University in Canada with zero tech skills. I actually took one programming class in my last year and got a 50%… the lowest passing grade possible.

But between then and now a lot has changed. For the last two years I’ve been learning web and app development. This year my goal was to make a profitable SaaS company, but also to make something that I would find useful.

I built ReviewBolt in my little home office in London during this massive lockdown. The project works, and that’s one step for me on my journey. And luckily I chose Fauna, because it was quite easy to get a fast, reliable database that actually works, at very low cost.

Why did I pick Fauna?

Fauna provides a great free tier and as a solo dev project, I wanted to keep my costs lean to see first if this would actually work.

Warning: I’m no Fauna expert. I actually still have a long way to go to master it. However, this was my setup to create the MVP of ReviewBolt.com that you see today. I made some really dumb mistakes like storing my data objects as strings instead of objects… But you live and learn.

I didn’t start off with Fauna…

ReviewBolt first started as just one large Google Sheet. Every time someone made a website search, it pulled the data from the various sources and saved it as a row in the sheet.

Simple enough right? But there was a problem…

After about 1,000 searches Google Sheets started to break down like an old car on a road trip…. It was barely able to start when I loaded the page. So I quickly looked for something more stable.

Then I found Fauna 😇

I discovered that Fauna was really fast and quite reliable. I started out using their GraphQL feature but realized the native FQL language had much better documentation.

There’s a great dashboard that gives you immediate insight for your usage.

I primarily use Fauna in the following ways:

  1. Storage of 110,000 company bios that I scraped.
  2. Storage of Google Ads data
  3. Storage of Facebook Ad data
  4. Storage of Google Trends data
  5. Storage of tech stack
  6. Storage of user reviews

The 110k companies are stored in one collection and the live data about websites is stored in another. I could have probably created relational databases within Fauna, but that was way beyond me at the time 😅 and it was easier to store everything as one very large object.

For testing, Fauna actually provides a built-in web shell. This is really useful because I can follow the tutorials and try them in real time on the website without loading Visual Studio.

What frameworks does the website use?

The website works using React and NextJS. To load a review of a website you just type in the site.

Every search looks like this: reviewbolt.com/r/[website.com]

The first thing that happens on the back end is that it uses a Fauna Index to see if this search has already been done. Fauna is very efficient at searching your database. Even with a collection of 110k documents, it still works really well because of its use of indexing. So when a page loads — say reviewbolt.com/r/fauna — it first checks to see if there’s a match. If a match is found, then it loads the saved data and renders that on the page.

If there’s no match, the page brings up a spinner while the back end queries all these public APIs about the requested website. As soon as it’s done, it loads the data for the user.

And when that new website is analyzed, its data is saved into my Fauna collection, so the next user won’t have to load everything; we can use Fauna to fetch it instead.
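
Putting the two paths together, the flow looks roughly like this sketch. To be clear, this isn’t ReviewBolt’s actual code: findByName and createCompany are shown later in this post, and fetchFromPublicApis is a made-up placeholder for the seven API calls.

// A rough sketch of the cache-or-fetch flow, not ReviewBolt's actual code.
// findByName and createCompany appear later in this post;
// fetchFromPublicApis is a made-up placeholder for the seven API calls.
export async function getReview(website) {
  // Fast path: this search was already done, so serve the saved data
  const matches = await findByName(website);
  if (matches.length > 0) {
    return matches[0].data;
  }
  // Slow path: query the public APIs, cache the result in Fauna, render
  const review = await fetchFromPublicApis(website);
  await createCompany(website /* …plus the rest of the scraped fields */);
  return review;
}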

My use case is to index all of ReviewBolt’s website searches and then be able to retrieve those searches easily.

What else can Fauna do?

The next step is to create a charts section. So far I built a very basic version of this just for Shopify’s top 90 stores.

But ideally I have one that works by the category using Fauna’s index binding to create multiple indexes around: Top Facebook Spenders, Top Google Spenders, Top Traffic, Top Revenue, Top CRMs by traffic. And that will really be interesting to see who’s at the top for competitor research. Because in marketing, you always want to take inspiration from the winners.


import faunadb from 'faunadb'

// Pull the FQL functions we need off the query namespace. The client
// secret below is a placeholder — use your own database key.
const { Map, Paginate, Match, Index, Lambda, Get, Var, Create, Collection } = faunadb.query
const client = new faunadb.Client({ secret: process.env.FAUNA_SECRET })

export async function findByName(name) {
  // Match the "rbCompByName" index, paginate the matches,
  // then Get each matched document
  var data = await client.query(
    Map(
      Paginate(Match(Index("rbCompByName"), name)),
      Lambda("person", Get(Var("person")))
    )
  )
  return data.data // [0].data
}

This queries Fauna to paginate the results and return the found object.

I run this function when searching for the website name. And then to create a company I use this code:

export async function createCompany(slug, linkinfo, trending, googleData, trustpilotReviews, facebookData, tech, date, trafficGrowth, growthLevels, trafficLevel, faunaData) {
  var Slug = slug
  var Author = linkinfo
  var Trends = trending
  var Google = googleData
  var Reviews = trustpilotReviews
  var Facebook = facebookData
  var TechData = tech
  var myDate = date
  var myTrafficGrowth = trafficGrowth
  var myGrowthLevels = growthLevels
  var myFaunaData = faunaData

  client.query(
    Create(Collection('RBcompanies'), {
      data: {
        "Slug": Slug,
        "Author": Author,
        "Trends": Trends,
        "Google": Google,
        "Reviews": Reviews,
        "Facebook": Facebook,
        "TechData": TechData,
        "Date": myDate,
        "TrafficGrowth": myTrafficGrowth,
        "GrowthLevels": myGrowthLevels,
        "TrafficLevels": trafficLevel,
        "faunaData": JSON.parse(myFaunaData)
      }
    })
  )
    .then(result => console.log(result))
    .catch(error => console.error('Error mate: ', error.message));
}

Which is a bit longer because I’m pulling so much information on various aspects of the website and storing it as one large object.

The Fauna FQL language is quite simple once you get your head around it, especially since, for what I’m doing at least, I don’t need too many commands.

I followed this tutorial on building a Twitter clone and that really helped.

This will change when I introduce charts and I’m sorting through a variety of indexes, but luckily it’s quite easy to do this in Fauna.

What’s the next step to learn more about Fauna?

I highly recommend going through the tutorial on fireship.io. It’s great for learning the basic concepts and it really helped me get to grips with the Fauna query language.

Conclusion

Fauna was quite easy to implement as a basic CRUD system where I didn’t have to worry about fees. The free tier is currently 100k reads and 50k writes, which works for the traffic level ReviewBolt is getting. So I’m quite happy with it so far and I’d recommend it for future projects.


The WordPress Evolution Toward Full-Site Editing

Css Tricks - Wed, 03/10/2021 - 11:45am

The block editor was a game-changer for WordPress. The idea that we can create blocks of content and arrange them in a component-like fashion means we have a lot of flexibility in how we create content, as well as a bunch of opportunities to develop new types of modular content.

But there’s so much more happening in the blocks ecosystem since the initial introduction of the editor. Last year, Dmitry Mayorov wrote about the emergence of block variations and how they provide even more flexibility by extending existing blocks to create styled variations of them.

Three button variations in the WordPress block inserter

Then we got block patterns, or the ability to stitch blocks together into reusable patterns.

Block patterns are sandwiched between Blocks and Reusable Blocks in the block inserter, which is a perfect metaphor for where it fits in the bigger picture of WordPress editing.

So, that means we have blocks, block variations, reusable blocks, and block patterns. That’s a lot of awesome tooling for designing layouts directly in the editor!

But you may have heard that WordPress has plans for blocks that go beyond the post editor. They’re outright targeting global elements — menus, headers, footers, and such — in an effort to establish full-site editing (FSE) capabilities right in WordPress.

Matt Mullenweg introduces the Twenty Twenty-One theme and its beta full-site editing capabilities.

Whoa. I certainly cannot speak for everyone else, but my mind instantly goes to what this means for theme developers. I mean, what is a theme where the templates are designed in the editor instead of code? I’d imagine a theme is a lot like a collection of shells that contain very little markup. And perhaps more development goes into creating blocks, block patterns and block variations to stitch everything together.

That’s actually the case, and you can test it now. Make sure you’re on WordPress 5.6+, then install the experimental TT1 Blocks theme and Gutenberg plugin.

Cracking open the theme, it’s really two PHP templates and then — get this — HTML files used for block templates and block template parts.

The way block templates and block template parts are separated closely mirrors the common current approach of separating templates from template parts.

I’m personally all-in on this direction. I’d even go so far as to say (peeking over my shoulder at Chris) that CSS-Tricks is all-in on this as well. We made the switch to blocks last year and it has reinvigorated our love for writing blog posts just like this one. (Honestly, I would have probably written something like this in the past with a code editor first, then ported it to WordPress with the classic editor. That was a better writing experience for me at the time.)

The TT1 Blocks theme adds a “Site Editor” that takes its cues from the theme’s block templates and block template parts.

While I’m bullish on blocks, I know others aren’t. In fact, I work with plenty of folks who are (and I mean this kindly) blissfully ignorant of the block editor. Developing for the block editor is a huge mental shift and there’s a lack of documentation for it at the moment. Things are still very much in active development, and iterations to the block editor come with every new WordPress release. Can’t blame folks for deciding to wait for the next train as things settle and standards evolve.

But, at the same time, it’s true to Matt Mullenweg’s now infamous advice to WordPress developers in 2015: Learn JavaScript, deeply.

I was (and am still very much) excited about blocks. Full-site editing freaks me out a bit, but that’s mostly because it moves the concept of blocks outside the editor, where I’m only now beginning to get a good feel for them.

Whatever it all means, what I’m looking forward to most is an official release of a default theme that supports FSE. Remember the first time you opened up a WordPress theme? I marveled at the markup and spent countless hours picking at lines of code until I made it my own. That’s the experience I’m expecting the first time I open up the new theme.

Until then, here’s a sorta roundup of ways to stay in the loop:

  • Make WordPress Design – The handbook lists FSE as one of the team’s current priorities with an overview of the project. It was last updated May 2020, so I’m not sure how current the information is and whether the page is still maintained.
  • How to Test FSE – Instructions for setting up an FSE site locally and participating in testing.
  • TT1 Theme Repo – See what’s being reported and the status of those issues. This is the spot to watch for theme development.
  • Gutenberg Plugin Repo – Issues reported for the plugin. This is the spot to watch for block development.
  • Theme Experiments Repo – Check out more themes that are experimenting with blocks and FSE.
  • #fse-answers – A collection of responses to a bunch of questions about FSE.
  • #fse-outreach-experiment – Slack channel for discussing FSE.


Too Many SVGs Clogging Up Your Markup? Try `use`.

Css Tricks - Wed, 03/10/2021 - 5:58am

Recently, I had to make a web page displaying a bunch of SVG graphs for an analytics dashboard. I used a bunch of <rect>, <line> and <text> elements on each graph to visualize certain metrics.

This works and renders just fine, but results in a bloated DOM tree, where each shape is represented as separate nodes. Displaying all 50 graphs simultaneously on a web page results in 5,951 DOM elements in total, which is far too many.

We might display 50-60 different graphs at a time, all with complex DOM trees.

This is not optimal for several reasons:

  • A large DOM increases memory usage and causes longer style calculations and costly layout reflows.
  • It increases the size of the file sent to the client.
  • Lighthouse penalizes the performance and SEO scores.
  • Maintainability is a nightmare — even if we use a templating system — because there’s still a lot of cruft and repetition.
  • It doesn’t scale. Adding more graphs only exacerbates these issues.

If we take a closer look at the graphs, we can see a lot of repeated elements.

Each graph ends up sharing lots of repeated elements with the rest.

Here’s dummy markup that’s similar to the graphs we’re using:

<svg
  xmlns="http://www.w3.org/2000/svg"
  version="1.1"
  width="500"
  height="200"
  viewBox="0 0 500 200"
>
  <!-- 📊 Render our graph bars as boxes to visualise our data.
  This part is different for each graph, since each of them
  displays different sets of data. -->
  <g class="graph-data">
    <rect x="10" y="20" width="10" height="80" fill="#e74c3c" />
    <rect x="30" y="20" width="10" height="30" fill="#16a085" />
    <rect x="50" y="20" width="10" height="44" fill="#16a085" />
    <rect x="70" y="20" width="10" height="110" fill="#e74c3c" />
    <!-- Render the rest of the graph boxes ... -->
  </g>
  <!-- Render our graph footer lines and labels. -->
  <g class="graph-footer">
    <!-- Left side labels -->
    <text x="10" y="40" fill="white">400k</text>
    <text x="10" y="60" fill="white">300k</text>
    <text x="10" y="80" fill="white">200k</text>
    <!-- Footer labels -->
    <text x="10" y="190" fill="white">01</text>
    <text x="30" y="190" fill="white">11</text>
    <text x="50" y="190" fill="white">21</text>
    <!-- Footer lines -->
    <line x1="2" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
    <line x1="4" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
    <line x1="6" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
    <line x1="8" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
    <!-- Rest of the footer lines... -->
  </g>
</svg>

And here is a live demo. While the page renders fine, the graph’s footer markup is constantly redeclared and all of the DOM nodes are duplicated.

The solution? The SVG <use> element.

Luckily for us, SVG has a <use> tag that lets us declare something like our graph footer just once and then simply reference it from anywhere on the page to render it as many times as we want. From MDN:

The <use> element takes nodes from within the SVG document, and duplicates them somewhere else. The effect is the same as if the nodes were deeply cloned into a non-exposed DOM, then pasted where the use element is.

That’s exactly what we want! In a sense, <use> is like a modular component, allowing us to drop instances of the same element anywhere we’d like. But instead of props and such to populate the content, we reference which part of the SVG file we want to display. For those of you familiar with graphics programming APIs, such as WebGL, a good analogy would be Geometry Instancing. We declare the thing we want to draw once and then can keep reusing it as a reference, while being able to change the position, scale, rotation and colors of each instance.
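
To make the instancing idea concrete, here’s a tiny self-contained sketch (the #dot id and the colors are made up for illustration): one circle is declared inside <defs>, then stamped out three times at different positions, each inheriting its own fill. Modern browsers support a plain href on <use>; older ones need the xlink:href form used in the examples below.

<svg xmlns="http://www.w3.org/2000/svg" width="300" height="100">
  <defs>
    <!-- No fill on the circle itself, so each instance can inherit its own -->
    <circle id="dot" r="10" />
  </defs>
  <!-- Same geometry, three instances, positioned and colored independently -->
  <use href="#dot" x="40" y="50" fill="#16a085" />
  <use href="#dot" x="150" y="50" fill="#e74c3c" />
  <use href="#dot" x="260" y="50" fill="#9b59b6" />
</svg>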

Instead of drawing the footer lines and labels individually for each graph instance, then redeclaring them over and over with new markup, we can declare the footer once in a separate SVG and simply reference it when needed. The <use> tag allows us to reference elements from other inline SVG elements just fine.

Let’s put it to use

We’re going to move the SVG group for the graph footer — <g class="graph-footer"> — to a separate <svg> element on the page. It won’t be visible on the front end. Instead, this <svg> will be hidden with display: none and only contain a bunch of <defs>.

And what exactly is the <defs> element? MDN to the rescue once again:

The <defs> element is used to store graphical objects that will be used at a later time. Objects created inside a <defs> element are not rendered directly. To display them you have to reference them (with a <use> element for example).

Armed with that information, here’s the updated SVG code. We’re going to drop it right at the top of the page. If you’re templating, this would go in some sort of global template, like a header, so it’s included everywhere.

<!-- ⚠️ Notice how we visually hide the SVG containing the
reference graphic with display: none; This is to prevent it
from occupying empty space on our page. The graphic will work
just fine and we will be able to reference it from elsewhere
on our page -->
<svg
  xmlns="http://www.w3.org/2000/svg"
  version="1.1"
  width="500"
  height="200"
  viewBox="0 0 500 200"
  style="display: none;"
>
  <!-- By wrapping our reference graphic in a <defs> tag we will
  make sure it does not get rendered here, only when it's
  referenced -->
  <defs>
    <g id="graph-footer">
      <!-- Left side labels -->
      <text x="10" y="40" fill="white">400k</text>
      <text x="10" y="60" fill="white">300k</text>
      <text x="10" y="80" fill="white">200k</text>
      <!-- Footer labels -->
      <text x="10" y="190" fill="white">01</text>
      <text x="30" y="190" fill="white">11</text>
      <text x="50" y="190" fill="white">21</text>
      <!-- Footer lines -->
      <line x1="2" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
      <line x1="4" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
      <line x1="6" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
      <line x1="8" y1="195" x2="2" y2="200" stroke="white" stroke-width="1" />
      <!-- Rest of the footer lines... -->
    </g>
  </defs>
</svg>

Notice that we gave our group an ID of graph-footer. This is important, as it is the hook for when we reach for <use>.

So, what we do is drop another <svg> on the page that includes the graph data it needs, but then reference #graph-footer in <use> to render the footer of the graph. This way, there’s no need to redeclare the footer code for every single graph.

Look at how much cleaner the code for a graph instance is when <use> is in… umm, use.

<svg
  xmlns="http://www.w3.org/2000/svg"
  xmlns:xlink="http://www.w3.org/1999/xlink"
  version="1.1"
  width="500"
  height="200"
  viewBox="0 0 500 200"
>
  <!-- 📊 Render our graph bars as boxes to visualise our data.
  This part is different for each graph, since each of them
  displays different sets of data. -->
  <g class="graph-data">
    <rect x="10" y="20" width="10" height="80" fill="#e74c3c" />
    <rect x="30" y="20" width="10" height="30" fill="#16a085" />
    <rect x="50" y="20" width="10" height="44" fill="#16a085" />
    <rect x="70" y="20" width="10" height="110" fill="#e74c3c" />
    <!-- Render the rest of the graph boxes ... -->
  </g>
  <!-- Render our graph footer lines and labels. -->
  <use xlink:href="#graph-footer" x="0" y="0" />
</svg>

And here is an updated <use> example with no visual change:

Problem solved.

What, you want proof? Let’s compare the demo’s <use> version against the original one.

            | DOM nodes       | File size   | File size (GZIP) | Memory usage
No <use>    | 5,952           | 664 KB      | 40.8 KB          | 20 MB
With <use>  | 2,572           | 294 KB      | 40.4 KB          | 18 MB
Savings     | 56% fewer nodes | 42% smaller | 0.98% smaller    | 10% less

As you can see, the <use> element comes in handy. And, even though the performance benefits were the main focus here, just the fact that it reduces huge chunks of code from the markup makes for a much better developer experience when it comes to maintaining the thing. Double win!

More information

  • Use and Reuse Everything in SVG… Even Animations! (Mariana Beldi, Jan 28, 2020)
  • SVG `use` with External Reference, Take 2 (Chris Coyier, May 30, 2017)
  • How to Make Charts with SVG (Robin Rendle, Aug 2, 2017)
  • A Handmade SVG Bar Chart (featuring some SVG positioning gotchas) (Robin Rendle, Nov 1, 2016)
  • 13: SVG as an Icon System – The `use` Element (Chris Coyier, Oct 27, 2015)
  • 15: SVG Icon System – Where the defs go (Chris Coyier, Oct 27, 2015)


Web Frameworks: Why You Don’t Always Need Them

Css Tricks - Tue, 03/09/2021 - 1:37pm

Richard MacManus explaining Daniel Kehoe’s approach to building websites, which he calls “Stackless”:

There are three key web technologies underpinning Kehoe’s approach:

  • ES6 Modules: JavaScript ES6 can support import modules, which are also supported by browsers.
  • Module CDNs: JavaScript modules can now be downloaded from third-party content delivery networks (CDNs).
  • Custom HTML elements: Developers can now create custom HTML tags, via Web Components.

You use no build process and only features that are built into the browser, and yet that still buys you a pretty powerful setup. You can still use stuff off npm. You can still get templating. You can still build with components. You still get isolation where needed.
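
To picture those three pieces working together, here’s a minimal sketch of the approach. It isn’t from the article itself; the CDN URL and element name are just illustrative choices.

<!-- A minimal "stackless" sketch: an ES module imported straight from
     a CDN and wired up to a custom HTML element. No build step. -->
<script type="module">
  import confetti from 'https://cdn.skypack.dev/canvas-confetti';

  customElements.define('confetti-button', class extends HTMLElement {
    connectedCallback() {
      // Fire the imported module's effect whenever the element is clicked
      this.addEventListener('click', () => confetti());
    }
  });
</script>

<confetti-button>Celebrate</confetti-button>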

I’d say today you’re:

  • Giving up some DX (hot module reloading, JSX, framework doodads)
  • Gaining some DX (can jump into project and just start working)
  • Giving up some performance (no tree shaking, loads of network requests)
  • Widening your hiring pool (more people know core technologies than specific tools)

But it’s not hard to imagine a tomorrow where we give up less and gain more, making the tools we use today less necessary. I’m quite sure we’ll always still find a way to jam more tools into what we’re doing. Hammer something something nail.



Firebase Crash Course

Css Tricks - Tue, 03/09/2021 - 6:06am

This article is going to help you, dear front-end developer, understand all that is Firebase. We’re going to cover lots of details about what Firebase is, why it can be useful to you, and show examples of how. But first, I think you’ll enjoy a little story about how Firebase came to be.

Eight years ago, Andrew Lee and James Tamplin were building a real-time chat startup called Envolve. The service was a hit. It was used on sites of celebrities like Ricky Martin and Limp Bizkit. Envolve was for developers who didn’t want to—yet again—build their own chat widget. The value of the service came from the setup ease and speed of message delivery. Envolve was just a chat widget. The process was simple. Place a script tag on a page. Once the widget booted up, it would do everything for you. It was like having a database and server for chat messages already set up.

James and Andrew noticed a peculiar trend as the service became more popular. Some developers would include the widget on the page, but make it invisible. Why would someone want a chat widget with hidden messages? Well, it wasn’t just chat data being sent between devices. It was game data, high scores, app settings, to-dos, or whatever the developer needed to quickly send and synchronize. Developers would listen for new messages in the widget and use them to synchronize state in their apps. This was an easy way to create real-time experiences without the need for a back end.

This was a light bulb moment for the co-founders of Firebase. What if developers could do more than send chat messages? What if they had a service to quickly develop and scale their applications? Remove the responsibility of managing back-end infrastructure. Focus on the front end. This is how Firebase was born.

What is Firebase?

Isn’t Firebase a database? No… yes… mostly no. Firebase is a platform that provides the infrastructure for developers and tools for marketers. But it wasn’t always that way.

Seven years ago, Firebase was a single product: a real-time cloud database. Today, Firebase is a collection of 19 products. Each product is designed to empower a part of an application’s infrastructure. Firebase also gives you insight into how your app is performing, what your users are doing, and how you can make your overall app experience better. While Firebase can make up the entirety of your app’s back-end, you can use each product individually as well.

Here’s just a sampling of those 19 products:

  • Hosting: Deploy a new version of your site for every GitHub pull request.
  • Firestore: Build apps that work in real time, even while offline, with no server required.
  • Auth: Authenticate and manage users with a myriad of providers.
  • Storage: Manage user-generated content like photos, videos, and GIFs.
  • Cloud Functions: Server code driven by events (e.g. record created, user sign up, etc.).
  • Extensions: Pre-packaged functions set up with UI (e.g. Stripe payments, text translations, etc.)
  • Google Analytics: Understand user activity, organized by segments and audiences.
  • Remote Config: Key-value store with dynamic conditions that’s great for feature gating.
  • Performance Monitoring: Page load metrics and custom traces from real usage.
  • Cloud Messaging: Cross-platform push notifications.

Whew. That’s a lot, and I didn’t even list the other nine products. That’s okay. There’s no requirement to use every specific service or even more than one thing. But now it’s time to make these services a little more tangible and showcase what you can do with Firebase.

A great way to learn is by seeing something just work. The first section below will get you set up with Firebase services. The sections following that will highlight the Firebase details of a demo app to showcase how to use Firebase features. While this is a relatively thorough guide to Firebase, it’s not a step-by-step tutorial. The goal is to highlight the working bits in the embedded demos for the sake of covering more ground in this one article. If you want step-by-step Firebase tutorials, leave a comment to hype me up to write one.

A basic Firebase setup

This section is helpful if you plan on forking the demo with your own Firebase back end. You can skip this section if you’re familiar with Firebase Projects or just want to see the shiny demos.

Firebase is a cloud-based service which means you need to do some basic account setup before using its services. Firebase development is not tied to a network connection, however. It’s very much worth noting that you can (and usually should) run Firebase locally on your machine for development. This guide demonstrates building an app with CodePen, which means it needs a cloud-connected service. The goal here is to create your personal back end with Firebase and then retrieve the configuration the front end needs to connect to it.

Create a Firebase Project

Go to the Firebase Console. You’ll be asked if you want to set up Google Analytics. None of these examples use it, so you can skip it and always add it back in later if needed.

Create a web Firebase App

Next, you’ll see options for creating an “App.” Click the web option and give it a name—any name will do. Firebase Projects can hold multiple “apps.” I’m not going to get deep into this hierarchy because it’s not too important when getting started. Once the app is created you’ll be given a configuration object.

let firebaseConfig = {
  apiKey: "your-key",
  authDomain: "your-domain.firebaseapp.com",
  projectId: "your-projectId",
  storageBucket: "your-projectId.appspot.com",
  messagingSenderId: "your-senderId",
  appId: "your-appId",
  measurementId: "your-measurementId"
};

This is the configuration you’ll use on the front end to connect to Firebase. Don’t worry about any of these properties in terms of security. There’s nothing insecure about including these properties in your front-end code. You’ll learn in one of the sections below how security works in Firebase.

Now it’s time to represent this “App” you created in code. This “app” is merely a container that shares logic and authentication state across different Firebase services. Firebase provides a set of libraries that make development a lot easier. In this example I’ll be using them from a CDN, but they also work well with module bundlers like Webpack and Rollup.

// This pen adds Firebase via the "Add External Scripts" option in codepen
// https://www.gstatic.com/firebasejs/8.2.10/firebase-app.js
// https://www.gstatic.com/firebasejs/8.2.10/firebase-auth.js

// Create a Project at the Firebase Console
// (console.firebase.google.com)
let firebaseConfig = {
  apiKey: "your-key",
  authDomain: "your-domain.firebaseapp.com",
  projectId: "your-projectId",
  storageBucket: "your-projectId.appspot.com",
  messagingSenderId: "your-senderId",
  appId: "your-appId",
  measurementId: "your-measurementId"
};

// Create your Firebase app
let firebaseApp = firebase.initializeApp(firebaseConfig);

// The auth instance
console.log(firebaseApp.auth());

Great! You are so, so close to being able to talk to your very own Firebase back end. Now you need to enable the services you intend to use.

Enable authentication providers

The examples below use authentication to sign in users and secure data in the database. When you create a new Firebase Project, all of your authentication providers are turned off. This is initially inconvenient, but essential for security. You don’t want users trying to sign in with providers your back end does not support.

To turn on a provider, go to the Authentication tab in the side navigation and then click the “Sign-in method” button up top. Below, you’ll see a large list of providers such as Email and Password, Google, Facebook, GitHub, Microsoft, and Twitter. For the examples below, you will need to turn on Google and Anonymous. Google is near the top of the list and Anonymous is at the bottom. Selecting Google will ask you to provide a support email; I recommend putting in your own personal email while testing, but production apps should have a dedicated email.

If you plan on using authentication within CodePen, then you’ll also need to add CodePen as an authorized domain. You can add authorized domains towards the bottom of the “Sign-in method” tab.

An important note on this authorization: this will allow any project hosted on cdpn.io to sign into your Firebase Project. There’s not a lot of risk for short-term demo purposes. There’s no cost to using Firebase Auth except for phone number authentication. Ideally, you would not want to keep this as an authorized domain if you plan on using this app in a production capacity.

Now, on to the last step in the Firebase Console: Creating a Firestore database!

Create a Firestore database

Click on Firestore in the left navigation. From here, you’ll need to click the button to create the Firestore database. You’ll be asked if you want to start in “production mode” or “test mode.” You want “test mode” for this example. If you’re worried about security, we’ll cover that in the last section.

Now that you have the basics down, let’s get to some real-life use cases.

Authenticating users

The best parts of an app are usually behind a sign-up form. Why don’t we just let the user in as a guest so they can see for themselves? Systems often require accounts because the back end isn’t designed for guests. There’s a system requirement to have a sacred userId property to save any record belonging to a user. This is where “guest” accounts are helpful. They provide low friction for users to join while giving them a temporary userId that appeases the system. Firebase Auth has a process for doing just this.

Setting up anonymous auth

One of my favorite features about Firebase Auth is anonymous auth. There are two perks. One, you can authenticate a user without taking any input (e.g. passwords, phone number, etc.). Two, you get really good at spelling anonymous. It’s really a win-win situation.

Take this CodePen for example.


It’s a form that lets the user decide if they want to sign in with Google or as a guest. Most of the code in this example is specific to the user interface. The Firebase bits are actually pretty easy.

// Firebase-specific code
let firebaseConfig = { /* config */ };
let firebaseApp = firebase.initializeApp(firebaseConfig);
// End Firebase-specific code

let socialForm = document.querySelector('form.sign-in-social');
let guestForm = document.querySelector('form.sign-in-guest');

guestForm.addEventListener('submit', async submitEvent => {
  submitEvent.preventDefault();
  let formData = new FormData(guestForm);
  let displayName = formData.get('name');
  let photoURL = await getRandomPhotoURL();
  // Firebase-specific code
  let { user } = await firebaseApp.auth().signInAnonymously();
  await user.updateProfile({ displayName, photoURL });
  // End Firebase-specific code
});

In the 17 lines of code above (sans comments), only five of them are Firebase-specific. Four lines are needed to import and configure. Two lines needed to signInAnonymously() and user.updateProfile(). The first sign-in method makes a call to your Firebase back end and authenticates the user. The call returns a result that contains the needed properties, such as the uid. Even with guest users, you can associate data to a user in your back end with this uid. After the user is signed in, the example calls updateProfile on the user object. Even though this user is a guest, they can still have a display name and a profile photo.

Setting up Google auth

The great news is that this works the same exact way with all other permanent providers, like Email and Password, Google, Facebook, Twitter, GitHub, Microsoft, Phone Number, and so much more. Implementing the Google Sign-in only takes a few lines of code as well.

socialForm.addEventListener('submit', submitEvent => {
  submitEvent.preventDefault();
  // Firebase-specific code
  let provider = new firebase.auth.GoogleAuthProvider();
  firebaseApp.auth().signInWithRedirect(provider);
  // End Firebase-specific code
});

Each social style provider initiates a redirect-based authentication flow. That’s a fancy way of saying that the signInWithRedirect method will go to the sign-in page owned by the provider and then return back to your app with the authenticated user. In this case, the user is redirected to Google’s sign-in page, signs in, and then is returned back to your app.

Monitoring authentication state

How do you get the user back from this redirect? There are a few ways, but I’m going to go with the most common. You can detect the authentication state of any active user whether logged in or out.

firebaseApp.auth().onAuthStateChanged(user => {
  if (user != null) {
    console.log(user.toJSON());
  } else {
    console.log("No user!");
  }
});

The onAuthStateChanged method updates whenever there is a change in a user’s authentication state. It will fire initially on page load telling you if a user is logged in, logs in, or logs out. This allows you to build a UI that reacts to these state changes. It also fits well into client-side routers because it can redirect a user to the proper page with a small amount of code.

The app in this case just uses <template> tags to replace the contents of the “phone” element. If the user is logged in, the app routes to the new template.

<div class="container">
  <div class="phone">
    <!-- Phone contents replaced with template tags -->
  </div>
</div>

This provides a simple relationship. The .phone is the “root view.” Each <template> tag is a “child view.” The authentication state determines which view is shown.

firebaseApp.auth().onAuthStateChanged(user => {
  if (user != null) {
    // Show demo view
    routeTo("demo", firebaseApp, user);
  } else {
    console.log("No user!");
    // Show log in page
    routeTo("signIn", firebaseApp);
  }
});

The embedded demo isn’t “production ready,” as the requirement was just to work inside a single CodePen. The goal was to make it easy to read, with each “view” contained within a template tag. This loosely emulates what a common routing solution would look like with a framework.

One important bit to note here is that the user object is passed down to the routeTo function. Getting the logged-in state is asynchronous. I find that it’s much easier to pass the user state down to the view than to make async calls from within the view. Many framework routers have a spot for this kind of async data fetching.
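
For reference, a routeTo like the one the demo calls could be as small as this sketch. This is a guess at a minimal implementation, not the demo’s exact code.

// A guess at a minimal routeTo, not the demo's exact code. It swaps the
// contents of the .phone root view with a copy of the matching <template>.
function routeTo(templateId, firebaseApp, user) {
  let root = document.querySelector('.phone');
  let template = document.getElementById(templateId);
  root.replaceChildren(template.content.cloneNode(true));
  // The view can now read user state synchronously, e.g. user.displayName,
  // because it was resolved before routing happened.
}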

Convert guests to permanent users

The <template id="demo"> tag has a submit button for converting guests to permanent users. This is done by having the user sign in with the main provider (again, Google in this case).

let convertForm = document.querySelector('form.convert');

convertForm.addEventListener("submit", submitEvent => {
  submitEvent.preventDefault();
  let provider = new firebase.auth.GoogleAuthProvider();
  firebaseApp.auth().currentUser.linkWithRedirect(provider);
});

Using the linkWithRedirect method will kick the user out to authenticate with the social provider. When the redirect returns, the accounts will be merged. There’s nothing you have to change within the onAuthStateChanged method that controls the “child view” routing. What’s important to note is that the uid remains the same.

Handling account merging errors

A few edge cases can pop up when you merge accounts. In some cases, a user could already have created an account with Google and is now trying to merge that existing account. There are many ways you can handle this scenario depending on how you want your app to work. I’m not going to get deep into this because we have a lot more to cover, but it’s important to know how to handle these errors.

async function checkForRedirect() {
  let auth = firebaseApp.auth();
  try {
    let result = await auth.getRedirectResult();
  } catch (error) {
    switch (error.code) {
      case 'auth/credential-already-in-use': {
        // You can check for the provider(s) in use
        let providers = await auth.fetchProvidersForEmail(error.email);
        // Then decide what strategy to take. A possible strategy is
        // notifying the user and asking them to sign in as that account
      }
    }
  }
}

The code above uses the getRedirectResult method to detect if a user has directly returned from a social login redirect. If so, there’s a lot of information in that result. Most importantly here, we want to know if there was a problem. Firebase Auth will throw an error and provide relevant information on the error, such as the email and credentials, which will allow you to continue to merge the account. That’s not always what you want to do. In this case, I would probably indicate to the user that the account exists and prompt them to sign in with it. But I digress; I could talk sign-in forms for ages.

Now it’s on to building the data visualization (okay, it’s just a pie-chart) and providing it with a realtime data stream.

Setting up the data visualization

I’ll be honest. I have no idea what this app really does. I really wanted to test out building pie charts with a conic-gradient. I was excited about using CSS Custom Properties to change the values of the chart and syncing that in real time with a database, like Firestore. Let’s take a brief little detour to discuss how conic-gradient works.

A conic-gradient is a surprisingly well-supported CSS feature. Its key feature is that its color-stops are placed at the circumference of the circle.

.pie-chart {
  background-image: conic-gradient(
    purple 10%,    /* 10% of the circumference */
    magenta 0 20%, /* start at 0 go 20%, acts like 10% */
    cyan 0         /* Fill the rest */
  );
}

You can build a pie chart with some quick math: lastStopPercent + percent. I stored these values in four CSS Custom Properties in the app pen: --pie-{n}-value (replace n with a number). Those values are used in another custom property that serves like a computed function.

:root {
  --pie-1-value: 10%;
  --pie-2-value: 10%;
  --pie-3-value: 80%;

  --pie-1-computed: var(--pie-1-value);
  --pie-2-computed: 0 calc(var(--pie-1-value) + var(--pie-2-value));
  --pie-3-computed: 0 calc(var(--pie-2-value) + var(--pie-3-value));
}

Then the computed values are set in the conic-gradient.

background-image: conic-gradient(
  purple var(--pie-1-computed),
  magenta var(--pie-2-computed),
  cyan 0
);

The last computed value, --pie-3-computed, is ignored since it will always fill to the end (kinda like z for SVG paths). I think it’s still a good idea to set it in JavaScript to make the whole thing feel like it makes sense.

function setPieChartValue(percentage, index) {
  let root = document.documentElement;
  root.style.setProperty(`--pie-${index + 1}-value`, `${percentage}%`);
}

let percentages = [25, 35, 60];
percentages.forEach(setPieChartValue);

With this newfound knowledge, you can hook the pie chart up to any data set. The Firestore database has a .onSnapshot() method that streams data back from your database.

const fullPathDoc = firebaseApp.firestore().doc('/users/1234/expenses/3-2021');

fullPathDoc.onSnapshot(snap => {
  // Each item is a { label, value } object, so pass its value along
  const { items } = snap.data();
  items.forEach((item, index) => setPieChartValue(item.value, index));
});

A real time update will trigger the .onSnapshot() method whenever a value changes in your Firestore database at that document location. Now you might be asking yourself, what is a document location? How do I even store data in this database in the first place? Let’s take a dive into how Firestore works and how to model data in NoSQL databases.

How to model data in NoSQL database

Firestore is a document (NoSQL) database. It provides a hierarchical pattern of collections, which is a list of documents. Think of a document like a JSON object that has many more data types. A document can have a collection itself, which is known as a sub-collection. This is good for structuring data in a “parent-child” or hierarchical pattern.

If you haven’t followed the pre-requisites section up top, give it a read to make sure you have a Firestore database created before using any of this code yourself.

Just like Auth, you first need to import Firestore up top.

// This pen adds Firebase via the "Add External Scripts" option in codepen
// https://www.gstatic.com/firebasejs/8.2.10/firebase-app.js
// https://www.gstatic.com/firebasejs/8.2.10/firebase-auth.js
// https://www.gstatic.com/firebasejs/8.2.10/firebase-firestore.js

let firebaseConfig = { /* config */ };
let firebaseApp = firebase.initializeApp(firebaseConfig);

This tells the Firebase client library how to connect your Firestore database. Using this setup, you can create a reference to a piece of data in your database.

// Reference to collection stored at: '/expenses'
const expensesCol = firebaseApp.firestore().collection('expenses');

// Retrieve a snapshot of data (not in realtime)
// Top level await is coming! (v8.dev/features/top-level-await)
const snapshot = await expensesCol.get();
const expenses = snapshot.docs.map(d => d.data());

The code above creates a “Collection Reference” and then calls the .get() method to retrieve the data snapshot. This is not the data itself; it’s a wrapper that has a lot of helpful methods and metadata about the data. The last line “unwraps” the snapshot by iterating over the .docs array and calling the .data() function for each “Document Snapshot.” Note that this isn’t real time, but that’s coming up in just a bit!

What’s really important to grok here is the data structure (sometimes called a data model). This app stores the “expenses” of a user. Let’s say the document has the following structure.

{
  uid: '1234',
  items: [
    { label: "Food", value: 10 },
    { label: "Services", value: 24 },
    { label: "Rent", value: 30 },
    { label: "Oops", value: 38 }
  ]
}

The properties of a document are called a field. This document has an array field of items. Each item contains a label string and a value number. There’s also a string field named uid that stores the id of the user who owns the data. This structure simplifies the process of iterating over the values to create the pie chart. That part is solved, but how do we figure out how to get to a specific user’s expenses?

A traditional way of retrieving data based on a constraint is by using a query. That’s something you can do in Firestore with a .where() method.

// Let's pretend currentUser.uid === '1234'
const currentUser = firebaseApp.auth().currentUser;

// Reference to collection stored at: '/expenses'
const expensesCol = firebaseApp.firestore().collection('expenses');

// Query for the expenses belonging to uid == 1234
const userQuery = expensesCol.where('uid', '==', currentUser.uid);
const snapshot = await userQuery.get();

This structure is fine but doesn’t take full advantage of the hierarchy provided by collections and, more importantly, sub-collections. Instead, you can structure your data in a “parent-child” relationship. Structures in Firestore work a lot like URLs for a website. You can design the paths to contain route parameter-like wildcards.

/users/:uid/expenses/month-year

The path above structures the data with a top-level collection of /users, then a document assigned to a uid, followed by a sub-collection. Sub-collections can exist even if there’s no existing parent document. Each sub-collection contains a document from which you can retrieve the expenses by putting in the month as a number (e.g. 3 for March) and the year (2021).

// Let's pretend currentUser.uid === '1234'
const currentUser = firebaseApp.auth().currentUser;

// Reference to collection stored at: '/users'
const usersCol = firebaseApp.firestore().collection('users');

// Reference to document stored at: '/users/1234'
const userDoc = usersCol.doc(currentUser.uid);

// Reference to sub-collection stored at: '/users/1234/expenses'
const userExpensesCol = userDoc.collection('expenses');

// Reference to document stored at: '/users/1234/expenses/3-2021'
const marchDoc = userExpensesCol.doc('3-2021');

// Alternatively, you could express a full path:
const fullPathDoc = firebaseApp.firestore().doc('/users/1234/expenses/3-2021');

The example above shows that, with a little bit of modeling, you can retrieve a piece of data without needing to write a query. This won’t always be the case for each and every data structure. In many cases, it will be valid to have a top-level collection. It comes down to the kind of queries you want to write and how you want to secure your data. It’s good, however, to know your options when modeling the data. If you want to learn more about this topic, my colleague Todd Kerpelman has recorded a comprehensive series all about Firestore.

We’re going to go with the hierarchical approach in this app. With this data structured let’s stream in real time.

Streaming data to the visualization

The section above details how to retrieve data in a “one-time” manner with .get(). Both documents and collections also have a .onSnapshot() method that allows you to stream the data in realtime.

const fullPathDoc = firebaseApp.firestore().doc('/users/1234/expenses/3-2021');

fullPathDoc.onSnapshot(snap => {
  const { items } = snap.data();
  items.forEach((item, index) => {
    console.log(item);
  });
});

Whenever an update happens for the data stored at fullPathDoc, Firestore will stream the update to all connected clients. Using this data sync you can set all the CSS Custom Properties for the pie chart.

let db = firebaseApp.firestore();
let root = document.documentElement;
let { uid } = firebaseApp.auth().currentUser;
let marchDoc = db.doc(`users/${uid}/expenses/3-2021`);

marchDoc.onSnapshot(snap => {
  let { items } = snap.data();
  items.forEach((item, index) => {
    root.style.setProperty(`--pie-${index + 1}-value`, `${item.value}%`);
  });
});

Now for the fun part! Go update the data in the database and see the pie slices move around.
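
If you’d rather poke it from code than from the Firebase Console, a write like this sketch will trigger the realtime update. It assumes the same document path and items field from the structure above.

// A sketch of a write that triggers the realtime update above. It assumes
// the document path and "items" field from the structure described earlier.
let { uid } = firebaseApp.auth().currentUser;

firebaseApp.firestore()
  .doc(`users/${uid}/expenses/3-2021`)
  .update({
    items: [
      { label: "Food", value: 20 },
      { label: "Services", value: 30 },
      { label: "Rent", value: 25 },
      { label: "Oops", value: 25 }
    ]
  })
  .then(() => console.log("Updated! Watch the slices move."));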

Honestly, that is so much fun. I’ve been working on Firebase for nearly seven years and I never get tired of seeing the lightning-fast data updates. But the job is not yet complete! The database is insecure as it can be updated by anyone anywhere. It’s time to make sure it’s secure only to the user who owns that data.

Securing your database

If you’re new to Firebase or back-end development, you might be wondering why the database is insecure at the current moment. Firebase allows you to build apps without running your own server. This means you can connect to a database directly from the browser. So, if it’s that easy to access the database, what keeps someone from coming along and doing something malicious? The answer is Firebase Security Rules.

Security Rules are how you secure access to your Firebase services. You write a set of rules that specify how data is accessed in your back end. Firebase evaluates these rules for each request that comes into your back end and only allows the request if it passes the rule. In other words, you write a set of rules and Firebase runs them on the server to secure access to your services.

Security Rules are like a router

Security Rules are a custom language that works a lot like a router.

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // When a request comes in for "users/:userId",
    // let's allow the read or write (not very secure)
    // Don't copy this in your code plz!
    match /users/{userId} {
      allow read, write: if userId == "david";
    }
  }
}

The example above starts with a rules_version statement. Don’t worry too much about that, but it’s how you tell Firebase the version of the Security Rules language you want to use. Currently, it’s recommended to use version 2. Then the example goes on to create a service declaration. This tells Firebase what service you are trying to secure, which is Firestore in this case. Now, on to the important part: the match statements.

A match statement is the part that makes rules work like a router. The first match statement in the example establishes that you are matching for documents in the database. It’s a very general statement that says, Hey, look in the documents section. The next match statement is more specific. It looks to match a document within the users collection. The path syntax of /users/{userId} is similar to a routing syntax of /users/:userId where the :userId syntax notes a route parameter. In Security Rules, the {userId} syntax works just like a route parameter, except that it’s called a “wildcard” here. Any user within the collection can be matched with this statement. What happens when the user is matched? You use the allow statement to control the access.

The allow statement evaluates an expression and, if the result is true, it allows the operation. If it’s false, the operation is rejected. What’s useful about the allow statement is that there’s a lot of useful information to use in the containing match block. One useful piece of information is the wildcard itself. The example above uses the userId wildcard like a variable and tests to see if it matches the value of "david". Only a userId of "david" will allow the read or write operation.

Now that’s not a very useful rule, but it helps to start simple. Let’s take a moment to remember the structure of the database. Security Rules are a lot like an annotation on top of your data structure. You can make notes at the document paths that enforce your data access. This app stores data at a /users/{userId} collection and expenses at a /users/{userId}/expenses/month-year sub-collection. The security strategy is to ensure that only the authenticated user can read or write their data.

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /users/{userId}/{documents=**} {
      allow read, write: if request.auth.uid == userId;
    }
  }
}

The example above starts off the same but starts to change when matching the /users/{userId} path. There’s this weird {documents=**} syntax tacked on at the end. This is called the recursive wildcard and it’s a way of cascading a rule to sub-collections—meaning any sub-collection of /users/{userId} will have the same rules applied to it. This is great in the current use case because both /users/{userId} and /users/{userId}/expenses/month-year should follow the same rule.

Inside of that match, the allow statement has been updated with a new variable named request. Security Rules come with an entire set of variables to help you write sophisticated rules. This variable is how you evaluate if the request comes from an authenticated user. The allow statement evaluates if the authenticated user has a uid that matches the {userId} wildcard. If that statement evaluates to true, the read or write is allowed. If the user is not authenticated or does not match the {userId} wildcard, no operation is allowed. Therefore, it’s secure!

The following snippet shows two requests from an authenticated user.

// Let's pretend currentUser.uid === '1234'
const currentUser = firebaseApp.auth().currentUser;
// The authenticated user owns this sub-collection
const ownedDoc = firebaseApp.firestore().doc('/users/1234/expenses/3-2021');
// The authenticated user DOES NOT own this sub-collection
const notOwnedDoc = firebaseApp.firestore().doc('/users/abcxyz/expenses/3-2021');
try {
  const ownedSnapshot = await ownedDoc.get();
  const notOwnedSnapshot = await notOwnedDoc.get();
} catch (error) {
  // This will result in an error because the `notOwnedDoc` request
  // will fail the security rule
}

Just like that, you have secured a collection and a sub-collection. But what about the rest of the data in the database? What happens if you don’t write any rules for data at those paths? The example above will only allow access that matches the allow statement for the /users collection and its sub-collections. No other reads or writes will work, at all, for any other collections or sub-collections. In other words, if you don’t write an allow statement for a path, then it won’t allow any reads or writes. This is a great default for security because it makes you explicitly enforce your access at the appropriate paths. You can mess this up, however!

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Ahh!!! Never do this in a production app!!!
    // This will negate any rule you write below!!
    // Don't copy and paste this into your rules!!
    match /{document=**} {
      allow read, write: if true;
    }
    match /users/{userId}/{documents=**} {
      allow read, write: if request.auth.uid == userId;
    }
  }
}

The sample above adds a match block that matches the path of /{document=**}. This is a recursive wildcard path that matches every document in the entire database. The allow statement always evaluates to true because its expression is literally true. This match block creates an overlapping match statement, meaning two or more match blocks that match on the same path. This isn’t like CSS where the last rule wins. Security Rules will evaluate each matched rule and, if any of them evaluates to true, the operation is allowed. Therefore, the global recursive wildcard will negate any secure rule you have below it. The global recursive wildcard is only a strategy for opening up your database in a short-term “test mode” where nothing private or important is saved. Outside of that, I don’t recommend using it.

Write rules locally or in the console

The final topic to touch upon is where you write and save your rules. You have two options. The first is inside of the Firebase Console. Within the Firestore data viewer you’ll see an option for “Rules.” This tab shows you the rules that are active for your database. From here, you can write and even test scenarios against your rules. This approach is recommended for those who are getting started and trying to become familiar with Security Rules.

Another option is to write rules locally on your machine and use the Firebase CLI to deploy them to the console. This allows you to keep them in source control and write tests for them to make sure they continue to work as your codebase evolves. This is the recommended approach for production apps and teams.
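If you go that route, the setup is small. Here’s a hedged sketch of the local workflow (the file names are the defaults the Firebase CLI generates with firebase init, so yours may differ). The firebase.json config points at the rules file, and a single CLI command deploys it:

// firebase.json
{
  "firestore": {
    "rules": "firestore.rules"
  }
}

# From the project directory, deploy only the Firestore rules
firebase deploy --only firestore:rules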

It’s worth noting again that your Firebase configuration used to create a Firebase App is not insecure. It’s the equivalent of someone knowing the domain name of your site. Someone knowing your domain name doesn’t make your site insecure. Security Rules are Firebase’s way of providing secure access to your data and your services.

Wrapping things up

That was a lot of Firebase information, especially all the stuff about rules (security is important!). The code in this article signs in users, merges guest accounts, structures data, streams it to a visualization in real time, and makes sure it’s all secure. You can use these concepts to build so many different applications.

There are so many things I want you to know if you want to continue building with Firebase, such as the Emulator Suite. The app built in this article can run locally on your machine, which is a far easier development experience and great for testing in CI/CD environments. There are also a lot of great Firebase tools and framework libraries. Here are some links worth checking out.

If there’s one thing I hope you saw in this article, it’s that there aren’t too many lines of front-end Firebase code. A line here to sign in a user, a few lines there to get the data, but mostly code that’s specific to the app itself. The goal of Firebase is to allow you to build quickly. Remove the responsibility of managing backend infrastructure. Focus on the front end.

The post Firebase Crash Course appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

AutomateWoo Brings Automated Communications to Bookings

Css Tricks - Tue, 03/09/2021 - 5:00am

AutomateWoo is this handy extension for WooCommerce that triggers actions based on your online store’s activity. Someone abandoned their cart? Remind them by email. Someone made a purchase? Ask them to leave a review or follow up to see how they’re liking the product so far.

This sort of automated communication is gold. Automatically reaching out to customers based on their own activity keeps them engaged and encourages more activity, hopefully the kind that makes you more money.

AutomateWoo now integrates with WooCommerce Bookings and it includes a new trigger: booking status changes. So now, when a customer’s appointment changes—say from “pending” to “confirmed”—AutomateWoo can perform an action, like sending an email confirming the customer’s booked appointment. Or asking for feedback after the appointment. Or reminding a customer to book a follow-up appointment.

Or really anything else you can think of based on the status of a booking. It’s like having your own version of Zapier right in WordPress, but without having to manage another platform.

Once a customer’s booking status for a river rafting adventure is confirmed, they’ll get an email not only confirming the appointment but with any additional information they might need to know.

OK, real-life situation.

Sometime in the middle of last year, I was working with a client that takes online appointments, or bookings. The user finds an open spot on a calendar and books that time to come in for service. A lot like making a hair appointment.

Well, what happens when a bunch of appointments need to be canceled? I don’t need to tell you what a big deal that was this time last year. This particular client had the unfortunate task of reaching out to each and every customer and updating the status of over 200 appointments.

That was awful. Fortunately, we found a way to bulk update the appointments. Then we sent a mass email to everyone with a canceled appointment. Not the most elegant solution, but it got the job done. I’ll tell you what, though, it would have been a heckuva lot less work and the communications would have been much more polished if we had this AutomateWoo + WooCommerce Bookings combo. Simply create a trigger for a canceled status change, write the email, and bulk update the bookings statuses.

AutomateWoo and WooCommerce Bookings are both paid extensions for WooCommerce. The licenses will run you $349, which includes a year of updates and customer support. The value you get out of it really depends on your overall budget and how much activity happens on your site. I can tell you that would’ve been a stellar deal when we were trying to resolve 200+ canceled appointments. And if it leads to additional bookings, upsells, and fewer cancelations (because, hey, research has shown that 75% of email revenue is generated by triggered campaigns), then it could very well pay for itself.

You can give WooCommerce Bookings a front-end test drive with a live demo. Both extensions also come with a 30-day money-back guarantee, giving you a good amount of time to try them out.

The post AutomateWoo Brings Automated Communications to Bookings appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Web Components Are Easier Than You Think

Css Tricks - Mon, 03/08/2021 - 6:06am

When I’d go to a conference (when we were able to do such things) and see someone do a presentation on web components, I always thought it was pretty nifty (yes, apparently, I’m from 1950), but it always seemed complicated and excessive. A thousand lines of JavaScript to save four lines of HTML. The speaker would inevitably either gloss over the oodles of JavaScript to get it working or they’d go into excruciating detail and my eyes would glaze over as I thought about whether my per diem covered snacks.

But in a recent reference project to make learning HTML easier (by adding zombies and silly jokes, of course), the completist in me decided I had to cover every HTML element in the spec. Beyond those conference presentations, this was my first introduction to the <slot> and <template> elements. But as I tried to write something accurate and engaging, I was forced to delve a bit deeper.

And I’ve learned something in the process: web components are a lot easier than I remember.

Either web components have come a long way since the last time I caught myself daydreaming about snacks at a conference, or I let my initial fear of them get in the way of truly knowing them — probably both.

I’m here to tell you that you—yes, you—can create a web component. Let’s leave our distractions, fears, and even our snacks at the door for a moment and do this together.

Let’s start with the <template>

A <template> is an HTML element that allows us to create, well, a template—the HTML structure for the web component. A template doesn’t have to be a huge chunk of code. It can be as simple as:

<template>
  <p>The Zombies are coming!</p>
</template>

The <template> element is important because it holds things together. It’s like the foundation of a building; it’s the base from which everything else is built. Let’s use this small bit of HTML as the template for an <apocalyptic-warning> web component—you know, as a warning when the zombie apocalypse is upon us.

Then there’s the <slot>

<slot> is merely another HTML element just like <template>. But in this case, <slot> customizes what the <template> renders on the page.

<template>
  <p>The <slot>Zombies</slot> are coming!</p>
</template>

Here, we’ve slotted (is that even a word?) the word “Zombies” in the templated markup. If we don’t do anything with the slot, it defaults to the content between the tags. That would be “Zombies” in this example.

Using <slot> is a lot like having a placeholder. We can use the placeholder as is, or define something else to go in there instead. We do that with the name attribute.

<template>
  <p>The <slot name="whats-coming">Zombies</slot> are coming!</p>
</template>

The name attribute tells the web component which content goes where in the template. Right now, we’ve got a slot called whats-coming. We’re assuming zombies are coming first in the apocalypse, but the <slot> gives us some flexibility to slot something else in, like if it ends up being a robot, werewolf, or even a web component apocalypse.

Using the component

We’re technically done “writing” the component and can drop it in anywhere we want to use it.

<apocalyptic-warning>
  <span slot="whats-coming">Halitosis Laden Undead Minions</span>
</apocalyptic-warning>

<template>
  <p>The <slot name="whats-coming">Zombies</slot> are coming!</p>
</template>

See what we did there? We put the <apocalyptic-warning> component on the page just like any other <div> or whatever. But we also dropped a <span> in there that references the name attribute of our <slot>. And what’s between that <span> is what we want to swap in for “Zombies” when the component renders.

Here’s a little gotcha worth calling out: custom element names must have a hyphen in them. It’s just one of those things you’ve gotta know going into things. The spec prescribes that to prevent conflicts in the event that HTML releases a new element with the same name.
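Try to skip the hyphen and the browser will complain at define time. Here’s a quick hedged sketch (the invalid name is deliberately made up):

// Throws a DOMException (SyntaxError): not a valid custom element name
customElements.define("apocalypticwarning", class extends HTMLElement {});

// Works, because the name contains a hyphen
customElements.define("apocalyptic-warning", class extends HTMLElement {});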

Still with me so far? Not too scary, right? Well, minus the zombies. We still have a little work to do to make the <slot> swap possible, and that’s where we start to get into JavaScript.

Registering the component

As I said, you do need some JavaScript to make this all work, but it’s not the super complex, thousand-lined, in-depth code I always thought. Hopefully I can convince you as well.

You need a constructor function that registers the custom element. Otherwise, our component is like the undead: it’s there but not fully alive.

Here’s the constructor we’ll use:

// Defines the custom element with our appropriate name, <apocalyptic-warning>
customElements.define("apocalyptic-warning",
  // Ensures that we have all the default properties and methods of a built-in HTML element
  class extends HTMLElement {
    // Called anytime a new custom element is created
    constructor() {
      // Calls the parent constructor, i.e. the constructor for HTMLElement,
      // so that everything is set up exactly as we would for creating a built-in HTML element
      super();
      // Grabs the <template> and stores it in `warning`
      let warning = document.getElementById("warningtemplate");
      // Stores the contents of the template in `mywarning`
      let mywarning = warning.content;
      const shadowRoot = this.attachShadow({mode: "open"}).appendChild(mywarning.cloneNode(true));
    }
  });

I left detailed comments in there that explain things line by line. Except the last line:

const shadowRoot = this.attachShadow({mode: "open"}).appendChild(mywarning.cloneNode(true));

We’re doing a lot in here. First, we’re taking our custom element (this) and creating a clandestine operative—I mean, shadow DOM. mode: open simply means that JavaScript from outside the shadow root can access and manipulate the elements within the shadow DOM, sort of like setting up back door access to the component.

From there, the shadow DOM has been created and we append a node to it. That node will be a deep copy of the template, including all elements and text of the template. With the template attached to the shadow DOM of the custom element, the <slot> and slot attribute take over for matching up content with where it should go.
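And since we attached the shadow root with mode: "open", scripts outside the component really can use that back door. A tiny hedged sketch, reusing the element from our example:

// Possible only because the shadow root was attached with { mode: "open" }
const warning = document.querySelector("apocalyptic-warning");
console.log(warning.shadowRoot.querySelector("p").textContent);
// Logs "The Zombies are coming!" (the slot's fallback text included)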

Check this out. Now we can plop two instances of the same component, rendering different content simply by changing one element.

CodePen Embed Fallback

Styling the component

You may have noticed styling in that demo. As you might expect, we absolutely have the ability to style our component with CSS. In fact, we can include a <style> element right in the <template>.

<template id="warningtemplate"> <style> p { background-color: pink; padding: 0.5em; border: 1px solid red; } </style> <p>The <slot name="whats-coming">Zombies</slot> are coming!</p> </template>

This way, the styles are scoped directly to the component and nothing leaks out to other elements on the same page, thanks to the shadow DOM.

Now in my head, I assumed that a custom element was taking a copy of the template, inserting the content you’ve added, and then injecting that into the page using the shadow DOM. While that’s what it looks like on the front end, that’s not how it actually works in the DOM. The content in a custom element stays where it is and the shadow DOM is sort of laid on top like an overlay.

And since the content is technically outside the template, any descendant selectors or classes we use in the template’s <style> element will have no effect on the slotted content. This doesn’t allow full encapsulation the way I had hoped or expected. But since a custom element is an element, we can use it as an element selector in any ol’ CSS file, including the main stylesheet used on a page. And although the inserted material isn’t technically in the template, it is in the custom element, and descendant selectors from the CSS will work.

apocalyptic-warning span {
  color: blue;
}

CodePen Embed Fallback

But beware! Styles in the main CSS file cannot access elements in the <template> or shadow DOM.
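There is one hedged escape hatch worth mentioning: the ::part() pseudo-element. If the template marks a shadow DOM node with a part attribute (you’ll spot part="zname" in the next example), the main stylesheet can style that node directly. A quick sketch:

/* In the main CSS file: styles the shadow DOM node exposing part="zname" */
zombie-profile::part(zname) {
  color: darkgreen;
}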

Let’s put all of this together

Let’s look at an example, say a profile for a zombie dating service, like one you might need after the apocalypse. In order to style both the default content and any inserted content, we need both a <style> element in the <template> and styling in a CSS file.

The JavaScript code is exactly the same except now we’re working with a different component name, <zombie-profile>.

customElements.define("zombie-profile", class extends HTMLElement { constructor() { super(); let profile = document.getElementById("zprofiletemplate"); let myprofile = profile.content; const shadowRoot = this.attachShadow({mode: "open"}).appendChild(myprofile.cloneNode(true)); } } );

Here’s the HTML template, including the encapsulated CSS:

<template id="zprofiletemplate"> <style> img { width: 100%; max-width: 300px; height: auto; margin: 0 1em 0 0; } h2 { font-size: 3em; margin: 0 0 0.25em 0; line-height: 0.8; } h3 { margin: 0.5em 0 0 0; font-weight: normal; } .age, .infection-date { display: block; } span { line-height: 1.4; } .label { color: #555; } li, ul { display: inline; padding: 0; } li::after { content: ', '; } li:last-child::after { content: ''; } li:last-child::before { content: ' and '; } </style> <div class="profilepic"> <slot name="profile-image"><img src="https://assets.codepen.io/1804713/default.png" alt=""></slot> </div> <div class="info"> <h2><slot name="zombie-name" part="zname">Zombie Bob</slot></h2> <span class="age"><span class="label">Age:</span> <slot name="z-age">37</slot></span> <span class="infection-date"><span class="label">Infection Date:</span> <slot name="idate">September 12, 2025</slot></span> <div class="interests"> <span class="label">Interests: </span> <slot name="z-interests"> <ul> <li>Long Walks on Beach</li> <li>brains</li> <li>defeating humanity</li> </ul> </slot> </div> <span class="z-statement"><span class="label">Apocalyptic Statement: </span> <slot name="statement">Moooooooan!</slot></span> </div> </template>

Here’s the CSS for our <zombie-profile> element and its descendants from our main CSS file. Notice the duplication in there to ensure both the replaced elements and elements from the template are styled the same.

zombie-profile {
  width: calc(50% - 1em);
  border: 1px solid red;
  padding: 1em;
  margin-bottom: 2em;
  display: grid;
  grid-template-columns: 2fr 4fr;
  column-gap: 20px;
}
zombie-profile img {
  width: 100%;
  max-width: 300px;
  height: auto;
  margin: 0 1em 0 0;
}
zombie-profile li, zombie-profile ul {
  display: inline;
  padding: 0;
}
zombie-profile li::after {
  content: ', ';
}
zombie-profile li:last-child::after {
  content: '';
}
zombie-profile li:last-child::before {
  content: ' and ';
}

All together now!

CodePen Embed Fallback

While there are still a few gotchas and other nuances, I hope you feel more empowered to work with the web components now than you were a few minutes ago. Dip your toes in like we have here. Maybe sprinkle a custom component into your work here and there to get a feel for it and where it makes sense.

That’s really it. Now what are you more scared of, web components or the zombie apocalypse? I might have said web components in the not-so-distant past, but now I’m proud to say that zombies are the only thing that worry me (well, that and whether my per diem will cover snacks…)

The post Web Components Are Easier Than You Think appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

CSS-Tricks Chronicle XXXIX

Css Tricks - Fri, 03/05/2021 - 2:05pm

I’ve been lucky enough to be a guest on some podcasts and at some events, so I thought I’d do a quick little round-up here! These Chronicle posts are just that: an opportunity to share some off-site stuff that I’ve been up to. This time, it’s all different podcasts.

Web Rush

Episode 122: Modern Web with Chris Coyier

Chris Coyier talks with John, Ward, Dan, and Craig about the modern web. What technology should we be paying attention to? What tech has Chris used that was worth getting into? Flexbox or CSS Grid? Is there anything better than HTML coming? And what tools should developers be aware of?

Front-end Development South Africa

Live Q&A session with Chris Coyier

Audience

Evolving as podcasting grows with Chris Coyier of ShopTalk Show

Craig talks to Chris about what it’s like being an online creator (podcaster, blogger, software and web designer, etc.). Chris talks about the lessons he has learned and what it’s like to have a weekly podcast for ten years. They also talk about podcasting trends in terms of marketing, topics, and the future outlook of the industry.

Cloudinary Devjams

DevJams Episode #2: Fetching Local Production Images With Cloudinary for an Eleventy Site

Watch our hosts Sam Brace and Becky Peltz, as well as our special guest host Eric Portis, interview Chris Coyier about his recent development project. With Eleventy, Netlify, Puppeteer and Cloudinary’s fetch capabilities, he was able to create a microsite for his famous CSS-Tricks.com site that showcases various coding fonts you can use. Find how he did it by watching this episode!

The post CSS-Tricks Chronicle XXXIX appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

A Super Flexible CSS Carousel, Enhanced With JavaScript Navigation

Css Tricks - Fri, 03/05/2021 - 5:47am

Not sure about you, but I often wonder how to build a carousel component in such a way that you can easily dump a bunch of items into the component and get a nice working carousel — one that allows you to scroll smoothly, navigate with the dynamic buttons, and is responsive. If that is the thing you’d like to build, follow along and we’ll work on it together!

This is what we’re aiming for:

CodePen Embed Fallback

We’re going to be working with quite a bit of JavaScript, React and the DOM API from here on out.

First, let’s spin up a fresh project

Let’s start by bootstrapping a simple React application with styled-components tossed in for styling:

npx create-react-app react-easy-carousel
cd react-easy-carousel
yarn add styled-components
yarn install
yarn start

Styling isn’t really the crux of what we’re doing, so I have prepared a bunch of predefined components for us to use right out of the box:

// App.styled.js
import styled from 'styled-components'

export const H1 = styled('h1')`
  text-align: center;
  margin: 0;
  padding-bottom: 10rem;
`

export const Relative = styled('div')`
  position: relative;
`

export const Flex = styled('div')`
  display: flex;
`

export const HorizontalCenter = styled(Flex)`
  justify-content: center;
  margin-left: auto;
  margin-right: auto;
  max-width: 25rem;
`

export const Container = styled('div')`
  height: 100vh;
  width: 100%;
  background: #ecf0f1;
`

export const Item = styled('div')`
  color: white;
  font-size: 2rem;
  text-transform: capitalize;
  width: ${({size}) => `${size}rem`};
  height: ${({size}) => `${size}rem`};
  display: flex;
  align-items: center;
  justify-content: center;
`

Now let’s go to our App file, remove all unnecessary code, and build a basic structure for our carousel:

// App.js
import {Carousel} from './Carousel'

function App() {
  return (
    <Container>
      <H1>Easy Carousel</H1>
      <HorizontalCenter>
        <Carousel>
          {/* Put your items here */}
        </Carousel>
      </HorizontalCenter>
    </Container>
  )
}

export default App

I believe this structure is pretty straightforward. It’s the basic layout that centers the carousel directly in the middle of the page.

Now, let’s make the carousel component

Let’s talk about the structure of our component. We’re gonna need a main <div> container to act as our base. Inside that, we’re going to take advantage of native scrolling and put another block that serves as the scrollable area.

// Carousel.js
<CarouserContainer>
  <CarouserContainerInner>
    {children}
  </CarouserContainerInner>
</CarouserContainer>

You can specify width and height on the inner container, but I’d avoid strict dimensions in favor of some sized component on top of it to keep things flexible.

Scrolling, the CSS way

We want that scroll to be smooth so it’s clear there’s a transition between slides, so we’ll reach for CSS scroll snapping, set the scroll horizontally along the x-axis, and hide the actual scroll bar while we’re at it.

export const CarouserContainerInner = styled(Flex)`
  overflow-x: scroll;
  scroll-snap-type: x mandatory;
  -ms-overflow-style: none;
  scrollbar-width: none;

  &::-webkit-scrollbar {
    display: none;
  }

  & > * {
    scroll-snap-align: center;
  }
`

Wondering what’s up with scroll-snap-type and scroll-snap-align? That’s native CSS that allows us to control the scroll behavior in such a way that an element “snaps” into place during a scroll. So, in this case, we’ve set the snap type in the horizontal (x) direction and told the browser it has to stop at a snap position that is in the center of the element.

In other words: scroll to the next slide and make sure that slide is centered into view. Let’s break that down a bit to see how it fits into the bigger picture.

Our outer <div> is a flexible container that puts its children (the carousel slides) in a horizontal row. Those children will easily overflow the width of the container, so we’ve made it so we can scroll horizontally inside the container. That’s where scroll-snap-type comes into play. From Andy Adams in the CSS-Tricks Almanac:

Scroll snapping refers to “locking” the position of the viewport to specific elements on the page as the window (or a scrollable container) is scrolled. Think of it like putting a magnet on top of an element that sticks to the top of the viewport and forces the page to stop scrolling right there.

Couldn’t say it better myself. Play around with it in Andy’s demo on CodePen.

But, we still need another CSS property set on the container’s children (again, the carousel slides) that tells the browser where the scroll should stop. Andy likens this to a magnet, so let’s put that magnet directly on the center of our slides. That way, the scroll “locks” on the center of a slide, allowing to be full in view in the carousel container.

That property? scroll-snap-align.

& > * { scroll-snap-align: center; }

We can already test it out by creating some random array of items:

const colors = [
  '#f1c40f',
  '#f39c12',
  '#e74c3c',
  '#16a085',
  '#2980b9',
  '#8e44ad',
  '#2c3e50',
  '#95a5a6',
]

const colorsArray = colors.map((color) => (
  <Item
    size={20}
    style={{background: color, borderRadius: '20px', opacity: 0.9}}
    key={color}
  >
    {color}
  </Item>
))

And dumping it right into our carousel:

// App.js
<Container>
  <H1>Easy Carousel</H1>
  <HorizontalCenter>
    <Carousel>{colorsArray}</Carousel>
  </HorizontalCenter>
</Container>

CodePen Embed Fallback

Let’s also add some spacing to our items so they won’t look too squeezed. You may also notice that we have unnecessary spacing on the left of the first item. We can add a negative margin to offset it.

export const CarouserContainerInner = styled(Flex)`
  overflow-x: scroll;
  scroll-snap-type: x mandatory;
  -ms-overflow-style: none;
  scrollbar-width: none;
  margin-left: -1rem;

  &::-webkit-scrollbar {
    display: none;
  }

  & > * {
    scroll-snap-align: center;
    margin-left: 1rem;
  }
`

Take a closer look at the cursor position while scrolling. It’s always centered. That’s the scroll-snap-align property at work!

And that’s it! We’ve made an awesome carousel where we can add any number of items, and it just plain works. Notice, too, that we did all of this in plain CSS, even if it was built as a React app. We didn’t really need React or styled-components to make this work.

CodePen Embed Fallback

Bonus: Navigation

We could end the article here and move on, but I want to take this a bit further. What I like about what we have so far is that it’s flexible and does the basic job of scrolling through a set of items.

But you may have noticed a key enhancement in the demo at the start of this article: buttons that navigate through slides. That’s where we’re going to put the CSS down and put our JavaScript hats on to make this work.

First, let’s define buttons on the left and right of the carousel container that, when clicked, scrolls to the previous or next slide, respectively. I’m using simple SVG arrows as components:

// ArrowLeft
export const ArrowLeft = ({size = 30, color = '#000000'}) => (
  <svg
    xmlns="http://www.w3.org/2000/svg"
    width={size}
    height={size}
    viewBox="0 0 24 24"
    fill="none"
    stroke={color}
    strokeWidth="2"
    strokeLinecap="round"
    strokeLinejoin="round"
  >
    <path d="M19 12H6M12 5l-7 7 7 7" />
  </svg>
)

// ArrowRight
export const ArrowRight = ({size = 30, color = '#000000'}) => (
  <svg
    xmlns="http://www.w3.org/2000/svg"
    width={size}
    height={size}
    viewBox="0 0 24 24"
    fill="none"
    stroke={color}
    strokeWidth="2"
    strokeLinecap="round"
    strokeLinejoin="round"
  >
    <path d="M5 12h13M12 5l7 7-7 7" />
  </svg>
)

Now let’s position them on both sides of our carousel:

// Carousel.js
<LeftCarouselButton>
  <ArrowLeft />
</LeftCarouselButton>
<RightCarouselButton>
  <ArrowRight />
</RightCarouselButton>

We’ll sprinkle in some styling that adds absolute positioning to the arrows so that the left arrow sits on the left edge of the carousel and the right arrow sits on the right edge. A few other things are thrown in to style the buttons themselves to look like buttons. Also, we’re playing with the carousel container’s :hover state so that the buttons only show when the user’s cursor hovers the container.

// Carousel.styled.js

// Position and style the buttons
export const CarouselButton = styled('button')`
  position: absolute;
  cursor: pointer;
  top: 50%;
  z-index: 1;
  transition: transform 0.1s ease-in-out;
  background: white;
  border-radius: 15px;
  border: none;
  padding: 0.5rem;
`

// Display buttons on hover
export const LeftCarouselButton = styled(CarouselButton)`
  left: 0;
  transform: translate(-100%, -50%);

  ${CarouserContainer}:hover & {
    transform: translate(0%, -50%);
  }
`

// Position the buttons to their respective sides
export const RightCarouselButton = styled(CarouselButton)`
  right: 0;
  transform: translate(100%, -50%);

  ${CarouserContainer}:hover & {
    transform: translate(0%, -50%);
  }
`

This is cool. Now we have buttons, but only when the user interacts with the carousel.

But do we always want to see both buttons? It’d be great if we hid the left arrow when we’re at the first slide, and hid the right arrow when we’re at the last slide. It’s not like the user can navigate past those slides, so why create the illusion that they can?

I suggest creating a hook that’s responsible for all the scrolling functionality we need, as we’re gonna have a bunch of it. Plus, it’s just good practice to separate functional concerns from our visual component.

First, we need to get the reference to our component so we can get the position of the slides. Let’s do that with ref:

// Carousel.js
const ref = useRef()
const position = usePosition(ref)

<CarouserContainer>
  <CarouserContainerInner ref={ref}>
    {children}
  </CarouserContainerInner>
  <LeftCarouselButton>
    <ArrowLeft />
  </LeftCarouselButton>
  <RightCarouselButton>
    <ArrowRight />
  </RightCarouselButton>
</CarouserContainer>

The ref property is on <CarouserContainerInner> as it contains all our items and will allow us to do proper calculations.

Now let’s implement the hook itself. We have two buttons. To make them work, we need to keep track of the next and previous items accordingly. The best way to do so is to have a state for each one:

// usePosition.js
export function usePosition(ref) {
  const [prevElement, setPrevElement] = useState(null)
  const [nextElement, setNextElement] = useState(null)
}

The next step is to create a function that detects the position of the elements and updates the buttons to either hide or display depending on that position.

Let’s call it the update function. We’re gonna put it into React’s useEffect hook because, initially, we want to run this function when the DOM mounts the first time. We need access to our scrollable container which is available to use under the ref.current property. We’ll put it into a separate variable called element and start by getting the element’s position in the DOM.

We’re gonna use getBoundingClientRect() here as well. This is a very helpful function because it gives us an element’s position in the viewport (i.e. window) and allows us to proceed with our calculations.

// usePosition.js
useEffect(() => {
  // Our scrollable container
  const element = ref.current

  const update = () => {
    const rect = element.getBoundingClientRect()
    // ...
  }
}, [ref])

We’ve done a heck of a lot of positioning so far, and getBoundingClientRect() can help us understand both the size of the element — rect in this case — and its position relative to the viewport.

Credit: Mozilla Developer Network
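To make those values concrete, here’s a small annotated sketch using the real DOMRect properties we rely on:

const rect = element.getBoundingClientRect()
// rect.left:  x-coordinate of the container's left edge in the viewport
// rect.right: x-coordinate of its right edge
// rect.width: the rendered width (borders and padding included)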

The following step is a bit tricky as it requires a bit of math to calculate which elements are visible inside the container.

First, we need to filter each item by getting its position in the viewport and checking it against the container boundaries. Then, we check that the child’s left boundary sits at or inside the container’s left boundary, and that its right boundary sits at or inside the container’s right boundary.

If both of these conditions are met, our child is fully visible inside the container. Let’s convert it into code step-by-step:

  1. We need to loop and filter through all container children. We can use the children property available on each node. So, let’s convert it into an array and filter:

const visibleElements = Array.from(element.children).filter((child) => {}

  2. After that, we need to get the position of each element by using that handy getBoundingClientRect() function once again:

const childRect = child.getBoundingClientRect()

  3. Now let’s bring our drawing to life:

rect.left <= childRect.left && rect.right >= childRect.right

Pulling that together, this is our script:

// usePosition.js
const visibleElements = Array.from(element.children).filter((child) => {
  const childRect = child.getBoundingClientRect()
  return rect.left <= childRect.left && rect.right >= childRect.right
})

Once we’ve filtered out items, we need to check whether an item is the first or the last one so we know to hide the left or right button accordingly. We’ll create two helper functions that check that condition using previousElementSibling and nextElementSibling. This way, we can see if there is a sibling in the list and whether it’s an HTMLElement instance and, if it is, return it.

To find the previous element, we take the first item from our list of visible items and check whether it has a sibling before it. We do the same thing for the last element in the list, except we check whether it has a sibling after it:

// usePosition.js
function getPrevElement(list) {
  const sibling = list[0].previousElementSibling

  if (sibling instanceof HTMLElement) {
    return sibling
  }

  return null
}

function getNextElement(list) {
  const sibling = list[list.length - 1].nextElementSibling

  if (sibling instanceof HTMLElement) {
    return sibling
  }

  return null
}

Once we have those functions, we can finally check if there are any visible elements in the list, and then set our left and right buttons into the state:

// usePosition.js
if (visibleElements.length > 0) {
  setPrevElement(getPrevElement(visibleElements))
  setNextElement(getNextElement(visibleElements))
}

Now we need to call our function. Moreover, we want to call this function each time we scroll through the list — that’s when we want to detect the position of the element.

// usePosition.js
export function usePosition(ref) {
  const [prevElement, setPrevElement] = useState(null)
  const [nextElement, setNextElement] = useState(null)

  useEffect(() => {
    const element = ref.current

    const update = () => {
      const rect = element.getBoundingClientRect()

      const visibleElements = Array.from(element.children).filter((child) => {
        const childRect = child.getBoundingClientRect()
        return rect.left <= childRect.left && rect.right >= childRect.right
      })

      if (visibleElements.length > 0) {
        setPrevElement(getPrevElement(visibleElements))
        setNextElement(getNextElement(visibleElements))
      }
    }

    update()

    element.addEventListener('scroll', update, {passive: true})

    return () => {
      element.removeEventListener('scroll', update, {passive: true})
    }
  }, [ref])

Here’s an explanation for why we’re passing {passive: true} in there. In short, a passive listener promises the browser that the handler won’t call preventDefault(), so scrolling doesn’t have to wait for the handler to finish.

Now let’s return those properties from the hook and update our buttons accordingly:

// usePosition.js
return {
  hasItemsOnLeft: prevElement !== null,
  hasItemsOnRight: nextElement !== null,
}

// Carousel.js
<LeftCarouselButton hasItemsOnLeft={hasItemsOnLeft}>
  <ArrowLeft />
</LeftCarouselButton>
<RightCarouselButton hasItemsOnRight={hasItemsOnRight}>
  <ArrowRight />
</RightCarouselButton>

// Carousel.styled.js
export const LeftCarouselButton = styled(CarouselButton)`
  left: 0;
  transform: translate(-100%, -50%);

  ${CarouserContainer}:hover & {
    transform: translate(0%, -50%);
  }

  visibility: ${({hasItemsOnLeft}) => (hasItemsOnLeft ? 'visible' : 'hidden')};
`

export const RightCarouselButton = styled(CarouselButton)`
  right: 0;
  transform: translate(100%, -50%);

  ${CarouserContainer}:hover & {
    transform: translate(0%, -50%);
  }

  visibility: ${({hasItemsOnRight}) => (hasItemsOnRight ? 'visible' : 'hidden')};
`

So far, so good. As you’ll see, our arrows show up dynamically depending on our scroll location in the list of items.

We’ve got just one final step to go to make the buttons functional. We need to create a function that’s gonna accept the next or previous element it needs to scroll to.

const scrollRight = useCallback(() => scrollToElement(nextElement), [
  scrollToElement,
  nextElement,
])

const scrollLeft = useCallback(() => scrollToElement(prevElement), [
  scrollToElement,
  prevElement,
])

Don’t forget to wrap functions into the useCallback hook in order to avoid unnecessary re-renders.

Next, we’ll implement the scrollToElement function. The idea is pretty simple. We take the left boundary of the previous or next element (depending on the button that’s clicked), add half of the element’s width to find its center, then offset that value by half of the container width. That gives us the exact scroll distance to center the next or previous element.

Here’s that in code:

// usePosition.js
const scrollToElement = useCallback(
  (element) => {
    const currentNode = ref.current

    if (!currentNode || !element) return

    let newScrollPosition

    newScrollPosition =
      element.offsetLeft +
      element.getBoundingClientRect().width / 2 -
      currentNode.getBoundingClientRect().width / 2

    currentNode.scroll({
      left: newScrollPosition,
      behavior: 'smooth',
    })
  },
  [ref],
)

scroll actually does the scrolling for us while passing the precise distance we need to scroll to. Now let’s attach those functions to our buttons.

// Carousel.js
const {
  hasItemsOnLeft,
  hasItemsOnRight,
  scrollRight,
  scrollLeft,
} = usePosition(ref)

<LeftCarouselButton hasItemsOnLeft={hasItemsOnLeft} onClick={scrollLeft}>
  <ArrowLeft />
</LeftCarouselButton>
<RightCarouselButton hasItemsOnRight={hasItemsOnRight} onClick={scrollRight}>
  <ArrowRight />
</RightCarouselButton>

Pretty nice!

Like a good citizen, we ought to clean up our code a bit. For one, we can be more in control of the passed items with a little trick that automatically sends the styles needed for each child. The Children API is pretty rad and worth checking out.

<CarouserContainerInner ref={ref}>
  {React.Children.map(children, (child, index) => (
    <CarouselItem key={index}>{child}</CarouselItem>
  ))}
</CarouserContainerInner>

Now we just need to update our styled components. flex: 0 0 auto preserves the original sizes of the containers, so it’s totally optional.

export const CarouselItem = styled('div')`
  flex: 0 0 auto;
  // Spacing between items
  margin-left: 1rem;
`

export const CarouserContainerInner = styled(Flex)`
  overflow-x: scroll;
  scroll-snap-type: x mandatory;
  -ms-overflow-style: none;
  scrollbar-width: none;
  margin-left: -1rem; // Offset for children spacing

  &::-webkit-scrollbar {
    display: none;
  }

  ${CarouselItem} {
    scroll-snap-align: center;
  }
`

CodePen Embed Fallback

Accessibility

We care about our users, so we need to make our component not only functional, but also accessible so folks feel comfortable using it. Here are a couple things I’d suggest:

  • Adding role='region' to highlight the importance of this area.
  • Adding an aria-label as an identifier.
  • Adding labels to our buttons so screen readers could easily identify them as “Previous” and “Next” and inform the user which direction a button goes.
// Carousel.js
<CarouserContainer role="region" aria-label="Colors carousel">
  <CarouserContainerInner ref={ref}>
    {React.Children.map(children, (child, index) => (
      <CarouselItem key={index}>{child}</CarouselItem>
    ))}
  </CarouserContainerInner>
  <LeftCarouselButton
    hasItemsOnLeft={hasItemsOnLeft}
    onClick={scrollLeft}
    aria-label="Previous slide"
  >
    <ArrowLeft />
  </LeftCarouselButton>
  <RightCarouselButton
    hasItemsOnRight={hasItemsOnRight}
    onClick={scrollRight}
    aria-label="Next slide"
  >
    <ArrowRight />
  </RightCarouselButton>
</CarouserContainer>

More than one carousel? No problem!

Feel free to add additional carousels to see how it behaves with the different size items. For example, let’s drop in a second carousel that’s just an array of numbers.

const numbersArray = Array.from(Array(10).keys()).map((number) => (
  <Item size={5} style={{color: 'black'}} key={number}>
    {number}
  </Item>
))

function App() {
  return (
    <Container>
      <H1>Easy Carousel</H1>
      <HorizontalCenter>
        <Carousel>{colorsArray}</Carousel>
      </HorizontalCenter>
      <HorizontalCenter>
        <Carousel>{numbersArray}</Carousel>
      </HorizontalCenter>
    </Container>
  )
}

And voilà, magic! Dump in a bunch of items and you’ve got a fully workable carousel right out of the box.

CodePen Embed Fallback

Feel free to modify this and use it in your projects. I sincerely hope that this is a good starting point to use as-is, or enhance it even further for a more complex carousel. Questions? Ideas? Contact me on Twitter, GitHub, or the comments below!

The post A Super Flexible CSS Carousel, Enhanced With JavaScript Navigation appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Through the pipeline: An exploration of front-end bundlers

Css Tricks - Thu, 03/04/2021 - 1:56pm

I really like the kind of tech writing where a fellow developer lays out some specific needs, tries out different tech to fulfill those needs, and documents how it went for them.

That’s exactly what Andrew Walpole did here. He wanted to try out bundlers in the context of WordPress themes and needing a handful of specific files built. Two JavaScript and two Sass files, which can import things from npm, and need to be minified with sourcemaps and all that. Essentially the same crap we were doing when I wrote Grunt for People Who Think Things Like Grunt are Weird and Hard eight years ago. The process hasn’t gotten any easier, but at least it’s gotten faster.

The winner for Andrew: esbuild through Estrella.

Direct Link to ArticlePermalink

The post Through the pipeline: An exploration of front-end bundlers appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Weekly Platform News: Focus Rings, Donut Scope, More em Units, and Global Privacy Control

Css Tricks - Thu, 03/04/2021 - 11:33am

In this week’s news, Chrome tackles focus rings, we learn how to get “donut” scope, Global Privacy Control gets big-name adoption, it’s time to ditch pixels in media queries, and a snippet that prevents annoying form validation styling.

Chrome will stop displaying focus rings when clicking buttons

Chrome, Edge, and other Chromium-based browsers display a focus indicator (a.k.a. focus ring) when the user clicks or taps a (styled) button. For comparison, Safari and Firefox don’t display a focus indicator when a button is clicked or tapped; they do so only when the button is focused via the keyboard.

The focus ring will stay on the button until the user clicks somewhere else on the page.

Some developers find this behavior annoying and are using various workarounds to prevent the focus ring from appearing when a button is clicked or tapped. For example, the popular what-input library continuously tracks the user’s input method (mouse, keyboard or touch), allowing the page to suppress focus rings specifically for mouse clicks.

[data-whatintent="mouse"] :focus { outline: none; }

A more recent workaround was enabled by the addition of the CSS :focus-visible pseudo-class to Chromium a few months ago. In the current version of Chrome, clicking or tapping a button invokes the button’s :focus state but not its :focus-visible state. That way, the page can use a suitable selector to suppress focus rings for clicks and taps without affecting keyboard users.

:focus:not(:focus-visible) { outline: none; }

Fortunately, these workarounds will soon become unnecessary. Chromium’s user agent stylesheet recently switched from :focus to :focus-visible, and as a result of this change, button clicks and taps no longer invoke focus rings. The new behavior will first ship in Chrome 90 next month.

The enhanced CSS :not() selector enables “donut scope”

I recently wrote about the A:not(B *) selector pattern that allows authors to select all A elements that are not descendants of a B element. This pattern can be expanded to A B:not(C *) to create a “donut scope.”

For example, the selector article p:not(blockquote *) matches all <p> elements that are descendants of an <article> element but not descendants of a <blockquote> element. In other words, it selects all paragraphs in an article except the ones that are in a block quotation.
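Here’s a minimal sketch of the donut in action (the markup is invented for illustration):

<article>
  <p>Selected: in the article, outside any blockquote.</p>
  <blockquote>
    <p>Skipped: inside the donut's hole.</p>
  </blockquote>
  <p>Selected again.</p>
</article>

article p:not(blockquote *) {
  color: rebeccapurple;
}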

The donut shape that gives this scope its name

CodePen Embed Fallback

The New York Times now honors Global Privacy Control

Announced last October, Global Privacy Control (GPC) is a new privacy signal for the web that is designed to be legally enforceable. Essentially, it’s an HTTP Sec-GPC: 1 request header that tells websites that the user does not want their personal data to be shared or sold.

The DuckDuckGo Privacy Essentials extension enables GPC by default in the browser

The New York Times has become the first major publisher to honor GPC. A number of other publishers, including The Washington Post and Automattic (WordPress.com), have committed to honoring it “this coming quarter.”

From NYT’s privacy page:

Does The Times support the Global Privacy Control (GPC)?

Yes. When we detect a GPC signal from a reader’s browser where GDPR, CCPA or a similar privacy law applies, we stop sharing the reader’s personal data online with other companies (except with our service providers).
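On the technical side, the signal arrives as the Sec-GPC: 1 request header, and the GPC proposal also surfaces it to JavaScript. A hedged sketch (the property ships with the proposal, so check support before relying on it):

// The same signal that arrives as the `Sec-GPC: 1` header
if (navigator.globalPrivacyControl) {
  // The visitor has opted out of data sharing/selling,
  // so skip loading third-party scripts that would do either
}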

The case for em-based media queries

Some browsers allow the user to increase the default font size in the browser’s settings. Unfortunately, this user preference has no effect on websites that set their font sizes in pixels (e.g., font-size: 20px). In part for this reason, some websites (including CSS-Tricks) instead use font-relative units, such as em and rem, which do respond to the user’s font size preference.

Ideally, a website that uses font-relative units for font-size should also use em values in media queries (e.g., min-width: 80em instead of min-width: 1280px). Otherwise, the site’s responsive layout may not always work as expected.

For example, CSS-Tricks switches from a two-column to a one-column layout on narrow viewports to prevent the article’s lines from becoming too short. However, if the user increases the default font size in the browser to 24px, the text on the page will become larger (as it should) but the page layout will not change, resulting in extremely short lines at certain viewport widths.
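Here’s what the conversion looks like, assuming the common 16px browser default (the .sidebar selector is made up for illustration):

/* Before: a px breakpoint ignores the user's font-size preference */
@media (min-width: 1280px) {
  .sidebar { display: block; }
}

/* After: 1280px / 16px = 80em, so the breakpoint scales with that preference */
@media (min-width: 80em) {
  .sidebar { display: block; }
}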

If you’d like to try out em-based media queries on your website, there is a PostCSS plugin that automatically converts min-width, max-width, min-height, and max-height media queries from px to em.

(via Nick Gard)

A new push to bring CSS :user-invalid to browsers

In 2017, Peter-Paul Koch published a series of three articles about native form validation on the web. Part 1 points out the problems with the widely supported CSS :invalid pseudo-class:

  • The validity of <input> elements is re-evaluated on every key stroke, so a form field can become :invalid while the user is still typing the value.
  • If a form field is required (<input required>), it will become :invalid immediately on page load.

Both of these behaviors are potentially confusing (and annoying), so websites cannot rely solely on the :invalid selector to indicate that a value entered by the user is not valid. However, there is the option to combine :invalid with :not(:focus) and even :not(:placeholder-shown) to ensure that the page’s “invalid” styles do not apply to the <input> until the user has finished entering the value and moved focus to another element.
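Put together, that combination looks something like this minimal sketch (the red border is just an example treatment, and :placeholder-shown assumes the input has a placeholder attribute):

/* Only flag the field once the user has typed something
   and moved focus elsewhere */
input:invalid:not(:focus):not(:placeholder-shown) {
  border: 2px solid crimson;
}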

CodePen Embed Fallback

The CSS Selectors module defines a :user-invalid pseudo-class that avoids the problems of :invalid by only matching an <input> “after the user has significantly interacted with it.”

Firefox already supports this functionality via the :-moz-ui-invalid pseudo-class (see it in action). Mozilla now intends to un-prefix this pseudo-class and ship it under the standard :user-invalid name. There are still no signals from other browser vendors, but the Chromium and WebKit bugs for this feature have been filed.

The post Weekly Platform News: Focus Rings, Donut Scope, More em Units, and Global Privacy Control appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Exploring @property and its Animating Powers

Css Tricks - Thu, 03/04/2021 - 5:51am

Uh, what’s @property? It’s a new CSS feature! It gives you superpowers. No joke, there is stuff that @property can do that unlocks things in CSS we’ve never been able to do before.

While everything about @property is exciting, perhaps the most interesting thing is that it provides a way to specify a type for custom CSS properties. A type provides more contextual information to the browser, and that results in something cool: We can give the browser the information it needs to transition and animate those properties!

But before we get too giddy about this, it’s worth noting that support isn’t quite there. As it currently stands at the time of this writing, @property is supported in Chrome and, by extension, Edge. We need to keep an eye on browser support for when we get to use this in other places, like Firefox and Safari.

First off, we get type checking

@property --spinAngle {
  /* An initial value for our custom property */
  initial-value: 0deg;
  /* Whether it inherits from parent set values or not */
  inherits: false;
  /* The type. Yes, the type. You thought TypeScript was cool */
  syntax: '<angle>';
}

@keyframes spin {
  to {
    --spinAngle: 360deg;
  }
}

That’s right! Type checking in CSS. It’s sorta like creating our very own mini CSS specification. And that’s a simple example. Check out all of the various types we have at our disposal:

  • length
  • number
  • percentage
  • length-percentage
  • color
  • image
  • url
  • integer
  • angle
  • time
  • resolution
  • transform-list
  • transform-function
  • custom-ident (a custom identifier string)
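For instance, registering a color-typed property might look like this (the --glow name is made up for illustration):

@property --glow {
  syntax: '<color>';
  inherits: false;
  initial-value: hotpink;
}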

Before any of this, we may have relied on using “tricks” for powering animations with custom properties.

CSS variables are awesome, right? But scope power is often overlooked. For example, take this demo, 3 different animations but only 1 animation defined 💪 That means dynamic animations 😎 https://t.co/VN02NlC4G8 via @CodePen #CSS #animation #webdev #webdesign #coding pic.twitter.com/ig8baxr7F3

— Jhey 🐻🛠 (@jh3yy) November 5, 2019

What cool stuff can we do then? Let’s take a look to spark our imaginations.

Let’s animate color

How might you animate an element either through a series of colors or between them? I’m a big advocate for the HSL color space, which breaks things down into fairly understandable numbers for hue, saturation, and lightness.

Animating a hue feels like something fun we can do. What’s colorful? A rainbow! There’s a variety of ways we could make a rainbow. Here’s one:

CodePen Embed Fallback

In this example, CSS Custom Properties are set on the different bands of the rainbow using :nth-child() to scope them to individual bands. Each band also has an --index set to help with sizing.

To animate those bands, we might use that --index to set some negative animation delays, but then use the same keyframe animation to cycle through hues.

.rainbow__band {
  border-color: hsl(var(--hue, 10), 80%, 50%);
  animation: rainbow 2s calc(var(--index, 0) * -0.2s) infinite linear;
}

@keyframes rainbow {
  0%, 100% { --hue: 10; }
  14% { --hue: 35; }
  28% { --hue: 55; }
  42% { --hue: 110; }
  56% { --hue: 200; }
  70% { --hue: 230; }
  84% { --hue: 280; }
}

That might work out okay if you want a “stepped” effect. But, those keyframe steps aren’t particularly accurate. I’ve used steps of 14% as a rough jump.

CodePen Embed Fallback

We could animate the border-color and that would get the job done. But, we’d still have a keyframe step calculation issue. And we need to write a lot of CSS to get this done:

@keyframes rainbow {
  0%, 100% { border-color: hsl(10, 80%, 50%); }
  14% { border-color: hsl(35, 80%, 50%); }
  28% { border-color: hsl(55, 80%, 50%); }
  42% { border-color: hsl(110, 80%, 50%); }
  56% { border-color: hsl(200, 80%, 50%); }
  70% { border-color: hsl(230, 80%, 50%); }
  84% { border-color: hsl(280, 80%, 50%); }
}

Enter @property. Let’s start by defining a custom property for hue. This tells the browser our custom property, --hue, is going to be a number (not a string that looks like a number):

@property --hue {
  initial-value: 0;
  inherits: false;
  syntax: '<number>';
}

Hue values in HSL can go from 0 to 360. We start with an initial value of 0. The value isn’t going to inherit. And our value, in this case, is a number. The animation is as straightforward as:

@keyframes rainbow {
  to {
    --hue: 360;
  }
}

Yep, that’s the ticket:

CodePen Embed Fallback

To get the starting points accurate, we could play with delays for each band. This gives us some cool flexibility. For example, we can up the animation-duration and we get a slow cycle. Have a play with the speed in this demo.

CodePen Embed Fallback

It may not be the “wildest” of examples, but I think animating color has some fun opportunities when we use color spaces that make logical use of numbers. Animating through the color wheel before required some trickiness. For example, generating keyframes with a preprocessor, like Stylus:

@keyframes party
  for $frame in (0..100)
    {$frame * 1%}
      background 'hsl(%s, 65%, 40%)' % ($frame * 3.6)

We do this purely because the simpler version below isn’t understood by the browser. It sees going from 0 to 360 on the color wheel as no change at all because both hsl() values show the same color.

@keyframes party {
  from { background: hsl(0, 80%, 50%); }
  to { background: hsl(360, 80%, 50%); }
}

The keyframes are the same, so the browser assumes the animation stays at the same background value when what we actually want is for the browser to go through the entire hue spectrum, starting at one value and ending at that same value after it goes through the motions.

Think of all the other opportunities we have here. We can:

  • animate the saturation
  • use different easings
  • animate the lightness
  • try rgb()
  • try degrees in hsl() and declare our custom property type as <angle>

What’s neat is that we can share that animated value across elements with scoping! Consider this button. The border and shadow animate through the color wheel on hover.
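Here’s a rough sketch of that idea, reusing the registered --hue property from above; both the border and the shadow read the same animated value, so they stay in sync:

button {
  --hue: 0;
  border: 4px solid hsl(var(--hue), 80%, 50%);
  box-shadow: 0 1rem 2rem -0.5rem hsl(var(--hue), 80%, 50%);
}

button:hover {
  animation: rainbow 2s infinite linear;
}

@keyframes rainbow {
  to {
    --hue: 360;
  }
}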

CodePen Embed Fallback

Animating color leads me to think… wow!

CodePen Embed Fallback

Straight-up numbering

Because we can define types for numbers—like integer and number—that means we can also animate numbers instead of using those numbers as part of something else. Carter Li actually wrote an article on this right here on CSS-Tricks. The trick is to use an integer in combination with CSS counters. This is similar to how we can work the counter in “Pure CSS” games like this one.

The use of counter and pseudo-elements provides a way to convert a number to a string. Then we can use that string for the content of a pseudo-element. Here are the important bits:

@property --milliseconds { inherits: false; initial-value: 0; syntax: '<integer>'; } .counter { counter-reset: ms var(--milliseconds); animation: count 1s steps(100) infinite; } .counter:after { content: counter(ms); } @keyframes count { to { --milliseconds: 100; } }

Which gives us something like this. Pretty cool.

CodePen Embed Fallback

Take that a little further and you’ve got yourself a working stopwatch made with nothing but CSS and HTML. Click the buttons! The rad thing here is that this actually works as a timer. It won’t suffer from drift. In some ways it may be more accurate than the JavaScript solutions we often reach for, such as setInterval. Check out this great video from Google Chrome Developers about JavaScript counters.

CodePen Embed Fallback

What other things could you use animated numbers for? A countdown perhaps?
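As a sketch, a countdown is just the same counter trick running in reverse; here we count down from 10, with a hypothetical .countdown element:

@property --seconds {
  inherits: false;
  initial-value: 10;
  syntax: '<integer>';
}

.countdown {
  counter-reset: s var(--seconds);
  animation: countdown 10s steps(10) forwards;
}

.countdown::after {
  content: counter(s);
}

@keyframes countdown {
  to {
    --seconds: 0;
  }
}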

Animated gradients

You know the ones: linear, radial, and conic. Ever been in a spot where you wanted to transition or animate the color stops? Well, @property can do that!

Consider a gradient where we‘re creating some waves on a beach. Once we’ve layered up some background images, we could make something like this:

body {
  background-image:
    linear-gradient(transparent 0 calc(35% + (var(--wave) * 0.5)), var(--wave-four) calc(75% + var(--wave)) 100%),
    linear-gradient(transparent 0 calc(35% + (var(--wave) * 0.5)), var(--wave-three) calc(50% + var(--wave)) calc(75% + var(--wave))),
    linear-gradient(transparent 0 calc(20% + (var(--wave) * 0.5)), var(--wave-two) calc(35% + var(--wave)) calc(50% + var(--wave))),
    linear-gradient(transparent 0 calc(15% + (var(--wave) * 0.5)), var(--wave-one) calc(25% + var(--wave)) calc(35% + var(--wave))),
    var(--sand);
}

There is quite a bit going on there. But, to break it down, we’re creating each color stop with calc(). And in that calculation, we add the value of --wave. The neat trick here is that when we animate that --wave value, all the wave layers move.

CodePen Embed Fallback

This is all the code we needed to make that happen:

body {
  animation: waves 5s infinite ease-in-out;
}

@keyframes waves {
  50% {
    --wave: 25%;
  }
}

Without the use of @property, our waves would step between high and low tide. But, with it, we get a nice chilled effect like this.

CodePen Embed Fallback

It’s exciting to think about the other neat opportunities we get when manipulating images. Like rotation. Or how about animating the angle of a conic-gradient… but within a border-image. Bramus Van Damme does a brilliant job covering this concept.

Let’s break it down by creating a charging indicator. We’re going to animate an angle and a hue at the same time. We can start with two custom properties:

@property --angle {
  initial-value: 0deg;
  inherits: false;
  syntax: '<angle>';
}

@property --hue {
  initial-value: 0;
  inherits: false;
  syntax: '<number>';
}

The animation will update the angle and hue with a slight pause on each iteration.

@keyframes load {
  0%, 10% {
    --angle: 0deg;
    --hue: 0;
  }
  100% {
    --angle: 360deg;
    --hue: 100;
  }
}

Now let’s apply it as the border-image of an element.

.loader {
  --charge: hsl(var(--hue), 80%, 50%);
  border-image: conic-gradient(var(--charge) var(--angle), transparent calc(var(--angle) * 0.5)) 30;
  animation: load 2s infinite ease-in-out;
}

Pretty cool.

CodePen Embed Fallback

Unfortunately, border-image doesn‘t play nice with border-radius. But, we could use a pseudo-element behind it. Combine it with the number animation tricks from before and we’ve got a full charging/loading animation. (Yep, it changes when it gets to 100%.)

CodePen Embed Fallback

Transforms are cool, too

One issue with animating transforms is transitioning between certain parts of them. It often ends up breaking or not looking how it should. Consider the classic example of a ball being thrown. We want it to go from point A to point B while imitating the effect of gravity.

An initial attempt might look like this:

@keyframes throw {
  0% {
    transform: translate(-500%, 0);
  }
  50% {
    transform: translate(0, -250%);
  }
  100% {
    transform: translate(500%, 0);
  }
}

But, we’ll soon see that it doesn’t look anything like we want.

CodePen Embed Fallback

Before, we may have reached for wrapper elements and animated them in isolation. But, with @property, we can animate the individual values of the transform. And all on one timeline. Let’s flip the way this works by defining custom properties and then setting a transform on the ball.

@property --x {
  inherits: false;
  initial-value: 0%;
  syntax: '<percentage>';
}

@property --y {
  inherits: false;
  initial-value: 0%;
  syntax: '<percentage>';
}

@property --rotate {
  inherits: false;
  initial-value: 0deg;
  syntax: '<angle>';
}

.ball {
  animation: throw 1s infinite alternate ease-in-out;
  transform: translateX(var(--x)) translateY(var(--y)) rotate(var(--rotate));
}

Now for our animation, we can compose the transform we want against the keyframes:

@keyframes throw {
  0% {
    --x: -500%;
    --rotate: 0deg;
  }
  50% {
    --y: -250%;
  }
  100% {
    --x: 500%;
    --rotate: 360deg;
  }
}

The result? The curved path we had hoped for. And we can make that look different depending on the timing functions we use. We could split the animation three ways and use a different timing function for each. That would give us different results for the way the ball moves.

CodePen Embed Fallback

Consider another example where we have a car that we want to drive around a square with rounded corners.

CodePen Embed Fallback

We can use a similar approach to what we did with the ball:

@property --x {
  inherits: false;
  initial-value: -22.5;
  syntax: '<number>';
}

@property --y {
  inherits: false;
  initial-value: 0;
  syntax: '<number>';
}

@property --r {
  inherits: false;
  initial-value: 0deg;
  syntax: '<angle>';
}

The car’s transform is calculated with vmin to keep things responsive:

.car {
  transform: translate(calc(var(--x) * 1vmin), calc(var(--y) * 1vmin)) rotate(var(--r));
}

Now we can write an extremely accurate frame-by-frame journey for the car. We could start with the value of --x.

@keyframes journey {
  0%, 100% { --x: -22.5; }
  25% { --x: 0; }
  50% { --x: 22.5; }
  75% { --x: 0; }
}

The car makes the right journey on the x-axis.

CodePen Embed Fallback

Then we build upon that by adding the travel for the y-axis:

@keyframes journey {
  0%, 100% { --x: -22.5; --y: 0; }
  25% { --x: 0; --y: -22.5; }
  50% { --x: 22.5; --y: 0; }
  75% { --x: 0; --y: 22.5; }
}

Well, that’s not quite right.

CodePen Embed Fallback

Let’s drop some extra steps into our @keyframes to smooth things out:

@keyframes journey {
  0%, 100% { --x: -22.5; --y: 0; }
  12.5% { --x: -22.5; --y: -22.5; }
  25% { --x: 0; --y: -22.5; }
  37.5% { --x: 22.5; --y: -22.5; }
  50% { --x: 22.5; --y: 0; }
  62.5% { --x: 22.5; --y: 22.5; }
  75% { --x: 0; --y: 22.5; }
  87.5% { --x: -22.5; --y: 22.5; }
}

Ah, much better now:

CodePen Embed Fallback

All that‘s left is the car‘s rotation. We‘re going with a 5% window around the corners. It’s not precise, but it definitely shows the potential of what’s possible:

@keyframes journey {
  0% { --x: -22.5; --y: 0; --r: 0deg; }
  10% { --r: 0deg; }
  12.5% { --x: -22.5; --y: -22.5; }
  15% { --r: 90deg; }
  25% { --x: 0; --y: -22.5; }
  35% { --r: 90deg; }
  37.5% { --x: 22.5; --y: -22.5; }
  40% { --r: 180deg; }
  50% { --x: 22.5; --y: 0; }
  60% { --r: 180deg; }
  62.5% { --x: 22.5; --y: 22.5; }
  65% { --r: 270deg; }
  75% { --x: 0; --y: 22.5; }
  85% { --r: 270deg; }
  87.5% { --x: -22.5; --y: 22.5; }
  90% { --r: 360deg; }
  100% { --x: -22.5; --y: 0; --r: 360deg; }
}

And there we have it, a car driving around a curved square! No wrappers, no need for complex math. And we composed it all with custom properties.

CodePen Embed Fallback

Powering an entire scene with variables

We‘ve seen some pretty neat @property possibilities so far, but putting everything we’ve looked at here together can take things to another level. For example, we can power entire scenes with just a few custom properties.

Consider the following concept for a 404 page. Two registered properties power the different moving parts. We have a moving gradient that’s clipped with -webkit-background-clip. The shadow moves by reading the values of the properties. And we swing another element for the light effect.

CodePen Embed Fallback

That’s it!

It’s exciting to think about what types of things we can do with the ability to define types with @property. By giving the browser additional context about a custom property, we can go nuts in ways we couldn’t before with basic strings.

What ideas do you have for the other types? Time and resolution would make for interesting transitions, though I’ll admit I wasn’t able to make them work the way I was hoping. url could also be neat, like perhaps transitioning between a range of sources the way an image carousel typically does. Just brainstorming here!

I hope this quick look at @property inspires you to go check it out and make your own awesome demos! I look forward to seeing what you make. In fact, please share them with me here in the comments!

The post Exploring @property and its Animating Powers appeared first on CSS-Tricks.


How to Develop and Test a Mobile-First Design in 2021

Css Tricks - Thu, 03/04/2021 - 5:49am

The internet had connected 4.66 billion people as of October 2020, a total of 59% of the world’s population. Amazingly, this is not even the surprising part. The stat to look out for is mobile users and their rise in the internet world. Out of those 4.66 billion people connected to the internet, 4.28 billion are mobile internet users: a number that reminds us how vital mobile users are and why we need to keep them a priority. As surprising as these numbers are, everyone saw this coming with the sudden rise in mobile usage and mobile sales all around the world.

As web developers, we have been shifting our design methodologies and development tactics toward mobile users for some time now. One such design paradigm is mobile-first design, a witness to our commitment to mobile internet users: a design strategy that starts with mobile users and works around their interests. This post puts mobile-first design center stage and explores the complexities of mobile-first design development and testing, and how it affects business in a positive way.

What is a Mobile-First Design?

Mobile-first design is the process of planning and developing a website keeping mobile users in mind first. This methodology is a change from desktop-first, which had always been the way, reacting to the surge in mobile internet users around the world. Mobile-first design is part of the progressive advancement method, in which we progress slowly toward a more advanced design.

Progressive advancement starts from a basic design fulfilling the requirements of the mobile device. This basic design consists of minimal elements on the webpage, eliminating everything a mobile user is not interested in. From here, we take another step and add a few more elements, increasing the complexity of our design while sticking to modern web design techniques and mobile-friendliness. This goes on until we are satisfied and have implemented all the necessary modules in our application.

Credit: Smartz

Mobile-first design is the practice of starting development with respect to the mobile user and mobile device first. With mobile contributing 52% of internet traffic today, a mobile-first website can help us lift overall engagement and make us more visible on the internet (on Google). With the start of mobile-first indexing, Google has also tweaked its algorithm to show that mobile users are the priority today. Mobile-first design is not one complex task but a series of small development changes that can help your website render perfectly on mobile with a happy user. To develop a mobile-first website, we need to adopt some habits and remember a few tips to test the application efficiently.

How To Develop a Mobile-First Design

The journey of developing a mobile-first design can be roughly divided into the following stages:

Wireframing

A wireframe’s importance in a mobile-first design is similar to that of a map when we construct a new building. A wireframe provides an architectural view of how things are going to be. With wireframes in hand, not only technical teams but everyone related to the project can understand the high-level view of the application, or in the user’s terms: how will my application look?

While clients and analysts can tick off their checklist of requirements, developers can understand how the elements will be laid out in the application.

Credit: Balsamiq

A research paper published by KPMG on “Unsuccessful Information Technology Projects” revealed poor project planning as one of the reasons for project failure. The other good reason to use wireframes is that they increase developers’ efficiency because of clearer objectives. A similar result can be seen with the TDD and BDD approaches in testing.

Use Responsiveness and a Responsive Framework

A responsive framework is a crucial step in mobile-first design development. A responsive website adjusts itself to the environment it renders in, such as the screen size, platform, or orientation. Mobile devices do not come in one fixed size. The overwhelming list on Screensiz will make you believe that media queries and a meta tag alone will not work for all devices, and that a generic approach is the only solution. Responsiveness means adjusting not only to the screen of the device, but also to the user’s experience. For example, images are an important part of a web page. Even though a user may skim the content, they will glance at an image at least once. It needs to be perfect! But while shrinking the aspect ratio of the webpage, we unknowingly push the image’s subject out of focus. The following comparison makes this point, as the family falls out of focus on a smaller-screen device:

Simply resizing is a risky business, as an image may lose its importance altogether. A good responsive website takes care of this aspect and renders a different, lower-quality but cropped image on a smaller screen, as below:

Or, looking at just the mobile image, it has been cropped as follows:

Since the focus is not the tree but the family, we need users to see our subject with clarity. This is responsiveness according to the user experience, which is part of a responsive website.
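One way to serve that cropped version is art direction with the <picture> element. This is only a sketch, and the file names and breakpoint are hypothetical:

<picture>
  <!-- Cropped, lighter image for small screens -->
  <source media="(max-width: 600px)" srcset="family-cropped.jpg" />
  <!-- Full scene for larger screens -->
  <img src="family-full.jpg" alt="A family sitting under a tree" />
</picture>

The browser picks the first matching source, so small screens get the tight crop while everything else gets the full scene.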

A responsive framework keeps the responsive needs in mind and has built-in capabilities to enhance the responsiveness of the website. With a responsive framework, the developers need not take care of every little thing and can focus on major issues such as the image resizing shown above. Responsive frameworks are more successful and popular in a mobile-first design strategy.

Follow the thumb rule

Before placing our content on the web page, we have to decide the location of each element with respect to the interaction habits of mobile users. A simple reason for this is the way most people use their phone: with one hand.

75% of people use their thumb to operate their mobile device, which, when painted as a red-green zone, looks as follows:

Credit: Biracial Booty

The green zone in the above image shows the easily accessible area on a mobile screen. Our most important elements, such as CTAs, should reside in the green zone so a mobile user can reach them easily. Remember that the user is not using a mouse to operate a mobile device. Reaching the red zone takes effort and repeated actions, which, voluntarily or involuntarily, a user always notices.

The image below shows the effort a user would have to make with a very small design change (which, in my view, is not a mobile-first design):

Credit: Ranjith Manoharan

This small change can lead to increased user engagement while demanding no effort from the user.

Untangle Your Mobile-First Content and Design

Designing a web page desktop-style demands no extra attention to the content. The screen is big and can accommodate whatever content you wish to add. The same attitude does not work in a mobile-first way. When the screen is smaller and we have less than three seconds to impress a user, the content needs to be concise and to the point. A good way to replace a lot of content on your screen is to use images, a hierarchical method of design, or a better user interface.

These content considerations carry over to the elements in a mobile-first design, for the same reason of lesser screen space. A congested screen with too many elements spread throughout confuses users and slides them away from the CTA conversion goal. This is also called a minimalistic approach (or minimalism) in web design. A minimalistic approach places just a few elements on the screen, leaving considerable white space for the user. The following image shows a minimalistic design:

Credit: Johnyvino

But a mobile design is different from a mobile-first design. Eventually, we will have to extend this web page for desktop users too. Minimalism on the desktop is also a good approach, given that the font sizes and hero images are proportionate.

Decluttering our design and content reminds us why it is important to move from basic to advanced and not vice-versa (the latter is called graceful degradation). Had we started with the desktop design, the team would have to conduct brainstorming sessions first to fill a large screen, and then remove elements one by one for the mobile device. By that point, management becomes too complex, it is hard to confine our elements, and it takes too much time and effort. Therefore, start basic with a minimal design and then move forward, which is the initial step in a mobile-first design strategy.

Prioritize UI and UX

A mobile-first design needs to revolve around mobile users to increase engagement and conversions in our web application. While animations and transitions are as fancy to touch as they are to look at, the user experience is much more than explicit elements. Our user experience need not be ostentatious, but it should engage users without them realising our intentions. For example, elements should be extremely easy to find on a web page. A mobile user should never struggle to find the search button. Conventional element locations work well here; for example, a navigation bar is always expected to be in a corner (left or right).

Another aspect of prioritizing the user experience is enlarging touch targets for comfortable interaction. Unlike a desktop with a small pointed arrow, we touch our screens with our thumbs, which require a considerably larger area. A mobile-first design encourages large clickable elements with white space between them to avoid unwanted clicks.

It is not a bad idea to keep these parameters intact while progressing toward the desktop side. Businesses have started to ship better UIs, including large touch targets on desktops as well, which clearly shows their mobile-first design approach. Enlarging elements also includes determining the best font size for your web page, considering the smaller screen size. Font sizes are easy to switch through media queries, and you can follow the chart below to decide which size to go for in which scenario:

Credit: Airbus

Remember that the font type also affects font-size visibility and readability on a mobile device. Therefore, it is better to test and find your perfect size, taking the above chart as a reference.
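As a sketch, the media-query switch itself is simple; the breakpoints and sizes below are purely illustrative:

body {
  font-size: 16px; /* Mobile-first base size */
}

@media (min-width: 768px) {
  body {
    font-size: 18px; /* Tablets and up */
  }
}

@media (min-width: 1200px) {
  body {
    font-size: 20px; /* Large desktop screens */
  }
}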

Tip: A small thing to remember when developing a mobile-first design is to avoid hover-only elements on your web page. Hovering is a great tool on desktops, but mobile devices have no support for hover; they work on touch interaction. You can keep the hover design alongside the touch facility, but constructing elements that work only on hover is not a good idea.
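One way to hedge against hover-only interactions is the hover media feature, which applies hover styles only on devices that can actually hover. A sketch, with a hypothetical submenu:

/* Base styles: the submenu is visible everywhere, touch included */
.menu-item .submenu {
  display: block;
}

/* Only devices that can hover get the hover behavior */
@media (hover: hover) and (pointer: fine) {
  .menu-item .submenu {
    display: none;
  }
  .menu-item:hover .submenu {
    display: block;
  }
}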

CTA Placement

The CTA is an important button. It drives conversion goals, and every business wants its users to click that button and increase the conversion rate. Therefore, it demands special attention from the team. The location of the CTA is the first thing to finalize, carefully reminding yourself not to make the user work too hard.

CTAs should always be in the reach of the thumb (remember the green zone?) and on the first presented screen (above the fold) as well.

Apart from CTA placement, the message and presentation of a CTA are an art in themselves, but let’s leave that for mobile-friendliness discussions.

Navigation Bar

The navigation bar in a mobile-first design needs more simplification than the desktop one. While desktop design has transformed navigation bars into different, unique designs, mobile-first is still enjoying the conventional hamburger menu style. People expect that today! If users cannot find an option on the landing screen, they look for those three horizontal lines that they know will take them to what they are looking for.

The following image shows LambdaTest’s transformation of the navigation bar on two different devices:

The mobile-first approach helps us shrink the number of links in the navigation menu, since long lists of links are not well received. For those who cannot sacrifice any, a nested layout is a better choice than intimidating the user with a long list of links. In addition, it keeps our presentation clean and encourages a minimal design with a decluttered content approach.

Say No To Heavy Elements

A web page’s loading speed has become a make-or-break parameter in website design. An Unbounce survey shows that 70% of customers are influenced by a website’s speed. Their decisions are affected by the FCP (first contentful paint) or the full page load. Optimizing for the FCP (the first thing visible on your website) is the better choice, as the user then has something to engage with.

Google recommends a loading time of two seconds and under. Currently, the majority of websites do not meet this criterion. As many as 57% of people leave a website that takes more than three seconds to load. Conversion rates also take a toll when page load time is higher than expected, affecting business directly. So, how can we save ourselves from this?

Using lighter elements on a web page crafted for mobile users is the first step. If images exist, they should probably be in a compressed (lossy) format such as JPEG and of smaller size. Resizing them to a lower ratio helps too, since the mobile user is rarely concerned about high-quality images apart from product images. Using CDNs can also help decrease page load time. For a WordPress website, plugins should be as minimal and lightweight as possible. Static plugins are a good start, but ultimately the elements on a web page should be lighter, load asynchronously to improve the FCP, and make fewer requests to the server.
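A couple of lightweight habits along those lines, sketched in plain HTML (file names are hypothetical):

<!-- Native lazy loading keeps off-screen images from competing with the FCP -->
<img src="product.jpg" loading="lazy" width="400" height="300" alt="Product photo" />

<!-- defer keeps non-critical scripts from blocking the first render -->
<script src="analytics.js" defer></script>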

How to test a mobile-first design?

The above points assist us during the development of a mobile-first design, which starts with a basic, minimal design for the mobile user and increases in complexity without hindering the user experience. But an equally important aspect of a web application is testing it. Testing an application can point out hidden bugs and functionality that users either don’t like or that behaves inappropriately. Let’s check out how we can go ahead and polish our mobile-first website by testing it.

Use Tools

Similar to responsive frameworks in development, which give us in-built functionality and take care of common code, tools do the same in testing. A mobile web testing tool not only creates an environment in which the website renders as it would on a mobile device, but also provides certain features that are extremely important for a mobile-first design.

Consider one such tool, LT Browser, which I recently discovered on Product Hunt.

LT Browser is a browser made specifically for mobile web testing and responsive testing of websites. It provides 45+ screen sizes for testers to render their websites on. With such a tool, you can easily find bugs using in-built debuggers and leverage hot-reload features to help you in development as well. With built-in integrations and performance reports, you can analyze performance and share it with your teammates easily.

LT Browser showing two devices side-by-side with different orientations

  • Test & debug on the go: Using LT Browser, users can test and debug their websites on the go; its in-built developer tools really come in handy for making a website seamless across devices.
  • Network throttling: An amazing and unique feature offered by LT Browser, with which a user can check how the website performs under high and low network bandwidth.
  • Local testing: Local testing allows developers to test their website even before pushing it online. With the local tunnel, they can view the website on any of the 45+ devices from their local system.
  • Performance report: To analyse the final website performance, developers and testers can view the Google Lighthouse-based performance report that will help them change certain website aspects in order to score more on both mobile and desktop devices.

Tools help you increase productivity and keep you efficient during the process. The choice of tools is personal to each tester, but tools should definitely play a part in the overall testing.

Cross-Browser Testing

Cross-browser testing is the process of analyzing your website on different target browsers, operating systems, and resolutions. For a mobile-friendly website to be successful, it should render as intended on a mobile screen regardless of the platform and browser used. This can be tested through cross-browser tools like LambdaTest.

As a tester or a developer, it is definitely not a good idea to take on these efforts manually. There is an overwhelming number of OS, browser, and resolution combinations that would take too much effort to install and test. A better way is to go for online cross-browser testing tools with mobile browser and OS support, or a mobile-specific browser like the LT Browser discussed above.

So, what are we looking for in a cross-browser testing process?

Cross-browser testing looks for issues with the elements of a web page and whether they are supported or not. While functionality testing is another segment of testing, cross-browser testing points out cross-browser compatibility issues. For example, if you have used CSS subgrid on the web page, it will not render in a browser that doesn’t support it, such as Google Chrome version 62. The same goes for JavaScript libraries and other code. With a browser matrix in hand, we can rest assured after performing the testing that our users will not be confused the way they would be when an element breaks on the webpage.
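On the development side, a feature query is one way to hedge against exactly this kind of gap. A sketch for the subgrid example, with a hypothetical .layout class:

.layout {
  display: grid; /* Plain grid fallback that every grid-supporting browser understands */
}

@supports (grid-template-rows: subgrid) {
  .layout {
    grid-template-rows: subgrid; /* Only applied where subgrid is supported */
  }
}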

Validate HTML and CSS code

Not every mishap on the web page is the browser’s fault; sometimes programmers make mistakes too! Since a web page is rendered or parsed by the browser and not compiled, errors and warnings do not stop the page from loading. When we have performed cross-browser testing but still cannot find an issue, a missing element is generally down to a syntax fault. Such syntax errors, and not following the W3C web standards, can land us in trouble when we progress from mobile-first to complete desktop designs.

HTML and CSS code is very easy to validate. There are a lot of tools available that can do this job for us. Some of them are Validator.nu, the W3C Markup Validator, and the W3C CSS Validator.

Network Performance

In our efforts to test the page load speed of a web page, a major hurdle is the network. A slower network means slower downloading of web pages and longer page load times. For a mobile-first design, it is extremely important to cover all types of users while performing the testing. One such section is users on slower networks, such as 3G, which constitutes 12% of North American internet users. Only 4% of people in North America use a 5G network right now. Imagine these numbers for countries with poor network infrastructure!

Network performance can be tested on a real device by switching connections, or through an online tool that provides such features. LT Browser has a network throttling feature to test the website on different connections, which helps while performing responsive or cross-browser testing.

A/B Testing

A/B testing is a type of variation testing, or split testing, that shows different variations of a web page to different segments of users. Website owners then analyze the performance of both versions and choose the better-performing one. For a mobile-first design, we may develop everything perfectly, following every rule in the textbook, but the final verdict is up to the user. If the user is not pressing that shiny CTA button, we need to fix that by learning what the user wants.

A popular question in A/B testing is: where do we create the variation? We cannot jumble up every element on the web page and create fifty variations for users. That can have an adverse impact on the business. To understand where we are going wrong and which elements need adjustment, we can use heatmap features. A heatmap allows web app owners to see users’ engagement with the web page and which parts they are ignoring.

A famous A/B testing case study is the 40% improvement in sales on a variation of EA’s SimCity 5 page, going from this:

To this:

Buyers were least interested in the pre-order offer, I guess!

Usability Testing

The final step in completing our mobile-first web design is to present it to real users and take their feedback. A/B testing is good, but even if you see the heatmap of the web page, you cannot talk to a real user and ask them why they are not pressing the CTA button. Usability testing fills this gap.

Usability testing is performed with real users who are the target audience of the application. For example, you cannot ask a poet to check out a coding website, right? Once these users are selected, we ask them to record their sessions and screens and speak their thoughts out loud. Sometimes the testers sit with the users and make notes by asking questions. Sometimes we just ask users to fill out a form with various options. Whichever way you do it, usability testing is important and uncovers hidden bugs that are hard to find in a mobile-first design, which is a tricky business in itself.

Why does mobile presence matter?

Our analysis of mobile-first design and its development techniques will make you wonder, why should I do mobile-first design? Does mobile presence matter that much?

A few days ago, I was going through my website on Google Search Console, and the message that popped up was the following:

Mobile-first design is so important today that Google has adopted mobile-first indexing as its primary search indexing technique, increasing visibility in mobile searches, which constitute 52% of internet traffic! But that is from Google’s side; what is in it for us?

Better Google Ranking

A mobile-first design is mobile-friendly. It is for mobile users. Therefore, Google recognizes that we have a website that is perfect for mobile users and makes us more visible for queries generated from a smartphone. As a result, our rankings improve. A better Google ranking attracts other businesses, as we are more visible and can advertise for them on our website if we want. Since users generally do not remember a website’s name, a Google search will help us generate traffic and conversions.

Higher Conversion Rates

A mobile-first design will ensure a decrease in bounce rate. When bounce rates are low and people are actually interested in your website, they will stick with it and return often. Given that the CTA is positioned right and all the eligibility criteria are satisfied, a mobile-first design will push your conversion rates higher, directly generating better engagement and goal completions. As a result, you will get steady business from your application.

Large Audience Coverage

Businesses also build a large audience base when they are visible in Google rankings and have a mobile-first design. A large audience base is the strength of any business. It requires less effort to engage, as trust has been established. On the other hand, seeing larger involvement with your application, you can also introduce other features and services. Such a strong base is a marketing bonus for businesses.

Better Market Presence

Satisfying all three requirements discussed above directly results in a better market presence. Even though mobile-first designs are recommended for every business, they are currently rare. Mobile-first design is yet to become a standard in web development, and choosing it will keep you ahead in the race. A Google search that shows your link in the results increases your market presence among competitors. They not only have to work harder to overtake you; they might also need to restructure their designs if they are still working on desktop-first ones.

A better market presence means better word of mouth about your happenings, features and upcoming highlights. Such a presence is a direct cause for better revenues and a better future.

Is Mobile-First similar to Mobile-Responsive?

The short answer: no! Mobile-first is a design method. With mobile-first design, we develop our web application for mobile users first. This starts from a very basic design and gradually advances toward a more complex design structure, keeping mobile-friendliness in view and mobile users as the priority. Mobile-first design is not a development technique but a design strategy that works as a catalyst in development. Developers get a clear objective and work faster with a defined design.

Mobile-responsive is the ability of the website to adjust itself according to the mobile screen size. A mobile-responsive design need not start with the mobile version of the website, nor does it take the thumb area or content relevance into account. Mobile-responsiveness is just concerned with rendering the website on a smaller device.

A mobile-responsive design strategy can be considered part of mobile-first design, since producing a mobile-first design requires the application to be responsive in nature. The mobile-responsive strategy was a good call when mobile users had just started to increase. Today, there has been extensive research on mobile design, in which a purely mobile-responsive strategy is hard pressed to survive. To cater to the needs of 4.28 billion mobile internet users, we need a mobile-first design.


The post How to Develop and Test a Mobile-First Design in 2021 appeared first on CSS-Tricks.


A Bare-Bones Approach to Versatile and Reusable Skeleton Loaders

Css Tricks - Wed, 03/03/2021 - 5:51am

UI components like spinners and skeleton loaders make waiting for a page load less frustrating and might even affect how loading times are perceived when used correctly. They won’t completely prevent users from abandoning the website, but they might encourage them to wait a bit longer. Animated spinners are used in most cases since they are easy to implement and they generally do a good enough job. Skeleton loaders have a limited use-case and might be complex to implement and maintain, but they offer an improved loading experience for those specific use-cases.

I’ve noticed that developers are either unsure when to use skeleton loaders to enhance the UX or do not know how to approach the implementation. More common examples of skeleton loaders around the web are not all that reusable or scalable. They are usually tailor-made for a single component and cannot be applied to anything else. That is one of the reasons developers use regular spinners instead and avoid the potential overhead in the code. Surely, there must be a way to implement skeleton loaders in a more simple, reusable, and scalable way.

Spinner elements and skeleton loaders

A spinner (or progress bar) element is the simplest and probably most commonly used element to indicate a loading state. A spinner might look better than a blank page, but it won’t hold a user’s attention for long. Spinners tell the user that something will load eventually. Users have to passively wait for content to load, meaning that they are unable to interact with other elements on the page or consume any other content on the page. Spinners take up the entire screen space and no content is available to the user.

The spinner element is displayed and covers the entire screen until all content has finished loading.

However, skeleton loaders (or skeleton screens) tell the user that the content is about to load and they might provide a better loading UX than a simple spinner. Empty boxes (with a solid color or gradient background) are used as a placeholder for the content that is being loaded. In most cases, content is gradually being loaded which allows users to maintain a sense of progress and perception that a page load is faster than it is. Users are actively waiting, meaning that they can interact with the page or consume at least some part of the content while the rest is loading.

Empty boxes (with a solid color or gradient background) are used as a placeholder while content is being gradually loaded. Text content is loaded and displayed first, and images are loaded and displayed after that.

It’s important to note that loading components should not be used to address performance issues. If a website is experiencing performance issues due to the problem that can be addressed (un-optimized assets or code, back-end performance issues, etc.), they should be fixed first. Loading elements won’t prevent users from abandoning websites with poor performance and high loading times. Loading elements should be used as a last resort when waiting is unavoidable and when loading delay is not caused by unaddressed performance issues.

Using skeleton loaders properly

Skeleton loaders shouldn’t be treated as a replacement for full-screen loading elements but instead when specific conditions for content and layout have been met. Let’s take this step-by-step and see how to use loading UI components effectively and how to know when to go with skeleton loaders instead of regular spinners.

Is loading delay avoidable?

The best way to approach loading in terms of UX is to avoid it altogether. We need to make sure that loading delay is unavoidable and is not the result of the aforementioned performance issues that can be fixed. The main priority should always be performance improvements and reducing the time needed to fetch and display the content.

Is loading initiated by the user and is the feedback required?

In some cases, user actions might initiate additional content to load. Some examples include lazy-loading content (e.g. images) in the user’s viewport while scrolling, loading content on a button click, etc. We need to include a loading element for cases where a user needs to get some kind of feedback for their actions that have initiated the loading process.

As seen in the following mockup, without a loading element to provide feedback, a user doesn’t know that their actions have initiated any loading process that is happening in the background.

We are asynchronously loading the content in a modal when the button is clicked. In the first example, no loading element is displayed and users might think that their click hasn’t been registered. In the second example, users get the feedback that their click has been registered and that the content is being loaded.

Is the layout consistent and predictable?

If we’ve decided to go with a loader element, we now need to choose what type of loader element best fits our use-case. Skeleton loaders are most effective in cases when we can predict the type and layout of the content that is being loaded in. If the skeleton loader layout doesn’t accurately represent the loaded content’s layout to some degree, the sudden change may cause layout shift and leave the user confused and disoriented. Use skeleton loaders for elements with predictable content for consistent layouts.

The grid layout on the left (taken from discogs.com) represents an ideal use-case for skeleton loaders, while the comments example on the right (taken from CSS-Tricks) is an ideal use-case for spinners.

Is there content on the page that is immediately available to the user?

Skeleton loaders are most effective when there are sections or page elements already present on the page while the skeleton loader is active and additional content is actively loading. Gradually loading in content means that static content is available on page load and asynchronously-loaded content is displayed as it becomes available (for example, the first text is loaded and images after that). This approach ensures that the user maintains a sense of progression and is expecting the content to finish loading at any moment. Having the entire screen covered in skeleton loaders without any content present and without gradual content loading is not significantly better than having the screen covered by a full-page spinner or progress bar.

The mockup on the left shows a skeleton loader covering all elements until everything has loaded. The mockup on the right shows a skeleton loader covering only content that is being asynchronously loaded. The page is usable since a part of the website’s content is displayed and the user maintains a sense of progression.

Creating robust skeleton loaders

Now that we know when to use skeleton loaders and how to use them properly, we can finally do some coding! But first, let me tell you how we are going to approach this.

Most skeleton loading examples from around the web are, in my opinion, over-engineered and high-maintenance. You might have seen one of those examples where skeleton screens are created as a separate UI component with separate CSS styles or created with elaborate use of CSS gradients to simulate the final layout. Creating and maintaining a separate skeleton loader or skeleton styles for each UI component can become serious overhead in development with such a highly-specific approach. This is especially true when we look at scalability, as any change to the existing layout also involves updating the skeleton layout or styles.

Let’s try and find a bare-bones approach to implementing skeleton loading that should work for most use-cases and will be easy to implement, reuse and maintain!

Card grid component

We’ll use regular HTML, CSS, and JavaScript for implementation, but the overall approach can be adapted to work with most tech stacks and frameworks.

We are going to create a simple grid of six card elements (three in each row) as an example, and simulate asynchronous content loading with a button click.

We’ll use the following markup for each card. Notice that we are setting width and height on our images and using a 1px transparent image as a placeholder. This will ensure that the image skeleton loader is visible until the image has been loaded.

<div class="card">
  <img width="200" height="200" class="card-image" src="..." />
  <h3 class="card-title"></h3>
  <p class="card-description"></p>
  <button class="card-button">Card button</button>
</div>

Here is our card grid example with some layout and presentation styles applied to it. Content nodes are added or removed from the DOM depending on the loading state using JavaScript to simulate asynchronous loading.

CodePen Embed Fallback

Skeleton loader styles

Developers usually implement skeleton loaders by creating replacement skeleton components (with dedicated skeleton CSS classes) or by recreating entire layouts with CSS gradients. Those approaches are inflexible and not reusable at all since individual skeleton loaders are tailor-made for each layout. Considering that layout styles (spacings, grid, inline, block and flex elements, etc.) are already present from the main component (card) styles, skeleton loaders just need to replace the content, not the entire component!

With that in mind, let’s create skeleton loader styles that become active only when a parent class is set and use CSS properties that only affect the presentation and content. Notice that these styles are independent from the layout and content of the element they’re being applied to, which should make them highly reusable.

.loading .loading-item {
  background: #949494 !important; /* Customizable skeleton loader color */
  color: rgba(0, 0, 0, 0) !important;
  border-color: rgba(0, 0, 0, 0) !important;
  user-select: none;
  cursor: wait;
}

.loading .loading-item * {
  visibility: hidden !important;
}

.loading .loading-item:empty::after,
.loading .loading-item *:empty::after {
  content: "\00a0";
}

The base parent class .loading is used to activate the skeleton loading styles. The .loading-item class overrides the element’s presentational styles to display a skeleton element. This also ensures that the layout and dimensions of the element are preserved and inherited by the skeleton. Additionally, .loading-item makes sure that all child elements are hidden and have at least an empty space character (\00a0) inside them so that the element is displayed and its layout is rendered.

Let’s add skeleton loader CSS classes to our markup. Notice how no additional HTML elements have been added, we are only applying additional CSS classes.

<div class="card loading">
  <img width="200" height="200" class="card-image loading-item" src="..." />
  <h3 class="card-title loading-item"></h3>
  <p class="card-description loading-item"></p>
  <button class="card-button loading-item">Card button</button>
</div>

Once the content has loaded, we only need to remove loading CSS class from the parent component to hide the skeleton loader styles.
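As a minimal sketch, assuming a hypothetical fetchCardData() call that resolves once the content is ready, the toggle is a single line of JavaScript:

const card = document.querySelector('.card');

// fetchCardData() and renderCardContent() are stand-ins for your own loading logic
fetchCardData().then((data) => {
  renderCardContent(card, data);
  card.classList.remove('loading'); // the skeleton styles stop applying
});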

CodePen Embed Fallback

These few lines should work for most, if not all, use cases, depending on your custom CSS, since these skeleton loaders inherit the layout from the main (content) styles and create a solid box that replaces the content by filling out the empty space left in the layout. We’re also applying these classes to non-empty elements (a button with text) and replacing them with a skeleton. A button might have its text content ready from the start, but it might be missing additional data that is required for it to function correctly, so we should also hide it while that data is loaded in.

This approach can also adapt to most changes in the layout and markup. For example, if we were to remove the description part of the card or decide to move the title above the image, we wouldn’t need to make any changes to the skeleton styles, since skeleton responds to all changes in the markup.

Additional skeleton loading override styles can be applied to a specific element simply by using the .loading .target-element selector.

.loading .button,
.loading .link {
  pointer-events: none;
}

Multi-line content and layout shifts

As you can see, the previous example works great with cards and the grid layout we’re using, but notice that the page content slightly jumps the moment it is loaded. This is called a layout shift. Our .card-description component has a fixed height with three lines of text, but the skeleton placeholder spans only one line of text. When the extra content is loaded, the container dimensions change and the overall layout is shifted as a result. Layout shift is not bad in this particular case, but might confuse and disorient the user in more severe cases.

This can be easily fixed directly in the placeholder element. Placeholder content is going to get replaced by the content that is being loaded anyway, so we can add anything we need inside it. So, let’s add a few <br /> elements to simulate multiple lines of text.

<div class="card loading">
  <img width="200" height="200" class="card-image loading-item" src="..." />
  <h3 class="card-title loading-item"></h3>
  <p class="card-description loading-item"><br/><br/><br/></p>
  <button class="card-button loading-item">Card button</button>
</div>

We’re using basic HTML to shape the skeleton and change the number of lines inside it. Other examples on the web might achieve this using CSS padding or some other way, but this introduces overhead in the code. After all, content can span any number of lines and we would want to cover all those cases.

As an added benefit of using <br /> elements, they inherit the CSS properties that affect the content dimensions (e.g. the line height, font size, etc.). Similarly, &nbsp; characters can be used to add additional spacing to inline placeholder elements.

CodePen Embed Fallback

With a few lines of CSS, we’ve managed to create versatile and extensible skeleton loader styles that can be applied to a wide range of UI components. We’ve also managed to come up with a simple way of vertically extending the skeleton boxes to simulate content that spans multiple lines of text.

To further showcase how versatile this skeleton loader CSS snippet is, I’ve created a simple example where I’ve added the snippet to a page using Bootstrap CSS framework without any additional changes or overrides. Please note that in this example no text content will be displayed or simulated, but it will work as in previous examples. This is just to showcase how styles can be easily integrated with other CSS systems.

CodePen Embed Fallback

Here is an additional example to showcase how these styles can be applied to various elements, including input, label and a elements.

CodePen Embed Fallback

Accessibility requirements

We should also take accessibility (a11y) requirements into account and make sure that the content is accessible to all users. Skeleton loaders without a11y features might disorientate and confuse users that have visual disabilities or browse the web using screen readers.

Contrast

You might have noticed that the skeleton loaders in our example have a high contrast and they look more prominent compared to the common low-contrast skeleton loaders in the wild. Some users might experience difficulties perceiving and using low-contrast UI components. That is why Web Content Accessibility Guidelines (WCAG) specify a 3:1 minimum contrast for non-text UI components.

The upcoming “Media queries level 5” draft contains a prefers-contrast media query that will enable us to detect user contrast preferences. This will give us more flexibility by allowing us to assign a high-contrast background color to skeleton loaders for users that request a high-contrast version, and have a subtle low-contrast background color for others. I would suggest implementing high-contrast skeleton loaders by default until the prefers-contrast media query becomes more widely supported.

/* NOTE: as of the time of writing this article, this feature is not supported in browsers, so this code won't work */
.loading .loading-item {
  /* Default skeleton loader styles */
}

@media (prefers-contrast: high) {
  .loading .loading-item {
    /* High-contrast skeleton loader styles */
  }
}

Animations

Depending on the design and the implementation of animated skeleton loaders, users suffering from visual disorders might feel overwhelmed by the animations and find the site unusable. It’s always a good idea to prevent animations from firing for users that prefer reduced motion. This media query is widely-supported in modern browsers and can be used without any caveats.

.loading .loading-item {
  animation-name: skeleton;
  background: /* animated gradient background */;
}

@media (prefers-reduced-motion) {
  .loading .loading-item {
    animation: none !important;
    background: /* solid color */;
  }
}

Screen readers

To better support screen readers, we need to update our HTML with ARIA (Accessible Rich Internet Applications) markup. This markup won’t affect our content or presentation, but it will allow users using screen readers to better understand and navigate around our website content, including our skeleton loaders.

Adrian Roselli has very detailed research on the topic of accessible skeleton loaders for cases when skeleton loaders are implemented as separate UI components. For our example, I’ll use the aria-hidden attribute in combination with visually hidden text to give screen readers a hint that content is in the process of loading. Screen readers will ignore the content with aria-hidden="true", but they’ll use the visually-hidden element to indicate the loading state to the user.
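The visually-hidden utility class used below isn’t defined in this article’s snippets; a common implementation looks something like this:

.visually-hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  margin: -1px;
  padding: 0;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}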

Let’s update our cards with the ARIA markup and loading indicator element.

<div class="card loading"> <span aria-hidden="false" class="visually-hidden loading-text">Loading... Please wait.</span> <img width="200" height="200" class="card-image loading-item" aria-hidden="true" src="..." /> <h3 class="card-title loading-item" aria-hidden="true"></h3> <p class="card-description loading-item" aria-hidden="true"><br/><br/><br/></p> <button class="card-button loading-item" aria-hidden="true">Card button</button> </div>

We could also have applied aria-hidden to the grid container element and added a single visually hidden element before the container markup, but I wanted to keep the markup examples focused on a single card element rather than the full grid, so I went with this version.

When the content has finished loading and is rendered in the DOM, we need to toggle aria-hidden to false on the content containers and toggle aria-hidden to true on the visually hidden loading text indicator.
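In JavaScript, that toggle could look something like this. It’s just a sketch: loadContent() is a hypothetical stand-in for however your app fetches and renders the real data, and the selectors assume the card markup above.

// A sketch of the aria-hidden toggle. Assumes the card markup above and a
// hypothetical loadContent() that resolves once the real content is in the DOM.
const card = document.querySelector(".card.loading");

loadContent().then(() => {
  // Expose the real content to screen readers...
  card.querySelectorAll(".loading-item").forEach((el) => {
    el.setAttribute("aria-hidden", "false");
  });

  // ...and hide the "Loading... Please wait." announcement.
  card.querySelector(".loading-text").setAttribute("aria-hidden", "true");

  // Removing the loading class also removes the skeleton styles.
  card.classList.remove("loading");
});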

Here’s the finished example:

CodePen Embed Fallback

That’s a wrap

Implementing skeleton loaders requires a slightly different approach than implementing regular loading elements, like spinners. I’ve seen numerous examples around the web that implement skeleton loaders in ways that severely limit their reusability. These over-engineered solutions usually involve creating separate skeleton loader UI components with dedicated, narrow-scope skeleton CSS, or recreating the layout with CSS gradients and magic numbers. As we’ve seen, only the content needs to be replaced with skeleton loaders, not the entire component.

We’ve managed to create simple, versatile, and reusable skeleton loaders that inherit the layout from the default styles and replace the content inside the empty containers with solid boxes. With just two CSS classes, these skeleton loaders can easily be added to virtually any HTML element and extended, if needed. We’ve also made sure that this solution is accessible and doesn’t bloat the markup with additional HTML elements or duplicated UI components.

Thank you for taking the time to read this article. Let me know your thoughts on this approach, and tell me how you’ve approached skeleton loaders in your own projects.

The post A Bare-Bones Approach to Versatile and Reusable Skeleton Loaders appeared first on CSS-Tricks.


React Without Build Tools

Css Tricks - Wed, 03/03/2021 - 3:42am

Jim Nielsen:

I think you’ll find it quite refreshing to use React A) with a JSX-like syntax, and B) without any kind of build tooling.

Refreshing indeed:

CodePen Embed Fallback

It’s not really the React that’s the hard part to pull off without build tools (although I do wonder what we lose from not tree shaking); it’s the JSX. I’m so used to JSX that I think it would be hard for me to work on a front-end JavaScript project without it. But I know some people literally prefer a render function instead. If that’s the case, you could call React.createElement directly and skip the JSX, or, in the case of Preact, use h:

CodePen Embed Fallback
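If you’re curious what that looks like, here’s a minimal sketch of the createElement approach. It assumes React and ReactDOM are loaded globally from CDN <script> tags, so there’s no build step and no JSX.

// A sketch of JSX-free React. Assumes React and ReactDOM globals from CDN
// <script> tags; no compiler involved.
const e = React.createElement;

function Counter() {
  const [count, setCount] = React.useState(0);

  // e(type, props, ...children) is exactly what JSX compiles down to.
  return e(
    "button",
    { onClick: () => setCount(count + 1) },
    "Clicked ",
    count,
    " times"
  );
}

ReactDOM.render(e(Counter), document.getElementById("root"));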

I work on a project that uses Mithril for the JavaScript templating, which is a bit like that. It’s not my favorite syntax, but you totally get used to it (and it’s fast):

CodePen Embed Fallback
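For a taste of that syntax, here’s a minimal sketch, assuming Mithril is loaded globally as m (say, from a CDN <script> tag):

// A sketch of a Mithril component. m(selector, attrs, children) builds
// vnodes, much like h() or React.createElement.
const Hello = {
  view: () =>
    m("main.hello", [
      m("h1", "Hello from Mithril"),
      m("button", { onclick: () => console.log("clicked") }, "Click me"),
    ]),
};

// Mount the component into the page.
m.mount(document.getElementById("root"), Hello);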

Direct Link to Article

The post React Without Build Tools appeared first on CSS-Tricks.


How to Animate the Details Element

Css Tricks - Tue, 03/02/2021 - 10:58am

Here’s a nice simple demo from Moritz Gießmann on animating the triangle of a <details> element, which is the affordance that tells people this thing can be opened. Animating it, then, is another kind of affordance, one that tells people this thing is opening now.

The tricks?

  1. Turn off the default triangle: details summary::-webkit-details-marker { display:none; }. You can’t animate that one.
  2. Make a replacement triangle with the CSS border trick and a pseudo element.
  3. Animate the new triangle when the state is open: details[open] > summary::before { transform: rotate(90deg); }.
CodePen Embed Fallback

This only animates the triangle. The content inside still “snaps” open. Wanna smooth things out? Louis Hoebregts’ “How to Animate the Details Element Using WAAPI” covers that.
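The rough idea, sketched below, is to listen for the toggle event and animate the panel with Element.animate(). This isn’t Louis’s exact code, and it assumes the content inside the <details> is wrapped in a .content element:

// A sketch of the WAAPI approach: when the <details> opens, animate the
// content wrapper from collapsed to its natural height.
const details = document.querySelector("details");
const content = details.querySelector(".content");

details.addEventListener("toggle", () => {
  if (!details.open) return;

  // By the time toggle fires with open === true, the content is rendered,
  // so offsetHeight gives us its natural height.
  content.animate(
    [
      { height: "0px", opacity: 0, overflow: "hidden" },
      { height: `${content.offsetHeight}px`, opacity: 1, overflow: "hidden" },
    ],
    { duration: 250, easing: "ease-out" }
  );
});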

CodePen Embed Fallback

Here’s a fork where I’ll combine them just because:

CodePen Embed Fallback

I see Moritz put cursor: pointer; on the summary as well, as Greg Gibson suggests.

The post How to Animate the Details Element appeared first on CSS-Tricks.

