Front End Web Development

Better Line Breaks for Long URLs

Css Tricks - Tue, 03/16/2021 - 4:56am

CSS-Tricks has covered how to break text that overflows its container before, but not as much as you might think. Back in 2012, Chris penned “Handling Long Words and URLs (Forcing Breaks, Hyphenation, Ellipsis, etc)” and it is still one of only a few posts on the topic, including his 2018 follow-up, “Where Lines Break is Complicated.”

Chris’s tried-and-true technique works well when you want to leverage automated word breaks and hyphenation rules that are baked into the browser:

.dont-break-out {
  /* These are technically the same, but use both */
  overflow-wrap: break-word;
  word-wrap: break-word;
  word-break: break-word;

  /* Adds a hyphen where the word breaks, if supported (No Blink) */
  hyphens: auto;
}

But what if you can’t? What if your style guide requires you to break URLs in certain places? These classic sledgehammers are too imprecise for that level of control. We need a different way to tell the browser exactly where to make a break.

Why we need to care about line breaks in URLs

One reason is design. A URL that overflows its container is just plain gross to look at.

Then there are copywriting standards. The Chicago Manual of Style, for example, specifies when to break URLs in print. Then again, Chicago gives us a pass for electronic documents… sorta:

It is generally unnecessary to specify breaks for URLs in electronic publications formats with reflowable text, and authors should avoid forcing them to break in their manuscripts.

Chicago 17th ed., 14.18

But what if, as Rachel Andrew (2015) encourages us, you’re designing for print, not just screens? Suddenly, “generally unnecessary” becomes “absolutely imperative.” Whether you’re publishing a book, or you want to create a PDF version of a research paper you wrote in HTML, or you’re designing an online CV, or you have a reference list at the end of your blog post, or you simply care how URLs look in your project—you’d want a way to manage line breaks with a greater degree of control.

OK, so we’ve established why considering line breaks in URLs is a thing, and that there are use cases where they’re actually super important. But that leads us to another key question…

Where are line breaks supposed to go, then?

We want URLs to be readable. We also don’t want them to be ugly, at least no uglier than necessary. Continuing with Chicago’s advice, we should break long URLs based on punctuation, to help signal to the reader that the URL continues on the next line. That would include any of the following places:

  • After a colon or a double slash (//)
  • Before a single slash (/), a tilde (~), a period, a comma, a hyphen, an underline (aka an underscore, _), a question mark, a number sign, or a percent symbol
  • Before or after an equals sign or an ampersand (&)

At the same time, we don’t want to inject new punctuation, like when we might reach for hyphens: auto; rules in CSS to break up long words. Soft or “shy” hyphens are great for breaking words, but bad news for URLs. It’s not as big a deal on screens, since soft hyphens don’t interfere with copy-and-paste, for example. But a user could still mistake a soft hyphen as part of the URL—hyphens are often in URLs, after all. So we definitely don’t want hyphens in print that aren’t actually part of the URL. Reading long URLs is already hard enough without breaking words inside them.

We still can break particularly long words and strings within URLs. Just not with hyphens. For the most part, Chicago leaves word breaks inside URLs to discretion. Our primary goal is to break URLs before and after the appropriate punctuation marks.

How do you control line breaks?

Fortunately, there’s an (under-appreciated) HTML element for this express purpose: the <wbr> element, which represents a line break opportunity. It’s a way to tell the browser, Please break the line here if you need to, not just any-old place.

We can take a gnarly URL, like the one Chris first shared in his 2012 post:

http://www.amazon.com/s/ref=sr_nr_i_o?rh=k%3Ashark+vacuum%2Ci%3Agarden&keywords=shark+vacuum&ie=UTF8&qid=1327784979

And sprinkle in some <wbr> tags, “Chicago style”:

http:<wbr>//<wbr>www<wbr>.<wbr>amazon<wbr>.com<wbr>/<wbr>s/<wbr>ref<wbr>=<wbr>sr<wbr>_<wbr>nr<wbr>_<wbr>i<wbr>_o<wbr>?rh<wbr>=<wbr>k<wbr>%3Ashark<wbr>+vacuum<wbr>%2Ci<wbr>%3Agarden<wbr>&<wbr>keywords<wbr>=<wbr>shark+vacuum<wbr>&ie<wbr>=<wbr>UTF8<wbr>&<wbr>qid<wbr>=<wbr>1327784979

Even if you’re the most masochistic typesetter ever born, you’d probably mark up a URL like that exactly zero times before you’d start wondering if there’s a way to automate those line break opportunities.

Yes, yes there is. Cue JavaScript and some aptly placed regular expressions:

/**
 * Insert line break opportunities into a URL
 */
function formatUrl(url) {
  // Split the URL into an array to distinguish double slashes from single slashes
  var doubleSlash = url.split('//')

  // Format the strings on either side of double slashes separately
  var formatted = doubleSlash.map(str =>
    // Insert a word break opportunity after a colon
    str.replace(/(?<after>:)/giu, '$1<wbr>')
      // Before a single slash, tilde, period, comma, hyphen, underline, question mark, number sign, or percent symbol
      .replace(/(?<before>[/~.,\-_?#%])/giu, '<wbr>$1')
      // Before and after an equals sign or ampersand
      .replace(/(?<beforeAndAfter>[=&])/giu, '<wbr>$1<wbr>')
    // Reconnect the strings with word break opportunities after double slashes
  ).join('//<wbr>')

  return formatted
}

Try it out

Go ahead and open the following demo in a new window, then try resizing the browser to see how the long URLs break.


This does exactly what we want:

  • The URLs break at appropriate spots.
  • There is no additional punctuation that could be confused as part of the URL.
  • The <wbr> tags are auto-generated to relieve us from inserting them manually in the markup.

This JavaScript solution works even better if you’re leveraging a static site generator. That way, you don’t have to run a script on the client just to format URLs. I’ve got a working example on my personal site built with Eleventy.
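
For instance, the same function could be registered as a template filter. Here’s a minimal sketch assuming an Eleventy setup; the filter name and the require path are made up for illustration:

// .eleventy.js
const formatUrl = require("./src/utils/format-url.js");

module.exports = function (eleventyConfig) {
  // In a Nunjucks template: {{ reference.url | formatUrl | safe }}
  eleventyConfig.addFilter("formatUrl", formatUrl);
};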

If you really want to break long words inside URLs too, then I’d recommend inserting those few <wbr> tags by hand. The Chicago Manual of Style has a whole section on word division (7.36–47, login required).

Browser support

The <wbr> element has been seen in the wild since 2001. It was finally standardized with HTML5, so it works in nearly every browser at this point. Strangely enough, <wbr> worked in Internet Explorer (IE) 6 and 7, but support was dropped in IE 8 and onward. Support has always existed in Edge, so it’s just a matter of dealing with IE or other legacy browsers. Some popular HTML-to-PDF programs, like Prince, also need a boost to handle <wbr>.

One more possible solution

There’s one more trick to optimize line break opportunities. We can use a pseudo-element to insert a zero width space, which is how the <wbr> element is meant to behave in UTF-8 encoded pages anyhow. That’ll at least push support back to IE 9, and perhaps more importantly, work with Prince.

/**
 * IE 8–11 and Prince don’t recognize the `wbr` element,
 * but a pseudo-element can achieve the same effect with IE 9+ and Prince.
 */
wbr:before {
  /* Unicode zero width space */
  content: "\200B";
  white-space: normal;
}

Striving for print-quality HTML, CSS, and JavaScript is hardly new, but it is undergoing a bit of a renaissance. Even if you don’t design for print or follow Chicago style, it’s still a worthwhile goal to write your HTML and CSS with URLs and line breaks in mind.


The Gang Goes on JS Danger

Css Tricks - Mon, 03/15/2021 - 8:45am

The JS Party podcast sometimes hosts game shows. One of them is Jeopardy-esque, called JS Danger, and some of us here from CSS-Tricks got to be the guests this past week! The YouTube video of it kicks off at about 5:56.

While I’m at it…

Here are some more videos I’ve enjoyed recently.

Past episodes of JS Danger are of course just as fun to watch!

Kevin Powell takes on a classic CSS confusion… what’s the difference between width: auto; and width: 100%;?

More like automatically distribute the space amiright?

Jeremy Keith on Design Principles For The Web:

John Allsopp with A History of the Web in 100 Pages. The intro video is three pages, and not embeddable presumably because the context of the blog post is important… this is just a prototype to hopefully complete the whole project!

And then recently while listening to A History of the World in 100 objects, it occurred to me that that model might well work for telling the story of the Web–A History of the Web told through 100 Pages. By telling the story of 100 influential pages in the Web’s history (out of the now half a trillion pages archived by the wayback machine), might we tell a meaningful history of the Web?


Creating Patterns With SVG Filters

Css Tricks - Mon, 03/15/2021 - 5:04am

For years, my pain has been not being able to create a somewhat natural-looking pattern in CSS. I mean, sometimes all I need is a wood texture. The only production-friendly solution I knew of was to use an external image, but external images are an additional dependency and they introduce a new complexity.

I know now that a good portion of these problems could be solved with a few lines of SVG.

Tl;dr: Take me to the gallery!

There is a Filter Primitive in SVG called <feTurbulence>. It’s special in the sense that it doesn’t need any input image — the filter primitive itself generates an image. It produces so-called Perlin noise which is a type of noise gradient. Perlin noise is heavily used in computer generated graphics for creating all sorts of textures. <feTurbulence> comes with options to create multiple types of noise textures and millions of variations per type.

All of these are generated with <feTurbulence>.

“So what?” you might ask. These textures are certainly noisy, but they also contain hidden patterns which we can uncover by pairing them with other filters! That’s what we’re about to jump into.

Creating SVG filters

A custom filter typically consists of multiple filter primitives chained together to achieve a desired outcome. In SVG, we can describe these in a declarative way with the <filter> element and a number of <fe{PrimitiveName}> elements. A declared filter can then be applied on a renderable element — like <rect>, <circle>, <path>, <text>, etc. — by referencing the filter’s id. The following snippet shows an empty filter identified as coolEffect, and applied on a full width and height <rect>.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <!-- Filter primitives will be written here -->
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

SVG offers more than a dozen different filter primitives, but let’s start with a relatively easy one, <feFlood>. It does exactly what it says: it floods a target area. (This also doesn’t need an input image.) The target area is technically the filter primitive’s sub-region within the <filter> region.

Both the filter region and the filter primitive’s sub-region can be customized. However, we will use the defaults throughout this article, which is practically the full area of our rectangle. The following snippet makes our rectangle red and semi-transparent by setting the flood-color (red) and flood-opacity (0.5) attributes.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feFlood flood-color="red" flood-opacity="0.5"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

A semi-transparent red rectangle looks light red on a white background and dark red on a black background because the opacity is set to 0.5.

Now let’s look at the <feBlend> primitive. It’s used for blending multiple inputs. One of our inputs can be SourceGraphic, a keyword that represents the original graphic on which the filter is applied.

Our original graphic is a black rectangle — that’s because we haven’t specified the fill on the <rect> and the default fill color is black. Our other input is the result of the <feFlood> primitive. As you can see below we’ve added the result attribute to <feFlood> to name its output. We are referencing this output in <feBlend> with the in attribute, and the SourceGraphic with the in2 attribute.

The default blend mode is normal and the input order matters. I would describe our blending operation as putting the semi-transparent red rectangle on top of the black rectangle.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feFlood flood-color="red" flood-opacity="0.5" result="flood"/>
    <feBlend in="flood" in2="SourceGraphic"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

Now, our rectangle is dark red, no matter what color the background is behind it. That’s because we stacked our semi-transparent red <feFlood> on top of the black <rect> and blended the two together with <feBlend>, chaining the result of <feFlood> into <feBlend>.

Chaining filter primitives is a pretty frequent operation and, luckily, it has useful defaults that have been standardized. In our example above, we could have omitted the result attribute in <feFlood> as well as the in attribute in <feBlend>, because any subsequent filter will use the result of the previous filter as its input. We will use this shortcut quite often throughout this article.

Generating random patterns with feTurbulence

<feTurbulence> has a few attributes that determine the noise pattern it produces. Let’s walk through these, one by one.

baseFrequency

This is the most important attribute because it is required in order to create a pattern. It accepts one or two numeric values. Specifying two numbers defines the frequency along the x- and y-axis, respectively. If only one number is provided, then it defines the frequency along both axes. A reasonable interval for the values is between 0.001 and 1, where a low value results in large “features” and a high value results in smaller “features.” The greater the difference between the x and y frequencies, the more “stretched” the pattern becomes.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feTurbulence baseFrequency="0.001 1"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

The baseFrequency values in the top row, from left to right: 0.01, 0.1, 1. Bottom row: 0.01 0.1, 0.1 0.01, 0.001 1.

type

The type attribute takes one of two values: turbulence (the default) or fractalNoise, which is what I typically use. fractalNoise produces the same kind of pattern across the red, green, blue and alpha (RGBA) channels, whereas turbulence in the alpha channel is different from those in RGB. I find it tough to describe the difference, but it’s much easier to see when comparing the visual results.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feTurbulence baseFrequency="0.1" type="fractalNoise"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

The turbulence type (left) compared to the fractalNoise type (right)

numOctaves

The concept of octaves might be familiar to you from music or physics. A high octave doubles the frequency. And, for <feTurbulence> in SVG, the numOctaves attribute defines the number of octaves to render over the baseFrequency.

The default numOctaves value is 1, which means it renders noise at the base frequency. Any additional octave doubles the frequency and halves the amplitude. The higher this number goes, the less visible its effect will be. Also, more octaves mean more calculation, possibly hurting performance. I typically use values between 1-5 and only use it to refine a pattern.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feTurbulence baseFrequency="0.1" type="fractalNoise" numOctaves="2"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

numOctaves values compared: 1 (left), 2 (center), and 5 (right)

seed

The seed attribute creates different instances of noise; it serves as the starting number for the noise generator, which produces pseudo-random numbers under the hood. If a different seed value is defined, a different instance of noise appears, but with the same qualities. Its default value is 0, and positive integers are interpreted as-is (although 0 and 1 are considered to be the same seed). Floats are truncated.

This attribute is best for adding a unique touch to a pattern. For example, a random seed can be generated on a visit to a page so that every visitor will get a slightly different pattern. A practical interval for generating random seeds is from 0 to 9999999 due to some technical details and single precision floats. But still, that’s 10 million different instances, which hopefully covers most cases.
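
As a rough sketch (the element lookup and filter id are assumptions, not part of the demos in this post), randomizing the seed on page load could look like this:

// Give this visitor their own variation of the pattern
const turbulence = document.querySelector("#coolEffect feTurbulence");
const randomSeed = Math.floor(Math.random() * 10000000); // 0–9999999
turbulence.setAttribute("seed", randomSeed);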

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="coolEffect">
    <feTurbulence baseFrequency="0.1" type="fractalNoise" numOctaves="2" seed="7329663"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#coolEffect)"/>
</svg>

seed values compared: 1 (left), 2 (center), and 7329663 (right)

stitchTiles

We can tile a pattern the same sort of way we can use background-repeat: repeat in CSS! All we need is the stitchTiles attribute, which accepts one of two keyword values: noStitch and stitch, where noStitch is the default value. stitch repeats the pattern seamlessly along both axes.

Comparing noStitch (top) to stitch (bottom)

Note that <feTurbulence> also produces noise in the Alpha channel, meaning the images are semi-transparent, rather than fully opaque.

Patterns Gallery

Let’s look at a bunch of awesome patterns made with SVG filters and figure out how they work!

Starry Sky

This pattern consists of two chained filter effects on a full width and height rectangle. <feTurbulence> is the first filter, responsible for generating noise. <feColorMatrix> is the second filter effect, and it alters the input image, pixel by pixel. We can tell specifically what each output channel value should be based on a constant and all the input channel values within a pixel. The formula per channel looks like this:

out = w1·R + w2·G + w3·B + w4·A + w5

  • out is the output channel value
  • R, G, B and A are the input channel values
  • w1 through w5 are the weights (w5 being a constant offset)

So, for example, we can write a formula for the Red channel that only considers the Green channel by setting w2 (the Green weight) to 1, and setting the other weights to 0. We can write similar formulas for the Green and Blue channels that only consider the Blue and Red channels, respectively. For the Alpha channel, we can set w5 (the constant) to 1 and the other weights to 0 to create a fully opaque image. These four formulas perform a hue rotation.

The formulas can also be written as matrix multiplication, which is the origin of the name <feColorMatrix>. Though <feColorMatrix> can be used without understanding matrix operations, we need to keep in mind that our 4×5 matrix holds the 4×5 weights of the four formulas.

  • The value in the first row, first column is the weight of the Red channel’s contribution to the Red channel.
  • The value in the second row, first column is the weight of the Red channel’s contribution to the Green channel.
  • The value in the first row, second column is the weight of the Green channel’s contribution to the Red channel.
  • The value in the second row, second column is the weight of the Green channel’s contribution to the Green channel.
  • The descriptions of the remaining 16 weights are omitted for the sake of brevity.

The hue rotation mentioned above is written like this:

0 1 0 0 0
0 0 1 0 0
1 0 0 0 0
0 0 0 0 1

It’s important to note that the RGBA values are floats ranging from 0 to 1, inclusive (rather than integers ranging from 0 to 255 as you might expect). The weights can be any float, although at the end of the calculations any result below 0 is clamped to 0, and anything above 1 is clamped to 1. The starry sky pattern relies on this clamping, since its matrix is this:

0 0 0 9 -4
0 0 0 9 -4
0 0 0 9 -4
0 0 0 0 1

The transfer function described by <feColorMatrix> for the R, G, and B channels. The input is always Alpha.

We are using the same formula for the RGB channels which means we are producing a grayscale image. The formula is multiplying the value from the Alpha channel by nine, then removing four from it. Remember, even Alpha values vary in the output of <feTurbulence>. Most resulting values will not be within the 0 to 1 range; thus they will be clamped. So, our image is mostly either black or white — black being the sky, and white being the brightest stars; the remaining few in-between values are dim stars. We are setting the Alpha channel to a constant of 1 in the fourth row, meaning the image is fully opaque.
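
To make the clamping concrete, here’s a small sketch of the arithmetic one row of that matrix performs on a single alpha sample (just an illustration, not how browsers implement it):

// out = 9 * A - 4, clamped to the 0–1 range
const clamp = (value) => Math.min(1, Math.max(0, value));
const starrySkyChannel = (alpha) => clamp(9 * alpha - 4);

starrySkyChannel(0.2); // 0   -> black sky
starrySkyChannel(0.5); // 0.5 -> a dim star
starrySkyChannel(0.6); // 1   -> a bright star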

Pine Wood

This code is not much different from what we just saw in Starry Sky. It’s really just some noise generation and color matrix transformation. A typical wooden pattern has features that are longer in one dimension than the other. To mimic this effect, we are creating “stretched” noise with <feTurbulence> by setting baseFrequency="0.1 0.01". Furthermore, we are setting type="fractalNoise".

With <feColorMatrix>, we are simply recoloring our longer pattern. And, once again, the Alpha channel is used as an input for variance. This time, however, we are offsetting the RGB channels by constant weights that are greater than the weights applied on the Alpha input. This ensures that all the pixels of the image remain within a certain color range. Finding the best color range requires a little bit of playing around with the values.

Mapping Alpha values to colors with <feColorMatrix>

While it’s extremely subtle, the second bar is a gradient.

It’s essential to understand that the matrix operates in the linearized RGB color space by default. The color purple (#800080), for example, is represented by the values 0.216, 0, and 0.216 (approximately). It might look odd at first, but there’s a good reason for using linearized RGB for some transformations. This article provides a good answer for the why, and this article is great for diving into the how.

At the end of the day, all it means is that we need to convert our usual #RRGGBB values to the linearized RGB space. I used this color space tool to do that. Input the RGB values in the first line, then use the values from the third line. In the case of our purple example, we would input 128, 0, 128 in the first line and hit the sRGB8 button to get the linearized values in the third line.
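
If you’d rather compute the conversion yourself, here’s a minimal sketch of the standard sRGB-to-linear transfer function (the function name is made up):

// Convert an 8-bit sRGB channel (0–255) to its linearized value (0–1)
function linearize(channel) {
  const c = channel / 255;
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

[128, 0, 128].map(linearize); // #800080 -> approximately [0.216, 0, 0.216]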

If we pick our values right and perform the conversions correctly, we end up with something that resembles the colors of pine wood.

Dalmatian Spots

This example spices things up a bit by introducing the <feComponentTransfer> filter. This effect allows us to define custom transfer functions per color channel (also known as color component). We’re only defining one custom transfer function in this demo, for the Alpha channel, and leave the other channels undefined (which means the identity function is applied to them). We use the discrete type to set a step function. The steps are described by space-separated numeric values in the tableValues attribute, which controls the number of steps and the height of each step.

Let’s consider examples where we play around with the tableValues value. Our goal is to create a “spotty” pattern out of the noise. Here’s what we know:

  • tableValues="1" transfers each value to 1.
  • tableValues="0" transfers each value to 0.
  • tableValues="0 1" transfers values below 0.5 to 0 and values of 0.5 and above to 1.
  • tableValues="1 0" transfers values below 0.5 to 1 and values of 0.5 and above to 0.
Three simple step functions. The third (right) shows what is used in Dalmatian Spots.

It’s worth playing around with this attribute to better understand its capabilities and the quality of our noise. After some experimenting we arrive at tableValues="0 1 0" which translates mid-range values to 1 and others to 0.
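
If it helps to see that mapping spelled out, here’s a simplified model (not a browser implementation) of how a discrete transfer function buckets an input value:

function discreteTransfer(tableValues, value) {
  const n = tableValues.length;
  // Each of the n values covers an equal slice of the 0–1 range;
  // an input of exactly 1 falls into the last slice.
  const step = Math.min(Math.floor(value * n), n - 1);
  return tableValues[step];
}

discreteTransfer([0, 1, 0], 0.2); // 0
discreteTransfer([0, 1, 0], 0.5); // 1 (mid-range values become a spot)
discreteTransfer([0, 1, 0], 0.9); // 0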

The last filter effect in this example is <feColorMatrix> which is used to recolor the pattern. Specifically, it makes the transparent parts (Alpha = 0) black and the opaque parts (Alpha = 1) white.

Finally, we fine-tune the pattern with <feTurbulence>. Setting numOctaves="2" helps make the spots a little more “jagged” and reduces elongated spots. The baseFrequency="0.06" basically sets a zoom level which I think is best for this pattern.

ERDL Camouflage

The ERDL pattern was developed for disguising military personnel, equipment, and installations. In recent decades, it has found its way into clothing. The pattern consists of four colors: a darker green for the backdrop, brown for the shapes, a yellowish-green for patches, and black sprinkled in as little blobs.

Similarly to the Dalmatian Spots example we looked at, we are chaining <feComponentTransfer> to the noise — although this time the discrete functions are defined for the RGB channels.

Imagine that the RGBA channels are four layers of the image. We create blobs in three layers by defining single-step functions. The step starts at different positions for each function, producing a different number of blobs on each layer. The cuts for Red, Green and Blue are 66.67%, 60% and 50%, respectively.

<feFuncR type="discrete" tableValues="0 0 0 0 1 1"/>
<feFuncG type="discrete" tableValues="0 0 0 1 1"/>
<feFuncB type="discrete" tableValues="0 1"/>

At this point, the blobs on each layer overlap in some places, resulting in colors we don’t want. These other colors make it more difficult to transform our pattern into an ERDL camouflage, so let’s eliminate them:

  • For Red, we define the identity function.
  • For Green, our starting point is the identity function but we subtract the Red from it.
  • For Blue, our starting point is the identity function as well, but we subtract the Red and Green from it.

These rules mean Red remains where Red and Green and/or Blue once overlapped; Green remains where Green and Blue overlapped. The resulting image contains four types of pixels: Red, Green, Blue, or Black.

The second chained <feColorMatrix> recolors everything:

  • The black parts are made dark green with the constant weights.
  • The red parts are made black by negating the constant weights.
  • The green parts are made that yellow-green color by the additional weights from the Green channel.
  • The blue parts are made brown by the additional weights from the Blue channel.
Island Group CodePen Embed Fallback

This example is basically a heightmap. It’s pretty easy to produce a realistic looking heightmap with <feTurbulence> — we only need to focus on one color channel and we already have it. Let’s focus on the Red channel. With the help of a <feColorMatrix>, we turn the colorful noise into a grayscale heightmap by overwriting the Green and Blue channels with the value of the Red channel.

Now we can rely on the same value for each color channel per pixel. This makes it easy to recolor our image level-by-level with the help of <feComponentTransfer>, although, this time, we use a table type of function. table is similar to discrete, but each step is a ramp to the next step. This allows for a much smoother transition between levels.

The RGB transfer functions defined in <feComponentTransfer>

The number of tableValues determines how many ramps are in the transfer function. We have to consider two things to find the optimal number of ramps. One is the distribution of intensity in the image. The different intensities are unevenly distributed, although the ramp widths are always equal. The other thing to consider is the number of levels we would like to see. And, of course, we also need to remember that we are in linearized RGB space. We could get into the math of all this, but it’s much easier to just play around and feel out the right values.

Mapping grayscale to colors with <feComponentTransfer>

We use deep blue and aqua color values from the lowest intensities to somewhere in the middle to represent the water. Then, we use a few flavors of yellow for the sandy parts. Finally, green and dark green at the highest intensities create the forest.

We haven’t seen the seed attribute in any of these examples, but I invite you to try it out by adding it in there. Think of a random number between 1 and 10 million, then use that number as the seed attribute value in <feTurbulence>, like <feTurbulence seed="3761593" ... >.

Now you have your own variation of the pattern!

Production use

So far, what we’ve done is look at a bunch of cool SVG patterns and how they’re made. A lot of what we’ve seen is great proof-of-concept, but the real benefit is being able to use the patterns in production in a responsible way.

The way I see it, there are three fundamental paths to choose from.

Method 1: Using an inline data URI in CSS or HTML

My favorite way to use SVGs is to inline them, provided they are small enough. For me, “small enough” means a few kilobytes or less, but it really depends on the particular use case. The upside of inlining is that the image is guaranteed to be there in your CSS or HTML file, meaning there is no need to wait until it is downloaded.

The downside is having to encode the markup. Fortunately, there are some great tools made just for this purpose. Yoksel’s URL-encoder for SVG is one such tool that provides a copy-paste UI to generate the code. If you’re looking for a programmatic approach — like as part of a build process — I suggest looking into mini-svg-data-uri. I haven’t used it personally, but it seems quite popular.

Regardless of the approach, the encoded data URI goes right in your CSS or HTML (or even JavaScript). CSS is better because of its reusability, but HTML has minimal delivery time. If you are using some sort of server-side rendering technique, you can also slip a randomized seed value in <feTurbulence> within the data URI to show a unique variation for each user.
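
Here’s a rough, framework-agnostic sketch of that idea (the function name is made up; the encoded markup is the Starry Sky example below with a seed attribute added):

function randomStarrySkyUri() {
  const seed = Math.floor(Math.random() * 10000000);
  return "data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg'%3E" +
    `%3Cfilter id='filter'%3E%3CfeTurbulence baseFrequency='0.2' seed='${seed}'/%3E` +
    "%3CfeColorMatrix values='0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1'/%3E" +
    "%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23filter)'/%3E%3C/svg%3E";
}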

Here’s the Starry Sky example used as a background image with its inline data URI in CSS:

.your-selector {
  background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='filter'%3E%3CfeTurbulence baseFrequency='0.2'/%3E%3CfeColorMatrix values='0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23filter)'/%3E%3C/svg%3E%0A");
}

And this is how it looks used as an <img> in HTML:

<img src="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg'%3E%3Cfilter id='filter'%3E%3CfeTurbulence baseFrequency='0.2'/%3E%3CfeColorMatrix values='0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1'/%3E%3C/filter%3E%3Crect width='100%25' height='100%25' filter='url(%23filter)'/%3E%3C/svg%3E%0A"/>

Method 2: Using SVG markup in HTML

We can simply put the SVG code itself into HTML. Here’s Starry Sky’s markup, which can be dropped into an HTML file:

<div>
  <svg xmlns="http://www.w3.org/2000/svg">
    <filter id="filter">
      <feTurbulence baseFrequency="0.2"/>
      <feColorMatrix values="0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1"/>
    </filter>
    <rect width="100%" height="100%" filter="url(#filter)"/>
  </svg>
</div>

It’s a super simple approach, but carries a huge drawback — especially with the examples we’ve seen.

That drawback? ID collision.

Notice that the SVG markup uses a #filter ID. Imagine adding the other examples in the same HTML file. If they also use a #filter ID, then that would cause the IDs to collide where the first instance overrides the others.

Personally, I would only use this technique in hand-crafted pages where the scope is small enough to be aware of all the included SVGs and their IDs. There’s the option of generating unique IDs during a build, but that’s a whole other story.
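
For the curious, here’s a toy sketch of that idea (not from the original post, and a real build step would need to be more careful about matching):

let svgCounter = 0;

function withUniqueFilterId(svgMarkup) {
  const uniqueId = `filter-${svgCounter++}`;
  return svgMarkup
    .replaceAll('id="filter"', `id="${uniqueId}"`)
    .replaceAll('url(#filter)', `url(#${uniqueId})`);
}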

Method 3: Using a standalone SVG

This is the “classic” way to do SVG. In fact, it’s just like using any other image file. Drop the SVG file on a server, then use the URL in an HTML <img> tag, or somewhere in CSS like a background image.

So, going back to the Starry Sky example. Here’s the contents of the SVG file again, but this time the file itself goes on the server.

<svg xmlns="http://www.w3.org/2000/svg">
  <filter id="filter">
    <feTurbulence baseFrequency="0.2"/>
    <feColorMatrix values="0 0 0 9 -4 0 0 0 9 -4 0 0 0 9 -4 0 0 0 0 1"/>
  </filter>
  <rect width="100%" height="100%" filter="url(#filter)"/>
</svg>

Now we can use it in HTML, say, as an image:

<img src="https://example.com/starry-sky.svg"/>

And it’s just as convenient to use the URL in CSS, like a background image:

.your-selector {
  background-image: url("https://example.com/starry-sky.svg");
}

Considering today’s HTTP2 support and how relatively small SVG files are compared to raster images, this isn’t a bad solution at all. Alternatively, the file can be placed on a CDN for even better delivery. The benefit of having SVGs as separate files is that they can be cached in multiple layers.

Caveats

While I really enjoy crafting these little patterns, I also have to acknowledge some of their imperfections.

The most important imperfection is that they can pretty quickly create a computationally heavy “monster” filter chain. The individual filter effects are very similar to one-off operations in photo editing software. We are basically “photoshopping” with code, and every time the browser displays this sort of SVG, it has to render each operation. So, if you end up having a long filter chain, you might be better off capturing and serving your result as a JPEG or PNG to save users CPU time. Taylor Hunt’s “Improving SVG Runtime Performance” has a lot of other great tips for getting the most performance out of SVG.

Secondly, we’ve got to talk about browser support. Generally speaking, SVG is well-supported, especially in modern browsers. However, I came across one issue with Safari when working with these patterns. I tried creating a repeating circular pattern using <radialGradient> with spreadMethod="repeat". It worked well in Chrome and Firefox, but Safari wasn’t happy with it. Safari displays the radial gradient as if its spreadMethod was set to pad. You can verify it right in MDN’s documentation.

You may get different browsers rendering the same SVG differently. Considering all the complexities of rendering SVG, it’s pretty hard to achieve perfect consistency. That said, I have only found one difference between the browsers that’s worth mentioning and it’s when switching to “full screen” view. When Firefox goes full screen, it doesn’t render the SVG in the extended part of the viewport. Chrome and Safari are good though. You can verify it by opening this pen in your browser of choice, then going into full screen mode. It’s a rare edge-case that could probably be worked around with some JavaScript and the Fullscreen API.

Thanks!

Phew, that’s a wrap! We not only got to look at some cool patterns, but we learned a ton about <feTurbulence> in SVG, including the various filters it takes and how to manipulate them in interesting ways.

Like most things on the web, there are downsides and potential drawbacks to the concepts we covered together, but hopefully you now have a sense of what’s possible and what to watch for. You have the power to create some awesome patterns in SVG!


How to Use Tailwind on a Svelte Site

Css Tricks - Fri, 03/12/2021 - 8:53am

Let’s spin up a basic Svelte site and integrate Tailwind into it for styling. One advantage of working with Tailwind is that there isn’t any context switching going back and forth between HTML and CSS, since you’re applying styles as classes right on the HTML. It’s all in the same file in Svelte anyway, but still, this way you don’t even need a <style> section in your .svelte files.

If you are a Svelte developer or enthusiast, and you’d like to use Tailwind CSS in your Svelte app, this article looks at the easiest, most straightforward way to install Tailwind in your app and hit the ground running in creating a unique, modern UI for your app.

If you’d like to just see a working example, here’s a working GitHub repo.

Why Svelte?

Performance-wise, Svelte is widely considered to be one of the top JavaScript frameworks on the market right now. Created by Rich Harris in 2016, it has been growing rapidly and becoming popular in the developer community. This is mainly because, while very similar to React (and Vue), Svelte is much faster. When you create an app with React, the final code at build time is a mixture of React and vanilla JavaScript. But browsers only understand vanilla JavaScript. So when a user loads your app in a browser (at runtime), the browser has to download React’s library to help generate the app’s UI. This slows down the process of loading the app significantly.

How’s Svelte different? It comes with a compiler that compiles all your app code into vanilla JavaScript at build time. No Svelte code makes it into the final bundle. In this instance, when a user loads your app, their browser downloads only vanilla JavaScript files, which are lighter. No framework UI library is needed. This significantly speeds up the process of loading your app. For this reason, Svelte applications are usually very small and lightning fast.

The only downside Svelte currently faces is that, since it’s still new, it doesn’t have the kind of ecosystem and community backing that more established frameworks like React enjoy.

Why Tailwind?

Tailwind CSS is a CSS framework. It’s somewhat similar to popular frameworks, like Bootstrap and Materialize, in that you apply classes to elements and it styles them. But it is also atomic CSS in that one class name does one thing. While Tailwind does have Tailwind UI for pre-built componentry, generally you customize Tailwind to look how you want it to look, so there is less risk of “looking like a Bootstrap site” (or whatever other framework that is less commonly customized).

For example, rather than give you a generic header component that comes with some default font sizes, margins, paddings, and other styling, Tailwind provides you with utility classes for different font sizes, margins, and paddings. You can pick the specific ones you want and create a unique looking header with them.

Tailwind has other advantages as well:

  • It saves you the time and stress of writing custom CSS yourself. With Tailwind, you get thousands of out-of-the-box CSS classes that you just need to apply to your HTML elements.
  • One thing most users of Tailwind appreciate is the naming convention of the utility classes. The names are simple and they do a good job of telling you what their functions are. For example, text-sm gives your text a small font size. This is a breath of fresh air for people who struggle with naming custom CSS classes.
  • By utilizing a mobile-first approach, responsiveness is at the heart of Tailwind’s design. Making use of the sm, md, and lg prefixes to specify breakpoints, you can control the way styles are rendered across different screen sizes. For example, if you use the md prefix on a style, that style will only be applied to medium-sized screens and larger. Small screens will not be affected.
  • It prioritizes making your application lightweight by making PurgeCSS easy to set up in your app. PurgeCSS is a tool that runs through your application and optimizes it by removing all unused CSS classes, significantly reducing the size of your style file. We’ll use PurgeCSS in our practice project.

All this said, Tailwind might not be your cup of tea. Some people believe that adding lots of CSS classes to your HTML elements makes your HTML code difficult to read. Some developers even think it’s bad practice and makes your code ugly. It’s worth noting that this problem can easily be solved by abstracting many classes into one using the @apply directive, and applying that one class to your HTML, instead of the many.

Tailwind might also not be for you if you are someone who prefers ready-made components to avoid stress and save time, or you are working on a project with a short deadline.

Step 1: Scaffold a new Svelte site

Svelte provides us with a starter template we can use. You can get it by either cloning the Svelte GitHub repo, or by using degit. Using degit provides us with certain advantages, like helping us make a copy of the starter template repository without downloading its entire Git history (unlike git clone). This makes the process faster. Note that degit requires Node 8 and above.

Run the following command to clone the starter app template with degit:

npx degit sveltejs/template project-name

Navigate into the directory of the starter project so we can start making changes to it:

cd project-name

The template is mostly empty right now, so we’ll need to install some required npm packages:

npm install

Now that you have your Svelte app ready, you can proceed to combining it with Tailwind CSS to create a fast, light, unique web app.

Step 2: Adding Tailwind CSS

Let’s proceed to adding Tailwind CSS to our Svelte app, along with some dev dependencies that will help with its setup.

npm install tailwindcss@npm:@tailwindcss/postcss7-compat postcss@^7 autoprefixer@^9

# or

yarn add tailwindcss@npm:@tailwindcss/postcss7-compat postcss@^7 autoprefixer@^9

The three tools we are downloading with the command above:

  1. Tailwind
  2. PostCSS
  3. Autoprefixer

PostCSS is a tool that uses JavaScript to transform and improve CSS. It comes with a bunch of plugins that perform different functions like polyfilling future CSS features, highlighting errors in your CSS code, controlling the scope of CSS class names, etc.

Autoprefixer is a PostCSS plugin that goes through your code adding vendor prefixes to your CSS rules (Tailwind does not do this automatically), using caniuse as reference. While browsers are choosing to not use prefixing on CSS properties the way they had in years past, some older browsers still rely on them. Autoprefixer helps with that backwards compatibility, while also supporting future compatibility for browsers that might apply a prefix to a property prior to it becoming a standard.

For now, Svelte works with an older version of PostCSS. Its latest version, PostCSS 8, was released September 2020. So, to avoid getting any version-related errors, our command above specifies PostCSS 7 instead of 8. A PostCSS 7 compatibility build of Tailwind is made available under the compat channel on npm.

Step 3: Configuring Tailwind

Now that we have Tailwind installed, let’s create the configuration file needed and do the necessary setup. In the root directory of your project, run this to create a tailwind.config.js file:

npx tailwindcss init tailwind.config.js

Being a highly customizable framework, Tailwind allows us to easily override its default configurations with custom configurations inside this tailwind.config.js file. This is where we can easily customize things like spacing, colors, fonts, etc.

The tailwind.config.js file is provided to prevent ‘fighting the framework’ which is common with other CSS libraries. Rather than struggling to reverse the effect of certain classes, you come here and specify what you want. It’s in this file that we also define the PostCSS plugins used in the project.

The file comes with some default code. Open it in your text editor and add this compatibility code to it:

future: {
  purgeLayersByDefault: true,
  removeDeprecatedGapUtilities: true,
},

In Tailwind 2.0 (the latest version), all layers (e.g., base, components, and utilities) are purged by default. In previous versions, however, just the utilities layer is purged. We can manually configure Tailwind to purge all layers by setting the purgeLayersByDefault flag to true.

Tailwind 2.0 also removes some gap utilities, replacing them with new ones. We can manually remove them from our code by setting removeDeprecatedGapUtilities to true.

These will help you handle deprecations and breaking changes from future updates.

PurgeCSS

The several thousand utility classes that come with Tailwind are added to your project by default. So, even if you don’t use a single Tailwind class in your HTML, your project still carries the entire library, making it rather bulky. We’ll want our files to be as small as possible in production, so we can use purge to remove all of the unused utility classes from our project before pushing the code to production.

Since this is mainly a production problem, we specify that purge should only be enabled in production.

purge: {
  content: [
    "./src/**/*.svelte",
  ],
  enabled: production // disable purge in dev
},

Now, your tailwind.config.js should look like this:

const production = !process.env.ROLLUP_WATCH;

module.exports = {
  future: {
    purgeLayersByDefault: true,
    removeDeprecatedGapUtilities: true,
  },
  plugins: [],
  purge: {
    content: [
      "./src/**/*.svelte",
    ],
    enabled: production // disable purge in dev
  },
};

Rollup.js

Our Svelte app uses Rollup.js, a JavaScript module bundler made by Rich Harris, the creator of Svelte, that is used for compiling multiple source files into one single bundle (similar to webpack). In our app, Rollup performs its function inside a configuration file called rollup.config.js.

With Rollup, we can freely break our project up into small, individual files to make development easier. Rollup also helps to lint, prettify, and syntax-check our source code during bundling.

Step 4: Making Tailwind compatible with Svelte

Navigate to rollup.config.js and import the sveltePreprocess package. This package helps us handle all the CSS processing required with PostCSS and Tailwind.

import sveltePreprocess from "svelte-preprocess";

Under plugins, add sveltePreprocess and require Tailwind and Autoprefixer, as Autoprefixer will be processing the CSS generated by these tools.

preprocess: sveltePreprocess({
  sourceMap: !production,
  postcss: {
    plugins: [
      require("tailwindcss"),
      require("autoprefixer"),
    ],
  },
}),

Since PostCSS is an external tool with a syntax that’s different from Svelte’s framework, we need a preprocessor to process it and make it compatible with our Svelte code. That’s where the sveltePreprocess package comes in. It provides support for PostCSS and its plugins. We specify to the sveltePreprocess package that we are going to require two external plugins from PostCSS, Tailwind and Autoprefixer. sveltePreprocess runs the foreign code from these two plugins through Babel and converts them to code supported by the Svelte compiler (ES6+). Rollup eventually bundles all of the code together.

The next step is to inject Tailwind’s styles into our app using the @tailwind directive. You can think of @tailwind loosely as a function that helps import and access the files containing Tailwind’s styles. We need to import three sets of styles.

The first set of styles is @tailwind base. This injects Tailwind’s base styles—mostly pulled straight from Normalize.css—into our CSS. Think of the styles you commonly see at the top of stylesheets. Tailwind calls these Preflight styles. They are provided to help solve cross-browser inconsistencies. In other words, they remove all the styles that come with different browsers, ensuring that only the styles you employ are rendered. Preflight helps remove default margins, make headings and lists unstyled by default, and a host of other things. Here’s a complete reference of all the Preflight styles.

The second set of styles is @tailwind components. While Tailwind is a utility-first library created to prevent generic designs, it’s almost impossible to not reuse some designs (or components) when working on a large project. Think about it. The fact that you want a unique-looking website doesn’t mean that all the buttons on a page should be designed differently from each other. You’ll likely use a button style throughout the app.

Follow this thought process. We avoid frameworks, like Bootstrap, to prevent using the same kind of button that everyone else uses. Instead, we use Tailwind to create our own unique button. Great! But we might want to use this nice-looking button we just created on different pages. In this case, it should become a component. Same goes for forms, cards, badges etc.

All the components you create will eventually be injected into the position that @tailwind components occupies. Unlike other frameworks, Tailwind doesn’t come with lots of predefined components, but there are a few. If you aren’t creating components and plan to only use the utility styles, then there’s no need to add this directive.

And, lastly, there’s @tailwind utilities. Tailwind’s utility classes are injected here, along with the ones you create.

Step 5: Injecting Tailwind Styles into Your Site

It’s best to inject all of the above into a high-level component so they’re accessible on every page. You can inject them in the App.svelte file:

<style global lang="postcss">
  @tailwind base;
  @tailwind components;
  @tailwind utilities;
</style>

Now that we have Tailwind set up, let’s create a website header to see how Tailwind works with Svelte. We’ll create it in App.svelte, inside the main tag.

Step 6: Creating A Website Header

Starting with some basic markup:

<nav>
  <div>
    <div>
      <a href="#">APP LOGO</a>

      <!-- Menus -->
      <div>
        <ul>
          <li>
            <a href="#">About</a>
          </li>
          <li>
            <a href="#">Services</a>
          </li>
          <li>
            <a href="#">Blog</a>
          </li>
          <li>
            <a href="#">Contact</a>
          </li>
        </ul>
      </div>
    </div>
  </div>
</nav>

This is the header HTML without any Tailwind CSS styling. Pretty standard stuff. We’ll wind up moving the “APP LOGO” to the left side, and the four navigation links on the right side of it.

Now let’s add some Tailwind CSS to it:

<nav class="bg-blue-900 shadow-lg">
  <div class="container mx-auto">
    <div class="sm:flex">
      <a href="#" class="text-white text-3xl font-bold p-3">APP LOGO</a>

      <!-- Menus -->
      <div class="ml-55 mt-4">
        <ul class="text-white sm:self-center text-xl">
          <li class="sm:inline-block">
            <a href="#" class="p-3 hover:text-red-900">About</a>
          </li>
          <li class="sm:inline-block">
            <a href="#" class="p-3 hover:text-red-900">Services</a>
          </li>
          <li class="sm:inline-block">
            <a href="#" class="p-3 hover:text-red-900">Blog</a>
          </li>
          <li class="sm:inline-block">
            <a href="#" class="p-3 hover:text-red-900">Contact</a>
          </li>
        </ul>
      </div>
    </div>
  </div>
</nav>

OK, let’s break down all those classes we just added to the HTML. First, let’s look at the <nav> element:

<nav class="bg-blue-900 shadow-lg">

The bg-blue-900 class gives our header a blue background with a shade of 900, which is dark. The shadow-lg class applies a large outer box shadow. The shadow this class creates has a 0px horizontal offset, a 10px vertical offset, a 15px blur radius, and a -3px spread radius.

Next is the first div, our container for the logo and navigation links:

<div class="container mx-auto">

To center it and our navigation links, we use the mx-auto class. It’s equivalent to margin: auto, horizontally centering an element within its container.

Onto the next div:

<div class="sm:flex">

By default, a div is a block-level element. We use the sm:flex class to make our header a block-level flex container, so as to make its children responsive (to enable them shrink and expand easily). We use the sm prefix to ensure that the style is applied to all screen sizes (small and above).

Alright, the logo:

<a href="#" class="text-white text-3xl font-bold p-3">APP LOGO</a>

The text-white class, true to its name, makes the text of the logo white. The text-3xl class sets the font size of our logo (which is configured to 1.875rem) and its line height (configured to 2.25rem). From there, p-3 sets a padding of 0.75rem on all sides of the logo.

That takes us to:

<div class="ml-55 mt-4">

We’re giving the navigation links a left margin of 55% to move them to the right. However, there’s no Tailwind class for this, so we’ve created a custom style called ml-55, a name that’s totally made up but stands for “margin-left 55%.”

It’s one thing to name a custom class. We also have to add it to our style tags:

.ml-55 { margin-left: 55%; }

There’s one more class in there: mt-4. Can you guess what it does? If you guessed that it sets a top margin, then you are correct! In this case, it’s configured to 1rem for our navigation links.

Next up, the navigation links are wrapped in an unordered list tag that contains a few classes:

<ul class="text-white sm:self-center text-xl">

We’re using the text-white class again, followed by sm:self-center to center the list—again, we use the sm prefix to ensure that the style is applied to all screen sizes (small and above). Then there’s text-xl which is the extra-large configured font size.

For each list item:

<li class="sm:inline-block">

The sm:inline-block class sets each list item as an inline block-level element, bringing them side-by-side.

And, lastly, the link inside each list item:

<a href="#" class="p-3 hover:text-red-900">

We use the utility class hover:text-red-900 to make each link red on hover.

Let’s run our app in the command line:

npm run dev

This is what we should get:

And that is how we used Tailwind CSS with Svelte in six little steps!

Conclusion

My hope is that you now know how to integrate Tailwind CSS into your Svelte app and configure it. We covered some pretty basic styling, but there’s always more to learn! Here’s an idea: Try improving the project we worked on by adding a sign-up form and a footer to the page. Tailwind provides comprehensive documentation on all its utility classes. Go through it and familiarize yourself with the classes.

Do you learn better with video? Here are a couple of excellent videos that also go into the process of integrating Tailwind CSS with Svelte.


Platform News: Defaulting to Logical CSS, Fugu APIs, Custom Media Queries, and WordPress vs. Italics

Css Tricks - Fri, 03/12/2021 - 5:51am

Looks like 2021 is the time to start using CSS Logical Properties! Plus, Chrome recently shipped a few APIs that have raised eyebrows, SVG allows us to disable its aspect ratio, WordPress focuses on the accessibility of its typography, and there’s still no update (or progress) on the development of CSS custom media queries.

Let’s jump right into the news…

Logical CSS could soon become the new default

Six years after Mozilla shipped the first bits of CSS Logical Properties in Firefox, this feature is now on a path to full browser support in 2021. The categories of logical properties and values listed in the table below are already supported in Firefox, Chrome, and the latest Safari Preview.

CSS property or value → The logical equivalent

  • margin-top → margin-block-start
  • text-align: right → text-align: end
  • bottom → inset-block-end
  • border-left → border-inline-start
  • (n/a) → margin-inline

Logical CSS also introduces a few useful shorthands for tasks that in the past required multiple declarations. For example, margin-inline sets the margin-left and margin-right properties, while inset sets the top, right, bottom and left properties.

/* BEFORE */
main {
  margin-left: auto;
  margin-right: auto;
}

/* AFTER */
main {
  margin-inline: auto;
}

A website can add support for an RTL (right-to-left) layout by replacing all instances of left and right with their logical counterparts in the site’s CSS code. Switching to logical CSS makes sense for all websites because the user may translate the site to a language that is written right-to-left using a machine translation service. The biggest languages with RTL scripts are Arabic (310 million native speakers), Persian (70 million), and Urdu (70 million).

/* Switch to RTL when Google translates the page to an RTL language */
.translated-rtl {
  direction: rtl;
}

David Bushell’s personal website now uses logical CSS and relies on Google’s translated-rtl class to toggle the site’s inline base direction. Try translating David’s website to an RTL language in Chrome and compare the RTL layout with the site’s default LTR layout.

Chrome ships three controversial Fugu APIs

Last week Chrome shipped three web APIs for “advanced hardware interactions”: the WebHID and Web Serial APIs on desktop, and Web NFC on Android. All three APIs are part of Google’s capabilities project, also known as Project Fugu, and were developed in W3C community groups (though they’re not web standards).

  • The WebHID API allows web apps to connect to old and uncommon human interface devices that don’t have a compatible device driver for the operating system (e.g., Nintendo’s Wii Remote).
  • The Web Serial API allows web apps to communicate (“byte by byte”) with peripheral devices, such as microcontrollers (e.g., the Arduino DHT11 temperature/humidity sensor) and 3D printers, through an emulated serial connection.
  • Web NFC allows web apps to wirelessly read from and write to NFC tags at short distances (less than 10 cm).
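To give a rough sense of what these APIs look like from JavaScript, here is a minimal Web NFC read sketch. It is not from the article, just an illustration, and it assumes Chrome on Android, a secure (HTTPS) origin, and a user gesture to start scanning:

// Feature-detect first; Web NFC is currently Chrome-on-Android only
if ("NDEFReader" in window) {
  const reader = new NDEFReader();

  reader.onreading = (event) => {
    // Each NFC tag can carry one or more NDEF records
    for (const record of event.message.records) {
      console.log(record.recordType, record.mediaType);
    }
  };

  // scan() is typically called from a click handler (user gesture)
  reader.scan().then(() => console.log("Scanning for NFC tags…"));
}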

Apple and Mozilla, the developers of the other two major browser engines, are currently opposed to these APIs. Apple has decided to “not yet implement due to fingerprinting, security, and other concerns.” Mozilla’s concerns are summarized on the Mozilla Specification Positions page.

Source: webapicontroversy.com

Stretching SVG with preserveAspectRatio=none

By default, an SVG scales to fit the <svg> element’s content box, while maintaining the aspect ratio defined by the viewBox attribute. In some cases, the author may want to stretch the SVG so that it completely fills the content box on both axes. This can be achieved by setting the preserveAspectRatio attribute to none on the <svg> element.


Distorting SVG in this manner may seem counterintuitive, but disabling aspect ratio via the preserveAspectRatio=none value can make sense for simple, decorative SVG graphics on a responsive web page:

This value can be useful when you are using a path for a border or to add a little effect on a section (like a diagonal [line]), and you want the path to fill the space.
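As a rough sketch of that use case (the viewBox and path below are made up for illustration), a decorative divider that stretches to fill whatever box it’s given could look like this:

<svg viewBox="0 0 100 10" preserveAspectRatio="none" width="100%" height="40" aria-hidden="true">
  <!-- A simple diagonal line that stretches to fill the box on both axes -->
  <path d="M0,10 L100,0" stroke="currentColor" fill="none" />
</svg>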

WordPress tones down the use of italics

An italic font can be used to highlight important words (e.g., the <em> element), titles of creative works (<cite>), technical terms, foreign phrases (<i>), and more. Italics are helpful when used discreetly in this manner, but long sections of italic text are considered an accessibility issue and should be avoided.

Italicized text can be difficult to read for some people with dyslexia or related forms of reading disorders.

Putting the entire help text in italics is not recommended

WordPress 5.7, which was released earlier this week, removed italics on descriptions, help text, labels, error details text, and other places in the WordPress admin to “improve accessibility and readability.”

In related news, WordPress 5.7 also dropped custom web fonts, opting for system fonts instead.

Still no progress on CSS custom media queries

The CSS Media Queries Level 5 module specifies a @custom-media rule for defining custom media queries. This proposed feature was originally added to the CSS spec almost seven years ago (in June 2014) and has not been developed further since then, nor has it received any interest from browser vendors.

@custom-media --narrow-window (max-width: 30em);

@media (--narrow-window) {
  /* narrow window styles */
}

A media query used in multiple places can instead be defined once as a custom media query and referenced everywhere it’s needed; editing the media query then only requires touching one line of code.

Custom media queries may not ship in browsers for quite some time, but websites can start using this feature today via the official PostCSS plugin (or PostCSS Preset Env) to reduce code repetition and make media queries more readable.
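For example, a minimal PostCSS setup could look something like this. It’s a sketch based on the postcss-custom-media plugin mentioned above; the exact plugin options and build integration will vary from project to project:

// postcss.config.js (a minimal sketch, assuming postcss-custom-media is installed)
module.exports = {
  plugins: [
    require("postcss-custom-media")
  ]
};

With that in place, the @custom-media rule from the earlier snippet compiles down to plain max-width media queries that every browser already understands.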

On a related note, there is also the idea of author-defined environment variables, which (unlike custom properties) could be used in media queries, but this potential feature has not yet been fully fleshed out in the CSS spec.

@media (max-width: env(--narrow-window)) {
  /* narrow window styles */
}

The post Platform News: Defaulting to Logical CSS, Fugu APIs, Custom Media Queries, and WordPress vs. Italics appeared first on CSS-Tricks.


Table of Contents with IntersectionObserver

Css Tricks - Thu, 03/11/2021 - 11:00am

If you have a table of contents on a long-scrolling page, thanks to, say, position: fixed; or position: sticky;, the IntersectionObserver API in JavaScript is the perfect companion to highlight items in the table of contents when corresponding content is in view.

Ben Frain has a post all about this:

Thanks to IntersectionObserver we have a small but very efficient bit of code to create our table of contents, provide quick links to jump around the document and update readers on where they are in a document as they read.

Compared to older techniques that need to bind to scroll events and perform their own math, this code is shorter, faster, and more logical. If you’re looking for the demo on Ben’s site, the article is the demo. And here’s a video on it:
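The core idea only takes a few lines. Here’s a rough sketch (not Ben’s exact code; the selectors and class name are assumptions) that highlights the matching table-of-contents link whenever a heading scrolls into view:

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    // Find the table-of-contents link that points at this heading
    const link = document.querySelector(`nav a[href="#${entry.target.id}"]`);
    if (link) {
      link.classList.toggle("active", entry.isIntersecting);
    }
  });
}, { rootMargin: "0px 0px -60% 0px" });

// Observe every heading that has an id to link to
document.querySelectorAll("h2[id], h3[id]").forEach((heading) => {
  observer.observe(heading);
});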

I’ve mentioned this stuff before, but here’s a Bramus Van Damme version:


And here’s a version from Hakim el Hattab that is just begging for someone to port it to IntersectionObserver because the UI is so cool:



The post Table of Contents with IntersectionObserver appeared first on CSS-Tricks.


Chapter 7: Standards

Css Tricks - Thu, 03/11/2021 - 6:14am

It was the year 1994 that the web came out of the shadow of academia and onto everyone’s screens. In particular, it was the second half of the second week of December 1994 that capped off the year with three eventful days.

Members of the World Wide Web Consortium huddled around a table at MIT on Wednesday, December 14th. About two dozen people made it to the meeting, representatives from major tech companies, browser makers, and web-based startups. They were there to discuss open standards for the web.

When done properly, standards set a technical lodestar. Companies with competing interests and priorities can orient themselves around a common set of agreed upon documentation about how a technology should work. Consensus on shared standards creates interoperability; competition happens through user experience instead of technical infrastructure.

The World Wide Web Consortium, or W3C as it is more commonly referred to, had been on the mind of the web’s creator, Sir Tim Berners-Lee, as early as 1992. He had spoken with a rotating roster of experts and advisors about an official standards body for web technologies. The MIT Laboratory for Computer Science soon became his most enthusiastic ally. After years of work, Berners-Lee left his job at CERN in October of 1994 to run the consortium at MIT. He had no intention of being a dictator. He had strong opinions about the direction of the web, but he still preferred to listen.

W3C, 1994

On the agenda — after the table had been cleared with some basic introductions — was a long list of administrative details that needed to be worked out. The role of the consortium, the way it conducted itself, and its responsibilities to the wider web was little more than sketched out at the beginning of the meeting. Little by little, the 25 or so members walked through the list. By the end of the meeting, the group felt confident that the future of web standards was clear.

The next day, December 15th, Jim Clark and Marc Andreessen announced the recently renamed Netscape Navigator version 1.0. It had been out for several months in beta, but that Thursday marked a wider release. In a bid for a growing market, it was initially given away for free. Several months later, after the release of version 1.1, Netscape would be forced to walk that back. In either case, the browser was a commercial and technical success, improving on the speed, usability, and features of browsers that had come before it.

On Friday, December 16th, the W3C experienced its first setback. Berners-Lee never meant for MIT to be the exclusive site of the consortium. He planned for CERN, the birthplace of the web and home to some of its greatest advocates, to be a European host for the organization. On December 16th, however, CERN approved a massive budget for its Large Hadron Collider, forcing them to shift priorities. A refocused budget left little room for hypertext Internet experiments not directly contributing to the central project of particle physics.

CERN would no longer be the European host of the W3C. All was not lost. Months later, the W3C set up at France’s National Institute for Research in Computer Science and Control, or INRIA. By 1996, a third site at Japan’s Keio University would also be established.

Far from an outlier, this would not be the last setback the W3C ever faced, or that it would overcome.

In 1999, Berners-Lee published an autobiographical account of the web’s creation in a book entitled Weaving the Web. It is a concise and even history, a brisk walk through the major milestones of the web’s first decade. Throughout the book, he often returns to the subject of the W3C.

He frames the web consortium, first and foremost, as a matter of compromise. “It was becoming clear to me that running the consortium would always be a balancing act, between taking the time to stay as open as possible and advancing at the speed demanded by the onrush of technology.” Striking a balance between shared compatibility and shorter and shorter browser release cycles would become a primary objective of the W3C.

Web standards, he concedes, thrive through tension. Standards are developed amidst disagreement and hard-won bargains. Recalling a time just before the W3C’s creation, Berners-Lee notes how the standards process reflects the structure of the web. “It struck me that these tensions would make the consortium a proving ground for the relative merits of weblike and treelike societal structures,” he wrote, “I was eager to start the experiment.” A web consortium born of compromise and defined by tension, however, was not Berners-Lee’s first plan.

In March of 1992, Berners-Lee flew to San Diego to attend a meeting of the Internet Engineering Task Force, or IETF. Created in 1986, the IETF develops standards for the Internet, ranging from networking to routing to DNS. IETF standards are unenforceable and entirely voluntary. They are not sanctioned by any world government or subject to any regulations. No entity is obligated to use them. Instead, the IETF relies on a simple conceit: interoperability helps everyone. It has been enough to sustain the organization for decades.

Because everything is voluntary, the IETF is managed by a labyrinthine set of rules and ritualistic processes that can be difficult to understand. There is no formal membership, though anyone can join (in its own words it has “no members and no dues”). Everyone is a volunteer, no one is paid. The group meets in person three times a year at shifting locations.

The IETF operates on a principle known as rough consensus (and, oftentimes, running code). Rather than a formal voting process, disputed proposals need to come to some agreement where most, if not all, of the members in a technology working group agree. Working group members decide when rough consensus has been met, and its criteria shift from year to year and group to group. In some cases, the IETF has turned to humming to take the temperature of a room. “When, for example, we have face-to-face meetings… instead of a show of hands, sometimes the chair will ask for each side to hum on a particular question, either ‘for’ or ‘against’.”

It is against the backdrop of these idiosyncratic rules that Berners-Lee first came to the IETF in March of 1992. He hoped to set up a working group for each of the primary technologies of the web: HTTP, HTML, and the URI (which would later be renamed to URL through the IETF). In March he was told he would need another meeting, this one in June, to formally propose the working groups. Somewhere close to the end of 1993, a year and a half after he began, he had persuaded the IETF to set up all three.

The process of rough consensus can be slow. The web, by contrast, had redefined what fast could look like. New generations of browsers were coming out in months, not years. And this was before Netscape and Microsoft got involved.

The development of the web had spiraled outside Berners-Lee’s sphere of influence. Inline images — the feature perhaps most responsible for the web’s success — were a product of a late-night brainstorming session over snacks and soda in the basement of a university lab. Berners-Lee learned about it when everyone else did, when Marc Andreessen posted it to the www-talk mailing list.

Tension. Berners-Lee knew that it would come. He had hoped, for instance, that images might be treated differently (“Tim bawled me out in the summer of ’93 for adding images to the thing,” Andreessen would later say), but the web was not his. It was not anybody’s. He had designed it that way.

With all of its rules and rituals, the IETF did not seem like the right fit for web standards. In private discussions at universities and research labs, Berners-Lee had begun to explore a new path. Something like a consortium of stakeholders in the web — a collection of companies that create browsers and websites and software — that can come together to agree upon a rough consensus for themselves. By the end of 1993, his work on the W3C had already begun.

Dave Raggett, a seasoned researcher at Hewlett-Packard, had a different view of the web. He wasn’t from academia, and he wasn’t working on a browser (not yet anyway). He understood almost instinctively the utility of the web as commercial software. Something less like a digital phonebook and more like Apple’s wildly successful Hypercard application.

Unable to convince his bosses of the web’s promise, Raggett used the ten percent of time HP allowed for its employees to pursue independent research to begin working with the web. He anchored himself to the community, an active member of the www-talk mailing list and a regular presence at IETF meetings. In the fall of 1992, he had a chance to visit with Berners-Lee at CERN.

Yuri Rubinsky

It was around this time that he met Yuri Rubinsky, an enthusiastic advocate for Standard Generalized Markup Language, or SGML, the language that HTML was originally based on. Rubinsky believed that the limitations of HTML could be solved by a stricter adherence to the SGML standard. He had begun a campaign to bring SGML to the web. Raggett agreed — but to a point. He was not yet ready to sever ties with HTML.

Each time Mosaic shipped a new version, or a new browser was released, the gap between the original HTML specification and the real world web widened. Raggett believed that a more comprehensive record of HTML was required. He began working on an enhanced version of HTML, and a browser to demo its capabilities. Its working title was HTML+.

Raggett’s work soon began to spill over into his home life. He’d spend most nights “at a large computer that occupied a fair portion of the dining room table, sharing its slightly sticky surface with paper, crayons, Lego bricks and bits of half-eaten cookies left by the children.” After a year of around-the-clock work, Raggett had a version of HTML+ ready to go in November of 1993. His improvements to the language were far from superficial. He had managed to add all of the little things that had made their way into browsers: tables, images with captions and figures, and advanced forms.

Several months later, in May of 1994, developers and web enthusiasts traveled from all over the world to come to what some attendees would half-jokingly refer to as the “Woodstock of the Web,” the first official web conference organized by CERN employee and web pioneer Robert Cailliau. Of the 800 people clamoring to come, the space in Geneva could hold only 350. Many were meeting for the first time. “Everyone was milling about the lobby,” web historian Marc Weber would later describe, “electrified by the same sensation of meeting face-to-face actual people who had been just names on an email or on the www-talk [sic] mailing list.”

Members of the first conference

It came at a moment when the web stood on the precipice of ubiquity. Nobody from the Mosaic team had managed to make it (they had their own competing conference set for just a few months later), but there were already rumors about Mosaic alum Marc Andreessen’s new commercial browser that would later be called Netscape Navigator. Mosaic, meanwhile, had begun to license their browser for commercial use. An early version of Yahoo! was growing exponentially as more and more publications, like GNN, Wired, The New York Times, and The Wall Street Journal, came online.

Progress at the IETF, on the other hand, had been slow. It was too meticulous, too precise. In the meantime, browsers like Mosaic had begun to add whatever they wanted — particularly to HTML. Tags supported by Mosaic couldn’t be found anywhere else, and website creators were forced to choose between cutting-edge technology and compatibility with other browsers. Many were choosing the former.

HTML+ was the biggest topic of conversation at the conference. But another highlight was when Dan Connolly — a young, “red-haired, navy-cut Texan” who worked at the supercomputer manufacturer Convex — took the stage. He gave a talk called “Interoperability: Why Everyone Wins.” Later, and largely because of that talk, Connolly would be made chair of the IETF HTML Working Group.

In a prescient moment capturing the spirit of the room, Connolly described a future when the language of HTML fractured. When each browser implemented their own set of HTML tags in an effort to edge out the competition. The solution, he concluded, was an HTML standard that was able to evolve at the pace of browser development.

Raggett’s HTML+ made a strong case for becoming that standard. It was exhaustive, describing the new HTML used in browsers like Mosaic in near-perfect detail. “I was always the minimalist, you know, you can get it done without that,” Connolly later said, “Raggett, on the other hand, wanted to expand everything.” The two struck an agreement. Raggett would continue to work through HTML+ while Connolly focused on a more narrow upgrade.

Connolly’s version would soon become HTML 2, and after a year of back and forth and rough consensus building at the IETF, it became an official standard. It didn’t have nearly the detail of HTML+, but Connolly was able to officially document features that browsers had been supporting for years.

Raggett’s proposal, renamed to HTML 3, was stuck. In an effort to accommodate an expanding web, it continued to grow in size. “To get consensus on a draft 150 pages long and about which everyone wanted to voice an opinion was optimistic – to say the least,” Raggett would later put it, rather bluntly. But by then, Raggett was already working at the W3C, where HTML 3 would soon become a reality.

Berners-Lee also spoke at the first web conference in Geneva, closing it out with a keynote address. He didn’t specifically mention the W3C. Instead, he focused on the role of web. “The people present were the ones now creating the Web,” he would later write of his speech, “and therefore were the only ones who could be sure that what the systems produced would be appropriate to a reasonable and fair society.”

In October of 1994, he embarked on his own part in making a more equitable and accessible future for the web. The World Wide Web Consortium was officially announced. Berners-Lee was joined by a handful of employees — a list that included both Dave Raggett and Dan Connolly. Two months later, in the second half of the second week of December of 1994, the members of the W3C met for the first time.

Before the meeting, Berners-Lee had a rough sketch of how the W3C would work. Any company or organization could join given that they pay the membership fee, a tiered pricing structure tied to the size of that company. Member organizations would send representatives to W3C meetings, to provide input into the process of creating standards. By limiting W3C proceedings to paying members, Berners-Lee hoped to focus and scope the conversations to real world implementations of web technologies.

Yet despite a closed membership, the W3C operates in the open whenever possible. Meeting notes and documentation are open to anybody in the public. Any code written as part of experiments in new standards is freely downloadable.

Gathered at MIT, the W3C members had to next decide how its standards would work. They decided on a process that stops just short of rough consensus. Though they are often called standards, the W3C does not create official standards for the web. The technical specifications created at the W3C are known, in their final form, as recommendations.

They are, in effect, proposals. They outline, in great detail, how exactly a technology works. But they leave enough open that it is up to browsers to figure out exactly how the implementation works. “The goal of the W3C is to ensure interoperability of the Web, and in the long range that’s realistic,” former head of communications at the W3C Sally Khudairi once described it, “but in the short range we’re not going to play Web cops for compliance… we can’t force members to implement things.”

Initial drafts create a feedback loop between the W3C and its members. They provide guidance on web technologies, but even as specifications are in the process of being drafted, browsers begin to introduce them and developers are encouraged to experiment with them. Each time issues are found, the draft is revised, until enough consensus has been reached. At that point, a draft becomes a recommendation.

There would always be tension, and Berners-Lee knew that well. The trick was not to try to resist it, but to create a process where it becomes an asset. Such was the intended effect of recommendations.

At the end of 1995, the IETF HTML working group was replaced by a newly created W3C HTML Editorial Review Board. HTML 3.2 would be the first HTML version released entirely by the W3C, based largely on Raggett’s HTML+.

There was a year in web development, 1997, when browsers broke away from the still-new recommendations of the W3C. Microsoft and Netscape began to release a new set of features separate and apart from agreed upon standards. They even had a name for them. They called them Dynamic HTML, or DHTML. And they almost split the web in two.

DHTML was originally celebrated. Dynamic meant fluid. A natural evolution from HTML’s initial inert state. The web, in other words, came alive.

Touting its capabilities, a feature in Wired in 1997 referred to DHTML as the “magic wand Web wizards have long sought.” In its enthusiasm for the new technology, it makes a small note that “Microsoft and Netscape, to their credit, have worked with the standards bodies,” specifically on the introduction of Cascading Style Sheets, or CSS, but that most features were being added “without much regard for compatibility.”

The truth on the ground was that using DHTML required targeting one browser or another, Netscape or Internet Explorer. Some developers simply picked a path, slapping a banner at the bottom of their site that displayed “Best Viewed In…” one browser or another. Others ignored the technology entirely, hoping to avoid its tangled complexity.

Browsers had their reasons, of course. Developers and users were asking for things not included in the official HTML specification. As one Microsoft representative put it, “In order to drive new technologies into the standards bodies, you have to continue innovating… I’m responsible to my customers and so are the Netscape folks.”

A more dynamic web was not a bad thing, but a splintered web was untenable. For some developers, it would prove to be the final straw.

Following the release of HTML 3.2, and with the rapid advancement of browsers, the HTML Editorial Review Board was divided into three parts. Each was given a separate area of responsibility to make progress on, independent of the others.

Dr. Lauren Wood (Photo: XML Summer School)

Dr. Lauren Wood became chair of the Document Object Model Working Group. A former theoretical nuclear physicist, Wood was the Director of Product Technology at SoftQuad, a company founded by SGML advocate Yuri Rubinsky. While there, she helped work on the HoTMetaL HTML editor. The DOM spec created a standardized way for browsers to implement Dynamic HTML. “You need a way to tie your data and your programs together,” was how Wood described it, “and the Document Object Model is that glue.” Her work on the Document Object Model, and later XML, would have a long-lasting influence on the web.

The Cascading Style Sheets Working Group was chaired by Chris Lilley. Lilley’s background was in computer graphics, as a teacher and specialist in the Computer Graphics Unit at the University of Manchester. Lilley had worked at the IETF on the HTML 2 spec, as well as a specification for Portable Network Graphics (PNG), but this would mark his first time as a working group chair.

CSS was still a relative newcomer in 1997. It had been in the works for years, but had yet to have a major release. Lilley would work alongside the creators of CSS — Håkon Lie and Bert Bos — to create the first CSS standard.

The final working group was for HTML, left under the auspices of Dan Connolly, continuing his position from the IETF. Connolly had been around the web almost as long as Berners-Lee had. He was one of the people watching back in October of 1991, when Berners-Lee demoed the web for a small group of unimpressed people at a hypertext conference in San Antonio. In fact, it was at that conference that he first met the woman that would later become his wife.

After he returned home, he experimented with the web. He messaged Berners-Lee a month later. It was only three words: “You need a DTD.”

When Berners-Lee developed the language of HTML, he borrowed its convention from a predecessor, SGML. IBM developed Generalized Markup Language (GML) in the early 1970’s to make it easier for typists to create formatted books and reports. However, it quickly got out of control, as people would take shortcuts and use whatever version of the tags that they wanted.

That’s when they developed the Document Type Definition, or as Connolly called it, a DTD. DTDs are what added the “S” (Standardized) to GML. Using SGML, you can create a standardized set of instructions for your data, its scheme and its structure, to help computers understand how to interpret it. These instructions are a document type definition.

Beginning with version 2, Connolly added a type definition to HTML. It limited the language to a smaller set of agreed-upon tags. In practice, browsers treated this more as a loose definition, continuing to implement their own DHTML features and tags. But it was a first step.

In 1997, the HTML Working Group, now inside of the W3C, began to work on the fourth iteration of HTML. It expanded the language, adding to the specification far more advanced features, complex tables and forms, better accessibility, and a more defined relationship with CSS. But it also split HTML from a single schema into three different document type definitions for browsers to adopt.

The first, Frameset, was not typically used. The second, Transitional, was there to accommodate the mistakes of the past. It allowed a larger subset of HTML that included the non-standard, presentational markup browsers had used for years, such as <font> and <center>. This was set as the default for browsers.

The third DTD was called Strict. Under the Strict definition, HTML was pared down to only its standard, non-presentational features. It removed all of the unique tags introduced by Netscape and Microsoft, leaving only structured elements. If you use HTML today, it likely draws on the same base of tags.
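In practice, authors opted into one of these definitions with a doctype declaration at the top of the page. For reference, these are the HTML 4.01 versions of the two commonly used doctypes (4.01 was the minor revision that followed shortly after HTML 4):

<!-- Transitional: allows legacy, presentational markup -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">

<!-- Strict: structural, non-presentational elements only -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
  "http://www.w3.org/TR/html4/strict.dtd">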

The Strict definition drew a line in the sand. It said, this is HTML. And it finally gave a way for developers to code once for every browser.

In the August 1998 issue of Computerworld — tucked between large features on the impending doom of Y2K, the bristling potential of billing on the World Wide Web, and antitrust concerns about Microsoft — was a small announcement. Its headline read, “Browser standards targeted.” It was about the creation of a new grassroots organization of web developers aimed at bringing web standards support to browsers. It was called the Web Standards Project.

Glenn Davis, co-creator of the project, was quoted in the announcement. “The problem is, with each generation of the browser, the browser manufacturers diverge farther from standards support.” Developers, forced to write different code for different browsers for years, had simply had enough. A few off-hand conversations in mailing lists had spiraled into a fully grown movement. At launch, 450 developers and designers had already signed up.

Davis was not new to the web, and he understood its challenges. His first experience on the web dated all the way back to 1994, just after Mosaic had first introduced inline images, when he created the gallery site Cool Site of the Day. Each day, he would feature a single homepage from an interesting or edgy or experimental site. For a still small community of web designers, it was an instant hit.

There were no criteria other than sites that Davis thought were worth featuring. “I was always looking for things that push the limits,” was how he would later define it. Davis helped to redefine the expectations of the early web, using the moniker cool as a shorthand to encompass many possibilities. Dot-com Design author and media professor Megan Ankerson points out that “this ecosystem of cool sites gestured towards the sheer range of things the web could be: its temporal and spatial dislocations, its distinction from and extension of mainstream media, its promise as a vehicle for self-publishing, and the incredible blend of personal, mundane, and extraordinary.” For a time on the web, Davis was the arbiter of cool.

As time went on Davis transformed his site into Project Cool, a resource for creating websites. In the days of DHTML, Davis’ Project Cool tutorials provided constructive and practical techniques for making the most out of the web. And a good amount of his writing was devoted to explaining how to write code that was usable in both Netscape Navigator and Microsoft’s Internet Explorer. He eventually reached a breaking point, along with many others. At the end of 1997, Netscape and Microsoft both released their 4.0 browsers with spotty standards support. It was already clear that upcoming 5.0 releases were planning to lean even further into uneven and contradictory DHTML extensions.

Running out of patience, Davis helped set up a mailing list with George Olsen and Jeffrey Zeldman. The list started with two dozen people, but it gathered support quickly. The Web Standards Project, known as WaSP, officially launched from that list in August of 1998. It began with a few hundred members and an announcement in magazines like Computerworld. Within a few months, it would have tens of thousands of members.

The strategy for WaSP was to push browsers — publicly and privately — into web standards support. WaSP was not meant to be a hyperbolic name. “The W3C recommends standards. It cannot enforce them,” Zeldman once said of the organization’s strategy, “and it certainly is not about to throw public tantrums over non-compliance. So we do that job.”

A prominent designer and standards advocate, Zeldman would have an enduring influence on makers of the web. He would later run WaSP during some of its most influential years. His website and mailing list, A List Apart, would become a gathering place for designers who cared about web standards and using the latest web technologies.

WaSP would change focus several times during their decade and a half tenure. They pushed browsers to make better use of HTML and CSS. They taught developers how to write standards-based code. They advocated for greater accessibility and tools that supported standards out of the box.

But their mission, published to their website on the first day of launch, would never falter. “Our goal is to support these core standards and encourage browser makers to do the same, thereby ensuring simple, affordable access to Web technologies for all.”

WaSP succeeded in their mission on a few occasions early on. Some browsers, notably Opera, had standards baked in at the beginning; their efforts were praised by WaSP. But the two browsers that collectively made up a majority of web use — Internet Explorer and Netscape Navigator — would need some work.

A four billion dollar sale to AOL in 1998 was not enough for Netscape to compete with Microsoft. After the release of Netscape 4.0, they doubled down on a bold strategy, choosing to release the entire browser’s code as open source under the Mozilla project. Everyday consumers could download it for free; coders were encouraged to contribute directly.

Members of the community soon noticed something in Mozilla. It had a new rendering engine, often referred to as Gecko. Unlike planned releases of Netscape 5, which had patchy standards support at best, Gecko supported a fairly complete version of HTML 4 and CSS.

WaSP diverted their formidable membership to the task of pushing Netscape to include Gecko in its next major release. One familiar WaSP tactic was known as roadblocking. Some of its members worked at publications like HotWired and CNet. WaSP would coordinate articles across several outlets all at once criticizing, for instance, Netscape’s neglect of standards in the face of a perfectly reasonable solution in Gecko. By doing so, they were often able to capture the attention of at least one news cycle.

WaSP also took more direct action. Members were asked to send emails to browsers, or sign petitions showing widespread support for standards. Overwhelming pressure from developers was occasionally enough to push browsers in the right direction.

In part because of WaSP, Netscape agreed to make Gecko part of version 5.0. Beta versions of Netscape 5 would indeed have standards-compliant HTML and CSS, but it was beset with issues elsewhere. It would take years for a release. By then, Microsoft’s dominion over the browser market would be near complete.

As one of the largest tech companies in the world, Microsoft was more insulated from grassroots pressure. The on-the-ground tactics of WaSP proved less successful when turned against the tech giant.

But inside the walls of Microsoft, WaSP had at least one faithful follower, developer Tantek Çelik. Çelik has tirelessly fought on the side of web standards as far back as his web career stretches. He would later become a member of the WaSP Steering Committee and a representative for a number of working groups at the W3C working directly on the development of standards.

Tantek Çelik (Photo: Tantek.com)

Çelik ran a team inside of Internet Explorer for Mac. Though it shared a name, branding, and general features with its far more ubiquitous Windows counterpart, IE for Mac ran on a separate codebase. Çelik’s team was largely left to its own devices in a colossal organization with other priorities working on a browser that not many people were using.

With the direction of the browser largely left up to him, Çelik began to reach out to web designers in San Francisco at the cutting edge of web technology. Through a stroke of luck he was connected to several members of the Web Standards Project. He’d visit with them and ask what they wanted to see in the Mac IE browser. “The answer: better standards support.”

They helped Çelik realize that his work on a smaller browser could be impactful. If he was able to support standards, as they were defined by the W3C, it could serve as a baseline for the code that the designers were writing. They had enough to worry about with buggy standards in IE for Windows and Netscape, in other words. They didn’t need to also worry about IE for Mac.

That was all that Çelik needed to hear. When Internet Explorer 5.0 for Mac launched in 2000, it had across the board support for web standards; HTML, PNG images, and most impressively, one of the most ambitious implementations of the new Cascading Style Sheets (CSS) specification.

It would take years for the Windows version to get anywhere close to the same kind of support. Even half a decade later, after Çelik left to work at the search engine Technorati, they were still playing catch-up.

Towards the end of the millennium, the W3C found themselves at a fork in the road. They looked to their still-recent past and saw it filled with contentious support for standards — incompatible browsers with their own priorities. Then they looked the other way, to their towering future. They saw a web that was already evolving beyond the confines of personal computers. One that would soon exist on TVs, in cell phones, and on devices that hadn’t been dreamed up yet, in paradigms yet to be invented. Their past and their future were incompatible. And so, they reacted.

Yuri Rubinsky had an unusual talent for making connections. In his time as a standards advocate, developer, and executive at a major software company, he had managed to find time to connect some of the web’s most influential proponents. Sadly, Rubinsky died suddenly and at a young age in 1996, but his influence would not soon be forgotten. He carried with him an infectious energy and a knack for persuasion. His friend and colleague Peter Sharpe would say upon his death that in “talking to the people from all walks of life who knew Yuri, there was a common theme: Yuri had entered their lives and changed them forever.”

Rubinsky devoted his career to making technology more accessible. He believed that without equitable access, technology was not worth building. It motivated all of the work he did, including his longstanding advocacy of SGML.

SGML is a meta-language and “you use it to build your own computer languages for your own purposes.” If you hand a document over to a computer, SGML is how you can give that computer instructions on how to understand it. It provides a standardized way to describe the structure of data — the tags that it uses and the order it is expected in. The ownership of data, therefore, is not locked up and defined at some unknown level, it is given to everybody.

Rubinsky believed in that kind of universal access, a world in which machines talked to each other in perfect harmony, passing sets of data between them, structured, ordered, and formatted for its users. His company, SoftQuad, built software for SGML. He organized and spoke at conferences about it. He created SGML Open, a consortium not unlike the W3C. “SGML provides an internationally standardized, vendor-supported, multi-purpose, independent way of doing business,” was how he once described it, “If you aren’t using it today, you will be next year.” He was almost right.

He had a mission on the web as well. HTML is actually based on SGML, though it uses only a small part of it. Rubinsky was beginning to have conversations with members of the W3C, like Berners-Lee and Raggett, about bringing a more comprehensive version of SGML to the web. He was even writing a book called SGML on the Web before his death.

In the hallways of conferences and in threaded mailing lists, Rubinsky used his unique propensity for persuasion to bring several people together on the subject, including Dan Connolly, Lauren Wood, Jon Bosak, James Clark, Tim Bray, and others. Eventually, those conversations moved into the W3C. They formed a formal working group and, in November of 1996, eXtensible Markup Language (XML) was formally announced, and then adopted as a W3C Recommendation. The announcement took place at an annual SGML conference in Boston, run by an organization where Rubinsky sat on the Board of Directors.

XML is SGML, minus a few things, renamed and repackaged as a web language. That means it goes far beyond the capabilities of HTML, giving developers a way to define their own structured data with completely unique tags (e.g., an <ingredients> tag in a recipe, or an <author> tag in an article). Over the years, XML has become the backbone of widely used technologies, like RSS and MathML, as well as server-level APIs.
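A tiny, hypothetical recipe document shows the idea. The tag names here are invented for illustration, which is the point: they’re defined by the document’s author, not by any standard:

<?xml version="1.0" encoding="UTF-8"?>
<recipe>
  <title>Pancakes</title>
  <author>A. Cook</author>
  <ingredients>
    <ingredient quantity="200 g">Flour</ingredient>
    <ingredient quantity="2">Eggs</ingredient>
  </ingredients>
</recipe>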

XML was appealing to the maintainers of HTML, a language that was beginning to feel somewhat complete. “When we published HTML 4, the group was then basically closed,” Steve Pemberton, chair of the HTML working group at the time, described the situation. “Six months later, though, when XML was up and running, people came up with the idea that maybe there should be an XML version of HTML.” The merging of HTML and XML became known as XHTML. Within a year, it was the W3C’s main focus.

The first iterations of XHTML, drafted in 1998, were not that different from what already existed in the HTML specifications. The only real difference was that it had stricter rules for authors to follow. But that small constraint opened up new possibilities for the future, and XHTML was initially celebrated. The Web Standards Project issued a press release on the day of its release lauding its capabilities, and developers began to make use of the stricter markup rules required, in line with the work Connolly had already done with Document Type Definitions.

XHTML represented a web with deeper meaning. Data would be owned by the web’s creators. And together, computers and programmers, could create a more connected and understandable web. That meaning was labeled semantics. The Semantic Web would become the W3C’s greatest ambition, and they would chase it for close to a decade.

W3C, 2000

Subsequent versions of XHTML would introduce even stricter rules, leaning harder into the structure of XML. Released in 2002, the XHTML 2.0 specification became the language’s harbinger. It removed backwards compatibility with older versions of HTML, even as Microsoft’s Internet Explorer — the leading browser by a wide margin at this point — refused to support it. “XHTML 2 was a beautiful specification of philosophical purity that had absolutely no resemblance to the real world,” said Bruce Lawson, an HTML evangelist for Opera at the time.

Rather than uniting standards under a common banner, XHTML, and the refusal of major browsers to fully implement it, threatened to split the web apart permanently. It would take something bold to push web standards in a new direction. But that was still years away.

The post Chapter 7: Standards appeared first on CSS-Tricks.


How I Built my SaaS MVP With Fauna ($150 in revenue so far)

Css Tricks - Thu, 03/11/2021 - 6:07am

Are you a beginner coder trying to build and launch your MVP? I’ve just finished the MVP of ReviewBolt.com, a competitor analysis tool. It’s built using React + Fauna + Next JS. It’s my first paid SaaS tool, so earning $150 is a big accomplishment for me.

In this post you’ll see why I chose Fauna as ReviewBolt’s primary database and how you can implement a similar setup. It easily stores massive amounts of data and gets it to me fast. By the end of this article, you’ll be able to decide whether you also want to create your own serverless website with Fauna as your back end.

What is ReviewBolt?

The website allows you to search any website and get a detailed review of a company’s ad strategies, tech stack, and user experiences.

Reviewbolt currently pulls data from seven different sources to give you an analysis of any website in the world. It will estimate Facebook spend, Google spend, yearly revenue, traffic growth metrics, user reviews, and more!

Why did I build it?

I’ve dabbled in entrepreneurship and I’m always scouting for new opportunities. I thought building ReviewBolt would help me (1) determine how big a company is… and (2) determine its primary distribution channel. This is super important because if you can’t get new users then your business is pretty much dead.

Some other cool tidbits about it:

  • You get a large overview of everything that’s going on with a website.
  • What’s more, every search you make on the website creates a page that gets saved and indexed. So ReviewBolt grows a tiny bit bigger with every user search.

So far, it’s made $150, gained 50 users, analyzed over 3,000 websites and helped 5,000+ people with their research. So it’s a good start for a solo-dev indie hacker like myself.

It was featured on Betalist and it’s quite popular in entrepreneur circles. You can see my real-time statistics here: reviewbolt.com/stats

I’m not a coder… all self-taught

Building it so far was no easy feat! Originally I graduated as an English major from McGill University in Canada with zero tech skills. I actually took one programming class in my last year and got a 50%… the lowest passing grade possible.

But between then and now a lot has changed. For the last two years I’ve been learning web and app development. This year my goal was to make a profitable SaaS company, but also to make something that I would find useful.

I built ReviewBolt in my little home office in London during this massive Lockdown. The project works and that’s one step for me on my journey. And luckily I chose Fauna because it was quite easy to get a fast, reliable database that actually works with very low costs.

Why did I pick Fauna?

Fauna provides a great free tier and as a solo dev project, I wanted to keep my costs lean to see first if this would actually work.

Warning: I’m no Fauna expert. I actually still have a long way to go to master it. However, this was my setup to create the MVP of ReviewBolt.com that you see today. I made some really dumb mistakes like storing my data objects as strings instead of objects… But you live and learn.

I didn’t start off with Fauna…

ReviewBolt first started as just one large Google Sheet. Every time someone made a website search, it pulled the data from the various sources and saved it as a row in the sheet.

Simple enough right? But there was a problem…

After about 1,000 searches Google Sheets started to break down like an old car on a road trip…. It was barely able to start when I loaded the page. So I quickly looked for something more stable.

Then I found Fauna 😇

I discovered that Fauna was really fast and quite reliable. I started out using their GraphQL feature but realized the native FQL language had much better documentation.

There’s a great dashboard that gives you immediate insight for your usage.

I primarily use Fauna in the following ways:

  1. Storage of 110,000 company bios that I scraped.
  2. Storage of Google Ads data
  3. Storage of Facebook Ad data
  4. Storage of Google Trends data
  5. Storage of tech stack
  6. Storage of user reviews

The 110k companies are stored in one collection and the live data about websites is stored in another. I could have probably created relational databases within Fauna, but that was way beyond me at the time 😅 and it was easier to store everything as one very large object.

For testing, Fauna actually provides a built-in web shell. This is really useful, because I can follow the tutorials and try them in real time on the website without loading Visual Studio.

What frameworks does the website use?

The website works using React and NextJS. To load a review of a website you just type in the site.

Every search looks like this: reviewbolt.com/r/[website.com]

The first thing that happens on the back end is that it uses a Fauna index to see if this search has already been done. Fauna is very efficient at searching your database. Even with a collection of 110k documents, it still works really well because of its use of indexing. So when a page loads — say reviewbolt.com/r/fauna — it first checks to see if there’s a match. If a match is found, then it loads the saved data and renders that on the page.

If there’s no match then the page brings up a spinner and in the backend it queries all these public APIs about the requested website. As soon as it’s done it loads the data for the user.

And when that new website is analyzed it saves this data into my Fauna Collection. So then the next user won’t have to load everything but rather we can use Fauna to fetch it.

My use case is to index all of ReviewBolt’s website searches and then be able to retrieve those searches easily.
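For reference, the index that makes those lookups fast can be created once, for example in Fauna’s web shell. This is my rough sketch of what it might look like; the collection name matches the one used later in the post, but the exact field you index on depends on how your documents are shaped:

CreateIndex({
  name: "rbCompByName",
  source: Collection("RBcompanies"),
  // Index on the stored slug so lookups by website name are fast
  terms: [{ field: ["data", "Slug"] }]
})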

What else can Fauna do?

The next step is to create a charts section. So far I built a very basic version of this just for Shopify’s top 90 stores.

But ideally I’d have one that works by category, using Fauna’s index bindings to create multiple indexes: Top Facebook Spenders, Top Google Spenders, Top Traffic, Top Revenue, Top CRMs by traffic. That will be really interesting to see who’s at the top for competitor research, because in marketing you always want to take inspiration from the winners.

export async function findByName(name) {
  // `client` is an initialized faunadb.Client; Map, Paginate, Match, Index,
  // Lambda, Get and Var are imported from faunadb.query.
  var data = await client.query(
    Map(
      Paginate(
        Match(Index("rbCompByName"), name)
      ),
      Lambda("person", Get(Var("person")))
    )
  )
  return data.data // [0].data
}

This queries Fauna to paginate the results and return the found object.

I run this function when searching for the website name. And then to create a company I use this code:

export async function createCompany(slug, linkinfo, trending, googleData, trustpilotReviews, facebookData, tech, date, trafficGrowth, growthLevels, trafficLevel, faunaData) {
  var Slug = slug
  var Author = linkinfo
  var Trends = trending
  var Google = googleData
  var Reviews = trustpilotReviews
  var Facebook = facebookData
  var TechData = tech
  var myDate = date
  var myTrafficGrowth = trafficGrowth
  var myGrowthLevels = growthLevels
  var myFaunaData = faunaData

  client.query(
    Create(Collection('RBcompanies'), {
      data: {
        "Slug": Slug,
        "Author": Author,
        "Trends": Trends,
        "Google": Google,
        "Reviews": Reviews,
        "Facebook": Facebook,
        "TechData": TechData,
        "Date": myDate,
        "TrafficGrowth": myTrafficGrowth,
        "GrowthLevels": myGrowthLevels,
        "TrafficLevels": trafficLevel,
        "faunaData": JSON.parse(myFaunaData),
      }
    })
  ).then(result => console.log(result))
   .catch(error => console.error('Error mate: ', error.message));
}

Which is a bit longer because I’m pulling so much information on various aspects of the website and storing it as one large object.

The Fauna FQL language is quite simple once you get your head around it, especially since, for what I’m doing at least, I don’t need too many commands.

I followed this tutorial on building a Twitter clone and that really helped.

This will change when I introduce charts and I’m sorting a variety of indexes but luckily it’s quite easy to do this in Fauna.

What’s the next step to learn more about Fauna?

I highly recommend watching the video above and also going through the tutorial on fireship.io. It’s great for going through the basic concepts, and it really helped me get to grips with the Fauna query language.

Conclusion

Fauna was quite easy to implement as a basic CRUD system where I didn’t have to worry about fees. The free tier is currently 100k reads and 50k writes and for the traffic level that ReviewBolt is getting that works. So I’m quite happy with it so far and I’d recommend it for future projects.

The post How I Built my SaaS MVP With Fauna ($150 in revenue so far) appeared first on CSS-Tricks.


The WordPress Evolution Toward Full-Site Editing

Css Tricks - Wed, 03/10/2021 - 11:45am

The block editor was a game-changer for WordPress. The idea that we can create blocks of content and arrange them in a component-like fashion means we have a lot of flexibility in how we create content, as well as a bunch of opportunities to develop new types of modular content.

But there’s so much more happening in the blocks ecosystem since the initial introduction of the editor. Last year, Dmitry Mayorov wrote about the emergence of block variations and how they provide even more flexibility by extending existing blocks to create styled variations of them.

Three buttons variations in the WordPress block inserter

Then we got block patterns, or the ability to stitch blocks together into reusable patterns.

Block patterns are sandwiched between Blocks and Reusable Blocks in the block inserter, which is a perfect metaphor for where it fits in the bigger picture of WordPress editing.

So, that means we have blocks, block variations, reusable blocks, and block patterns. That’s a lot of awesome tooling for designing layouts directly in the editor!

But you may have heard that WordPress has plans for blocks that go beyond the post editor. They’re outright targeting global elements — menus, headers, footers, and such — in an effort to establish full-site editing (FSE) capabilities right in WordPress.

Matt Mullenweg introduces the Twenty Twenty-One theme and its beta full-site editing capabilities.

Whoa. I certainly cannot speak for everyone else, but my mind instantly goes to what this means for theme developers. I mean, what is a theme where the templates are designed in the editor instead of code? I’d imagine a theme is a lot like a collection of shells that contain very little markup. And perhaps more development goes into creating blocks, block patterns and block variations to stitch everything together.

That’s actually the case, and you can test it now. Make sure you’re on WordPress 5.6+, then install the experimental TT1 Blocks theme and Gutenberg plugin.

Cracking open the theme, it’s really two PHP templates and then — get this — HTML files used for block templates and block template parts.

The way block templates and block template parts are separated closely mirrors the common current approach of separating templates from template parts.
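Roughly speaking, the layout looks something like the sketch below. This is my approximation of the experimental theme’s structure, not an exact listing, and the directory names may change as the experiment evolves:

tt1-blocks/
├── index.php
├── functions.php
├── style.css
├── block-templates/
│   ├── index.html
│   ├── page.html
│   └── single.html
└── block-template-parts/
    ├── header.html
    └── footer.html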

I’m personally all-in on this direction. I’d even go so far as to say (peeking over my shoulder at Chris) that CSS-Tricks is all-in on this as well. We made the switch to blocks last year and it has reinvigorated our love for writing blog posts just like this one. (Honestly, I would have probably written something like this in the past with a code editor first, then port it to WordPress with the classic editor. That was a better writing experience for me at the time.)

The TT1 Blocks theme adds a “Site Editor” that takes its cues from the theme’s block templates and block template parts.

While I’m bullish on blocks, I know others aren’t. In fact, I work with plenty of folks who are (and I mean this kindly) blissfully ignorant of the block editor. Developing for the block editor is a huge mental shift and there’s a lack of documentation for it at the moment. Things are still very much in active development, and iterations to the block editor come with every new WordPress release. Can’t blame folks for deciding to wait for the next train as things settle and standards evolve.

But, at the same time, it’s true to Matt Mullenweg’s now infamous advice to WordPress developers in 2015: Learn JavaScript, deeply.

I was (and am still very much) excited about blocks. Full-site editing freaks me out a bit, but that’s mostly because it moves the concept of blocks outside the editor, where I’m only now beginning to get a good feel for them.

Whatever it all means, what I’m looking forward to most is an official release of a default theme that supports FSE. Remember the first time you opened up a WordPress theme? I marveled at the markup and spent countless hours picking at lines of code until I made it my own. That’s the experience I’m expecting the first time I open up the new theme.

Until then, here’s a sorta roundup of ways to stay in the loop:

  • Make WordPress Design – The handbook lists FSE as one of the team’s current priorities with an overview of the project. It was last updated May 2020, so I’m not sure how current the information is and whether the page is still maintained.
  • How to Test FSE – Instructions for setting up a FSE site locally and participate in testing.
  • TT1 Theme Repo – See what’s being reported and the status of those issues. This is the spot to watch for theme development.
  • Gutenberg Plugin Repo – Issues reported for the plugin. This is the spot to watch for block development.
  • Theme Experiments Repo – Check out more themes that are experimenting with blocks and FSE.
  • #fse-answers – A collection of responses to a bunch of questions about FSE.
  • #fse-outreach-experiment – Slack channel for discussing FSE.

The post The WordPress Evolution Toward Full-Site Editing appeared first on CSS-Tricks.


Too Many SVGs Clogging Up Your Markup? Try `use`.

Css Tricks - Wed, 03/10/2021 - 5:58am

Recently, I had to make a web page displaying a bunch of SVG graphs for an analytics dashboard. I used a bunch of <rect>, <line> and <text> elements on each graph to visualize certain metrics.

This works and renders just fine, but results in a bloated DOM tree, where each shape is represented as separate nodes. Displaying all 50 graphs simultaneously on a web page results in 5,951 DOM elements in total, which is far too many.

We might display 50-60 different graphs at a time, all with complex DOM trees.

This is not optimal for several reasons:

  • A large DOM increases memory usage and causes longer style calculations and costly layout reflows.
  • It increases the size of the file on the client side.
  • Lighthouse penalizes the performance and SEO scores.
  • Maintainability is a nightmare — even if we use a templating system — because there’s still a lot of cruft and repetition.
  • It doesn’t scale. Adding more graphs only exacerbates these issues.

If we take a closer look at the graphs, we can see a lot of repeated elements.

Each graph ends up sharing lots of repeated elements with the rest.

Here’s dummy markup that’s similar to the graphs we’re using:

<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="500" height="200" viewBox="0 0 500 200" > <!-- &#x1f4ca; Render our graph bars as boxes to visualise our data. This part is different for each graph, since each of them displays different sets of data. --> <g class="graph-data"> <rect x="10" y="20" width="10" height="80" fill="#e74c3c" /> <rect x="30" y="20" width="10" height="30" fill="#16a085" /> <rect x="50" y="20" width="10" height="44" fill="#16a085" /> <rect x="70" y="20" width="10" height="110" fill="#e74c3c" /> <!-- Render the rest of the graph boxes ... --> </g> <!-- Render our graph footer lines and labels. --> <g class="graph-footer"> <!-- Left side labels --> <text x="10" y="40" fill="white">400k</text> <text x="10" y="60" fill="white">300k</text> <text x="10" y="80" fill="white">200k</text> <!-- Footer labels --> <text x="10" y="190" fill="white">01</text> <text x="30" y="190" fill="white">11</text> <text x="50" y="190" fill="white">21</text> <!-- Footer lines --> <line x1="2" y1="195" x2="2" y2="200" stroke="white" strokeWidth="1" /> <line x1="4" y1="195" x2="2" y2="200" stroke="white" strokeWidth="1" /> <line x1="6" y1="195" x2="2" y2="200" stroke="white" strokeWidth="1" /> <line x1="8" y1="195" x2="2" y2="200" stroke="white" strokeWidth="1" /> <!-- Rest of the footer lines... --> </g> </svg>

And here is a live demo. While the page renders fine, the graph’s footer markup is constantly redeclared and all of the DOM nodes are duplicated.

The solution? The SVG <use> element.

Luckily for us, SVG has a <use> tag that lets us declare something like our graph footer just once and then simply reference it from anywhere on the page to render it as many times as we want. From MDN:

The <use> element takes nodes from within the SVG document, and duplicates them somewhere else. The effect is the same as if the nodes were deeply cloned into a non-exposed DOM, then pasted where the use element is.

That’s exactly what we want! In a sense, <use> is like a modular component, allowing us to drop instances of the same element anywhere we’d like. But instead of props and such to populate the content, we reference which part of the SVG file we want to display. For those of you familiar with graphics programming APIs, such as WebGL, a good analogy would be Geometry Instancing. We declare the thing we want to draw once and then can keep reusing it as a reference, while being able to change the position, scale, rotation and colors of each instance.

Instead of drawing the footer lines and labels individually for each graph instance, then redeclaring them over and over with new markup, we can declare the footer once in a separate SVG and simply reference it wherever it’s needed. The <use> tag allows us to reference elements from other inline SVG elements just fine.

Let’s put it to use

We’re going to move the SVG group for the graph footer — <g class="graph-footer"> — to a separate <svg> element on the page. It won’t be visible on the front end. Instead, this <svg> will be hidden with display: none and only contain a bunch of <defs>.

And what exactly is the <defs> element? MDN to the rescue once again:

The <defs> element is used to store graphical objects that will be used at a later time. Objects created inside a <defs> element are not rendered directly. To display them you have to reference them (with a <use> element for example).

Armed with that information, here’s the updated SVG code. We’re going to drop it right at the top of the page. If you’re templating, this would go in some sort of global template, like a header, so it’s included everywhere.

<!-- ⚠️ Notice how we visually hide the SVG containing the reference graphic with display: none; This is to prevent it from occupying empty space on our page. The graphic will work just fine and we will be able to reference it from elsewhere on our page --> <svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="500" height="200" viewBox="0 0 500 200" style="display: none;" > <!-- By wrapping our reference graphic in a <defs> tag we will make sure it does not get rendered here, only when it's referenced --> <defs> <g id="graph-footer"> <!-- Left side labels --> <text x="10" y="40" fill="white">400k</text> <text x="10" y="60" fill="white">300k</text> <text x="10" y="80" fill="white">200k</text> <!-- Footer labels --> <text x="10" y="190" fill="white">01</text> <text x="30" y="190" fill="white">11</text> <text x="50" y="190" fill="white">21</text> <!-- Footer lines --> <line x1="2" y1="195" x2="2" y2="200" stroke="white" strokeWidth="1" /> <line x1="4" y1="195" x2="2" y2="200" stroke="white" strokeWidth="1" /> <line x1="6" y1="195" x2="2" y2="200" stroke="white" strokeWidth="1" /> <line x1="8" y1="195" x2="2" y2="200" stroke="white" strokeWidth="1" /> <!-- Rest of the footer lines... --> </g> </defs> </svg>

Notice that we gave our group an ID of graph-footer. This is important, as it is the hook for when we reach for <use>.

So, what we do is drop another <svg> on the page that includes the graph data it needs, but then reference #graph-footer in <use> to render the footer of the graph. This way, there’s no need to redeclare the code for the footer for every single graph.

Look how much cleaner the code for a graph instance is when <use> is in… umm, use.

<svg xmlns="http://www.w3.org/2000/svg" version="1.1" width="500" height="200" viewBox="0 0 500 200" > <!-- &#x1f4ca; Render our graph bars as boxes to visualise our data. This part is different for each graph, since each of them displays different sets of data. --> <g class="graph-data"> <rect x="10" y="20" width="10" height="80" fill="#e74c3c" /> <rect x="30" y="20" width="10" height="30" fill="#16a085" /> <rect x="50" y="20" width="10" height="44" fill="#16a085" /> <rect x="70" y="20" width="10" height="110" fill="#e74c3c" /> <!-- Render the rest of the graph boxes ... --> </g> <!-- Render our graph footer lines and labels. --> <use xlink:href="#graph-footer" x="0" y="0" /> </svg>

And here is an updated <use> example with no visual change:

Problem solved.

What, you want proof? Let’s compare the <use> version of the demo against the original one.

              DOM nodes         File size    File size (GZIP)   Memory usage
No <use>      5,952             664 KB       40.8 KB            20 MB
With <use>    2,572             294 KB       40.4 KB            18 MB
Savings       56% fewer nodes   42% smaller  0.98% smaller      10% less

As you can see, the <use> element comes in handy. And, even though the performance benefits were the main focus here, just the fact that it reduces huge chunks of code from the markup makes for a much better developer experience when it comes to maintaining the thing. Double win!

More information

  • Use and Reuse Everything in SVG… Even Animations! (Mariana Beldi, Jan 28, 2020)
  • SVG `use` with External Reference, Take 2 (Chris Coyier, May 30, 2017)
  • How to Make Charts with SVG (Robin Rendle, Aug 2, 2017)
  • A Handmade SVG Bar Chart (featuring some SVG positioning gotchas) (Robin Rendle, Nov 1, 2016)
  • 13: SVG as an Icon System – The `use` Element (Chris Coyier, Oct 27, 2015)
  • 15: SVG Icon System – Where the defs go (Chris Coyier, Oct 27, 2015)

The post Too Many SVGs Clogging Up Your Markup? Try `use`. appeared first on CSS-Tricks.


Web Frameworks: Why You Don’t Always Need Them

Css Tricks - Tue, 03/09/2021 - 1:37pm

Richard MacManus explaining Daniel Kehoe’s approach to building websites, which he calls “Stackless”:

There are three key web technologies underpinning Kehoe’s approach:

  • ES6 Modules: JavaScript ES6 can support import modules, which are also supported by browsers.
  • Module CDNs: JavaScript modules can now be downloaded from third-party content delivery networks (CDNs).
  • Custom HTML elements: Developers can now create custom HTML tags, via Web Components.

Using no build process and only features that are built into browsers still buys you a pretty powerful setup. You can still use stuff off npm. You can still get templating. You can still build with components. You still get isolation where needed.
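To make that concrete, here’s a minimal sketch of the combination, assuming it lives inside a <script type="module"> tag. The CDN URL, the lit-html import, and the <hello-world> element name are my own illustrative choices, not something from the linked article:

// No bundler and no build step: an ES6 module pulled straight from a CDN,
// plus a custom element for component-style structure.
import { html, render } from "https://cdn.skypack.dev/lit-html";

customElements.define("hello-world", class extends HTMLElement {
  connectedCallback() {
    // Templating comes from the imported library, rendered into the element itself.
    render(html`<p>Hello, ${this.getAttribute("name") ?? "world"}!</p>`, this);
  }
});

Drop <hello-world name="CSS-Tricks"></hello-world> anywhere in the markup and it renders with nothing installed locally.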

I’d say today you’re:

  • Giving up some DX (hot module reloading, JSX, framework doodads)
  • Gaining some DX (can jump into project and just start working)
  • Giving up some performance (no tree shaking, loads of network requests)
  • Widening your hiring pool (more people know core technologies than specific tools)

But it’s not hard to imagine a tomorrow where we give up less and gain more, making the tools we use today less necessary. I’m quite sure we’ll always still find a way to jam more tools into what we’re doing. Hammer something something nail.


The post Web Frameworks: Why You Don’t Always Need Them appeared first on CSS-Tricks.


Firebase Crash Course

Css Tricks - Tue, 03/09/2021 - 6:06am

This article is going to help you, dear front-end developer, understand all that is Firebase. We’re going to cover lots of details about what Firebase is, why it can be useful to you, and show examples of how. But first, I think you’ll enjoy a little story about how Firebase came to be.

Eight years ago, Andrew Lee and James Tamplin were building a real-time chat startup called Envolve. The service was a hit. It was used on sites of celebrities like Ricky Martin and Limp Bizkit. Envolve was for developers who didn’t want to—yet again—build their own chat widget. The value of the service came from the setup ease and speed of message delivery. Envolve was just a chat widget. The process was simple. Place a script tag on a page. Once the widget booted up, it would do everything for you. It was like having a database and server for chat messages already set up.

James and Andrew noticed a peculiar trend as the service became more popular. Some developers would include the widget on the page, but make it invisible. Why would someone want a chat widget with hidden messages? Well, it wasn’t just chat data being sent between devices. It was game data, high scores, app settings, to-dos, or whatever the developer needed to quickly send and synchronize. Developers would listen for new messages in the widget and use them to synchronize state in their apps. This was an easy way to create real-time experiences without the need for a back end.

This was a light bulb moment for the co-founders of Firebase. What if developers could do more than send chat messages? What if they had a service to quickly develop and scale their applications? Remove the responsibility of managing back-end infrastructure. Focus on the front end. This is how Firebase was born.

What is Firebase?

Isn’t Firebase a database? No… yes… mostly no. Firebase is a platform that provides the infrastructure for developers and tools for marketers. But it wasn’t always that way.

Seven years ago, Firebase was a single product: a real-time cloud database. Today, Firebase is a collection of 19 products. Each product is designed to empower a part of an application’s infrastructure. Firebase also gives you insight into how your app is performing, what your users are doing, and how you can make your overall app experience better. While Firebase can make up the entirety of your app’s back-end, you can use each product individually as well.

Here’s just a sampling of those 19 products:

  • Hosting: Deploy a new version of your site for every GitHub pull request.
  • Firestore: Build apps that work in real time, even while offline, with no server required.
  • Auth: Authenticate and manage users with a myriad of providers.
  • Storage: Manage user-generated content like photos, videos, and GIFs.
  • Cloud Functions: Server code driven by events (e.g. record created, user sign up, etc.).
  • Extensions: Pre-packaged functions set up with UI (e.g. Stripe payments, text translations, etc.)
  • Google Analytics: Understand user activity, organized by segments and audiences.
  • Remote Config: Key-value store with dynamic conditions that’s great for feature gating.
  • Performance Monitoring: Page load metrics and custom traces from real usage.
  • Cloud Messaging: Cross-platform push notifications.

Whew. That’s a lot, and I didn’t even list the other nine products. That’s okay. There’s no requirement to use every specific service or even more than one thing. But now it’s time to make these services a little more tangible and showcase what you can do with Firebase.

A great way to learn is by seeing something just work. The first section below will get you set up with Firebase services. The sections following that will highlight the Firebase details of a demo app to showcase how to use Firebase features. While this is a relatively thorough guide to Firebase, it’s not a step-by-step tutorial. The goal is to highlight the working bits in the embedded demos for the sake of covering more ground in this one article. If you want step-by-step Firebase tutorials, leave a comment to hype me up to write one.

A basic Firebase setup

This section is helpful if you plan on forking the demo with your own Firebase back end. You can skip this section if you’re familiar with Firebase Projects or just want to see the shiny demos.

Firebase is a cloud-based service which means you need to do some basic account setup before using its services. Firebase development is not tied to a network connection, however. It’s very much worth noting that you can (and usually should) run Firebase locally on your machine for development. This guide demonstrates building an app with CodePen, which means it needs a cloud-connected service. The goal here is to create your personal back end with Firebase and then retrieve the configuration the front end needs to connect to it.

Create a Firebase Project

Go to the Firebase Console and create a new project. You’ll be asked if you want to set up Google Analytics. None of these examples use it, so you can skip it and always add it back in later if needed.

Create a web Firebase App

Next, you’ll see options for creating an “App.” Click the web option and give it a name—any name will do. Firebase Projects can hold multiple “apps.” I’m not going to get deep into this hierarchy because it’s not too important when getting started. Once the app is created you’ll be given a configuration object.

let firebaseConfig = { apiKey: "your-key", authDomain: "your-domain.firebaseapp.com", projectId: "your-projectId", storageBucket: "your-projectId.appspot.com", messagingSenderId: "your-senderId", appId: "your-appId", measurementId: "your-measurementId" };

This is the configuration you’ll use on the front end to connect to Firebase. Don’t worry about any of these properties in terms of security. There’s nothing insecure about including these properties in your front-end code. You’ll learn in one of the sections below how security works in Firebase.

Now it’s time to represent this “App” you created in code. This “app” is merely a container that shares logic and authentication state across different Firebase services. Firebase provides a set of libraries that make development a lot easier. In this example I’ll be using them from a CDN, but they also work well with module bundlers like Webpack and Rollup.

// This pen adds Firebase via the "Add External Scripts" option in codepen // https://www.gstatic.com/firebasejs/8.2.10/firebase-app.js // https://www.gstatic.com/firebasejs/8.2.10/firebase-auth.js // Create a Project at the Firebase Console // (console.firebase.google.com) let firebaseConfig = { apiKey: "your-key", authDomain: "your-domain.firebaseapp.com", projectId: "your-projectId", storageBucket: "your-projectId.appspot.com", messagingSenderId: "your-senderId", appId: "your-appId", measurementId: "your-measurementId" }; // Create your Firebase app let firebaseApp = firebase.initializeApp(firebaseConfig); // The auth instance console.log(firebaseApp.auth());

Great! You are so, so close to being able to talk to your very own Firebase back end. Now you need to enable the services you intend to use.

Enable authentication providers

The examples below use authentication to sign in users and secure data in the database. When you create a new Firebase Project, all of your authentication providers are turned off. This is initially inconvenient, but essential for security. You don’t want users trying to sign in with providers your back end does not support.

To turn on a provider, go to the Authentication tab in the side navigation and then click the “Sign-in method” button up top. Below, you’ll see a large list of providers such as Email and Password, Google, Facebook, GitHub, Microsoft, and Twitter. For the examples below, you will need to turn on Google and Anonymous. Google is near the top of the list and Anonymous is at the bottom. Selecting Google will ask you to provide a support email. I recommend putting in your own personal email while testing, but production apps should have a dedicated email.

If you plan on using authentication within CodePen, then you’ll also need to add CodePen as an authorized domain. You can add authorized domains towards the bottom of the “Sign-in method” tab.

An important note on this authorization: this will allow any project hosted on cdpn.io to sign into your Firebase Project. There’s not a lot of risk for short-term demo purposes. There’s no cost to using Firebase Auth except for phone number authentication. Ideally, you would not want to keep this as an authorized domain if you plan on using this app in a production capacity.

Now, on to the last step in the Firebase Console: Creating a Firestore database!

Create a Firestore database

Click on Firestore in the left navigation. From here, you’ll need to click the button to create the Firestore database. You’ll be asked if you want to start in “production mode” or “test mode.” You want “test mode” for this example. If you’re worried about security, we’ll cover that in the last section.

Now that you have the basics down, let’s get to some real-life use cases.

Authenticating users

The best parts of an app are usually behind a sign-up form. Why don’t we just let the user in as a guest so they can see for themselves? Systems often require accounts because the back end isn’t designed for guests. There’s a system requirement to have a sacred userId property to save any record belonging to a user. This is where “guest” accounts are helpful. They provide low friction for users to join while giving them a temporary userId that appeases the system. Firebase Auth has a process for doing just this.

Setting up anonymous auth

One of my favorite features about Firebase Auth is anonymous auth. There are two perks. One, you can authenticate a user without taking any input (e.g. passwords, phone number, etc.). Two, you get really good at spelling anonymous. It’s really a win-win situation.

Take this CodePen for example.


It’s a form that lets the user decide if they want to sign in with Google or as a guest. Most of the code in this example is specific to the user interface. The Firebase bits are actually pretty easy.

// Firebase-specific code let firebaseConfig = { /* config */ }; let firebaseApp = firebase.initializeApp(firebaseConfig); // End Firebase-specific code let socialForm = document.querySelector('form.sign-in-social'); let guestForm = document.querySelector('form.sign-in-guest'); guestForm.addEventListener('submit', async submitEvent => { submitEvent.preventDefault(); let formData = new FormData(guestForm); let displayName = formData.get('name'); let photoURL = await getRandomPhotoURL(); // Firebase-specific code let { user } = await firebaseApp.auth().signInAnonymously(); await user.updateProfile({ displayName, photoURL }); // End Firebase-specific code });

In the 17 lines of code above (sans comments), only five of them are Firebase-specific. Four lines are needed to import and configure. Two lines needed to signInAnonymously() and user.updateProfile(). The first sign-in method makes a call to your Firebase back end and authenticates the user. The call returns a result that contains the needed properties, such as the uid. Even with guest users, you can associate data to a user in your back end with this uid. After the user is signed in, the example calls updateProfile on the user object. Even though this user is a guest, they can still have a display name and a profile photo.

Setting up Google auth

The great news is that this works the same exact way with all other permanent providers, like Email and Password, Google, Facebook, Twitter, GitHub, Microsoft, Phone Number, and so much more. Implementing the Google Sign-in only takes a few lines of code as well.

socialForm.addEventListener('submit', submitEvent => { submitEvent.preventDefault(); // Firebase-specific code let provider = new firebase.auth.GoogleAuthProvider(); firebaseApp.auth().signInWithRedirect(provider); // End Firebase-specific code });

Each social style provider initiates a redirect-based authentication flow. That’s a fancy way of saying that the signInWithRedirect method will go to the sign-in page owned by the provider and then return back to your app with the authenticated user. In this case, the user is redirected to Google’s sign-in page, signs in, and then is returned back to your app.

Monitoring authentication state

How do you get the user back from this redirect? There are a few ways, but I’m going to go with the most common. You can detect the authentication state of any active user whether logged in or out.

firebaseApp.auth().onAuthStateChanged(user => { if(user != null) { console.log(user.toJSON()); } else { console.log("No user!"); } });

The onAuthStateChanged method updates whenever there is a change in a user’s authentication state. It will fire initially on page load telling you if a user is logged in, logs in, or logs out. This allows you to build a UI that reacts to these state changes. It also fits well into client-side routers because it can redirect a user to the proper page with a small amount of code.

The app in this case just uses <template> tags to replace the contents of the “phone” element. If the user is logged in, the app routes to the new template.

<div class="container"> <div class="phone"> <!-- Phone contents replaced with template tags --> </div> </div>

This provides a simple relationship. The .phone is the “root view.” Each <template> tag is a “child view.” The authentication state determines which view is shown.

firebaseApp.auth().onAuthStateChanged(user => { if(user != null) { // Show demo view routeTo("demo", firebaseApp, user); } else { console.log("No user!"); // Show log in page routeTo("signIn", firebaseApp); } });

The embedded demo isn’t “production ready” as the requirement was just to work inside a single CodePen. The goal was to make it easy to read with each “view” contained within a template tag. This loosely emulates what a common routing solution would look like with a framework.

One important bit to note here is that the user object is passed down to the routeTo function. Getting the logged-in state is asynchronous. I find that it’s much easier to pass the user state down for the view than to make async calls from within the view. Many framework routers have a spot for this kind of async data fetching.
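For reference, here’s a rough sketch of what a routeTo() helper like the one above could look like. This is an assumption about its shape rather than the demo’s actual code, and the [data-display-name] hook is hypothetical:

// Swap the matching <template> into the .phone "root view" and hand the
// already-resolved user object to the view.
function routeTo(viewName, firebaseApp, user) {
  const root = document.querySelector(".phone");
  const template = document.getElementById(viewName); // e.g. <template id="demo">
  root.innerHTML = "";
  root.appendChild(template.content.cloneNode(true));
  if (user != null) {
    // The view can read the user synchronously instead of making its own async call.
    const nameEl = root.querySelector("[data-display-name]");
    if (nameEl) nameEl.textContent = user.displayName;
  }
}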

Convert guests to permanent users

The <template id="demo"> tag has a submit button for converting guests to permanent users. This is done by having the user sign in with the main provider (again, Google in this case).

let convertForm = document.querySelector('form.convert'); convertForm.addEventListener("submit", submitEvent => { submitEvent.preventDefault(); let provider = new firebase.auth.GoogleAuthProvider(); firebaseApp.auth().currentUser.linkWithRedirect(provider); });

Using the linkWithRedirect method will kick the user out to authenticate with the social provider. When the redirect returns the account will be merged. There’s nothing you have to change within the onAuthStateChanged method that controls the “child view” routing. What’s important to note is that the uid remains the same.

Handling account merging errors

A few edge cases can pop up when you merge accounts. In some cases, a user could already have created an account with Google and is now trying to merge that existing account. There are many ways you can handle this scenario depending on how you want your app to work. I’m not going to get deep into this because we have a lot more to cover, but it’s important to know how to handle these errors.

async function checkForRedirect() { let auth = firebaseApp.auth(); try { let result = await auth.getRedirectResult(); } catch (error) { switch(error.code) { case 'auth/credential-already-in-use': { // You can check for the provider(s) in use let providers = await auth.fetchProvidersForEmail(error.email); // Then decide what strategy to take. A possible strategy is // notifying the user and asking them to sign in as that account } } } }

The code above uses the getRedirectResult method to detect if a user has directly returned from a social login redirect. If so, there’s a lot of information in that result. Most importantly here, we want to know if there was a problem. Firebase Auth will throw an error and provide relevant information on the error, such as the email and credentials, which will allow you to continue to merge the account. That’s not always what you want to do. In this case, I would probably indicate to the user that the account exists and prompt them to sign in with it. But I digress; I could talk sign-in forms for ages.
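As a sketch of one such strategy, you could sign the user straight into the existing account with the credential that Firebase attaches to the error. This assumes you don’t need to migrate any data saved under the guest uid:

async function handleMergeError(error) {
  if (error.code === 'auth/credential-already-in-use') {
    // error.credential is the Google credential the user just authenticated with.
    // Signing in with it switches to the existing account; data created under
    // the old guest uid is not moved by this call.
    await firebaseApp.auth().signInWithCredential(error.credential);
  }
}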

Now it’s on to building the data visualization (okay, it’s just a pie-chart) and providing it with a realtime data stream.

Setting up the data visualization

I’ll be honest. I have no idea what this app really does. I really wanted to test out building pie charts with a conic-gradient. I was excited about using CSS Custom Properties to change the values of the chart and syncing that in real time with a database, like Firestore. Let’s take a brief little detour to discuss how conic-gradient works.

A conic-gradient is a surprisingly well-supported CSS feature. Its key feature is that its color-stops are placed at the circumference of the circle.

.pie-chart { background-image: conic-gradient( purple 10%, /* 10% of the circumference */ magenta 0 20%, /* start at 0 go 20%, acts like 10% */ cyan 0 /* Fill the rest */ ); }

You can build a pie chart with some quick math: lastStopPercent + percent. I stored these values in four CSS Custom Properties in the app pen: --pie-{n}-value (replace n with a number). Those values are used in another custom property that serves like a computed function.

:root { --pie-1-value: 10%; --pie-2-value: 10%; --pie-3-value: 80%; --pie-1-computed: var(--pie-1-value); --pie-2-computed: 0 calc(var(--pie-1-value) + var(--pie-2-value)); --pie-3-computed: 0 calc(var(--pie-2-value) + var(--pie-3-value)); }

Then the computed values are set in the conic-gradient.

background-image: conic-gradient( purple var(--pie-1-computed), magenta var(--pie-2-computed), cyan 0 );

The last computed value, --pie-3-computed, is ignored since it will always fill to the end (kinda like z for SVG paths). I think it’s still a good idea to set it in JavaScript to make the whole thing feel like it makes sense.

function setPieChartValue(percentage, index) { let root = document.documentElement; root.style.setProperty(`--pie-${index+1}-value`, `${percentage}%`); } let percentages = [25, 35, 60]; percentages.forEach(setPieChartValue);

You can hook up the pie chart with any data set with this newfound knowledge. The Firestore database has a .onSnapshot() method that streams data back from your database.

const fullPathDoc = firebaseApp.firestore().doc('/users/1234/expenses/3-2021'); fullPathDoc.onSnapshot(snap => { const { items } = snap.data(); items.forEach((item, index) => setPieChartValue(item.value, index)); });

A real time update will trigger the .onSnapshot() method whenever a value changes in your Firestore database at that document location. Now you might be asking yourself, what is a document location? How do I even store data in this database in the first place? Let’s take a dive into how Firestore works and how to model data in NoSQL databases.

How to model data in a NoSQL database

Firestore is a document (NoSQL) database. It provides a hierarchical pattern of collections, which is a list of documents. Think of a document like a JSON object that has many more data types. A document can have a collection itself, which is known as a sub-collection. This is good for structuring data in a “parent-child” or hierarchical pattern.

If you haven’t followed the pre-requisites section up top, give it a read to make sure you have a Firestore database created before using any of this code yourself.

Just like Auth, you first need to import Firestore up top.

// This pen adds Firebase via the "Add External Scripts" option in codepen // https://www.gstatic.com/firebasejs/8.2.10/firebase-app.js // https://www.gstatic.com/firebasejs/8.2.10/firebase-auth.js // https://www.gstatic.com/firebasejs/8.2.10/firebase-firestore.js let firebaseConfig = { /* config */ }; let firebaseApp = firebase.initializeApp(firebaseConfig);

This tells the Firebase client library how to connect your Firestore database. Using this setup, you can create a reference to a piece of data in your database.

// Reference to collection stored at: '/expenses' const expensesCol = firebaseApp.firestore().collection('expenses'); // Retrieve a snapshot of data (not in realtime) // Top level await is coming! (v8.dev/features/top-level-await) const snapshot = await expensesCol.get(); const expenses = snapshot.docs.map(d => d.data());

The code above creates a “Collection Reference” and then calls the .get() method to retrieve the data snapshot. This is not the data itself; it’s a wrapper that has a lot of helpful methods and metadata about the data. The last line “unwraps” the snapshot by iterating over the .docs array and calling the .data() function for each “Document Snapshot.” Note that this isn’t real time, but that’s coming up in just a bit!

What’s really important to grok here is the data structure (sometimes called a data model). This app stores the “expenses” of a user. Let’s say the document has the following structure.

{ uid: '1234', items: [ { label: "Food", value: 10 }, { label: "Services", value: 24 }, { label: "Rent", value: 30 }, { label: "Oops", value: 38 } ] }

The properties of a document are called a field. This document has an array field of items. Each item contains a label string and a value number. There’s also a string field named uid that stores the id of the user who owns the data. This structure simplifies the process of iterating over the values to create the pie chart. That part is solved, but how do we figure out how to get to a specific user’s expenses?

A traditional way of retrieving data based on a constraint is by using a query. That’s something you can do in Firestore with a .where() method.

// Let's pretend currentUser.uid === '1234' const currentUser = firebaseApp.auth().currentUser; // Reference to collection stored at: '/expenses' const expensesCol = firebaseApp.firestore().collection('expenses'); // Query for the expenses belonging to uid == 1234 const userQuery = expensesCol.where('uid', '==', currentUser.uid); const snapshot = await userQuery.get();

This structure is fine but doesn’t take full advantage of the hierarchy provided by collections and, more importantly, sub-collections. Instead, you can structure your data in a “parent-child” relationship. Structures in Firestore work a lot like URLs for a website. You can design the paths to contain route parameter-like wildcards.

/users/:uid/expenses/month-year

The path above structures the data with a top-level collection of /users, then a document assigned to a uid, followed by a sub-collection. Sub-collections can exist even if there’s no existing parent document. Each sub-collection contains a document where you can retrieve the expenses by putting in the month as a number (e.g. 3 for March) and the year (2021).

// Let's pretend currentUser.uid === '1234' const currentUser = firebaseApp.auth().currentUser; // Reference to collection stored at: '/users' const usersCol = firebaseApp.firestore().collection('users'); // Reference to document stored at: '/users/1234'; const userDoc = usersCol.doc(currentUser.uid); // Reference to sub-collection stored at: '/users/1234/expenses' const userExpensesCol = userDoc.collection('expenses'); // Reference to document stored at: '/users/1234/expenses/3-2021'; const marchDoc = userExpensesCol.doc('3-2021'); // Alternatively, you could express a full path: const fullPathDoc = firebaseApp.firestore().doc('/users/1234/expenses/3-2021');

The example above shows that, with a little bit of modeling, you can retrieve a piece of data without needing to write a query. This won’t always be the case for each and every data structure. In many cases, it will be valid to have a top-level collection. It comes down to the kind of queries you want to write and how you want to secure your data. It’s good, however, to know your options when modeling the data. If you want to learn more about this topic, my colleague Todd Kerpelman has recorded a comprehensive series all about Firestore.

We’re going to go with the hierarchical approach in this app. With this data structured let’s stream in real time.

Streaming data to the visualization

The section above details how to retrieve data in a “one-time” manner with .get(). Both documents and collections also have a .onSnapshot() method that allows you to stream the data in realtime.

const fullPathDoc = firebaseApp.firestore().doc('/users/1234/expenses/3-2021'); fullPathDoc.onSnapshot(snap => { const { items } = snap.data(); items.forEach((item, index) => { console.log(item); }); });

Whenever an update happens for the data stored at fullPathDoc, Firestore will stream the update to all connected clients. Using this data sync you can set all the CSS Custom Properties for the pie chart.

let db = firebaseApp.firestore(); let { uid } = firebaseApp.auth().currentUser; let root = document.documentElement; let marchDoc = db.doc(`users/${uid}/expenses/3-2021`); marchDoc.onSnapshot(snap => { let { items } = snap.data(); items.forEach((item, index) => { root.style.setProperty(`--pie-${index+1}-value`, `${item.value}%`); }); });

Now for the fun part! Go update the data in the database and see the pie slices move around.
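If you’d rather trigger that update from code than from the Firebase Console, any write to the same document works. Here’s a minimal sketch, assuming the items array structure described above (the numbers are made up):

let db = firebaseApp.firestore();
let { uid } = firebaseApp.auth().currentUser;

// Every connected onSnapshot() listener receives this change almost instantly.
await db.doc(`users/${uid}/expenses/3-2021`).update({
  items: [
    { label: "Food", value: 20 },
    { label: "Services", value: 30 },
    { label: "Rent", value: 50 }
  ]
});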

Honestly, that is so much fun. I’ve been working on Firebase for nearly seven years and I never get tired of seeing the lightning-fast data updates. But the job is not yet complete! The database is insecure as it can be updated by anyone anywhere. It’s time to make sure it’s secure only to the user who owns that data.

Securing your database

If you’re new to Firebase or back-end development, you might be wondering why the database is insecure at the current moment. Firebase allows you to build apps without running your own server. This means you can connect to a database directly from the browser. So, if it’s that easy to access the database, what keeps someone from coming along and doing something malicious? The answer is Firebase Security Rules.

Security Rules are how you secure access to your Firebase services. You write a set of rules that specify how data is accessed in your back end. Firebase evaluates these rules for each request that comes into your back end and only allows the request if it passes the rule. In other words, you write a set of rules and Firebase runs them on the server to secure access to your services.

Security Rules are like a router

Security Rules are a custom language that works a lot like a router.

rules_version = '2'; service cloud.firestore { match /databases/{database}/documents { // When a request comes in for a "user/:userId" // let's allow the read or write (not very secure) // Don't copy this in your code plz! match /users/{userId} { allow read, write: if userId == "david"; } } }

The example above starts with a rules_version statement. Don’t worry too much about that, but it’s how you tell Firebase the version of the Security Rules language you want to use. Currently, it’s recommended to use version 2. Then the example goes on to create a service declaration. This tells Firebase what service you are trying to secure, which is Firestore in this case. Now, on to the important part: the match statements.

A match statement is the part that makes rules work like a router. The first match statement in the example establishes that you are matching for documents in the database. It’s a very general statement that says, Hey, look in the documents section. The next match statement is more specific. It looks to match a document within the users collection. The path syntax of /users/{userId} is similar to a routing syntax of /users/:userId where the :userId syntax notes a route parameter. In Security Rules, the {userId} syntax works just like route parameter, except that it’s called “wildcard” here. Any user within the collection can be matched with this statement. What happens when the user is matched? You use the allow statement to control the access.

The allow statement evaluates an expression and, if the result is true, it allows the operation. If it’s false, the operation is rejected. What’s useful about the allow statement is that there’s a lot of useful information to use in the containing match block. One useful piece of information is the wildcard itself. The example above uses the userId wildcard like a variable and tests to see if it matches the value of "david". Only a userId of "david" will allow the read or write operation.

Now that’s not a very useful rule, but it helps to start simple. Let’s take a moment to remember the structure of the database. Security Rules are a lot like an annotation on top of your data structure. You can make notes at the document paths that enforce your data access. This app stores data at a /users/{userId} collection and expenses at a /users/{userId}/expenses/month-year sub-collection. The security strategy is to ensure that only the authenticated user can read or write their data.

rules_version = '2'; service cloud.firestore { match /databases/{database}/documents { match /users/{userId}/{documents=**} { allow read, write: if request.auth.uid == userId; } } }

The example above starts off the same but starts to change when matching the /users/{userId} path. There’s this weird {documents=**} syntax tacked on at the end. This is called the recursive wildcard and it’s a way of cascading a rule to sub-collections—meaning any sub-collection of /users/{userId} will have the same rules applied to it. This is great in the current use case because both /users/{userId} and /users/{userId}/expenses/month-year should follow the same rule.

Inside of that match, the allow statement has been updated with a new variable named request. Security Rules come with an entire set of variables to help you write sophisticated rules. This variable is how you evaluate if the request comes from an authenticated user. The allow statement evaluates if the authenticated user has a uid that matches the {userId} wildcard. If that statement evaluates to true, the read or write is allowed. If the user is not authenticated or does not match the {userId} wildcard, no operation is allowed. Therefore, it’s secure!

The following snippet shows two requests from an authenticated user.

// Let's pretend currentUser.uid === '1234' const currentUser = firebaseApp.auth().currentUser; // The authenticated user owns this sub-collection const ownedDoc = firebaseApp.firestore().doc('/users/1234/expenses/3-2021'); // The authenticated user DOES NOT own this sub-collection const notOwnedDoc = firebaseApp.firestore().doc('/users/abcxyz/expenses/3-2021'); try { const ownedSnapshot = await ownedDoc.get(); const notOwnedSnapshot = await notOwnedDoc.get(); } catch (error) { // This will result in an error because the `notOwnedDoc` request will fail the security rule }

Just like that, you have secured a collection and a sub-collection. But what about the rest of the data in the database? What happens if you don’t write any rules for data at those paths? The example above will only allow access that matches the allow statement for the /users collection and its sub-collections. No other reads or writes will work, at all, for any other collections or sub-collections. In other words, if you don’t write an allow statement for a path, then it won’t allow any reads or writes. This is a great default for security because it makes you explicitly enforce your access at the appropriate paths. You can mess this up, however!

rules_version = '2'; service cloud.firestore { match /databases/{database}/documents { // Ahh!!! Never do this in a production app!!! // This will negate any rule you write below!! // Don't copy and paste this into your rules!! match /{document=**} { allow read, write: if true; } match /users/{userId}/{documents=**} { allow read, write: if request.auth.uid == userId; } } }

The sample above adds a match block that matches the path of /{document=**}. This is a recursive wildcard path that matches every document in the entire database. The allow statement always evaluates to true because it is true. This match block creates an overlapping match statement, meaning two or more match blocks that match on the same path. This isn’t like CSS where the last rule wins. Security Rules will evaluate each rule matched and if any of them evaluate to true, the operation is allowed. Therefore, the global recursive wildcard will negate any secure rule you have below. The global recursive wildcard is a strategy for opening up your database for a short-term “test mode” where there is no important non-publicly accessible data (nothing private or important) saved. Outside of that, I don’t recommend using it.

Write rules locally or in the console

The final topic to touch upon is where you write and save your rules. You have two options. The first is inside of the Firebase Console. Within the Firestore data viewer you’ll see an option for “Rules.” This tab shows you the rules that are active for your database. From here, you can write and even test scenarios against your rules. This approach is recommended for those who are getting started and trying to become familiar with Security Rules.

Another option is to write rules locally on your machine and use the Firebase CLI to deploy them to the console. This allows you to keep them in source control and write tests for them to make sure they continue to work as your codebase evolves. This is the recommended approach for production apps and teams.

It’s worth noting again that your Firebase configuration used to create a Firebase App is not insecure. It’s the equivalent of someone knowing the domain name of your site. Someone knowing your domain name doesn’t make your site insecure. Security Rules are Firebase’s way of providing secure access to your data and your services.

Wrapping things up

That was a lot of Firebase information, especially about all the stuff about rules (security is important!). The information in this article signs in users, merges guest accounts, structures data, streams it to a visualization in real time, and makes sure it’s all secure. You can use these concepts to build so many different applications.

There are so many things I want you to know if you want to continue building with Firebase, such as the Emulator Suite. The app built in this article can run locally on your machine, which is a far easier development experience and great for testing in CI/CD environments. There are also a lot of great Firebase tools and framework libraries. Here are some links worth checking out.

If there’s one thing I hope you saw in this article, it’s that there aren’t too many lines of front-end Firebase code. A line here to sign in a user, a few lines there to get the data, but mostly code that’s specific to the app itself. The goal of Firebase is to allow you to build quickly. Remove the responsibility of managing backend infrastructure. Focus on the front end.

The post Firebase Crash Course appeared first on CSS-Tricks.


AutomateWoo Brings Automated Communications to Bookings

Css Tricks - Tue, 03/09/2021 - 5:00am

AutomateWoo is this handy extension for WooCommerce that triggers actions based on your online store’s activity. Someone abandoned their cart? Remind them by email. Someone made a purchase? Ask them to leave a review or follow up to see how they’re liking the product so far.

This sort of automated communication is gold. Automatically reaching out to customers based on their own activity keeps them engaged and encourages more activity, hopefully the kind that makes you more money.

AutomateWoo now integrates with WooCommerce Bookings and it includes a new trigger: booking status changes. So now, when a customer’s appointment changes—say from “pending” to “confirmed”—AutomateWoo can perform an action, like sending an email confirming the customer’s booked appointment. Or asking for feedback after the appointment. Or reminding a customer to book a follow-up appointment.

Or really anything else you can think of based on the status of a booking. It’s like having your own version of Zapier right in WordPress, but without having to manage another platform.

Once a customer’s booking status for a river rafting adventure is confirmed, they’ll get an email not only confirming the appointment but with any additional information they might need to know. OK, real-life situation.

Sometime in the middle of last year, I was working with a client that takes online appointments, or bookings. The user finds an open spot on a calendar and books that time to come in for service. A lot like making a hair appointment.

Well, what happens when a bunch of appointments need to be canceled? I don’t need to tell you what a big deal that was this time last year. This particular client had the unfortunate task of reaching out to each and every customer and updating the status of over 200 appointments.

That was awful. Fortunately, we found a way to bulk update the appointments. Then we sent a mass email to everyone with a canceled appointment. Not the most elegant solution, but it got the job done. I’ll tell you what, though, it would have been a heckuva lot less work and the communications would have been much more polished if we had this AutomateWoo + WooCommerce Bookings combo. Simply create a trigger for a canceled status change, write the email, and bulk update the bookings statuses.

AutomateWoo and WooCommerce Bookings are both paid extensions for WooCommerce. The licenses will run you $349, which includes a year of updates and customer support. The value you get out of it really depends on your overall budget and how much activity happens on your site. I can tell you that would’ve been a stellar deal when we were trying to resolve 200+ canceled appointments. And if it leads to additional bookings, upsells, and fewer cancelations (because, hey, research has shown that 75% of email revenue is generated by triggered campaigns), then it could very well pay for itself.

You can give WooCommerce Bookings a front-end test drive with a live demo. Both extensions also come with a 30-day money-back guarantee, giving you a good amount of time to try them out.

The post AutomateWoo Brings Automated Communications to Bookings appeared first on CSS-Tricks.


Web Components Are Easier Than You Think

Css Tricks - Mon, 03/08/2021 - 6:06am

When I’d go to a conference (when we were able to do such things) and see someone do a presentation on web components, I always thought it was pretty nifty (yes, apparently, I’m from 1950), but it always seemed complicated and excessive. A thousand lines of JavaScript to save four lines of HTML. The speaker would inevitably either gloss over the oodles of JavaScript to get it working or they’d go into excruciating detail and my eyes would glaze over as I thought about whether my per diem covered snacks.

But in a recent reference project to make learning HTML easier (by adding zombies and silly jokes, of course), the completist in me decided I had to cover every HTML element in the spec. Beyond those conference presentations, this was my first introduction to the <slot> and <template> elements. But as I tried to write something accurate and engaging, I was forced to delve a bit deeper.

And I’ve learned something in the process: web components are a lot easier than I remember.

Either web components have come a long way since the last time I caught myself daydreaming about snacks at a conference, or I let my initial fear of them get in the way of truly knowing them — probably both.

I’m here to tell you that you—yes, you—can create a web component. Let’s leave our distractions, fears, and even our snacks at the door for a moment and do this together.

Let’s start with the <template>

A <template> is an HTML element that allows us to create, well, a template—the HTML structure for the web component. A template doesn’t have to be a huge chunk of code. It can be as simple as:

<template> <p>The Zombies are coming!</p> </template>

The <template> element is important because it holds things together. It’s like the foundation of a building; it’s the base from which everything else is built. Let’s use this small bit of HTML as the template for an <apocalyptic-warning> web component—you know, as a warning when the zombie apocalypse is upon us.

Then there’s the <slot>

<slot> is merely another HTML element just like <template>. But in this case, <slot> customizes what the <template> renders on the page.

<template> <p>The <slot>Zombies</slot> are coming!</p> </template>

Here, we’ve slotted (is that even a word?) the word “Zombies” in the templated markup. If we don’t do anything with the slot, it defaults to the content between the tags. That would be “Zombies” in this example.

Using <slot> is a lot like having a placeholder. We can use the placeholder as is, or define something else to go in there instead. We do that with the name attribute.

<template> <p>The <slot name="whats-coming">Zombies</slot> are coming!</p> </template>

The name attribute tells the web component which content goes where in the template. Right now, we’ve got a slot called whats-coming. We’re assuming zombies are coming first in the apocalypse, but the <slot> gives us some flexibility to slot something else in, like if it ends up being a robot, werewolf, or even a web component apocalypse.

Using the component

We’re technically done “writing” the component and can drop it in anywhere we want to use it.

<apocalyptic-warning> <span slot="whats-coming">Halitosis Laden Undead Minions</span> </apocalyptic-warning> <template> <p>The <slot name="whats-coming">Zombies</slot> are coming!</p> </template>

See what we did there? We put the <apocalyptic-warning> component on the page just like any other <div> or whatever. But we also dropped a <span> in there that references the name attribute of our <slot>. And what’s between that <span> is what we want to swap in for “Zombies” when the component renders.

Here’s a little gotcha worth calling out: custom element names must have a hyphen in them. It’s just one of those things you’ve gotta know going into things. The spec prescribes that to prevent conflicts in the event that HTML releases a new element with the same name.
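If you forget the hyphen, the browser refuses the registration outright. A quick sketch, using made-up names rather than the component from this article:

// Custom element names must contain a hyphen; the browser enforces it.
try {
  customElements.define("zombiealert", class extends HTMLElement {});
} catch (error) {
  console.log(error.name); // "SyntaxError" because the name has no hyphen
}

// This one registers just fine.
customElements.define("zombie-alert", class extends HTMLElement {});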

Still with me so far? Not too scary, right? Well, minus the zombies. We still have a little work to do to make the <slot> swap possible, and that’s where we start to get into JavaScript.

Registering the component

As I said, you do need some JavaScript to make this all work, but it’s not the super complex, thousand-lined, in-depth code I always thought. Hopefully I can convince you as well.

You need a constructor function that registers the custom element. Otherwise, our component is like the undead: it’s there but not fully alive.

Here’s the constructor we’ll use:

// Defines the custom element with our appropriate name, <apocalyptic-warning> customElements.define("apocalyptic-warning", // Ensures that we have all the default properties and methods of a built in HTML element class extends HTMLElement { // Called anytime a new custom element is created constructor() { // Calls the parent constructor, i.e. the constructor for `HTMLElement`, so that everything is set up exactly as we would for creating a built in HTML element super(); // Grabs the <template> and stores it in `warning` let warning = document.getElementById("warningtemplate"); // Stores the contents of the template in `mywarning` let mywarning = warning.content; const shadowRoot = this.attachShadow({mode: "open"}).appendChild(mywarning.cloneNode(true)); } });

I left detailed comments in there that explain things line by line. Except the last line:

const shadowRoot = this.attachShadow({mode: "open"}).appendChild(mywarning.cloneNode(true));

We’re doing a lot in here. First, we’re taking our custom element (this) and creating a clandestine operative—I mean, shadow DOM. mode: open simply means that JavaScript from outside the :root can access and manipulate the elements within the shadow DOM, sort of like setting up back door access to the component.
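That back door is literally a property you can reach for. As a small sketch, assuming an <apocalyptic-warning> element is already on the page:

// Because the shadow root was attached with { mode: "open" }, outside
// JavaScript can reach into it through the element's .shadowRoot property.
const warning = document.querySelector("apocalyptic-warning");
console.log(warning.shadowRoot.querySelector("p").textContent);

// With { mode: "closed" }, warning.shadowRoot would be null instead.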

From there, the shadow DOM has been created and we append a node to it. That node will be a deep copy of the template, including all elements and text of the template. With the template attached to the shadow DOM of the custom element, the <slot> and slot attribute take over for matching up content with where it should go.

Check this out. Now we can plop two instances of the same component, rendering different content simply by changing one element.

Styling the component

You may have noticed styling in that demo. As you might expect, we absolutely have the ability to style our component with CSS. In fact, we can include a <style> element right in the <template>.

<template id="warningtemplate"> <style> p { background-color: pink; padding: 0.5em; border: 1px solid red; } </style> <p>The <slot name="whats-coming">Zombies</slot> are coming!</p> </template>

This way, the styles are scoped directly to the component and nothing leaks out to other elements on the same page, thanks to the shadow DOM.

Now in my head, I assumed that a custom element was taking a copy of the template, inserting the content you’ve added, and then injecting that into the page using the shadow DOM. While that’s what it looks like on the front end, that’s not how it actually works in the DOM. The content in a custom element stays where it is and the shadow DOM is sort of laid on top like an overlay.

And since the content is technically outside the template, any descendant selectors or classes we use in the template’s <style> element will have no effect on the slotted content. This doesn’t allow full encapsulation the way I had hoped or expected. But since a custom element is an element, we can use it as an element selector in any ol’ CSS file, including the main stylesheet used on a page. And although the inserted material isn’t technically in the template, it is in the custom element and descendant selectors from the CSS will work.

apocalyptic-warning span { color: blue; }

But beware! Styles in the main CSS file cannot access elements in the <template> or shadow DOM.

Let’s put all of this together

Let’s look at an example, say a profile for a zombie dating service, like one you might need after the apocalypse. In order to style both the default content and any inserted content, we need both a <style> element in the <template> and styling in a CSS file.

The JavaScript code is exactly the same except now we’re working with a different component name, <zombie-profile>.

customElements.define("zombie-profile", class extends HTMLElement { constructor() { super(); let profile = document.getElementById("zprofiletemplate"); let myprofile = profile.content; const shadowRoot = this.attachShadow({mode: "open"}).appendChild(myprofile.cloneNode(true)); } } );

Here’s the HTML template, including the encapsulated CSS:

<template id="zprofiletemplate"> <style> img { width: 100%; max-width: 300px; height: auto; margin: 0 1em 0 0; } h2 { font-size: 3em; margin: 0 0 0.25em 0; line-height: 0.8; } h3 { margin: 0.5em 0 0 0; font-weight: normal; } .age, .infection-date { display: block; } span { line-height: 1.4; } .label { color: #555; } li, ul { display: inline; padding: 0; } li::after { content: ', '; } li:last-child::after { content: ''; } li:last-child::before { content: ' and '; } </style> <div class="profilepic"> <slot name="profile-image"><img src="https://assets.codepen.io/1804713/default.png" alt=""></slot> </div> <div class="info"> <h2><slot name="zombie-name" part="zname">Zombie Bob</slot></h2> <span class="age"><span class="label">Age:</span> <slot name="z-age">37</slot></span> <span class="infection-date"><span class="label">Infection Date:</span> <slot name="idate">September 12, 2025</slot></span> <div class="interests"> <span class="label">Interests: </span> <slot name="z-interests"> <ul> <li>Long Walks on Beach</li> <li>brains</li> <li>defeating humanity</li> </ul> </slot> </div> <span class="z-statement"><span class="label">Apocalyptic Statement: </span> <slot name="statement">Moooooooan!</slot></span> </div> </template>

Here’s the CSS for our <zombie-profile> element and its descendants from our main CSS file. Notice the duplication in there to ensure both the replaced elements and elements from the template are styled the same.

zombie-profile { width: calc(50% - 1em); border: 1px solid red; padding: 1em; margin-bottom: 2em; display: grid; grid-template-columns: 2fr 4fr; column-gap: 20px; } zombie-profile img { width: 100%; max-width: 300px; height: auto; margin: 0 1em 0 0; } zombie-profile li, zombie-profile ul { display: inline; padding: 0; } zombie-profile li::after { content: ', '; } zombie-profile li:last-child::after { content: ''; } zombie-profile li:last-child::before { content: ' and '; }

All together now!

CodePen Embed Fallback

While there are still a few gotchas and other nuances, I hope you feel more empowered to work with web components now than you were a few minutes ago. Dip your toes in like we have here. Maybe sprinkle a custom component into your work here and there to get a feel for it and where it makes sense.

That’s really it. Now what are you more scared of, web components or the zombie apocalypse? I might have said web components in the not-so-distant past, but now I’m proud to say that zombies are the only thing that worry me (well, that and whether my per diem will cover snacks…)

The post Web Components Are Easier Than You Think appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Progressive enhancement and accessibility redux

QuirksBlog - Mon, 03/08/2021 - 3:00am

A while ago I asked about the exact relationship between progressive enhancement and accessibility. What were the responses?

The resulting Twitter thread was interesting, and basically boiled down to this:

Progressive enhancement is an important accessibility tool.

I can agree with that without reservation, but still had a nagging feeling that this was not the entire story.

Then I received a private communication from Josh Fremer that connected the dots for me. His mail is worth quoting at length — with permission:

I think of [progressive enhancement and accessibility] as the same process, but with different foci. Accessibility aims to optimize an experience across a spectrum of user capabilities. Progressive enhancement aims to optimize an experience across a spectrum of user _agent_ capabilities.

Indeed, if you broaden the definition of "user agent" to include a user's physiology, I think the concepts become nearly identical.

What is the application of color to a website if not a progressive enhancement targeting user agents that can discern colors? Whether the "agent" in this case is the electronic device in the user's hands or the cells in their eyes is kind of moot. The principles of both PE and accessibility require us to consider the user who is unable to receive color information.

Conversely, what is the inability to process javascript other than a "disability" of the user agent? We would probably call handling this case a PE concern, but what if the user has disabled javascript in order to mitigate a physical disability? The capabilities of the user agent have been modified to better match the capabilities of the user and now it starts to look like an accessibility issue.

In closing, a fun little thought experiment is to imagine a sci-fi future in which users can plug computers directly into their brains, or swap their personalities into different bodies with different capabilities. This further blurs the line between what we traditionally call a "user agent" and a user's innate disabilities. "This web site is best viewed in a body with 20/20 vision and quick reflexes."

Food for thought.

(Josh promised to write an article about his idea, which I’ve never seen expressed like this before. If you have seen a similar argument, please let me know. Once Josh’s article is live I’ll include a link in this post.)

Women of Letters

Typography - Mon, 03/08/2021 - 2:30am

Read the book, Typographic Firsts

Women of Letters is the first in a new series of short interviews. We begin with a collection of four interviews with creatives from New York to Saigon: Lynne Yun, a NYC-based type designer, technologist, and educator; Deb Pang Davis, a product designer with The 19th in Texas; Coleen Baik, a designer and artist in […]

The post Women of Letters appeared first on I Love Typography.

CSS-Tricks Chronicle XXXIX

Css Tricks - Fri, 03/05/2021 - 2:05pm

I’ve been lucky enough to be a guest on some podcasts and at some events, so I thought I’d do a quick little round-up here! These Chronicle posts are just that: an opportunity to share some off-site stuff that I’ve been up to. This time, it’s all different podcasts.

Web Rush

Episode 122: Modern Web with Chris Coyier

Chris Coyier talks with John, Ward, Dan, and Craig about the modern web. What technology should we be paying attention to? What tech has Chris used that was worth getting into? Flexbox or CSS Grid? Is there anything better than HTML coming? And what tools should developers be aware of?

Front-end Development South Africa

Live Q&A session with Chris Coyier

Audience

Evolving as podcasting grows with Chris Coyier of ShopTalk Show

Craig talks to Chris about what it’s like being an online creator (podcaster, blogger, software and web designer, etc.). Chris talks about the lessons he has learned and what it’s like to have a weekly podcast for ten years. They also talk about podcasting trends in terms of marketing, topics, and the future outlook of the industry.

Cloudinary Devjams

DevJams Episode #2: Fetching Local Production Images With Cloudinary for an Eleventy Site

Watch our hosts Sam Brace and Becky Peltz, as well as our special guest host Eric Portis, interview Chris Coyier about his recent development project. With Eleventy, Netlify, Puppeteer and Cloudinary’s fetch capabilities, he was able to create a microsite for his famous CSS-Tricks.com site that showcases various coding fonts you can use. Find how he did it by watching this episode!

The post CSS-Tricks Chronicle XXXIX appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

A Super Flexible CSS Carousel, Enhanced With JavaScript Navigation

Css Tricks - Fri, 03/05/2021 - 5:47am

Not sure about you, but I often wonder how to build a carousel component in such a way that you can easily dump a bunch of items into the component and get a nice working carousel — one that allows you to scroll smoothly, navigate with the dynamic buttons, and is responsive. If that is the thing you’d like to build, follow along and we’ll work on it together!

This is what we’re aiming for:

CodePen Embed Fallback

We’re going to be working with quite a bit of JavaScript, React and the DOM API from here on out.

First, let’s spin up a fresh project

Let’s start by bootstrapping a simple React application with styled-components tossed in for styling:

npx create-react-app react-easy-carousel cd react-easy-carousel yarn add styled-components yarn install yarn start

Styling isn’t really the crux of what we’re doing, so I have prepared a bunch of predefined components for us to use right out of the box:

// App.styled.js import styled from 'styled-components' export const H1 = styled('h1')` text-align: center; margin: 0; padding-bottom: 10rem; ` export const Relative = styled('div')` position: relative; ` export const Flex = styled('div')` display: flex; ` export const HorizontalCenter = styled(Flex)` justify-content: center; margin-left: auto; margin-right: auto; max-width: 25rem; ` export const Container = styled('div')` height: 100vh; width: 100%; background: #ecf0f1; ` export const Item = styled('div')` color: white; font-size: 2rem; text-transform: capitalize; width: ${({size}) => `${size}rem`}; height: ${({size}) => `${size}rem`}; display: flex; align-items: center; justify-content: center; `

Now let’s go to our App file, remove all unnecessary code, and build a basic structure for our carousel:

// App.js import {Carousel} from './Carousel' function App() { return ( <Container> <H1>Easy Carousel</H1> <HorizontalCenter> <Carousel> {/* Put your items here */} </Carousel> </HorizontalCenter> </Container> ) } export default App

I believe this structure is pretty straightforward. It’s the basic layout that centers the carousel directly in the middle of the page.

Now, let’s make the carousel component

Let’s talk about the structure of our component. We’re gonna need a main <div> container that acts as our base. Inside that, we’re going to take advantage of native scrolling and put another block that serves as the scrollable area.

// Carousel.js <CarouserContainer> <CarouserContainerInner> {children} </CarouserContainerInner> </CarouserContainer>

You can specify width and height on the inner container, but I’d avoid strict dimensions in favor of some sized component on top of it to keep things flexible.

Scrolling, the CSS way

We want that scroll to be smooth so it’s clear there’s a transition between slides, so we’ll reach for CSS scroll snapping, set the scroll horizontally along the x-axis, and hide the actual scroll bar while we’re at it.

export const CarouserContainerInner = styled(Flex)` overflow-x: scroll; scroll-snap-type: x mandatory; -ms-overflow-style: none; scrollbar-width: none; &::-webkit-scrollbar { display: none; } & > * { scroll-snap-align: center; } `

Wondering what’s up with scroll-snap-type and scroll-snap-align? That’s native CSS that allows us to control the scroll behavior in such a way that an element “snaps” into place during a scroll. So, in this case, we’ve set the snap type in the horizontal (x) direction and told the browser it has to stop at a snap position that is in the center of the element.

In other words: scroll to the next slide and make sure that slide is centered into view. Let’s break that down a bit to see how it fits into the bigger picture.

Our outer <div> is a flexible container that puts its children (the carousel slides) in a horizontal row. Those children will easily overflow the width of the container, so we’ve made it so we can scroll horizontally inside the container. That’s where scroll-snap-type comes into play. From Andy Adams in the CSS-Tricks Almanac:

Scroll snapping refers to “locking” the position of the viewport to specific elements on the page as the window (or a scrollable container) is scrolled. Think of it like putting a magnet on top of an element that sticks to the top of the viewport and forces the page to stop scrolling right there.

Couldn’t say it better myself. Play around with it in Andy’s demo on CodePen.

But, we still need another CSS property set on the container’s children (again, the carousel slides) that tells the browser where the scroll should stop. Andy likens this to a magnet, so let’s put that magnet directly on the center of our slides. That way, the scroll "locks" on the center of a slide, allowing it to sit fully in view in the carousel container.

That property? scroll-snap-align.

& > * { scroll-snap-align: center; }

We can already test it out by creating some random array of items:

const colors = [ '#f1c40f', '#f39c12', '#e74c3c', '#16a085', '#2980b9', '#8e44ad', '#2c3e50', '#95a5a6', ] const colorsArray = colors.map((color) => ( <Item size={20} style={{background: color, borderRadius: '20px', opacity: 0.9}} key={color} > {color} </Item> ))

And dumping it right into our carousel:

// App.js <Container> <H1>Easy Carousel</H1> <HorizontalCenter> <Carousel>{colorsArray}</Carousel> </HorizontalCenter> </Container> CodePen Embed Fallback

Let’s also add some spacing to our items so they won’t look too squeezed. You may also notice that we have unnecessary spacing on the left of the first item. We can add a negative margin to offset it.

export const CarouserContainerInner = styled(Flex)` overflow-x: scroll; scroll-snap-type: x mandatory; -ms-overflow-style: none; scrollbar-width: none; margin-left: -1rem; &::-webkit-scrollbar { display: none; } & > * { scroll-snap-align: center; margin-left: 1rem; } `

Take a closer look at the cursor position while scrolling. It’s always centered. That’s the scroll-snap-align property at work!

And that’s it! We’ve made an awesome carousel where we can add any number of items, and it just plain works. Notice, too, that we did all of this in plain CSS, even if it was built as a React app. We didn’t really need React or styled-components to make this work.

CodePen Embed Fallback Bonus: Navigation

We could end the article here and move on, but I want to take this a bit further. What I like about what we have so far is that it’s flexible and does the basic job of scrolling through a set of items.

But you may have noticed a key enhancement in the demo at the start of this article: buttons that navigate through slides. That’s where we’re going to put the CSS down and put our JavaScript hats on to make this work.

First, let’s define buttons on the left and right of the carousel container that, when clicked, scrolls to the previous or next slide, respectively. I’m using simple SVG arrows as components:

// ArrowLeft export const ArrowLeft = ({size = 30, color = '#000000'}) => ( <svg xmlns="http://www.w3.org/2000/svg" width={size} height={size} viewBox="0 0 24 24" fill="none" stroke={color} strokeWidth="2" strokeLinecap="round" strokeLinejoin="round" > <path d="M19 12H6M12 5l-7 7 7 7" /> </svg> ) // ArrowRight export const ArrowRight = ({size = 30, color = '#000000'}) => ( <svg xmlns="http://www.w3.org/2000/svg" width={size} height={size} viewBox="0 0 24 24" fill="none" stroke={color} strokeWidth="2" strokeLinecap="round" strokeLinejoin="round" > <path d="M5 12h13M12 5l7 7-7 7" /> </svg> )

Now let’s position them on both sides of our carousel:

// Carousel.js <LeftCarouselButton> <ArrowLeft /> </LeftCarouselButton> <RightCarouselButton> <ArrowRight /> </RightCarouselButton>

We’ll sprinkle in some styling that adds absolute positioning to the arrows so that the left arrow sits on the left edge of the carousel and the right arrow sits on the right edge. A few other things are thrown in to style the buttons themselves to look like buttons. Also, we’re playing with the carousel container’s :hover state so that the buttons only show when the user’s cursor hovers the container.

// Carousel.styled.js // Position and style the buttons export const CarouselButton = styled('button')` position: absolute; cursor: pointer; top: 50%; z-index: 1; transition: transform 0.1s ease-in-out; background: white; border-radius: 15px; border: none; padding: 0.5rem; ` // Display buttons on hover export const LeftCarouselButton = styled(CarouselButton)` left: 0; transform: translate(-100%, -50%); ${CarouserContainer}:hover & { transform: translate(0%, -50%); } ` // Position the buttons to their respective sides export const RightCarouselButton = styled(CarouselButton)` right: 0; transform: translate(100%, -50%); ${CarouserContainer}:hover & { transform: translate(0%, -50%); } `

This is cool. Now we have buttons, but only when the user interacts with the carousel.

But do we always want to see both buttons? It’d be great to hide the left arrow when we’re on the first slide, and the right arrow when we’re on the last one. It’s not like the user can navigate past those slides, so why create the illusion that they can?

I suggest creating a hook that’s responsible for all the scrolling functionality we need, as we’re gonna have a bunch of it. Plus, it’s just good practice to separate functional concerns from our visual component.

First, we need to get the reference to our component so we can get the position of the slides. Let’s do that with ref:

// Carousel.js const ref = useRef() const position = usePosition(ref) <CarouserContainer> <CarouserContainerInner ref={ref}> {children} </CarouserContainerInner> <LeftCarouselButton> <ArrowLeft /> </LeftCarouselButton> <RightCarouselButton> <ArrowRight /> </RightCarouselButton> </CarouserContainer>

The ref property is on <CarouserContainerInner> as it contains all our items and will allow us to do proper calculations.

Now let’s implement the hook itself. We have two buttons. To make them work, we need to keep track of the next and previous items accordingly. The best way to do so is to have a state for each one:

// usePosition.js export function usePosition(ref) { const [prevElement, setPrevElement] = useState(null) const [nextElement, setNextElement] = useState(null) }

The next step is to create a function that detects the position of the elements and updates the buttons to either hide or display depending on that position.

Let’s call it the update function. We’re gonna put it into React’s useEffect hook because, initially, we want to run this function when the DOM mounts the first time. We need access to our scrollable container which is available to use under the ref.current property. We’ll put it into a separate variable called element and start by getting the element’s position in the DOM.

We’re gonna use getBoundingClientRect() here as well. This is a very helpful function because it gives us an element’s position in the viewport (i.e. window) and allows us to proceed with our calculations.

// usePosition.js useEffect(() => { // Our scrollable container const element = ref.current const update = () => { const rect = element.getBoundingClientRect() } }, [ref])

We’ve done a heck of a lot of positioning so far, and getBoundingClientRect() can help us understand both the size of the element — rect in this case — and its position relative to the viewport.

Credit: Mozilla Developer Network

The following step is a bit tricky as it requires a bit of math to calculate which elements are visible inside the container.

First, we need to filter each item by getting its position in the viewport and checking it against the container’s boundaries. Then, we check that the child’s left edge sits at or to the right of the container’s left edge, and that the child’s right edge sits at or to the left of the container’s right edge.

If both of these conditions are met, our child is fully visible inside the container. Let’s convert that into code step by step:

  1. We need to loop and filter through all container children. We can use the children property available on each node. So, let’s convert it into an array and filter:
const visibleElements = Array.from(element.children).filter((child) => { /* ... */ })
  2. After that, we need to get the position of each element by using that handy getBoundingClientRect() function once again:
const childRect = child.getBoundingClientRect()
  3. Now let’s express those checks in code:
rect.left <= childRect.left && rect.right >= childRect.right

Pulling that together, this is our script:

// usePosition.js const visibleElements = Array.from(element.children).filter((child) => { const childRect = child.getBoundingClientRect() return rect.left <= childRect.left && rect.right >= childRect.right })

Once we’ve filtered the items, we need to check whether an item is the first or the last one so we know whether to hide the left or right button accordingly. We’ll create two helper functions that check that condition using previousElementSibling and nextElementSibling. This way, we can see if there is a sibling, check whether it’s an HTMLElement instance and, if it is, return it.

To get the previous element, we take the first item from our visible items list and check whether it has a previous sibling. We do the same thing for the next element, except we take the last item in the visible list and check whether it has a next sibling:

// usePosition.js function getPrevElement(list) { const sibling = list[0].previousElementSibling if (sibling instanceof HTMLElement) { return sibling } return null } function getNextElement(list) { const sibling = list[list.length - 1].nextElementSibling if (sibling instanceof HTMLElement) { return sibling } return null }

Once we have those functions, we can finally check if there are any visible elements in the list, and then set our left and right buttons into the state:

// usePosition.js if (visibleElements.length > 0) { setPrevElement(getPrevElement(visibleElements)) setNextElement(getNextElement(visibleElements)) }

Now we need to call our function. Moreover, we want to call this function each time we scroll through the list — that’s when we want to detect the position of the element.

// usePosition.js export function usePosition(ref) { const [prevElement, setPrevElement] = useState(null) const [nextElement, setNextElement] = useState(null) useEffect(() => { const element = ref.current const update = () => { const rect = element.getBoundingClientRect() const visibleElements = Array.from(element.children).filter((child) => { const childRect = child.getBoundingClientRect() return rect.left <= childRect.left && rect.right >= childRect.right }) if (visibleElements.length > 0) { setPrevElement(getPrevElement(visibleElements)) setNextElement(getNextElement(visibleElements)) } } update() element.addEventListener('scroll', update, {passive: true}) return () => { element.removeEventListener('scroll', update, {passive: true}) } }, [ref])

Here’s an explanation for why we’re passing {passive: true} in there.

Now let’s return those properties from the hook and update our buttons accordingly:

// usePosition.js return { hasItemsOnLeft: prevElement !== null, hasItemsOnRight: nextElement !== null, } // Carousel.js <LeftCarouselButton hasItemsOnLeft={hasItemsOnLeft}> <ArrowLeft /> </LeftCarouselButton> <RightCarouselButton hasItemsOnRight={hasItemsOnRight}> <ArrowRight /> </RightCarouselButton> // Carousel.styled.js export const LeftCarouselButton = styled(CarouselButton)` left: 0; transform: translate(-100%, -50%); ${CarouserContainer}:hover & { transform: translate(0%, -50%); } visibility: ${({hasItemsOnLeft}) => (hasItemsOnLeft ? `all` : `hidden`)}; ` export const RightCarouselButton = styled(CarouselButton)` right: 0; transform: translate(100%, -50%); ${CarouserContainer}:hover & { transform: translate(0%, -50%); } visibility: ${({hasItemsOnRight}) => (hasItemsOnRight ? `all` : `hidden`)}; `

So far, so good. As you’ll see, our arrows show up dynamically depending on our scroll location in the list of items.

We’ve got just one final step to go to make the buttons functional. We need to create a function that’s gonna accept the next or previous element it needs to scroll to.

const scrollRight = useCallback(() => scrollToElement(nextElement), [ scrollToElement, nextElement, ]) const scrollLeft = useCallback(() => scrollToElement(prevElement), [ scrollToElement, prevElement, ])

Don’t forget to wrap functions into the useCallback hook in order to avoid unnecessary re-renders.

Next, we’ll implement the scrollToElement function. The idea is pretty simple: take the left offset of the previous or next element (depending on which button was clicked), add half of the element’s width to find its center, then subtract half of the container’s width. That gives us the exact scroll position that centers the next/previous element.

Here’s that in code:

// usePosition.js const scrollToElement = useCallback( (element) => { const currentNode = ref.current if (!currentNode || !element) return let newScrollPosition newScrollPosition = element.offsetLeft + element.getBoundingClientRect().width / 2 - currentNode.getBoundingClientRect().width / 2 currentNode.scroll({ left: newScrollPosition, behavior: 'smooth', }) }, [ref], )

The scroll method does the actual scrolling for us once we pass it the precise position we need to scroll to. Now let’s attach those functions to our buttons.

// Carousel.js const { hasItemsOnLeft, hasItemsOnRight, scrollRight, scrollLeft, } = usePosition(ref) <LeftCarouselButton hasItemsOnLeft={hasItemsOnLeft} onClick={scrollLeft}> <ArrowLeft /> </LeftCarouselButton> <RightCarouselButton hasItemsOnRight={hasItemsOnRight} onClick={scrollRight}> <ArrowRight /> </RightCarouselButton>

Pretty nice!

Like a good citizen, we ought to clean up our code a bit. For one, we can be more in control of the passed items with a little trick that automatically sends the styles needed for each child. The Children API is pretty rad and worth checking out.

<CarouserContainerInner ref={ref}> {React.Children.map(children, (child, index) => ( <CarouselItem key={index}>{child}</CarouselItem> ))} </CarouserContainerInner>

Now we just need to update our styled components. flex: 0 0 auto preserves the original sizes of the containers, so it’s totally optional:

export const CarouselItem = styled('div')` flex: 0 0 auto; // Spacing between items margin-left: 1rem; ` export const CarouserContainerInner = styled(Flex)` overflow-x: scroll; scroll-snap-type: x mandatory; -ms-overflow-style: none; scrollbar-width: none; margin-left: -1rem; // Offset for children spacing &::-webkit-scrollbar { display: none; } & ${CarouselItem} { scroll-snap-align: center; } ` CodePen Embed Fallback Accessibility

We care about our users, so we need to make our component not only functional, but also accessible so folks feel comfortable using it. Here are a couple things I’d suggest:

  • Adding role='region' to highlight the importance of this area.
  • Adding an aria-label as an identifier.
  • Adding labels to our buttons so screen readers could easily identify them as “Previous” and “Next” and inform the user which direction a button goes.
// Carousel.js <CarouserContainer role="region" aria-label="Colors carousel"> <CarouserContainerInner ref={ref}> {React.Children.map(children, (child, index) => ( <CarouselItem key={index}>{child}</CarouselItem> ))} </CarouserContainerInner> <LeftCarouselButton hasItemsOnLeft={hasItemsOnLeft} onClick={scrollLeft} aria-label="Previous slide" > <ArrowLeft /> </LeftCarouselButton> <RightCarouselButton hasItemsOnRight={hasItemsOnRight} onClick={scrollRight} aria-label="Next slide" > <ArrowRight /> </RightCarouselButton> </CarouserContainer> More than one carousel? No problem!

Feel free to add additional carousels to see how it behaves with items of different sizes. For example, let’s drop in a second carousel that’s just an array of numbers.

const numbersArray = Array.from(Array(10).keys()).map((number) => ( <Item size={5} style={{color: 'black'}} key={number}> {number} </Item> )) function App() { return ( <Container> <H1>Easy Carousel</H1> <HorizontalCenter> <Carousel>{colorsArray}</Carousel> </HorizontalCenter> <HorizontalCenter> <Carousel>{numbersArray}</Carousel> </HorizontalCenter> </Container> ) }

And voilà, magic! Dump a bunch of items and you’ve got a fully workable carousel right out of the box.

CodePen Embed Fallback

Feel free to modify this and use it in your projects. I sincerely hope that this is a good starting point that you can use as-is, or enhance even further for a more complex carousel. Questions? Ideas? Contact me on Twitter, GitHub, or the comments below!

The post A Super Flexible CSS Carousel, Enhanced With JavaScript Navigation appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Through the pipeline: An exploration of front-end bundlers

Css Tricks - Thu, 03/04/2021 - 1:56pm

I really like the kind of tech writing where a fellow developer lays out some specific needs, tries out different tech to fulfill those needs, and documents how it went for them.

That’s exactly what Andrew Walpole did here. He wanted to try out bundlers in the context of WordPress themes and needing a handful of specific files built. Two JavaScript and two Sass files, which can import things from npm, and need to be minified with sourcemaps and all that. Essentially the same crap we were doing when I wrote Grunt for People Who Think Things Like Grunt are Weird and Hard eight years ago. The process hasn’t gotten any easier, but at least it’s gotten faster.

The winner for Andrew: esbuild through Estrella.

Direct Link to ArticlePermalink

The post Through the pipeline: An exploration of front-end bundlers appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Weekly Platform News: Focus Rings, Donut Scope, More em Units, and Global Privacy Control

Css Tricks - Thu, 03/04/2021 - 11:33am

In this week’s news, Chrome tackles focus rings, we learn how to get “donut” scope, Global Privacy Control gets big-name adoption, it’s time to ditch pixels in media queries, and a snippet that prevents annoying form validation styling.

Chrome will stop displaying focus rings when clicking buttons

Chrome, Edge, and other Chromium-based browsers display a focus indicator (a.k.a. focus ring) when the user clicks or taps a (styled) button. For comparison, Safari and Firefox don’t display a focus indicator when a button is clicked or tapped, but do so only when the button is focused via the keyboard.

The focus ring will stay on the button until the user clicks somewhere else on the page.

Some developers find this behavior annoying and are using various workarounds to prevent the focus ring from appearing when a button is clicked or tapped. For example, the popular what-input library continuously tracks the user’s input method (mouse, keyboard or touch), allowing the page to suppress focus rings specifically for mouse clicks.

[data-whatintent="mouse"] :focus { outline: none; }

A more recent workaround was enabled by the addition of the CSS :focus-visible pseudo-class to Chromium a few months ago. In the current version of Chrome, clicking or tapping a button invokes the button’s :focus state but not its :focus-visible state. That way, the page can use a suitable selector to suppress focus rings for clicks and taps without affecting keyboard users.

:focus:not(:focus-visible) { outline: none; }

Fortunately, these workarounds will soon become unnecessary. Chromium’s user agent stylesheet recently switched from :focus to :focus-visible, and as a result of this change, button clicks and taps no longer invoke focus rings. The new behavior will first ship in Chrome 90 next month.

The enhanced CSS :not() selector enables “donut scope”

I recently wrote about the A:not(B *) selector pattern that allows authors to select all A elements that are not descendants of a B element. This pattern can be expanded to A B:not(C *) to create a “donut scope.”

For example, the selector article p:not(blockquote *) matches all <p> elements that are descendants of an <article> element but not descendants of a <blockquote> element. In other words, it selects all paragraphs in an article except the ones that are in a block quotation.
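
As a quick sketch, that selector drops straight into a stylesheet like any other rule (the color declaration is just for illustration):

/* Paragraphs inside an article, except those inside a block quotation */
article p:not(blockquote *) {
  color: darkslategray;
}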

The donut shape that gives this scope its name CodePen Embed Fallback The New York Times now honors Global Privacy Control

Announced last October, Global Privacy Control (GPC) is a new privacy signal for the web that is designed to be legally enforceable. Essentially, it’s an HTTP Sec-GPC: 1 request header that tells websites that the user does not want their personal data to be shared or sold.
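
For a rough idea of what honoring the signal could look like on the server, here's a minimal sketch. The Express setup, the route, and the responses are assumptions for illustration only; nothing here is prescribed by the GPC proposal itself.

// Minimal sketch: read the Sec-GPC request header and adjust data-sharing behavior
const express = require('express')
const app = express()

app.get('/', (req, res) => {
  // GPC arrives as a plain request header: "Sec-GPC: 1"
  const gpcEnabled = req.get('Sec-GPC') === '1'

  res.send(gpcEnabled
    ? '<p>Your personal data will not be shared or sold.</p>'
    : '<p>Standard data-sharing settings apply.</p>')
})

app.listen(3000)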

The DuckDuckGo Privacy Essentials extension enables GPC by default in the browser

The New York Times has become the first major publisher to honor GPC. A number of other publishers, including The Washington Post and Automattic (WordPress.com), have committed to honoring it “this coming quarter.”

From NYT’s privacy page:

Does The Times support the Global Privacy Control (GPC)?

Yes. When we detect a GPC signal from a reader’s browser where GDPR, CCPA or a similar privacy law applies, we stop sharing the reader’s personal data online with other companies (except with our service providers).

The case for em-based media queries

Some browsers allow the user to increase the default font size in the browser’s settings. Unfortunately, this user preference has no effect on websites that set their font sizes in pixels (e.g., font-size: 20px). In part for this reason, some websites (including CSS-Tricks) instead use font-relative units, such as em and rem, which do respond to the user’s font size preference.

Ideally, a website that uses font-relative units for font-size should also use em values in media queries (e.g., min-width: 80em instead of min-width: 1280px). Otherwise, the site’s responsive layout may not always work as expected.
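
Here's a small sketch of the difference, assuming the common 16px browser default; the .layout rule is illustrative, and in practice you'd pick one of the two media queries rather than both:

/* One-column layout by default */
.layout {
  display: grid;
  grid-template-columns: 1fr;
}

/* Pixel-based: switches at 1280px regardless of the user's font size preference */
@media (min-width: 1280px) {
  .layout { grid-template-columns: 2fr 1fr; }
}

/* em-based: 80em equals 1280px at the 16px default,
   but scales up along with the user's preferred font size */
@media (min-width: 80em) {
  .layout { grid-template-columns: 2fr 1fr; }
}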

For example, CSS-Tricks switches from a two-column to a one-column layout on narrow viewports to prevent the article’s lines from becoming too short. However, if the user increases the default font size in the browser to 24px, the text on the page will become larger (as it should) but the page layout will not change, resulting in extremely short lines at certain viewport widths.

If you’d like to try out em-based media queries on your website, there is a PostCSS plugin that automatically converts min-width, max-width, min-height, and max-height media queries from px to em.

(via Nick Gard)

A new push to bring CSS :user-invalid to browsers

In 2017, Peter-Paul Koch published a series of three articles about native form validation on the web. Part 1 points out the problems with the widely supported CSS :invalid pseudo-class:

  • The validity of <input> elements is re-evaluated on every key stroke, so a form field can become :invalid while the user is still typing the value.
  • If a form field is required (<input required>), it will become :invalid immediately on page load.

Both of these behaviors are potentially confusing (and annoying), so websites cannot rely solely on the :invalid selector to indicate that a value entered by the user is not valid. However, there is the option to combine :invalid with :not(:focus) and even :not(:placeholder-shown) to ensure that the page’s “invalid” styles do not apply to the <input> until the user has finished entering the value and moved focus to another element.
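
Put together, that combination looks roughly like this (the border is just an illustrative style, and :placeholder-shown only applies to inputs that actually have a placeholder attribute):

/* Only flag the field once it has a value and focus has moved elsewhere */
input:invalid:not(:focus):not(:placeholder-shown) {
  border: 2px solid red;
}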

CodePen Embed Fallback

The CSS Selectors module defines a :user-invalid pseudo-class that avoids the problems of :invalid by only matching an <input> “after the user has significantly interacted with it.”

Firefox already supports this functionality via the :-moz-ui-invalid pseudo-class (see it in action). Mozilla now intends to un-prefix this pseudo-class and ship it under the standard :user-invalid name. There are still no signals from other browser vendors, but the Chromium and WebKit bugs for this feature have been filed.
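
Once it ships, usage would look roughly like the sketch below; the rules are kept separate because a browser that doesn't recognize one selector in a comma-separated list drops the entire rule (the border is illustrative):

/* Firefox today, behind the vendor prefix */
input:-moz-ui-invalid {
  border: 2px solid red;
}

/* The standardized name, once browsers ship it */
input:user-invalid {
  border: 2px solid red;
}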

The post Weekly Platform News: Focus Rings, Donut Scope, More em Units, and Global Privacy Control appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.
