Web Standards

A Glassy (and Classy) Text Effect

CSS-Tricks - Thu, 08/29/2019 - 4:12am

The landing page for Apple Arcade has a cool effect where some "white" text has a sort of translucent effect. You can see some of the color of the background behind it through the text. It's not like knockout text where you see the exact background. In this case, live video is playing underneath. It's like if you were to blur the video and then show that blurry video through the letters.

Well, that's exactly what's happening.

Here's a video so you can see it in action (even after they change that page or you are in a browser that doesn't support the effect):

And hey, if you don't like the effect, that's cool. The rest of this is a technological exploration of how it was done — not a declaration of when and how you should use it.

There are two main properties here that have to work together perfectly to pull this off:

  1. backdrop-filter
  2. clip-path

The backdrop-filter property is easy as heck to use. Set it, and it can filter whatever background is seen through that element.

See the Pen
Basic example of backdrop-filter
by Chris Coyier (@chriscoyier)
on CodePen.
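A minimal sketch of that kind of frosted-glass panel might look something like this (the class name is made up; whatever scrolls or plays behind the element gets blurred):

.glass {
  background: rgba(255, 255, 255, 0.2);
  -webkit-backdrop-filter: blur(10px); /* Safari still wants the prefix */
  backdrop-filter: blur(10px);
}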

Next we'll place text in that container, but we'll actually hide it. It just needs to be there for accessibility. But we'll end up sort of replacing the text by making a clip path out of the text. Yes indeed! We'll use the SVG <text> inside a <clipPath> element and then use that to clip the entire element that has backdrop-filter on it.

See the Pen
Text with Blurred Background
by Chris Coyier (@chriscoyier)
on CodePen.
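Boiled down, the structure is roughly this (a sketch with made-up names and sizes, not the demo's exact code — the .visually-hidden utility class is assumed):

<div class="glassy-text">
  <!-- Real text, hidden visually but available to screen readers -->
  <span class="visually-hidden">Apple Arcade</span>

  <!-- The same text again, used only as a clip path -->
  <svg width="0" height="0" aria-hidden="true">
    <clipPath id="text-clip">
      <text x="0" y="100" font-size="100" font-weight="900">Apple Arcade</text>
    </clipPath>
  </svg>
</div>

<style>
  .glassy-text {
    /* Sizing and positioning omitted for brevity */
    backdrop-filter: blur(20px) brightness(1.5); /* blur whatever is behind the element... */
    clip-path: url(#text-clip);                  /* ...and clip the element to the letter shapes */
  }
</style>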

For some reason (that I think is a bug), Chrome renders that like this:

It's failing to clip the element properly, even though it supposedly supports both of those properties. I tried using an @supports block, but that's not helpful here. It looks like Apple's site has a .no-backdrop-blur class on the <html> element (Modernizr-style) that is set on Chrome to avoid using the effect at all. I just let my demo break. Maybe someday that will get fixed.

It looks right in Safari:

And Firefox doesn't support backdrop-filter at the moment, so the @supports block does its thing and gives you white text instead.

The post A Glassy (and Classy) Text Effect appeared first on CSS-Tricks.

Nested Gradients with background-clip

CSS-Tricks - Wed, 08/28/2019 - 11:31am

I can't say I use background-clip all that often. I'd wager it's hardly ever used in day-to-day CSS work. But I was reminded of it in a post by Stefan Judis, which coincidentally was itself a learning-response post to a post over here by Ana Tudor.

Here's a quick explanation.

You've probably seen this thing a million times:

The box model visualizer in DevTools.

That's showing you the size and position of an element, as well as how that size is made up: content size, padding, margin, and border.

Those things aren't just theoretical to help with understanding and debugging. Elements actually have a content-box, padding-box, and border-box. Perhaps we encounter that most often when we literally set the box-sizing property. (It's tremendously useful to universally set it to border-box).
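That universal reset usually looks something like this:

*,
*::before,
*::after {
  box-sizing: border-box;
}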

Those values are the same values as background-clip uses! Meaning that you can set a background to only cover those specific areas. And because multiple backgrounds is a thing, that means we can have multiple backgrounds with different clipping on each.

Like this:

See the Pen
Multiple background-clip
by Chris Coyier (@chriscoyier)
on CodePen.
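The gist of a demo like that is a couple of background layers, each clipped to a different box (the colors here are made up, not the demo's exact values):

.multiple-clips {
  border: 16px dashed rgba(0, 0, 0, 0.4);
  padding: 16px;

  /* Two layers: the top one only covers the padding box,
     the bottom one extends out under the (semi-transparent) border */
  background-image: linear-gradient(white, white), linear-gradient(gold, gold);
  background-clip: padding-box, border-box;
}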

But that's boring and there are many ways to pull off that effect, like using borders, outline, and box-shadow or any combination of them.

What is more interesting is the fact that those backgrounds could be gradients, and that's a lot harder to pull off any other way!

See the Pen
Nested Gradients
by Chris Coyier (@chriscoyier)
on CodePen.
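Conceptually, the nested-gradient version just swaps those solid layers for gradients, something along these lines (again a sketch rather than the exact demo code):

.nested-gradients {
  border: 16px solid transparent;
  padding: 16px;

  /* Inner gradient clipped to the padding box,
     outer gradient showing through the transparent border */
  background-image:
    linear-gradient(to right, lightgreen, teal),
    linear-gradient(to right, orange, crimson);
  background-clip: padding-box, border-box;
}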

The post Nested Gradients with background-clip appeared first on CSS-Tricks.

Creating a Maintainable Icon System with Sass

CSS-Tricks - Wed, 08/28/2019 - 4:05am

One of my favorite ways of adding icons to a site is by including them as data URL background images to pseudo-elements (e.g. ::after) in my CSS. This technique offers several advantages:

  • They don't require any additional HTTP requests other than the CSS file.
  • Using the background-size property, you can set your pseudo-element to any size you need without worrying that they will overflow the boundaries (or get chopped off).
  • They are ignored by screen readers (at least in my tests using VoiceOver on the Mac), which is good for decorative-only icons.

But there are some drawbacks to this technique as well:

  • When used as a background-image data URL, you lose the ability to change the SVG's colors using the "fill" or "stroke" CSS properties (same as if you used the filename reference, e.g. url( 'some-icon-file.svg' )). We can use filter() as an alternative, but that might not always be a feasible solution.
  • SVG markup can look big and ugly when used as data URLs, making them difficult to maintain when you need to use the icons in multiple locations and/or have to change them.

We're going to address both of these drawbacks in this article.

The situation

Let's build a site that uses a robust iconography system, and let's say that it has several different button icons which all indicate different actions:

  • A "download" icon for downloadable content
  • An "external link" icon for buttons that take us to another website
  • A "right caret" icon for taking us to the next step in a process

Right off the bat, that gives us three icons. And while that may not seem like much, already I'm getting nervous about how maintainable this is going to be when we scale it out to more icons like social media networks and the like. For the sake of this article, we're going to stop at these three, but you can imagine how in a sophisticated icon system this could get very unwieldy, very quickly.

It's time to go to the code. First, we'll set up a basic button, and then by using a BEM naming convention, we'll assign the proper icon to its corresponding button. (At this point, it's fair to warn you that we'll be writing everything out in Sass, or more specifically, SCSS. And for the sake of argument, assume I'm running Autoprefixer to deal with things like the appearance property.)

.button {
  appearance: none;
  background: #d95a2b;
  border: 0;
  border-radius: 100em;
  color: #fff;
  cursor: pointer;
  display: inline-block;
  font-size: 18px;
  font-weight: 700;
  line-height: 1;
  padding: 1em calc( 1.5em + 32px ) 0.9em 1.5em;
  position: relative;
  text-align: center;
  text-transform: uppercase;
  transition: background-color 200ms ease-in-out;

  &:hover,
  &:focus,
  &:active {
    background: #8c3c2a;
  }
}

This gives us a simple, attractive, orange button that turns to a darker orange on the hover, focused, and active states. It even gives us a little room for the icons we want to add, so let's add them in now using pseudo-elements:

.button { /* everything from before, plus... */ &::after { background: center / 24px 24px no-repeat; // Shorthand for: background-position, background-size, background-repeat border-radius: 100em; bottom: 0; content: ''; position: absolute; right: 0; top: 0; width: 48px; } &--download { &::after { background-image: url( 'data:image/svg+xml;utf-8,<svg xmlns="http://www.w3.org/2000/svg" width="30.544" height="25.294" viewBox="0 0 30.544 25.294"><g transform="translate(-991.366 -1287.5)"><path d="M1454.5,1298.922l6.881,6.881-6.881,6.881" transform="translate(2312.404 -157.556) rotate(90)" fill="none" stroke="%23fff" stroke-width="3"/><path d="M8853.866,5633.57v9.724h27.544v-9.724" transform="translate(-7861 -4332)" fill="none" stroke="%23fff" stroke-linejoin="round" stroke-width="3"/><line y2="14" transform="translate(1006.5 1287.5)" fill="none" stroke="%23fff" stroke-width="3"/></g></svg>' ); } } &--external { &::after { background-image: url( 'data:image/svg+xml;utf-8,<svg xmlns="http://www.w3.org/2000/svg" width="31.408" height="33.919" viewBox="0 0 31.408 33.919"><g transform="translate(-1008.919 -965.628)"><g transform="translate(1046.174 2398.574) rotate(-135)"><path d="M0,0,7.879,7.879,0,15.759" transform="translate(1025.259 990.17) rotate(90)" fill="none" stroke="%23fff" stroke-width="3"/><line y2="16.032" transform="translate(1017.516 980.5)" fill="none" stroke="%23fff" stroke-width="3"/></g><path d="M10683.643,5322.808v10.24h-20.386v-21.215h7.446" transform="translate(-9652.838 -4335)" fill="none" stroke="%23fff" stroke-width="3"/></g></svg>' ); } } &--caret-right { &::after { background-image: url( 'data:image/svg+xml;utf-8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 19.129 34.016"><path d="M1454.5,1298.922l15.947,15.947-15.947,15.947" transform="translate(-1453.439 -1297.861)" fill="none" stroke="%23fff" stroke-width="3"/></svg>' ); } } }

Let's pause here. While we're keeping our SCSS as tidy as possible by declaring the properties common to all buttons, and then only specifying the background SVGs on a per-class basis, it's already starting to look a bit unwieldy. That's that second downside to SVGs I mentioned before: having to use big, ugly markup in our CSS code.

Also, note how we're defining our fill and stroke colors inside the SVGs. At some point, browsers decided that the octothorpe ("#") that we all know and love in our hex colors was a security risk, and declared that they would no longer support data URLs that contained them. This leaves us with three options:

  1. Convert our data URLs from markup (like we have here) to base-64 encoded strings, but that makes them even less maintainable than before by completely obfuscating them; or
  2. Use rgba() or hsla() notation, which isn't always intuitive since many developers have been using hex for years; or
  3. Convert our octothorpes to their URL-encoded equivalents, "%23".

We're going to go with option number three, and work around that browser limitation. (I will mention here, however, that this technique will work with rgb(), hsla(), or any other valid color format, even CSS named colors. But please don't use CSS named colors in production code.)

Moving to maps

At this point, we only have three buttons fully declared. But I don't like them just dumped in the code like this. If we needed to use those same icons elsewhere, we'd have to copy and paste the SVG markup, or else we could assign them to variables (either Sass or CSS custom properties), and reuse them that way. But I'm going to go for what's behind door number three, and switch to using one of Sass' greatest features: maps.

If you're not familiar with Sass maps, they are, in essence, the Sass version of an associative array. Instead of a numerically-indexed array of items, we can assign a name (a key, if you will) so that we can retrieve them by something logical and easily remembered. So let's build a Sass map of our three icons:

$icons: ( 'download': '<svg xmlns="http://www.w3.org/2000/svg" width="30.544" height="25.294" viewBox="0 0 30.544 25.294"><g transform="translate(-991.366 -1287.5)"><path d="M1454.5,1298.922l6.881,6.881-6.881,6.881" transform="translate(2312.404 -157.556) rotate(90)" fill="none" stroke="%23fff" stroke-width="3"/><path d="M8853.866,5633.57v9.724h27.544v-9.724" transform="translate(-7861 -4332)" fill="none" stroke="%23fff" stroke-linejoin="round" stroke-width="3"/><line y2="14" transform="translate(1006.5 1287.5)" fill="none" stroke="%23fff" stroke-width="3"/></g></svg>', 'external': '<svg xmlns="http://www.w3.org/2000/svg" width="31.408" height="33.919" viewBox="0 0 31.408 33.919"><g transform="translate(-1008.919 -965.628)"><g transform="translate(1046.174 2398.574) rotate(-135)"><path d="M0,0,7.879,7.879,0,15.759" transform="translate(1025.259 990.17) rotate(90)" fill="none" stroke="%23fff" stroke-width="3"/><line y2="16.032" transform="translate(1017.516 980.5)" fill="none" stroke="%23fff" stroke-width="3"/></g><path d="M10683.643,5322.808v10.24h-20.386v-21.215h7.446" transform="translate(-9652.838 -4335)" fill="none" stroke="%23fff" stroke-width="3"/></g></svg>', 'caret-right': '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 19.129 34.016"><path d="M1454.5,1298.922l15.947,15.947-15.947,15.947" transform="translate(-1453.439 -1297.861)" fill="none" stroke="%23fff" stroke-width="3"/></svg>', );

There are two things to note here: We didn't include the data:image/svg+xml;utf-8, string in any of those icons, only the SVG markup itself. That string is going to be the same every single time we need to use these icons, so why repeat ourselves and run the risk of making a mistake? Let's instead define it as its own string and prepend it to the icon markup when needed:

$data-svg-prefix: 'data:image/svg+xml;utf-8,';

The other thing to note is that we aren't actually making our SVG any prettier; there's no way to do that. What we are doing is pulling all that ugliness out of the code we're working on a day-to-day basis so we don't have to look at all that visual clutter as much. Heck, we could even put it in its own partial that we only have to touch when we need to add more icons. Out of sight, out of mind!

So now, let's use our map. Going back to our button code, we can now replace those icon literals with pulling them from the icon map instead:

&--download {
  &::after {
    background-image: url( $data-svg-prefix + map-get( $icons, 'download' ) );
  }
}

&--external {
  &::after {
    background-image: url( $data-svg-prefix + map-get( $icons, 'external' ) );
  }
}

&--next {
  &::after {
    background-image: url( $data-svg-prefix + map-get( $icons, 'caret-right' ) );
  }
}

Already, that's looking much better. We've started abstracting out our icons in a way that keeps our code readable and maintainable. And if that were the only challenge, we'd be done. But in the real-world project that inspired this article, we had another wrinkle: different colors.

Our buttons are a solid color which turns to a darker version of that color on their hover state. But what if we want "ghost" buttons instead, that turn into solid colors on hover? In this case, white icons would be invisible for buttons that appear on white backgrounds (and probably look wrong on non-white backgrounds). What we're going to need are two variations of each icon: the white one for the hover state, and one that matches the button's border and text color for the non-hover state.

Let's update our button's base CSS to turn it from a solid button into a ghost button that turns solid on hover. And we'll need to adjust the pseudo-elements for our icons, too, so we can swap them out on hover as well.

.button {
  appearance: none;
  background: none;
  border: 3px solid #d95a2b;
  border-radius: 100em;
  color: #d95a2b;
  cursor: pointer;
  display: inline-block;
  font-size: 18px;
  font-weight: bold;
  line-height: 1;
  padding: 1em calc( 1.5em + 32px ) 0.9em 1.5em;
  position: relative;
  text-align: center;
  text-transform: uppercase;
  transition: 200ms ease-in-out;
  transition-property: background-color, color;

  &:hover,
  &:focus,
  &:active {
    background: #d95a2b;
    color: #fff;
  }
}

Now we need to create our different-colored icons. One possible solution is to add the color variations directly to our map... somehow. We can either add new different-colored icons as additional items in our one-dimensional map, or make our map two-dimensional.

One-Dimensional Map:

$icons: ( 'download-white': '<svg xmlns="http://www.w3.org/2000/svg" width="30.544" height="25.294" viewBox="0 0 30.544 25.294"><g transform="translate(-991.366 -1287.5)"><path d="M1454.5,1298.922l6.881,6.881-6.881,6.881" transform="translate(2312.404 -157.556) rotate(90)" fill="none" stroke="%23fff" stroke-width="3"/><path d="M8853.866,5633.57v9.724h27.544v-9.724" transform="translate(-7861 -4332)" fill="none" stroke="%23fff" stroke-linejoin="round" stroke-width="3"/><line y2="14" transform="translate(1006.5 1287.5)" fill="none" stroke="%23fff" stroke-width="3"/></g></svg>', 'download-orange': '<svg xmlns="http://www.w3.org/2000/svg" width="30.544" height="25.294" viewBox="0 0 30.544 25.294"><g transform="translate(-991.366 -1287.5)"><path d="M1454.5,1298.922l6.881,6.881-6.881,6.881" transform="translate(2312.404 -157.556) rotate(90)" fill="none" stroke="%23d95a2b" stroke-width="3"/><path d="M8853.866,5633.57v9.724h27.544v-9.724" transform="translate(-7861 -4332)" fill="none" stroke="%23d95a2b" stroke-linejoin="round" stroke-width="3"/><line y2="14" transform="translate(1006.5 1287.5)" fill="none" stroke="%23d95a2b" stroke-width="3"/></g></svg>', );

Two-Dimensional Map:

$icons: ( 'download': ( 'white': '<svg xmlns="http://www.w3.org/2000/svg" width="30.544" height="25.294" viewBox="0 0 30.544 25.294"><g transform="translate(-991.366 -1287.5)"><path d="M1454.5,1298.922l6.881,6.881-6.881,6.881" transform="translate(2312.404 -157.556) rotate(90)" fill="none" stroke="%23fff" stroke-width="3"/><path d="M8853.866,5633.57v9.724h27.544v-9.724" transform="translate(-7861 -4332)" fill="none" stroke="%23fff" stroke-linejoin="round" stroke-width="3"/><line y2="14" transform="translate(1006.5 1287.5)" fill="none" stroke="%23fff" stroke-width="3"/></g></svg>', 'orange': '<svg xmlns="http://www.w3.org/2000/svg" width="30.544" height="25.294" viewBox="0 0 30.544 25.294"><g transform="translate(-991.366 -1287.5)"><path d="M1454.5,1298.922l6.881,6.881-6.881,6.881" transform="translate(2312.404 -157.556) rotate(90)" fill="none" stroke="%23d95a2b" stroke-width="3"/><path d="M8853.866,5633.57v9.724h27.544v-9.724" transform="translate(-7861 -4332)" fill="none" stroke="%23d95a2b" stroke-linejoin="round" stroke-width="3"/><line y2="14" transform="translate(1006.5 1287.5)" fill="none" stroke="%23d95a2b" stroke-width="3"/></g></svg>', ), );

Either way, this is problematic. By just adding one additional color, we're going to double our maintenance efforts. Need to change the existing download icon with a different one? We need to manually create each color variation to add it to the map. Need a third color? Now you've just tripled your maintenance costs. I'm not even going to get into the code to retrieve values from a multi-dimensional Sass map because that's not going to serve our ultimate goal here. Instead, we're just going to move on.

Enter string replacement

Aside from maps, the utility of Sass in this article comes from how we can use it to make CSS behave more like a programming language. Sass has built-in functions (like map-get(), which we've already seen), and it allows us to write our own.

Sass also has a bunch of string functions built-in, but inexplicably, a string replacement function isn't one of them. That's too bad, as its usefulness is obvious. But all is not lost.

Hugo Giraudel gave us a Sass version of str-replace() here on CSS-Tricks in 2014. We can use that here to create one version of our icons in our Sass map, using a placeholder for our color values. Let's add that function to our own code:

@function str-replace( $string, $search, $replace: '' ) {
  $index: str-index( $string, $search );

  @if $index {
    @return str-slice( $string, 1, $index - 1 ) + $replace +
      str-replace( str-slice( $string, $index + str-length( $search ) ), $search, $replace );
  }

  @return $string;
}

Next, we'll update our original Sass icon map (the one with only the white versions of our icons) to replace the white with a placeholder (%%COLOR%%) that we can swap out with whatever color we call for, on demand.

$icons: ( 'download': '<svg xmlns="http://www.w3.org/2000/svg" width="30.544" height="25.294" viewBox="0 0 30.544 25.294"><g transform="translate(-991.366 -1287.5)"><path d="M1454.5,1298.922l6.881,6.881-6.881,6.881" transform="translate(2312.404 -157.556) rotate(90)" fill="none" stroke="%%COLOR%%" stroke-width="3"/><path d="M8853.866,5633.57v9.724h27.544v-9.724" transform="translate(-7861 -4332)" fill="none" stroke="%%COLOR%%" stroke-linejoin="round" stroke-width="3"/><line y2="14" transform="translate(1006.5 1287.5)" fill="none" stroke="%%COLOR%%" stroke-width="3"/></g></svg>', 'external': '<svg xmlns="http://www.w3.org/2000/svg" width="31.408" height="33.919" viewBox="0 0 31.408 33.919"><g transform="translate(-1008.919 -965.628)"><g transform="translate(1046.174 2398.574) rotate(-135)"><path d="M0,0,7.879,7.879,0,15.759" transform="translate(1025.259 990.17) rotate(90)" fill="none" stroke="%%COLOR%%" stroke-width="3"/><line y2="16.032" transform="translate(1017.516 980.5)" fill="none" stroke="%%COLOR%%" stroke-width="3"/></g><path d="M10683.643,5322.808v10.24h-20.386v-21.215h7.446" transform="translate(-9652.838 -4335)" fill="none" stroke="%%COLOR%%" stroke-width="3"/></g></svg>', 'caret-right': '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 19.129 34.016"><path d="M1454.5,1298.922l15.947,15.947-15.947,15.947" transform="translate(-1453.439 -1297.861)" fill="none" stroke="%%COLOR%%" stroke-width="3"/></svg>', );

But if we were going to try and fetch these icons using just our new str-replace() function and Sass' built-in map-get() function, we'd end up with something big and ugly. I'd rather tie these two together with one more function that makes calling the icon we want, in the color we want, as simple as one function with two parameters (and because I'm particularly lazy, we'll even make the color default to white, so we can omit that parameter if that's the color icon we want).
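Just to illustrate the verbosity we'd otherwise be signing up for, a single inline call might look something like this (hypothetical, purely for comparison):

// Without a wrapper function, every icon would need something like this,
// and we'd have to remember the %23-encoded color ourselves every time:
background-image: url( str-replace( $data-svg-prefix + map-get( $icons, 'download' ), '%%COLOR%%', '%23fff' ) );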

Because we're getting an icon, it's a "getter" function, and so we'll call it get-icon():

@function get-icon( $icon, $color: #fff ) {
  $icon: map-get( $icons, $icon );
  $placeholder: '%%COLOR%%';
  $data-uri: str-replace( url( $data-svg-prefix + $icon ), $placeholder, $color );

  @return str-replace( $data-uri, '#', '%23' );
}

Remember where we said that browsers won't render data URLs that have octothorpes in them? Yeah, we're str-replace()ing that too so we don't have to remember to pass along "%23" in our color hex codes.

Side note: I have a Sass function for abstracting colors too, but since that's outside the scope of this article, I'll refer you to my get-color() gist to peruse at your leisure.

The result

Now that we have our get-icon() function, let's put it to use. Going back to our button code, we can replace our map-get() function with our new icon getter:

&--download {
  &::before {
    background-image: get-icon( 'download', #d95a2b );
  }

  &::after {
    background-image: get-icon( 'download', #fff ); // The ", #fff" isn't strictly necessary, because white is already our default
  }
}

&--external {
  &::before {
    background-image: get-icon( 'external', #d95a2b );
  }

  &::after {
    background-image: get-icon( 'external' );
  }
}

&--next {
  &::before {
    background-image: get-icon( 'caret-right', #d95a2b );
  }

  &::after {
    background-image: get-icon( 'caret-right' );
  }
}

So much easier, isn't it? Now we can call any icon we've defined, with any color we need. All with simple, clean, logical code.

  • We only ever have to declare an SVG in one place.
  • We have a function that gets that icon in whatever color we give it.
  • Everything is abstracted out to a logical function that does exactly what it looks like it will do: get X icon in Y color.
Making it fool-proof

The one thing we're lacking is error-checking. I'm a huge believer in failing silently... or at the very least, failing in a way that is invisible to the user yet clearly tells the developer what is wrong and how to fix it. (For that reason, I should be using unit tests way more than I do, but that's a topic for another day.)

One way we have already reduced our function's propensity for errors is by setting a default color (in this case, white). So if the developer using get-icon() forgets to add a color, no worries; the icon will be white, and if that's not what the developer wanted, it's obvious and easily fixed.

But wait, what if that second parameter isn't a color? As in, the developer entered a color incorrectly, so that it's no longer recognized as a color by the Sass processor?

Fortunately, we can check what type of value the $color variable is:

@function get-icon( $icon, $color: #fff ) {
  @if 'color' != type-of( $color ) {
    @warn 'The requested color - "' + $color + '" - was not recognized as a Sass color value.';
    @return null;
  }

  $icon: map-get( $icons, $icon );
  $placeholder: '%%COLOR%%';
  $data-uri: str-replace( url( $data-svg-prefix + $icon ), $placeholder, $color );

  @return str-replace( $data-uri, '#', '%23' );
}

Now if we tried to enter a nonsensical color value:

&--download { &::before { background-image: get-icon( 'download', ce-nest-pas-un-couleur ); } }

...we get output explaining our error:

Line 25 CSS: The requested color - "ce-nest-pas-un-couleur" - was not recognized as a Sass color value.

...and the processing stops.

But what if the developer doesn't declare the icon? Or, more likely, declares an icon that doesn't exist in the Sass map? Serving a default icon doesn't really make sense in this scenario, which is why the icon is a mandatory parameter in the first place. But just to make sure we are calling an icon, and it is valid, we're going to add another check:

@function get-icon( $icon, $color: #fff ) {
  @if 'color' != type-of( $color ) {
    @warn 'The requested color - "' + $color + '" - was not recognized as a Sass color value.';
    @return null;
  }

  @if map-has-key( $icons, $icon ) {
    $icon: map-get( $icons, $icon );
    $placeholder: '%%COLOR%%';
    $data-uri: str-replace( url( $data-svg-prefix + $icon ), $placeholder, $color );

    @return str-replace( $data-uri, '#', '%23' );
  }

  @warn 'The requested icon - "' + $icon + '" - is not defined in the $icons map.';
  @return null;
}

We've wrapped the meat of the function inside an @if statement that checks if the map has the key provided. If so (which is the situation we're hoping for), the processed data URL is returned. The function stops right then and there — at the @return — which is why we don't need an @else statement.

But if our icon isn't found, then null is returned, along with a warning in the console output identifying the problem request, plus the partial filename and line number. Now we know exactly what's wrong, where it happened, and what needs fixing.

So if we were to accidentally enter:

&--download { &::before { background-image: get-icon( 'ce-nest-pas-une-icône', #d95a2b ); } }

...we would see the output in our console, where our Sass process was watching and running:

Line 32 CSS: The requested icon - "ce-nest-pas-une-icône" - is not defined in the $icons map.

As for the button itself, the area where the icon would be will be blank. Not as good as having our desired icon there, but soooo much better than a broken image graphic or some such.

Conclusion

After all of that, let's take a look at our final, processed CSS:

.button { -webkit-appearance: none; -moz-appearance: none; appearance: none; background: none; border: 3px solid #d95a2b; border-radius: 100em; color: #d95a2b; cursor: pointer; display: inline-block; font-size: 18px; font-weight: 700; line-height: 1; padding: 1em calc( 1.5em + 32px ) 0.9em 1.5em; position: relative; text-align: center; text-transform: uppercase; transition: 200ms ease-in-out; transition-property: background-color, color; } .button:hover, .button:active, .button:focus { background: #d95a2b; color: #fff; } .button::before, .button::after { background: center / 24px 24px no-repeat; border-radius: 100em; bottom: 0; content: ''; position: absolute; right: 0; top: 0; width: 48px; } .button::after { opacity: 0; transition: opacity 200ms ease-in-out; } .button:hover::after, .button:focus::after, .button:active::after { opacity: 1; } .button--download::before { background-image: url('data:image/svg+xml;utf-8,<svg xmlns="http://www.w3.org/2000/svg" width="30.544" height="25.294" viewBox="0 0 30.544 25.294"><g transform="translate(-991.366 -1287.5)"><path d="M1454.5,1298.922l6.881,6.881-6.881,6.881" transform="translate(2312.404 -157.556) rotate(90)" fill="none" stroke="%23d95a2b" stroke-width="3"/><path d="M8853.866,5633.57v9.724h27.544v-9.724" transform="translate(-7861 -4332)" fill="none" stroke="%23d95a2b" stroke-linejoin="round" stroke-width="3"/><line y2="14" transform="translate(1006.5 1287.5)" fill="none" stroke="%23d95a2b" stroke-width="3"/></g></svg>'); } .button--download::after { background-image: url('data:image/svg+xml;utf-8,<svg xmlns="http://www.w3.org/2000/svg" width="30.544" height="25.294" viewBox="0 0 30.544 25.294"><g transform="translate(-991.366 -1287.5)"><path d="M1454.5,1298.922l6.881,6.881-6.881,6.881" transform="translate(2312.404 -157.556) rotate(90)" fill="none" stroke="%23fff" stroke-width="3"/><path d="M8853.866,5633.57v9.724h27.544v-9.724" transform="translate(-7861 -4332)" fill="none" stroke="%23fff" stroke-linejoin="round" stroke-width="3"/><line y2="14" transform="translate(1006.5 1287.5)" fill="none" stroke="%23fff" stroke-width="3"/></g></svg>'); } .button--external::before { background-image: url('data:image/svg+xml;utf-8,<svg xmlns="http://www.w3.org/2000/svg" width="31.408" height="33.919" viewBox="0 0 31.408 33.919"><g transform="translate(-1008.919 -965.628)"><g transform="translate(1046.174 2398.574) rotate(-135)"><path d="M0,0,7.879,7.879,0,15.759" transform="translate(1025.259 990.17) rotate(90)" fill="none" stroke="%23d95a2b" stroke-width="3"/><line y2="16.032" transform="translate(1017.516 980.5)" fill="none" stroke="%23d95a2b" stroke-width="3"/></g><path d="M10683.643,5322.808v10.24h-20.386v-21.215h7.446" transform="translate(-9652.838 -4335)" fill="none" stroke="%23d95a2b" stroke-width="3"/></g></svg>'); } .button--external::after { background-image: url('data:image/svg+xml;utf-8,<svg xmlns="http://www.w3.org/2000/svg" width="31.408" height="33.919" viewBox="0 0 31.408 33.919"><g transform="translate(-1008.919 -965.628)"><g transform="translate(1046.174 2398.574) rotate(-135)"><path d="M0,0,7.879,7.879,0,15.759" transform="translate(1025.259 990.17) rotate(90)" fill="none" stroke="%23fff" stroke-width="3"/><line y2="16.032" transform="translate(1017.516 980.5)" fill="none" stroke="%23fff" stroke-width="3"/></g><path d="M10683.643,5322.808v10.24h-20.386v-21.215h7.446" transform="translate(-9652.838 -4335)" fill="none" stroke="%23fff" stroke-width="3"/></g></svg>'); } .button--next::before { background-image: 
url('data:image/svg+xml;utf-8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 19.129 34.016"><path d="M1454.5,1298.922l15.947,15.947-15.947,15.947" transform="translate(-1453.439 -1297.861)" fill="none" stroke="%23d95a2b" stroke-width="3"/></svg>'); } .button--next::after { background-image: url('data:image/svg+xml;utf-8,<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 19.129 34.016"><path d="M1454.5,1298.922l15.947,15.947-15.947,15.947" transform="translate(-1453.439 -1297.861)" fill="none" stroke="%23fff" stroke-width="3"/></svg>'); }

Yikes, still ugly, but it's ugliness that becomes the browser's problem, not ours.

I've put all this together in CodePen for you to fork and experiment. The long goal for this mini-project is to create a PostCSS plugin to do all of this. This would increase the availability of this technique to everyone regardless of whether they were using a CSS preprocessor or not, or which preprocessor they're using.

"If I have seen further it is by standing on the shoulders of Giants."
– Isaac Newton, 1675

Of course we can't talk about Sass and string replacement and (especially) SVGs without gratefully acknowledging the contributions of the others who've inspired this technique.

The post Creating a Maintainable Icon System with Sass appeared first on CSS-Tricks.

Can you rotate the cursor in CSS?

CSS-Tricks - Wed, 08/28/2019 - 4:05am

Kinda! There is no simple or standard way to do it, but it's possible. You can change the cursor to different built-in native versions with the CSS cursor property, but that doesn't help much here. You can also use that property to set a static image as the cursor. But again, that doesn't help much because you can't rotate it once it's there.

The trick is to totally hide the cursor with cursor: none; and replace it with your own element.

Here's an example of that:

See the Pen
Move fake mouse with JavaScript
by Chris Coyier (@chriscoyier)
on CodePen.

That's not rotating yet. But now that the cursor is just some element on the page, CSS's transform: rotate(); is fully capable of that job. Some math is required.

I'll leave that to Aaron Iker's really fun demo:

See the Pen
Mouse cursor pointing to cta
by Aaron Iker (@aaroniker)
on CodePen.
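The gist of that math is something like this (made-up selectors, not Aaron's actual code — the real cursor is hidden with cursor: none in the CSS):

const fakeCursor = document.querySelector('.fake-cursor');
const target = document.querySelector('.cta');

document.addEventListener('mousemove', (e) => {
  // Move the stand-in element to the pointer position
  fakeCursor.style.left = `${e.clientX}px`;
  fakeCursor.style.top = `${e.clientY}px`;

  // Angle from the pointer to the center of the call-to-action
  const rect = target.getBoundingClientRect();
  const angle = Math.atan2(
    rect.top + rect.height / 2 - e.clientY,
    rect.left + rect.width / 2 - e.clientX
  );

  fakeCursor.style.transform = `rotate(${angle}rad)`;
});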

Is this an accessibility problem? Something about it makes me think it might be. It's a little extra motion where you aren't expecting it, and perhaps a little disorienting when an element you might rely on for a form of stability starts moving on you. It's really only something you'd do as a limited-use novelty, and while respecting the prefers-reduced-motion media query. You could also keep the original cursor and do something under it, as Jackson Callaway has done here.

The post Can you rotate the cursor in CSS? appeared first on CSS-Tricks.

Going Buildless

CSS-Tricks - Tue, 08/27/2019 - 4:44am

I'm in a long distance relationship. That means I’m on a plane to England every few weeks, and every time I'm on that plane, I think about how nice it would be to read some Reddit posts. What I could do is find a Reddit app that lets me cache posts for offline (I’m sure there is one out there), or I could take the opportunity to write something myself and have fun using the latest and greatest technologies and web standards out there!

On top of that, there has been a lot of discussion around what I like to call going buildless, which I think is a really fascinating development in which production projects are created without using a build process (like a bundler).

This post is also a homage to a couple of awesome people in the web community who are making some great things possible. I'll be linking to all that stuff as we move along. Do note that this won't be a step-by-step tutorial, but if you want to check out the code, you can find the finished project on GitHub.

Our end result should look something like this:

Let's dive in and install a few dependencies:

npm i @babel/core babel-loader @babel/preset-env @babel/preset-react webpack webpack-cli react react-dom redux react-redux html-webpack-plugin are-you-tired-yet html-loader webpack-dev-server

I'm kidding.

We're not gonna use any of that.

We're going to try and avoid as much tooling and dependencies as we can to keep the entry barrier low. What we will be using is:

  • LitElement - LitElement is our component model. It's easy to use, lightweight, close to the metal, and leverages web components.
  • @vaadin/router - This is a really small (< 7kb) router that has an awesome developer experience that I cannot recommend enough.
  • @pika/web - This will help us get our modules together for easy development.
  • es-dev-server - This is a simple dev server for modern web development workflows, made by us at open-wc. Any HTTP server will do though, so feel free to bring your own.

That's it! We'll also be using a few browser standards, namely: es modules, web components, import-maps, kv-storage and service-worker.

Let's go ahead and install our dependencies:

npm i -S lit-element @vaadin/router
npm i -D @pika/web es-dev-server

We'll also add a postinstall hook to our package.json that's going to run Pika for us:

"scripts": { "start": "es-dev-server", "postinstall": "pika-web" } &#x1f42d; Pika

Pika is a project by Fred K. Schott that aims to bring that nostalgic 2014 simplicity to 2019 web development. Fred is up to all sorts of awesome stuff. For one, he made pika.dev, which lets you easily search for modern JavaScript packages on npm. He also recently gave his talk Reimagining the Registry at DinosaurJS 2019, which I highly recommend you watch.

Pika takes things even one step further. If we run pika-web, it'll install our dependencies as single JavaScript files to a new web_modules/ directory. If your dependency exports an ES "module" entrypoint in its package.json manifest, Pika supports it. If you have any transitive dependencies, Pika will create separate chunks for any shared code among your dependencies.

What this means is that, in our case, our output will look something like:

web_modules/
├─ lit-element.js
└─ @vaadin/
   └─ router.js

Sweet! That's it. We have our dependencies ready to go as single JavaScript module files, and this is going to make things really convenient for us later on in this post, so stay tuned!

📥 Import maps

Alright! Now that we've got our dependencies sorted out, let's get to work. We'll make an index.html that'll look something like this:

<html>
  <!-- head, etc. -->
  <body>
    <reddit-pwa-app></reddit-pwa-app>
    <script src="./src/reddit-pwa-app.js" type="module"></script>
  </body>
</html>

And reddit-pwa-app.js:

import { LitElement, html } from 'lit-element';

class RedditPwaApp extends LitElement {

  // ...

  render() {
    return html`
      <h1>Hello world!</h1>
    `;
  }
}

customElements.define('reddit-pwa-app', RedditPwaApp);

We're off to a great start. Let's try and see how this looks in the browser so far, so let's start our server, open the browser and... What's this? An error?

Oh boy.

And we've barely even started. Alright, let's take a look. The problem here is that our module specifiers are bare. They are bare module specifiers. What this means is that there are no paths specified and no file extensions; they're just... pretty bare. Our browser has no idea what to do with this, so it'll throw an error.

import { LitElement, html } from 'lit-element';    // <-- bare module specifier
import { Router } from '@vaadin/router';           // <-- bare module specifier
import { foo } from './bar.js';                     // <-- not bare!
import { html } from 'https://unpkg.com/lit-html';  // <-- not bare!

Naturally, we could use some tools for this, like webpack, or rollup, or a dev server that rewrites the bare module specifiers to something meaningful to browsers, so we can load our imports. But that means we have to bring in a bunch of tooling, dive into configuration, and we're trying to stay minimal here. We just want to write code! In order to solve this, we're going to take a look at import maps.

Import maps are a new proposal that lets you control the behavior of JavaScript imports. Using an import map, we can control what URLs get fetched by JavaScript import statements and import() expressions, and allow this mapping to be reused in non-import contexts. This is great for several reasons:

  • It allows our bare module specifiers to work.
  • It provides a fallback resolution so that import $ from "jquery"; can try to go to a CDN first, but fall back to a local version if the CDN server is down.
  • It enables polyfilling of (or other control over) built-in modules. (More on that later, hang on tight!)
  • Solves the nested dependency problem. (Go read that blog!)

Sounds pretty sweet, no? Import maps are currently available in Chrome 75+ behind a flag, and with that knowledge in mind, let's go to our index.html, and add an import map to our <head>:

<head> <script type="importmap"> { "imports": { "@vaadin/router": "/web_modules/@vaadin/router.js", "lit-element": "/web_modules/lit-element.js" } } </script> </head>

If we go back to our browser, and refresh our page, we'll have no more errors, and we should see our <h1>Hello world!</h1> on our screen.

Import maps are an incredibly interesting new standard, and definitely something you should be keeping your eyes on. If you're interested in experimenting with them and generating your own import map based on a yarn.lock file, you can try our open-wc import-maps-generate package and play around. I'm really excited to see what people will develop in combination with import maps.

📡 Service Worker

Alright, we're going to skip ahead in time a little bit. We've got our dependencies working, we have our router set up, and we've done some API calls to get the data from Reddit and display it on our screen. Going over all of the code is a bit out of scope for this post, but remember that you can find all the code in the GitHub repo if you want to read the implementation details.

Since we're making this app so we can read Reddit threads on the airplane, it would be great if our application worked offline and if we could somehow save some posts to read.

Service workers are a kind of JavaScript Worker that runs in the background. You can visualize it as sitting in between the web page, and the network. Whenever your web page makes a request, it goes through the service worker first. This means that we can intercept the request, and do stuff with it! For example, we can let the request go through to the network to get a response, and cache it when it returns so we can use that cached data later when we might be offline. We can also use a service worker to precache our assets. What this means is that we can precache any critical assets our application may need in order to work offline. If we have no network connection, we can simply fall back to the assets we cached, and still have a working (albeit offline) application.
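The repo handles this in its own way, but a "network first, fall back to cache" fetch handler along those lines might look something like this sketch (CACHENAME is the same cache name used by the install handler shown below):

self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request)
      .then((response) => {
        // Stash a copy of the successful response for later, offline use
        const copy = response.clone();
        caches.open(CACHENAME).then((cache) => cache.put(event.request, copy));
        return response;
      })
      // Offline (or the request failed)? Try the cache instead
      .catch(() => caches.match(event.request))
  );
});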

If you're interested in learning more about Progressive Web Apps and service worker, I highly recommend you read The Offline Cookbook, by Jake Archibald, as well as this video tutorial series by Jad Joubran.

Let's go ahead and implement a service worker. In our index.html, we'll add the following snippet:

<script>
  if ('serviceWorker' in navigator) {
    window.addEventListener('load', () => {
      navigator.serviceWorker.register('./sw.js').then(() => {
        console.log('ServiceWorker registered!');
      }, (err) => {
        console.log('ServiceWorker registration failed: ', err);
      });
    });
  }
</script>

We'll also add a sw.js file to the root of our project. So we're about to precache the assets of our app, and this is where Pika just made life really easy for us. If you'll take a look at the install handler in the service worker file:

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHENAME).then((cache) => {
      return cache.addAll([
        '/',
        './web_modules/lit-element.js',
        './web_modules/@vaadin/router.js',
        './src/reddit-pwa-app.js',
        './src/reddit-pwa-comment.js',
        './src/reddit-pwa-search.js',
        './src/reddit-pwa-subreddit.js',
        './src/reddit-pwa-thread.js',
        './src/utils.js',
      ]);
    })
  );
});

You'll find that we're totally in control of our assets, and we have a nice, clean list of files we need in order to work offline.

📴 Going offline

Right. Now that we've cached our assets to work offline, it would be excellent if we could actually save some posts that we can read while offline. There are many ways that lead to Rome, but since we're living on the edge a little bit, we're going to go with: Kv-storage!

📦 Built-in Modules

There are a few things to talk about here. Kv-storage is a built-in module. Built-in modules are very similar to regular JavaScript modules, except they ship with the browser. It's good to note that while built-in modules ship with the browser, they are not exposed on the global scope, and are namespaced with std: (Yes, really.). This has a few advantages: they won't add any overhead to starting up a new JavaScript runtime context (e.g. a new tab, worker, or service worker), and they won't consume any memory or CPU unless they're actually imported, as well as avoid naming collisions with existing code.

Other interesting, if somewhat controversial, built-in module proposals are the std-toast element and the std-switch element.

🗃 Kv-storage

Alright, with that out of the way, let's talk about kv-storage. Kv-storage (or "key value storage") is layered on top of IndexedDB and fairly similar to localStorage, except for a few major differences.

The motivation for kv-storage is that localStorage is synchronous, which can lead to bad performance and syncing issues. It's also limited exclusively to String key/value pairs. The alternative, IndexedDB, is... hard to use. The reason it's so hard to use is that it predates promises, and this leads to a, well, pretty bad developer experience. Not fun. Kv-storage, however, is a lot of fun, asynchronous, and easy to use! Consider the following example:

import { storage, /* StorageArea */ } from "std:kv-storage";

(async () => {
  await storage.set("mycat", "Tom");
  console.log(await storage.get("mycat")); // Tom
})();

Notice how we're importing from std:kv-storage? This import specifier is bare as well, but in this case it's okay because it actually ships with the browser.

Pretty neat. This is perfect for adding a "save for offline" button: we simply store the JSON data for a Reddit thread and get it back out when we need it.

// reddit-pwa-thread.js:52:
const savedPosts = new StorageArea("saved-posts");

// ...

async saveForOffline() {
  await savedPosts.set(this.location.params.id, this.thread); // id of the post + thread as json
  this.isPostSaved = true;
}

So now if we click the "save for offline" button, and we go to the DevTools "Application" tab, we can see a kv-storage:saved-posts that holds the JSON data for this post:

And if we go back to our search page, we'll have a list of saved posts with the post we just saved:
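Getting those saved posts back out for that list is just as pleasant; the kv-storage spec gives StorageArea async iterators, so it could look roughly like this (a sketch, not the repo's exact code):

async function getSavedPosts() {
  const posts = [];

  // StorageArea exposes async iterators over its keys, values, and entries
  for await (const [id, thread] of savedPosts.entries()) {
    posts.push({ id, thread });
  }

  return posts;
}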

🔮 Polyfilling

Excellent. However, we're about to run into another problem here. Living on the edge is fun, but also dangerous. The problem that we're hitting is that, at the time of writing, kv-storage is only implemented in Chrome behind a flag. That's not great. Fortunately, there's a polyfill available, and at the same time we get to show off yet another really useful feature of import maps: polyfilling!

First things first, let's install the kv-storage-polyfill:

npm i -S kv-storage-polyfill

Note that our postinstall hook will run Pika for us again.

Let’s also add the following to our import map in our index.html:

<script type="importmap"> { "imports": { "@vaadin/router": "/web_modules/@vaadin/router.js", "lit-element": "/web_modules/lit-element.js", "/web_modules/kv-storage-polyfill.js": [ "std:kv-storage", "/web_modules/kv-storage-polyfill.js" ] } } </script>

What happens here is that whenever /web_modules/kv-storage-polyfill.js is requested or imported, the browser will first try to see if std:kv-storage is available; however, if that fails, it'll load /web_modules/kv-storage-polyfill.js instead.

So in code, if we import:

import { StorageArea } from '/web_modules/kv-storage-polyfill.js';

This is what will happen:

"/web_modules/kv-storage-polyfill.js": [ // when I'm requested "std:kv-storage", // try me first! "/web_modules/kv-storage-polyfill.js" // or fallback to me ] &#x1f389; Conclusion

And we should now have a simple, functioning PWA with minimal dependencies. There are a few nitpicks to this project that we could complain about, and they'd all likely be fair. For example, we probably could've gone without using Pika, but it does make life really easy for us. You could have made the same argument about adding a webpack configuration, but you'd have missed the point. The point here is to make a fun application, while using some of the latest features, drop some buzzwords, and have a low barrier for entry. As Fred Schott would say: "In 2019, you should use a bundler because you want to, not because you need to."

If you're interested in nitpicking, however, you can read this great discussion about using webpack vs. Pika vs. buildless, and you'll get some great insights from Sean Larkin of the webpack core team himself, as well as Fred K. Schott, creator of Pika.

I hope you enjoyed this blog post, and I hope you learned something or discovered some new interesting people to follow. There are lots of exciting developments happening in this space right now, and I hope I got you as excited about them as I am. If you have any questions, comments, feedback, or nitpicks, feel free to reach out to me on Twitter at @passle_ or @openwc, and don't forget to check out open-wc.org 😉.

Honorable Mentions

I'd like to give a few shout-outs to some very interesting people who are doing some great stuff and whom you may want to keep an eye on.

The post Going Buildless appeared first on CSS-Tricks.

More Flexible Online Stores with WooCommerce and Gutenberg Blocks

CSS-Tricks - Tue, 08/27/2019 - 4:43am

Blocks have become an indispensable component for managing content in WordPress since the Gutenberg editor was officially released earlier this year. Not only does WordPress include some nifty blocks right out of the box, but we're starting to see plugin developers take advantage of them and provide some interesting ones as well.

One of those plugins is WooCommerce. That makes a lot of sense, seeing as WooCommerce is part of the Automattic family of products.

WooCommerce Blocks

Blocks are not exactly new to WooCommerce. Well, blocks themselves are a relatively new concept, but WooCommerce was early on the bandwagon to leverage them as a plugin.

For example, the Newest Products block makes it trivial to showcase the most recently added products to a WooCommerce store catalog. In the past, that would have required manually entering and linking up each product. Now, all it takes is dropping a block into place and then on to the next task.

There are a whole slew of blocks like this in WooCommerce. So many, in fact, that making an online store itself has also become somewhat trivial. Sure, sure, e-commerce isn't easy. But at least the part about building a fully functional store that offers a great user experience is less of a worry.

Here are all of the blocks currently offered by WooCommerce:

  • Newest Products: Show off the most recently added products.
  • Best-Selling Products: Display top-sellers.
  • Product Categories List: Embeds an unordered list of all available product categories that link to category pages.
  • Products by Category: Create a custom list of products by searching product categories.
  • Featured Product: You guessed it! Same as a featured category, but linked to a specific product page.
  • Hand-Picked Products: Create a custom matrix of products from the catalog.
  • On Sale Products: Highlights only products that are currently marked as on sale from the full retail price.
  • Products by Attribute: This lets you put items together by product characteristics. For example, you could make a page just showing large clothing sizes.
  • Top Rated Products: This one gathers up all of your products with the best reviews.

Phew, that's a lot considering this all comes with a standard WooCommerce installation! Seriously, this opens up a huge range of possibilities for building store pages and driving sales.

New WooCommerce Blocks

If 11 blocks were not enough for you to do some pretty awesome things already, WooCommerce is actively developing more and has recently shipped two new ones.

The first one is Featured Category, which is an offshoot of Featured Product. Let's say you have introduced a collection of products filed under the same category. This block is a perfect way to spotlight that new category by dropping in a hero component. It takes the description and featured image of the category and places them into an attractive layout that would otherwise require time and development.

The second new block is Products by Tag which is another useful way to group products together on a page or post, just like the Products by Category block. Choose one or multiple tags and those products will display together in a grid.

WooCommerce has really opened the floodgates, giving both developers and store owners new and interesting ways to build an online shopping experience. These are the types of features you might expect to see in some proprietary enterprise-level e-commerce software, but they are freely available in WooCommerce, which is freely available to install into WordPress, which is freely available to anyone who wants a website.

Give WooCommerce a try and see how it can upgrade your online store.

The post More Flexible Online Stores with WooCommerce and Gutenberg Blocks appeared first on CSS-Tricks.

Reusable Popovers to Add a Little Pop

CSS-Tricks - Mon, 08/26/2019 - 5:12am

A popover is a transient view that shows up on top of content on the screen when a user clicks on a control button or within a defined area — for example, clicking an info icon on a specific list item to get the item details. Typically, a popover includes an arrow pointing to the location from which it emerged.

Popovers are great for situations when we want to show a temporary context to get the user's attention while they interact with a specific element on the screen. They provide additional context and instruction for users without having to clutter up a screen. Users can simply dismiss them by clicking the same control that opened them or by clicking outside the popover.

We’re going to look at a library called popper.js that allows us to create reusable popover components in the Vue framework. Popovers are the perfect type of component for a component-based system like Vue because they can be contained, encapsulated components that are maintained on their own, but used anywhere throughout an app.

Let’s dig in and get started.

But first: What’s the difference between a popover and tooltip?

Was the name "popover" throwing you for a loop? The truth is that popovers are a lot like tooltips, which are another common UI pattern for displaying additional context in a contained element. There are differences between them, though, so let’s briefly spell them out so we have a solid handle on what we’re building.

Tooltips:

  • Tooltips are meant to be exactly that: a hint or tip on what a tool or other interaction does. They are meant to clarify or help you use the content that they hover over, not add additional content.
  • Tooltips are typically only visible on hover. For that reason, if you need to be able to read the content while interacting with other parts of the page, then a tooltip will not work.

Popovers:

  • Popovers, on the other hand, can be much more verbose; they can include a header and many lines of text in the body.
  • Popovers are typically dismissible, whether by clicking on other parts of the page or by clicking the popover target a second time (depending on implementation). For that reason, you can set up a popover to allow you to interact with other elements on the page while still being able to read its content.

Popovers are most appropriate on larger screens and we’re most likely to encounter them in use cases such as:

Looking at those use cases, we can glean some requirements that make a good popover:

  1. Reusability: A popover should allow us to pass custom content to the popover.
  2. Dismissibility: A popover should be dismissible by clicking outside of it or pressing the Escape key.
  3. Positioning: A popover should reposition itself when the screen edge is reached.
  4. Interaction: A popover should allow the user to interact with the content in the popover.

I created an example to refer to as we go through the process of creating a component.

View Demo

OK, now that we’ve got a baseline understanding of popovers and what we’re building, let’s get into the step-by-step details for creating them using popper.js.

Step 1: Create the BasePopover component

Let’s start by creating a component that will be responsible for initializing and positioning the popover. We’ll call this component BasePopover.vue and, in the component template, we’ll render two elements:

  • Popover content: This is the element that will be responsible for rendering the content within the popover. For now we use a slot that will allow us to pass the content from the parent component responsible for rendering our popover (Requirement #1: Reusability).
  • Popover overlay: This is the element responsible for covering the content under the popover and preventing user from interacting with the elements outside the popover. It also allows us to close the popover when clicked (Requirement #2: Dismissibility).
// BasePopover.vue
<template>
  <div>
    <div
      ref="basePopoverContent"
      class="base-popover"
    >
      <slot />
    </div>
    <div
      ref="basePopoverOverlay"
      class="base-popover__overlay"
    />
  </div>
</template>

In the script section of the component:

  • we import popper.js (the library that takes care of the popover positioning), then
  • we receive the popoverOptions props, and finally
  • we set initial popperInstance to null (because initially we do not have any popover).

Let’s describe what the popoverOptions object contains:

  • popoverReference: This is an object in relation to which the popover will be positioned (usually element that triggers the popover).
  • placement: This is a popper.js placement option that specifies where the popover is displayed in relation to the popover reference element (the thing it is attached to).
  • offset: This is a popper.js offset modifier that allows us to adjust popover position by passing x- and y-coordinates.
import Popper from "popper.js"

export default {
  name: "BasePopover",
  props: {
    popoverOptions: {
      type: Object,
      required: true
    }
  },
  data() {
    return {
      popperInstance: null
    }
  }
}

Why do we need that? The popper.js library allows us to position an element in relation to another element with ease. It also works its magic when the popover gets to the edge of the screen, repositioning it so it always stays in the user's viewport (Requirement #3: Positioning).

Step 2: Initialize popper.js

Now that we have a BasePopover component skeleton, we will add a few methods that will be responsible for positioning and showing the popover.

In the initPopper method, we will start by creating a modifiers object that will be used to create a Popper instance. We set the options received from the parent component (placement and offset) to the corresponding fields in the modifiers object. All those fields are optional, which is why we first need to check for their existence.

Then, we initialize a new Popper instance by passing:

  • the popoverReference node (the element to which the popover is pointing: popoverReference ref)
  • the popper content node (the element containing the popover content: basePopoverContent ref)
  • the options object

We also set the preventOverflow option to prevent the popover from being positioned outside of the viewport. After initialization we set the popper instance to our popperInstance data property to have access to methods and properties provided by popper.js in the future.

methods: {
  ...
  initPopper() {
    const modifiers = {}
    const { popoverReference, offset, placement } = this.popoverOptions

    if (offset) {
      modifiers.offset = {
        offset
      }
    }

    if (placement) {
      modifiers.placement = placement
    }

    this.popperInstance = new Popper(
      popoverReference,
      this.$refs.basePopoverContent,
      {
        placement,
        modifiers: {
          ...modifiers,
          preventOverflow: {
            boundariesElement: "viewport"
          }
        }
      }
    )
  }
  ...
}

Now that we have our initPopper method ready, we need a place to invoke it. The best place for that is in the mounted lifecycle hook.

mounted() {
  this.initPopper()
  this.updateOverlayPosition()
}

As you can see, we are calling one more method in the mounted hook: the updateOverlayPosition method. This method is a safeguard used to reposition our overlay in case we have any other elements on the page that have absolute positioning (e.g. NavBar, SideBar). It makes sure the overlay always covers the full screen, preventing the user from interacting with any element except the popover and the overlay itself.

methods: {
  ...
  updateOverlayPosition() {
    const overlayElement = this.$refs.basePopoverOverlay;
    const overlayPosition = overlayElement.getBoundingClientRect();
    overlayElement.style.transform = `translate(-${overlayPosition.x}px, -${
      overlayPosition.y
    }px)`;
  }
  ...
}

Step 3: Destroy Popper

We have our popper initialized, but now we need a way to remove and dispose of it when it gets closed. There’s no need to keep it in the DOM at that point.

We want to close the popover when we click anywhere outside of it. We can do that by adding a click listener to the overlay because we made sure that the overlay always covers the whole screen under our popover.

<template>
  ...
  <div
    ref="basePopoverOverlay"
    class="base-popover__overlay"
    @click.stop="destroyPopover"
  />
  ...
</template>

Let’s create a method responsible for destroying the popover. In that method, we first check if the popperInstance actually exists and, if it does, we call popper's destroy method to make sure the popper instance is torn down. After that, we clean up our popperInstance data property by setting it to null and emit a closePopover event that will be handled in the component responsible for rendering the popover.

methods: {
  ...
  destroyPopover() {
    if (this.popperInstance) {
      this.popperInstance.destroy();
      this.popperInstance = null;
      this.$emit("closePopover");
    }
  }
  ...
}

Step 4: Render BasePopover component

OK, we have our popover ready to be rendered. We do that in our parent component, which will be responsible for managing the visibility of the popover and passing the content to it.

In the template, we need to have an element responsible for triggering our popover (popoverReference) and the BasePopover component. The BasePopover component receives a popoverOptions property that tells the component how we want to display it, and an isPopoverVisible property bound to a v-if directive that is responsible for showing and hiding the popover.

<template>
  <div>
    <img
      ref="popoverReference"
      width="25%"
      src="./assets/logo.png"
    >
    <BasePopover
      v-if="isPopoverVisible"
      :popover-options="popoverOptions"
    >
      <div class="custom-content">
        <img width="25%" src="./assets/logo.png">
        Vue is Awesome!
      </div>
    </BasePopover>
  </div>
</template>

In the script section of the component, we import our BasePopover component, set the isPopoverVisible flag to false initially, and define the popoverOptions object that will be used to configure the popover on init.

data() {
  return {
    isPopoverVisible: false,
    popoverOptions: {
      popoverReference: null,
      placement: "top",
      offset: "0,0"
    }
  };
}

We set the popoverReference property to null initially because the element that will be the popover trigger does not exist when our parent component is created. We fix that in the mounted lifecycle hook when the component (and the popover reference) gets rendered.

mounted() {
  this.popoverOptions.popoverReference = this.$refs.popoverReference;
}

Now let’s create two methods, openPopover and closePopover that will be responsible for showing and hiding our popover by setting proper value on the isPopoverVisible property.

methods: {
  closePopover() {
    this.isPopoverVisible = false;
  },
  openPopover() {
    this.isPopoverVisible = true;
  }
}

The last thing we need to do in this step is to attach those methods to the appropriate elements in our template. We attach the openPopover method to the click event on our trigger element and the closePopover method to the closePopover event emitted from the BasePopover component when the popover gets destroyed by clicking on the popover overlay.

<template>
  <div>
    <img
      ...
      @click="openPopover"
    >
    <BasePopover
      ...
      @closePopover="closePopover"
    >
      ...
    </BasePopover>
  </div>
</template>

Having this in place, we have our popover showing up when we click on the trigger element and disappearing when we click outside of the popover.

Step 5: Create BasePopoverContent component

It does not look like a popover, though. Sure, it renders the content passed to the BasePopover component, but it does so without the usual popover wrapper and an arrow pointing to the trigger element. We could have included the wrapper in the BasePopover component, but this would have made it less reusable and coupled the popover to a specific template implementation. Our solution allows us to render any template in the popover. We also want to make sure that the component is responsible only for positioning and showing the content.

To make it look like a popover, let’s create a BasePopoverContent component. We need to render two elements in the template:

  • an arrow element with the popper.js x-arrow selector that popper.js needs to properly position the arrow
  • a content wrapper that exposes a slot element in which our content will be rendered
<template>
  <div class="base-popover-content">
    <div class="base-popover-content__arrow" x-arrow/>
    <div class="base-popover-content__body">
      <slot/>
    </div>
  </div>
</template>

Now let’s use our wrapper component in the parent component where we use BasePopover:

<template>
  <div>
    <img
      ref="popoverReference"
      width="25%"
      src="./assets/logo.png"
      @click="openPopover"
    >
    <BasePopover
      v-if="isPopoverVisible"
      :popover-options="popoverOptions"
      @closePopover="closePopover"
    >
      <BasePopoverContent>
        <div class="custom-content">
          <img width="25%" src="./assets/logo.png">
          Vue is Awesome!
        </div>
      </BasePopoverContent>
    </BasePopover>
  </div>
</template>

And, there we go!

You can see the popover animating in and out in the example above. We’ve left animation out of this article for the sake of brevity, but you can check out other popper.js examples for inspiration.

You can see the animation code and working example here.

Let’s look at our requirements and see if we missed anything:

  • Reusability — Pass. We used a slot in the BasePopover component that decouples the popover implementation from the content template. This allows us to pass any content to the component.
  • Dismissibility — Fail. We made it possible to close the popover when clicking outside of it. We still need to make sure we can dismiss the popover by pressing ESC on the keyboard.
  • Positioning — Pass. That’s where popper.js solved an issue for us. It not only gave us positioning superpowers, but also takes care of repositioning the popover when it reaches the edge of the viewport.
  • Interaction — Fail. We have a popover popping in and out, but we do not have any interactions with the popover content yet. As for now, it looks more like a tooltip than a popover and could actually be used as a tooltip when it comes to showing and hiding the element. Tooltips are usually shown on hover, so that’s the only change we’d have to make.

Darn, we failed the interaction requirement. Adding the interaction is a matter of creating a component (or components) that will be placed in the BasePopoverContent slot. In the example, I created a very simple component with a header and text showing a few Vue style guide rules. By clicking on the buttons, we can interact with the popover content and change the rules. When you get to the last rule, the button changes its purpose and serves as a close button for the popover. It’s a lot like the new user welcome screens we see in apps.

We also need to fully meet the dismissibility requirement. We want to hit ESC on the keyboard to close the popover in addition to clicking anywhere outside it. For kicks, we’ll also add an event that proceeds to the next Vue style guide rule when pressing Enter.

We can handle that in the component responsible for rendering the popover content using Vue event key modifiers. To make the events work, we need to make sure that the popover is focused when mounted. To do that, we add a tabindex attribute to the popover content and a ref that allows us to access the element in the mounted hook and call the focus method on it.

// VueTourPopoverContent.vue
<template>
  <div
    class="vue-tour-popover-content"
    ref="vueTourPopoverContent"
    tabindex="-1"
    @keydown.enter="proceedToNextStep"
    @keydown.esc="closePopover"
  >
  ...
</template>
...
<script>
export default {
  ...
  mounted() {
    this.$refs.vueTourPopoverContent.focus();
  }
  ...
}
</script>

Wrapping up

And there we have it: a fully functional popover component we can use anywhere in our app. Here are a few things we learned along the way:

  • Use a popover to expose a small amount of information or functionality. Remember that the content will disappear when the user is finished with it.
  • Consider using popovers instead of temporary views like sidebars. Popovers leave more space for content and are only temporary.
  • Enable a closure behavior that makes sense based on the popover’s function. A popover should be visible only when needed. If it allows the user to make a choice, close the popover as soon as the user makes a decision.
  • Position popovers onscreen with care. A popover’s arrow should always point directly to the element that triggered it and should never cover the trigger element.
  • Display one popover on screen at a time. More than one steals attention.
  • Take care with the popover size. Avoid making it too big, but bear in mind that proper use of padding can make things look nice and clean.

If you don't want to dig too deep into the code and you just need the component as it is, you can try out the npm package version of the component.

Hopefully you will find this component useful in your project!

The post Reusable Popovers to Add a Little Pop appeared first on CSS-Tricks.

Jeremy Keith – Building the Web

Css Tricks - Fri, 08/23/2019 - 5:36am

I really enjoyed this interview with Jeremy Keith on the state of the web, how things have changed in recent years and why he’s a mix of optimistic and nervous for the future.

One thing that caught my attention during the interview more than anything was where Jeremy started discussing how folks think that websites are pretty crummy in general. This reminded me that I cannot count the number of times when someone has said to me “ah, I can’t view this website on my phone.”

We have websites that aren’t responsive! We have websites that litter the UI with advertisements and modals! And we have websites that are slow as all heck just when we need them the most!

Of course folks are going to start complaining about these websites and working around them if they find that this is the case. I’ll even catch myself sending an email to myself when I know that the mobile experience is going to be crummy. Or I’ll Instapaper something because the design of the website is particularly difficult to read. Remember, Reader Mode is the button to beat.

My quick thought on this is that we shouldn’t become sour and pessimistic. We should roll up our sleeves and get to work because clearly there’s much left to do.

Direct Link to ArticlePermalink

The post Jeremy Keith – Building the Web appeared first on CSS-Tricks.

Multiplayer Tic Tac Toe with GraphQL

Css Tricks - Fri, 08/23/2019 - 4:35am

GraphQL is a query language for APIs that is very empowering for front-end developers. As the GraphQL site explains it, you describe your data, ask for what you want, and get predictable results.

If you haven’t worked with it before, GraphQL might be a little confusing to grok at first glance. So, let’s build a multiplayer tic-tac-toe game using it in order to demonstrate how it’s used and what we can do with it.

First thing we need is a backend for our APIs. We’re going to use Hasura GraphQL Engine along with a custom GraphQL server for this tutorial. We’ll look at the queries and mutations that the client side needs to build the game. You can implement this kind of thing in whatever framework you wish, but we’re going with React and Apollo for this tutorial.

Here’s what we’re making:

GitHub Repo

A brief GraphQL primer

GraphQL is a query language for APIs; a language with a syntax that defines a way to fetch data from the server. It works with any kind of API that is backed by a strong system that makes your codebase resilient. Some of the primary characteristics of GraphQL are:

  • The client can ask the server for what queries it supports (check out introspection for more).
  • The client must ask the server for exactly what it wants. It can't ask for something like a wildcard (*) but rather exact fields. For example, to get a user's ID and name, the GraphQL query would be something like: query { user { id name } }
  • Every query is made to a single endpoint and every request is a POST request.
  • Given a query, the structure of the response is enforced. For example, for the above query to get the id and name of a user object, a successful response would be something like: { "data": { "user": { "id": "AUTH:482919", "name": "Jon Doe" } } }

This series of articles is a great place to start if you want to know more about GraphQL.

Why are we using GraphQL, anyway?

We just discussed how GraphQL demands that the client ask the server for exactly what it wants. That means there is no unnecessary data retrieved from the server, unlike REST, where you might receive a huge response even when you only need one field. Getting what we need and only what we need keeps responses speedy and predictable.

The client can ask the server for its schema via introspection. This schema can be used to build queries dynamically using an API explorer like GraphiQL. It also enables linting and auto-completing because every query can be built with and cross-checked against the GraphQL schema. As a front-end developer, this greatly enhances the DX as there is much less human error involved.

Since there is a single endpoint and every request is a POST request, GraphQL can avoid a lot of boilerplate since it doesn't have to track endpoints, request payloads and response types. Response caching is much easier because every query response can be expected to be of a certain structure.

Furthermore, GraphQL has a well-defined spec for implementing real-time subscriptions. You do not have to come up with your own implementation details for building real-time servers. Build a server that complies with GraphQL’s real-time spec and any client can start consuming the real-time GraphQL APIs with ease.

GraphQL Terminology

I will be using some GraphQL terms in this post, so it’s worth covering a few of them here in advance.

  • Query: A GraphQL query is one that simply fetches data from the server.
  • Mutation: This is a GraphQL query that changes something on the server and fetches some data.
  • Subscription: This is a GraphQL query that subscribes the client to some changes on the server.
  • Query variables: These allow us to add parameters to a GraphQL query.
Getting back to the backend

Now that we have a cursory understanding of GraphQL, let’s start with modeling a backend. Our GraphQL backend will be a combination of Hasura GraphQL Engine and a custom GraphQL server. We will not go into the subtleties of the code in this case.

Since Tic Tac Toe is a multiplayer game, there is a need to store state in the database. We will use Postgres as our database and Hasura GraphQL Engine as a GraphQL server that allows us to CRUD the data in Postgres over GraphQL.

Apart from CRUD on the database, we would also want to run some custom logic in the form of GraphQL mutations. We will use a custom GraphQL server for that.

Hasura describes itself quite nicely in its README file:

GraphQL Engine is a blazing-fast GraphQL server that gives you instant, realtime GraphQL APIs over Postgres, with webhook triggers on database events, and remote schemas for business logic.

Going a little deeper, Hasura GraphQL Engine is an open-source server that sits on top of a Postgres database and allows you to CRUD the data over real-time GraphQL. It works with both new and existing Postgres databases. It also has an access control layer that you can integrate with any auth provider. In this post though, we will not implement auth for the sake of brevity.

Let’s start by deploying an instance of Hasura GraphQL Engine to Heroku's free tier that comes with a fresh Postgres database. Go ahead, do it; it is free and you do not need to enter your credit card details :)

Once you deploy, you will land on the Hasura console, which is the admin UI for managing your backend. Note that the URL you are at is your GraphQL Engine URL. Let’s start by creating our required tables.

user

A user table will store our users. To create the table, go to the "Data" tab on top and click on the "Create Table" button.

This table has an id column which is the unique identifier of each user and a name column that stores the user’s name.

board

The board table will store the game boards where all the action happens. We’ll spin up a new board for each game that starts.

Let’s look at the columns of this table:

  • id: A unique UUID for each board that is auto generated
  • user_1_id: The user_id of the first user. This user by default plays X in the game
  • user_2_id: The user_id of the second user. This user by default plays O.
  • winner: This is a text field that is set to X or O once the winner has been decided.
  • turn: This is a text field that can be X or O and it stores the current turn. It starts with X.

Since user_1_id and user_2_id store the user IDs, let’s add a constraint on the board table that ensures user_1_id and user_2_id are present in the user table.

Go to the "Modify" tab in the Hasura board table and add the foreign keys.

Adding the foreign key on user_1_id. We’ll need to add a new foreign key on user_2_id as well.

Now, based on these relationships, we need to create a connection between these tables so that we can query the user information while querying the board.

Go to the "Relationships" tab in Hasura and create relationships called user1 and user2 for user_1_id and user_2_id based on the suggested relations.

move

Finally, we need a move table that stores the moves made by users on a board.

Let’s look at the columns:

  • id: The unique identifier of each move that is auto generated
  • user_id: The ID of the user that made the move
  • board_id: The ID of the board that the move was made on
  • position: The position where the move was made (i.e. 1-9)

Since user_id and board_id are foreign keys to the user and board tables, respectively, we must create those foreign key constraints just like we did above. Next, create the relationships as user for the foreign key on user_id and board for the foreign key on board_id. Then, we’ll go back to the board table's "Relationship" tab and create the suggested relationship to the move table. Call it moves.

We need a custom server

Apart from storing data in the database, we also want to perform custom logic. The logic? Whenever a user makes a move, the move must be validated and made before the turn is switched.

In order to do that, we must run a SQL transaction on the database. I have written a GraphQL server that does exactly that and deployed it on Glitch.
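If you're curious what that transaction could look like inside a custom GraphQL server, here is a rough sketch of a make_move resolver written with Node and the pg library. The table names come from the schema above, but the resolver structure, error messages, and exact SQL are my assumptions, and further validation (such as winner detection and checking that the position is free) is omitted — the deployed Glitch server may differ.

// Hypothetical sketch of a make_move resolver (Node + pg) — not the actual deployed server.
const { Pool } = require("pg");
const pool = new Pool({ connectionString: process.env.POSTGRES_CONNECTION_STRING });

async function makeMove(_, { board_id, position, user_id }) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");

    // Lock the board row so two moves can't race each other
    const { rows } = await client.query(
      "SELECT * FROM board WHERE id = $1 FOR UPDATE",
      [board_id]
    );
    const board = rows[0];
    if (!board || board.winner) {
      throw new Error("Board not found or game already finished");
    }

    const symbol = board.user_1_id === user_id ? "x" : "o";
    if (board.turn !== symbol) {
      throw new Error("Not your turn");
    }

    // Record the move, then flip the turn
    await client.query(
      "INSERT INTO move (user_id, board_id, position) VALUES ($1, $2, $3)",
      [user_id, board_id, position]
    );
    await client.query(
      "UPDATE board SET turn = $1 WHERE id = $2",
      [symbol === "x" ? "o" : "x", board_id]
    );

    await client.query("COMMIT");
    return { success: true };
  } catch (err) {
    await client.query("ROLLBACK");
    return { success: false };
  } finally {
    client.release();
  }
}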

Now we have two GraphQL servers. But GraphQL spec enforces that there must be only one endpoint. For this purpose, Hasura supports remote schemas — i.e. you can provide an external GraphQL endpoint to Hasura and it will merge this GraphQL server with itself and serve the combined schema under a single endpoint. Let’s add this custom GraphQL server to our Hasura Engine instance:

  1. Fork the GraphQL server.
  2. Add an environment variable that is the connection to your Postgres database. To do that, go to https://dashboard.heroku.com, choose your app, go to "Settings" and reveal config vars.

A few more steps from there:

  1. Copy the value for the DATABASE_URL.
  2. Go to the GraphQL server you forked and paste that value in the .env file (POSTGRES_CONNECTION_STRING=<value>).
  3. Click on the "Show Live" button on top and copy the opened URL.

We’ll add this URL to GraphQL engine by adding it as a remote schema. Go to the "Remote Schemas" tab on top and click on the "Add" option.

We are done with setting up our backend!

Let’s work on the front end

I will not be going into the details of front-end implementation since y’all would choose to implement it in the framework of your choice. I’ll go ahead and provide all the required queries and mutations that you would need to build the game. Using these with the front-end framework of your choice will allow you to build a fully functional multiplayer Tic Tac Toe.

Setup

Apollo Client is the go-to library for client-side GraphQL. They have awesome abstractions for React, Vue, Angular, iOS, Android etc. It helps you save a lot of boilerplate and the DX is smooth. You might want to consider using Apollo client over doing everything from scratch.
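For reference, a client setup for this app might look something like the sketch below. It wires up an HTTP link for queries and mutations and a WebSocket link for subscriptions. The package names match the apollo-client 2.x ecosystem that was current at the time; the endpoint URLs are placeholders for your own GraphQL Engine URL.

// A hedged sketch of an Apollo Client setup with subscription support (apollo-client 2.x era).
import ApolloClient from "apollo-client";
import { InMemoryCache } from "apollo-cache-inmemory";
import { HttpLink } from "apollo-link-http";
import { WebSocketLink } from "apollo-link-ws";
import { split } from "apollo-link";
import { getMainDefinition } from "apollo-utilities";

const httpLink = new HttpLink({ uri: "https://your-app.herokuapp.com/v1/graphql" });

const wsLink = new WebSocketLink({
  uri: "wss://your-app.herokuapp.com/v1/graphql",
  options: { reconnect: true }
});

// Route subscriptions over WebSockets, everything else over HTTP
const link = split(
  ({ query }) => {
    const definition = getMainDefinition(query);
    return (
      definition.kind === "OperationDefinition" &&
      definition.operation === "subscription"
    );
  },
  wsLink,
  httpLink
);

export const client = new ApolloClient({ link, cache: new InMemoryCache() });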

Let’s discuss the queries and mutations that the client would need for this game.

Insert user

In the app that I built, I generated a random username for each user and inserted this name into the database whenever they open the app. I also stored the name and generated a user ID in local storage so that the same user does not have different usernames. The mutation I used is:

mutation ($name: String) {
  insert_user (
    objects: {
      name: $name
    }
  ) {
    returning {
      id
    }
  }
}

This mutation inserts an entry into the user table and returns the generated id. If you observe the mutation closely, it uses $name. This is called a query variable. When you send this mutation to the server along with the variables { "name": "bazooka"}, the GraphQL server would replace $name from the query variables, which in this case would be "bazooka."

If you wish, you can implement auth and insert into this table with the username or the nickname.
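As a concrete illustration of how those query variables get sent from the client, here is roughly what that call could look like with Apollo Client. The client instance, file path, and local storage shape are my assumptions for the sake of the example, not part of the original app.

// Hypothetical usage: sending the insert_user mutation with query variables via Apollo Client.
import gql from "graphql-tag";
import { client } from "./apollo"; // the client instance sketched earlier (assumed path)

const INSERT_USER = gql`
  mutation($name: String) {
    insert_user(objects: { name: $name }) {
      returning {
        id
      }
    }
  }
`;

async function createUser(name) {
  const { data } = await client.mutate({
    mutation: INSERT_USER,
    variables: { name }
  });
  const userId = data.insert_user.returning[0].id;
  // Remember the user locally so they keep the same identity across visits
  localStorage.setItem("user", JSON.stringify({ id: userId, name }));
  return userId;
}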

Load all boards

To load all the boards, we run a GraphQL subscription:

subscription {
  board (
    where: {
      _and: {
        winner: {
          _is_null: true
        },
        user_2_id: {
          _is_null: true
        }
      }
    }
    order_by: {
      created_at: asc
    }
  ) {
    id
    user1 {
      id
      name
    }
    user_2_id
    created_at
    winner
  }
}

This subscription is a live query that returns the id, user1 (along with their id and name from the relationship), user_2_id, winner and created_at. We have set a where filter that fetches only the boards without a valid winner and where user_2_id is null, which means the board is open for a player to join. Finally, we order these boards by their created_at timestamp.
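On the client, consuming a live query like this is mostly a matter of handing it to Apollo. Here is a rough sketch using react-apollo's Subscription render-prop component — the component name and markup are assumptions on my side:

// Hypothetical sketch: rendering the open boards from the subscription with react-apollo.
import React from "react";
import gql from "graphql-tag";
import { Subscription } from "react-apollo";

const OPEN_BOARDS = gql`
  subscription {
    board(
      where: { _and: { winner: { _is_null: true }, user_2_id: { _is_null: true } } }
      order_by: { created_at: asc }
    ) {
      id
      user1 { id name }
      user_2_id
      created_at
      winner
    }
  }
`;

const BoardList = () => (
  <Subscription subscription={OPEN_BOARDS}>
    {({ data, loading }) => {
      if (loading) return <p>Loading boards…</p>;
      return (
        <ul>
          {data.board.map(b => (
            <li key={b.id}>Board created by {b.user1.name}</li>
          ))}
        </ul>
      );
    }}
  </Subscription>
);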

Creating a board

Users can create boards for other people to join. To do that, they have to insert an entry into the boards table.

mutation ($user_id: Int) {
  insert_board (
    objects: [{
      user_1_id: $user_id,
      turn: "x",
    }]
  ) {
    returning {
      id
    }
  }
}

Joining a board

To join a board, a user needs to update the user_2_id of the board with their own user_id. The mutation looks like:

mutation ($user_id: Int, $board_id: uuid!) {
  update_board (
    _set: {
      user_2_id: $user_id
    },
    where: {
      _and: {
        id: {
          _eq: $board_id
        },
        user_2_id: {
          _is_null: true
        },
        user_1_id: {
          _neq: $user_id
        }
      }
    }
  ) {
    affected_rows
    returning {
      id
    }
  }
}

In the above GraphQL mutation, we are setting the user_2_id of a board to a user_id. We have also added additional checks such that this action succeeds only if the joining player is not the creator and the board is not already full. After the mutation, we ask for the number of affected rows.

In my app, after joining a board, I would redirect users to /play?board_id=<board_id>.

Subscribing to the board

When both users are in the game, we need real-time updates about the moves of each player. So we must subscribe to the given board that is being played on and also to its moves (through the relationship).

subscription($board_id: uuid!) {
  board: board_by_pk (id: $board_id) {
    id
    moves (order_by: { id: desc }) {
      id
      position
      user {
        id
        name
      }
      user_id
    }
    user1 {
      id
      name
    }
    user2 {
      id
      name
    }
    turn
    winner
  }
}

The above query subscribes the client to the board that is being played. Whenever a new move is played, the client will be updated with it.

Making a move

To make a move, we will be using the make_move mutation from our custom GraphQL server.

mutation (
  $board_id: String!,
  $position: Int!,
  $user_id: Int!
) {
  make_move (
    board_id: $board_id,
    position: $position,
    user_id: $user_id
  ) {
    success
  }
}

This mutation takes a board_id, position and user_id from query variables. It validates the move, makes the move and also switches the turn. In the end, it returns whether this transaction was successful or not.

Tic Tac Whoa!

And now you have a working game of Tic Tac Toe! You can implement any real-time multiplayer game with GraphQL subscriptions using the concepts we covered. Let me know if you have any questions and I would be happy to answer.

The post Multiplayer Tic Tac Toe with GraphQL appeared first on CSS-Tricks.

Weekly Platform News: Improving UX on Slow Connections, a Tip for Writing Alt Text and a Polyfill for the HTML loading attribute

Css Tricks - Thu, 08/22/2019 - 1:13pm

In this week's roundup, how to determine a slow connection, what we should put into alt text for images, and a new polyfill for the HTML loading attribute, plus more.

Detecting users on slow connections

Algolia is using the Network Information API (see the API’s Chrome status page) to detect users on slow connections — about 9% of their users — and make the following adjustments to ensure a good user experience:

  • increase the request timeout when the user performs a search query (a static timeout can cause a false positive for users on slow connections)
  • show a notification to the user while they’re waiting for search results (e.g., "You are on a slow connection. This might take a while.")
  • request fewer search results to decrease the total response size
  • debounce queries (e.g., don’t send queries at every keystroke)
navigator.connection.addEventListener("change", () => {
  // effective round-trip time estimate in ms
  let rtt = navigator.connection.rtt;

  // effective connection type
  let effectiveType = navigator.connection.effectiveType;

  if (rtt > 500 || effectiveType.includes("2g")) {
    // slow connection
  }
});

(via Jonas Badalic)

Alt text should communicate the main point

The key is to describe what you want your audience to get out of the image rather than a simple description of what the image is.

<!-- BEFORE -->
<img alt="Graph showing the use of the phrase 'Who you gonna call?' in popular media over time.">

<!-- AFTER -->
<img alt="Graph illustrating an 800% increase in the use of the phrase 'Who you gonna call?' in popular media after the release of Ghostbusters on June 7th, 1984.">

(via Caitlin Cashin)

In other news...
  • There is a new polyfill for the HTML loading attribute that is used by wrapping the images and iframes that you want to lazy-load in <noscript> elements (via Maximilian Franzke).
  • WeChat, the Chinese multi-purpose app with over one billion monthly active users, hosts over one million "mini programs" that are built in a very similar fashion to web apps (essentially CSS and JavaScript) (via Thomas Steiner).
  • Microsoft has made 24 new (online) voices from 21 different languages available to the Speech Synthesis API in the preview version of Edge ("these voices are the most natural-sounding voices available today") (via Scott Low)

Read more news in my new, weekly Sunday issue. Visit webplatform.news for more information.

The post Weekly Platform News: Improving UX on Slow Connections, a Tip for Writing Alt Text and a Polyfill for the HTML loading attribute appeared first on CSS-Tricks.

Advice for Technical Writing

Css Tricks - Thu, 08/22/2019 - 6:30am

In advance of a recent podcast with the incredible technical writer and Smashing Magazine editor-in-chief Rachel Andrew, I gathered up a bunch of thoughts and references on the subject of technical writing. So many smart people have said a lot of smart things over the years that I thought I'd round up some of my favorite advice and sprinkle in my own experiences, as someone who has also done his fair share of technical writing and editing.

There is a much larger world of technical writing out there. My experience and interest is largely about web technology and blogging, so I'm coming at it from that angle and many of the people I quote throughout are in that same boat.

Picking something to write about

If you want to write for CSS-Tricks and you ask me what you should write about, I'm probably going to turn that question around on you. It's likely I don't know you well enough to pick the perfect topic for you. More importantly, what I really want you to write about is something that is personal and important to you. Articles rooted in recent excitement about a particular idea or technology always come out better than dictated assignments.

My best advice:

Write the article you wish you found when you googled something.

— Chris Coyier (@chriscoyier) October 30, 2017

That said, I do maintain a list of ideas specifically for this site. Any writing can be done on assignment and sometimes that elicits the spark needed for something great and on-target for the audience of a site.

Write at the moment of learning

The moment you learn something is the best time to write. It's fresh in your mind and you can remember what it was like before you understood it. You also understand what it takes to go from not knowing to knowing it. That's the journey you need to take people on.

If you don't have time, at least try to capture the vibe wherever you save your ideas. Don't just write down "dataset." Write down some quick notes like, "I didn't realize DOM elements had a .dataset property for getting and setting data attributes. I wonder if it's better to use that than getAttribute." That way, you'll be able to reload that realization in your brain when you revisit the idea.

What have you learned in just the last few days? I bet there is a blog post there. Manuel Matuzovic does an excellent job of putting this into practice with the "Today I Learned" (TIL) section of his blog.

Comparing technologies is an underused format

Here's some advice Rachel shared that I don't see taken advantage of nearly enough:

There is a sweet spot for writing technical posts and tutorials. Write for the professional who hasn't had time to learn that thing yet, and link it back to things they already know. For example explaining a modern JS technique to someone who knows jQuery.

— Rachel Andrew (@rachelandrew) February 20, 2019

Tell me about how this new framework relates to Backbone. Tell me how this CMS relates to WordPress. Tell me how some technology connects to another technology that is safe to assume is more widely understood.

Technology changes a lot, but what technology does doesn't change all that much.

Careful with that intro

The main comment I add on tutorials I review is to ask for an intro that describes what the tutorial is about. I'm 600 words in and still don't know what the tutorial is about and who it is for. #writing

— Rachel Andrew (@rachelandrew) January 5, 2018

Not getting to the point right at the top of technical articles is a dang epidemic. The start of a technical article is not the time to wax poetic or drop some cliché light philosophy like, "Web design sure has changed a lot." You don't have to be boring, but you do need to tell me what this article is going to get into and who it is for.

Brian Rinaldi says:

“Does the title make the article sound interesting?” If the title interests a reader, they’ll typically read the intro and decide, “Is it worth my time reading the whole thing?” A common mistake I see in a lot of technical posts is either too much introduction or, alternatively, far too little.

A single well-written paragraph can set the stage for a technical blog post.

Careful with the title, too

I remember a conversation from years ago with content strategist Erin Kissane where she strongly advised me to choose boring titles for everything. Not just the title of blog posts, but for everything, including the names of sections, tags, and even subheadings within posts.

Here's the thing with boring: it works. Boring isn't the right word either; it's clarity. The world is full of clickbait, and maybe that's effective in some genres, but technical blogging isn't one of them.

A nice clear blog post title: Getting Started with GraphQL, Phoenix, and React by Margaret Williford

A terrible version of the same: Build a web app with modern technologies in 30 minutes!

What's a web app? What technologies? What's modern about them? What's with the weird time limit?

SEO matters and Margaret's article is going to do a lot better in both the short and long term with that clear title.

The outro

Ben Halpern says that the next most important thing after the intro is:

[...] the last paragraph.

People don't read top-to-bottom the moment when they arrive, so there is a good chance it's the second paragraph people read. Personally, I find the beginning a lot more important than the ending, but there is a certain art to the ending as well.

A lot of people just <h2>Conclusion</h2> and write a few words about what was just covered. Honestly, I don't hate that. It falls into this time-tested pattern:

  1. Tell 'em what you're gonna tell 'em
  2. Tell 'em
  3. Tell 'em what you told 'em

That helps your message sink in and brings things full circle. Technical blogging isn't terribly different from marketing in that sense. You're trying to get people to understand something and you're using whatever tricks you need to get the job done. A little repetition is a classic trick.

Make it scannable

Brian Rinaldi says:

[...] the wall of text can be easily be made less intimidating and appear much more visually appealing through the use of visual elements that break it up. The easiest is to simply place section subheadings throughout your post.

I agree: subheadings are probably the easiest and most powerful trick for scannability.

Other methods:

  • Lists: Like what I'm doing right now! I didn't have to use a list. Paragraphs might have worked as well, but a list makes contextual sense here and is probably tricking some of you into reading this. Or at least scanning it and getting some key points in the process.
  • Images: Please make them relevant and contextual. Skip the funny GIF. Screenshots are often great in a technical context because they provide a visual to what might otherwise be a difficult concept to explain. I like Unsplash for thematic imagery, too, but you can do better than a random picture of trees, a woman drinking coffee, or a random rack of servers.
  • Illustrations: The abstract nature of an illustration is your friend. It tricks people into having a quick thought about what you are describing. They generally take a little work to pull off, but the pay-off for you and the reader can be huge.
  • Videos: You can't simply drop a 42-minute video in the middle of a blog post, but if you can make it clear that you are demonstrating something visual and the video is less than a minute, it can be a powerful choice. You've always got <video autoplay muted loop controls> as well to make it GIF-like.
  • Blocks of code: Technical blog posts are often about code. Don't avoid code, embrace it. I love how Dan Abramov sprinkles in code blocks in blog posts not so much to demonstrate syntax and setup, but to make points. I'm going to recommend Embedded Pens as well, because they're fully interactive demoes in addition to serving as code blocks.
  • Tables: Don't forget about tabular data! Presenting information (particularly data or definitions) in a table makes it more understandable than it would have been any other way.
  • Collapsing sections: The <details>/<summary> elements make quick work of collapsible content. If you've got a large section of content that is good to be there but doesn't need to be read by everyone, collapse it! Our reference guide of media queries for devices is a decent example of this in action.

Whatever you pick here, you should pick things that help enhance the points you're making even if in some ways it feels like trickery. It's trickery to help readability, and that's a good thing.

My favorite technique? A little bit of design. Use design principles like spacing, color, and alignment to help the readability of posts. We even go so far as to art direct some posts where design can enhance the central point being made.

The point here isn't to do away with walls of text altogether. Sometimes that's exactly what's needed because you're asking a reader to deeply read a passage they would otherwise skim and miss the point of. However, more often than not, a post can strongly benefit from some healthy use of white space.

Use an active voice

I find this one a little tricky to wrap my head around, but Katy Decorah has a great presentation about technical writing that explains this point in great detail. It's kinda like using present tense and stating a point directly rather than passively.

Passive: "After the file is downloaded..."
Active: "After you download the file..."

Passive: "The request is processed by the server."
Active: "The server processes the request."

Here's another clear explanation with examples by Neal Whitman, read by Mignon Fogarty (Grammar Girl):

The key point is made about a minute into the recording.

There are lots of words to avoid

"Just" is a big one. Brad Frost:

“Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources. “Just” is a dangerous word.

There are plenty of others to avoid, which I've written about before. Read the comments in that last link. Long story short: there are lots of words that do more harm than good in technical writing, not only because they can come across as preachy, but because they usually don't help the sentences where they're used. Try taking "just" out of any sentence. The sentence will still make sense without it.

Simply · Clearly · Just · Of course · Everyone knows · Easy · However · So · Basically · Turns out · In order to · Very

Be mindful of your tone

Tone is concerned with how you say something in consideration of the context. For example, you wouldn't deliver bad news to someone with a happy tone. The way you express yourself ought to be aligned with the situation.

This is our tone goal on this site:

Friendly. Authoritative. Welcoming. We're all in this together. Flexible (nondogmatic about ideas). Thankful.

MailChimp has a very extensive guide to theirs.

It's worth pointing out that tone and voice are separate concepts. I like to think of voice as never changing (it's your personality which is a part of who you are) while tone changes to suit the context. In other words, you can have a professional voice while communicating in a friendly tone.

I don't think there is one true tone that is perfect for technical writing, but since the high-level goal of any technical writing is to help someone understand something complicated, you can use tone to help. A joke in the middle of a set of intricate steps is confusing. A bunch! of! excitement! about something might feel out of place or disingenuous, but being drab and lifeless is worse. I'd say if you're writing under your own name, let's feel a little bit of your personality as long as it's not at the cost of clarity. If you're writing under a brand, match what they have established whether it has been codified or not.

Careful about length

The general tendency in technical writing is to write too much rather than too little. Wade Christensen:

Whether trained by school assignments with word minimums or just uncritical, most of us write too much. Beyond approaching each draft with a ruthless cutting mentality, there are several ways to write short from draft one.

Word limits can help, even if they're self-imposed.

I heard from a fledgling editor recently who struggled with his writers submitting posts with high word counts, so he suggested they keep it to 1000-1500 as a guideline and that seemed effective. This post is roughly double the high end there, for comparison.

The real solution, if the resources are there, is ruthless editing.

I personally don't find that writing too long is the only issue. I've had just as many occurrences of writers going too short and not digging into the topic deep enough. I don't like focusing on the length; I like focusing on the clarity of the delivery and usefulness of the content itself.

Side note: Breaking up a post into multiple parts (as separate posts in a series) is not a solution for posts that are too long. In fact, it can exacerbate the problem. Do that only if the different parts are thematically different and can stand alone without the other parts.

Don't stop yourself from writing

There is an invisible force, built from fear, that keeps a lot of people away from technical blogging. "Meh, everybody already knows this," you might think. (They don't). "What if I'm wrong and someone calls me out?" (You aren't wrong if what you're doing is working for you.)

There can still be blockers even if you overcome those fears and start putting words to screen. Here's Max Böck:

There is a thing that happens to me while writing. I start with a fresh idea, excited to shape it into words. But as time passes, I lose confidence.

The trick for Max is not to wait too long and to ignore feelings holding you back:

I’ll publish something as soon as I feel confident that all the important points I want to get across are there. I try to ignore the voice screaming “it’s not ready” just for long enough to push it online.

Jeremy Keith goes so far to say we shouldn't even keep drafts:

I think keeping drafts can be counterproductive. The problem is that, once something is a draft rather than a blog post, it’s likely to stay a draft and never become a blog post. And the longer something stays in draft, the less likely it is to ever see the light of day.

The chances that your writing helps someone is pretty high! Matthias Ott:

Even the smallest post can help someone else out there.

Think you're too inexperienced? You're probably not, but even if you were, a perspective from someone with less experience is still useful. Ali Spittel:

If you have a blog post that contains mostly correct information, or at least your interpretation of the topic, then you're experienced enough. There are lots of excellent posts out there from the perspective of newbies, and they're really important!

Fear is a real thing in writing and dealing with it can be debilitating. While it's primarily geared toward creative writing, The War of Art by Stephen Pressfield is a good read to help break through the fear.

There is no one perfect style

We each have our own unique perspectives and writing styles. One writing style might be more approachable to some, and can therefore help and benefit a large (or even small) number of people in ways you might not expect.

...says Sara Soueidan. She continues:

Just write.

Even if only one person learns something from your article, you’ll feel great, and that you’ve contributed — even if just a little bit — to this amazing community that we’re all constantly learning from.

Technical blog posts don't have to be devoid of creativity. You could create a wonderful technical blog post that is an annotated chat room conversation between two developers learning from each other. Or a blog post that is a series of videos that build on each other.

The more introductory, the higher the bar

The web is saturated with beginner-rated and surface-level blog posts. There's a sea of crash courses, 101s, and intros out there. You've gotta knock it out of the park if you want to stand out from the pack and be useful.

There is no particular change in tone necessarily for a beginner-focused post. You don't need to do the equivalent of talking slowly or talking down. You only need to be clear, and clarity is valuable to readers at any skill level, not to mention appreciated by them as well. A very advanced programmer can and will appreciate the clarity in a technical blog post even if it's something they already understand.

But the bar isn't that high in general

You don't need a decade of experience to write a blog post. I'd say it's closer to a day of experience, a desire to write, and having something to say. I think you'd be surprised at how little you need to do to make a blog post stand out and be read. Put in some effort, make clear points, focus on readability, and you will do well.

I hope the advice in this post helps!

Abstraction is helpful, but real-world examples are sometimes better

Christine writes:

It’s one thing to describe a high-level concept, and another to explain or illustrate how that concept applies to the real world. In technical writing, you’ll often be covering complex or hard-to-understand subjects, so it’s even more important to use a well-placed example or two to showcase why your topic matters, or how it relates to the real world.

I find myself pushing back on code that is too abstract more than I push back on code that is too focused on a real-world use case. I'd rather see ["Charles Adok", "Samantha Frederick"] than ["foo", "bar"] or [a, b] any day, but more importantly, what is then done with that data to make it feel like a relatable programming scenario.

But avoid real-world examples that come at the cost of clarity. If abstraction is useful to drive a complex point home without getting lost in the details, so be it.

Blogging opens doors

Everyone I've ever met who had ever actively blogged has said that blogging has had a positive impact on their career. Besides being a public demonstration of your ability to think and present ideas, it helps you understand things better. To teach is to learn.

I'd attribute my own blogging as the biggest contributor to any success I've had. Here's Khoi Vinh, a designer ten times more successful than I'll ever be:

It’s hard to overstate how important my blog has been, but if I were to try to distill it down into one word, it would be: “amplifier.”

You get better at what you do.

There is no way around it: practice makes you better. The expectations around practice are sometimes very clear and culturally ingrained. In order to get better at playing the piano, you take piano lessons and practice. We all know this. But people also say "Oh, I'm a terrible cook," as if cooking as a skill is somehow fundamentally different than playing the piano and doesn't require the same amount of learning and practice.

You get better at writing by writing more. That is, writing with stakes. Writing and then publicly publishing what you write such that people read it.

You can go to school for writing. You could get a writing coach. My thinking is nothing teaches better than writing often. Whatever it is you sink time into is what you end up getting good at. Is 10,000 hours a good framework for you? Go with it. Heck, I find even people that sit around watching a lot of TV end up being pretty damn good at watching TV.

Your voice alone < A story with context < Stories including others < Research and data along with stories including others

An article where you just say some stuff is OK. You're allowed to say stuff.

But you can do better.

An article where you tell a true story about how something worked for you is better. Context! Now we can better understand where you are coming from when you say your stuff. Plus everybody likes a story.

An article where you combine that with quoting other people's writing and stories is even better. Now you're painting a larger picture and helping validate what you're saying. Context and flavor!

An article where you combine all that with research and data is the best. Now you're being personal, acknowledging a world outside yourself, layering in context, and avoiding being too anecdotal. Kapow! Now you're writing!

Are you pitching?

Read what the site says about guest writing. Here's ours.

Not to scare you off, but 90% of submissions are garbage. Maybe 75% is outright spam and another 15% are people that clearly didn't read anything we had to say about guest posting and are way off base. I can usually tell from the quality of writing in the email itself if they'll be a good guest blogger.

I say things like that, and then feel compelled to remind you the bar isn't that high.

Are there any useful tools?

There probably are, but I don't wanna link you off to tools I can't vouch for. All I use is Dropbox Paper for collaborative writing because the sharing model is easy and allows for co-editing and commenting. Plus Grammarly, because it catches a ton of mistakes as you go.

📝🎉

The post Advice for Technical Writing appeared first on CSS-Tricks.

Navbar Nudging on @keyframers

Css Tricks - Thu, 08/22/2019 - 4:18am

I got to be the featured guest over on The Keyframers the other day. We looked at a Dribbble shot by Björgvin Pétur Sigurjónsson and then slowly built it, taking some purposeful detours along the way to discuss various tech.

We start by considering doing it entirely in CSS, then go for some light JavaScript to alter some data attributes as state, then ultimately end up using flipping.

This is where we ended up:

See the Pen
Navbar Nudging w/ Chris Coyier | Three Person Collaborative Animation Tutorial | @keyframers 2.14.0
by @keyframers (@keyframers)
on CodePen.

The video:

(My audio goes from terrible to good at about 12 minutes.)

Other takes!

Some of our Animigos made their own fantastic versions of this animation!

➡️ @steeevg: https://t.co/ZP5RxJcAAa
➡️ @mariod: https://t.co/PAFiGyZzGs

Have another solution in mind?
Give it a shot and share your results!

— the @keyframers (@keyframers) August 15, 2019

The post Navbar Nudging on @keyframers appeared first on CSS-Tricks.

Using requestAnimationFrame with React Hooks

Css Tricks - Wed, 08/21/2019 - 4:05am

Animating with requestAnimationFrame should be easy, but if you haven’t read React’s documentation thoroughly then you will probably run into a few things that might cause you a headache. Here are three gotcha moments I learned the hard way.

TLDR: Pass an empty array as a second parameter for useEffect to avoid it running more than once and pass a function to your state’s setter function to make sure you always have the correct state. Also, use useRef for storing things like the timestamp and the request’s ID.

useRef is not only for DOM references

There are three ways to store variables within functional components:

  1. We can define a simple const or let whose value will always be reinitialized with every component re-rendering.
  2. We can use useState whose value persists across re-renderings, and if you change it, it will also trigger re-rendering.
  3. We can use useRef.

The useRef hook is primarily used to access the DOM, but it’s more than that. It is a mutable object that persists a value across multiple re-renderings. It is really similar to the useState hook except you read and write its value through its .current property, and changing its value won’t re-render the component.

For instance, the example below will always show 5 even if the component is re-rendered by its parent.

function Component() {
  let variable = 5;

  setTimeout(() => {
    variable = variable + 3;
  }, 100)

  return <div>{variable}</div>
}

...whereas this one will keep increasing the number by three and keeps re-rendering even if the parent does not change.

function Component() {
  const [variable, setVariable] = React.useState(5);

  setTimeout(() => {
    setVariable(variable + 3);
  }, 100)

  return <div>{variable}</div>
}

And finally, this one returns five and won’t re-render. However, if the parent triggers a re-render then it will have an increased value every time (assuming the re-render happened after 100 milliseconds).

function Component() {
  const variable = React.useRef(5);

  setTimeout(() => {
    variable.current = variable.current + 3;
  }, 100)

  return <div>{variable.current}</div>
}

If we have mutable values that we want to remember at the next or later renders and we don’t want them to trigger a re-render when they change, then we should use useRef. In our case, we will need the ever-changing request animation frame ID at cleanup, and if we animate based on the time passed between cycles, then we need to remember the previous animation’s timestamp. These two variables should be stored as refs.

The side effects of useEffect

We can use the useEffect hook to initialize and cleanup our requests, though we want to make sure it only runs once; otherwise it’s going to end up creating, canceling and re-creating the animation frame request at every render. Here’s a working, but bad example:

function App() {
  const [state, setState] = React.useState(0)
  const requestRef = React.useRef()

  const animate = time => {
    // Change the state according to the animation
    requestRef.current = requestAnimationFrame(animate);
  }

  // DON’T DO THIS
  React.useEffect(() => {
    requestRef.current = requestAnimationFrame(animate);
    return () => cancelAnimationFrame(requestRef.current);
  });

  return <div>{state}</div>;
}

Why is it bad? If you run this, the useEffect will trigger the animate function, which both changes the state and requests a new animation frame. It sounds good, except that the state change will re-render the component by running the whole function again, including the useEffect hook, which will first cancel the request made by the animate function in the previous cycle (as a cleanup) and then spin up a new request. This ultimately replaces the request made by the animate function and is completely unnecessary. We could avoid this by not spinning up a new request in the animate function, but that still wouldn't be so nice. It would still leave us with an unnecessary cleanup every round, and if the component re-renders for some other reason — like the parent re-renders it or some other state has changed — then the unnecessary cancelation and request re-creation would still happen. It is a better pattern to only initialize requests once, keep them spinning by the animate function, and then clean up once when the component unmounts.

To make sure the useEffect hook runs only once, we can pass an empty array as a second argument to it. Passing an empty array has a side effect, though, which keeps us from having the correct state during the animation. The second argument is a list of changing values that the effect needs to react to. We don’t want to react to anything — we only want to initialize the animation — hence we have the empty array. But React will interpret this to mean that this effect doesn’t have to be up to date with the state. And that includes the animate function because it was originally called from the effect. As a result, if we try to get the value of the state in the animate function, it will always be the initial value. If we want to change the state based on its previous value and the time passed, then it probably won’t work.

function App() {
  const [state, setState] = React.useState(0)
  const requestRef = React.useRef()

  const animate = time => {
    // The 'state' will always be the initial value here
    requestRef.current = requestAnimationFrame(animate);
  }

  React.useEffect(() => {
    requestRef.current = requestAnimationFrame(animate);
    return () => cancelAnimationFrame(requestRef.current);
  }, []); // Make sure the effect runs only once

  return <div>{state}</div>;
}

The state’s setter function also accepts a function

There’s a way to use our latest state even if the useEffect hook locked our state to its initial value. The setter function of the useState hook can also accept a function. So instead of passing a value based on the current state as you probably would do most of the time:

setState(state + delta)

... you can also pass on a function that receives the previous value as a parameter. And, yes, that’s going to return the correct value even in our situation:

setState(prevState => prevState + delta)

Putting it all together

Here’s a simple example to wrap things up. We’re going to put all of the above together to create a counter that counts up to 100 then restarts from the beginning. Technical variables that we want to persist and mutate without re-rendering the whole component are stored with useRef. We made sure useEffect only runs once by passing an empty array as its second parameter. And we mutate the state by passing on a function to the setter of useState to make sure we always have the correct state.
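For reference, here's a sketch of how those pieces might fit together (it may not match the embedded demo line for line; the 0.01 speed factor and the 100 cap are just illustrative values):

function App() {
  const [count, setCount] = React.useState(0)

  // Mutable values that shouldn't trigger re-renders live in refs
  const requestRef = React.useRef()
  const previousTimeRef = React.useRef()

  const animate = time => {
    if (previousTimeRef.current !== undefined) {
      const deltaTime = time - previousTimeRef.current
      // Pass a function to the setter so we always work with the latest state
      setCount(prevCount => (prevCount + deltaTime * 0.01) % 100)
    }
    previousTimeRef.current = time
    requestRef.current = requestAnimationFrame(animate)
  }

  React.useEffect(() => {
    requestRef.current = requestAnimationFrame(animate)
    return () => cancelAnimationFrame(requestRef.current)
  }, []) // Run once on mount, clean up once on unmount

  return <div>{Math.round(count)}</div>
}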

See the Pen
Using requestAnimationFrame with React hooks
by Hunor Marton Borbely (@HunorMarton)
on CodePen.

Update: Taking the extra mile with a custom Hook

Once the basics are clear we can also go meta with Hooks, by extracting most of our logic into a custom Hook. This will have two benefits:

  1. It greatly simplifies our component, hiding technical variables that are related to the animation, but not to our main logic
  2. Custom Hooks are reusable. If you need an animation in another component you can simply use it there as well

Custom Hooks might sound like an advanced topic at first, but ultimately we just move a part of our code from our component to a function, then call that function in our component just like any other function. As a convention, a custom Hook's name should start with the use keyword and the rules of Hooks apply, but otherwise, they are just simple functions that we can customize with inputs and that might return something.

In our case to make a generic Hook for requestAnimationFrame we can pass on a callback that our custom Hook will call at every animation cycle. This way our main animation logic will stay in our component, but the component itself will be more focused.
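As a sketch, such a Hook could look something like this (the name useAnimationFrame and its callback parameter are just illustrative choices, not necessarily what the demo below uses):

const useAnimationFrame = callback => {
  const requestRef = React.useRef()
  const previousTimeRef = React.useRef()

  const animate = time => {
    if (previousTimeRef.current !== undefined) {
      // Hand the time since the last frame to the component's callback
      callback(time - previousTimeRef.current)
    }
    previousTimeRef.current = time
    requestRef.current = requestAnimationFrame(animate)
  }

  React.useEffect(() => {
    requestRef.current = requestAnimationFrame(animate)
    return () => cancelAnimationFrame(requestRef.current)
  }, []) // Initialize once, clean up on unmount
}

// Usage: the main animation logic stays in the component
function App() {
  const [count, setCount] = React.useState(0)

  useAnimationFrame(deltaTime => {
    setCount(prevCount => (prevCount + deltaTime * 0.01) % 100)
  })

  return <div>{Math.round(count)}</div>
}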

See the Pen
Using requestAnimationFrame with custom React Hook
by Hunor Marton Borbely (@HunorMarton)
on CodePen.

The post Using requestAnimationFrame with React Hooks appeared first on CSS-Tricks.

Other Ways to SPAs

Css Tricks - Wed, 08/21/2019 - 4:04am

That rhymed lolz.

I mentioned on a podcast the other day that I sorta think WordPress should ship with Turbolinks. It's a rather simple premise:

  1. Build a server-rendered site.
  2. Turbolinks intercepts clicks on same-origin links.
  3. It uses AJAX for the HTML of the new page and replaces the current page with the new one.

In other words, it turns a server-rendered app into a "Single Page App" (SPA) by way of adding this library.
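The setup is famously small. Assuming you're pulling Turbolinks in from npm, wiring it up is roughly this:

// Bundle this with the rest of your site's JavaScript
import Turbolinks from "turbolinks"

// From here on, same-origin link clicks are intercepted and the page body
// is swapped out via AJAX instead of a full page refresh
Turbolinks.start()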

Why bother? It can be a little quicker. Full page refreshes can feel slow compared to an SPA. Turbolinks is kinda "old" technology, but it's still perfectly useful. In fact, Starr Horne recently wrote a great blog post about migrating to it at Honeybadger:

Honeybadger isn't a single page app, and it probably won't ever be. SPAs just don't make sense for our technical requirements. Take a look:

  • Our app is mostly about displaying pages of static information.
  • We crunch a lot of data to generate a single error report page.
  • We have a very small team of four developers, and so we want to keep our codebase as small and simple as possible.

... There's an approach we've been using for years that lets us have our cake and eat it too ... and its big idea is that you can get SPA-like speed without all the JavaScript.

That's what I mean about WordPress. It's very good that it's server-rendered by default, but it could also benefit from SPA stuff with a simple approach like Turbolinks. You could always add it on your own though.

Just leaving your site server-rendered isn't a terrible thing. If you keep the pages light and resources cached, you're probably fine.

Chrome has been exploring some new ideas in this space as well.

I don't doubt that this server-rendered-but-enhanced-into-an-SPA idea is part of what has helped popularize approaches like Next and Gatsby.

I don't want to discount the power of a "real" SPA approach. The network is the main offender for slow websites, so if an app is architected to shoot across relatively tiny bits of data (rather than relatively heavy chunks of HTML), then calculate the smallest amount of the DOM it needs to re-render and do just that, that's pretty awesome. Well, that is, until the bottleneck becomes JavaScript itself.

It's just unfortunate that an SPA approach is often done at the cost of doing no server-side rendering at all. And similarly unfortunate is that the cost of "hydrating" a server-rendered app to become an SPA comes at the cost of tying up the main thread in JavaScript.

Damned if you do. Damned if you don't.

Fortunately, there is a spectrum of rendering choices for choosing an appropriate architecture.

The post Other Ways to SPAs appeared first on CSS-Tricks.

Getting Netlify Large Media Going

Css Tricks - Tue, 08/20/2019 - 11:51am

I just did this the other day so I figured I'd blog it up. There is a thing called Git Large File Storage (Git LFS). Here's the entire point of it: it keeps large files out of your repo directly. Say you have 500MB of images on your site and they kinda need to be in the repo so you can work with them locally. But that sucks because someone cloning the repo needs to download a ton of data. Git LFS is the answer.

Netlify has a product on top of Git LFS called Large Media. Here's the entire point of it: in addition to making it all easier to set up and providing a place to put those large files, once you have your files in there, you can do URL query param-based resizing on them, which is very useful. I'm all about letting computers do my image sizing for me.

You should probably just read the docs if you're getting started with this. But I ran into a few snags so I'm jotting them down here in case this ends up useful.

You gotta install stuff

I'm on a Mac, so these are the things I did. You'll need:

  1. Git LFS itself: brew install git-lfs
  2. Netlify CLI: npm install netlify-cli -g
  3. Netlify Large Media add-on for the CLI: netlify plugins:install netlify-lm-plugin and then netlify lm:install
"Link" the site

You literally have to auth on Netlify and that connects Netlify CLI to the site you're working on.

netlify link

It will create a .netlify/state.json file in your project like this:

{ "siteId": "xxx" } Run setup netlify lm:setup

This creates another file in your project at .lsfconfig:

[lfs]
  url = https://xxx.netlify.com/.netlify/large-media

You should commit them both.

"Track" all your images

You'll need to run more terminal commands to tell Netlify Large Media exactly which images should be on Git LFS. Say you have a bunch of PNGs and JPGs, you could run:

git lfs track "*.jpg" "*.png"

This was a minor gotcha for me. My project had mostly .jpeg files and I got confused why this wasn't picking them up. Notice the slightly different file extension (ughadgk).
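Once I realized that, it was just one more tracking command (which is why you'll see the extra extension in the file below):

git lfs track "*.jpeg"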

This will make yet another file on your project called .gitattributes. In my case:

*.jpg filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text

This time, when you push, all the images will upload to the Netlify Large Media storage service. It might be an extra-slow seeming push depending on how much you're uploading.

My main gotcha was at this point. This last push would just spin and spin for me and my Git client and eventually fail. Turns out I needed to install the netlify-credential-helper. It worked fine after I did that.

And for the record, it's not just images that can be handled this way, it's any large file. I believe they are called "binary" files and are what Git isn't particularly good at handling.

Check out your repo, the images are just pointers now

Git repo where a JPG file isn't actually an image, but a small text pointer.

And the best part

To resize the image on-the-fly to the size I want, I can do it through URL params:

<img src="slides/Oops.003.jpeg?nf_resize=fit&w=1000" alt="Screenshots of CSS-Tricks and CodePen homepages" />

That becomes even more powerful combined with the responsive images syntax. For example...

<img srcset="img.jpg?nf_resize=fit&w=320 320w, img.jpg?nf_resize=fit&w=480 480w, img.jpg?nf_resize=fit&w=800 800w" sizes="(max-width: 320px) 280px, (max-width: 480px) 440px, 800px" src="img.jpg?nf_resize=fit&w=800" alt="Elva dressed as a fairy">

The post Getting Netlify Large Media Going appeared first on CSS-Tricks.

Let’s Build a JAMstack E-Commerce Store with Netlify Functions

Css Tricks - Tue, 08/20/2019 - 4:41am

A lot of people are confused about what JAMstack is. The acronym stands for JavaScript, APIs, and Markup, but truly, JAMstack doesn’t have to include all three. What defines JAMstack is that it’s served without web servers. If you consider the history of computing, this type of abstraction isn’t unnatural; rather it’s the inevitable progression this industry has been moving towards.

So, if JAMstack tends to be static by definition, it can’t have dynamic functionality, server-side events, or use a JavaScript framework, right? Thankfully, not so. In this tutorial, we’ll set up a JAMstack e-commerce app and add some serverless functionality with Netlify Functions (which abstract AWS Lambda, and are super dope in my opinion).

I'll show more directly how the Nuxt/Vue part was set up in a follow-up post, but for now we're going to focus on the Stripe serverless function. I'll show you how I set this one up, and we'll even talk about how to connect to other static site generators such as Gatsby. Full disclosure: I work for Netlify and am using their tools for this, but it is possible to connect to Stripe with other services. I chose to work for Netlify in part because I enjoy some of the nice abstractions that their services offer.

This site and repo should get you started if you’d like to set something like this up yourself:

Demo Site

GitHub Repo

Scaffold our app

The very first step is to set up our app. This one is built with Nuxt to create a Vue app, but you can replace these commands with your tech stack of choice:

yarn create nuxt-app
hub create
git add -A
git commit -m "initial commit"
git push -u origin master

I am using yarn, hub (which allows me to create repos from the command line) and Nuxt. You may need to install these tools locally or globally before proceeding.

With these few commands, following the prompts, we can set up an entirely new Nuxt project as well as the repo.

If we log into Netlify and authenticate, it will ask us to pick a repo:

I'll use yarn generate to create the project. With that, I can add in the site settings for Nuxt in the dist directory and hit deploy! That's all it takes to set up CI/CD and deploy the site! Now every time I push to the master branch, not only will I deploy, but I'll be given a unique link for that particular deploy as well. So awesome.

A basic serverless function with Netlify

So here's the exciting part, because the setup for this kind of functionality is so quick! If you’re unfamiliar with Serverless, you can think of it like the same JavaScript functions you know and love, but executed on the server. Serverless functions are event-driven logic and their pricing is extremely low (not just on Netlify, but industry-wide) and scales with your usage. And yes, we have to add the qualifier here: serverless still uses servers, but babysitting them is no longer your job. Let’s get started.

Our very basic function looks like this. I stored mine in a folder named functions, and just called it index.js. You can truly call the folder and function what you want.

// functions/index.js
exports.handler = async (event, context) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hi there Tacos", event })
  }
}

We’ll also need to create a netlify.toml file at the root of the project and let it know which directory to find the function in, which in this case, is "functions."

# netlify.toml
[build]
  functions = "functions"

If we push to master and go into the dashboard, you can see it pick up the function!

If you look at the endpoint listed above it’s stored here:
https://ecommerce-netlify.netlify.com/.netlify/functions/index

Really, for any site you give it, the URL will follow this pattern:
https://<yoursiteurlhere>/.netlify/functions/<functionname>

When we hit that endpoint, it provides us with the message we passed in, as well as all the event data we logged as well:

I love how few steps that is! This small bit of code gives us infinite power and capabilities for rich, dynamic functionality on our sites.

Hook up Stripe

Pairing with Stripe is extremely fun because it's easy to use, sophisticated, has great docs, and works well with serverless functions. I have other tutorials where I used Stripe because I enjoy using their service so much.

Here’s a bird’s eye view of the app we’ll be building:

First we’ll go to the Stripe dashboard and get our keys. For anyone totally scandalized right now, it's OK, these are testing keys. You can use them, too, but you’ll learn more if you set them up on your own. (It’s two clicks and I promise it’s not hard to follow along from here.)

We’ll install a package called dotenv which will help us store our key and test it locally. Then, we’ll store our Stripe secret key to a Stripe variable. (You can call it anything, but here I’ve called it STRIPE_SECRET_KEY, and that’s pretty much industry standard.)

yarn add dotenv

require("dotenv").config()
const stripe = require("stripe")(process.env.STRIPE_SECRET_KEY)

In the Netlify dashboard, we’ll go to "Build & deploy," then "Environment" to add in Environment variables, where the STRIPE_SECRET_KEY is key, and the value will be the key that starts with sk.

We’ll also add in some headers so we don’t run into CORS issues. We’ll use these headers throughout the function we’re going to build.

const headers = {
  "Access-Control-Allow-Origin": "*",
  "Access-Control-Allow-Headers": "Content-Type"
}

So, now we’ll create the functionality for talking to Stripe. We’ll make sure we’ll handle the cases that it’s not the HTTP method we’re expecting, and also that we’re getting the information we expect.

You can already see, from what we check for, what data we're going to need to send to Stripe. We'll need the token, the total amount, and an idempotency key.

If you're unfamiliar with idempotency keys, they are unique values that are generated by a client and sent to an API along with a request in case the connection is disrupted. If the server receives a call that it realizes is a duplicate, it ignores the request and responds with a successful status code. Oh, and it's an impossible word to pronounce.

exports.handler = async (event, context) => {
  if (!event.body || event.httpMethod !== "POST") {
    return {
      statusCode: 400,
      headers,
      body: JSON.stringify({ status: "invalid http method" })
    }
  }

  const data = JSON.parse(event.body)

  if (!data.stripeToken || !data.stripeAmt || !data.stripeIdempotency) {
    console.error("Required information is missing.")
    return {
      statusCode: 400,
      headers,
      body: JSON.stringify({ status: "missing information" })
    }
  }

Now, we’ll kick off the Stripe payment processing! We’ll create a Stripe customer using the email and token, do a little logging, and then create the Stripe charge. We’ll specify the currency, amount, email, customer ID, and give a description while we're at it. Finally, we’ll provide the idempotency key (pronounced eye-dem-po-ten-see), and log that it was successful.

(While it's not shown here, we’ll also do some error handling.)
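If you're curious, the error handling that closes out the try block below could be as simple as something like this (a sketch; the exact response shape is up to you):

try {
  // ...the Stripe customer and charge creation shown below...
} catch (err) {
  console.error(err)
  return {
    statusCode: 400,
    headers,
    body: JSON.stringify({ status: "failure", message: err.message })
  }
}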

// stripe payment processing begins here
try {
  await stripe.customers
    .create({
      email: data.stripeEmail,
      source: data.stripeToken
    })
    .then(customer => {
      console.log(
        `starting the charges, amt: ${data.stripeAmt}, email: ${data.stripeEmail}`
      )
      return stripe.charges
        .create(
          {
            currency: "usd",
            amount: data.stripeAmt,
            receipt_email: data.stripeEmail,
            customer: customer.id,
            description: "Sample Charge"
          },
          { idempotency_key: data.stripeIdempotency }
        )
        .then(result => {
          console.log(`Charge created: ${result}`)
        })
    })

Hook it up to Nuxt

If we look back at our application, you can see we have pages and components that live inside the pages. The Vuex store is like the brain of our application. It will hold the state of the app, and that's what will communicate with Stripe. However, we still need to communicate with our user via the client. We'll collect the credit card data in a component called AppCard.vue that will live on the cart page.

First, we'll install a package called vue-stripe-elements-plus, which gives us some Stripe form elements that allow us to collect credit card data, and even sets us up with a pay method that allows us to create tokens for Stripe payment processing. We'll also add a library called uuid that will allow us to generate unique keys, which we'll use for the idempotency key.

yarn add vue-stripe-elements-plus uuid

The default setup they give us to work with vue-stripe-elements-plus looks like this:

<template>
  <div id='app'>
    <h1>Please give us your payment details:</h1>
    <card class='stripe-card'
      :class='{ complete }'
      stripe='pk_test_XXXXXXXXXXXXXXXXXXXXXXXX'
      :options='stripeOptions'
      @change='complete = $event.complete'
    />
    <button class='pay-with-stripe' @click='pay' :disabled='!complete'>Pay with credit card</button>
  </div>
</template>

<script>
import { stripeKey, stripeOptions } from './stripeConfig.json'
import { Card, createToken } from 'vue-stripe-elements-plus'

export default {
  data () {
    return {
      complete: false,
      stripeOptions: {
        // see https://stripe.com/docs/stripe.js#element-options for details
      }
    }
  },

  components: { Card },

  methods: {
    pay () {
      // createToken returns a Promise which resolves in a result object with
      // either a token or an error key.
      // See https://stripe.com/docs/api#tokens for the token object.
      // See https://stripe.com/docs/api#errors for the error object.
      // More general https://stripe.com/docs/stripe.js#stripe-create-token.
      createToken().then(data => console.log(data.token))
    }
  }
}
</script>

So here's what we're going to do. We're going to update the form to store the customer email, and update the pay method to send that and the token or error key to the Vuex store. We'll dispatch an action to do so.

data() {
  return {
    ...
    stripeEmail: ""
  };
},
methods: {
  pay() {
    createToken().then(data => {
      const stripeData = { data, stripeEmail: this.stripeEmail };
      this.$store.dispatch("postStripeFunction", stripeData);
    });
  },
  ...

That postStripeFunction action we dispatched looks like this:

// Vuex store
export const actions = {
  async postStripeFunction({ getters, commit }, payload) {
    commit("updateCartUI", "loading")
    try {
      await axios
        .post(
          "https://ecommerce-netlify.netlify.com/.netlify/functions/index",
          {
            stripeEmail: payload.stripeEmail,
            stripeAmt: Math.floor(getters.cartTotal * 100), // it expects the price in cents
            stripeToken: "tok_visa", // testing token, later we would use payload.data.token
            stripeIdempotency: uuidv1() // we use this library to create a unique id
          },
          { headers: { "Content-Type": "application/json" } }
        )
        .then(res => {
          if (res.status === 200) {
            commit("updateCartUI", "success")
            setTimeout(() => commit("clearCart"), 3000)
            …

We're going to update the UI state to loading while we're processing. Then we'll use axios to post to our function endpoint (the URL you saw earlier in the post when we set up our function). We'll send over the email, the amt, the token and the unique key that we built the function to expect.

Then if it was successful, we'll update the UI state to reflect that.

One last note I'll give is that I store the UI state in a string, rather than a boolean. I usually start it with something like "idle" and, in this case, I'll also have "loading," "success," and "failure." I don't use boolean states because I've rarely encountered a situation where UI state only has two states. When you work with booleans for this purpose, you tend to need to break it out into more and more states, and checking for all of them can get increasingly complicated.
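In the store, that might look roughly like this (a sketch based on the mutation used above; the exact store layout in the demo may differ):

// Vuex store (sketch)
export const state = () => ({
  cartUIStatus: "idle" // "idle" | "loading" | "success" | "failure"
})

export const mutations = {
  updateCartUI: (state, payload) => {
    state.cartUIStatus = payload
  }
}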

As it stands, I can reflect changes in the UI on the cart page with legible conditionals, like this:

<section v-if="cartUIStatus === 'idle'"> <app-cart-display /> </section> <section v-else-if="cartUIStatus === 'loading'" class="loader"> <app-loader /> </section> <section v-else-if="cartUIStatus === 'success'" class="success"> <h2>Success!</h2> <p>Thank you for your purchase. You'll be receiving your items in 4 business days.</p> <p>Forgot something?</p> <button class="pay-with-stripe"> <nuxt-link exact to="/">Back to Home</nuxt-link> </button> </section> <section v-else-if="cartUIStatus === 'failure'"> <p>Oops, something went wrong. Redirecting you to your cart to try again.</p> </section>

And there you have it! We’re all set up and running to accept payments with Stripe on a Nuxt/Vue site with a Netlify function, and it wasn’t even that complicated to set up!

Gatsby Applications

We used Nuxt in this instance but if you wanted to set up the same kind of functionality with something that uses React such as Gatsby, there's a plugin for that. (Everything is a plugin in Gatsby. ☺️)

You would install it with this command:

yarn add gatsby-plugin-netlify-functions

...and add the plugin to your application like this:

plugins: [
  {
    resolve: `gatsby-plugin-netlify-functions`,
    options: {
      functionsSrc: `${__dirname}/src/functions`,
      functionsOutput: `${__dirname}/functions`,
    },
  },
]

The serverless function used in this demo is straight up JavaScript, so it's also portable to React applications. There's a plugin to add the Stripe script to your Gatsby app (again, everything is a plugin). Fair warning: this adds the script to every page on the site. To collect the credit card information on the client, you would use React Stripe Elements, which is similar to the Vue one we used above.

Just make sure that you're collecting from the client and passing all the information the function is expecting:

  • The user email
  • The total amount, in cents
  • The token
  • The idempotency key
Demo Site

GitHub Repo

With such a low barrier to entry, you can see how you can make really dynamic experiences with JAMstack applications. It's amazing how much you can accomplish without any maintenance costs from servers. Stripe and Netlify Functions make setting up payment processing in a static application such a smooth developer experience!

The post Let’s Build a JAMstack E-Commerce Store with Netlify Functions appeared first on CSS-Tricks.

Lazy load embedded YouTube videos

Css Tricks - Tue, 08/20/2019 - 4:41am

This is a very clever idea via Arthur Corenzan. Rather than use the default YouTube embed, which adds a crapload of resources to a page whether the user plays the video or not, use a tiny placeholder webpage that is just a clickable image linked to the YouTube embed.

It still behaves essentially exactly the same: click, play video in place.

The trick is rooted in srcdoc, a feature of <iframe> where you can put the entire contents of an HTML document in the attribute. It's like inline styling but an inline-entire-documenting sort of thing. I've used it in the past when I embedded MailChimp-created newsletters on this site. I'd save the email into the database as a complete HTML document, retrieve it as needed, and chuck it into an <iframe> with srcdoc.

Arthur credits Remy for a tweak to get it working in IE 11 and Adrian for some accessibility tweaks.

I also agree with Hugh in the comments of that post. Now that native lazy loading has dropped in Chrome (see our coverage) we might as well slap loading="lazy" on there too, as that will mean no requests at all if it renders out of viewport.
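Put together, the embed ends up looking something like this (the video ID and the inline styling are placeholders; Arthur's post has the full, refined version):

<iframe
  width="560" height="315"
  src="https://www.youtube.com/embed/VIDEO_ID"
  srcdoc="<style>*{padding:0;margin:0;overflow:hidden}html,body{height:100%}img,span{position:absolute;width:100%;top:0;bottom:0;margin:auto}span{height:1.5em;text-align:center;font:48px/1.5 sans-serif;color:white;text-shadow:0 0 0.5em black}</style><a href=https://www.youtube.com/embed/VIDEO_ID?autoplay=1><img src=https://img.youtube.com/vi/VIDEO_ID/hqdefault.jpg alt='Video preview'><span>▶</span></a>"
  frameborder="0"
  allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
  allowfullscreen
  loading="lazy"
  title="Video title"
></iframe>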

I'll embed a demo here too:

See the Pen
Lazy Loaded YouTube Video
by Chris Coyier (@chriscoyier)
on CodePen.

Direct Link to ArticlePermalink

The post Lazy load embedded YouTube videos appeared first on CSS-Tricks.

Using rel=”preconnect” to establish network connections early and increase performance

Css Tricks - Mon, 08/19/2019 - 3:05pm

Milica Mihajlija:

Adding rel=preconnect to a <link> informs the browser that your page intends to establish a connection to another domain, and that you'd like the process to start as soon as possible. Resources will load more quickly because the setup process has already been completed by the time the browser requests them.

The graphic in the post does a good job of making this an obviously good choice for performance:

Robin did a good job of rounding up information on all this type of stuff a few years back. Looks like the best practice right now is using these two:

<link rel="preconnect" href="http://example.com"> <link rel="dns-prefetch" href="http://example.com">

Do that for all domains that aren't the main domain you're loading the document from.

Taking a quick look at the resources CSS-Tricks loads right now, I get:

secure.gravatar.com
static.codepen.io
res.cloudinary.com
ad.doubleclick.com
s3.buysellads.com
srv.buysellads.com
www.google-analytics.com

That'd be 14 extra <link> tags in the first few packets of data on every request on this site. It sounds like a perf win, but I'd want to test that before no-brainer chucking it in there.

Andy Davies did some recent experimentation:

So what difference can preconnect make?

I used the HTTP Archive to find a couple of sites that use Cloudinary for their images, and tested them unchanged, and then with the preconnect script injected. Each test consisted of nine runs, using Chrome emulating a mobile device, and the Cable network profile.

There’s a noticeable visual improvement in the first site, with the main background image loading over half a second sooner (top) than on the unchanged site (bottom).

This stuff makes me think of instant.page (which just went v2), which is a fancy little script that preloads things based on interactions. It's now a browser extension (FasterChrome) that I've been trying out. I can't say I notice a huge difference, but I'm almost always on fast internet connections.

The post Using rel=”preconnect” to establish network connections early and increase performance appeared first on CSS-Tricks.

Bounce Element Around Viewport in CSS

Css Tricks - Mon, 08/19/2019 - 4:34am

Let's say you were gonna bounce an element all around a screen, sorta like an old school screensaver or Pong or something.

You'd probably be tracking the X location of the element, increasing or decreasing it in a time loop and — when the element reached the maximum or minimum value — it would reverse direction. Then do that same thing with the Y location and you've got the effect we're after. Simple enough with some JavaScript and math.

Here's The Coding Train explaining it clearly:

Here's a canvas implementation. It's Pong so it factors in paddles and is slightly more complicated, but the basic math is still there:

See the Pen
Pong
by Joseph Gutierrez (@DerBaumeister)
on CodePen.

But what if we wanted to do this purely in CSS? We could write @keyframes that move the transform or left/top properties... but what values would we use? If we're trying to bounce around the entire screen (viewport), we'd need to know the dimensions of the screen and then use those values. But we never know that exact size in CSS.

Or do we?

CSS has viewport units, which are based on the size of the entire viewport. Plus, we've got calc() and we presumably know the size of our own element.

That's the clever root of Scott Kellum's demo:

See the Pen
Codepen screensaver
by Scott Kellum (@scottkellum)
on CodePen.

The extra tricky part is breaking the X animation and the Y animation apart into two separate animations (one on a parent and one on a child) so that, when the direction reverses, it can happen independently and it looks more screensaver-like.

<div class="el-wrap x"> <div class="el y"></div> </div> :root { --width: 300px; --x-speed: 13s; --y-speed: 7s; --transition-speed: 2.2s; } .el { width: var(--width); height: var(--width); } .x { animation: x var(--x-speed) linear infinite alternate; } .y { animation: y var(--y-speed) linear infinite alternate; } @keyframes x { 100% { transform: translateX(calc(100vw - var(--width))); } } @keyframes y { 100% { transform: translateY(calc(100vh - var(--width))); } }

I stole that idea, and added some blobbiness and an extra element for this little demo:

See the Pen
Morphing Blogs with `border-radius`
by Chris Coyier (@chriscoyier)
on CodePen.

The post Bounce Element Around Viewport in CSS appeared first on CSS-Tricks.

Can you view print stylesheets applied directly in the browser?

Css Tricks - Mon, 08/19/2019 - 4:33am

Yep.

Let's take a look at how to do it in different browsers. Although note the date of this blog post. This stuff tends to change over time, so if anything here is wrong, let us know and we can update it.
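If you need something to poke at while testing, any simple print stylesheet will do, something like:

@media print {
  /* Hide chrome that makes no sense on paper */
  nav, footer, .ad {
    display: none;
  }
  body {
    font-size: 12pt;
    color: black;
  }
}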

In Firefox...

It's a little button in DevTools. So easy!

  1. Open DevTools (Command+Option+i)
  2. Go to the “Inspector” tab
  3. Click the little page icon
In Chrome and Edge...

It's a little weirder, I think, but it's still a fairly easy thing to do in DevTools.

  • Open DevTools (Command+Option+i)
  • If you don't have the weird-special-bottom-area-thing, press the Escape key
  • Click the menu icon to choose tabs to open
  • Select the “Rendering” tab
  • Scroll to bottom of the “Rendering” tab options
  • Choose print from the options for Emulate CSS media
In Safari...

Safari has a little button a lot like Firefox, but it looks different.

  1. Open DevTools (Command+Option+i)
  2. Go to the “Inspector” tab
  3. Click the little page icon

The post Can you view print stylesheets applied directly in the browser? appeared first on CSS-Tricks.
