Front End Web Development

Responsive web design turns ten.

CSS-Tricks - Wed, 05/27/2020 - 3:41am

Ethan on the thinking and research that inspired the term:

Around that time, my partner Elizabeth visited the High Line in New York City shortly after it opened. When she got back, she told me about these wheeled lounge chairs she saw in one section, and how people would move them apart for a bit of solitude, or push a few chairs together to sit closer to friends. We got to excitedly chatting about them. I thought there was something really compelling about that image: a space that could be controlled, reshaped, and redesigned by the people who moved through it.

I remember spending that evening reading more about those chairs and, from there, about more dynamic forms of architecture. I read about concepts for walls built with tensile materials and embedded sensors, and how those walls could bend and flex as people drew near to them. I read about glass walls that could become opaque at the flip of a switch, or when movement was detected. I even bought a rather wonderful book on the subject, Interactive Architecture, which described these new spaces as “a conversation” between physical objects or spaces, and the people who interacted with them.

After a few days of research, I found some articles that alternated between two different terms for the same concept. They’d call it interactive architecture, sure, but then they’d refer to it with a different name: responsive architecture.

Fascinating.

A decade later, responsive web design is so locked in that it’s simply an assumption. I would have called it an assumption in half that time. From my answer in an interview…

Is responsive something that you have to sell in any more or does everyone get it now?

I think that responsive design was an assumption in 2015. Even then, if you delivered a website to a client that was just a zoomed out “desktop” website they would assume it’s broken and that you didn’t really do your job. Today, even more so. It’s just not done.

The technical side of responsive design is fascinating to me of course. Even Google has guides on the subject and highly encourages this approach. But the core technical implementation isn’t particularly complex. Stay fluid; use some @media queries to restyle things as needed.
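That fluid-plus-media-queries recipe can be sketched in a few lines of CSS (the breakpoint and class name here are illustrative, not from the article):

```css
/* Fluid by default: a single full-width column */
.page {
  width: 100%;
  max-width: 60rem;
  margin: 0 auto;
}

/* Restyle as needed once there's room for a sidebar */
@media (min-width: 700px) {
  .page {
    display: grid;
    grid-template-columns: 2fr 1fr;
    gap: 1rem;
  }
}
```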

The bigger deal in the last decade was the impact on businesses. Adjusting workflows to accommodate this style of thinking. Combining teams of developers who used to work on entirely different codebases now working on a single codebase. The impact at organizations wasn’t nearly as straightforward as the technology of it all.

There is a resonance between that and more recent shifts in the world of building websites, like the astounding rise of design systems and, even more so, the Coup d’état of JavaScript.


The post Responsive web design turns ten. appeared first on CSS-Tricks.

A Guide to the Responsive Images Syntax in HTML

CSS-Tricks - Tue, 05/26/2020 - 12:15pm

This guide is about the HTML syntax for responsive images (and a little bit of CSS for good measure). The responsive images syntax is about serving one image from multiple options based on rules and circumstances. There are two forms of responsive images, and they’re for two different things:

If your only goal is…

Increased Performance

Then what you need is…

<img srcset="" src="" alt="" >

There is a lot of performance gain to be had by using responsive images. Image weight has a huge impact on pages’ overall performance, and responsive images are one of the best things that you can do to cut image weight. Imagine the browser being able to choose between a 300×300 image or a 600×600. If the browser only needs the 300×300, that’s potentially a 4× bytes-over-the-wire savings! Savings generally go up as the display resolution and viewport size go down; on the smallest screens, a couple of case studies have shown byte savings of 70–90%.
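A minimal sketch of that scenario (filenames hypothetical): the browser only downloads the 600×600 version on higher-density displays, and the smaller file everywhere else.

```html
<img
  src="photo-300.jpg"
  srcset="photo-600.jpg 2x"
  alt="A photo served at two resolutions"
>
```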

Using srcset

If you also need…

Design Control

Then what you need is…

<picture> <source srcset="" media=""> <source srcset="" media=""> <img src="" alt=""> </picture>

Another perfectly legit goal with responsive images is not just to serve different sizes of the same image, but to serve different images. For example, cropping an image differently depending on the size of the screen and differences in the layout. This is referred to as “art direction.”

The <picture> element is also used for fallback image types and any other sort of media query switching (e.g. different images for dark mode). You get greater control of what browsers display.

Using <picture>

There is a lot to talk about here, so let’s go through both syntaxes, all of the related attributes and values, and talk about a few related subjects along the way, like tooling and browsers.

Using srcset

The <img srcset="" src="" alt=""> syntax is for serving differently-sized versions of the same image. You could try to serve entirely different images using this syntax, but browsers assume that everything in a srcset is visually identical and will choose whichever size they think is best, in impossible-for-you-to-predict ways. So I wouldn’t recommend it.

Perhaps the easiest-possible responsive images syntax is adding a srcset attribute with x descriptors on the images to label them for use on displays with different pixel-densities.

<img
  alt="A baby smiling with a yellow headband."
  src="baby-lowres.jpg"
  srcset="baby-highres.jpg 2x"
>

Here, we’ve made the default (the src) the “low res” (1×) copy of the image. Defaulting to the smallest/fastest resources is usually the smart choice. We also provide a 2× version. If the browser knows it is on a higher pixel-density display (the 2x part), it will use that image instead.

Demo

<img
  alt="A baby smiling with a yellow headband."
  src="baby-lowres.jpg"
  srcset="
    baby-high-1.jpg 1.5x,
    baby-high-2.jpg 2x,
    baby-high-3.jpg 3x,
    baby-high-4.jpg 4x,
    baby-high-5.jpg 100x
  "
>

You can do as many pixel-density variants as you like.

While this is cool and useful, x descriptors only account for a small percentage of responsive images usage. Why? They only let browsers adapt based on one thing: display pixel-density. A lot of times, though, our responsive images are on responsive layouts, and the image’s layout size is shrinking and stretching right along with the viewport. In those situations, the browser needs to make decisions based on two things: the pixel-density of the screen, and the layout size of the image. That’s where w descriptors and the sizes attribute come in, which we’ll look at in the next section.

Using srcset / w + sizes

This is the good stuff. This accounts for around 85% of responsive images usage on the web. We’re still serving the same image at multiple sizes, only we’re giving the browser more information so that it can adapt based on both pixel-density and layout size.

<img
  alt="A baby smiling with a yellow headband."
  srcset="
    baby-s.jpg 300w,
    baby-m.jpg 600w,
    baby-l.jpg 1200w,
    baby-xl.jpg 2000w
  "
  sizes="70vmin"
>

We’re still providing multiple copies of the same image and letting the browser pick the most appropriate one. But instead of labeling them with a pixel density (x), we’re labeling them with their resource width, using w descriptors. So if baby-s.jpg is 300×450, we label it as 300w.

Using srcset with width (w) descriptors like this means that it will need to be paired with the sizes attribute so that the browser will know how large of a space the image will be displaying in. Without this information, browsers can’t make smart choices.

Demo

Creating accurate sizes

Creating sizes attributes can get tricky. The sizes attribute describes the width that the image will display within the layout of your specific site, meaning it is closely tied to your CSS. The width that images render at is layout-dependent, not just viewport-dependent!

Let’s take a look at a fairly simple layout with three breakpoints. Here’s a video demonstrating this:

Demo

The breakpoints are expressed with media queries in CSS:

body {
  margin: 2rem;
  font: 500 125% system-ui, sans-serif;
}
.page-wrap {
  display: grid;
  gap: 1rem;
  grid-template-columns: 1fr 200px;
  grid-template-areas:
    "header header"
    "main aside"
    "footer footer";
}
@media (max-width: 700px) {
  .page-wrap {
    grid-template-columns: 100%;
    grid-template-areas:
      "header"
      "main"
      "aside"
      "footer";
  }
}
@media (max-width: 500px) {
  body {
    margin: 0;
  }
}

The image is sized differently at each breakpoint. Here’s a breakdown of all of the bits and pieces that affect the image’s layout width at the largest breakpoint (when the viewport is wider than 700px):

The image is as wide as 100vw minus all that explicitly sized margin, padding, column widths, and gap.
  • At the largest size: there is 9rem of explicit spacing, so the image is calc(100vw - 9rem - 200px) wide. If that column used a fr unit instead of 200px, we’d kinda be screwed here.
  • At the medium size: the sidebar is dropped below, so there is less spacing to consider. Still, we can do calc(100vw - 6rem) to account for the margins and padding.
  • At the smallest size: the body margin is removed, so just calc(100vw - 2rem) will do the trick.

Phew! To be honest, I found that a little challenging to think out, and made a bunch of mistakes as I was creating this. In the end, I had this:

<img
  ...
  sizes="
    (max-width: 500px) calc(100vw - 2rem),
    (max-width: 700px) calc(100vw - 6rem),
    calc(100vw - 9rem - 200px)
  "
/>

A sizes attribute that gives the browser the width of the image across all three breakpoints, factoring in the layout grid, and all of the surrounding gap, margin, and padding that end up impacting the image’s width.

Now wait! Drumroll! 🥁🥁🥁 That’s still wrong. I don’t understand why exactly, because to me that looks like it 100% describes what is happening in the CSS layout. But it’s wrong because Martin Auswöger’s RespImageLint says so. Running that tool over the isolated demo reports no problems except the fact that the sizes attribute is wrong for some viewport sizes, and should be:

<img
  ...
  sizes="
    (min-width: 2420px) 2000px,
    (min-width: 720px) calc(94.76vw - 274px),
    (min-width: 520px) calc(100vw - 96px),
    calc(100vw - 32px)
  "
>

I don’t know how that’s calculated and it’s entirely unmaintainable by hand, but, it’s accurate. Martin’s tool programmatically resizes the page a bunch and writes out a sizes attribute that describes the actual, observed width of the image over a wide range of viewport sizes. It’s computers, doing math, so it’s right. So, if you want a super-accurate sizes attribute, I’d recommend just putting a wrong one on at first, running this tool, and copying out the correct one.

For an even deeper dive into all this, check out Eric Portis’ w descriptors and sizes: Under the hood.

Being more chill about sizes

Another option is to use the Horseshoes & Hand Grenades Method™ of sizes (in other words, close counts). It comes highly recommended.

For example, sizes="96vw" says, “This image is going to be pretty big on the page — almost the full width — but there will always be a little padding around the edges, so not quite.” Or sizes="(min-width: 1000px) 33vw, 96vw" says, “This image is in a three-column layout on large screens and close to full-width otherwise.” Practicality-wise, this can be a sane solution.

You might find that some automated responsive image solutions, which have no way of knowing your layout, make a guess — something like sizes="(max-width: 1000px) 100vw, 1000px". This is just saying, “Hey we don’t really know much about this layout, but we’re gonna take a stab and say, worst case, the image is full-width, and let’s hope it never renders larger than 1000px”.

Abstracting sizes

I’m sure you can imagine how easy it is to not only get sizes wrong, but also have it become wrong over time as layouts change on your site. It may be smart for you to abstract it using a templating language or content filter so that you can change the value across all of your images more easily.

I’m essentially talking about setting a sizes value in a variable once, and using that variable in a bunch of different <img> elements across your site. Native HTML doesn’t offer that, but any back end language does; for instance, PHP constants, Rails config variables, the React context API used for a global state variable, or variables within a templating language like Liquid can all be used to abstract sizes.

<?php
  // Somewhere global
  $my_sizes = "";
?>

<img
  srcset=""
  src=""
  alt=""
  sizes="<?php echo $my_sizes; ?>"
/>

“Browser’s choice”

Now that we have a sizes attribute in place, the browser knows what size (or close to it) the image will render at and can work its magic. That is, it can do some math that factors in the pixel density of the screen, and the size that the image will render at, then pick the most appropriately-sized image.

The math is fairly straightforward at first. Say you’re about to show an image that is 40vw wide on a viewport that is 1200px wide, on a 2x pixel-density screen. The perfect image would be 960 pixels wide, so the browser is going to look for the closest thing it’s got. The browser will always calculate a target size that it would prefer based on the viewport and pixel-density situations, and what it knows from sizes, and compare that target to what it’s got to pick from in srcset. How browsers do the picking, though, can get a little weird.
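The straightforward part of that math can be sketched in a few lines of JavaScript. This is a simplification for illustration only — real browsers use their own heuristics and, as noted below, are free to weigh other factors:

```javascript
// sizes="40vw" on a given viewport → CSS pixels the image will occupy
function layoutWidth(sizesVw, viewportPx) {
  return (sizesVw / 100) * viewportPx;
}

// The "perfect" resource width also factors in device pixel density
function targetWidth(sizesVw, viewportPx, dpr) {
  return layoutWidth(sizesVw, viewportPx) * dpr;
}

// Naive pick: the smallest srcset candidate at or above the target,
// falling back to the largest one available.
function pickFromSrcset(widths, target) {
  const candidates = widths.filter((w) => w >= target);
  return candidates.length ? Math.min(...candidates) : Math.max(...widths);
}

// 40vw image, 1200px viewport, 2x screen → 960px target
console.log(pickFromSrcset([300, 600, 1200, 2000], targetWidth(40, 1200, 2)));
```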

A browser might factor more things into this equation if it chooses to. For example, it could consider the user’s current network speeds, or whether or not the user has flipped on some sort of “data saver” preference. I’m not sure if any browsers actually do this sort of thing, but they are free to if they wish as that’s how the spec was written. What some browsers sometimes choose to do is pull from cache. If the math shows they should be using a 300px image, but they already have a 600px in local cache, they will just use that. Smart. Room for this sort of thing is a strength of the srcset/sizes syntax. It’s also why you always use different sizes of the same image, within srcset: you’ve got no way to know which image is going to be selected. It’s the browser’s choice.

This is weird. Doesn’t the browser already know this stuff?

You might be thinking, “Uhm why do I have to tell the browser how big the image will render, doesn’t it know that?” Well, it does, but only after it’s downloaded your HTML and CSS and laid everything out. The sizes attribute is about speed. It gives the browser enough information to make a smart choice as soon as it sees your <img>.

<img
  data-sizes="auto"
  data-srcset="
    responsive-image1.jpg 300w,
    responsive-image2.jpg 600w,
    responsive-image3.jpg 900w"
  class="lazyload"
/>

Now you might be thinking, “But what about lazy-loaded images?” (as in, by the time a lazy-loaded image is requested, layout’s already been done and the browser already knows the image’s render size). Well, good thinking! Alexander Farkas’ lazysizes library writes out sizes attributes automatically on lazyload, and there’s an ongoing discussion about how to do auto-sizes for lazy-loaded images, natively.

sizes can be bigger than the viewport

Quick note on sizes. Say you have an effect on your site so that an image “zooms in” when it’s clicked. Maybe it expands to fill the whole viewport, or maybe it zooms even more, so that you can see more detail. In the past, we might have had to swap out the src on click in order to switch to a higher-res version. But now, assuming a higher-res source is already in the srcset, you can just change the sizes attribute to something huge, like 200vw or 300vw, and the browser should download the super-high-res source automatically for you. Here’s an article by Scott Jehl on this technique.
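A sketch of that technique (the id, filenames, and the 300vw value are illustrative): bump sizes on click and let the browser fetch a bigger candidate from the srcset it already has.

```html
<img
  id="zoomable"
  srcset="detail-800.jpg 800w, detail-1600.jpg 1600w, detail-3200.jpg 3200w"
  src="detail-800.jpg"
  sizes="(min-width: 1000px) 33vw, 96vw"
  alt="A photo that zooms in when clicked"
>
<script>
  document.querySelector("#zoomable").addEventListener("click", (event) => {
    // Tell the browser the image now renders much larger;
    // it should upgrade to a higher-res source automatically.
    event.currentTarget.sizes = "300vw";
  });
</script>
```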

↩️ Back to top

Using <picture>

Hopefully, we’ve beaten it into the ground that <img srcset="" sizes="" alt=""> is for serving differently-sized versions of the same image. The <picture> syntax can do that too, but the difference here is that the browser must respect the rules that you set. That’s useful when you want to change more than just the resolution of the loaded image to fit the user’s situation. This intentional changing of the image is usually called “art direction.”

Art Direction

<picture>
  <source srcset="baby-zoomed-out.jpg" media="(min-width: 1000px)" />
  <source srcset="baby.jpg" media="(min-width: 600px)" />
  <img src="baby-zoomed-in.jpg" alt="Baby Sleeping" />
</picture>

This code block is an example of what it might look like to have three stages of an “art directed” image.

  • On large screens, show a zoomed-out photo.
  • On medium screens, show that same photo, zoomed in a bit.
  • On small screens, zoom in even more.

The browser must respect our media queries and will swap images at our exact breakpoints. That way, we can be absolutely sure that nobody on a small screen will see a tiny, zoomed-out image, which might not have the same impact as one of the zoomed-in versions.

Here’s a demo, written in Pug to abstract out some of the repetitive nature of <picture>.

Art direction can do a lot more than just cropping

Although cropping and zooming like this is the most common form of art direction by far, you can do a lot more with it.

Sky’s the limit, really.

Combining source and srcset

Because <source> also uses the srcset syntax, they can be combined. This means that you can still reap the performance benefits of srcset even while swapping out visually-different images with <source>. It gets pretty verbose though!

<picture>
  <source
    srcset="
      baby-zoomed-out-2x.jpg 2x,
      baby-zoomed-out.jpg
    "
    media="(min-width: 1000px)"
  />
  <source
    srcset="
      baby-2x.jpg 2x,
      baby.jpg
    "
    media="(min-width: 600px)"
  />
  <img
    srcset="baby-zoomed-out-2x.jpg 2x"
    src="baby-zoomed-out.jpg"
    alt="Baby Sleeping"
  />
</picture>

The more variations you create and the more resized versions you create per variation, the more verbose this code has to get.

Fallbacks for modern image formats

The <picture> element is uniquely suited to being able to handle “fallbacks.” That is, images in cutting-edge formats that not all browsers might be able to handle, with alternative formats for browsers that can’t load the preferred, fancy one. For example, let’s say you want to use an image in the WebP format. It’s a pretty great image format, often being the most performant choice, and it’s supported everywhere that the <picture> element is, except Safari. You can handle that situation yourself, like:

<picture>
  <source srcset="party.webp">
  <img src="party.jpg" alt="A huge party with cakes.">
</picture>

This succeeds in serving a WebP image to browsers that support it, and falls back to a JPEG image, which is definitely supported by all browsers.

Here’s an example of a photograph (of me) at the exact same size where the WebP version is about 10% (!!!) of the size of the JPEG.


How do you create a WebP image? Well, it’s more of a pain in the butt than you’d like it to be, that’s for sure. There are online converters and command line tools, and some modern design software, like Sketch, can export the format directly. My preference is to use an image hosting CDN service that automatically sends images in the perfect format for the requesting browser, which makes all this unnecessary (because you can just use img/srcset).

WebP isn’t the only player like this. Safari doesn’t support WebP, but does support a format called JPEG 2000, which has some advantages over JPEG. Internet Explorer 11 happens to support an image format called JPEG-XR, which has different advantages. So to hit all three, that could look like:

<picture>
  <source srcset="/images/cereal-box.webp" type="image/webp" />
  <source srcset="/images/cereal-box.jp2" type="image/jp2" />
  <img src="/images/cereal-box.jxr" type="image/vnd.ms-photo" />
</picture>

This syntax (borrowed from a blog post by Josh Comeau) supports all three of the “next-gen” image formats in one go. IE 11 doesn’t support the <picture> syntax, but it doesn’t matter because it will get the <img> fallback which is in the JPEG-XR format it understands.

Estelle Weyl also covered this idea in a 2016 blog post on image optimization.

↩️ Back to top

Where do you get the differently-sized images?

You can make them yourself. Heck, even the free Preview app on my Mac can resize an image and “Save As.”

The Mac Preview app resizing an image, which is something that literally any image editing application (including Photoshop, Affinity Designer, Acorn, etc.) can also do. Plus, they often help by exporting the variations all at once.

But that’s work. It’s more likely that the creation of variations of these images is automated somehow (see the section below) or you use a service that allows you to create variations just by manipulating the URL to the image. That’s a super common feature of any image hosting/image CDN service.

Not only do these services offer on-the-fly image resizing, they also often offer additional stuff, like cropping, filtering, adding text, and all kinds of useful features, not to mention serving assets efficiently from a CDN and automatically in next-gen formats. That makes them a really strong choice for just about any website, I’d say.

Here’s Glen Maddern in a really great screencast talking about how useful Image CDNs can be:

Design software is becoming more aware that we often need multiple copies of images. The exporting interface in Figma is pretty nice, where any given selection can be exported. It allows multiple exports at once (in different sizes and formats) and remembers what you did the last time you exported.

Exporting in Figma

Automated responsive images

The syntax of responsive images is complex to the point that doing it by hand is often out of the question. I’d highly recommend automating and abstracting as much of this away as possible. Fortunately, a lot of tooling that helps you build websites knows this and includes some sort of support for it. I think that’s great because that’s what software should be doing for us, particularly when it is something that is entirely programmatic and can be done better by code than by humans. Here are some examples…

  • Cloudinary has this responsive breakpoints tool including an API for generating the perfect breakpoints.
  • WordPress generates multiple versions of images and outputs in the responsive images syntax by default.
  • Gatsby has a grab-bag of plugins for transforming and implementing images on your site. You ultimately implement them with gatsby-image, which is a whole fancy thing for implementing responsive images and other image loading optimizations. Speaking of React, it has component abstractions like “An Almost Ideal React Image Component” that also does cool stuff.
  • Nicolas Hoizey’s Images Responsiver Node module (and its Eleventy plugin) makes a ton of smart markup choices for you, and pairs nicely with a CDN that can handle the on-the-fly resizing bits.
  • These are just a few examples! Literally anything you can do to make this process easier or automatic is worth doing.
Here’s me inspecting an image in a WordPress blog post and seeing a beefy srcset with a healthy amount of pre-generated size options and a sizes attribute tailored to this theme.

A landing page for gatsby-image explaining all of the additional image loading stuff it can do.

I’m sure there are many more CMSs and other software products that help automate away the complexities of creating the responsive images syntax. While I love that all this syntax exists, I find it all entirely too cumbersome to author by hand. Still, I think it’s worth knowing all this syntax so that we can build our own abstractions, or check in on the abstractions we’re using to make sure they are doing things correctly.

Related concepts
  • The object-fit property in CSS controls how an image will behave in its own box. For example, an image will normally “squish” if you change the dimensions to something different than its natural aspect ratio, but object-fit can be used to crop it or contain it instead.
  • The object-position property in CSS allows you to nudge an image around within its box.
What about responsive images in CSS with background images?

We’ve covered exactly this before. The trick is to use @media queries to change the background-image source. For example:

.img {
  background-image: url(small.jpg);
}
@media (min-width: 468px),
       (-webkit-min-device-pixel-ratio: 2),
       (min-resolution: 192dpi) {
  .img {
    background-image: url(large.jpg);
  }
}

With this CSS syntax, depending on the browser conditions, the browser will only download one of the two images, which achieves the same performance goal that the responsive images syntax in HTML does. If it helps, think of the above as the CSS equivalent of the <picture> syntax: the browser must follow your rules and display what matches.

If you’re looking to let the browser choose the best option, like srcset/sizes, but in CSS, the solution is ultimately going to be the image-set() function. There are two problems with image-set() today, though:

  • Support for it isn’t there yet. Safari’s implementation leads the pack, but image-set() has been prefixed in Chrome for eight years, and it’s not there at all in Firefox.
  • Even the spec itself seems behind the times. For example, it only supports x descriptors (no w, yet).
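For reference, a sketch of what usable image-set() looks like today — prefixed, x descriptors only, with a plain declaration as the fallback (selector and filenames are illustrative):

```css
.hero {
  /* Fallback for browsers without image-set() support */
  background-image: url(small.jpg);
  /* Supporting browsers pick a source based on pixel density */
  background-image: -webkit-image-set(
    url(small.jpg) 1x,
    url(large.jpg) 2x
  );
}
```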

Best to just use media queries for now.

Do you need to polyfill?

I’m pretty meh on polyfilling any of this right this moment. There is a great polyfill though, called Picturefill, which will buy you full IE 9-11 support if you need that. Remember, though, that none of this stuff breaks to the point of not displaying any image at all in non-supporting browsers, assuming you have an <img src="" alt=""> in there somewhere. If you make the (fairly safe) assumption that IE 11 is running on a low-pixel-density desktop display, you can make your image sources reflect that by default and build out from there.

Other important image considerations
  • Optimizing quality: The point of responsive images is loading the smallest, most impactful resource that you can. You can’t achieve that without effectively compressing your image. You’re aiming for a “sweet spot” for every image, between looking good and being light. I like to let image hosting services solve this problem for me, but Etsy has a really great writeup of what they’ve been able to accomplish with infrastructure that they built themselves.
  • Serving from CDNs: Speaking of image hosting services, speed comes in many forms. Fast servers that are geographically close to the user are an important speed factor as well.
  • Caching: What’s better than loading less data over the network? Loading no data at all! That’s what HTTP caching is for. Using the Cache-Control header, you can tell the browser to hang on to images so that if the same image is needed again, the browser doesn’t have to go over the network to get it, which is a massive performance boost for repeat viewings.
  • Lazy loading: This is another way to avoid loading images entirely. Lazy loading means waiting to download an image until it is in or near the viewport. So, for example, an image way far down the page won’t load if the user never scrolls there.
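A sketch of those last two ideas together (the header value, filenames, and the native loading attribute are illustrative choices, not the only correct ones):

```html
<!-- Caching is configured on the server's response, not in markup, e.g.:
     Cache-Control: public, max-age=31536000, immutable -->

<!-- Native lazy loading: the browser defers the request until the
     image is near the viewport -->
<img
  src="photo-600.jpg"
  srcset="photo-600.jpg 600w, photo-1200.jpg 1200w"
  sizes="100vw"
  loading="lazy"
  alt="A photo far down the page"
>
```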
Other good resources

(That I haven’t linked up in the post already!)

Browser Support

This is for srcset/sizes, but it’s the same for <picture>.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop:
  Chrome 38, Firefox 38, IE No, Edge 16, Safari 9
Mobile / Tablet:
  Android Chrome 81, Android Firefox 68, Android 81, iOS Safari 9.0-9.2

The post A Guide to the Responsive Images Syntax in HTML appeared first on CSS-Tricks.

CSS Tips for New Devs

CSS-Tricks - Tue, 05/26/2020 - 12:15pm

Amber Wilson has some CSS Tips for New Devs, like:

It’s not a good idea to fix shortcomings in your HTML with CSS. Fix your HTML first!

And…

You can change CSS right in your browser’s DevTools (to open them, right-click the browser window and choose “inspect” or “inspect element”). The great thing is, none of the styles will be saved so you can experiment here! Another great thing about the DevTools is the “computed styles” tab, because this shows you exactly what styles are currently applied to an element. This can be really helpful when it comes to debugging your CSS!

There are 24 tips there. I “counted” by using DevTools to change the <ul> to an <ol>. 😉


The post CSS Tips for New Devs appeared first on CSS-Tricks.

Framer Web

CSS-Tricks - Tue, 05/26/2020 - 4:35am

The prototyping app Framer just launched the web version of their design tool and it looks pretty darn neat. I particularly love the design of the marketing site that explains how to use Framer and what sets it apart from other design tools. They have a ton of examples that you can pop open to explore as well, like this demo for how to make hover tooltips in the app:

I have to say that I love how the UI feels in Framer — both on the website and the design app itself. It all reminds me of the Oculus Quest UI with rounded corners and dark-mode inspired elements. I know it’s probably just a silly trend, but I like it!

Anyway, I’ve yet to dig into this fancy new tool too much but the animation effects appear to be quite magic and absolutely worth experimenting with.


The post Framer Web appeared first on CSS-Tricks.

How to Convert a Date String into a Human-Readable Format

CSS-Tricks - Mon, 05/25/2020 - 4:13am

I’ll be the first to admit that I’m writing this article, in part, because it’s something I look up often and want to be able to find it next time. Formatting a date string that you get from an API in JavaScript can take many shapes — anything from loading all of Moment.js for very fine-grained control to using just a couple of lines to update it. This article is not meant to be comprehensive, but aims to show the most common path to human legibility.

ISO 8601 is an extremely common date format. The “Z” at the end means the time is in UTC, i.e., it carries no local timezone offset. Here’s an example: 2020-05-25T04:00:00Z. When I bring data in from an API, it’s typically in ISO 8601 format.

If I wanted to format the above string in a more readable format, say May 25, 2020, I would do this:

const dateString = '2020-05-25T04:00:00Z'

const formatDate = (dateString) => {
  const options = { year: "numeric", month: "long", day: "numeric" }
  return new Date(dateString).toLocaleDateString(undefined, options)
}

Here’s what I’m doing…

First, I’m passing in options for how I want the output to be formatted. There are many, many other options we could pass in there to format the date in different ways. I’m just showing a fairly common example.

const options = { year: "numeric", month: "long", day: "numeric" }

Next, I’m creating a new Date instance that represents a single moment in time in a platform-independent format.

return new Date(dateString)

Finally, I’m using the .toLocaleDateString() method to apply the formatting options.

return new Date(dateString).toLocaleDateString(undefined, options)

Note that I passed in undefined. Not defining the value in this case means the time will be represented by whatever the default locale is. You can also set it to be a certain area/language. Or, for apps and sites that are internationalized, you can pass in what the user has selected (e.g. 'en-US' for the United States, 'de-DE' for Germany, and so forth). There’s a nice npm package that includes a list of locales and their codes.
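A quick sketch of passing an explicit locale. Note the added timeZone option — my addition, not part of the original snippet — so the UTC timestamp isn’t shifted by the machine’s local timezone:

```javascript
const formatDate = (dateString, locale) => {
  const options = {
    year: "numeric",
    month: "long",
    day: "numeric",
    timeZone: "UTC", // keep the "Z" (UTC) time from shifting into local time
  };
  return new Date(dateString).toLocaleDateString(locale, options);
};

console.log(formatDate("2020-05-25T04:00:00Z", "en-US")); // "May 25, 2020"
console.log(formatDate("2020-05-25T04:00:00Z", "de-DE")); // "25. Mai 2020"
```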

Hope that helps you get started! And high fives to future Sarah for not having to look this up again in multiple places. 🤚

The post How to Convert a Date String into a Human-Readable Format appeared first on CSS-Tricks.

Fun with Fonts

Typography - Mon, 05/25/2020 - 1:29am

Today I launched two short multiple choice quizzes. The first starts at the beginning with Gutenberg, with questions about his life and his famous Bible. Some of the questions are pretty easy; others you might find rather difficult. The second game, Glorious Glyphs, tests your font identification chops by having you identify individual characters or […]

The post Fun with Fonts appeared first on I Love Typography.

“The Modern Web”

Css Tricks - Fri, 05/22/2020 - 10:11am

A couple of interesting articles making the rounds:

I like Tom’s assertion that React (which he’s using as a stand-in for JavaScript frameworks in general) has an ideal usage:

There is a sweet spot of React: in moderately interactive interfaces. Complex forms that require immediate feedback, UIs that need to move around and react instantly. That’s where it excels.

If there is anything I hope for the world of web design and development, it’s that we get better at picking the right tools for the job.

I heard several people home in on this:

I can, for example, guarantee that this blog is faster than any Gatsby blog (and much love to the Gatsby team) because there is nothing that a React static site can do that will make it faster than a non-React static site.

One reaction was hell yes. React is a bunch of JavaScript and it does lots of stuff, but does not grant superpowers that make the web faster than it was without it. Another reaction was: well, it actually does. That’s kind of the whole point of SPAs: not needing to reload the page. Instead, we’re able to make a trimmed network request for the new data needed for a new page and re-render only what is necessary.

Rich digs into that even more:

When I tap on a link on Tom’s JS-free website, the browser first waits to confirm that it was a tap and not a brush/swipe, then makes a request, and then we have to wait for the response. With a framework-authored site with client-side routing, we can start to do more interesting things. We can make informed guesses based on analytics about which things the user is likely to interact with and preload the logic and data for them. We can kick off requests as soon as the user first touches (or hovers) the link instead of waiting for confirmation of a tap — worst case scenario, we’ve loaded some stuff that will be useful later if they do tap on it. We can provide better visual feedback that loading is taking place and a transition is about to occur. And we don’t need to load the entire contents of the page — often, we can make do with a small bit of JSON because we already have the JavaScript for the page. This stuff gets fiendishly difficult to do by hand.

That’s what makes this stuff so easy to argue about. Everyone has good points. When we try to speak on behalf of the entire web, it’s tough for us all to agree. But the web is too big for broad, sweeping assertions.

Do people reach for React-powered SPAs too much? Probably, but that’s not without reason. There is innovation there that draws people in. The question is, how can we improve it?

From a front-of-the-front-end perspective, the fact that front-end frameworks like React encourage (even demand) that we write a front-end in components is compelling all by itself.

There is optimism and pessimism in both posts. The ending sentences of both are starkly different.

The post “The Modern Web” appeared first on CSS-Tricks.

The Fastest Google Fonts

Css Tricks - Fri, 05/22/2020 - 4:55am

When you use font-display: swap;, which Google Fonts does when you use the default &display=swap part of the URL, you’re already saying, “I’m cool with FOUT,” which is another way of saying web text is displayed right away, and when the web font is ready, “swap” to it.

There is already an async nature to what you are doing, so you might as well extend that async-ness to the rest of the font loading. Harry Roberts:

If you’re going to use font-display for your Google Fonts then it makes sense to asynchronously load the whole request chain.

Harry’s recommended snippet:

<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link rel="preload" as="style" href="$CSS&display=swap" />
<link rel="stylesheet" href="$CSS&display=swap" media="print" onload="this.media='all'" />

$CSS is the main part of the URL that Google Fonts gives you. The media="print" attribute keeps the stylesheet from blocking rendering, and the onload handler switches it to media="all" once it has loaded.

Looks like a ~20% render time savings with no change in how it looks/feels when loading. Other than that, it’s faster.

Direct Link to ArticlePermalink

The post The Fastest Google Fonts appeared first on CSS-Tricks.

A “new direction” in the struggle against rightward scrolling

Css Tricks - Thu, 05/21/2020 - 10:26am

You know those times you get a horizontal scrollbar when accidentally placing an element off the right edge of the browser window? It might be a menu that slides in or the like. Sometimes we use overflow-x: hidden; on the body to fix that, but that can sometimes wreck stuff like position: sticky;.

Well, you know how if you place an element off the left edge of a browser window, it doesn’t do that? That’s “data loss” and just how things work around here. It actually has to do with the direction of the page. If you were in a RTL situation, it would be the left edge of the browser window causing the overflow situation and the right edge where it doesn’t.
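A minimal sketch of that idea (hypothetical class names; the linked article’s exact implementation may differ): flip direction on a wrapper so its right edge becomes the “data loss” side, then flip the content back.

```css
/* The wrapper lays out right-to-left, so anything overflowing its
   right edge is simply cut off with no horizontal scrollbar.
   The inner element flips back so its content reads left-to-right. */
.wrapper {
  direction: rtl;
}
.wrapper > .content {
  direction: ltr;
}
```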

Emerson Loustau leverages that idea to solve a problem here. I’d be way too nervous messing with direction like this because I just don’t know what the side effects would be. But, hey, at least it doesn’t break position: sticky;.

Direct Link to ArticlePermalink

The post A “new direction” in the struggle against rightward scrolling appeared first on CSS-Tricks.

Flexbox-like “just put elements in a row” with CSS grid

Css Tricks - Thu, 05/21/2020 - 9:57am

It occurred to me while we were talking about flexbox and gap that one reason we sometimes reach for flexbox is to chuck some boxes in a row and space them out a little.
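For reference, a minimal sketch (hypothetical class name) of that flexbox pattern, now that gap applies to flex containers too:

```css
/* Boxes in a row with a little space between them, flexbox style.
   Note: gap support in flex containers was still rolling out in 2020. */
.flex-row {
  display: flex;
  gap: 1rem;
}
```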

My brain still reaches for flexbox in that situation, and with gap, it probably will continue to do so. It’s worth noting though that grid can do the same thing in its own special way.

Like this:

.grid {
  display: grid;
  gap: 1rem;
  grid-auto-flow: column;
}

They all look equal width there, but that’s only because there is no content in them. With content, you’ll see the boxes start pushing on each other based on the natural width of that content. If you need to exert some control, you can always set width / min-width / max-width on the elements that fall into those columns — or, set them with grid-template-columns but without setting the actual number of columns, then letting the min-content dictate the width.

.grid {
  display: grid;
  gap: 1rem;
  grid-auto-flow: column;
  grid-template-columns: repeat(auto-fit, minmax(min-content, 1fr));
}

Flexible grids are the coolest.

Another thought… if you only want the whole grid itself to be as wide as the content (i.e. less than 100% or auto, if need be) then be aware that display: inline-grid; is a thing.

The post Flexbox-like “just put elements in a row” with CSS grid appeared first on CSS-Tricks.

How to Make Taxonomy Pages With Gatsby and Sanity.io

Css Tricks - Thu, 05/21/2020 - 4:58am

In this tutorial, we’ll cover how to make taxonomy pages with Gatsby using structured content from Sanity.io. You will learn how to use Gatsby’s Node creation APIs to add fields to your content types in Gatsby’s GraphQL API. Specifically, we’re going to create category pages for Sanity’s blog starter.

That being said, there is nothing Sanity-specific about what we’re covering here. You’re able to do this regardless of which content source you may have. We’re just reaching for Sanity.io for the sake of demonstration.

Get up and running with the blog

If you want to follow this tutorial with your own Gatsby project, go ahead and skip to the section for creating a new page template in Gatsby. If not, head over to sanity.io/create and launch the Gatsby blog starter. It will put the code for Sanity Studio and the Gatsby front-end in your GitHub account and set up the deployment for both on Netlify. All the configuration, including example content, will be in place so that you can dive right into learning how to create taxonomy pages.

Once the project is initiated, make sure to clone the new repository from GitHub locally and install the dependencies:

git clone git@github.com:username/your-repository-name.git
cd your-repository-name
npm i

If you want to run both Sanity Studio (the CMS) and the Gatsby front-end locally, you can do so by running the command npm run dev in a terminal from the project root. You can also cd into the web folder and just run Gatsby with the same command.

You should also install the Sanity CLI and log in to your account from the terminal: npm i -g @sanity/cli && sanity login. This will give you tooling and useful commands to interact with Sanity projects. You can add the --help flag to get more information on its functionality and commands.

We will be doing some customization to the gatsby-node.js file. To see the result of the changes, restart Gatsby’s development server. This is done in most systems by hitting CTRL + C in the terminal and running npm run dev again.

Getting familiar with the content model

Look into the /studio/schemas/documents folder. There are schema files for our main content types: author, category, site settings, and posts. Each of the files exports a JavaScript object that defines the fields and properties of these content types. Inside of post.js is the field definition for categories:

{
  name: 'categories',
  type: 'array',
  title: 'Categories',
  of: [
    {
      type: 'reference',
      to: [{ type: 'category' }]
    }
  ]
},

This will create an array field with reference objects to category documents. Inside of the blog’s studio it will look like this:

An array field with references to category documents in the blog studio

Adding slugs to the category type

Head over to /studio/schemas/documents/category.js. There is a simple content model for a category that consists of a title and a description. Now that we’re creating dedicated pages for categories, it would be handy to have a slug field as well. We can define that in the schema like this:

// studio/schemas/documents/category.js
export default {
  name: 'category',
  type: 'document',
  title: 'Category',
  fields: [
    {
      name: 'title',
      type: 'string',
      title: 'Title'
    },
    {
      name: 'slug',
      type: 'slug',
      title: 'Slug',
      options: {
        // add a button to generate slug from the title field
        source: 'title'
      }
    },
    {
      name: 'description',
      type: 'text',
      title: 'Description'
    }
  ]
}

Now that we have changed the content model, we need to update the GraphQL schema definition as well. Do this by executing npm run graphql-deploy (alternatively: sanity graphql deploy) in the studio folder. You will get warnings about breaking changes, but since we are only adding a field, you can proceed without worry. If you want the field to be accessible in your studio on Netlify, check the changes into Git (with git add . && git commit -m "add slug field") and push them to your GitHub repository (git push origin master).

Now we should go through the categories and generate slugs for them. Remember to hit the publish button to make the changes accessible for Gatsby! And if you were running Gatsby’s development server, you’ll need to restart that too.

Quick sidenote on how the Sanity source plugin works

When starting Gatsby in development or building a website, the source plugin will first fetch the GraphQL schema definitions from Sanity’s deployed GraphQL API. The source plugin uses this to tell Gatsby which fields should be available, to prevent it from breaking if the content for certain fields happens to disappear. Then it will hit the project’s export endpoint, which streams all the accessible documents to Gatsby’s in-memory datastore.

In other words, the whole site is built with two requests. Running the development server will also set up a listener that pushes whatever changes come from Sanity to Gatsby in real-time, without doing additional API queries. If we give the source plugin a token with permission to read drafts, we’ll see the changes instantly. This can also be experienced with Gatsby Preview.

Adding a category page template in Gatsby

Now that we have the GraphQL schema definition and some content ready, we can dive into creating category page templates in Gatsby. We need to do two things:

  • Tell Gatsby to create pages for the category nodes (that is Gatsby’s term for “documents”).
  • Give Gatsby a template file to generate the HTML with the page data.

Begin by opening the /web/gatsby-node.js file. There will already be code here that’s used to create the blog post pages. We’ll largely leverage that exact code, but for categories. Let’s take it step-by-step:

Between the createBlogPostPages function and the line that starts with exports.createPages, we can add the following code. I’ve put in comments here to explain what’s going on:

// web/gatsby-node.js
// ...
async function createCategoryPages (graphql, actions) {
  // Get Gatsby's method for creating new pages
  const {createPage} = actions
  // Query Gatsby's GraphQL API for all the categories that come from Sanity
  // You can query this API on http://localhost:8000/___graphql
  const result = await graphql(`{
    allSanityCategory {
      nodes {
        slug {
          current
        }
        id
      }
    }
  }
  `)
  // If there are any errors in the query, cancel the build and tell us
  if (result.errors) throw result.errors

  // Let's gracefully handle if allSanityCategory is null
  const categoryNodes = (result.data.allSanityCategory || {}).nodes || []

  categoryNodes
    // Loop through the category nodes, but don't return anything
    .forEach((node) => {
      // Destructure the id and slug fields for each category
      const {id, slug = {}} = node
      // If there isn't a slug, we want to do nothing
      if (!slug) return
      // Make the URL with the current slug
      const path = `/categories/${slug.current}`
      // Create the page using the URL path and the template file, and pass down the id
      // that we can use to query for the right category in the template file
      createPage({
        path,
        component: require.resolve('./src/templates/category.js'),
        context: {id}
      })
    })
}

Last, this function is needed at the bottom of the file:

// /web/gatsby-node.js
// ...
exports.createPages = async ({graphql, actions}) => {
  await createBlogPostPages(graphql, actions)
  await createCategoryPages(graphql, actions) // <= add the function here
}

Now that we have the machinery to create the category page node in place, we need to add a template for how it actually should look in the browser. We’ll base it on the existing blog post template to get some consistent styling, but keep it fairly simple in the process.

// /web/src/templates/category.js
import React from 'react'
import {graphql} from 'gatsby'
import Container from '../components/container'
import GraphQLErrorList from '../components/graphql-error-list'
import SEO from '../components/seo'
import Layout from '../containers/layout'

export const query = graphql`
  query CategoryTemplateQuery($id: String!) {
    category: sanityCategory(id: {eq: $id}) {
      title
      description
    }
  }
`

const CategoryPostTemplate = props => {
  const {data = {}, errors} = props
  const {title, description} = data.category || {}

  return (
    <Layout>
      <Container>
        {errors && <GraphQLErrorList errors={errors} />}
        {!data.category && <p>No category data</p>}
        <SEO title={title} description={description} />
        <article>
          <h1>Category: {title}</h1>
          <p>{description}</p>
        </article>
      </Container>
    </Layout>
  )
}

export default CategoryPostTemplate

We are using the ID that was passed into the context in gatsby-node.js to query the category content. Then we use it to query the title and description fields that are on the category type. Make sure to restart with npm run dev after saving these changes, and head over to localhost:8000/categories/structured-content in the browser. The page should look something like this:

A barebones category page

Cool stuff! But it would be even cooler if we could actually see which posts belong to this category because, well, that’s kinda the point of having categories in the first place, right? Ideally, we should be able to query for a “posts” field on the category object.

Before we learn how to do that, we need to take a step back to understand how Sanity’s references work.

Querying Sanity’s references

Even though we’re only defining the references in one type, Sanity’s datastore will index them “bi-directionally.” That means creating a reference to the “Structured content” category document from a post lets Sanity know that the category has these incoming references and will keep you from deleting it as long as the reference exists (references can be set as “weak” to override this behavior). If we use GROQ, we can query categories and join posts that have them like this (see the query and result in action on groq.dev):

*[_type == "category"]{
  _id,
  _type,
  title,
  "posts": *[_type == "post" && references(^._id)]{
    title,
    slug
  }
}
// alternative: *[_type == "post" && ^._id in categories[]._ref]{

This outputs a data structure that lets us make a simple category post template:

[
  {
    "_id": "39d2ca7f-4862-4ab2-b902-0bf10f1d4c34",
    "_type": "category",
    "title": "Structured content",
    "posts": [
      {
        "title": "Exploration powered by structured content",
        "slug": {
          "_type": "slug",
          "current": "exploration-powered-by-structured-content"
        }
      },
      {
        "title": "My brand new blog powered by Sanity.io",
        "slug": {
          "_type": "slug",
          "current": "my-brand-new-blog-powered-by-sanity-io"
        }
      }
    ]
  },
  // ... more entries
]

That’s fine for GROQ, what about GraphQL?

Here’s the kicker: as of yet, this kind of query isn’t possible with Gatsby’s GraphQL API out of the box. But fear not! Gatsby has a powerful API for changing its GraphQL schema that lets us add fields.

Using createResolvers to edit Gatsby’s GraphQL API

Gatsby holds all the content in memory when it builds your site and exposes some APIs that let us tap into how it processes this information. Among these are the Node APIs. It’s probably good to clarify that when we talk about a “node” in Gatsby, it’s not to be confused with Node.js. The creators of Gatsby have borrowed “edges and nodes” from graph theory, where “edges” are the connections between the “nodes”, which are the “points” where the actual content is located. Since an edge is a connection between nodes, it can have a “next” and “previous” property.

The edges with next and previous, and the node with fields in GraphQL’s API explorer

The Node APIs are used by plugins first and foremost, but they can be used to customize how our GraphQL API should work as well. One of these APIs is called createResolvers. It’s fairly new and it lets us tap into how a type’s nodes are created so we can make queries that add data to them.

Let’s use it to add the following logic:

  • When creating the nodes, check for ones with the SanityCategory type.
  • If a node matches this type, create a new field called posts and set its type to SanityPost.
  • Then run a query that filters for all posts that list a category matching the current category’s ID.
  • If there are matching IDs, add the content of those post nodes to this field.

Add the following code to the /web/gatsby-node.js file, either below or above the code that’s already in there:

// /web/gatsby-node.js
// Notice the capitalized type names
exports.createResolvers = ({createResolvers}) => {
  const resolvers = {
    SanityCategory: {
      posts: {
        type: ['SanityPost'],
        resolve (source, args, context, info) {
          return context.nodeModel.runQuery({
            type: 'SanityPost',
            query: {
              filter: {
                categories: {
                  elemMatch: {
                    _id: {
                      eq: source._id
                    }
                  }
                }
              }
            }
          })
        }
      }
    }
  }
  createResolvers(resolvers)
}

Now, let’s restart Gatsby’s development server. We should be able to find a new field for posts inside of the sanityCategory and allSanityCategory types.

Adding the list of posts to the category template

Now that we have the data we need, we can return to our category page template (/web/src/templates/category.js) and add a list with links to the posts belonging to the category.

// /web/src/templates/category.js
import React from 'react'
import {graphql, Link} from 'gatsby'
import Container from '../components/container'
import GraphQLErrorList from '../components/graphql-error-list'
import SEO from '../components/seo'
import Layout from '../containers/layout'
// Import a function to build the blog URL
import {getBlogUrl} from '../lib/helpers'

// Add “posts” to the GraphQL query
export const query = graphql`
  query CategoryTemplateQuery($id: String!) {
    category: sanityCategory(id: {eq: $id}) {
      title
      description
      posts {
        _id
        title
        publishedAt
        slug {
          current
        }
      }
    }
  }
`

const CategoryPostTemplate = props => {
  const {data = {}, errors} = props
  // Destructure the new posts property from props
  const {title, description, posts} = data.category || {}

  return (
    <Layout>
      <Container>
        {errors && <GraphQLErrorList errors={errors} />}
        {!data.category && <p>No category data</p>}
        <SEO title={title} description={description} />
        <article>
          <h1>Category: {title}</h1>
          <p>{description}</p>
          {/* If there are any posts, add the heading, with the list of links to the posts */}
          {posts && (
            <React.Fragment>
              <h2>Posts</h2>
              <ul>
                {posts.map(post => (
                  <li key={post._id}>
                    <Link to={getBlogUrl(post.publishedAt, post.slug)}>{post.title}</Link>
                  </li>
                ))}
              </ul>
            </React.Fragment>
          )}
        </article>
      </Container>
    </Layout>
  )
}

export default CategoryPostTemplate

This code will produce this simple category page with a list of linked posts, just like we wanted!

Go make taxonomy pages!

We just completed the process of creating new page types with custom page templates in Gatsby. We covered one of Gatsby’s Node APIs called createResolvers and used it to add a new posts field to the category nodes.

This should give you what you need to make other types of taxonomy pages! Do you have multiple authors on your blog? Well, you can use the same logic to create author pages. The interesting thing with the GraphQL filter is that you can use it to go beyond the explicit relationship made with references. It can also be used to match other fields using regular expressions or string comparisons. It’s fairly flexible!

The post How to Make Taxonomy Pages With Gatsby and Sanity.io appeared first on CSS-Tricks.

Roll Your Own Comments With Gatsby and FaunaDB

Css Tricks - Thu, 05/21/2020 - 4:57am

If you haven’t used Gatsby before, have a read about why it’s fast in every way that matters, and if you haven’t used FaunaDB before, you’re in for a treat. If you’re looking to make your static sites full-blown Jamstack applications, this is the back-end solution for you!

This tutorial will only focus on the operations you need to use FaunaDB to power a comment system for a Gatsby blog. The app comes complete with input fields that allow users to comment on your posts and an admin area for you to approve or delete comments before they appear on each post. Authentication is provided by Netlify’s Identity widget and it’s all sewn together using Netlify serverless functions and an Apollo/GraphQL API that pushes data up to a FaunaDB database collection.

I chose FaunaDB for the database for a number of reasons. Firstly, there’s a very generous free tier, perfect for those small projects that need a back end. There’s also native support for GraphQL queries, and it has some really powerful indexing features!

…and to quote the creators;

No matter which stack you use, or where you’re deploying your app, FaunaDB gives you effortless, low-latency and reliable access to your data via APIs familiar to you

You can see the finished comments app here.

Get Started

To get started clone the repo at https://github.com/PaulieScanlon/fauna-gatsby-comments

or:

git clone https://github.com/PaulieScanlon/fauna-gatsby-comments.git

Then install all the dependencies:

npm install

Also cd into functions/apollo-graphql and install the dependencies for the Netlify function:

npm install

This is a separate package with its own dependencies; you’ll be using it later.

We also need to install the Netlify CLI, as you’ll use it later:

npm install netlify-cli -g

Now let’s add three new files that aren’t part of the repo.

At the root of your project, create a .env, a .env.development and a .env.production file.

Add the following to .env:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =

Add the following to .env.development:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = true
GATSBY_ADMIN_ID =

Add the following to .env.production:

GATSBY_FAUNA_DB =
GATSBY_FAUNA_COLLECTION =
GATSBY_SHOW_SIGN_UP = false
GATSBY_ADMIN_ID =

You’ll come back to these later, but in case you’re wondering:

  • GATSBY_FAUNA_DB is the FaunaDB secret key for your database
  • GATSBY_FAUNA_COLLECTION is the FaunaDB collection name
  • GATSBY_SHOW_SIGN_UP is used to hide the Sign up button when the site is in production
  • GATSBY_ADMIN_ID is a user id that Netlify Identity will generate for you

If you’re the curious type you can get a taster of the app by running gatsby develop or yarn develop and then navigate to http://localhost:8000 in your browser.

FaunaDB

So let’s get cracking! But before we write any operations, head over to https://fauna.com/ and sign up.

Database and Collection
  • Create a new database by clicking NEW DATABASE
  • Name the database: I’ve called the demo database fauna-gatsby-comments
  • Create a new Collection by clicking NEW COLLECTION
  • Name the collection: I’ve called the demo collection demo-blog-comments
Server Key

Now you’ll need to set up a server key. Go to SECURITY:

  • Create a new key by clicking NEW KEY
  • Select the database you want the key to apply to, fauna-gatsby-comments for example
  • Set the Role as Admin
  • Name the server key: I’ve called the demo key demo-blog-server-key
Environment Variables Pt. 1

Copy the server key and add it to GATSBY_FAUNA_DB in .env.development, .env.production and .env.

You’ll also need to add the name of the collection to GATSBY_FAUNA_COLLECTION in .env.development, .env.production and .env.

Adding these values to .env is just so you can test your development FaunaDB operations, which you’ll do next.

Let’s start by creating a comment, so head back to boop.js:

// boop.js
...
// CREATE COMMENT
createComment: async () => {
  const slug = "/posts/some-post"
  const name = "some name"
  const comment = "some comment"
  const results = await client.query(
    q.Create(q.Collection(COLLECTION_NAME), {
      data: {
        isApproved: false,
        slug: slug,
        date: new Date().toString(),
        name: name,
        comment: comment,
      },
    })
  )
  console.log(JSON.stringify(results, null, 2))
  return {
    commentId: results.ref.id,
  }
},
...

The breakdown of this function is as follows:

  • q is the instance of faunadb.query
  • Create is the FaunaDB method to create an entry within a collection
  • Collection is the area in the database to store the data. It takes the name of the collection as the first argument and a data object as the second.

The second argument is the shape of the data you need to drive the application’s comment system.

For now you’re going to hard-code slug, name and comment, but in the final app these values are captured by the input form on the posts page and passed in via args.

The breakdown for the shape is as follows;

  • isApproved is the status of the comment and by default it’s false until we approve it in the Admin page
  • slug is the path to the post where the comment was written
  • date is the time stamp the comment was written
  • name is the name the user entered in the comments form
  • comment is the comment the user entered in the comments form

When you (or a user) create a comment, you’re not really interested in dealing with the response because, as far as the user is concerned, all they’ll see is either a success or an error message.

After a user has posted a comment, it will go into your Admin queue until you approve it, but if you did want to return something, you could surface it in the UI by returning it from the createComment function.

Create a comment

If you’ve hard-coded a slug, name and comment, you can now run the following in your CLI:

node boop createComment

If everything worked correctly you should see a log in your terminal of the new comment.

{
  "ref": {
    "@ref": {
      "id": "263413122555970050",
      "collection": {
        "@ref": {
          "id": "demo-blog-comments",
          "collection": {
            "@ref": {
              "id": "collections"
            }
          }
        }
      }
    }
  },
  "ts": 1587469179600000,
  "data": {
    "isApproved": false,
    "slug": "/posts/some-post",
    "date": "Tue Apr 21 2020 12:39:39 GMT+0100 (British Summer Time)",
    "name": "some name",
    "comment": "some comment"
  }
}
{ commentId: '263413122555970050' }

If you head over to COLLECTIONS in FaunaDB you should see your new entry in the collection.

You’ll need to create a few more comments while in development so change the hard-coded values for name and comment and run the following again.

node boop createComment

Do this a few times so you end up with at least three new comments stored in the database, you’ll use these in a moment.

Delete comment by id

Now that you can create comments you’ll also need to be able to delete a comment.

By adding the commentId of one of the comments you created above, you can delete it from the database. The commentId is the id in the ref.@ref object.

Again you’re not really concerned with the return value here but if you wanted to surface this in the UI you could do so by returning something from the deleteCommentById function.

// boop.js
...
// DELETE COMMENT
deleteCommentById: async () => {
  const commentId = "263413122555970050";
  const results = await client.query(
    q.Delete(q.Ref(q.Collection(COLLECTION_NAME), commentId))
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    commentId: results.ref.id,
  };
},
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Delete is the FaunaDB delete method to delete entries from a collection
  • Ref is the unique FaunaDB ref used to identify the entry
  • Collection is the area in the database where the data is stored

If you’ve hard coded a commentId you can now run the following in your CLI:

node boop deleteCommentById

If you head back over to COLLECTIONS in FaunaDB, you should see that the entry no longer exists in the collection.

Indexes

Next you’re going to create an INDEX in FaunaDB.

An INDEX allows you to query the database with a specific term and define a specific data shape to return.

When working with GraphQL and/or TypeScript this is really powerful because you can use FaunaDB indexes to return only the data you need, in a predictable shape. This makes typing responses in GraphQL and/or TypeScript a dream… I’ve worked on a number of applications that just return a massive object of useless values, which will inevitably cause bugs in your app. Blurg!

  • Go to INDEXES and click NEW INDEX
  • Name the index: I’ve called this one get-all-comments
  • Set the source collection to the name of the collection you setup earlier

As mentioned above when you query the database using this index you can tell FaunaDB which parts of the entry you want to return.

You can do this by adding “values”, but be careful to enter the values exactly as they appear below because (on the FaunaDB free tier) you can’t amend them after you’ve created them. If there’s a mistake, you’ll have to delete the index and start again… bummer!

The values you need to add are as follows:

  • ref
  • data.isApproved
  • data.slug
  • data.date
  • data.name
  • data.comment

After you’ve added all the values you can click SAVE.

Get all comments

// boop.js
...
// GET ALL COMMENTS
getAllComments: async () => {
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-all-comments")))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},
...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Paginate paginates the responses
  • Match returns matched results
  • Index is the name of the Index you just created

The shape of the returned result here is an array of the same shape you defined in the Index “values”.
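Because each entry comes back as a plain array in the same order as the index “values”, the map at the end of getAllComments just re-labels array positions. A minimal sketch of that reshaping, with made-up data and no FaunaDB connection:

```javascript
// Each index result is a tuple in the same order as the index "values":
// [ref, isApproved, slug, date, name, comment]. Destructuring re-labels
// the positions into a named object. All data below is made up.
const fakeRef = { id: "263413122555970050" }; // stand-in for a FaunaDB Ref
const results = {
  data: [
    [fakeRef, false, "/posts/some-post", "2020-05-27", "Paulie", "Nice post!"],
  ],
};

const comments = results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
  commentId: ref.id,
  isApproved,
  slug,
  date,
  name,
  comment,
}));

console.log(comments[0].slug); // "/posts/some-post"
```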

If you run the following you should see the list of all the comments you created earlier:

node boop getAllComments

Get comments by slug

You’re going to take a similar approach as above but this time create a new Index that allows you to query FaunaDB in a different way. The key difference here is that when you get-comments-by-slug you’ll need to tell FaunaDB about this specific term and you can do this by adding data.slug to the Terms field.

  • Go to INDEXES and click NEW INDEX
  • Name the index, I’ve called this one get-comments-by-slug
  • Set the source collection to the name of the collection you set up earlier
  • Add data.slug in the terms field

The values you need to add are as follows:

  • ref
  • data.isApproved
  • data.slug
  • data.date
  • data.name
  • data.comment

After you’ve added all the values you can click SAVE.

// boop.js

...

// GET COMMENT BY SLUG
getCommentsBySlug: async () => {
  const slug = "/posts/some-post";
  const results = await client.query(
    q.Paginate(q.Match(q.Index("get-comments-by-slug"), slug))
  );
  console.log(JSON.stringify(results, null, 2));
  return results.data.map(([ref, isApproved, slug, date, name, comment]) => ({
    commentId: ref.id,
    isApproved,
    slug,
    date,
    name,
    comment,
  }));
},

...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Paginate paginates the responses
  • Match returns matched results
  • Index is the name of the Index you just created

The shape of the returned result here is an array of the same shape you defined in the Index “values”. You can add these values in the same way you did above, but this time be sure to also add a value for the Terms field. Again, enter these with care.

If you run the following you should see the list of all the comments you created earlier but for a specific slug:

node boop getCommentsBySlug

Approve comment by id

When you create a comment you manually set the isApproved value to false. This prevents the comment from being shown in the app until you approve it.
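Display-side, that flag boils down to a simple filter. A tiny sketch with made-up comments (not code from the repo):

```javascript
// Only comments an admin has approved should reach the page.
// The comment data here is made up for illustration.
const comments = [
  { name: "Paulie", comment: "First!", isApproved: true },
  { name: "Anon", comment: "Pending review", isApproved: false },
];

const visible = comments.filter((c) => c.isApproved);
console.log(visible.map((c) => c.name)); // [ 'Paulie' ]
```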

You’ll now need to create a function to do this, and again you’ll need to hard-code a commentId. Use a commentId from one of the comments you created earlier:

// boop.js

...

// APPROVE COMMENT BY ID
approveCommentById: async () => {
  const commentId = '263413122555970050'
  const results = await client.query(
    q.Update(q.Ref(q.Collection(COLLECTION_NAME), commentId), {
      data: {
        isApproved: true,
      },
    })
  );
  console.log(JSON.stringify(results, null, 2));
  return {
    isApproved: results.isApproved,
  };
},

...

The breakdown of this function is as follows:

  • client is the FaunaDB client instance
  • query is a method to get data from FaunaDB
  • q is the instance of faunadb.query
  • Update is the FaunaDB method to update an entry
  • Ref is the unique FaunaDB ref used to identify the entry
  • Collection is the area in the database where the data is stored

If you’ve hard-coded a commentId, you can now run the following in your CLI:

node boop approveCommentById

If you run getCommentsBySlug again, you should see that the isApproved status of the entry whose commentId you hard-coded has changed to true.

node boop getCommentsBySlug

These are all the operations required to manage the data from the app.

In the repo, if you have a look at apollo-graphql.js, which can be found in functions/apollo-graphql, you’ll see all of the above operations. As mentioned before, the hard-coded values are replaced by args, which are the values passed in from various parts of the app.

Netlify

Assuming you’ve completed the Netlify sign up process, or already have an account with Netlify, you can now push the demo app to your GitHub account.

To do this you’ll need to have initialized git locally, added a remote, and pushed the demo repo upstream before proceeding.

You should now be able to link the repo up to Netlify’s Continuous Deployment.

If you click the “New site from Git” button on the Netlify dashboard you can authorize access to your GitHub account and select the gatsby-fauna-comments repo to enable Netlify’s Continuous Deployment. You’ll need to have deployed at least once so that you have a public URL for your app.

The URL will look something like this: https://ecstatic-lewin-b1bd17.netlify.app. Feel free to rename it, but make a note of the URL as you’ll need it for the Netlify Identity step mentioned shortly.

Environment Variables Pt. 2

In a previous step you added the FaunaDB database secret key and collection name to your .env file(s). You’ll also need to add the same to Netlify’s Environment variables.

  • Navigate to Settings from the Netlify navigation
  • Click on Build and deploy
  • Either select Environment or scroll down until you see Environment variables
  • Click on Edit variables

Proceed to add the following:

GATSBY_SHOW_SIGN_UP = false
GATSBY_FAUNA_DB = your FaunaDB secret key
GATSBY_FAUNA_COLLECTION = your FaunaDB collection name

While you’re here you’ll also need to amend the Sensitive variable policy: select Deploy without restrictions.

Netlify Identity Widget

I mentioned before that when a comment is created the isApproved value is set to false. This prevents comments from appearing on blog posts until you (the admin) have approved them. In order to become the admin you’ll need to create an identity.

You can achieve this by using the Netlify Identity Widget.

If you’ve completed the Continuous Deployment step above you can navigate to the Identity page from the Netlify navigation.

You won’t see any users in here just yet, so let’s use the app to connect the dots. Before you do that, make sure you click Enable Identity.

Before you continue I just want to point out that you’ll be using netlify dev instead of gatsby develop or yarn develop from now on. This is because you’ll be using some “special” Netlify methods in the app, and starting the server using netlify dev is required to spin up the various processes you’ll be using.

  • Spin up the app using netlify dev
  • Navigate to http://localhost:8888/admin/
  • Click the Sign Up button in the header

You will also need to point the Netlify Identity widget at your newly deployed app URL. This is the URL I mentioned you’d need to make a note of earlier; if you’ve not renamed your app it’ll look something like this: https://ecstatic-lewin-b1bd17.netlify.app/. There will be a prompt in the pop-up window to Set site’s URL.

You can now complete the necessary sign up steps.

After signing up you’ll get an email asking you to confirm your identity. Once that’s completed, refresh the Identity page in Netlify and you should see yourself as a user.

It’s now login time, but before you do this find Identity.js in src/components and temporarily un-comment the console.log() on line 14. This will log the Netlify Identity user object to the console.

  • Restart your local server
  • Spin up the app again using netlify dev
  • Click the Login button in the header

If this all works you should be able to see a console log for netlifyIdentity.currentUser: find the id key and copy the value.

Set this as the value for GATSBY_ADMIN_ID = in both .env.production and .env.development

You can now safely remove the console.log() on line 14 in Identity.js or just comment it out again.

GATSBY_ADMIN_ID = your Netlify Identity user id

…and finally

  • Restart your local server
  • Spin up the app again using netlify dev

Now you should be able to login as “Admin”… hooray!

Navigate to http://localhost:8888/admin/ and Login.

It’s important to note here that you’ll be using localhost:8888 for development now, and NOT localhost:8000, which is more common with Gatsby development.

Before you test this in the deployed environment make sure you go back to Netlify’s Environment variables and add your Netlify Identity user id to the Environment variables!

  • Navigate to Settings from the Netlify navigation
  • Click on Build and deploy
  • Either select Environment or scroll down until you see Environment variables
  • Click on Edit variables

Proceed to add the following:

GATSBY_ADMIN_ID = your Netlify Identity user id

If you have a play around with the app and enter a few comments on each of the posts, then navigate back to the Admin page, you can choose to either approve or delete the comments.

Naturally only approved comments will be displayed on any given post and deleted ones are gone forever.

If you’ve used this tutorial for your project I’d love to hear from you at @pauliescanlon.

By Paulie Scanlon (@pauliescanlon), Front End React UI Developer / UX Engineer: After all is said and done, structure + order = fun.

Visit Paulie’s Blog at: www.paulie.dev

The post Roll Your Own Comments With Gatsby and FaunaDB appeared first on CSS-Tricks.

Radio Buttons Are Like Selects; Checkboxes Are Like Multiple Selects

Css Tricks - Wed, 05/20/2020 - 4:41am

I was reading Anna Kaley’s “Listboxes vs. Dropdown Lists” post the other day. It’s a fairly straightforward comparison between different UI implementations of selecting options. There is lots of good advice there. Classics like using radio buttons (single select) or checkboxes (multiple select) if you’re showing five or fewer options, and the other choices available as the number of options grows from there.

One thing that isn’t talked about is how you implement these things. I imagine that’s somewhat on purpose as the point is to talk UX, not tech. But how you implement them plays a huge part in UX. In web design and development circles, the conversation about these things usually involves whether you can pull these things off with native controls, or if you need to rebuild them from scratch. If you can use native controls, you often should, because there are tons of UX that you get for free that might otherwise be lost or forgotten when you rebuild — like how everything works via the keyboard.

The reason people choose “rebuild” is often for styling reasons, but that’s changing slowly over time. We’ve got lots of control over radios and checkboxes now. We can style the outside of selects pretty well and even the inside with trickery.

But even without custom styling, we still have some UI options. If you need to select one option from many, we’ve got <input type="radio"> buttons, but data and end-result-wise, that’s the same as a <select>. If you need to select multiple options, we’ve got <input type="checkbox">, but that’s data and end-result-wise the same as <select multiple>.


You pick based on the room you have available and the UX of whatever you’re building.

The post Radio Buttons Are Like Selects; Checkboxes Are Like Multiple Selects appeared first on CSS-Tricks.

WordPress Block Transforms

Css Tricks - Tue, 05/19/2020 - 1:52pm

This has been the year of Gutenberg for us here at CSS-Tricks. In fact, that’s a goal we set at the end of last year. We’re much further along than I thought we’d be: we’re authoring all new content in the block editor¹ and have now enabled the block editor for all content. That means when we open most old posts, we see all the content in the “Classic” block. It looks like this:

A post written on CSS-Tricks before we were using the block editor.

The entire contents of the post are in a single block, so that’s not exactly making any use of the block editor. It’s still “visual,” like the block editor, but it’s more like the old visual editor that used TinyMCE. I never used that, as it kinda forcefully mangled HTML in a way I didn’t like.

This is the #1 thing I was worried about

Transforming a Classic block into new blocks is as trivial as selecting the Classic block and choosing the “Convert to Blocks” option.

Select the option and the one block becomes many blocks.

How does the block editor handle block-izing old content, when we tell it to do that from the “Convert to Blocks” option? What if it totally screws up content during the conversion? Will we ever be able to switch?

The answer: it does a pretty darn good job. But… there are still issues. Not “bugs” but situations where we have custom HTML in our old content and it doesn’t know what to do with it — let alone how to convert it into exactly the blocks we wish it would. There is a way!

Basic Block Transforms

That’s where this idea of “Block Transforms” comes in. All (well, most?) native blocks have “to” and “from” transformations. You’re probably already familiar with how it manifests in the UI. Like a paragraph can transform “to” a quote and vice versa. Here’s a super meta screenshot of this very paragraph:

Those transforms aren’t magic; they are very explicitly coded. When you register a block, you specify the transforms. Say you were registering your own custom code block. You’d want to make sure that you could transform it…

  • From and to the default built-in code block, and probably a handful of others that might be useful.
  • Back to the built-in code block.

Which might look like:

registerBlockType("my/code-block", {
  title: __("My Code Block"),
  ...
  transforms: {
    from: [
      {
        type: "block",
        priority: 7,
        blocks: ["core/code", "core/paragraph", "core/preformatted"],
        transform: function (attributes) {
          return createBlock("my/code-block", {
            content: attributes.content,
          });
        },
      },
    ],
    to: [
      {
        type: "block",
        blocks: ["core/code"],
        transform: ({ content }) => createBlock("core/code", { content }),
      },
    ],
    ...

Those are transforms to and from other blocks. Fortunately, this is a pretty simple block where we’re just shuffling the content around. More complex blocks might need to pass around more data, but I haven’t had to deal with that yet.
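Stripped of the registration plumbing, a block-to-block transform is just a function from the old block’s attributes to a new block. A minimal sketch, with a stubbed createBlock standing in for the real one from @wordpress/blocks:

```javascript
// Stub of createBlock from @wordpress/blocks — just enough to show the
// shape: a transform receives the old block's attributes and returns a
// new block of the target type.
function createBlock(name, attributes) {
  return { name, attributes };
}

// The "from core/code" transform above boils down to copying content across:
const transform = (attributes) =>
  createBlock("my/code-block", { content: attributes.content });

const result = transform({ content: "let html = `<div>cool</div>`;" });
console.log(result.name); // "my/code-block"
```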

The more magical stuff: Block Transforms from raw code

Here’s the moment of truth for old content:

The “Convert to Blocks” option.

In this situation, blocks are being created not from other blocks, but from raw code. Quite literally, the HTML is being looked at and choices are being made about what blocks to make from chunks of that HTML. This is where it’s amazing the block editor does such a good job with the choices, and also where things can go wrong and it can fail, make wrong block choices, or mangle content.

In our old content, a block of code (a super very important thing) in a post would look like this:

<pre rel="JavaScript"><code class="language-javascript" markup="tt"> let html = `<div>cool</div>`; </code></pre>

Sometimes the block conversion would do OK on those, turning it into a native code block. But there were a number of problems:

  1. I don’t want a native code block. I want that to be transformed into our own new code block (blogged about that here).
  2. I need some of the information in those attributes to inform settings on the new block, like what kind of code it is.
  3. The HTML in our old code blocks was not escaped and I need it to not choke on that.

I don’t have all the answers here, as this is an evolving process, but I do have some block transforms in place now that are working pretty well. Here’s what a “raw” transform (as opposed to a “block” transform) looks like:

registerBlockType("my/code-block", {
  title: __("My Code Block"),
  // ...
  transforms: {
    from: [
      {
        type: "block",
        priority: 7,
        // ...
      },
      {
        type: "raw",
        priority: 8,
        isMatch: (node) =>
          node.nodeName === "PRE" &&
          node.children.length === 1 &&
          node.firstChild.nodeName === "CODE",
        transform: function (node) {
          let pre = node;
          let code = node.querySelector("code");
          let codeType = "html";
          if (pre.classList.contains("language-css")) {
            codeType = "css";
          }
          if (pre.getAttribute("rel") === "CSS") {
            codeType = "css";
          }
          if (pre.classList.contains("language-javascript")) {
            codeType = "javascript";
          }
          if (code.classList.contains("language-javascript")) {
            codeType = "javascript";
          }
          // ... other data wrangling...
          return createBlock("csstricks/code-block", {
            content: code.innerHTML,
            codeType: codeType,
          });
        },
      },
    ],
    to: [
      // ...
    ],
    // ...
}

That isMatch function runs on every node in the HTML it finds, so this is the big opportunity to return true from that in the special situations you need to. Note in the code above that I’m specifically looking for HTML that looks like <pre ...><code ...>. When that matches, the transform runs, and I can return a createBlock call that passes in data and content I extract from the node with JavaScript.
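Because isMatch only inspects a few DOM properties, the logic can be exercised against hand-built objects shaped like the nodes the editor passes in. A sketch (the stand-in nodes are made up; no real DOM required):

```javascript
// The isMatch predicate from the raw transform: it only looks at
// nodeName, children.length, and firstChild.nodeName.
const isMatch = (node) =>
  node.nodeName === "PRE" &&
  node.children.length === 1 &&
  node.firstChild.nodeName === "CODE";

// Hand-built stand-ins for DOM nodes:
const codeChild = { nodeName: "CODE" };
const preWithCode = { nodeName: "PRE", children: [codeChild], firstChild: codeChild };
const plainParagraph = { nodeName: "P", children: [], firstChild: null };

console.log(isMatch(preWithCode)); // true
console.log(isMatch(plainParagraph)); // false
```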

Another example: Pasting a URL

“Raw” transforms don’t only happen when you “Convert to Blocks.” They happen when you paste content into the block editor too. You’ve probably experienced this before. Say you have copied some table markup from somewhere and paste it into the block editor — it will probably paste as a table. A YouTube URL might paste into an embed. This kind of thing is why copy/pasting from Word documents and the like tends to work so well with the block editor.

Say you want some special behavior when a certain type of URL is pasted into the editor. This was the situation I was in with our custom CodePen Embed block. I wanted it so if you pasted a codepen.io URL, it would use this custom block, instead of the default embed.

This is a “from” transform that looks like this:

{
  type: "raw",
  priority: 8, // higher number to beat out default
  isMatch: (node) =>
    node.nodeName === "P" &&
    node.innerText.startsWith("https://codepen.io/"),
  transform: function (node) {
    return createBlock("cp/codepen-gutenberg-embed-block", {
      penURL: node.innerText,
      penID: getPenID(node.innerText), // helper function
    });
  },
}

So…

Is it messy? A little. But it’s as powerful as you need it to be. If you have an old site with lots of bespoke HTML and shortcodes and stuff, then getting into block transforms is the only ticket out.

I’m glad I went to WPBlockTalk and caught K. Adam White’s talk on shortcodes because there was just one slide that clued me into that this was even possible. There is a little bit of documentation on it.

One thing I’d like to figure out is if it’s possible to run these transforms on all old content in the database. Seems a little scary, but also like it might be a good idea in some situations. Once I get my transformations really solid, I could see doing that so any old content is ready to go in the block editor when opening it up. I just have no idea how to go about it.

I’m glad to be somewhat on top of this though, as I friggin love the block editor right now. It’s a pleasure to write in and build content with it. I like what Justin Tadlock said:

The block system is not going anywhere. WordPress has moved beyond the point where we should consider the block editor as a separate entity. It is an integral part of WordPress and will eventually touch more and more areas outside of the editing screen.

It’s here to stay. Embracing the block editor and bending it to our will is key.

  1. What are we calling it anyway? “Gutenberg” doesn’t seem right anymore. Feels like that will fade away, even though the development of it still happens in the Gutenberg plugin. I think I’ll just call it “the block editor” unless specifically referring to that plugin.

The post WordPress Block Transforms appeared first on CSS-Tricks.

How to Build a Chrome Extension

Css Tricks - Tue, 05/19/2020 - 4:38am

I made a Chrome extension this weekend because I found I was doing the same task over and over and wanted to automate it. Plus, I’m a nerd living through a pandemic, so I spend my pent-up energy building things. I’ve made a few Chrome Extensions over the years, hope this post helps you get going, too. Let’s get started!

Create the manifest

The first step is creating a manifest.json file in a project folder. This serves a similar purpose to a package.json: it provides the Chrome Web Store with critical information about the project, including the name, version, the required permissions, and so forth. Here’s an example:

{
  "manifest_version": 2,
  "name": "Sample Name",
  "version": "1.0.0",
  "description": "This is a sample description",
  "short_name": "Short Sample Name",
  "permissions": ["activeTab", "declarativeContent", "storage", "<all_urls>"],
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "css": ["background.css"],
      "js": ["background.js"]
    }
  ],
  "browser_action": {
    "default_title": "Does a thing when you do a thing",
    "default_popup": "popup.html",
    "default_icon": {
      "16": "icons/icon16.png",
      "32": "icons/icon32.png"
    }
  }
}

You might notice a few things. First, the names and descriptions can be anything you’d like.

The permissions depend on what the extension needs to do. We have ["activeTab", "declarativeContent", "storage", "<all_urls>"] in this example because this particular extension needs information about the active tab, needs to change the page content, needs to access localStorage, and needs to be active on all sites. If it only needs to be active on one site at a time, we can remove the last index of that array.

A list of all of the permissions and what they mean can be found in Chrome’s extension docs.

"content_scripts": [
  {
    "matches": ["<all_urls>"],
    "css": ["background.css"],
    "js": ["background.js"]
  }
],

The content_scripts section sets the sites where the extension should be active. If you want a single site, like Twitter for example, you would say ["https://twitter.com/*"]. The CSS and JavaScript files are everything needed for extensions. For instance, my productive Twitter extension uses these files to override Twitter’s default appearance.
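For instance, scoping the same block to a single site looks like this (a sketch of the Twitter-only variant mentioned above, not taken from the demo extension):

```json
"content_scripts": [
  {
    "matches": ["https://twitter.com/*"],
    "css": ["background.css"],
    "js": ["background.js"]
  }
],
```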

"browser_action": {
  "default_title": "Does a thing when you do a thing",
  "default_popup": "popup.html",
  "default_icon": {
    "16": "icons/icon16.png",
    "32": "icons/icon32.png"
  }
}

There are things in browser_action that are also optional. For example, if the extension doesn’t need a popup for its functionality, then both the default_title and default_popup can be removed. In that case, all that’s needed is the icon for the extension. If the extension only works on some sites, then Chrome will grey out the icon when it’s inactive.

Debugging

Once the manifest, CSS and JavaScript files are ready, head over to chrome://extensions/ from the browser’s address bar and enable developer mode. That activates the “Load unpacked” button to add the extension files. It’s also possible to toggle whether or not the developer version of the extension is active.

I would highly recommend starting a GitHub repository to version control the files at this point. It’s a good way to save the work.

The extension needs to be reloaded from this interface when it is updated. A little refresh icon will display on the screen. Also, if the extension has any errors during development, it will show an error button with a stack trace and more info here as well.

Popup functionality

If the extension needs to make use of a popup that comes off the extension icon, it’s thankfully fairly straightforward. After designating the name of the file with browser_action in the manifest file, a page can be built with whatever HTML and CSS you’d like to include, including images (I tend to use inline SVG).

We’ll probably want to add some functionality to a popup. That may take some JavaScript, so make sure the JavaScript file is designated in the manifest file and is linked up in your popup file as well, like this: <script src="background.js"></script>

In that file, start by creating functionality and we’ll have access to the popup DOM like this:

document.addEventListener("DOMContentLoaded", () => {
  var button = document.getElementById("submit")
  button.addEventListener("click", (e) => {
    console.log(e)
  })
})

If we create a button in the popup.html file, assign it an ID called submit, and then return a console log, you might notice that nothing is actually logged in the console. That’s because we’re in a different context, meaning we’ll need to right-click on the popup and open up a different set of DevTools.

We now have access to logging and debugging! Keep in mind, though, that if anything is set in localStorage, then it will only exist in the extension’s DevTools localStorage; not the user’s browser localStorage. (This bit me the first time I tried it!)

Running scripts outside the extension

This is all fine and good, but say we want to run a script that has access to information on the current tab? Here are a couple of ways to do this. I would typically call a separate function from inside the DOMContentLoaded event listener:

Example 1: Activate a file

function exampleFunction() {
  chrome.tabs.executeScript({ file: "content.js" })
}

Example 2: Execute just a bit of code

This way is great if there’s only a small bit of code to run. However, it quickly gets tough to work with since it requires passing everything as a string or template literal.

function exampleFunction() {
  chrome.tabs.executeScript({ code: `console.log('hi there')` })
}

Example 3: Activate a file and pass a parameter

Remember, the extension and tab are operating in different contexts. That makes passing parameters between them a not-so-trivial task. What we’ll do here is nest the first two examples to pass a bit of code into the second file. I will store everything I need in a single option, but we’ll have to stringify the object for that to work properly.

function exampleFunction(options) {
  chrome.tabs.executeScript(
    { code: "var options = " + JSON.stringify(options) },
    function() {
      chrome.tabs.executeScript({ file: "content.js" })
    }
  )
}

Icons

Even though the manifest file only defines two icons, we need two more to officially submit the extension to the Chrome Web Store: one that’s 128px square, and one that I call icon128_proper.png, which is also 128px, but has a little padding inside it between the edge of the image and the icon.

Keep in mind that whatever icon is used needs to look good both in light mode and dark mode for the browser. I usually find my icons on the Noun Project.

Submitting to the Chrome Web Store

Now we get to head over to the Chrome Web Store developer console to submit the extension! Click the “New Item” button, then drag and drop the zipped project file into the uploader.

From there, Chrome will ask a few questions about the extension, request information about the permissions requested in the extension and why they’re needed. Fair warning: requesting “activeTab” or “tabs” permissions will require a longer review to make sure the code isn’t doing anything abusive.

That’s it! This should get you all set up and on your way to building a Chrome browser extension! 🎉

The post How to Build a Chrome Extension appeared first on CSS-Tricks.

Using BugHerd to Track Visual Feedback on Websites

Css Tricks - Tue, 05/19/2020 - 3:32am

BugHerd is about collecting visual feedback for websites.

If you’re like me, you’re constantly looking at your own websites and you’re constantly critiquing them. I think that’s healthy. Nothing gets better if you look at your own work and consider it perfectly finished. This is where BugHerd shines. With BugHerd, anytime you have one of those little “uh oh this area is a little rough” moments while looking at your site, you can log it to be dealt with.

Let’s take a look at a workflow like that. I’m going to assume you’ve signed up for a BugHerd account (if not, grab a free trial here) and either installed the script on your site or have installed the browser extension and are using that.

I’ve done that for this very site. So now I’m looking at a page like our Archives Page, and I spot some stuff that is a little off.

I’ve taken a screenshot and circled the things that I think are visually off:

  1. The “Top Tags” and dropdown arrow are pretty far separated with nothing much connecting them. Maybe dropdowns like that should have a background or border to make that more clear.
  2. There is a weird shadow in the middle of the bottom line.

With BugHerd, I can act upon that stuff immediately. Rather than some janky workflow involving manual screenshots and opening tickets on some other unrelated website, I can do it right from the site itself.

  1. I open the BugHerd sidebar
  2. I click the green + button
  3. Select the element around where I want to give the visual feedback
  4. Enter the details of the bug

Their help video does a great job of showing this.

Here’s me logging one of those bugs I found:

Now, the BugHerd website becomes my dashboard for dealing with visual bugs. This unlocks a continual cycle of polish, and that is how great websites get great!

Note the kanban board setup, which is always my preferred way to work on any project. Cards are things that need to be worked on and there are columns for cards that aren’t started, started, and finished. Perhaps your team works another way though? Maybe you have a few more columns you generally kanban with, or you name them in a specific way? That’s totally customizable in BugHerd.

I love that BugHerd itself is customizable, but at a higher level, the entire workflow is customizable and that’s even better.

  • I can set up BugHerd just for myself and use it for visual improvement work on my own projects
  • I can set up BugHerd for just the design team and we can use it among ourselves to track visual issues and get them fixed.
  • I can set up BugHerd for the entire company, so everyone feels empowered to call out visual rough spots.
  • I can set up BugHerd for clients, if I’m a freelancer or agency worker, so that the clients themselves can use it to report visual feedback.
  • I can open up BugHerd wide open so that guests of these websites can use it to report visual problems.

Check out this example of a design team with core members and guests and their preferred workflow setup:

It’s hard to imagine a better dedicated tool than BugHerd for visual feedback.

The post Using BugHerd to Track Visual Feedback on Websites appeared first on CSS-Tricks.

First Steps into a Possible CSS Masonry Layout

Css Tricks - Mon, 05/18/2020 - 10:58am

It’s not at the level of demand as, say, container queries, but being able to make “masonry” layouts in CSS has been a big ask for CSS developers for a long time. Masonry being that kind of layout where unevenly-sized elements are laid out in ragged rows. Sorta like a typical brick wall turned sideways.

The layout is achievable in CSS alone already, but with one big caveat: the items aren’t arranged in rows, they are arranged in columns, which is often a deal-breaker for folks.

/* People usually don't want this */
1 4 6 8
2 7
3 5 9

/* They want this */
1 2 3 4
5 6
7 8 9
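Assuming equal-height items and a simple three-column, column-first flow (a simplification of true masonry, and not code from any browser), the source-order problem can be sketched in a few lines of JavaScript:

```javascript
// Sketch: why column-first layouts scramble source order.
// With 9 items flowing down 3 equal columns, reading the rendered
// result row by row no longer matches the source order.
function columnFirst(items, cols) {
  const rows = Math.ceil(items.length / cols);
  const readingOrder = [];
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      const i = c * rows + r; // item index that lands at (row r, column c)
      if (i < items.length) readingOrder.push(items[i]);
    }
  }
  return readingOrder;
}

const items = [1, 2, 3, 4, 5, 6, 7, 8, 9];
console.log(columnFirst(items, 3)); // [ 1, 4, 7, 2, 5, 8, 3, 6, 9 ]
```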

If you want that ragged row thing and horizontal source order, you’re in JavaScript territory. Until now, that is, as Firefox rolled this out under a feature flag in Firefox Nightly, as part of CSS grid.

Mats Palmgren:

An implementation of this proposal is now available in Firefox Nightly. It is disabled by default, so you need to load about:config and set the preference layout.css.grid-template-masonry-value.enabled to true to enable it (type “masonry” in the search box on that page and it will show you that pref).

Jen Simmons has created some demos already:

Is this really a grid?

A bit of pushback from Rachel Andrew:

Grid isn’t Masonry, because it’s a grid with strict rows and columns. If you take another look at the layout created by Masonry, we don’t have strict rows and columns. Typically we have defined rows, but the columns act more like a flex layout, or Multicol. The key difference between the layout you get with Multicol and a Masonry layout, is that in Multicol the items are displayed by column. Typically in a Masonry layout you want them displayed row-wise.

[…]

Speaking personally, I am not a huge fan of this being part of the Grid specification. It is certainly compelling at first glance, however I feel that this is a relatively specialist layout mode and actually isn’t a grid at all. It is more akin to flex layout than grid layout.

By placing this layout method into the Grid spec I worry that we then tie ourselves to needing to support the Masonry functionality with any other additions to Grid.

None of this is final yet, and there is active CSS Working Group discussion about it.

As Jen said:

This is an experimental implementation — being discussed as a possible CSS specification. It is NOT yet official, and likely will change. Do not write blog posts saying this is definitely a thing. It’s not a thing. Not yet. It’s an experiment. A prototype. If you have thoughts, chime in at the CSSWG.

Houdini?

Last time there was chatter about native masonry, it was mingled with idea that the CSS Layout API, as part of Houdini, could do this. That is a thing, as you can see by opening this demo (repo) in Chrome Canary.
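For reference, the Layout API approach works by loading a worklet that registers a custom layout, then opting into it from CSS. A rough sketch, where the worklet name “masonry” is illustrative and the API only runs in Chrome Canary behind a flag:

```css
/* A worklet file calling registerLayout('masonry', ...) must be
   loaded first from JavaScript, e.g.:
   CSS.layoutWorklet.addModule('masonry.js');
   Then the container opts into the custom layout: */
.container {
  display: layout(masonry);
}
```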

I’m not totally up to speed on whether Houdini is intended to be a thing so that ideas like this can be prototyped in the browser and ultimately moved out of Houdini, or if the ideas should just stay in Houdini, or what.

The post First Steps into a Possible CSS Masonry Layout appeared first on CSS-Tricks.

Unprefixed `appearance`

Css Tricks - Mon, 05/18/2020 - 10:56am

It’s interesting how third-parties are sometimes super involved in pushing browser things forward. One big story there was how Bloomberg hired Igalia to implement CSS grid across the browsers.

Here’s another story of Bocoup doing that, this time for the appearance property. The story is told in a Twitter thread, but the thread is broken somehow (looks like a deleted Tweet), so your best bet is to go to this one, then scroll up and down to see the whole thing. Gosh, I hope they blog it.

It took literally years of work:

2 years ago, @firefox asked us to work on a project to fix problems within the CSS appearance property. The issue came when we found out that each browser has its own implementation of how the appearance property should work on forms.

They had to do tons of research, write tests, and ultimately overhaul the HTML and CSS specs. Then they needed to prove that, with those changes, browsers could un-prefix the property without breaking websites (the first attempt at this broke websites and was reverted). Then they actually got all three major browsers to do it. (Landed in Chrome, Firefox is on it, Safari has an open bug, and there is public desire to coordinate a release.)
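The practical upshot is that the familiar prefixed reset can eventually drop its prefixes. Something like this, with the prefixed declarations kept as fallbacks for older engines:

```css
select {
  -webkit-appearance: none; /* older WebKit/Blink */
  -moz-appearance: none;    /* older Firefox */
  appearance: none;         /* the unprefixed standard */
}
```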

Really goes to show just how long and grueling this work can be because it’s so crucial to get it right. If you’re into this stuff, listen to ShopTalk 407 with Brian Kardell.

The post Unprefixed `appearance` appeared first on CSS-Tricks.

CSS fix for 100vh in mobile WebKit

Css Tricks - Fri, 05/15/2020 - 11:45am

A surprisingly common response, when asking people what they’d fix about CSS, is the handling of viewport units.

One thing that comes up often is how they relate to scrollbars. For example, if an element is sized to 100vw and stretches edge-to-edge, that’s fine so long as the page doesn’t have a vertical scrollbar. If it does have a vertical scrollbar, then 100vw is too wide, and the presence of that vertical scrollbar triggers a horizontal scrollbar because viewport units don’t have an elegant/optional way of handling that. So you might be hiding overflow on the body when you otherwise wouldn’t need to, for example. (Demo)
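To illustrate the scrollbar interaction described above, here’s a minimal sketch (the class name is made up):

```css
/* 100vw is the full viewport width, scrollbar included, so this
   element ends up wider than the available space whenever the
   page has a vertical scrollbar... */
.full-bleed {
  width: 100vw;
}

/* ...which is why you often see this workaround, hiding the
   horizontal overflow the scrollbar triggers: */
body {
  overflow-x: hidden;
}
```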

Another scenario involves mobile browsers. You might use viewport units to help you position a fixed footer along the bottom of the screen. But then browser chrome might come up (e.g. navigation, keyboard, etc), and it may cover the footer, because the mobile browser doesn’t consider anything changed about the viewport size.

Matt Smith documents this problem:

On the left, the browser navigation bar (considered browser chrome) is covering up the footer making it appear that the footer is beyond 100vh when it is not. On the right, the -webkit-fill-available property is being used rather than viewport units to fix the problem.

And a solution of sorts:

body {
  min-height: 100vh;
  min-height: -webkit-fill-available;
}

html {
  height: -webkit-fill-available;
}

The above was updated to make sure the html element was being used, as we were told Chrome is updating the behavior to match Firefox’s implementation.

Does this really work? […] I’ve had no problems with any of the tests I’ve run and I’m using this method in production right now. But I did receive a number of responses to my tweet pointing to other possible problems with using this (the effects of rotating devices, Chrome not completely ignoring the property, etc.)

It would be better to get some real cross-browser solution for this someday, but I don’t see any issues using this as an improvement. It’s weird to use a vendor-prefixed property as a progressive enhancement, but hey, the world is weird.

Direct Link to Article

The post CSS fix for 100vh in mobile WebKit appeared first on CSS-Tricks.

Comparing Social Media Outlets for Developer Tips

Css Tricks - Fri, 05/15/2020 - 11:34am

As a little experiment, I shared a development tip on three different social networks. I also tried to post it in a format that was most suitable for that particular social network:

How did each of them “do”? Let’s take a look. But bear in mind… this ain’t scientific. This is just me having a glance at one isolated example to get a feel for things across different social media sites.

The Twitter Thread

The Tweet

A little journey with lists, as a 🧵 thread.

`list-style-position: outside;` is the default for lists, and is a pretty decent default. The best part about it is that both the markers *and* the content are aligned. pic.twitter.com/CkQv1hIt6q

— CSS-Tricks (@css) April 27, 2020

Twitter is probably our largest social media outlet. Despite the fact that I’ve done absolutely nothing with it this year other than auto-tweeting posts from this site (via our Jetpack Integration), those tweets do just about as well as they ever did when I was writing each tweet. These numbers are bound to change, but at the time of writing:

Views: 102,501
Followers: ~446,000
Retweets: 108
Engagements: 3,753
Likes: 428 (first tweet)

Twitter provides analytics on tweets

Going off that engagements number, a little bit less than 1% of the followers had anything to do with it. I’d say this was a very average tweet for us, if not on the low side.

The Instagram Post

The Post

There are alignment things to consider with lists like an <ol>. The markers and the content. Outside positioning does well. But it uses the edge of the box as alignment and renders markers outside the box which can be bad for getting cut off. There is a solution with custom counters and subgrid though!

A post shared by Chris Coyier (@real_css_tricks) on Apr 26, 2020 at 5:44pm PDT

Instagram is by far the smallest of our social media outlets, being newer and not something I stay particularly active or consistent on. No auto-posting there just yet.

Followers: ~2,800
Likes: 308
Reached: 2,685

Instagram provides analytics (“insights”) on posts.

Using Reach, that’s 96% of the followers. That’s pretty incredible compared to the roughly 1% of followers on Twitter. Although, on Twitter, I can easily put URLs in tweets and send people places, whereas my only options on Instagram are “check out the link in my profile” or a swipe-up thing in an Instagram Story. So, despite the high engagement of Instagram, I’m mostly just getting the satisfaction of teaching something as well as a little brand awareness. It’s much harder for me to get you to directly do something from Instagram.

The YouTube Video

The Video

YouTube is in the middle for us: much bigger than Instagram, but not as big as Twitter. YouTube is a little unique in that ads can run (and do run) directly on the videos, and the channel gets a “revenue share” from YouTube. That’s very much not the driving motivation for using YouTube (I make 50 cents a day), but it is unique compared to the others.

Subscribers: 51,300
Likes: 116
Views: 2,455

YouTube provides video analytics.

Facebook?

We do have a Facebook page but it’s the most neglected of all of them. We auto-post new articles to it, but this experiment didn’t really have a blog post. I published the video to our site, but that doesn’t get auto-posted to Facebook, so the tip never made it there.

I used to feel a little guilty about not taking as much advantage of Facebook as I could, but whenever I look at overall analytics, I’m reminded that all of our social media accounts combined account for ~2% of traffic to this site. Spending any more time on this stuff is foolish for me, when that time could be spent on content for this site and information architecture for what we already have. And for Facebook specifically, whatever time we have spent there has never seemed to pan out. Just not a hive for developers.

CodePen?

I probably should have factored CodePen into this more, since it’s something of a social network itself with similar metrics. I worked on the examples in CodePen and the whole video was done in CodePen. But in this case, it was more about the journey than the destination. I did ultimately link to a demo at the end of the Twitter thread, but Instagram can’t link to it, and I wasn’t as compelled to link to it on YouTube since, to me, the video itself was the important information.

If I was trying to compare CodePen stats here, I would have created the Pen in a step-by-step educational format so I could deliver the same idea. That actually sounds fun and I should probably still do that!

Winner?

Eh.

The problem is that there isn’t anything particularly useful to measure. What would have been way more interesting is if I had some really important call to action in each one where I’m like trying to sell you something or get you to sign up for something or whatever. I feel like that’s the real world of developer marketing. You gotta do 100 things for someone for free if you want them to do something for you on that 101st time. And on the 101st time, you should probably measure it somehow to see if the effort is worth it.

Here’s the very basic data together though…

            Followers    Engagements    Engagement %
Twitter     ~446,000     3,753          0.8%
Instagram   ~2,800       2,685          96%
YouTube     ~51,300      2,455          5%

One interesting thing is that I find the effort was about equal for all of them. You’d think a video would be hardest, but at least that’s just hit-record-hit-stop and minor editing. The other formats take longer to craft with custom text and graphics.

These would be my takeaways from this limited experiment:

  • You need big numbers on Twitter to do much. That’s because the engagement is pretty low. Still, it’s probably our best outlet for getting people to click a link and do something.
  • Instagram has amazing engagement, but it’s hard to send anyone anywhere. It’s still no wonder why people use it. You really do reach your audience there. If you had a strong call to action, I bet you could still get people to do it even with the absence of links (since people know how to search for stuff on the web).
  • While I mentioned that for this example the effort level was fairly even, in general, YouTube is going to require much higher effort. Video production just isn’t the same as farting out a couple of words or a screenshot. With that, and knowing that you’d need absolutely massive numbers to earn anything directly from YouTube, it’s pretty similar to other social networks in that you need to derive value from it abstractly.
  • This was not an idea that “went viral” in any sense. This is just standard-grade engagement, which was good for this experiment. I’m always super surprised at the type of developer tips that go viral. It’s always something I don’t expect, and often something I’m like awwwww we have an article about that too! I’d never bet on or expect anything going viral. Making stuff that your normal audience likes is the ticket.
  • Being active is pretty important. Any chart I’ve seen has big peaks when posts go out regularly and valleys when they don’t. Post regularly = riding the peaks.
  • None of this compares anywhere close to the real jewel of making things: blogging. Blogging is where you have full control and full benefit. The most important thing social media can do is get people over to your own site.

The post Comparing Social Media Outlets for Developer Tips appeared first on CSS-Tricks.

©2003 - Present Akamai Design & Development.