Web Standards

Two Images and an API: Everything We Need for Recoloring Products

Css Tricks - Fri, 10/11/2019 - 4:34am

I recently found a solution to dynamically update the color of any product image. So with just one <img> of a product, we can colorize it in different ways to show different color options. We don’t even need any fancy SVG or CSS to get it done!

We’ll be using an image editor (e.g. Photoshop or Sketch) and the image transformation service imgix. (This isn’t a sponsored post and there is no affiliation here — it’s just a technique I want to share.)

See the Pen
Dynamic Car color
by Der Dooley (@ddools)
on CodePen.

I work on the engineering team at a travel software company called CarTrawler, and I recently undertook a project to revamp the car image library we use to display car rental search results. I wanted to take this opportunity to introduce dynamically colored cars.

We may sometimes load up to 200 different cars at the same time, so speed and performance are key requirements. We also have five different products across unique code bases, so avoiding over-engineering is vital to success.

I wanted to be able to dynamically change the color of each of these cars without needing additional front-end changes to the code.

Step 1: The Base Layer

I'm using car photos here, but this technique could be applied to any product. First we need a base layer. This is the default layer we would display without any color and it should look good on its own.

Step 2: The Paint Layer

Next we create a paint layer that is the same dimensions as the base layer, but only contains the areas where the colors should change dynamically.

A light color is key for the paint layer. Using white or a light shade of gray gives us a great advantage because we are ultimately “blending” this image with color. Anything darker or in a different hue would make it hard to mix this base color with other colors.

Step 3: Using the imgix API

This is where things get interesting. I'm going to leverage multiple parameters from the imgix API. Let's apply black to our paint layer.

(Source URL)

We changed the color by applying a standard black hex value of #000000.

https://ddools.imgix.net/cars/paint.png?w=600&bri=-18&con=26&monochrome=000000

If you noticed the URL of the image above, you might be wondering: What the heck are all those parameters? The imgix API docs have a lot of great information, so no need to go into greater detail here. But I will explain the parameters I used.

  • w. The width I want the image to be
  • bri. Adjusts the brightness level
  • con. Adjusts the amount of contrast
  • monochrome. The dynamic hex color

Because we are going to stack our layers via imgix we will need to encode our paint layer. That means replacing some of the characters in the URL with encoded values — like we’d do if we were using inline SVG as a background image in CSS.
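If you want to do that encoding programmatically, JavaScript's built-in encodeURIComponent produces exactly the string we need. This is just a quick sketch to run in the browser console; the variable names are my own:

const paintLayer =
  "https://ddools.imgix.net/cars/paint.png?w=600&bri=-18&con=26&monochrome=000000";

// encodeURIComponent escapes the ":", "/", "?", "&" and "=" characters
// so the whole URL can be passed as a single parameter value.
const encodedPaintLayer = encodeURIComponent(paintLayer);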

https%3A%2F%2Fddools.imgix.net%2Fcars%2Fpaint.png%3Fw%3D600%26bri%3D-18%26con%3D26%26monochrome%3D000000

Step 4: Stack the Layers

Now we are going to use imgix’s watermark parameter to stack the paint layer on top of our base layer.

https://ddools.imgix.net/cars/base.png?w=600&mark-align=center,middle&mark=[PAINTLAYER]

Let's look at the parameters being used:

  • w. This is the image width and it must be identical for both layers.
  • mark-align. This centers the paint layer on top of the base layer.
  • mark. This is where the encoded paint layer goes.

In the end, you will get a single URL that will look something like this:

https://ddools.imgix.net/cars/base.png?w=600&mark-align=center,middle&mark=https%3A%2F%2Fddools.imgix.net%2Fcars%2Fpaint.png%3Fw%3D600%26bri%3D-18%26con%3D26%26monochrome%3D000000
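Since the only piece that changes between colors is the monochrome hex value, the whole thing can be wrapped in a small helper. This is a sketch only; buildCarUrl is a hypothetical name, not part of the imgix API:

function buildCarUrl(hexColor) {
  // Build and encode the paint layer with the dynamic hex color (no leading #).
  const paintLayer = `https://ddools.imgix.net/cars/paint.png?w=600&bri=-18&con=26&monochrome=${hexColor}`;
  const encodedPaintLayer = encodeURIComponent(paintLayer);

  // Stack it on top of the base layer with the watermark parameters.
  return `https://ddools.imgix.net/cars/base.png?w=600&mark-align=center,middle&mark=${encodedPaintLayer}`;
}

buildCarUrl("000000"); // black
buildCarUrl("1e90ff"); // a blue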

Requesting that stacked URL gives us the car in black:

(Source URL)

Now that we have one URL, we can basically swap out the black hex value with any other colors we want. Let’s try blue!

(Source URL)

Or green!

(Source URL)

Why not red?

(Source URL)

That's it! There are certainly other ways to accomplish the same thing, but this seems so straightforward that it’s worth sharing. There was no need to code a bunch of additional functionality. No complex libraries to manage or wrangle. All we need is a couple of images that an online tool will stack and blend for us. Seems like a pretty reasonable solution!

The post Two Images and an API: Everything We Need for Recoloring Products appeared first on CSS-Tricks.

The Teletype Text Element Lives On… at Least on This Site

Css Tricks - Fri, 10/11/2019 - 4:33am

It was this: <tt>

I say "was" because it's deprecated. It may still "work" (like everybody's favorite <marquee> in some browsers), but it could stop working anytime, they say. The whole purpose of it was to display text in a monospace font, like the way Teletype machines used to.

Dave used it jokingly the other day.

Per recent events: As you can see by this official transcript, Dave Rupert LLC has done nothing wrong...

<tt>
Client: This is the greatest call I've ever been on.
Dave Rupert LLC: Definitely and we didn't even do anything illegal or quid pro quo'y.
</tt>

— Dave Rupert (@davatron5000) September 24, 2019

Which got me thinking how much I used to use that element!

😬😬😬 pic.twitter.com/lDZpKx9Moy

— Chris Coyier (@chriscoyier) September 24, 2019

Right here on CSS-Tricks. See, in my early days, I learned about that element and how its job is to set text as monospace. I thought, oh! like code! and then for years that's how I marked up code on this site. I had never heard of the <code> element! When I did, I switched over to that. But I still haven't updated every single article from <tt> to <code>. It lingers in articles like this:

I bring this up just because it's a funny little example of not knowing what you don't know. It's worth having a little sympathy for people early in their journey and just doing things that get the job done because that's all they know. We've all been there... and are always still there to some degree.

The post The Teletype Text Element Lives On… at Least on This Site appeared first on CSS-Tricks.

Weekly Platform News: Impact of Third-Party Code, Passive Mixed Content, Countries with the Slowest Connections

Css Tricks - Thu, 10/10/2019 - 7:32am

In this week's roundup, Lighthouse sheds light on third-party scripts, insecure resources will get blocked on secure sites, and many country connection speeds are still trying to catch up to others... literally.

Measure the impact of third-party code during page load

Lighthouse, Chrome’s built-in auditing tool, now shows a warning when the impact of third-party code on page load performance is too high. The pre-existing “Third-party usage” diagnostic audit will now fail if the total main-thread blocking time caused by third-parties is larger than 250ms during page load.

Note: This feature was added in Lighthouse version 5.3.0, which is currently available in Chrome Canary.

(via Patrick Hulce)

Passive mixed content is coming to an end

Currently, browsers still allow web pages loaded over a secure connection (HTTPS) to load images, videos, and audio over an insecure connection. Such insecurely-loaded resources on securely-loaded pages are known as “passive mixed content,” and they represent a security and privacy risk.

An insecurely-loaded image can allow an attacker to communicate incorrect information to the user (e.g., a fabricated stock chart), mutate client-side state (e.g., set a cookie), or induce the user to take an unintended action (e.g., changing the label on a button).

Starting next February, Chrome will auto-upgrade all passive mixed content to https:, and resources that fail to load over https: will be blocked. According to data from Chrome Beta, auto-upgrade currently fails for about 30% of image loads.

(via Emily Stark)

Fast connections are still not common in many countries

Data from Chrome UX Report shows that there are still many countries and territories in the world where most people access the Internet over a 3G or slower connection. (This includes a number of small island nations that are not visible on this map.)

(via Paul Calvano)

More news...

Read even more news in my weekly Sunday issue that can be delivered to you via email every Monday morning.

More News →

The post Weekly Platform News: Impact of Third-Party Code, Passive Mixed Content, Countries with the Slowest Connections appeared first on CSS-Tricks.

Recipes for Performance Testing Single Page Applications in WebPageTest

Css Tricks - Thu, 10/10/2019 - 4:24am

WebPageTest is an online tool and an Open Source project to help developers audit the performance of their websites. As a Web Performance Evangelist at Theodo, I use it every single day. I am constantly amazed at what it offers to the web development community at large and the web performance folks particularly — for free.

But things can get difficult pretty quickly when dealing with Single Page Applications — usually written with React, Vue, Svelte or any other front-end framework. How can you get through a login page? How can you test the performance of your users’ flow, when most of it happens client-side and does not have a specific URL to point to?

Throughout this article, we are going to find out how to solve these problems (and many more), and you’ll be ready to test the performance of your Single Page Application with WebPageTest!

Note: This article requires an intermediate understanding of some of WebPageTest’s advanced features.

If you are curious about web performance and want a good introduction to WebPageTest, I would highly recommend the following resources:

The problem with testing Single Page Applications

Single Page Applications (SPAs) radically changed the way websites work. Instead of letting the back end (e.g. Django, Rails and Laravel) do most of the grunt work and delivering "ready-to-use" HTML to the browser, SPAs rely heavily on JavaScript to have the browser compute HTML. Such front-end frameworks include React, Vue, Angular or Svelte.

The simplicity of WebPageTest is part of its appeal to developers: head to http://webpagetest.org/easy, enter your URL, wait a little, and voilà! Your performance audit is ready.

If you are building an SPA and want to measure its performance, you could rely on end-to-end testing tools like Selenium, Cypress or Puppeteer. However, I have found that none of these has the amount of performance-related information and easy-to-use tooling that WebPageTest offers.

But testing SPAs with WebPageTest can be complex.

In many SPAs, most of the site is protected behind a log in form. I often use Netlify for hosting my sites (including my personal blog), and most of the time I spend in the application is on authenticated pages, like the dashboard listing all my websites. As the information on my dashboard is specific to me, any other user trying to access https://app.netlify.com/teams/phacks/sites is not going to see my dashboard, but will instead be redirected to either a login or 404 page.

The same goes for WebPageTest. If I enter my dashboard URL into http://webpagetest.org/easy, the audit will be performed against the login page.

Moreover, testing and monitoring the performance of dynamic interactions in SPAs cannot be achieved with simple WebPageTest audits.

Here’s an example. Nuage is a domain name registrar with fancy animations and a beautiful, dynamic interface. When you search for domain names to buy, an asynchronous call fetches the results of the request and the results are displayed as they are retrieved.

As you might have noticed in the video above, the URL of the page does not change as I type my search terms. As a consequence, it is not possible to test the performance of the search experience using a simple WebPageTest audit as we do not have a proper URL to point to the action of searching something — only to an empty search page.

Some other problems can arise from the SPA paradigm shift when using WebPageTest:

  • Clicking around to navigate a webpage is usually harder than merely heading to a new URL, but it is sometimes the only option in SPAs.
  • Authentication in SPAs is usually implemented using JSON Web Tokens instead of good ol’ cookies, which rules out the option of setting authentication cookies directly in WebPageTest (as described here).
  • Using React and Redux (or other application state management libraries) for your SPA can mean that forms are harder to fill out programmatically, since using .innerText() or .value() to set a field’s value may not forward it to the application store.
  • As API calls are often asynchronous and various loaders can be used to indicate a loading state, those can "trick" WebPageTest into believing the page has actually finished loading when it has, in fact, not. I have seen it happen with longer-than-usual API calls (5+ seconds).

As I have faced these problems on several projects, I have come up with a range of tips and techniques to counter them.

The many ways of selecting an element

Selecting DOM elements is a key part of doing all sorts of automated testing, be it for end-to-end testing with Selenium or Cypress or for performance testing with WebPageTest. Selecting DOM elements allows us to click on links and buttons, fill in forms and more generally interact with the application.

There are several ways of selecting a particular DOM element using native browser APIs, ranging from the straightforward document.getElementsByClassName to the thorny but really powerful XPath selectors. In this section, we will see three different possibilities, ordered by increasing complexity.

Get an element by id, className or tagName

If the element you want to select (say, an "Empty Cart" button) has a specific and unique id (e.g. #empty-cart), class name, or is the only button on the page, you can click on it using the getElementById or getElementsBy* methods:

const emptyCartButton = document.getElementById("empty-cart");
// or document.getElementsByClassName("empty-cart-button")[0]
// or document.getElementsByTagName("button")[0]
emptyCartButton.click();

If you have several buttons on the same page, you can filter the resulting list before interacting with the element:

const buttons = document.getElementsByTagName("button");
// HTMLCollection has no .filter, so spread it into an array first
const emptyCartButton = [...buttons].filter(button =>
  button.innerText.includes("Empty Cart")
)[0];
emptyCartButton.click();

Use complex CSS selectors

Sometimes, the particular element you want to interact with has no useful uniqueness in its ID, class or tag.

One way to circumvent this issue is to add that uniqueness manually, for testing purposes only. Adding an id like #perf-test-empty-cart-button to a specific button is innocuous for your website markup and can dramatically simplify your testing setup.

However, this solution can sometimes be out of reach: you may not have access to the source code of the application, or may not be able to deploy new versions quickly. In those situations, it is useful to know about document.querySelector (and its variant document.querySelectorAll) and using complex CSS selectors.

Here are a few examples of what can be achieved with document.querySelector:

// Select the first input with the `name="username"` property
document.querySelector("input[name='username']");

// Select all number inputs
document.querySelectorAll("input[type='number']");

// Select the first h1 inside the <section>
document.querySelector("section h1");

// Select the first direct descendant of a <nav> which is of type <img>
document.querySelector("nav > img");

What’s interesting here is you have the full power of CSS selectors at hand. I encourage you to have a look at the always-useful MDN’s reference table of selectors!

Going nuclear: XPath selectors

XML Path Language (XPath), albeit really powerful, is harder to grasp and maintain than the CSS solutions above. I rarely have to use it, but it is definitely useful to know that it exists.

One such instance is when you want to select a node by its text value, and can’t resort to CSS selectors. Here’s a handy snippet to use in those cases:

// Returns the <span> that has the exact content 'Sep 16, 2015'
document.evaluate(
  "//span[text()='Sep 16, 2015']",
  document,
  null,
  XPathResult.FIRST_ORDERED_NODE_TYPE,
  null
).singleNodeValue;

I will not go into details on how to use it, as that would wander away from the goal of this article. To be fair, I don’t even know what many of the parameters above mean. However, I can definitely recommend the MDN documentation should you want to read up on the topic.

Recipes for common use cases

In the following section, we will see how to test the performance in common use cases of Single Page Applications. I call these my testing recipes.

In order to illustrate those recipes, I will use the React Admin demo website as an example. React Admin is an open source project aimed at building admin applications and back offices.

It is a typical example of a SPA because it uses React (as the name suggests), calls remote APIs, has a login interface, many forms and relies on client-side routing. I encourage you to go take a quick look at the website (the demo account is demo/demo) in order to have an idea of what we will be trying to achieve.

Authentication and forms

The authentication page of React Admin requires the user to input a username and a password:

The authentication screen of React Admin

Intuitively, one could take the following approach to filling in and submitting the form:

const [usernameInput, passwordInput] = document.getElementsByTagName("input");
usernameInput.value = "demo"; // innerText could also be used here
passwordInput.value = "demo";
document.getElementsByTagName("button")[0].click();

If you run these commands sequentially in a DevTools console on the login page, you will see that all fields are reset and the login request fails upon submitting by clicking the button. The problem comes from the fact that the new values we set with .value() (or .innerText()) are not kicked back to the Redux store, and thus not "processed" by the application.

What we need to do then is explicitly tell React that the value has changed so that it will update internal bookkeeping accordingly. This can be achieved using the Event interface.

const updateInputValue = (input, newValue) => {
  let lastValue = input.value;
  input.value = newValue;
  let event = new Event("input", { bubbles: true });
  let tracker = input._valueTracker;
  if (tracker) {
    tracker.setValue(lastValue);
  }
  input.dispatchEvent(event);
};

Note: this solution is pretty hacky (even according to its own author), however it works well for our purposes here.

Our updated script becomes:

const updateInputValue = (input, newValue) => {
  let lastValue = input.value;
  input.value = newValue;
  let event = new Event("input", { bubbles: true });
  let tracker = input._valueTracker;
  if (tracker) {
    tracker.setValue(lastValue);
  }
  input.dispatchEvent(event);
};

const [usernameInput, passwordInput] = document.getElementsByTagName("input");
updateInputValue(usernameInput, "demo");
updateInputValue(passwordInput, "demo");
document.getElementsByTagName("button")[0].click();

Hurrah! You can try it in your browser’s console. It works like a charm.

Translating this to an actual WebPageTest script (with scripting keywords, single-line commands and tab-separated parameters) would look like this:

setEventName Go to Login
navigate https://marmelab.com/react-admin-demo/

setEventName Login
exec const updateInputValue = (input, newValue) => { let lastValue = input.value; input.value = newValue; let event = new Event("input", { bubbles: true }); let tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};
exec const [usernameInput, passwordInput] = document.getElementsByTagName("input")
exec updateInputValue(usernameInput, "demo")
exec updateInputValue(passwordInput, "demo")
execAndWait document.getElementsByTagName("button")[0].click()

Note that clicking on the submit button leads us to a new page and triggers API calls, which means we need to use the execAndWait command.

You can see the full results of the test at this address. (Note: the results may have been archived by WebPageTest — you can, however, run the test again yourself!)

Here is a short video (captured by WebPageTest) in which you can see that we indeed passed the authentication step:

Navigating between pages

For traditional Server Rendered pages, navigating from one URL to the next in WebPageTest scripting is done via the navigate <url> command.

However, for SPAs, this does not reflect the experience of the user, as client-side routing means that the server has no role in navigation. Thus, hitting a URL directly would significantly slow down the measured performance (because of the time it takes for the JavaScript framework to be compiled, parsed and executed), a slowdown that the user does not experience when changing pages. As it is crucial to simulate the user flow the best we can, we need to handle the navigation on the client as well.

Fortunately, this is a lot simpler to do than filling out forms. We only need to select the link (or button) that will take us to the new page, and .click() on it! Let’s continue with our previous example, but now we want to test the performance of the Reviews list, and of a single Review page.

A user would typically click on the Reviews item on the left-hand navigation menu, then on any item in the list. Inspecting the elements in DevTools may lead us to a selection strategy as follows:

document.querySelector("a[href='#reviews']"); // select the Reviews link in the menu document.querySelector("table tr"); // select the first item in the Reviews list

As both clicks lead to page transition and API calls (to fetch the reviews), we need to use the execAndWait keyword for the script:

setEventName Go to Login
navigate https://marmelab.com/react-admin-demo/

setEventName Login
exec const updateInputValue = (input, newValue) => { let lastValue = input.value; input.value = newValue; let event = new Event("input", { bubbles: true }); let tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};
exec const [usernameInput, passwordInput] = document.getElementsByTagName("input")
exec updateInputValue(usernameInput, "demo")
exec updateInputValue(passwordInput, "demo")
execAndWait document.getElementsByTagName("button")[0].click()

setEventName Go to Reviews
execAndWait document.querySelector("a[href='#/reviews']").click()

setEventName Open a single Review
execAndWait document.querySelector("table tbody tr").click()

Here’s the video of the complete script running on WebPageTest:

The audit result from WebPageTest shows the performance metrics and waterfall graphs for each step of the script, allowing us to monitor the performance of each API call and interaction:

What about Internet Explorer 11 compatibility?

WebPageTest allows us to select which location, browser and network conditions the test will use. Internet Explorer 11 (IE11) is among the available browser options, and if you try the previous scripts on IE11, they will fail.

This is due to two reasons:

  • The scripts use ES6 syntax (arrow functions, let and const, array destructuring) that IE11 does not understand.
  • IE11 does not support the Event constructor we rely on to dispatch the synthetic input event, which is why we will need the CustomEvent polyfill below.

The ES6 syntax problem can be overcome by translating our scripts to ES5 syntax (no arrow functions, no let and const, no array destructuring), which might look like this:

setEventName Go to Login
navigate https://marmelab.com/react-admin-demo/

setEventName Login
exec var updateInputValue = function(input, newValue) { var lastValue = input.value; input.value = newValue; var event = new Event("input", { bubbles: true }); var tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};
exec var usernameInput = document.getElementsByTagName("input")[0]
exec var passwordInput = document.getElementsByTagName("input")[1]
exec updateInputValue(usernameInput, "demo")
exec updateInputValue(passwordInput, "demo")
execAndWait document.getElementsByTagName("button")[0].click()

setEventName Go to Reviews
execAndWait document.querySelector("a[href='#/reviews']").click()

setEventName Open a single Review
execAndWait document.querySelector("table tbody tr").click()

In order to bypass the absence of CustomEvent support, we can turn to polyfills and add one manually at the top of the script. This polyfill is available on MDN:

(function() {
  if (typeof window.CustomEvent === "function") return false;

  function CustomEvent(event, params) {
    params = params || { bubbles: false, cancelable: false, detail: undefined };
    var evt = document.createEvent("CustomEvent");
    evt.initCustomEvent(event, params.bubbles, params.cancelable, params.detail);
    return evt;
  }

  CustomEvent.prototype = window.Event.prototype;
  window.CustomEvent = CustomEvent;
})();

We can then replace all mentions of Event by CustomEvent, set the polyfill to fit on a single line and we are good to go!

setEventName Go to Login
navigate https://marmelab.com/react-admin-demo/
exec (function(){if(typeof window.CustomEvent==="function")return false;function CustomEvent(event,params){params=params||{bubbles:false,cancelable:false,detail:undefined};var evt=document.createEvent("CustomEvent");evt.initCustomEvent(event,params.bubbles,params.cancelable,params.detail);return evt}CustomEvent.prototype=window.Event.prototype;window.CustomEvent=CustomEvent})();

setEventName Login
exec var updateInputValue = function(input, newValue) { var lastValue = input.value; input.value = newValue; var event = new CustomEvent("input", { bubbles: true }); var tracker = input._valueTracker; if (tracker) { tracker.setValue(lastValue); } input.dispatchEvent(event);};
exec var usernameInput = document.getElementsByTagName("input")[0]
exec var passwordInput = document.getElementsByTagName("input")[1]
exec updateInputValue(usernameInput, "demo")
exec updateInputValue(passwordInput, "demo")
execAndWait document.getElementsByTagName("button")[0].click()

setEventName Go to Reviews
execAndWait document.querySelector("a[href='#/reviews']").click()

setEventName Open a single Review
execAndWait document.querySelector("table tbody tr").click()

Et voilà!

General tips and tricks for WebPageTest scripting

One last thing I want to do is provide a few tips and tricks that make writing WebPageTest scripts easier. Feel free to DM me on Twitter if you have any suggestions!

Security first!

Remember to tick both privacy checkboxes if your script includes sensitive data, like credentials!

WebPageTest security controls

Browse the docs

The WebPageTest Scripting docs are full of features that I didn’t cover in this article, ranging from DNS Overriding to iPhone Spoofing and even if/else conditionals.

When you plan on writing a new script, I recommend having a look at the available parameters first to see if any can help make your scripting easier or more robust.

Long loading states

Sometimes, a remote API call (say, for fetching the reviews) will take a long time. A loading indicator, such as a spinner, can be used to tell the user to wait a bit as something is happening.

WebPageTest tries to detect when a page has finished loading by figuring out if things are changing on the screen. If your loading indicator lasts a long time, WebPageTest might mistake it for an integral part of your page design and cut the audit short before the API call returns — thus truncating your measurements.

A way to circumvent this issue is to tell WebPageTest to wait at least a certain duration before stopping the test. This is a parameter available under the Advanced tab:

WebPageTest minimum test duration

Keeping your script (and results) human-readable
  • Use blank lines and comments (//) generously because single-line JavaScript commands can sometimes be hard to grasp.
  • Keep a multi-line version somewhere as your reference, and single-line everything as you are about to test. This helps readability. Like, a lot.
  • Use setEventName to name your different "steps." This makes for more readable tests, as it makes the sequence of pages the audit goes through explicit, and the names also appear in the WebPageTest results.
Iterating on your scripts
  • First, make sure that your script works in the browser. To do so, strip the WebPageTest keywords (the first word of every line of your script), then copy and paste each line in the browser console to verify that everything is working as expected at every step of the way (see the example after this list).
  • Once you are ready to submit your test to WebPageTest, do it first with very light settings: only one run, a fast browser (cough — not IE11 — cough), no network throttling, no repeat view, a well-dimensioned instance (Dulles, VA, usually has good response times). This will help you detect and correct errors way faster.
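For example, here is one line from the navigation script above, first as it appears in the WebPageTest script and then stripped of its keyword so it can be pasted straight into the console:

// As written in the WebPageTest script:
//   execAndWait document.querySelector("a[href='#/reviews']").click()

// Stripped of the keyword, ready for the browser console:
document.querySelector("a[href='#/reviews']").click();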
Automating your scripts

Your test script is running smoothly, and you start getting performance reports of your Single Page App. As you ship new features, it is important that you monitor its performance regularly to catch regressions at the earliest.

To address this problem, I am currently working on Falco, a soon-to-be-open-sourced WebPageTest test runner. Falco takes care of automating your audits, then presents the results in an easy-to-read interface while letting you read the full reports when you need them. You can follow me on Twitter to know when it goes open source, and to learn more about web performance and WebPageTest!

The post Recipes for Performance Testing Single Page Applications in WebPageTest appeared first on CSS-Tricks.

Images Are Not Static Content

Css Tricks - Thu, 10/10/2019 - 4:24am

We constantly hear about the importance of keeping websites lean and fast. A fast-loading website makes users more satisfied, and satisfied users spend more time and money on your website. However, website optimization is a complex task, as there is not one silver bullet to fix all of the issues causing poor performance.

We also hear that addressing the performance of images is a low-hanging fruit if you want to improve your site’s user experience. However, anyone who has gotten their hands dirty trying to optimize images and cover major use cases and scenarios with responsive images knows that the complexity of this task escalates quickly. For most medium to large sites, image optimization is not a task suited to humans. This is why image content delivery networks (CDNs) exist.

An image CDN is, just like the name suggests, a content delivery network built especially for images. So, why would we need a special CDN to serve images? Why not use a regular CDN to serve static files? The short answer is that images are not static files...

Most image CDNs treat an image as dynamic content by optimizing the image in different ways based on the context where the image is consumed.

Explained a bit differently: if you’re using responsive images on your website, an image CDN will automatically generate the derivatives of the image according to the sizes specified in the markup, usually based on some URL parameters. For example, the code below selects from three derivatives specified in the srcset attribute based on three breakpoints:

<img src="//i.foo.com/image.jpg" alt="cat" srcset="//i.foo.com/image.jpg?width=320 320w, //i.foo.com/image.jpg?width=640 640w, //i.foo.com/image.jpg?width=1280 1280w" sizes="(max-width: 480px) 100vw, (max-width: 900px) 33vw, 254px">

This way, the developer or designer doesn’t have to worry about creating all the image versions beforehand. Which is very good news, because the number of derivative images may quickly grow exponentially with the number of breakpoints, image formats, and screen resolutions. And that is before we’ve started talking about art direction.

Dynamic Image Optimization on Autopilot

Now that we’ve seen how an image CDN can create different sizes of an image on the fly, let's examine how this improves web performance.

Before we go further and choose an image CDN for the examples later, it is important to point out the difference between an image CDN and a digital asset management tool (DAM). A DAM, such as Cloudinary, is mostly focused on the file management aspect and often allows you to edit images and apply art direction like filters. Usually these DAMs need a general-purpose CDN in front, and there is little support for automation of image optimization tasks.

On the opposite end of the scale is ImageEngine. ImageEngine is the most effective image CDN on the market thanks to its built-in device detection that enables superior image optimization for mobile traffic. Since mobile devices account for more than 50% of the traffic in many countries, ImageEngine truly has an advantage over other CDNs. While most other image CDNs offer little or no automatic optimization, ImageEngine has a more advanced approach thanks to its focus on mobile traffic. Hence, ImageEngine will be able to produce the best results with less implementation effort and maintenance.

How ImageEngine Improves Web Performance

With ImageEngine handling all image traffic, images are no longer static content. Images are now adapted and served in exactly the size, format, compression rate and resolution needed. Fine. But how do we measure the improvement?

These days, the “go to tool” for identifying performance issues and measuring performance is Google Lighthouse. Lighthouse is available as a standalone app and in your Chrome developer tools.

We’ll run a performance audit on an e-commerce demo page listing product images.

The page has a typical responsive grid layout with product images. The layout has a few breakpoints where the display size of the images changes because the number of items per row changes. Moreover, there is a mouseover feature displaying a different image of the product. The mouseover effect is handled by JavaScript, and even the hidden image is always loaded in our example. So all in all, quite a few images and potential sizes.

Step One: Assess Current State

Running the Lighthouse audit on the demo page, we see a number of issues, summarized in a performance score of 98. The best score is 100, so 98 might not seem that bad. Which is true, but pay more attention to the metrics below the score. The performance score is calculated from a few metrics with varied weighting. The images on our page have direct and indirect impact on these metrics.

In the details of the report, we see a few opportunities related to images listed:

  • Properly size images. The images do not have the right pixel size. This is quite common on pages with a responsive or fluid layout.
  • Serve images in next-gen formats. For Chrome, this basically means converting images to WebP. WebP is usually a more efficient format than most others when it comes to byte size and decode speed.
  • Efficiently encode images. There is more compression that can be applied to the images before impacting perceived visual quality.

The estimated savings (to the right in the report) are huge. This demonstrates why addressing images is considered a low hanging fruit for performance.

Step Two: Implement ImageEngine

If you haven't signed up already, create a free ImageEngine trial account. Once you’ve completed signup, you can define the image origin (usually your website) and a domain from which you want to serve images. The domain may be something like images.mydomain.com. You point this domain name to ImageEngine with a CNAME record in your DNS, and you’re good to go.

The next step is changing the markup to make the most out of ImageEngine’s automatic features.

If our previous image tag looked like this:

<img class="pic-1" src="images/demo9/img-1.jpg">

Our new image tag will look like this when the ImageEngine domain name is serving the images:

<img class="pic-1" src="https://images.mydomain.com/images/demo9/img-1.jpg">

Because our grid layout is fluid with four breakpoints, we might also consider using responsive image syntax:

<img class="pic-1" src="https://images.mydomain.com/images/demo9/img-1.jpg" sizes="(max-width: 576px) 93vw, (max-width: 768px) 238px, (max-width: 768px) 238px, (max-width: 992px) 148px, 253px" >

Thanks to ImageEngine's support for Client Hints, ImageEngine will now generate the exact pixel size needed. Client Hints are additional HTTP headers the browser can send to enable more accurate image resizing. Client Hints are currently only supported by Chrome browsers.

Step Three: Measure the Improvement

Running the Lighthouse audit again, we see that the score is now 100. But more importantly, look at the improvements in timings. "Time to Interactive," for example, shows 0.7 seconds less waiting before the user can interact with the page. All because the images are optimized properly.

What does “optimized” really mean in this case? Why is the page faster and the user experience better with ImageEngine? Most of the positive impact is due to the reduction in byte size of the images. The fewer bytes, the faster the images are transferred from the host (or ImageEngine’s edge servers) to the browser. Moreover, lighter images are usually faster to decode and render onto the user's screen. This is very simplified, but let’s see how much ImageEngine reduces the image payload, using WebPageTest.org to compare our demo site with and without ImageEngine:

ImageEngine reduces the image payload to only 25% of the original size.

Bonus: Fix Caching

In the continuous hunt for improved performance, you may have seen this alert from Lighthouse.

Lighthouse thinks the images have too short a Time To Live (TTL), measured in seconds, in the browser cache. By default, ImageEngine passes on the cache directives given by the origin, but luckily this can be changed in ImageEngine's management interface.

Next Step: Automate Image Optimization

We’ve seen how images should no longer can be treated as static content if we want a high performing web site. Because images have such a high impact on website performance, images must be tailored according to the capabilities and context of the browser and user.

A purpose-built image CDN will relieve humans of the responsibility of trying to accommodate all possible combinations of image formats, sizes and compression levels. Managing image derivatives is not a task for humans, as it will quickly grow to become unmanageable.

Tools like Lighthouse and WebPageTest.org document the positive impact that image CDNs like ImageEngine have on important performance metrics.

The post Images Are Not Static Content appeared first on CSS-Tricks.

Formative Vs Summative : The User Testing Battle

Usability Geek - Wed, 10/09/2019 - 11:48pm
As Steve Jobs once said,  “Design is not just what it looks like and feels like. Design is how it works.”  Even though the legendary tech leader spoke out about how design needs to be...
Categories: Web Standards

Blocking Third-Party Hands from the Cookie Jar

Css Tricks - Wed, 10/09/2019 - 10:51am

Third-party cookies are set on your computer from domains other than the one that you're actually on right now. For example, if I log into css-tricks.com, I'll get a cookie from css-tricks.com that handles my authentication. But css-tricks.com might also load an image from some other site. A common tactic in online advertising is to render a "tracking pixel" image (well named, right?) that is used to track advertising impressions. That request to another site for the image (say, ad.doubleclick.net) can also set a cookie.

Eric Lawrence explains the issue:

The tracking pixel’s cookie is called a third party cookie because it was set by a domain unrelated to the page itself.

If you later visit B.textslashplain.com, which also contains a tracking pixel from ad.doubleclick.net, the tracking pixel’s cookie set on your visit to A.example.com is sent to ad.doubleclick.net, and now that tracker knows that you’ve visited both sites. As you browse more and more sites that contain a tracking pixel from the same provider, that provider can build up a very complete profile of the sites you like to visit, and use that information to target ads to you, sell the data to a data aggregation company, etc.

But times are a changin'. Eric goes on to explain the browser landscape:

The default stuff is the big deal, because all browsers offer some way to block third-party cookies. But of course, nobody actually does it. Jeremy:

It’s hard to believe that we ever allowed third-party cookies and scripts in the first place. Between them, they’re responsible for the worst ills of the World Wide Web.

2019 is the year we apparently reached the breaking point.

The post Blocking Third-Party Hands from the Cookie Jar appeared first on CSS-Tricks.

Patterns for Practical CSS Custom Properties Use

Css Tricks - Wed, 10/09/2019 - 3:56am

I've been playing around with CSS Custom Properties to discover their power since browser support is finally at a place where we can use them in our production code. I’ve been using them in a number of different ways and I’d love for you to get as excited about them as I am. They are so useful and powerful!

I find that CSS variables usage tends to fall into categories. Of course, you’re free to use CSS variables however you like, but thinking of them in these different categories might help you understand the different ways in which they can be used.

  • Variables. The basics, such as setting a brand color to use wherever needed.
  • Default Values. For example, a default border-radius that can be overridden later.
  • Cascading Values. Using clues based on specificity, such as user preferences.
  • Scoped Rulesets. Intentional variations on individual elements, like links and buttons.
  • Mixins. Rulesets intended to bring their values to a new context.
  • Inline Properties. Values passed in from inline styles in our HTML.

The examples we’ll look at are simplified and condensed patterns from a CSS framework I created and maintain called Cutestrap.

A quick note on browser support

There are two common lines of questions I hear when Custom Properties come up. The first is about browser support. What browsers support them? Are there fallbacks we need to use where they aren’t supported?

The global market share that supports the things we’re covering in this post is 85%. Still, it’s worth cross-referencing caniuse with your user base to determine how much and where progressive enhancement makes sense for your project.

The second question is always about how to use Custom Properties. So let’s dive into usage!

Pattern 1: Variables

The first thing we’ll tackle is setting a variable for a brand color as a Custom Property and using it on an SVG element. We'll also use a fallback to cover users on trailing browsers.

html {
  --brand-color: hsl(230, 80%, 60%);
}

.logo {
  fill: pink; /* fallback */
  fill: var(--brand-color);
}

Here, we've declared a variable called --brand-color in our html ruleset. The variable is defined on an element that’s always present, so it will cascade to every element where it’s used. Long story short, we can use that variable in our .logo ruleset.

We declared a pink fallback value for trailing browsers. In the second fill declaration, we pass --brand-color into the var() function, which will return the value we set for that Custom Property.

That’s pretty much how the pattern goes: define the variable (--variable-name) and then use it on an element (var(--variable-name)).

See the Pen
Patterns for Practical Custom Properties: Example 1.0
by Tyler Childs (@tylerchilds)
on CodePen.

Pattern 2: Default values

The var() function we used in the first example can also provide default values in case the Custom Property it is trying to access is not set.

For example, say we give buttons a rounded border. We can create a variable — we’ll call it --roundness — but we won't define it like we did before. Instead, we’ll assign a default value when putting the variable to use.

.button {
  /* --roundness: 2px; */
  border-radius: var(--roundness, 10px);
}

A use case for default values without defining the Custom Property is when your project is still in design but your feature is due today. This makes it a lot easier to update the value later if the design changes.

So, you give your button a nice default, meet your deadline and when --roundness is finally set as a global Custom Property, your button will get that update for free without needing to come back to it.

See the Pen
Patterns for Practical Custom Properties: Example 2.0
by Tyler Childs (@tylerchilds)
on CodePen.

You can edit on CodePen and uncomment the code above to see what the button will look like when --roundness is set!

Pattern 3: Cascading values

Now that we've got the basics under our belt, let's start building the future we owe ourselves. I really miss the personality that AIM and MySpace had by letting users express themselves with custom text and background colors on profiles.

Let’s bring that back and build a school message board where each student can set their own font, background color and text color for the messages they post.

User-based themes

What we’re basically doing is letting students create a custom theme. We'll set the theme configurations inside data-attribute rulesets so that any descendants — a .message element in this case — that consume the themes will have access to those Custom Properties.

.message {
  background-color: var(--student-background, #fff);
  color: var(--student-color, #000);
  font-family: var(--student-font, "Times New Roman", serif);
  margin-bottom: 10px;
  padding: 10px;
}

[data-student-theme="rachel"] {
  --student-background: rgb(43, 25, 61);
  --student-color: rgb(252, 249, 249);
  --student-font: Arial, sans-serif;
}

[data-student-theme="jen"] {
  --student-background: #d55349;
  --student-color: #000;
  --student-font: Avenir, Helvetica, sans-serif;
}

[data-student-theme="tyler"] {
  --student-background: blue;
  --student-color: yellow;
  --student-font: "Comic Sans MS", "Comic Sans", cursive;
}

Here’s the markup:

<section>
  <div data-student-theme="chris">
    <p class="message">Chris: I've spoken at events and given workshops all over the world at conferences.</p>
  </div>
  <div data-student-theme="rachel">
    <p class="message">Rachel: I prefer email over other forms of communication.</p>
  </div>
  <div data-student-theme="jen">
    <p class="message">Jen: This is why I immediately set up my new team with Slack for real-time chat.</p>
  </div>
  <div data-student-theme="tyler">
    <p class="message">Tyler: I miss AIM and MySpace, but this message board is okay.</p>
  </div>
</section>

We have set all of our student themes using [data-student-theme] selectors for our student theme rulesets. The Custom Properties for background, color, and font will apply to our .message ruleset if they are set for that student because .message is a descendant of the div containing the data-attribute that, in turn, contains the Custom Property values to consume. Otherwise, the default values we provided will be used instead.

See the Pen
Patterns for Practical Custom Properties: Example 3.0
by Tyler Childs (@tylerchilds)
on CodePen.
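If students were picking these values at runtime on a real message board, the same Custom Properties could be written from JavaScript instead of being hardcoded in the stylesheet. A minimal sketch, assuming the markup above (the color and font values here are only placeholders):

// Apply one student's picks by setting the Custom Properties inline
// on their wrapper element; inline styles win over the stylesheet values.
const tyler = document.querySelector('[data-student-theme="tyler"]');

tyler.style.setProperty("--student-background", "rebeccapurple");
tyler.style.setProperty("--student-color", "white");
tyler.style.setProperty("--student-font", "Georgia, serif");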

Readable theme override

As fun and cool as it is for users to control custom styles, what users pick won't always be accessible with considerations for contrast, color vision deficiency, or anyone that prefers their eyes to not bleed when reading. Remember the GeoCities days?

Let's add a class that provides a cleaner look and feel and set it on the parent element (<section>) so it overrides any student theme when it’s present.

.readable-theme [data-student-theme] {
  --student-background: hsl(50, 50%, 90%);
  --student-color: hsl(200, 50%, 10%);
  --student-font: Verdana, Geneva, sans-serif;
}

<section class="readable-theme">
  ...
</section>

We’re utilizing the cascade to override the student themes by setting a higher specificity such that the background, color, and font will be in scope and will apply to every .message ruleset.

See the Pen
Patterns for Practical Custom Properties: Example 3.1
by Tyler Childs (@tylerchilds)
on CodePen.

Pattern 4: Scoped rulesets

Speaking of scope, we can scope Custom Properties and use them to streamline what is otherwise boilerplate CSS. For example, we can define variables for different link states.

a {
  --link: hsl(230, 60%, 50%);
  --link-visited: hsl(290, 60%, 50%);
  --link-hover: hsl(230, 80%, 60%);
  --link-active: hsl(350, 60%, 50%);
}

a:link {
  color: var(--link);
}

a:visited {
  color: var(--link-visited);
}

a:hover {
  color: var(--link-hover);
}

a:active {
  color: var(--link-active);
}

<a href="#">Link Example</a>

Now that we've written out the Custom Properties globally on the <a> element and used them on our link states, we don't need to write them again. These are scoped to our <a> element’s ruleset so they are only set on anchor tags and their children. This allows us to not pollute the global namespace.

Example: Grayscale link

Going forward, we can control the links we just created by changing the Custom Properties for our different use cases. For example, let’s create a gray-colored link.

.grayscale {
  --link: LightSlateGrey;
  --link-visited: Silver;
  --link-hover: DimGray;
  --link-active: LightSteelBlue;
}

<a href="#" class="grayscale">Link Example</a>

We’ve declared a .grayscale ruleset that contains the colors for our different link states. Since the selector for this ruleset has a greater specificity then the default, these variable values are used and then applied to the pseudo-class rulesets for our link states instead of what was defined on the <a> element.

See the Pen
Patterns for Practical Custom Properties: Example 4.0
by Tyler Childs (@tylerchilds)
on CodePen.

Example: Custom links

If setting four Custom Properties feels like too much work, what if we set a single hue instead? That could make things a lot easier to manage.

.custom-link {
  --hue: 30;
  --link: hsl(var(--hue), 60%, 50%);
  --link-visited: hsl(calc(var(--hue) + 60), 60%, 50%);
  --link-hover: hsl(var(--hue), 80%, 60%);
  --link-active: hsl(calc(var(--hue) + 120), 60%, 50%);
}

.danger {
  --hue: 350;
}

<a href="#" class="custom-link">Link Example</a>
<a href="#" class="custom-link danger">Link Example</a>

See the Pen
Patterns for Practical Custom Properties: Example 4.1
by Tyler Childs (@tylerchilds)
on CodePen.

By introducing a variable for a hue value and applying it to our HSL color values in the other variables, we merely have to change that one value to update all four link states.

Calculations are powerful in combination with Custom Properties since they let
your styles be more expressive with less effort. Check out this technique by Josh Bader where he uses a similar approach to enforce accessible color contrasts on buttons.

Pattern 5: Mixins

A mixin, in regards to Custom Properties, is a function declared as a Custom Property value. The arguments for the mixin are other Custom Properties that will recalculate the mixin when they’re changed which, in turn, will update styles.

The custom link example we just looked at is actually a mixin. We can set the value for --hue and then each of the four link states will recalculate accordingly.

Example: Baseline grid foundation

Let's learn more about mixins by creating a baseline grid to help with vertical rhythm. This way, our content has a pleasant cadence by utilizing consistent spacing.

.baseline,
.baseline * {
  --rhythm: 2rem;
  --line-height: var(--sub-rhythm, var(--rhythm));
  --line-height-ratio: 1.4;
  --font-size: calc(var(--line-height) / var(--line-height-ratio));
}

.baseline {
  font-size: var(--font-size);
  line-height: var(--line-height);
}

We've applied the ruleset for our baseline grid to a .baseline class and any of its descendants.

  • --rhythm: This is the foundation of our baseline. Updating it will impact all the other properties.
  • --line-height: This is set to --rhythm by default, since --sub-rhythm is not set here.
  • --sub-rhythm: This allows us to override the --line-height — and subsequently, the --font-size — while maintaining the overall baseline grid.
  • --line-height-ratio: This helps enforce a nice amount of spacing between lines of text.
  • --font-size: This is calculated by dividing our --line-height by our --line-height-ratio.

We also set our font-size and line-height in our .baseline ruleset to use the --font-size and --line-height from our baseline grid. In short, whenever the rhythm changes, the line height and font size change accordingly while maintaining a legible experience.

OK, let’s put the baseline to use.

Let's create a tiny webpage. We'll use our --rhythm Custom Property for all of the spacing between elements.

.baseline h2,
.baseline p,
.baseline ul {
  padding: 0 var(--rhythm);
  margin: 0 0 var(--rhythm);
}

.baseline p {
  --line-height-ratio: 1.2;
}

.baseline h2 {
  --sub-rhythm: calc(3 * var(--rhythm));
  --line-height-ratio: 1;
}

.baseline p,
.baseline h2 {
  font-size: var(--font-size);
  line-height: var(--line-height);
}

.baseline ul {
  margin-left: var(--rhythm);
}

<section class="baseline">
  <h2>A Tiny Webpage</h2>
  <p>This is the tiniest webpage. It has three noteworthy features:</p>
  <ul>
    <li>Tiny</li>
    <li>Exemplary</li>
    <li>Identifies as Hufflepuff</li>
  </ul>
</section>

We're essentially using two mixins here: --line-height and --font-size. We need to set the properties font-size and line-height to their Custom Property counterparts in order to set the heading and paragraph. The mixins have been recalculated in those rulesets, but they need to be set before the updated styling will be applied to them.

See the Pen
Patterns for Practical Custom Properties: Example 5.0
by Tyler Childs (@tylerchilds)
on CodePen.
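If you want to sanity-check what the mixins resolve to, you can read the values back in the console with getComputedStyle. A quick sketch, assuming the markup above:

const paragraph = document.querySelector(".baseline p");
const styles = getComputedStyle(paragraph);

// The raw Custom Property value and the final computed styles.
console.log(styles.getPropertyValue("--rhythm")); // "2rem" (whitespace may vary)
console.log(styles.fontSize, styles.lineHeight);  // the resolved pixel values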

Something to keep in mind: You probably do not want to use the Custom Property values in the ruleset itself when applying mixins using a wildcard selector. It gives those styles a higher specificity than any other inheritance that comes along with the cascade, making them hard to override without using !important.

Pattern 6: Inline properties

We can also declare Custom Properties inline. Let's build a lightweight grid system to demonstrate.

.grid {
  --columns: auto-fit;
  display: grid;
  gap: 10px;
  grid-template-columns: repeat(var(--columns), minmax(0, 1fr));
}

<div class="grid">
  <img src="https://www.fillmurray.com/900/600" alt="Bill Murray" />
  <img src="https://www.placecage.com/900/600" alt="Nic Cage" />
  <img src="https://www.placecage.com/g/900/600" alt="Nic Cage gray" />
  <img src="https://www.fillmurray.com/g/900/600" alt="Bill Murray gray" />
  <img src="https://www.placecage.com/c/900/600" alt="Nic Cage crazy" />
  <img src="https://www.placecage.com/gif/900/600" alt="Nic Cage gif" />
</div>

By default, the grid has equally sized columns that will automatically lay themselves into a single row.

See the Pen
Patterns for Practical Custom Properties: Example 6.0
by Tyler Childs (@tylerchilds)
on CodePen.

To control the number of columns we can set our --columns Custom Property
inline on our grid element.

<div class="grid" style="--columns: 3;"> ... </div>

See the Pen
Patterns for Practical Custom Properties: Example 6.1
by Tyler Childs (@tylerchilds)
on CodePen.
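The inline style attribute is also a convenient hook for JavaScript, for example if the column count comes from user input. A minimal sketch, assuming the .grid markup above:

// Setting --columns from a script has the same effect as the
// inline style attribute; the grid re-lays itself out immediately.
const grid = document.querySelector(".grid");
grid.style.setProperty("--columns", "3");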

We just looked at six different use cases for Custom Properties — at least ones that I commonly use. Even if you were already aware of and have been using Custom Properties, hopefully seeing them used these ways gives you a better idea of when and where to use them effectively.

Are there different types of patterns you use with Custom Properties? Share them in the comments and link up some demos — I’d love to see them!

If you’re new to Custom Properties are are looking to level up, try playing around with the examples we covered here, but add media queries to the mix. You’ll see how adaptive these can be and how many interesting opportunities open up when you have the power to change values on the fly.

Plus, there are a ton of other great resources right here on CSS-Tricks to up your Custom Properties game in the Custom Properties Guide.

See the Pen
Thank you for Reading!
by Tyler Childs (@tylerchilds)
on CodePen.

The post Patterns for Practical CSS Custom Properties Use appeared first on CSS-Tricks.

Let’s Not Forget About Container Queries

Css Tricks - Wed, 10/09/2019 - 3:56am

Container queries are always on the top of the list of requested improvements to CSS. The general sentiment is that if we had container queries, we wouldn't write as many global media queries based on page size. That's because we're actually trying to control a more scoped container, and the only reason we use media queries for that now is because it's the best tool we have in CSS. I absolutely believe that.

There is another sentiment that goes around once in a while that goes something like: "you developers think you need container queries but you really don't." I am not a fan of that. It seems terribly obvious that we would do good things with them if they were available, not the least of which is writing cleaner, portable, understandable code. Everyone seems to agree that building UIs from components is the way to go these days which makes the need for container queries all the more obvious.

It's wonderful that there are modern JavaScript ideas that help us use them today — but that doesn't mean the technology needs to stay there. It makes way more sense in CSS.

Here's my late 2019 thought dump on the subject:

  • Philip Walton's "Responsive Components: a Solution to the Container Queries Problem" is a great look at using JavaScript's ResizeObserver as one way to solve the issue today (a rough sketch of the idea follows this list). It performs great and works anywhere. The demo site is the best one out there because it highlights the need for responsive components (although there are other documented use cases as well). Philip even says a pure CSS solution would be more ideal.
  • CSS nesting got a little round of enthusiasm about a year ago. The conversation makes it seem like nesting is plausible. I'm in favor of this as a long-time fan of sensible Sass nesting. It makes me wonder if the syntax for container queries could leverage the same sort of thing. Maybe nested queries are scoped to the parent selector? Or you prefix the media statement with an ampersand as the current spec does with descendant selectors?
  • Other proposed syntaxes generally involve some use of the colon, like .container:media(max-width: 400px) { }. I like that, too. Single-colon selectors (pseudo-selectors) are philosophically "select the element under these conditions" — like :hover, :nth-child, etc. — so a media scope makes sense.
  • I don't think syntax is the biggest enemy of this feature; it's the performance of how it is implemented. Last I understood, it's not even performance as much as it mucks with the entire rendering flow of how browsers work. That seems like a massive hurdle. I still don't wanna forget about it. There is lots of innovation happening on the web and, just because it's not clear how to implement it today, that doesn't mean someone won't figure out a practical path forward tomorrow.
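To make that first bullet a little more concrete, here's a rough sketch of the ResizeObserver idea — not Philip's exact implementation, and the class names, breakpoints, and data attribute below are all made up:

// Toggle classes based on each component's own width, not the viewport's.
const breakpoints = { 'component--medium': 400, 'component--large': 700 };

const ro = new ResizeObserver(entries => {
  entries.forEach(entry => {
    const width = entry.contentRect.width;
    Object.entries(breakpoints).forEach(([className, minWidth]) => {
      entry.target.classList.toggle(className, width >= minWidth);
    });
  });
});

// Observe any element that opts in.
document.querySelectorAll('[data-observe-resizes]').forEach(el => ro.observe(el));

Component styles can then hang off .component--medium and .component--large instead of viewport-based media queries.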

The post Let’s Not Forget About Container Queries appeared first on CSS-Tricks.

A Snippet to See all SVGs in a Sprite

Css Tricks - Tue, 10/08/2019 - 11:14am

I think of an SVG sprite as this:

<svg display="none">
  <symbol id="icon-one"> ... </symbol>
  <symbol id="icon-two"> ... </symbol>
  <symbol id="icon-three"> ... </symbol>
</svg>

I was long a fan of that approach for icon systems (<use>-ing them as needed), but I favor including the SVGs directly as needed these days. Still, sprites are fine, and fairly popular.

What if you have a sprite, and you wanna see what's in it?

Here's a tiny bit of JavaScript that will loop through all the symbols it finds and inject an SVG that uses each one...

const sprite = document.querySelector("#sprite");
const symbols = sprite.querySelectorAll("symbol");

symbols.forEach(symbol => {
  document.body.insertAdjacentHTML("beforeend", `
    <svg width="50" height="50">
      <use xlink:href="#${symbol.id}" />
    </svg>
  `)
});

See the Pen
Visually turn a sprite into individual SVGs
by Chris Coyier (@chriscoyier)
on CodePen.

That's all.

The post A Snippet to See all SVGs in a Sprite appeared first on CSS-Tricks.

Clipping, Clipping, and More Clipping!

Css Tricks - Tue, 10/08/2019 - 4:18am

There are so many things you can do with clipping paths. I've been exploring them for quite some time and have come up with different techniques and use cases for them — and I want to share my findings with you! I hope this will spark new ideas for fun things you can do with the CSS clip-path property. Hopefully, you'll either put them to use on your projects or play around and have fun with them.

Before we dive in, it’s worth mentioning that this is my third post here on CSS-Tricks about clip paths. You might want to check those out for a little background:

This article is full of new ideas!

Idea 1: The Double Clip

One neat trick is to use clipping paths to cut content many times. It might sound obvious, but I haven’t seen many people using this concept.

For example, let’s look at an expanding menu:

See the Pen
The more menu
by Mikael Ainalem (@ainalem)
on CodePen.

Clipping can only be applied to a DOM node once. A node cannot have several active instances of the same CSS rule, so that means one clip-path per instance. Yet, there is no upper limit for how many times you can combine clipped nodes. We can, for example, place a clipped <div> inside another clipped <div> and so on. In the ancestry of DOM nodes, we can apply as many separate cuts as we want.

That’s exactly what I did in the demo above. I let a clipped node fill out another clipped node. The parent acts as a boundary, which the child fills up while zooming in. This creates an unusual effect where a rounded menu appears. Think of it as an advanced method of overflow: hidden.
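Here's a bare-bones sketch of the double clip idea — not the menu demo itself, just two nested elements where the child's growing cut stays bounded by the parent's cut (selectors and shapes are made up):

.outer {
  width: 300px;
  height: 300px;
  /* first cut: the boundary */
  clip-path: circle(150px at center);
}

.inner {
  width: 100%;
  height: 100%;
  background: rebeccapurple;
  /* second cut: applied to the child, inside the already-clipped parent */
  clip-path: circle(40px at 75% 25%);
  transition: clip-path 0.5s;
}

.outer:hover .inner {
  /* grow the inner cut past the box; the parent's cut keeps it circular */
  clip-path: circle(350px at 75% 25%);
}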

You can, of course, argue that SVGs are better suited for this purpose. Compared to clipping paths, SVG is capable of doing a lot more. Among other things, SVG provides smooth scaling. If clipping fully supported Bézier curves, the conversation would be different. This is not the case at the time of writing. Regardless, clipping paths are very convenient. One node, one CSS rule and you're good to go. As far as the demo above is concerned, clipping paths do a good job and thus are a viable option.

I put together a short video that explains the inner workings of the menu:

Idea 2: Zooming Clip Paths

Another (less obvious) trick is to use clipping paths for zooming. We can actually use CSS transitions to animate clipping paths!

The transition system is quite astonishing in how it's built. In my opinion, its addition to the web is one of the biggest leaps that the web has taken in recent years. It supports transitions between a whole range of different values. Clipping paths are among the accepted values we can animate. Animation, in general, means interpolation between two extremes. For clipping, this translates to an interpolation between two complete, different paths. Here's where the web's refined animation system shows its strength. It doesn’t only work with single values — it also works when animating sets of values.

When animating clipping paths specifically, each coordinate gets interpolated separately. This is important. It makes clipping path animations look coherent and smooth.

Let's look at the demo. Click on an image to restart the effect:

See the Pen
Brand cut zoom
by Mikael Ainalem (@ainalem)
on CodePen.

I’m using clip-path transitions in this demo. It's used to zoom in from one clipping path covering a tiny region going into another huge one. The smallest version of the clipping path is much smaller than the resolution — in other words, it's invisible to the eye when applied. The other extreme is slightly bigger than the viewport. At this zoom level, no cuts are visible since all clipping takes place outside the visible area. Animating between these two different clipping paths creates an interesting effect. The clipped shape seems to reveal the content behind it as it zooms in.

As you may have noticed, the demo uses different shapes. In this case, I'm using logos of popular shoe brands. This gives you an idea of what the effect would look like when used in a more realistic scenario.

Again, here’s a video that walks through the technical stuff in fine detail:

Idea 3: A Clipped Overlay

Another idea is to use clipping paths to create highlight effects. For example, let’s say we want to use clipping paths to create an active state in a menu.

See the Pen
Skewed stretchy menu
by Mikael Ainalem (@ainalem)
on CodePen.

The clipped path above stretches between the different menu options when it animates. Besides, we’re using an interesting shape to make the UI stand out a bit.

The demo uses an altered copy of the same content where the duplicate copy sits on top of the existing content. It's placed in the exact same position as the menu and serves as the active state. In essence, it appears like any other regular active state for a menu. The difference is that it's created with clipping paths rather than fancy CSS styles on HTML elements.

Using clipping enables creating some unusual effects here. The skewed shape is one thing, but we also get the stretchy effect as well. The menu comes with two separate cuts — one on the left and one on the right — which makes it possible to animate the cuts with different timing using transition delays. The result is a stretchy animation with very little effort. As the default easing is non-linear, the delay causes a slight rubber band effect.

The second trick here is to apply the delays depending on direction. If the active state needs to move to the right, then the right side needs to start animating first, and vice versa. I get the directional awareness by using a dash of JavaScript to apply the correct class accordingly on clicks.

Idea 4: Slices of the Pie

How often do you see a circular expanding menu on the web? Preposterous, right!? Well, clipping paths not only make it possible but fairly trivial as well.

See the Pen
The circular menu
by Mikael Ainalem (@ainalem)
on CodePen.

We normally see menus that contain links ordered in a single line or even in dropdowns, like the first trick we looked at. What we’re doing here instead is placing those links inside arcs rather than rectangles. Using rectangles would be the conventional way, of course. The idea here is to explore a more mobile-friendly interaction with two specific UX principles in mind:

  • A clear target that is comfortable to tap with a thumb
  • Change takes place close to the focal point — the place where your visual focus is at the moment

The demo is not specifically about clipping paths. I just happen to use clipping paths to create the pen. Again, like the expandable menu demo earlier, it's a question of convenience. With clipping and a border radius of 50%, I get the arcs I need in no time.
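As a rough illustration of that combination, a single 90-degree slice could be carved out of a circle like this (sizes and colors are arbitrary):

.slice {
  width: 200px;
  height: 200px;
  background: tomato;
  /* border-radius rounds the box into a circle... */
  border-radius: 50%;
  /* ...and the polygon keeps only the top-right quarter of it */
  clip-path: polygon(50% 50%, 50% 0, 100% 0, 100% 50%);
}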

Idea 5: The Toggle

Toggles never cease to amaze web developers like us. It seems like someone introduces a new interpretation of a toggle every week. Well, here’s mine:

See the Pen
Inverted toggle
by Mikael Ainalem (@ainalem)
on CodePen.

The demo is a remake of this dribbble shot by Oleg Frolov. It combines all three of the techniques we covered in this article. Those are:

  • The double clip
  • Zooming clip paths
  • A clipped overlay

All these on/off switches seem to have one thing in common. They consist of an oval background and a circle, resembling real mechanical switches. The way this toggle works is by scaling up a circular clipping path inside a rounded container. The container cuts the content by overflow: hidden, i.e. double clipping.

Another key part of the demo is having two alternating versions in markup. They are the original and its yin-yang inverted mirrored copy. Using two versions instead of one is, at risk of being repetitive, a matter of convenience. With two, we only need to create a transition for the first version. Then, we can reuse most of it for the second. At the end of the transition the toggle switches over to the opposite version. As the inverted version is identical with the previous end state, it's impossible to spot the shift. The good thing about this technique is reusing parts of the animation. The drawback is the jank we get when interrupting the animation. That happens when the user punches the toggle before the animation has completed.

Let's again have a look under the hood:

Closing words

You might think: Exploration is one thing, but what about production? Can I use clipping paths on a site I’m currently working on? Is it ready for prime time?

Well, that question doesn’t have a straightforward answer. There are, among other things, two problems to take a closer look at:

1. Browser support
2. Performance

At the time of writing there is, according to caniuse, about 93% browser support. I'd say we're on the verge of mass adoption. Note that this number takes the WebKit prefix into account.

There's also always the IE argument but it's really no argument to me. I can't see that it's worthwhile to go the extra mile for IE. Should you create workarounds for an insecure browser? Your users are better off with a modern browser. There are, of course, a few rare cases where legacy is a must. But you probably won't consider modern CSS at all in those cases.

How about performance then? Well, performance gets tricky as things mount up, but nothing that I'd say would prevent us from using clipping paths today. It's always measured performance that counts. It's probable that clipping, on average, causes a bigger performance hit than other CSS rules. But remember that the practices we've covered here are recommendations, not law. Treat them as such. Make a habit out of measuring performance.

Go on, cut your web pages in pieces. See what happens!

The post Clipping, Clipping, and More Clipping! appeared first on CSS-Tricks.

Wufoo Cracks the Code for Forms So You Don’t Have To

Css Tricks - Tue, 10/08/2019 - 4:17am

There was a lot of buzz about forms last week when Jason Grigsby pointed to a missing pattern attribute on Chipotle's order form that could have helped push through millions of dollars in orders. Adrian Roselli followed that up with the common mistake of forgetting for and id attributes on form inputs and the potential cost of it.
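For reference, the fixes being talked about are tiny — something along these lines (a hypothetical field, not Chipotle's actual markup):

<!-- The label is tied to the input with for/id, and pattern plus
     inputmode help mobile keyboards and validation do the right thing -->
<label for="card-number">Card number</label>
<input
  id="card-number"
  name="card-number"
  type="text"
  inputmode="numeric"
  pattern="[0-9 ]+"
  autocomplete="cc-number">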

Forms are hard. And that's without thinking about more complex features, like building conditional logic into questions, getting into validation, triggering emails on submission, handling inputs on different devices, storing submissions, or integrating with other services, among many, many other things. Forms aren't just hard, they are downright complicated.

That's why I'm glad there's a company like Wufoo that has all of that sorted out. There's been many a time where I convince myself I can build a form myself only to abandon the idea for an embedded Wufoo form instead.

Why Wufoo? First off, it's been around forever. They focus on forms and forms alone, so I'm confident they know exactly what they're doing. I get all the semantic markup I want based on their tried and tested product and adding it to my (or any other) site is as easy as dropping in a snippet.

Plus, Wufoo continues to innovate! They're releasing new features all the time. Just this past month, they shipped a new Zapier integration that opens up a ton of possibilities, like sending submissions to a Google spreadsheet, firing off submission notifications in Slack, creating Trello cards from submissions, and more. And again, that's on top of an already stacked featured set that offers everything from multi-page forms and showing and hiding fields conditionally to collecting payments and allowing file uploads over a secure encrypted connection.

You can see where we use Wufoo here on CSS-Tricks to power the contact form. What's cool about that simple form is we can direct email notifications to specific inboxes based on the contact selection. It even integrates with Mailchimp, so we can offer an option to sign up for our newsletter directly in the contact form.

We also decided to use Wufoo to improve the way we accept guest posts (and you should definitely submit an idea). We used to lean on plain ol' email and the contact form, but using Wufoo has allowed us to level up so we can collect more details about a post submission upfront and tailor the form based on the type of submission it is.

I'd say Wufoo is great for any type of form. It handles anything you throw at it, easily integrates into any site, and helps prevent costly mistakes that are apparently worth gobs of cash for some companies.

The post Wufoo Cracks the Code for Forms So You Don’t Have To appeared first on CSS-Tricks.

Some Hands-On with the HTML Dialog Element

Css Tricks - Mon, 10/07/2019 - 10:25am

This is me looking at the HTML <dialog> element for the first time. I've been aware of it for a while, but haven't taken it for a spin yet. It has some pretty cool and compelling features. I can't decide for you if you should use it in production on your sites, but I'd think it's starting to be possible.

It's not just a semantic element, it has APIs and special CSS.

We'll get to that stuff in a moment, but it's notable because it makes the browser support stuff significant.

When we first got HTML5 elements like <article>, it pretty much didn't matter if the browser supported it or not because nothing was worse-off in those scenarios if you used it. You could make it block-level and it was just like a meaningless div you would have used anyway.

That said, I wouldn't just use <dialog> as a "more semantic <div> replacement." It's got too much functionality for that.

Let's do the browser support thing.

As I write:

  • Chrome's got it (37+), so Edge is about to get it.
  • Firefox has the User-Agent (UA) styles in place (69+), but the functionality is behind a dom.dialog_element.enabled flag. Even with the flag, it doesn't look like we get the CSS stuff yet.
  • No support from Safari.

It's polyfillable.

It's certainly more compelling to use features with a better support than this, but I'd say it's close and it might just cross the line if you're the polyfilling type anyway.

On to the juicy stuff.

A dialog is either open or not.

<dialog>
  I'm closed.
</dialog>

<dialog open>
  I'm open.
</dialog>

There is some UA styling to them.

It seems to match in Chrome and Firefox. The UA stylesheet has it centered with auto margins, thick black lines, sized to content.

See the Pen
Basic Open Dialog
by Chris Coyier (@chriscoyier)
on CodePen.

Like any UA styles, you'll almost surely override them with your own fancy dialog styles — shadows and typography and whatever else matches your site's style.

There is a JavaScript API for opening and closing them.

Say you have a reference to the element named dialog:

dialog.show();
dialog.close();

See the Pen
Basic Togglable Dialog
by Chris Coyier (@chriscoyier)
on CodePen.

You should probably use this more explicit command though:

dialog.showModal();

That's what makes the backdrop work (and we'll get to that soon). I'm not sure I quite grok it, but the spec talks about a "pending dialog stack" and this API will open the modal pending that stack. Here's a modal that can open a second stacking modal:

See the Pen
Basic Open Dialog
by Chris Coyier (@chriscoyier)
on CodePen.

There's also an HTML-based way close them.

If you put a special kind of form in there, submitting the form will close the modal.

<form method="dialog">
  <button>Close</button>
</form>

See the Pen
Dialog with Form that Closes Dialog
by Chris Coyier (@chriscoyier)
on CodePen.

Notice that if you programmatically open the dialog, you get a backdrop cover.

This has always been one of the more finicky things about building your own dialogs. A common UI pattern is to darken the background behind the dialog to focus attention on the dialog.

We get that for free with <dialog>, assuming you open it via JavaScript. You control the look of it with the ::backdrop pseudo-element. Instead of the low-opacity black default, let's do red with stripes:

See the Pen
Custom Dialog Backdrop
by Chris Coyier (@chriscoyier)
on CodePen.

Cool bonus: you can use backdrop-filter to do stuff like blur the background.

See the Pen
Native Dialog
by Chris Coyier (@chriscoyier)
on CodePen.
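Put together, a backdrop rule can be as small as this — the color and blur radius here are arbitrary:

dialog::backdrop {
  background: rgba(0, 0, 0, 0.4);
  backdrop-filter: blur(5px);
}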

It moves focus for you

I don't know much about this stuff, but I can fire up VoiceOver on my Mac and see that the dialog comes into focus when I trigger the button that opens the modal.

It doesn't trap focus there, and I hear that's ideal for modals. We have a clever idea for that utilizing CSS you could explore.

Rob Dodson said: "modals are actually the boss battle at the end of web accessibility." Kinda nice that the native browser version helps with a lot of that. You even automatically get the Escape key closing functionality, which is great. There's no click outside to close, though. Perhaps someday pending user feedback.
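If you want click-outside-to-close behavior anyway, one common approach (a sketch, not something the element gives you) relies on the fact that clicks on the backdrop are dispatched to the <dialog> element itself, so you can close it when the click lands outside the dialog's box:

dialog.addEventListener("click", (event) => {
  const rect = dialog.getBoundingClientRect();
  const clickedInside =
    event.clientX >= rect.left && event.clientX <= rect.right &&
    event.clientY >= rect.top && event.clientY <= rect.bottom;
  if (!clickedInside) {
    dialog.close();
  }
});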

Ire's article is a go-to resource for building your own dialogs and a ton of accessibility considerations when using them.

The post Some Hands-On with the HTML Dialog Element appeared first on CSS-Tricks.

Introducing Sass Modules

Css Tricks - Mon, 10/07/2019 - 4:24am

Sass just launched a major new feature you might recognize from other languages: a module system. This is a big step forward for @import, one of the most-used Sass features. While the current @import rule allows you to pull in third-party packages, and split your Sass into manageable "partials," it has a few limitations:

  • @import is also a CSS feature, and the differences can be confusing
  • If you @import the same file multiple times, it can slow down compilation, cause override conflicts, and generate duplicate output.
  • Everything is in the global namespace, including third-party packages – so my color() function might override your existing color() function, or vice versa.
  • When you use a function like color(), it’s impossible to know exactly where it was defined. Which @import does it come from?

Sass package authors (like me) have tried to work around the namespace issues by manually prefixing our variables and functions — but Sass modules are a much more powerful solution. In brief, @import is being replaced with more explicit @use and @forward rules. Over the next few years Sass @import will be deprecated, and then removed. You can still use CSS imports, but they won’t be compiled by Sass. Don’t worry, there’s a migration tool to help you upgrade!

Import files with @use

@use 'buttons';

The new @use is similar to @import, but has some notable differences:

  • The file is only imported once, no matter how many times you @use it in a project.
  • Variables, mixins, and functions (what Sass calls "members") that start with an underscore (_) or hyphen (-) are considered private, and not imported.
  • Members from the used file (buttons.scss in this case) are only made available locally, but not passed along to future imports.
  • Similarly, @extends will only apply up the chain; extending selectors in imported files, but not extending files that import this one.
  • All imported members are namespaced by default.

When we @use a file, Sass automatically generates a namespace based on the file name:

@use 'buttons'; // creates a `buttons` namespace
@use 'forms'; // creates a `forms` namespace

We now have access to members from both buttons.scss and forms.scss — but that access is not transferred between the imports: forms.scss still has no access to the variables defined in buttons.scss. Because the imported features are namespaced, we have to use a new period-divided syntax to access them:

// variables: <namespace>.$variable
$btn-color: buttons.$color;
$form-border: forms.$input-border;

// functions: <namespace>.function()
$btn-background: buttons.background();
$form-border: forms.border();

// mixins: @include <namespace>.mixin()
@include buttons.submit();
@include forms.input();

We can change or remove the default namespace by adding as <name> to the import:

@use 'buttons' as *; // the star removes any namespace
@use 'forms' as 'f';

$btn-color: $color; // buttons.$color without a namespace
$form-border: f.$input-border; // forms.$input-border with a custom namespace

Using as * adds a module to the root namespace, so no prefix is required, but those members are still locally scoped to the current document.

Import built-in Sass modules

Internal Sass features have also moved into the module system, so we have complete control over the global namespace. There are several built-in modules — math, color, string, list, map, selector, and meta — which have to be imported explicitly in a file before they are used:

@use 'sass:math';

$half: math.percentage(1/2);

Sass modules can also be imported to the global namespace:

@use 'sass:math' as *;

$half: percentage(1/2);

Internal functions that already had prefixed names, like map-get or str-index, can be used without duplicating that prefix:

@use 'sass:map';
@use 'sass:string';

$map-get: map.get(('key': 'value'), 'key');
$str-index: string.index('string', 'i');

You can find a full list of built-in modules, functions, and name changes in the Sass module specification.

New and changed core features

As a side benefit, this means that Sass can safely add new internal mixins and functions without causing name conflicts. The most exciting example in this release is a sass:meta mixin called load-css(). This works similar to @use but it only returns generated CSS output, and it can be used dynamically anywhere in our code:

@use 'sass:meta';

$theme-name: 'dark';

[data-theme='#{$theme-name}'] {
  @include meta.load-css($theme-name);
}

The first argument is a module URL (like @use) but it can be dynamically changed by variables, and even include interpolation, like theme-#{$name}. The second (optional) argument accepts a map of configuration values:

// Configure the $base-color variable in 'theme/dark' before loading
@include meta.load-css(
  'theme/dark',
  $with: ('base-color': rebeccapurple)
);

The $with argument accepts configuration keys and values for any variable in the loaded module, if it is both:

  • A global variable that doesn’t start with _ or - (now used to signify privacy)
  • Marked as a !default value, to be configured
// theme/_dark.scss
$base-color: black !default; // available for configuration
$_private: true !default; // not available because private
$config: false; // not available because not marked as a !default

Note that the 'base-color' key will set the $base-color variable.

There are two more sass:meta functions that are new: module-variables() and module-functions(). Each returns a map of member names and values from an already-imported module. These accept a single argument matching the module namespace:

@use 'forms';

$form-vars: module-variables('forms');
// (
//   button-color: blue,
//   input-border: thin,
// )

$form-functions: module-functions('forms');
// (
//   background: get-function('background'),
//   border: get-function('border'),
// )

Several other sass:meta functions — global-variable-exists(), function-exists(), mixin-exists(), and get-function() — will get additional $module arguments, allowing us to inspect each namespace explicitly.

Adjusting and scaling colors

The sass:color module also has some interesting caveats, as we try to move away from some legacy issues. Many of the legacy shortcuts like lighten() or adjust-hue() are deprecated for now in favor of explicit color.adjust() and color.scale() functions:

// previously lighten(red, 20%)
$light-red: color.adjust(red, $lightness: 20%);

// previously adjust-hue(red, 180deg)
$complement: color.adjust(red, $hue: 180deg);

Some of those old functions (like adjust-hue) are redundant and unnecessary. Others — like lighten, darken, saturate, and so on — need to be re-built with better internal logic. The original functions were based on adjust(), which uses linear math: adding 20% to the current lightness of red in our example above. In most cases, we actually want to scale() the lightness by a percentage, relative to the current value:

// 20% of the distance to white, rather than current-lightness + 20
$light-red: color.scale(red, $lightness: 20%);

Once fully deprecated and removed, these shortcut functions will eventually re-appear in sass:color with new behavior based on color.scale() rather than color.adjust(). This is happening in stages to avoid sudden backwards-breaking changes. In the meantime, I recommend manually checking your code to see where color.scale() might work better for you.

Configure imported libraries

Third-party or re-usable libraries will often come with default global configuration variables for you to override. We used to do that with variables before an import:

// _buttons.scss
$color: blue !default;

// old.scss
$color: red;
@import 'buttons';

Since used modules no longer have access to local variables, we need a new way to set those defaults. We can do that by adding a configuration map to @use:

@use 'buttons' with (
  $color: red,
  $style: 'flat',
);

This is similar to the $with argument in load-css(), but rather than using variable-names as keys, we use the variable itself, starting with $.

I love how explicit this makes configuration, but there’s one rule that has tripped me up several times: a module can only be configured once, the first time it is used. Import order has always been important for Sass, even with @import, but those issues always failed silently. Now we get an explicit error, which is both good and sometimes surprising. Make sure to @use and configure libraries first thing in any "entrypoint" file (the central document that imports all partials), so that those configurations compile before other @use of the libraries.
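In practice, that entrypoint ends up looking something like this sketch (file and library names are made up):

// styles.scss (the entrypoint)
// Configure the library once, before anything else loads it...
@use 'buttons' with ($color: red);

// ...then pull in the partials that also @use it, unconfigured.
@use 'layout';
@use 'components';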

It’s (currently) impossible to "chain" configurations together while keeping them editable, but you can wrap a configured module along with extensions, and pass that along as a new module.

Pass along files with @forward

We don’t always need to use a file, and access its members. Sometimes we just want to pass it along to future imports. Let’s say we have multiple form-related partials, and we want to import all of them together as one namespace. We can do that with @forward:

// forms/_index.scss
@forward 'input';
@forward 'textarea';
@forward 'select';
@forward 'buttons';

Members of the forwarded files are not available in the current document and no namespace is created, but those variables, functions, and mixins will be available when another file wants to @use or @forward the entire collection. If the forwarded partials contain actual CSS, that will also be passed along without generating output until the package is used. At that point it will all be treated as a single module with a single namespace:

// styles.scss
@use 'forms'; // imports all of the forwarded members in the `forms` namespace

Note: if you ask Sass to import a directory, it will look for a file named index or _index.

By default, all public members will forward with a module. But we can be more selective by adding show or hide clauses, and naming specific members to include or exclude:

// forward only the 'input' border() mixin, and $border-color variable
@forward 'input' show border, $border-color;

// forward all 'buttons' members *except* the gradient() function
@forward 'buttons' hide gradient;

Note: when functions and mixins share a name, they are shown and hidden together.

In order to clarify source, or avoid naming conflicts between forwarded modules, we can use as to prefix members of a partial as we forward:

// forms/_index.scss
// @forward "<url>" as <prefix>-*;
// assume both modules include a background() mixin
@forward 'input' as input-*;
@forward 'buttons' as btn-*;

// style.scss
@use 'forms';

@include forms.input-background();
@include forms.btn-background();

And, if we need, we can always @use and @forward the same module by adding both rules:

@forward 'forms';
@use 'forms';

That’s particularly useful if you want to wrap a library with configuration or any additional tools, before passing it along to your other files. It can even help simplify import paths:

// _tools.scss
// only use the library once, with configuration
@use 'accoutrement/sass/tools' with (
  $font-path: '../fonts/',
);

// forward the configured library with this partial
@forward 'accoutrement/sass/tools';

// add any extensions here...


// _anywhere-else.scss
// import the wrapped-and-extended library, already configured
@use 'tools';

Both @use and @forward must be declared at the root of the document (not nested), and at the start of the file. Only @charset and simple variable definitions can appear before the import commands.

Moving to modules

In order to test the new syntax, I built a new open source Sass library (Cascading Color Systems) and a new website for my band — both still under construction. I wanted to understand modules as both a library and website author. Let’s start with the "end user" experience of writing site styles with the module syntax…

Maintaining and writing styles

Using modules on the website was a pleasure. The new syntax encourages a code architecture that I already use. All my global configuration and tool imports live in a single directory (I call it config), with an index file that forwards everything I need:

// config/_index.scss
@forward 'tools';
@forward 'fonts';
@forward 'scale';
@forward 'colors';

As I build out other aspects of the site, I can import those tools and configurations wherever I need them:

// layout/_banner.scss
@use '../config';

.page-title {
  @include config.font-family('header');
}

This even works with my existing Sass libraries, like Accoutrement and Herman, that still use the old @import syntax. Since the @import rule will not be replaced everywhere overnight, Sass has built in a transition period. Modules are available now, but @import will not be deprecated for another year or two — and only removed from the language a year after that. In the meantime, the two systems will work together in either direction:

  • If we @import a file that contains the new @use/@forward syntax, only the public members are imported, without namespace.
  • If we @use or @forward a file that contains legacy @import syntax, we get access to all the nested imports as a single namespace.

That means you can start using the new module syntax right away, without waiting for a new release of your favorite libraries: and I can take some time to update all my libraries!

Migration tool

Upgrading shouldn’t take long if we use the Migration Tool built by Jennifer Thakar. It can be installed with Node, Chocolatey, or Homebrew:

npm install -g sass-migrator
choco install sass-migrator
brew install sass/sass/migrator

This is not a single-use tool for migrating to modules. Now that Sass is back in active development (see below), the migration tool will also get regular updates to help migrate each new feature. It’s a good idea to install this globally, and keep it around for future use.

The migrator can be run from the command line, and will hopefully be added to third-party applications like CodeKit and Scout as well. Point it at a single Sass file, like style.scss, and tell it what migration(s) to apply. At this point there’s only one migration called module:

# sass-migrator <migration> <entrypoint.scss...>
sass-migrator module style.scss

By default, the migrator will only update a single file, but in most cases we’ll want to update the main file and all its dependencies: any partials that are imported, forwarded, or used. We can do that by mentioning each file individually, or by adding the --migrate-deps flag:

sass-migrator --migrate-deps module style.scss

For a test-run, we can add --dry-run --verbose (or -nv for short), and see the results without changing any files. There are a number of other options that we can use to customize the migration — even one specifically for helping library authors remove old manual namespaces — but I won’t cover all of them here. The migration tool is fully documented on the Sass website.

Updating published libraries

I ran into a few issues on the library side, specifically trying to make user-configurations available across multiple files, and working around the missing chained-configurations. The ordering errors can be difficult to debug, but the results are worth the effort, and I think we’ll see some additional patches coming soon. I still have to experiment with the migration tool on complex packages, and possibly write a follow-up post for library authors.

The important thing to know right now is that Sass has us covered during the transition period. Not only can imports and modules work together, but we can create "import-only" files to provide a better experience for legacy users still @importing our libraries. In most cases, this will be an alternative version of the main package file, and you’ll want them side-by-side: <name>.scss for module users, and <name>.import.scss for legacy users. Any time a user calls @import <name>, it will load the .import version of the file:

// load _forms.scss
@use 'forms';

// load _forms.import.scss
@import 'forms';

This is particularly useful for adding prefixes for non-module users:

// _forms.import.scss
// Forward the main module, while adding a prefix
@forward "forms" as forms-*;

Upgrading Sass

You may remember that Sass had a feature-freeze a few years back, to get various implementations (LibSass, Node Sass, Dart Sass) all caught up, and eventually retired the original Ruby implementation. That freeze ended last year, with several new features and active discussions and development on GitHub – but not much fanfare. If you missed those releases, you can get caught up on the Sass Blog:

Dart Sass is now the canonical implementation, and will generally be the first to implement new features. If you want the latest, I recommend making the switch. You can install Dart Sass with Node, Chocolatey, or Homebrew. It also works great with existing gulp-sass build steps.

Much like CSS (since CSS3), there is no longer a single unified version-number for new releases. All Sass implementations are working from the same specification, but each one has a unique release schedule and numbering, reflected with support information in the beautiful new documentation designed by Jina.

Sass Modules are available as of October 1st, 2019 in Dart Sass 1.23.0.

The post Introducing Sass Modules appeared first on CSS-Tricks.

Breakout Buttons

Css Tricks - Fri, 10/04/2019 - 10:29am

Andy covers a technique where a semantic <button> is used within a card component, but really, the whole card is clickable. The trick is to put a pseudo-element that goes beyond the button, covering the entire card. The tradeoff is that the pseudo-element sits on top of the text, so text selection is hampered a bit. I believe this is better than making the whole dang area a <button> because that would sacrifice semantics and likely cause extreme weirdness for assistive technology.
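The gist of it in CSS is tiny — a bare-bones sketch (class names are mine, not Andy's exact code):

.card {
  /* the card becomes the positioning context for the pseudo-element */
  position: relative;
}

.card button::after {
  /* stretch an invisible hit area over the whole card */
  content: "";
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
}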

See the Pen
Semantic, progressively enhanced “break-out” button
by Andy Bell (@andybelldesign)
on CodePen.

You could do the same thing if your situation requires an <a> link instead of a <button>, but if that's the case, you actually can wrap the whole area in the link without much grief then wrap the part that appears to be a button in a span or something to make it look like a button.

This reminds me of the nested link problem: a large linked block that contains other different linked areas in it. Definitely can't nest anchor links. Sara Soueidan had the best answer where the "covering" link is placed within the card and absolutely positioned to cover the area while other links inside could be layered on top with z-index.

I've moved her solution to a Pen for reference:

See the Pen
Nested Links Solution
by Chris Coyier (@chriscoyier)
on CodePen.

The post Breakout Buttons appeared first on CSS-Tricks.

Using GitHub Template Repos to Jump-Start Static Site Projects

Css Tricks - Fri, 10/04/2019 - 4:18am

If you’re getting started with static site generators, did you know you can use GitHub template repositories to quickly start new projects and reduce your setup time?

Most static site generators make installation easy, but each project still requires configuration after installation. When you build a lot of similar projects, you may duplicate effort during the setup phase. GitHub template repositories may save you a lot of time if you find yourself:

  • creating the same folder structures from previous projects,
  • copying and pasting config files from previous projects, and
  • copying and pasting boilerplate code from previous projects.

Unlike forking a repository, which allows you to use someone else’s code as a starting point, template repositories allow you to use your own code as a starting point, where each new project gets its own, independent Git history. Check it out!

Let’s take a look at how we can set up a convenient workflow. We’ll set up a boilerplate Eleventy project, turn it into a Git repository, host the repository on GitHub, and then configure that repository to be a template. Then, next time you have a static site project, you’ll be able to come back to the repository, click a button, and start working from an exact copy of your boilerplate.

Are you ready to try it out? Let’s set up our own static site using GitHub templates to see just how much templates can help streamline a static site project.

I’m using Eleventy as an example of a static site generator because it’s my personal go-to, but this process will work for Hugo, Jekyll, Nuxt, or any other flavor of static site generator you prefer.

If you want to see the finished product, check out my static site template repository.

First off, let’s create a template folder

We're going to kick things off by running each of these in the command line:

cd ~
mkdir static-site-template
cd static-site-template

These three commands change directory into your home directory (~ in Unix-based systems), make a new directory called static-site-template, and then change directory into the static-site-template directory.

Next, we’ll initialize the Node project

In order to work with Eleventy, we need to install Node.js which allows your computer to run JavaScript code outside of a web browser.

Node.js comes with node package manager, or npm, which downloads node packages to your computer. Eleventy is a node package, so we can use npm to fetch it.

Assuming Node.js is installed, let’s head back to the command line and run:

npm init

This creates a file called package.json in the directory. npm will prompt you for a series of questions to fill out the metadata in your package.json. After answering the questions, the Node.js project is initialized.

Now we can install Eleventy

Initializing the project gave us a package.json file which lets npm install packages, run scripts, and do other tasks for us inside that project. npm uses package.json as an entry point in the project to figure out precisely how and what it should do when we give it commands.

We can tell npm to install Eleventy as a development dependency by running:

npm install -D @11ty/eleventy

This will add a devDependency entry to the package.json file and install the Eleventy package to a node_modules folder in the project.

The cool thing about the package.json file is that any other computer with Node.js and npm can read it and know to install Eleventy in the project node_modules directory without having to install it manually. See, we're already streamlining things!

Configuring Eleventy

There are tons of ways to configure an Eleventy project. Flexibility is Eleventy’s strength. For the purposes of this tutorial, I’m going to demonstrate a configuration that provides:

  • A folder to cleanly separate website source code from overall project files
  • An HTML document for a single page website
  • CSS to style the document
  • JavaScript to add functionality to the document

Hop back in the command line. Inside the static-site-template folder, run these commands one by one (excluding the comments that appear after each # symbol):

mkdir src               # creates a directory for your website source code
mkdir src/css           # creates a directory for the website styles
mkdir src/js            # creates a directory for the website JavaScript
touch src/index.html    # creates the website HTML document
touch src/css/style.css # creates the website styles
touch src/js/main.js    # creates the website JavaScript

This creates the basic file structure that will inform the Eleventy build. However, if we run Eleventy right now, it won’t generate the website we want. We still have to configure Eleventy to understand that it should only use files in the src folder for building, and that the css and js folders should be processed with passthrough file copy.

You can give this information to Eleventy through a file called .eleventy.js in the root of the static-site-template folder. You can create that file by running this command inside the static-site-template folder:

touch .eleventy.js

Edit the file in your favorite text editor so that it contains this:

module.exports = function(eleventyConfig) {
  eleventyConfig.addPassthroughCopy("src/css");
  eleventyConfig.addPassthroughCopy("src/js");
  return {
    dir: {
      input: "src"
    }
  };
};

Lines 2 and 3 tell Eleventy to use passthrough file copy for CSS and JavaScript. Line 6 tells Eleventy to use only the src directory to build its output.

Eleventy will now give us the expected output we want. Let’s put that to the test by running this in the command line:

npx @11ty/eleventy

The npx command allows npm to execute code from the project node_module directory without touching the global environment. You’ll see output like this:

Writing _site/index.html from ./src/index.html.
Copied 2 items and Processed 1 file in 0.04 seconds (v0.9.0)

The static-site-template folder should now have a new directory in it called _site. If you dig into that folder, you’ll find the css and js directories, along with the index.html file.

This _site folder is the final output from Eleventy. It is the entirety of the website, and you can host it on any static web host.

Without any content, styles, or scripts, the generated site isn’t very interesting:

Let’s create a boilerplate website

Next up, we’re going to put together the baseline for a super simple website we can use as the starting point for all projects moving forward.

It’s worth mentioning that Eleventy has a ton of boilerplate files for different types of projects. It’s totally fine to go with one of these though I often find I wind up needing to roll my own. So that’s what we’re doing here.

Drop this markup into src/index.html:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <title>Static site template</title>
  <meta name="description" content="A static website">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="css/style.css">
</head>
<body>
  <h1>Great job making your website template!</h1>
  <script src="js/main.js"></script>
</body>
</html>

We may as well style things a tiny bit, so let’s add this to src/css/style.css:

body { font-family: sans-serif; }

And we can confirm JavaScript is hooked up by adding this to src/js/main.js:

(function() { console.log('Invoke the static site template JavaScript!'); })();

Want to see what we’ve got? Run npx @11ty/eleventy --serve in the command line. Eleventy will spin up a server with Browsersync and provide the local URL, which is probably something like localhost:8080.

Even the console tells us things are ready to go!

Let’s move this over to a GitHub repo

Git is the most commonly used version control system in software development. Most Unix-based computers come with it installed, and you can turn any directory into a Git repository by running this command:

git init

We should get a message like this:

Initialized empty Git repository in /path/to/static-site-template/.git/

That means a hidden .git folder was added inside the project directory, which allows the Git program to run commands against the project.

Before we start running a bunch of Git commands on the project, we need to tell Git about files we don’t want it to touch.

Inside the static-site-template directory, run:

touch .gitignore

Then open up that file in your favorite text editor. Add this content to the file:

_site/
node_modules/

This tells Git to ignore the node_modules directory and the _site directory. Committing every single Node.js module to the repo could make things really messy and tough to manage. All that information is already in package.json anyway.

Similarly, there’s no need to version control _site. Eleventy can generate it from the files in src, so no need to take up space in GitHub. It’s also possible that if we were to:

  • version control _site,
  • change files in src, or
  • forget to run Eleventy again,

then _site will reflect an older build of the website, and future developers (or a future version of yourself) may accidentally use an outdated version of the site.

Git is version control software, and GitHub is a Git repository host. There are other Git host providers like BitBucket or GitLab, but since we’re talking about a GitHub-specific feature (template repositories), we’ll push our work up to GitHub. If you don’t already have an account, go ahead and join GitHub. Once you have an account, create a GitHub repository and name it static-site-template.

GitHub will ask a few questions when setting up a new repository. One of those is whether we want to create a new repository on the command line or push an existing repository from the command line. Neither of these choices are exactly what we need. They assume we either don’t have anything at all, or we have been using Git locally already. The static-site-template project already exists, has a Git repository initialized, but doesn’t yet have any commits on it.

So let’s ignore the prompts and instead run the following commands in the command line. Make sure to have the URL GitHub provides in the command from line 3 handy:

git add .
git commit -m "first commit"
git remote add origin https://github.com/your-username/static-site-template.git
git push -u origin master

This adds the entire static-site-template folder to the Git staging area. It commits it with the message "first commit," adds a remote repository (the GitHub repository), and then pushes up the master branch to that repository.

Let’s template-ize this thing

OK, this is the crux of what we have been working toward. GitHub templates allows us to use the repository we’ve just created as the foundation for other projects in the future — without having to do all the work we’ve done to get here!

Click Settings on the GitHub landing page of the repository to get started. On the settings page, check the button for Template repository.

Now when we go back to the repository page, we’ll get a big green button that says Use this template. Click it and GitHub will create a new repository that’s a mirror of our new template. The new repository will start with the same files and folders as static-site-template. From there, download or clone that new repository to start a new project with all the base files and configuration we set up in the template project.

We can extend the template for future projects

Now that we have a template repository, we can use it for any new static site project that comes up. However, you may find that a new project has needs beyond what’s been set up in the template. For example, let’s say you need to tap into Eleventy’s templating engine or data processing power.

Go ahead and build on top of the template as you work on the new project. When you finish that project, identify pieces you want to reuse in future projects. Perhaps you figured out a cool hover effect on buttons. Or you built your own JavaScript carousel element. Or maybe you’re really proud of the document design and hierarchy of information.

If you think anything you did on a project might come up again on your next run, remove the project-specific details and add the new stuff to your template project. Push those changes up to GitHub, and the next time you use static-site-template to kick off a project, your reusable code will be available to you.

There are some limitations to this, of course

GitHub template repositories are a useful tool for avoiding repetitive setup on new web development projects. I find this especially useful for static site projects. These template repositories might not be as appropriate for more complex projects that require external services like databases with configuration that cannot be version-controlled in a single directory.

Template repositories allow you to ship reusable code you have written so you can solve a problem once and use that solution over and over again. But while your new solutions will carry over to future projects, they won’t be ported backwards to old projects.

This is a useful process for sites with very similar structure, styles, and functionality. Projects with wildly varied requirements may not benefit from this code-sharing, and you could end up bloating your project with unnecessary code.

Wrapping up

There you have it! You now have everything you need to not only start a static site project using Eleventy, but the power to re-purpose it on future projects. GitHub templates are so handy for kicking off projects quickly where we otherwise would have to re-build the same wheel over and over. Use them to your advantage and enjoy a jump start on your projects moving forward!

The post Using GitHub Template Repos to Jump-Start Static Site Projects appeared first on CSS-Tricks.

Why Progressive Web Apps Are The Future of Mobile Web

Css Tricks - Fri, 10/04/2019 - 4:16am

Here’s one of the best essays I’ve ever read about why progressive web apps are important, how they work, and what impact they have on a business:

PWAs are powerful, effective, fast and app-like.

It’s hard to imagine a mobile web property that could not be significantly improved via PWA implementation. They can also potentially eliminate the need for many “vanity” native apps that exist today.

My only small disagreement with this piece is their use of the term “mobile web.” I know it’s a tiny thing to get persnickety over but my hot take after reading it is this: it’s important to remember that progressive web apps are for everyone, desktop and mobile users alike. I think it’s important to reiterate that there is no mobile web. And that our goal is to be better than native.

Direct Link to ArticlePermalink

The post Why Progressive Web Apps Are The Future of Mobile Web appeared first on CSS-Tricks.

Weekly Platform News: Tracking via Web Storage, First Input Delay, Navigating by Headings

Css Tricks - Thu, 10/03/2019 - 2:45pm

In this week's roundup, Safari takes on cross-site tracking, the delay between load and user interaction is greater on mobile, and a new survey says headings are a popular way for screen readers to navigate a webpage.

Let's get into the news.

Safari’s tracking prevention limits web storage

Some social networks and other websites that engage in cross-site tracking add a query string or fragment to external links for tracking purposes. (Apple calls this "abuse of link decoration.")

When people navigate from websites with tracking abilities to other websites via such "decorated" links, Safari will expire the cookies that are created on the loaded web pages after 24 hours. This has led some trackers to start using other types of storage (e.g. localStorage) to track people on websites.

As a new countermeasure, Safari will now delete all non-cookie website data in these scenarios if the user hasn’t interacted with the website for seven days.

The reason why we cap the lifetime of script-writable storage is simple. Site owners have been convinced to deploy third-party scripts on their websites for years. Now those scripts are being repurposed to circumvent browsers’ protections against third-party tracking. By limiting the ability to use any script-writable storage for cross-site tracking purposes, [Safari’s tracking prevention] makes sure that third-party scripts cannot leverage the storage powers they have gained over all these websites.

(via John Wilander)

First Input Delay is much worse on mobile

First Input Delay (FID), the delay until the page is able to respond to the user, is much worse on mobile: Only 13% of websites have a fast FID on mobile, compared to 70% on desktop.

Tip: If your website is popular enough to be included in the Chrome UX Report, you can check your site’s mobile vs. desktop FID data on PageSpeed Insights.

(via Rick Viscomi)

Screen reader users navigate web pages by headings

According to WebAIM’s recent screen reader user survey, the most popular screen readers are NVDA (41%) and JAWS (40%) on desktop (primary screen reader) and VoiceOver (71%) and TalkBack (33%) on mobile (commonly used screen readers).

When trying to find information on a web page, most screen reader users navigate the page through the headings (<h1>, <h2>, <h3>, etc.).

The usefulness of proper heading structures is very high, with 86.1% of respondents finding heading levels very or somewhat useful.

Tip: You can check a web page’s heading structure with W3C’s Nu Html Checker (enable the "outline" option).

(via WebAIM)

More news...

Read even more news in my weekly Sunday issue that can be delivered to you via email every Monday morning.

More News

The post Weekly Platform News: Tracking via Web Storage, First Input Delay, Navigating by Headings appeared first on CSS-Tricks.

Awards That Look Beyond the Flashy

Css Tricks - Thu, 10/03/2019 - 8:11am

Dan Mall is judging the Communication Arts Interactive 2020 awards. These types of things are usually a celebration of flashy, short-lived, one-off designs. Those things are awesome, but Dan has more in mind:

I’d love to award work that demonstrates creative use of the highest level of color contrast ratios and works well on assistive devices. I’d love to award work that’s still useful when JavaScript fails. I’d love to award work that shows smart thinking and strategy in addition to flawless execution and art direction. I’d love to award work that serves business and user needs.

Direct Link to Article | Permalink

The post Awards That Look Beyond the Flashy appeared first on CSS-Tricks.

Adaptive Photo Layout with Flexbox

Css Tricks - Thu, 10/03/2019 - 4:24am

Let’s take a look at a super lightweight way to create a horizontal masonry effect for a set of arbitrarily-sized photos. Throw any set of photos at it, and they will line up edge-to-edge with no gaps anywhere.

The solution is not only lightweight but also quite simple. We’ll be using an unordered list of images and just 17 lines of CSS, with the heavy lifters being flexbox and object-fit.

Why?

I have two hobbies: documenting my life with photos, and figuring out interesting ways to combine CSS properties, both old and new.

A couple of weeks ago, I attended XOXO and shot a ton of photos which I narrowed down to a nice little set of 39. Wanting to own my content, I’ve spent the past couple of years thinking about putting together a simple photo blog, but was never able to nail the layout I had in mind: a simple masonry layout where photos fill out rows while respecting their aspect ratio (think Photos.app on iOS, Google Photos, Flickr…).

I did some research to see if there were any lightweight, non-JavaScript options, but couldn’t find anything suiting my needs. Facing some delayed flights, I started playing around with some code, limiting myself to keep things as simple as possible (because that’s my definition of fun).

Basic Markup

Since I’m basically creating a list of images, I opted for an unordered list:

<ul>
  <li>
    <img>
  </li>
  <!-- ... -->
  <li>
    <img>
  </li>
</ul>

All hail flexbox

Then came a string of lightbulb moments:

  • Flexbox is great for filling up rows by determining cell width based on cell content.
  • This meant the images (landscape or portrait) all needed to have the same height.
  • I could use object-fit: cover; to make sure the images filled the cells.

In theory, this sounded like a solid plan, and it got me a result I was about 90% happy with.

ul {
  display: flex;
  flex-wrap: wrap;
}

li {
  height: 40vh;
  flex-grow: 1;
}

img {
  max-height: 100%;
  min-width: 100%;
  object-fit: cover;
  vertical-align: bottom;
}

Note: 40vh seemed like the best initial approach for desktop browsers, showing two full rows of photos at a reasonable size, and hinting at more below. This also allowed more photos per line, which dramatically improves the aspect ratios.

Last row stretchiness

The only issue I ran into is that flexbox really wants to fill all the lines, and it did some silly things to the aspect ratios of the photos on the last row. To work around this (probably my least favorite bit of this layout), I had to add an empty <li> element at the end of the list:

<ul>
  <li>
    <img>
  </li>
  <!-- ... -->
  <li>
    <img>
  </li>
  <li></li>
</ul>

Combined with this bit of CSS:

li:last-child { flex-grow: 10; }

Note: There's no science in using "10" here. In all my testing, this delivered the best results.

Demo

See the Pen
Adaptive Photo Layout
by Tim Van Damme (@maxvoltar)
on CodePen.

Viewport optimization

There are some considerations to keep in mind when working in different viewport orientations.

Portrait

If your viewport is taller than it is wide, this approach limits the number of photos per line, messing up their aspect ratios. To solve this, you can make the photo rows less tall with this simple media query:

@media (max-aspect-ratio: 1/1) {
  li {
    height: 30vh;
  }
}

Short Screens

To help with small devices in landscape, increasing the height of photos helps to see them as large as possible:

@media (max-height: 480px) {
  li {
    height: 80vh;
  }
}

Smaller Screens + Portrait

Most phones aren’t wide enough to allow flexbox to properly do its job without miniaturizing the photos, so here I opted not to try to fit multiple photos per line. Still, it’s worth setting a maximum height so you’ll at least get a hint of the next photo in the list.

@media (max-aspect-ratio: 1/1) and (max-width: 480px) {
  ul {
    flex-direction: row;
  }

  li {
    height: auto;
    width: 100%;
  }

  img {
    width: 100%;
    max-height: 75vh;
    min-width: 0;
  }
}

There we have it!

This approach doesn’t fully respect the aspect ratios of photos (but it’s close) and occasionally leads to some weird results, but I absolutely love the simplicity and flexibility of it all. Want to have your gallery scroll horizontally instead of vertically? A couple of tweaks will allow you to do this. Are there 1,000 photos in the gallery or just one? It’ll all look good. Unclear about aspect ratios? Flexbox is your best friend. Take another look at the demo if you haven't yet, and let me know what you think!
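If you’re curious about the horizontal variation mentioned above, here’s a minimal, untested sketch of one way to approach it, assuming the gallery gets a fixed height so the columns know where to wrap:

ul {
  display: flex;
  flex-flow: column wrap; /* fill columns top to bottom, then wrap to the right */
  height: 90vh;           /* fixed height gives the columns a wrapping point */
  overflow-x: auto;       /* scroll sideways through the wrapped columns */
}

li {
  height: auto;
  width: 40vw;            /* column width takes over the role of row height */
}

You’d likely also want to adjust the img sizing rules so width, rather than height, becomes the constrained dimension.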

Bonus

Depending on the size of these photos, a page like this can grow to multiple megabytes real quick. On the blog I’m working on, I’ve added loading="lazy" to help with this. With that attribute in place on the images, the browser only loads photos as you approach them while scrolling. It’s only supported in Chrome for now, but you could roll your own more widely supported technique.
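The markup change is tiny; something like this, where the file name and alt text are just placeholders:

<li>
  <img src="photo-01.jpg" alt="Attendees at XOXO" loading="lazy">
</li>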

The post Adaptive Photo Layout with Flexbox appeared first on CSS-Tricks.
