Front End Web Development

A Look at JAMstack’s Speed, By the Numbers

Css Tricks - Fri, 11/01/2019 - 4:29am

People say JAMstack sites are fast — let’s find out why by looking at real performance metrics! We’ll cover common metrics, like Time to First Byte (TTFB) among others, then compare data across a wide section of sites to see how different ways to slice those sites up compare.

First, I’d like to present a small analysis to provide some background. According to the HTTPArchive metrics report on page loading, users wait an average of 6.7 seconds to see primary content.

First Contentful Paint (FCP) - measures the point at which text or graphics are first rendered to the screen.

The FCP distribution for the 10th, 50th and 90th percentile values as reported on August 1, 2019.

If we are talking about engagement with a page (Time to Interactive), users wait even longer. The average time to interactive is 9.3 seconds.

Time to Interactive (TTI) - the point at which a user can interact with a page without delay.

TTI distribution for the 10th, 50th and 90th percentile values as reported on August 1, 2019.

State of the real user web performance

The data above comes from lab monitoring and doesn't fully represent real user experience. Real user data taken from the Chrome User Experience Report (CrUX) paints an even broader picture.

I'll use data aggregated from users on mobile devices. Specifically, we will look at metrics like:

Time To First Byte

TTFB represents the time the browser waits to receive the first bytes of the response from the server. TTFB ranges from 200ms to 1 second for users around the world. That's a pretty long time to receive the first chunks of the page.

TTFB mobile speed distribution (CrUX, July 2019)

First Contentful Paint

FCP happens after 2.5 seconds for 23% of page views around the world.

FCP mobile speed distribution (CrUX, July 2019)

First Input Delay

FID metrics show how fast web pages respond to user input (e.g. click, scroll, etc.).

CrUX doesn't have TTI data due to various restrictions, but it does have FID, which can reflect page interactivity even better. For over 75% of mobile user experiences, the input delay is under 50ms, meaning users didn't experience any jank.

FID mobile speed distribution (CrUX, July 2019)

You can use the queries below and play with them on this site.

Data from July 2019

[
  {
    "date": "2019_07_01",
    "timestamp": "1561939200000",
    "client": "desktop",
    "fastTTFB": "27.33",
    "avgTTFB": "46.24",
    "slowTTFB": "26.43",
    "fastFCP": "48.99",
    "avgFCP": "33.17",
    "slowFCP": "17.84",
    "fastFID": "95.78",
    "avgFID": "2.79",
    "slowFID": "1.43"
  },
  {
    "date": "2019_07_01",
    "timestamp": "1561939200000",
    "client": "mobile",
    "fastTTFB": "23.61",
    "avgTTFB": "46.49",
    "slowTTFB": "29.89",
    "fastFCP": "38.58",
    "avgFCP": "38.28",
    "slowFCP": "23.14",
    "fastFID": "75.13",
    "avgFID": "17.95",
    "slowFID": "6.92"
  }
]

BigQuery

#standardSQL
SELECT
  REGEXP_REPLACE(yyyymm, '(\\d{4})(\\d{2})', '\\1_\\2_01') AS date,
  UNIX_DATE(CAST(REGEXP_REPLACE(yyyymm, '(\\d{4})(\\d{2})', '\\1-\\2-01') AS DATE)) * 1000 * 60 * 60 * 24 AS timestamp,
  IF(device = 'desktop', 'desktop', 'mobile') AS client,
  ROUND(SUM(fast_fcp) * 100 / (SUM(fast_fcp) + SUM(avg_fcp) + SUM(slow_fcp)), 2) AS fastFCP,
  ROUND(SUM(avg_fcp) * 100 / (SUM(fast_fcp) + SUM(avg_fcp) + SUM(slow_fcp)), 2) AS avgFCP,
  ROUND(SUM(slow_fcp) * 100 / (SUM(fast_fcp) + SUM(avg_fcp) + SUM(slow_fcp)), 2) AS slowFCP,
  ROUND(SUM(fast_fid) * 100 / (SUM(fast_fid) + SUM(avg_fid) + SUM(slow_fid)), 2) AS fastFID,
  ROUND(SUM(avg_fid) * 100 / (SUM(fast_fid) + SUM(avg_fid) + SUM(slow_fid)), 2) AS avgFID,
  ROUND(SUM(slow_fid) * 100 / (SUM(fast_fid) + SUM(avg_fid) + SUM(slow_fid)), 2) AS slowFID
FROM `chrome-ux-report.materialized.device_summary`
WHERE yyyymm = '201907'
GROUP BY date, timestamp, client
ORDER BY date DESC, client

State of Content Management Systems (CMS) performance

CMSs should have become our saviors, helping us build faster sites. But looking at the data, that is not the case. The current state of CMS performance around the world is not so great.

TTFB mobile speed distribution comparison between all web and CMS (CrUX, July 2019)

Data from July 2019

[
  {
    "freq": "1548851",
    "fast": "0.1951",
    "avg": "0.4062",
    "slow": "0.3987"
  }
]

BigQuery

#standardSQL
SELECT
  COUNT(DISTINCT origin) AS freq,
  ROUND(SUM(IF(ttfb.start < 200, ttfb.density, 0)) / SUM(ttfb.density), 4) AS fastTTFB,
  ROUND(SUM(IF(ttfb.start >= 200 AND ttfb.start < 1000, ttfb.density, 0)) / SUM(ttfb.density), 4) AS avgTTFB,
  ROUND(SUM(IF(ttfb.start >= 1000, ttfb.density, 0)) / SUM(ttfb.density), 4) AS slowTTFB
FROM `chrome-ux-report.all.201907`,
  UNNEST(experimental.time_to_first_byte.histogram.bin) AS ttfb
JOIN (
  SELECT url, app
  FROM `httparchive.technologies.2019_07_01_mobile`
  WHERE category = 'CMS'
)
ON CONCAT(origin, '/') = url
ORDER BY freq DESC

And here are the FCP results:

FCP mobile speed distribution comparison between all web and CMS (CrUX, July 2019)

At least the FID results are a bit better:

FID mobile speed distribution comparison between all web and CMS (CrUX, July 2019)

Data from July 2019

[
  {
    "freq": "546415",
    "fastFCP": "0.2873",
    "avgFCP": "0.4187",
    "slowFCP": "0.2941",
    "fastFID": "0.8275",
    "avgFID": "0.1183",
    "slowFID": "0.0543"
  }
]

BigQuery

#standardSQL
SELECT
  COUNT(DISTINCT origin) AS freq,
  ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density), 4) AS fastFCP,
  ROUND(SUM(IF(fcp.start >= 1000 AND fcp.start < 2500, fcp.density, 0)) / SUM(fcp.density), 4) AS avgFCP,
  ROUND(SUM(IF(fcp.start >= 2500, fcp.density, 0)) / SUM(fcp.density), 4) AS slowFCP,
  ROUND(SUM(IF(fid.start < 50, fid.density, 0)) / SUM(fid.density), 4) AS fastFID,
  ROUND(SUM(IF(fid.start >= 50 AND fid.start < 250, fid.density, 0)) / SUM(fid.density), 4) AS avgFID,
  ROUND(SUM(IF(fid.start >= 250, fid.density, 0)) / SUM(fid.density), 4) AS slowFID
FROM `chrome-ux-report.all.201907`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp,
  UNNEST(experimental.first_input_delay.histogram.bin) AS fid
JOIN (
  SELECT url, app
  FROM `httparchive.technologies.2019_07_01_mobile`
  WHERE category = 'CMS'
)
ON CONCAT(origin, '/') = url
ORDER BY freq DESC

As you can see, sites built with a CMS don't perform much better than the web overall.

You can find performance distribution across different CMSs on this HTTPArchive forum discussion.

E-Commerce websites, a good example of sites that are typically built on a CMS, have really bad stats for page views:

  • ~40% - around 1 second for TTFB
  • ~30% - more than 1.5 seconds for FCP
  • ~12% - lag when interacting with the page

I've had clients request support for IE10 and IE11 because traffic from those users represented 1% of the total, which equalled millions of dollars in revenue. Now calculate your losses if 1% of users leave immediately and never come back because of bad performance. If users aren't happy, the business will be unhappy, too.

To get more details about how web performance correlates with revenue, check out WPO Stats. It’s a list of case studies from real companies and their success after improving performance.

JAMstack helps improve web performance

Credit: Snipcart

With JAMstack, developers do as little rendering on the client as possible, instead using server infrastructure for most things. Not to mention, most JAMstack workflows are great at handling deployments and helping with scalability, among other benefits. Content is stored statically on a static file host and served to users via a CDN.

Read Mathieu Dionne's "New to JAMstack? Everything You Need to Know to Get Started" for a great place to become more familiar with JAMstack.

I have two years of experience working with one of the popular CMSs for e-commerce, and we had a lot of problems with deployments, performance, and scalability. The team would spend days fixing them. That's not what customers want. These are the sorts of big issues JAMstack solves.

Looking at the CrUX data, JAMstack site performance looks really solid. The following values are based on sites served by Netlify and GitHub. There is some discussion on the HTTPArchive forum where you can participate to make the data more accurate.

Here are the results for TTFB:

TTFB mobile speed distribution comparison between all web, CMS and JAMstack sites (CrUX, July 2019)

Data from July 2019

[
  {
    "n": "7627",
    "fastTTFB": "0.377",
    "avgTTFB": "0.5032",
    "slowTTFB": "0.1198"
  }
]

BigQuery

#standardSQL
SELECT
  COUNT(DISTINCT origin) AS n,
  ROUND(SUM(IF(ttfb.start < 200, ttfb.density, 0)) / SUM(ttfb.density), 4) AS fastTTFB,
  ROUND(SUM(IF(ttfb.start >= 200 AND ttfb.start < 1000, ttfb.density, 0)) / SUM(ttfb.density), 4) AS avgTTFB,
  ROUND(SUM(IF(ttfb.start >= 1000, ttfb.density, 0)) / SUM(ttfb.density), 4) AS slowTTFB
FROM `chrome-ux-report.all.201907`,
  UNNEST(experimental.time_to_first_byte.histogram.bin) AS ttfb
JOIN (
  SELECT url, REGEXP_EXTRACT(LOWER(CONCAT(respOtherHeaders, resp_x_powered_by, resp_via, resp_server)), '(netlify|x-github-request)') AS platform
  FROM `httparchive.summary_requests.2019_07_01_mobile`
)
ON CONCAT(origin, '/') = url
WHERE platform IS NOT NULL
ORDER BY n DESC

Here's how FCP shook out:

FCP mobile speed distribution comparison between all web, CMS and JAMstack sites (CrUX, July 2019)

Now let's look at FID:

FID mobile speed distribution comparison between all web, CMS and JAMstack sites (CrUX, July 2019)

Data from July 2019

[
  {
    "n": "4136",
    "fastFCP": "0.5552",
    "avgFCP": "0.3126",
    "slowFCP": "0.1323",
    "fastFID": "0.9263",
    "avgFID": "0.0497",
    "slowFID": "0.024"
  }
]

BigQuery

#standardSQL
SELECT
  COUNT(DISTINCT origin) AS n,
  ROUND(SUM(IF(fcp.start < 1000, fcp.density, 0)) / SUM(fcp.density), 4) AS fastFCP,
  ROUND(SUM(IF(fcp.start >= 1000 AND fcp.start < 2500, fcp.density, 0)) / SUM(fcp.density), 4) AS avgFCP,
  ROUND(SUM(IF(fcp.start >= 2500, fcp.density, 0)) / SUM(fcp.density), 4) AS slowFCP,
  ROUND(SUM(IF(fid.start < 50, fid.density, 0)) / SUM(fid.density), 4) AS fastFID,
  ROUND(SUM(IF(fid.start >= 50 AND fid.start < 250, fid.density, 0)) / SUM(fid.density), 4) AS avgFID,
  ROUND(SUM(IF(fid.start >= 250, fid.density, 0)) / SUM(fid.density), 4) AS slowFID
FROM `chrome-ux-report.all.201907`,
  UNNEST(first_contentful_paint.histogram.bin) AS fcp,
  UNNEST(experimental.first_input_delay.histogram.bin) AS fid
JOIN (
  SELECT url, REGEXP_EXTRACT(LOWER(CONCAT(respOtherHeaders, resp_x_powered_by, resp_via, resp_server)), '(netlify|x-github-request)') AS platform
  FROM `httparchive.summary_requests.2019_07_01_mobile`
)
ON CONCAT(origin, '/') = url
WHERE platform IS NOT NULL
ORDER BY n DESC

The numbers show that JAMstack sites perform the best. And the numbers are pretty much the same for mobile and desktop, which is even more impressive!

Some highlights from engineering leaders

Let me show you a couple of examples from some prominent folks in the industry:

Out of 468 million requests in the @HTTPArchive, 48% were not served from a CDN. I've visualized where they were served from below. Many of them were requests to 3rd parties. The client requesting them was in Redwood City, CA. Latency matters. #WebPerf pic.twitter.com/0F7nFa1QgM

— Paul Calvano (@paulcalvano) August 29, 2019

JAMstack sites are generally CDN-hosted, which mitigates TTFB. Since the file hosting is handled by infrastructure like Amazon Web Services or similar, the performance of every site on it can be improved with a single fix.

Another real-world investigation shows that delivering static HTML results in better FCP.

Which has a better First Meaningful Paint time?

  • a raw 8.5MB HTML file with the full text of every single one of my 27,506 tweets
  • a client rendered React site with exactly one tweet on it

(Spoiler: @____lighthouse reports 8.5MB of HTML wins by about 200ms)

— Zach Leatherman (@zachleat) September 6, 2019

Here's a comparison for all results shown above together:

Mobile speed distribution comparison between all web, CMS and JAMstack sites (CrUX, July 2019)

JAMstack brings better performance to the web by statically serving pages with CDNs. This is important because a fast back-end that takes a long time to reach users will be slow, and likewise, a slow back-end that is quick to reach users will also be slow.

JAMstack hasn't won the perf race yet, because the number of sites built with it isn't as large as the number built on, say, a CMS, but the intention to win it is clearly there.

Adding these metrics to a performance budget can be one way to make sure you are building good performance into your workflow. Something like:

  • TTFB: 200ms
  • FCP: 1s
  • FID: 50ms
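If you want to check those numbers against real visits, the browser's performance APIs expose all three metrics. Here's a minimal sketch of that idea, assuming a browser that supports the Navigation Timing, Paint Timing and Event Timing APIs; the budget values simply mirror the list above:

// Warn whenever one of the budgeted metrics is exceeded.
const budget = { ttfb: 200, fcp: 1000, fid: 50 };

// TTFB: responseStart of the navigation entry.
const [nav] = performance.getEntriesByType('navigation');
if (nav && nav.responseStart > budget.ttfb) {
  console.warn(`TTFB over budget: ${Math.round(nav.responseStart)}ms`);
}

// FCP: the paint entry named "first-contentful-paint".
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint' && entry.startTime > budget.fcp) {
      console.warn(`FCP over budget: ${Math.round(entry.startTime)}ms`);
    }
  }
}).observe({ type: 'paint', buffered: true });

// FID: the delay between the first input and the moment the browser could handle it.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const delay = entry.processingStart - entry.startTime;
    if (delay > budget.fid) {
      console.warn(`FID over budget: ${Math.round(delay)}ms`);
    }
  }
}).observe({ type: 'first-input', buffered: true });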

Spend it wisely 🙂

Editor’s note: Artem Denysov is from Stackbit, which is a service that helps tremendously with spinning up JAMstack sites and more upcoming tooling to smooth out some of the workflow edges with JAMstack sites and content. Artem told me he’d like to thank Rick Viscomi, Rob Austin, and Aleksey Kulikov for their help in reviewing the article.

The post A Look at JAMstack’s Speed, By the Numbers appeared first on CSS-Tricks.

Comparing the Different Types of Native JavaScript Popups

Css Tricks - Thu, 10/31/2019 - 4:13am

JavaScript has a variety of built-in popup APIs that display special UI for user interaction. Famously:

alert("Hello, World!");

The UI for this varies from browser to browser, but generally you’ll see a little window pop up front and center in a very show-stopping way that contains the message you just passed. Here’s Firefox and Chrome:

Native popups in Firefox (left) and Chrome (right). Note the additional UI in Firefox that prevents additional dialogs from triggering more than once. You can also see how Chrome pins the dialog to the top of the window.

There is one big problem you should know about up front

JavaScript popups are blocking.

The entire page essentially stops when a popup is open. You can’t interact with anything on the page while one is open — that’s kind of the point of a “modal” but it’s still a UX consideration you should be keenly aware of. And crucially, no other main-thread JavaScript is running while the popup is open, which could (and probably is) unnecessarily preventing your site from doing things it needs to do.
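If you want to see the blocking for yourself, here is a rough little experiment you can paste into the console; leave the alert open for a few seconds before dismissing it:

let ticks = 0;
setInterval(() => ticks++, 100);

setTimeout(() => {
  // While the alert is open, timers (and everything else on the main thread) are paused.
  alert("Main-thread JavaScript is paused while I'm open.");
  // This logs far fewer ticks than the wall-clock time would suggest,
  // because the interval never fired while the alert was up.
  console.log("Interval fired", ticks, "times");
}, 1000);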

Nine times out of ten, you’d be better off architecting things so that you don’t have to use such heavy-handed stop-everything behavior. Native JavaScript alerts are also implemented by browsers in such a way that you have zero design control. You can’t control *where* they appear on the page or what they look like when they get there. Unless you absolutely need the complete blocking nature of them, it’s almost always better to use a custom user interface that you can design to tailor the experience for the user.

With that out of the way, let’s look at each one of the native popups.

window.alert();

window.alert("Hello World");

<button onclick="alert('Hello, World!');">Show Message</button>

const button = document.querySelector("button");

button.addEventListener("click", () => {
  alert("Text of button: " + button.innerText);
});

See the Pen
alert("Example");
by Elliot KG (@ElliotKG)
on CodePen.

What it’s for: Displaying a simple message or debugging the value of a variable.

How it works: This function takes a string and presents it to the user in a popup with a button with an “OK” label. You can only change the message and not any other aspect, like what the button says.

The Alternative: Like the other alerts, if you have to present a message to the user, it’s probably better to do it in a way that’s tailor-made for what you’re trying to do.

If you're trying to debug the value of a variable, consider console.log("Value of variable:", variable); and looking in the console.

window.confirm();

window.confirm("Are you sure?");

<button onclick="confirm('Would you like to play a game?');">Ask Question</button>

let answer = window.confirm("Do you like cats?");
if (answer) {
  // User clicked OK
} else {
  // User clicked Cancel
}

See the Pen
confirm("Example");
by Elliot KG (@ElliotKG)
on CodePen.

What it’s for: “Are you sure?”-style messages to see if the user really wants to complete the action they’ve initiated.

How it works: You can provide a custom message and popup will give you the option of “OK” or “Cancel,” a value you can then use to see what was returned.

The Alternative: This is a very intrusive way to prompt the user. As Aza Raskin puts it:

...maybe you don't want to use a warning at all.

There are any number of ways to ask a user to confirm something. Probably a clear UI with a <button>Confirm</button> wired up to do what you need it to do.
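As a rough sketch of that idea (the markup and IDs here are made up for illustration), a non-blocking confirmation can be as simple as two buttons:

<p>Delete this file?</p>
<button id="confirm-delete">Confirm</button>
<button id="cancel-delete">Cancel</button>

<script>
  // Unlike window.confirm(), nothing here blocks the main thread;
  // the rest of the page keeps working while we wait for a click.
  document.querySelector("#confirm-delete").addEventListener("click", () => {
    // ...perform the action the user confirmed
  });
  document.querySelector("#cancel-delete").addEventListener("click", () => {
    // ...dismiss the UI without doing anything destructive
  });
</script>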

window.prompt();

window.prompt("What’s your name?");

let answer = window.prompt("What is your favorite color?");
// answer is what the user typed in, if anything

See the Pen
prompt("Example?", "Default Example");
by Elliot KG (@ElliotKG)
on CodePen.

What it’s for: Prompting the user for an input. You provide a string (probably formatted like a question) and the user sees a popup with that string, an input they can type into, and “OK” and “Cancel” buttons.

How it works: If the user clicks OK, you’ll get what they entered into the input. If they enter nothing and click OK, you’ll get an empty string. If they choose Cancel, the return value will be null.

The Alternative: Like all of the other native JavaScript alerts, this doesn’t allow you to style or position the alert box. It’s probably better to use a <form> to get information from the user. That way you can provide more context and purposeful design.
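For instance, a tiny form (purely illustrative markup) gives you labels, validation and full control over styling and placement, none of which prompt() offers:

<form id="name-form">
  <label for="visitor-name">What's your name?</label>
  <input id="visitor-name" name="visitor-name" required>
  <button type="submit">OK</button>
</form>

<script>
  document.querySelector("#name-form").addEventListener("submit", (event) => {
    event.preventDefault(); // stay on the page
    const name = event.target.elements["visitor-name"].value;
    console.log("The visitor's name is:", name);
  });
</script>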

window.onbeforeunload();

window.addEventListener("beforeunload", (event) => {
  // Standard requires the default to be cancelled.
  event.preventDefault();
  // Chrome requires returnValue to be set (via MDN)
  event.returnValue = '';
});

See the Pen
Example of beforeunload event
by Chris Coyier (@chriscoyier)
on CodePen.

What it’s for: Warn the user before they leave the page. That sounds like it could be very obnoxious, but it isn’t often used obnoxiously. It’s used on sites where you can be doing work and need to explicitly save it. If the user hasn’t saved their work and is about to navigate away, you can use this to warn them. If they *have* saved their work, you should remove it.

How it works: If you’ve attached the beforeunload event to the window (and done the extra things as shown in the snippet above), users will see a popup asking them to confirm if they would like to “Leave” or “Cancel” when attempting to leave the page. Leaving the site may be because the user clicked a link, but it could also be the result of clicking the browser’s refresh or back buttons. You cannot customize the message.

MDN warns that some browsers require the page to be interacted with for it to work at all:

To combat unwanted pop-ups, some browsers don't display prompts created in beforeunload event handlers unless the page has been interacted with. Moreover, some don't display them at all.

The Alternative: Nothing that comes to mind. If this is a matter of a user losing work or not, you kinda have to use this. And if they choose to stay, you should be clear about what they should do to make sure it's safe to leave.
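One common pattern, sketched below with made-up function names, is to attach the listener only while there is unsaved work and remove it again as soon as saving succeeds:

const warnAboutUnsavedWork = (event) => {
  event.preventDefault();
  event.returnValue = ""; // Chrome requires returnValue to be set
};

// Call this whenever the user makes an unsaved change.
function markUnsaved() {
  window.addEventListener("beforeunload", warnAboutUnsavedWork);
}

// Call this once the work has been saved successfully.
function markSaved() {
  window.removeEventListener("beforeunload", warnAboutUnsavedWork);
}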

Accessibility

Native JavaScript alerts used to be frowned upon in the accessibility world, but it seems that screen readers have since become smarter in how they deal with them. According to Penn State Accessibility:

The use of an alert box was once discouraged, but they are actually accessible in modern screen readers.

It’s important to take accessibility into account when making your own modals, but there are some great resources like this post by Ire Aderinokun to point you in the right direction.

General alternatives

There are a number of alternatives to native JavaScript popups such as writing your own, using modal window libraries, and using alert libraries. Keep in mind that nothing we’ve covered can fully block JavaScript execution and user interaction, but some can come close by greying out the background and forcing the user to interact with the modal before moving forward.

You may want to look at HTML's native <dialog> element. Chris recently took a hands-on look at it. It's compelling, but apparently suffers from some significant accessibility issues. I'm not entirely sure if building your own would end up better or worse, since handling modals is an extremely non-trivial interactive element to dabble in. Some UI libraries, like Bootstrap, offer modals, but the accessibility is still largely in your hands. You might want to peek at projects like a11y-dialog.

Wrapping up

Using built-in APIs of the web platform can seem like you’re doing the right thing — instead of shipping buckets of JavaScript to replicate things, you’re using what we already have built-in. But there are serious limitations, UX concerns, and performance considerations at play here, none of which land particularly in favor of using the native JavaScript popups. It’s important to know what they are and how they can be used, but you probably won’t need them a heck of a lot in production web sites.

The post Comparing the Different Types of Native JavaScript Popups appeared first on CSS-Tricks.

Build a 100% Serverless REST API with Firebase Functions & FaunaDB

Css Tricks - Thu, 10/31/2019 - 4:12am

Indie and enterprise web developers alike are pushing toward a serverless architecture for modern applications. Serverless architectures typically scale well, avoid the need for server provisioning and, most importantly, are easy and cheap to set up! That's why I believe serverless is the next evolution for the cloud: it enables developers to focus on writing applications.

With that in mind, let’s build a REST API (because will we ever stop making these?) using 100% serverless technology.

We’re going to do that with Firebase Cloud Functions and FaunaDB, a globally distributed serverless database with native GraphQL.

Those familiar with Firebase know that Google’s serverless app-building tools also provide multiple data storage options: Firebase Realtime Database and Cloud Firestore. Both are valid alternatives to FaunaDB and are effectively serverless.

But why choose FaunaDB when Firestore offers a similar promise and is available with Google’s toolkit? Since our application is quite simple, it does not matter that much. The main difference is that once my application grows and I add multiple collections, then FaunaDB still offers consistency over multiple collections whereas Firestore does not. In this case, I made my choice based on a few other nifty benefits of FaunaDB, which you will discover as you read along — and FaunaDB’s generous free tier doesn’t hurt, either. 😉

In this post, we’ll cover:

  • Installing Firebase CLI tools
  • Creating a Firebase project with Hosting and Cloud Function capabilities
  • Routing URLs to Cloud Functions
  • Building three REST API calls with Express
  • Establishing a FaunaDB Collection to track your (my) favorite video games
  • Creating FaunaDB Documents, accessing them with FaunaDB’s JavaScript client API, and performing basic and intermediate-level queries
  • And more, of course!
Set Up A Local Firebase Functions Project

For this step, you’ll need Node v8 or higher. Install firebase-tools globally on your machine:

$ npm i -g firebase-tools

Then log into Firebase with this command:

$ firebase login

Make a new directory for your project, e.g. mkdir serverless-rest-api and navigate inside.

Create a Firebase project in your new directory by executing firebase init.

Select Functions and Hosting when prompted.

Choose "functions" and "hosting" when the bubbles appear, create a brand new firebase project, select JavaScript as your language, and choose yes (y) for the remaining options.

Create a new project, then choose JavaScript as your Cloud Function language.

Once complete, enter the functions directory; this is where your code lives and where you'll add a few npm packages.

Your API requires Express, CORS, and FaunaDB. Install it all with the following:

$ npm i cors express faunadb

Set Up FaunaDB with NodeJS and Firebase Cloud Functions

Before you can use FaunaDB, you need to sign up for an account.

When you’re signed in, go to your FaunaDB console and create your first database, name it "Games."

You'll notice that you can create databases inside other databases. So you could make a database for development, one for production, or even make one small database per unit test suite. For now we only need "Games," though, so let's continue.

Create a new database and name it "Games."

Then tab over to Collections and create your first Collection named "games." Collections will contain your documents (games in this case) and are the equivalent of a table in other databases. Don't worry about payment details; Fauna has a generous free tier, and the reads and writes you perform in this tutorial will definitely not go over it. You can monitor your usage in the FaunaDB console at all times.

For the purpose of this API, make sure to name your collection ‘games’ because we’re going to be tracking your (my) favorite video games with this nerdy little API.

Create a Collection in your Games Database and name it "Games."
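If you prefer the Shell over the console forms, the same collection can be created with a single FQL call; this is just a sketch of the equivalent of the steps above:

CreateCollection({ name: "games" })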

Tab over to Security, and create a new Key named "Personal Key." There are three different types of keys: Admin, Server, and Client. An Admin key is meant to manage multiple databases, a Server key is typically what you use in a backend and allows you to manage one database, and a Client key is meant for untrusted clients such as your browser. Since we'll be using this key to access one FaunaDB database in a serverless backend environment, choose "Server key."

Under the Security tab, create a new Key. Name it Personal Key.

Save the key somewhere, you’ll need it shortly.

Build an Express REST API with Firebase Functions

Firebase Functions can respond directly to external HTTPS requests, and the functions pass standard Node Request and Response objects to your code — sweet. This makes Google’s Cloud Function requests accessible to middleware such as Express.

Open index.js inside your functions directory, clear out the pre-filled code, and add the following to enable Firebase Functions:

const functions = require('firebase-functions')
const admin = require('firebase-admin')
admin.initializeApp(functions.config().firebase)

Import the FaunaDB library and set it up with the secret you generated in the previous step:

admin.initializeApp(...)

const faunadb = require('faunadb')
const q = faunadb.query
const client = new faunadb.Client({
  secret: 'secrety-secret...that’s secret :)'
})
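Hard-coding the secret is fine for a quick demo, but you probably don't want it committed to source control. One option is Firebase's environment configuration; the fauna.secret key below is just a name I picked for this sketch, set beforehand with firebase functions:config:set fauna.secret="YOUR_KEY":

// Reads the key you stored with: firebase functions:config:set fauna.secret="YOUR_KEY"
const client = new faunadb.Client({
  secret: functions.config().fauna.secret
})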

Then create a basic Express app and enable CORS to support cross-origin requests:

const client = new faunadb.Client({...})

const express = require('express')
const cors = require('cors')
const api = express()

// Automatically allow cross-origin requests
api.use(cors({ origin: true }))

You’re ready to create your first Firebase Cloud Function, and it’s as simple as adding this export:

api.use(cors({...}))

exports.api = functions.https.onRequest(api)

This creates a cloud function named "api" and passes all requests directly to your api Express server.

Routing an API URL to a Firebase HTTPS Cloud Function

If you deployed right now, your function’s public URL would be something like this: https://project-name.firebaseapp.com/api. That’s a clunky name for an access point if I do say so myself (and I did because I wrote this... who came up with this useless phrase?)

To remedy this predicament, you will use Firebase’s Hosting options to re-route URL globs to your new function.

Open firebase.json and add the following section immediately below the "ignore" array:

"ignore": [...], "rewrites": [ { "source": "/api/v1**/**", "function": "api" } ]

This setting assigns all /api/v1/... requests to your brand new function, making it reachable from a domain that humans won’t mind typing into their text editors.

With that, you’re ready to test your API. Your API that does... nothing!

Respond to API Requests with Express and Firebase Functions

Before you run your function locally, let’s give your API something to do.

Add this simple route to your index.js file right above your export statement:

api.get(['/api/v1', '/api/v1/'], (req, res) => {
  res
    .status(200)
    .send(`<img src="https://media.giphy.com/media/hhkflHMiOKqI/source.gif">`)
})

exports.api = ...

Save your index.js file, open up your command line, and change into the functions directory.

If you installed Firebase globally, you can run your project by entering the following: firebase serve.

This command runs both the hosting and function environments from your machine.

If Firebase is installed locally in your project directory instead, open package.json and remove the --only functions parameter from your serve command, then run npm run serve from your command line.

Visit localhost:5000/api/v1/ in your browser. If everything was set up just right, you will be greeted by a gif from one of my favorite movies.

And if it’s not one of your favorite movies too, I won’t take it personally but I will say there are other tutorials you could be reading, Bethany.

Now you can leave the hosting and functions emulator running. They will automatically update as you edit your index.js file. Neat, huh?

FaunaDB Indexing

To query data in your games collection, FaunaDB requires an Index.

Indexes generally optimize query performance across all kinds of databases, but in FaunaDB, they are mandatory and you must create them ahead of time.

As a developer just starting out with FaunaDB, this requirement felt like a digital roadblock.

"Why can’t I just query data?" I grimaced as the right side of my mouth tried to meet my eyebrow.

I had to read the documentation and become familiar with how Indexes and the Fauna Query Language (FQL) actually work; whereas Cloud Firestore creates Indexes automatically and gives me stupid-simple ways to access my data. What gives?

Typical databases just let you do what you want, and if you do not stop and think, "Is this performant?" or "How many reads will this cost me?", you might have a problem in the long run. Fauna prevents this by requiring an index whenever you query.
As I created complex queries with FQL, I began to appreciate the level of understanding I had when I executed them. Whereas Firestore just gives you free candy and hopes you never ask where it came from, as it abstracts away all concerns (such as performance and, more importantly, costs).

Basically, FaunaDB has the flexibility of a NoSQL database coupled with the performance attenuation one expects from a relational SQL database.

We’ll see more examples of how and why in a moment.

Adding Documents to a FaunaDB Collection

Open your FaunaDB dashboard and navigate to your games collection.

In here, click NEW DOCUMENT and add the following BioShock titles to your collection:

{ "title": "BioShock", "consoles": [ "windows", "xbox_360", "playstation_3", "os_x", "ios", "playstation_4", "xbox_one" ], "release_date": Date("2007-08-21"), "metacritic_score": 96 } { "title": "BioShock 2", "consoles": [ "windows", "playstation_3", "xbox_360", "os_x" ], "release_date": Date("2010-02-09"), "metacritic_score": 88 }{ "title": "BioShock Infinite", "consoles": [ "windows", "playstation_3", "xbox_360", "os_x", "linux" ], "release_date": Date("2013-03-26"), "metacritic_score": 94 }

As with other NoSQL databases, the documents are JSON-style text blocks with the exception of a few Fauna-specific objects (such as Date used in the "release_date" field).

Now switch to the Shell area and clear your query. Paste the following:

Map(Paginate(Match(Index("all_games"))),Lambda("ref",Var("ref")))

And click the "Run Query" button. You should see a list of three items: references to the documents you created a moment ago.

In the Shell, clear out the query field, paste the query provided, and click "Run Query."

It’s a little long in the tooth, but here’s what the query is doing.

Index("all_games") creates a reference to the all_games index which Fauna generated automatically for you when you established your collection.These default indexes are organized by reference and return references as values. So in this case we use the Match function on the index to return a Set of references. Since we do not filter anywhere, we will receive every document in the ‘games’ collection.

The set that was returned from Match is then passed to Paginate. This function, as you would expect, adds pagination functionality (forward, backward, skip ahead). Lastly, you pass the result of Paginate to Map, which, much like its software counterpart, lets you perform an operation on each element in a Set and return an array; in this case, it simply returns ref (the reference id).

As we mentioned before, the default index only returns references. The Lambda operation that we fed to Map, pulls this ref field from each entry in the paginated set. The result is an array of references.

Now that you have a list of references, you can retrieve the data behind the reference by using another function: Get.

Wrap Var("ref") with a Get call and re-run your query, which should look like this:

Map(Paginate(Match(Index("all_games"))),Lambda("ref",Get(Var("ref"))))

Instead of a reference array, you now see the contents of each video game document.

Wrap Var("ref") with a Get function, and re-run the query.

Now that you have an idea of what your game documents look like, you can start creating REST calls, beginning with a POST.

Create a Serverless POST API Request

Your first API call is straightforward and shows off how Express combined with Cloud Functions allow you to serve all routes through one method.

Add this below the previous (and impeccable) API call:

api.get(['/api/v1', '/api/v1/'], (req, res) => {...})

api.post(['/api/v1/games', '/api/v1/games/'], (req, res) => {
  let addGame = client.query(
    q.Create(q.Collection('games'), {
      data: {
        title: req.body.title,
        consoles: req.body.consoles,
        metacritic_score: req.body.metacritic_score,
        release_date: q.Date(req.body.release_date)
      }
    })
  )
  addGame
    .then(response => {
      res.status(200).send(`Saved! ${response.ref}`)
      return
    })
    .catch(reason => {
      res.error(reason)
    })
})

Please look past the lack of input sanitization for the sake of this example (all employees must sanitize inputs before leaving the work-room).

But as you can see, creating new documents in FaunaDB is easy-peasy.

The q object acts as a query builder interface that maps one-to-one with FQL functions (find the full list of FQL functions here).

You perform a Create, pass in your collection, and include data fields that come straight from the body of the request.

client.query returns a Promise, the success-state of which provides a reference to the newly-created document.

And to make sure it’s working, you return the reference to the caller. Let’s see it in action.

Test Firebase Functions Locally with Postman and cURL

Use Postman or cURL to make the following request against localhost:5000/api/v1/ to add Halo: Combat Evolved to your list of games (or whichever Halo is your favorite but absolutely not 4, 5, Reach, Wars, Wars 2, Spartan...)

$ curl http://localhost:5000/api/v1/games -X POST -H "Content-Type: application/json" -d '{"title":"Halo: Combat Evolved","consoles":["xbox","windows","os_x"],"metacritic_score":97,"release_date":"2001-11-15"}'

If everything went right, you should see a reference coming back with your request and a new document show up in your FaunaDB console.

Now that you have some data in your games collection, let’s learn how to retrieve it.

Retrieve FaunaDB Records Using a REST API Request

Earlier, I mentioned that every FaunaDB query requires an Index and that Fauna prevents you from doing inefficient queries. Since our next query will return games filtered by a game console, we can’t simply use a traditional `where` clause since that might be inefficient without an index. In Fauna, we first need to define an index that allows us to filter.

To filter, we need to specify which terms we want to filter on. And by terms, I mean the fields of document you expect to search on.

Navigate to Indexes in your FaunaDB Console and create a new one.

Name it games_by_console, and set data.consoles as the only term since we will filter on the consoles. Then set data.title and ref as values. Values are indexed by range, but they are also just the values that will be returned by the query. Indexes are, in that sense, a bit like views: you can create an index that returns a different combination of fields, and each index can have different security.

To minimize request overhead, we’ve limited the response data (e.g. values) to titles and the reference.

Your screen should resemble this one:

Under indexes, create a new index named games_by_console using the parameters above.

Click "Save" when you’re ready.
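If you'd rather define indexes from the Shell than click through the console, the FQL below should be roughly equivalent to the form we just filled in (same terms and values as described above):

CreateIndex({
  name: "games_by_console",
  source: Collection("games"),
  // the field we will filter on
  terms: [{ field: ["data", "consoles"] }],
  // the fields the index returns for each match
  values: [{ field: ["data", "title"] }, { field: ["ref"] }]
})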

With your Index prepared, you can draft up your next API call.

I chose to represent consoles as a directory path where the console identifier is the sole parameter, e.g. /api/v1/console/playstation_3, not necessarily best practice, but not the worst either — come on now.

Add this API request to your index.js file:

api.post(['/api/v1/games', '/api/v1/games/'], (req, res) => {...})

api.get(['/api/v1/console/:name', '/api/v1/console/:name/'], (req, res) => {
  let findGamesForConsole = client.query(
    q.Map(
      q.Paginate(q.Match(q.Index('games_by_console'), req.params.name.toLowerCase())),
      q.Lambda(['title', 'ref'], q.Var('title'))
    )
  )
  findGamesForConsole
    .then(result => {
      console.log(result)
      res.status(200).send(result)
      return
    })
    .catch(error => {
      res.error(error)
    })
})

This query looks similar to the one you used in the Shell to retrieve all games, but with a slight modification. Note how your Match function now has a second parameter (req.params.name.toLowerCase()), which is the console identifier that was passed in through the URL.

The Index you made a moment ago, games_by_console, had one Term in it (the consoles array); this corresponds to the parameter we have provided to the Match function. Basically, the Match function searches the index for the string you pass as its second argument. The next interesting bit is the Lambda function. Your first encounter with Lambda featured a single string as Lambda's first argument, "ref."

However, the games_by_console Index returns two fields per result, the two values you specified earlier when you created the Index (data.title and ref). So basically we receive a paginated set containing tuples of titles and references, but we only need the titles. In case your set contains multiple values, the parameter of your Lambda will be an array. The array parameter above (['title', 'ref']) says that the first value is bound to the variable title and the second is bound to the variable ref. These variables can then be retrieved further on in the query by using Var('title'). In this case, both "title" and "ref" were returned by the index, and your Map with the Lambda function maps over this list of results and simply returns only the list of titles for each game.

In Fauna, the composition of queries happens before they are executed. When you write const query = q.Match(q.Index('games_by_console')), the variable just contains a query description; no query has been executed yet. Only when you pass the query to client.query() does it execute. You can even pass JavaScript variables into other Fauna FQL functions to keep composing queries. This is a big benefit of querying in Fauna versus the chained asynchronous queries required by Firestore. If you have ever tried to dynamically generate very complex queries in SQL, then you will also appreciate the composability and less declarative nature of FQL.
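As a small illustration of that composability (the helper name is mine), you can build a query up in pieces and only execute it at the very end:

// Nothing is sent to Fauna while we compose; these are just plain values.
const gamesForConsole = (consoleName) =>
  q.Map(
    q.Paginate(q.Match(q.Index('games_by_console'), consoleName)),
    q.Lambda(['title', 'ref'], q.Var('title'))
  )

// Only this call actually executes the query.
client.query(gamesForConsole('xbox'))
  .then(result => console.log(result))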

Save index.js and test out your API with this:

$ curl http://localhost:5000/api/v1/console/xbox

{"data":["Halo: Combat Evolved"]}

Neat, huh? But Match only returns documents whose fields are exact matches, which doesn’t help the user looking for a game whose title they can barely recall.

Although Fauna does not offer fuzzy searching via indexes (yet), we can provide similar functionality by making an index on all words in the string. Or, if we want really flexible fuzzy searching, we can use the Filter syntax. Note that it is not necessarily a good idea from a performance or cost point of view... but hey, we'll do it because we can and because it is a great example of how flexible FQL is!

Filtering FaunaDB Documents by Search String

The last API call we are going to construct will let users find titles by name. Head back into your FaunaDB Console, select INDEXES and click NEW INDEX. Name the new Index games_by_title and leave the Terms empty; you won't be needing them.

Rather than rely on Match to compare the title to the search string, you will iterate over every game in your collection to find titles that contain the search query.

Remember how we mentioned that indexes are a bit like views? In order to filter on title, we need to include data.title as a value returned by the Index. Since we are using Filter on the results of Match, we have to make sure that Match returns the title so we can work with it.

Add data.title and ref as Values, compare your screen to mine:

Create another index called games_by_title using the parameters above.

Click "Save" when you’re ready.

Back in index.js, add your fourth and final API call:

api.get(['/api/v1/console/:name', '/api/v1/console/:name/'], (req, res) => {...})

api.get(['/api/v1/games/', '/api/v1/games'], (req, res) => {
  let findGamesByName = client.query(
    q.Map(
      q.Paginate(
        q.Filter(
          q.Match(q.Index('games_by_title')),
          q.Lambda(
            ['title', 'ref'],
            q.GT(
              q.FindStr(
                q.LowerCase(q.Var('title')),
                req.query.title.toLowerCase()
              ),
              -1
            )
          )
        )
      ),
      q.Lambda(['title', 'ref'], q.Get(q.Var('ref')))
    )
  )
  findGamesByName
    .then(result => {
      console.log(result)
      res.status(200).send(result)
      return
    })
    .catch(error => {
      res.error(error)
    })
})

Take a big breath, because I know there are a lot of brackets (Lisp programmers will love this), but once you understand the components, the full query is quite easy to follow since it's basically just like coding.

Let's begin with the first new function you spot: Filter. Filter is again very similar to the filter you encounter in programming languages. It reduces an Array or Set to a subset based on the result of a Lambda function.

In this Filter, you exclude any game titles that do not contain the user’s search query.

You do that by comparing the result of FindStr (a string finding function similar to JavaScript’s indexOf) to -1, a non-negative value here means FindStr discovered the user’s query in a lowercase-version of the game’s title.

And the result of this Filter is passed to Map, where each document is retrieved and placed in the final result output.

Now you may have thought the obvious: performing a string comparison across four entries is cheap, 2 million…? Not so much.

This is an inefficient way to perform a text search, but it will get the job done for the purpose of this example. (Maybe we should have used ElasticSearch or Solr for this?) Well, in that case, FaunaDB is quite perfect as the central system that keeps your data safe and feeds it into a search engine, thanks to the temporal aspect that allows you to ask Fauna, "Hey, give me the changes since timestamp X." So you could set up ElasticSearch next to it and use FaunaDB (soon it will have push messages) to update it whenever there are changes. Whoever has done this before knows how hard it is to keep such an external search index up to date and correct; FaunaDB makes it quite easy.

Test the API by searching for "Halo":

$ curl http://localhost:5000/api/v1/games?title=halo

Don’t You Dare Forget This One Firebase Optimization

A lot of Firebase Cloud Functions code snippets make one terribly wrong assumption: that each function invocation is independent of another.

In reality, Firebase Function instances can remain "hot" for a short period of time, prepared to execute subsequent requests.

This means you should lazy-load your variables and cache the results to help reduce computation time (and money!) during peak activity, here’s how:

let functions, admin, faunadb, q, client, express, cors, api

if (typeof api === 'undefined') {
  ... // dump the existing code here
}

exports.api = functions.https.onRequest(api)

Deploy Your REST API with Firebase Functions

Finally, deploy both your functions and hosting configuration to Firebase by running firebase deploy from your shell.

Without a custom domain name, refer to your Firebase subdomain when making API requests, e.g. https://{project-name}.firebaseapp.com/api/v1/.

What Next?

FaunaDB has made me a conscientious developer.

When using other schemaless databases, I start off with great intentions by treating documents as if I instantiated them with a DDL (strict types, version numbers, the whole shebang).

While that keeps me organized for a short while, soon after standards fall in favor of speed and my documents splinter: leaving outdated formatting and zombie data behind.

By forcing me to think about how I query my data, which Indexes I need, and how to best manipulate that data before it returns to my server, I remain conscious of my documents.

To aid me in remaining forever organized, my catalog (in FaunaDB Console) of Indexes helps me keep track of everything my documents offer.

And by incorporating this wide range of arithmetic and linguistic functions right into the query language, FaunaDB encourages me to maximize efficiency and keep a close eye on my data-storage policies. Considering the affordable pricing model, I’d sooner run 10k+ data manipulations on FaunaDB’s servers than on a single Cloud Function.

For those reasons and more, I encourage you to take a peek at those functions and consider FaunaDB’s other powerful features.

The post Build a 100% Serverless REST API with Firebase Functions & FaunaDB appeared first on CSS-Tricks.

It’s All In the Head: Managing the Document Head of a React Powered Site With React Helmet

Css Tricks - Wed, 10/30/2019 - 5:10am

The document head might not be the most glamorous part of a website, but what goes into it is arguably just as important to the success of your website as its user interface. This is, after all, where you tell search engines about your website and integrate it with third-party applications like Facebook and Twitter, not to mention the assets, ranging from analytics libraries to stylesheets, that you load and initialize there.

A React application lives in the DOM node it was mounted to, and with this in mind, it is not at all obvious how to go about keeping the contents of the document head synchronized with your routes. One way might be to use the componentDidMount lifecycle method, like so:

componentDidMount() {
  document.title = "Whatever you want it to be";
}

However, you are not just going to want to change the title of the document; you are also going to want to modify an array of meta and other tags, and it will not be long before you conclude that managing the contents of the document head in this manner gets tedious and error-prone pretty quickly, not to mention that the code you end up with will be anything but semantic. There clearly has to be a better way to keep the document head up to date with your React application. And as you might suspect given the subject matter of this tutorial, there is: a simple and easy-to-use component called React Helmet, which was developed by and is maintained by the National Football League(!).

In this tutorial, we are going to explore a number of common use cases for React Helmet that range from setting the document title to adding a CSS class to the document body. Wait, the document body? Was this tutorial not supposed to be about how to work with the document head? Well, I have got good news for you: React Helmet also lets you work with the attributes of the <html> and <body> tags; and it goes without saying that we have to look into how to do that, too!
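To give just a small preview of that last point, here is a sketch of how React Helmet lets you declare attributes for those tags the same way you declare head tags (the lang and class values are arbitrary examples):

import React from "react"
import Helmet from "react-helmet"

export default () => (
  <Helmet>
    {/* Attributes declared here are applied to the <html> and <body> tags */}
    <html lang="en" />
    <body className="light-theme" />
  </Helmet>
)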

View Repo

One important caveat of this tutorial is that I am going to ask you to install Gatsby — a static site generator built on top of React — instead of Create React App. That's because Gatsby supports server side rendering (SSR) out of the box, and if we truly want to leverage the full power of React Helmet, we will have to use SSR!

Why, you might ask yourself, is SSR important enough to justify the introduction of an entire framework in a tutorial that is about managing the document head of a React application? The answer lies in the fact that search engine and social media crawlers do a very poor job of crawling content that is generated through asynchronous JavaScript. That means, in the absence of SSR, it will not matter that the document head content is up to date with the React application, since Google will not know about it. Fortunately, as you will find out, getting started with Gatsby is no more complicated than getting started with Create React App. I feel quite confident in saying that if this is the first time you have encountered Gatsby, it will not be your last!

Getting started with Gatsby and React Helmet

As is often the case with tutorials like this, the first thing we will do is to install the dependencies that we will be working with.

Let us start by installing the Gatsby command line interface:

npm i -g gatsby-cli

While Gatsby's starter library contains a plethora of projects that provide tons of built-in features, we are going to restrict ourselves to the most basic of these starter projects, namely the Gatsby Hello World project.

Run the following from your Terminal:

gatsby new my-hello-world-starter https://github.com/gatsbyjs/gatsby-starter-hello-world

my-hello-world-starter is the name of your project, so if you want to change it to something else, do so by all means!

Once you have installed the starter project, navigate into its root directory by running cd [name of your project]/ from the Terminal, and once there, run gatsby develop. Your site is now running at http://localhost:8000, and if you open and edit src/pages/index.js, you will notice that your site is updated instantaneously: Gatsby takes care of all our hot-reloading needs without us even having to think of — and much less touch — a webpack configuration file. Just like Create React App does! While I would recommend all JavaScript developers learn how to set up and configure a project with webpack for a granular understanding of how something works, it sure is nice to have all that webpack boilerplate abstracted away so that we can focus our energy on learning about React Helmet and Gatsby!

Next up, we are going to install React Helmet:

npm i --save react-helmet

After that, we need to install Gatsby Plugin React Helmet to enable server rendering of data added with React Helmet:

npm i --save gatsby-plugin-react-helmet

When you want to use a plugin with Gatsby, you always need to add it to the plugins array in the gatsby-config.js file, which is located at the root of the project directory. The Hello World starter project does not ship with any plugins, so we need to make this array ourselves, like so:

module.exports = { plugins: [`gatsby-plugin-react-helmet`] }

Great! All of our dependencies are now in place, which means we can move on to the business end of things.

Our first foray with React Helmet

The first question that we need to answer is where React Helmet ought to live in the application. Since we are going to use React Helmet on all of our pages, it makes sense to nest it in a component together with the page header and footer components since they will also be used on every page of our website. This component will wrap the content on all of our pages. This type of component is commonly referred to as a "layout" component in React parlance.

In the src directory, create a new directory called components in which you create a file called layout.js. Once you have done this, copy and paste the code below into this file.

import React from "react" import Helmet from "react-helmet" export default ({ children }) => ( <> <Helmet> <title>Cool</title> </Helmet> <div> <header> <h1></h1> <nav> <ul> </ul> </nav> </header> {children} <footer>{`${new Date().getFullYear()} No Rights Whatsoever Reserved`}</footer> </div> </> )

Let's break down that code.

First off, if you are new to React, you might be asking yourself what is up with the empty tags that wrap the React Helmet component and the header and footer elements. The answer is that React will go bananas and throw an error if you try to return multiple elements from a component, and for a long time, there was no choice but to nest elements in a parent element — commonly a div — which led to a distinctly unpleasant element inspector experience littered with divs that serve no purpose whatsoever. The empty tags, which are a shorthand way for declaring the Fragment component, were introduced to React as a solution to this problem. They let us return multiple elements from a component without adding unnecessary DOM bloat.

That was quite a detour, but if you are like me, you do not mind a healthy dose of code-related trivia. In any case, let us move on to the <Helmet> section of the code. As you are probably able to deduce from a cursory glance, we are setting the title of the document here, and we are doing it in exactly the same way we would in a plain HTML document; quite an improvement over the clunky recipe I typed up in the introduction to this tutorial! However, the title is hard coded, and we would like to be able to set it dynamically. Before we take a look at how to do that, we are going to put our fancy Layout component to use.

Head over to src/pages/ and open index.js. Replace the existing code with this:

import React from "react" import Layout from "../components/layout" export default () => <Layout> <div>I live in a layout component, and life is pretty good here!</div> </Layout>

That imports the Layout component to the application and provides the markup for it.

Making things dynamic

Hard coding things in React does not make much sense because one of the major selling points of React is that it makes it easy to create reusable components that are customized by passing props to them. We would like to be able to use props to set the title of the document, of course, but what exactly do we want the title to look like? Normally, the document title starts with the name of the website, followed by a separator and ends with the name of the page you are on, like Website Name | Page Name or something similar. You are probably right in thinking we could use template literals for this, and right you are!

Let us say that we are creating a website for a company called Cars4All. In the code below, you will see that the Layout component now accepts a prop called pageTitle, and that the document title, which is now rendered with a template literal, uses it as a placeholder value. Setting the title of the document does not get any more difficult than that!

import React from "react" import Helmet from "react-helmet" export default ({ pageTitle, children }) => ( <> <Helmet> <title>{`Cars4All | ${pageTitle}`}</title> </Helmet> <div> <header> <h1>Cars4All</h1> <nav> <ul> </ul> </nav> </header> {children} <footer>{`${new Date().getFullYear()} No Rights Whatsoever Reserved`}</footer> </div> </> )

Let us update index.js accordingly by setting the pageTitle to "Home":

import React from "react" import Layout from "../components/layout" export default () => <Layout pageTitle="Home"> <div>I live in a layout component, and life is pretty good here!</div> </Layout>

If you open http://localhost:8000 in the browser, you will see that the document title is now Cars4All | Home. Victory! However, as stated in the introduction, we will want to do more in the document head than set the title. For instance, we will probably want to include charset, description, keywords, author and viewport meta tags.

How would we go about doing that? The answer is exactly the same way we set the title of the document:

import React from "react" import Helmet from "react-helmet" export default ({ pageMeta, children }) => ( <> <Helmet> <title>{`Cars4All | ${pageMeta.title}`}</title> {/* The charset, viewport and author meta tags will always have the same value, so we hard code them! */} <meta charset="UTF-8" /> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <meta name="author" content="Bob Trustly" /> {/* The rest we set dynamically with props */} <meta name="description" content={pageMeta.description} /> {/* We pass an array of keywords, and then we use the Array.join method to convert them to a string where each keyword is separated by a comma */} <meta name="keywords" content={pageMeta.keywords.join(',')} /> </Helmet> <div> <header> <h1>Cars4All</h1> <nav> <ul> </ul> </nav> </header> {children} <footer>{`${new Date().getFullYear()} No Rights Whatsoever Reserved`}</footer> </div> </> )

As you may have noticed, the Layout component no longer accepts a pageTitle prop, but a pageMeta one instead, which is an object that encapsulates all the meta data on a page. You do not have to bundle all the page data like this, but I am very averse to props bloat. If there is data with a common denominator, I will always encapsulate it like this. Regardless, let us update index.js with the relevant data:

import React from "react"
import Layout from "../components/layout"

export default () => (
  <Layout
    pageMeta={{
      title: "Home",
      keywords: ["cars", "cheap", "deal"],
      description: "Cars4All has a car for everybody! Our prices are the lowest, and the quality the best-est; we are all about having the cake and eating it, too!"
    }}
  >
    <div>I live in a layout component, and life is pretty good here!</div>
  </Layout>
)

If you open http://localhost:8000 again, fire up DevTools and dive into the document head, you will see that all of the meta tags we added are there. Regardless of whether you want to add more meta tags, a canonical URL or integrate your site with Facebook using the Open Graph Protocol, this is how you go about it. One thing that I feel is worth pointing out: if you need to add a script to the document head (maybe because you want to enhance the SEO of your website by including some structured data), then you have to render the script as a string within curly braces, like so:

<script type="application/ld+json">{`
  {
    "@context": "http://schema.org",
    "@type": "LocalBusiness",
    "address": {
      "@type": "PostalAddress",
      "addressLocality": "Imbrium",
      "addressRegion": "OH",
      "postalCode": "11340",
      "streetAddress": "987 Happy Avenue"
    },
    "description": "Cars4All has a car for everybody! Our prices are the lowest, and the quality the best-est; we are all about having the cake and eating it, too!",
    "name": "Cars4All",
    "telephone": "555",
    "openingHours": "Mo,Tu,We,Th,Fr 09:00-17:00",
    "geo": {
      "@type": "GeoCoordinates",
      "latitude": "40.75",
      "longitude": "73.98"
    },
    "sameAs": [
      "http://www.facebook.com/your-profile",
      "http://www.twitter.com/your-profile",
      "http://plus.google.com/your-profile"
    ]
  }
`}</script>

For a complete reference of everything that you can put in the document head, check out Josh Buchea's great overview.

The escape hatch

For whatever reason, you might have to overwrite a value that you have already set with React Helmet — what do you do then? The clever people behind React Helmet have thought of this particular use case and provided us with an escape hatch: values set in components that are further down the component tree always take precedence over values set in components that find themselves higher up in the component tree. By taking advantage of this, we can overwrite existing values.

Say we have a fictitious component that looks like this:

import React from "react"
import Helmet from "react-helmet"

export default () => (
  <>
    <Helmet>
      <title>The Titliest Title of Them All</title>
    </Helmet>
    <h2>I'm a component that serves no real purpose besides mucking about with the document title.</h2>
  </>
)

And then we want to include this component in the index.js page, like so:

import React from "react"
import Layout from "../components/layout"
import Fictitious from "../components/fictitious"

export default () => (
  <Layout
    pageMeta={{
      title: "Home",
      keywords: ["cars", "cheap", "deal"],
      description: "Cars4All has a car for everybody! Our prices are the lowest, and the quality the best-est; we are all about having the cake and eating it, too!"
    }}
  >
    <div>I live in a layout component, and life is pretty good here!</div>
    <Fictitious />
  </Layout>
)

Because the Fictitious component hangs out in the underworld of our component tree, it is able to hijack the document title and change it from "Home" to "The Titliest Title of Them All." While I think it is a good thing that this escape hatch exists, I would caution against using it unless there really is no other way. If other developers pick up your code and have no knowledge of your Fictitious component and what it does, then they will probably suspect that the code is haunted, and we do not want to spook our fellow developers! After all, fighter jets do come with ejection seats, but that is not to say fighter pilots should use them just because they can.

Venturing outside of the document head

As mentioned earlier, we can also use React Helmet to change HTML and body attributes. For example, it's always a good idea to declare the language of your website, which you do with the HTML lang attribute. That's set with React Helmet like this:

<Helmet>
  {/* Setting the language of your page does not get more difficult than this! */}
  <html lang="en" />
  {/* Other React Helmet-y stuff... */}
</Helmet>

Now let us really tap into the power of React Helmet by letting the pageMeta prop of the Layout component accept a custom CSS class that is added to the document body. Thus far, our React Helmet work has been limited to one page, so we can really spice things up by creating another page for the Cars4All site and pass a custom CSS class with the Layout component's pageMeta prop.

First, we need to modify our Layout component. Note that since our Cars4All website will now consist of more than one page, we need to make it possible for site visitors to navigate between these pages: Gatsby's Link component to the rescue!

Using the Link component is no more difficult than setting its to prop to the name of the file that makes up the page you want to link to. So if we want to create a page for the cars sold by Cars4All and we name the page file cars.js, linking to it is no more difficult than typing out <Link to="/cars/">Our Cars</Link>. When you are on the Our Cars page, it should be possible to navigate back to the index.js page, which we call Home. That means we need to add <Link to="/">Home</Link> to our navigation as well.

In the new Layout component code below, you can see that we are importing the Link component from Gatsby and that the previously empty unordered list in the header element is now populated with the links for our pages. The only thing left to do in the Layout component is add the following snippet:

<body className={pageMeta.customCssClass ? pageMeta.customCssClass : ''}/>

...to the <Helmet> code, which adds a CSS class to the document body if one has been passed with the pageMeta prop. Oh, and given that we are going to pass a CSS class, we do, of course, have to create one. Let's head back to the src directory and create a new directory called css in which we create a file called main.css. Last, but not least, we have to import it into the Layout component, because otherwise our website will not know that it exists. Then add the following CSS to the file:

.slick {
  background-color: yellow;
  color: limegreen;
  font-family: "Comic Sans MS", cursive, sans-serif;
}

Now replace the code in src/components/layout.js with the new Layout code that we just discussed:

import React from "react"
import Helmet from "react-helmet"
import { Link } from "gatsby"
import "../css/main.css"

export default ({ pageMeta, children }) => (
  <>
    <Helmet>
      {/* Setting the language of your page does not get more difficult than this! */}
      <html lang="en" />

      {/* Add the customCssClass from our pageMeta prop to the document body */}
      <body className={pageMeta.customCssClass ? pageMeta.customCssClass : ''}/>

      <title>{`Cars4All | ${pageMeta.title}`}</title>

      {/* The charset, viewport and author meta tags will always have the same value, so we hard code them! */}
      <meta charset="UTF-8" />
      <meta name="viewport" content="width=device-width, initial-scale=1.0" />
      <meta name="author" content="Bob Trustly" />

      {/* The rest we set dynamically with props */}
      <meta name="description" content={pageMeta.description} />

      {/* We pass an array of keywords, and then we use the Array.join method to convert them to a string where each keyword is separated by a comma */}
      <meta name="keywords" content={pageMeta.keywords.join(',')} />
    </Helmet>
    <div>
      <header>
        <h1>Cars4All</h1>
        <nav>
          <ul>
            <li><Link to="/">Home</Link></li>
            <li><Link to="/cars/">Our Cars</Link></li>
          </ul>
        </nav>
      </header>
      {children}
      <footer>{`${new Date().getFullYear()} No Rights Whatsoever Reserved`}</footer>
    </div>
  </>
)

We are only going to add a custom CSS class to the document body in the cars.js page, so there is no need to make any modifications to the index.js page. In the src/pages/ directory, create a file called cars.js and add the code below to it.

import React from "react"
import Layout from "../components/layout"

export default () => (
  <Layout
    pageMeta={{
      title: "Our Cars",
      keywords: ["toyota", "suv", "volvo"],
      description: "We sell Toyotas, gas guzzlers and Volvos. If we don't have the car you would like, let us know and we will order it for you!!!",
      customCssClass: "slick"
    }}
  >
    <h2>Our Cars</h2>
    <div>A car</div>
    <div>Another car</div>
    <div>Yet another car</div>
    <div>Cars ad infinitum</div>
  </Layout>
)

If you head on over to http://localhost:8000, you will see that you can now navigate between the pages. Moreover, when you land on the cars.js page, you will notice that something looks slightly off... Hmm, no wonder I call myself a web developer and not a web designer! Let's open DevTools, toggle the document head and navigate back to the index.js page. The content is updated when changing routes!

The icing on the cake

If you inspect the source of your pages, you might feel a tad bit cheated. I promised a SSR React website, but none of our React Helmet goodness can be found in the source.

What was the point of my foisting Gatsby on you, you might ask? Well, patience, young padawan! Run gatsby build in Terminal from the root of the site, followed by gatsby serve.

Gatsby will tell you that the site is now running on http://localhost:9000. Dash over there and inspect the source of your pages again. Ta-da, it's all there! You now have a website that has all the advantages of a React SPA without giving up on SEO, integration with third-party applications and whatnot. Gatsby is amazing, and it is my sincere hope that you will continue to explore what Gatsby has to offer.

On that note, happy coding!

The post It’s All In the Head: Managing the Document Head of a React Powered Site With React Helmet appeared first on CSS-Tricks.

Using the Platform

Css Tricks - Tue, 10/29/2019 - 10:49am

Tim Kadlec:

So much care and planning has gone into creating the web platform, to ensure that even as new features are added, they’re added in a way that doesn’t break the web for anyone using an older device or browser. Can you say the same for any framework out there? I don’t mean that to be perceived as throwing shade (as the kids say). Building the actual web platform requires a deeper level of commitment to these sorts of things out of necessity.

The platform (meaning using standard features built into browsers) might not have everything you need (it often won't) and using those features will bring long-term resiliency to what you build in a way that a framework may not. The web evolves and very likely won't break things. Frameworks evolve and very likely will break things.

Sorta evokes the story of MooTools and Smooshgate.

Direct Link to ArticlePermalink

The post Using the Platform appeared first on CSS-Tricks.

Are There Random Numbers in CSS?

Css Tricks - Tue, 10/29/2019 - 5:00am

CSS allows you to create dynamic layouts and interfaces on the web, but as a language, it is static: once a value is set, it cannot be changed. The idea of randomness is off the table. Generating random numbers at runtime is the territory of JavaScript, not so much CSS. Or is it? If we factor in a little user interaction, we actually can generate some degree of randomness in CSS. Let’s take a look!

Randomization from other languages

There are ways to get some "dynamic randomization" using CSS variables as Robin Rendle explains in an article on CSS-Tricks. But these solutions are not 100% CSS, as they require JavaScript to update the CSS variable with the new random value.

We can use preprocessors such as Sass or Less to generate random values, but once the CSS code is compiled and exported, the values are fixed and the randomness is lost. As Jake Albaugh explains:

random in sass is like randomly choosing the name of a main character in a story. it's only random when written. it doesn't change.

— jake albaugh (@jake_albaugh) December 29, 2016

Why do I care about random values in CSS?

In the past, I've developed simple CSS-only apps such as a trivia game, a Simon game, and a magic trick. But I wanted to do something a little bit more complicated. I'll leave a discussion about the validity, utility, or practicality of creating these CSS-only snippets for a later time.

Based on the premise that some board games could be represented as Finite State Machines (FSM), they could be represented using HTML and CSS. So I started developing a game of Snakes and Ladders (aka Chutes and Ladders). It is a simple game. The goal is to advance a pawn from the beginning to the end of the board by avoiding the snakes and trying to go up the ladders.

The project seemed feasible, but there was something that I was missing: rolling dice!

The roll of dice (along with the flip of a coin) is universally recognized as a source of randomness. You roll the dice or flip the coin, and you get an unknown value each time.

Simulating a random dice roll

I was going to superimpose layers with labels, and use CSS animations to "rotate" and exchange which layer was on top. Something like this:

Simulation of how the layers animate on a browser

The code to mimic this randomization is not excessively complicated and can be achieved with an animation and different animation delays:

/* The highest z-index is the numbers of sides in the dice */
@keyframes changeOrder {
  from { z-index: 6; }
  to { z-index: 1; }
}

/* All the labels overlap by using absolute positioning */
label {
  animation: changeOrder 3s infinite linear;
  background: #ddd;
  cursor: pointer;
  display: block;
  left: 1rem;
  padding: 1rem;
  position: absolute;
  top: 1rem;
  user-select: none;
}

/* Negative delay so all parts of the animation are in motion */
label:nth-of-type(1) { animation-delay: -0.0s; }
label:nth-of-type(2) { animation-delay: -0.5s; }
label:nth-of-type(3) { animation-delay: -1.0s; }
label:nth-of-type(4) { animation-delay: -1.5s; }
label:nth-of-type(5) { animation-delay: -2.0s; }
label:nth-of-type(6) { animation-delay: -2.5s; }

The animation has been slowed down to allow easier interaction (but still fast enough to see the roadblock explained below). The pseudo-randomness is clearer, too.

See the Pen
Demo of pseudo-randomly generated number with CSS
by Alvaro Montoro (@alvaromontoro)
on CodePen.

But then I hit a roadblock: I was getting random numbers, but sometimes, even when I was clicking on my "dice," it was not returning any value.

I tried increasing the times in the animation, and that seemed to help a bit, but I was still having some unexpected values.

That's when I did what most developers do when they find a roadblock they cannot resolve just by searching online: I asked other developers for help in the form of a StackOverflow question.

Luckily for me, the always resourceful Temani Afif came up with an explanation and a solution.

To simplify a little, the problem was that the browser only triggers the click/press event when the element that is active on mouse down is the same element that is active on mouse up.

Because of the rotating animation, the top label on mouse down was not the top label on mouse up, unless I did it fast or slow enough for the animation to circle around. That's why increasing the animation times hid these issues.

The solution was to apply a position of "static" to break the stacking context, and use a pseudo-element like ::before or ::after with a higher z-index to occupy its place. This way, the active label would always be on top when the mouse went up.

/* The active tag will be static and moved out of the window */
label:active {
  margin-left: 200%;
  position: static;
}

/* A pseudo-element of the label occupies all the space with a higher z-index */
label:active::before {
  content: "";
  position: absolute;
  top: 0;
  right: 0;
  left: 0;
  bottom: 0;
  z-index: 10;
}

Here is the code with the solution with a faster animation time:

See the Pen
Demo of pseudo-randomly generated number with CSS
by Alvaro Montoro (@alvaromontoro)
on CodePen.

After making this change, the one thing left was to create a small interface to draw a fake dice to click, and the CSS Snakes and Ladders was completed.

This technique has some obvious inconveniences
  • It requires user input: a label must be clicked to trigger the "random number generation."
  • It doesn't scale well: it works great with small sets of values, but it is a pain for large ranges.
  • It’s not really random, but pseudo-random: a computer could easily detect which value would be generated in each moment.

But on the other hand, it is 100% CSS (no need for preprocessors or other external helpers) and, for a human user, it can look 100% random.

And talking about hands... This method can be used not only for random numbers but for random anything. In this case, we used it to "randomly" pick the computer choice in a Rock-Paper-Scissors game:

See the Pen
CSS Rock-Paper-Scissors
by Alvaro Montoro (@alvaromontoro)
on CodePen.

The post Are There Random Numbers in CSS? appeared first on CSS-Tricks.

Learn to Make Your Site Inclusive, by Design

Css Tricks - Tue, 10/29/2019 - 5:00am

Accessibility is our job. We hear it all the time. But the truth is that it often takes a back seat to competing priorities, deadlines, and decisions from above. How can we solve that?

That's where An Event Apart comes in. Making sites inclusive by design is just one of the many topics covered over three full days of sessions designed to inspire you and level up your skills while learning from 17 of today's most talented front-end professionals.

Whether you're on the East Coast, West Coast or somewhere in between, An Event Apart is conveniently located near you with conferences happening in San Francisco, Washington D.C., Seattle, Boston, Minneapolis and Orlando. In fact, there's one happening in Denver right now!

And at An Event Apart, you don’t just learn from the best, you interact with them — at lunch, between sessions, and at the famous first-night Happy Hour party. Web design is more challenging than ever. Attend An Event Apart to be ready for anything the industry throws at you.

CSS-Tricks readers save $100 off any two or three days with code AEACP.

Register Today

The post Learn to Make Your Site Inclusive, by Design appeared first on CSS-Tricks.

The Current State of Styling Selects in 2019

Css Tricks - Mon, 10/28/2019 - 12:13pm

Best I could tell from the last time I compiled the most wished-for features of CSS, styling form controls was a major ask. Top 5, I'd say. And of the native form elements that people want to style, Greg Whitworth has some data that the <select> element is more requested than any other element — more than double the next element — and it's the one developers most often customize in some way.

Developers clearly want to style select dropdowns.

You actually can, a little. Perhaps more than you realize.

The best crack at this out there comes from Scott Jehl over on the Filament Group blog. I'll embed a copy here so it's easy to see:

See the Pen
select-css by Scott/Filament Group
by Chris Coyier (@chriscoyier)
on CodePen.

Notably, this is an entirely cross-browser solution. It's not something limited to only the most progressive desktop browsers. There are some visual differences across browsers and platforms, but overall it's pretty consistent and gives you a baseline from which to further customize it.

That's just the "outside"

Open the select. Hmm, it looks and behaves like you did nothing to it at all.

Styling a <select> doesn't do anything to the opened dropdown of items. (Screenshot from macOS Chrome)

Some browsers do let you style the inside, but it's very limited. Any time I've gone down this road, I've had a bad time getting things cross-browser compliant.

Firefox letting me set the background of the dropdown and the color of a hovered option.

Greg's data shows that only 14% (third place) of developers found styling the outside to be the most painful part of select elements. I'm gonna steal his chart because it's absolutely fascinating:

Frustration | % | Count
Not being able to create a good user experience for searching within the list | 27.43% | 186
Not being able to style the <option> element to the extent that you needed to | 17.85% | 121
Not being able to style the default state (dropdown arrow, etc.) | 14.01% | 95
Not being able to style the pop-up window on desktop (e.g. the border, drop shadows, etc.) | 11.36% | 77
Insertion of content beyond simple text in the <select> control or its <option>s | 11.21% | 76
Insertion of arbitrary HTML content in an <option> element | 7.82% | 53
Not being able to create distinctive unselected/placeholder style and behavior | 3.39% | 23
Being able to generate new options from a large dataset while the popup is open | 3.10% | 21
Not being able to style the currently selected <option>(s) to the extent you needed to | 1.77% | 12
Not being able to style the pop-up window on mobile | 1.03% | 7
Being able to have the options automatically repeat on scroll (i.e., if you have a list of options 1 – 100, as you reach 100 rather than having the user scroll back to the top, have 1 show up below 100) | 1.03% | 7

Boiled down, the most painful parts of styling selects are:

  • Search
  • Styling the open dropdown, including the individual options, including more than just text
  • Updating the element without closing it
  • Styling for cases where "nothing" is selected and when an item is selected

I'm surprised multi-select didn't make the cut. Maybe it's not on the table for <select> since it wouldn't be backwards-compatible?

Browser evolution

Edge recently announced they are improving the look of form controls, but no word just yet on standards or how to customize them.

Select styles in Edge/Chromium before (left) and after (right)

It seems like there is good momentum, though. If you want more information and to follow along with all this progress (I know I will!):

The post The Current State of Styling Selects in 2019 appeared first on CSS-Tricks.

A Business Case for Dropping Internet Explorer

Css Tricks - Mon, 10/28/2019 - 4:13am

The distance between Internet Explorer (IE) 11 and every other major browser is an increasingly gaping chasm. Adding support for a technologically obsolete browser adds an inordinate amount of time and frustration to development. Testing becomes onerous. Bug-fixing looms large. Developers have wanted to abandon IE for years, but is it now financially prudent to do so?

First off, we’re talking about a dead browser

Development of IE came to an end in 2015. Microsoft Edge was released as its replacement, with Microsoft announcing that “the latest features and platform updates will only be available in Microsoft Edge”.

Edge was a massive improvement over IE in every respect. Even so, Edge was itself so far behind in implementing web standards that Microsoft recently revealed that they were rebuilding Edge from the ground up using the same technology that powers Google Chrome.

Yet here we are, discussing whether to support Edge’s obsolete ancient relative. Internet Explorer is so bad that a Principal Program Manager at the company published a piece entitled The perils of using Internet Explorer as your default browser on the official Microsoft blog.

Publications have been foretelling the fall of IE since 2015.

Browsers are moving faster than ever before. Consider everything that has happened since 2015. CSS Grid. Custom properties. IE11 will never implement any new features. It’s a browser frozen in time; the web has moved on.

It blocks opportunities and encourages inefficiency

The landscape of browsers has also changed dramatically since Microsoft deprecated IE in 2015. Google developer advocate Sam Thorogood has compiled a list of all the features that are supported by every browser other than IE. Once the new Chromium version of Edge is released, this list will further increase. Taken together, it’s a gargantuan feature set, comprising new HTML elements, new CSS properties and new JavaScript features. Many modern JavaScript features can be made compatible with legacy browsers through the use of polyfills and transpilation. Any CSS feature added to the web over the last four years, however, will fail to work in IE altogether.

Let’s dig a little deeper into the features we have today and how they are affected by IE11. Perhaps most notable of all, after decades of hacking layouts on the web, we finally have CSS grid, which massively simplifies responsive layout. Together with CSS custom properties, object-fit, display: contents and intrinsic sizing, they’re all examples of useful CSS features that are likely to leave a website looking broken if they’re not supported. We’ve had some major additions to CSS over the last five years. It’s the cumulative weight of so many things that undermines IE as much as one killer feature.

While many additions to the web over the last five years have been related to layout and styling, we’ve also had huge steps forwards in functionality, such as Progressive Web Apps. Not every modern API is unusable for websites that need to stay backwards compatible. Most can be wrapped in an if statement.

if ('serviceWorker' in navigator) {
  // do some stuff with a service worker
} else {
  // ???
}

You will, however, be delivering a very different experience to IE users. Increasingly, support for IE will limit the choice of tools that are available as library and frameworks utilize modern features.

Take this announcement from Evan You about the release of Vue 3, for example:

The new codebase currently targets evergreen browsers only and assumes baseline native ES2015 support.

The Vue 3 codebase makes use of proxies — a JavaScript feature that cannot be transpiled. MobX is another popular framework that also relies on proxies. Both projects will continue to maintain backwards-compatible versions, but they’ll lack the performance improvements and API niceties gained from dropping IE. Pika, a great new approach to package management, requires support for JavaScript modules, which are not supported in IE. Then there is shadow DOM — a standardized part of the modern web platform that is unlikely to degrade gracefully.
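To see why proxies are such a hard line, here's a minimal sketch of the kind of property interception Vue 3 and MobX build their reactivity on; ES5 simply has no equivalent for a transpiler to target, and the logging below is just for illustration:

// A minimal sketch of Proxy-based change tracking.
// ES5 has no way to intercept arbitrary property writes like this,
// which is why code relying on Proxy cannot be transpiled for IE11.
const state = { count: 0 };

const reactiveState = new Proxy(state, {
  set(target, key, value) {
    console.log(`"${key}" changed from ${target[key]} to ${value}`);
    target[key] = value;
    return true; // signal that the write succeeded
  }
});

reactiveState.count = 1; // logs: "count" changed from 0 to 1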

Supporting it takes tremendous effort

When assessing how much extra work is required to provide backwards compatibility for a deprecated browser like IE11, the long list of unimplemented features is only part of the problem. Browsers are incredibly complex pieces of software and, despite web standards, browsers are inconsistent. IE has long been the most bug-ridden browser that is most at odds with web standards. Flexbox (a technology that developers have been using since 2013), for example, is listed on caniuse.com as having partial support on IE due to the "large amount of bugs present."

IE also offers by far the worst debugging experience — with only a primitive version of DevTools. This makes fixing bugs in IE undoubtedly the most frustrating part of being a developer, and it can be massively time-consuming — taking time away from organizations trying to ship features.

There’s a difference between support — making sure something is functional and looks good enough — versus optimization, where you aim to provide the best experience possible. This does, however, create a potentially confusing grey area. There could be differences of opinion on what constitutes good enough for IE. This comment about IE9 from Dave Rupert is still relevant:

The line for what is considered "broken" is fuzzy. How visually broken does it have to be in order to be functionally broken? I look for cheap fixes, but this is compounded by the fact the offshore QA team doesn’t abide in that nuance, a defect is a defect, which gets logged and assigned to my inbox and pollutes the backlog…Whether it’s polyfills, rogue if-statements, phantom styles, or QA kickbacks; there are costs and technical debt associated with rendering this site on an ever-dwindling sliver of browsers.

If you’re going to take the approach of supporting IE functionally, even if it’s not to the nth degree, still confines you to polyfill, transpile, prefix and test on top of everything else.

It’s already been abandoned by many top websites

Popular websites to officially drop support for IE include YouTube, GitHub, Meetup, Slack, Zendesk, Trello, Atlassian, Discord, Spotify, Behance, Wix, Huddle, WhatsApp, Google Earth and Yahoo. Even some of Microsoft’s own products, like Teams, have severely reduced support for IE.

Twitter displays a banner informing IE users that they will not receive the best experience and redirects them to a much older version of the Twitter website. Among the disruptive companies pushing the best in web design, Monzo, Apple Music and Stripe break horribly in IE without so much as a warning banner.

Stripe offers no support or warning.

Why the new Chromium-powered Edge browser matters

IE usage has been on a slower downward trend following an initial dramatic fall. There’s one primary reason the browser continues to hang on: ancient business applications that don’t work in anything else. Plenty of large companies still use applications that rely on APIs that were never standardized and are now obsolete. Thankfully, the new Edge looks set to solve this issue. In a recent post, the Microsoft Edge Team explained how these companies will finally be able to abandon IE:

The team designed Internet Explorer mode with a goal of 100% compatibility with sites that work today in IE11. Internet Explorer mode appears visually like it’s just a part of the next Microsoft Edge...By leveraging the Enterprise mode site list, IT professionals can enable users of the next Microsoft Edge to simply navigate to IE11-dependent sites and they will just work.

.@MicrosoftEdge: one browser for all web experiences. IE mode will allow you to view and access legacy sites directly in the same window. #MSBuild https://t.co/NXcDjDB5B4 pic.twitter.com/x7BtCtASNs

— Microsoft Edge Dev (@MSEdgeDev) May 6, 2019

After using the beta version for several months, I can say it’s a genuinely great browser. Dare I say, better than Google Chrome? Microsoft are already pushing it hard. Edge is the default browser for Windows 10. Hundreds of millions of devices still run earlier versions of the operating system, on which Edge has not been available. The new Chromium-powered version will bring support to both Windows 7 and 8. For users stuck on old devices with old operating systems, there is no excuse for using IE anymore. Windows 7, still one of the world’s most popular operating systems, is itself due for end-of-life in January 2020, which should also help drive adoption of Edge when individuals and businesses upgrade to Windows 10.

In other words, it's the perfect time to drop support.

Performance costs

All current browsers support ECMAScript 2015 (ES6) — and have done so for quite some time. Transpiling JavaScript down to an older (and slower) version is still common across the industry, but at this point in time it is needed only for Internet Explorer. This process, which allows developers to write modern syntax that still works in IE, negatively impacts performance. Philip Walton, an engineer at Google, had this to say on the subject:

Larger files take longer to download, but they also take longer to parse and evaluate. When comparing the two versions from my site, the parse/eval times were also consistently about twice as long for the legacy version. [...] The cost of shipping lots of unneeded JavaScript to low-end mobile browsers can be significant! We (on the Chrome team) have seen numerous occurrences of polyfill bloat adding seconds to the total startup time of websites on low-end mobile devices.

It’s possible to take a differential serving approach to get around this issue, but it does add a small amount of complexity to build tooling. I’m not sure it’s worth bothering when looking at the entire picture of what it already takes to support IE.
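For the curious, here's a rough sketch of what differential serving can look like when done from JavaScript instead of module/nomodule script tags; the bundle file names are hypothetical:

// A minimal sketch of differential serving: browsers that understand
// ES modules get the modern bundle, everything else gets the transpiled one.
// The bundle file names are hypothetical.
const script = document.createElement('script');

if ('noModule' in HTMLScriptElement.prototype) {
  script.src = '/js/app.modern.js'; // ES2015+, smaller and faster to parse
} else {
  script.src = '/js/app.legacy.js'; // transpiled and polyfilled for IE11
}

document.head.appendChild(script);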

Yet another example: IE requires a massive amount of polyfills if you’re going to utilize modern APIs. This normally involves sending additional, unnecessary code to other browsers in the process. An alternative approach, polyfill.io, costs an additional, blocking HTTP request — even for modern browsers that have no need for polyfills. Both of these approaches are bad for performance.

As for CSS, modern features like CSS grid decrease the need for bulky frameworks like Bootstrap. That's lots of extra bytes we’re unable to shave off if we have to support IE. Other modern CSS properties can replace what’s traditionally done with JavaScript in a way that’s less fragile and more performant. It would be a boon for both performance and cost to take advantage of them.

Let’s talk money

One (overly simplistic) calculation would be to compare the cost of developer time spent fixing IE bugs, plus the productivity lost working around IE issues, against the revenue from IE users. Unless you’re a large company generating significant revenue from IE, it’s an easy decision. For big corporations, the stakes are much higher. Websites at the scale of Amazon, for example, may generate tens of millions of dollars from IE users, even if they represent less than 1% of total traffic.

I’d argue that any site at such scale would benefit more by dropping support, thanks to reducing load times and bounce rates which are both even more important to revenue. For large companies, the question isn’t whether it’s worth spending a bit of extra development time to assure backwards compatibility. The question is whether you risk degrading the experience for the vast majority of users by compromising performance and opportunities offered by modern features. By providing no incentive for developers to care about new browser features, they're being held back from innovating and building the best product they can.

It’s a massively valuable asset to have developers who are so curious and inquisitive that they explore and keep up with new technology. By supporting IE, you’re effectively disengaging developers from what’s new. It’s dispiriting to attempt to keep up with what’s new only to learn about features we can’t use. But this isn’t about putting developer experience before user experience. When you improve developer experience, developers are enabled to increase their productivity and ship features — features that users want.

Web development is hard

It was reported earlier this year that the car rental company Hertz was suing Accenture for tens of millions of dollars. Accenture is a Fortune Global 500 company worth billions of dollars. Yet Hertz alleged that, despite an eye-watering price tag, they "never delivered a functional site or mobile app."

According to The Register:

Among the most mind-boggling allegations in Hertz's filed complaint is that Accenture didn't incorporate a responsive design… Despite having missed the deadline by five months, with no completed elements and weighed down by buggy code, Accenture told Hertz it would cost an additional $10m – on top of the $32m it had already been paid – to finish the project.

The Accenture/Hertz affair is an example of stunning ineptitude but it was also a glaring reminder of the fact that web development is hard. Yet, most companies are failing to take advantage of things that make it easier. Microsoft, Google, Mozilla and Apple are investing massive amounts of money into developing new browser features for a reason. Improvements and innovations that have come to browsers in recent years have expanded what is possible to deliver on the web platform while making developers’ lives easier.

Move fast and ship things

The development industry loves terms — like agile and disruptive — that imply light-footed innovation. Yet rather than focusing on shipping features and creating a great experience for the vast bulk of users, we’re catering to a single outdated legacy browser. All the companies I’ve worked for have constantly talked about technical debt. The weight of legacy code is accurately perceived as something that slows down developers. By failing to take advantage of what modern browsers have to offer, the code we write today is legacy code the moment it is written. By writing for the modern web, you don’t only increase productivity today but also create code that’s easier to maintain in the future. From a long-term perspective, it’s the right decision.

Recruitment and retainment

Developer happiness won’t be viewed as important to the bottom line by some business stakeholders. However, recruiting good engineers is notoriously difficult. Average tenure is low compared to other industries. Nothing can harm developer morale more than a day of IE debugging. In a survey of 76,118 developers conducted by Mozilla, "Having to support specific browsers (e.g. IE11)" was ranked as the most frustrating thing in web development. "Avoiding or removing a feature that doesn't work across browsers" came third, while testing across different browsers reached fourth place. By minimising these frustrations, deciding to end support for IE can help with engineer recruitment and retainment.

IE users can still access your website

We live in a multi-device world. Some users will be lucky enough to have a computer provided by their employer, a personal laptop and a tablet. Smartphones are ubiquitous. If an IE user runs into problems using your site, they can complete the transaction on another device. Or they could open a different browser, as Microsoft Edge comes preinstalled on Windows 10.

The reality of cross-browser testing

If you have a thorough and rigorous cross-browser testing process that always gets followed, congratulations! This is rare in my experience. Plenty of companies only test in Chrome. By making cross-browser testing less onerous, it can be made more likely that developers and stakeholders will actually do it. Eliminating all bugs in browsers that are popular is far more worthwhile monetarily than catering to IE.

When do you plan to drop IE support?

Inevitably, your own analytics will be the determining factor in whether dropping IE support is sensible for you. Browser usage varies massively around the world — from almost 10% in South Korea to well below one percent in many parts of the world. Even if you deem today as being too soon for your particular site, be sure to reassess your analytics after the new Microsoft Edge lands.

The post A Business Case for Dropping Internet Explorer appeared first on CSS-Tricks.

Fonts in Focus: Decimal & Chill

Typography - Mon, 10/28/2019 - 3:53am

Type designers take care of the details, so that we aren’t unnecessarily distracted by them. And good type designers relish those details. To listen to them explain their craft and describe their font-making processes is like watching the child of zero-sugar parents eat its first candy bar. Talking of ‘deep dives’, if you haven’t already, then […]

The post Fonts in Focus: Decimal & Chill appeared first on I Love Typography.

Animated Position of Focus Ring

Css Tricks - Sun, 10/27/2019 - 12:10pm

Maurice Mahan created FocusOverlay, a "library for creating overlays on focused elements." That description is a little confusing, as you don't need a library to create focus styles. What the library actually does is animate the focus rings as focus moves from one element to another. It's based on the same idea as Flying Focus.

I'm not strong enough in my accessibility knowledge to give a definitive answer if this is a great idea or not, but my mind goes like this:

  • It's a neat effect.
  • I can imagine it being an accessibility win since, while the page will scroll to make sure the next focused element is visible, it doesn't otherwise help you see where that focus has gone. Movement that directs attention toward the next focused element may help make it more clear.
  • I can imagine it being harmful to accessibility in that it is motion that isn't usually there and could be surprising and possibly offputting.
  • If it "just works" on all my focusable elements, that's cool, but I see there are data attributes for controlling behavior. If I find myself needing to sprinkle behavior control all over my templates to accommodate this specific library, I'd probably be less into it.

On that last point, you could conditionally load it depending on a user's motion preference.

The library is on npm, but is also available as direct linkage thanks to UNPKG. Let's look at using the URLs to the resources directly to illustrate the concept of conditional loading:

<link
  rel="stylesheet"
  href="//unpkg.com/focus-overlay@latest/dist/focusoverlay.css"
  media="(prefers-reduced-motion: no-preference)"
/>
<script>
  const mq = window.matchMedia("(prefers-reduced-motion: no-preference)");
  if (mq.matches) {
    let script = document.createElement("script");
    script.src = "//unpkg.com/focus-overlay@latest/dist/focusoverlay.js";
    document.head.appendChild(script);
  }
</script>

The JavaScript is also 11.5 KB / 4.2 KB compressed and the CSS is 453 B / 290 B compressed, so you've always got to factor that in, as performance and accessibility are related concepts.

Performance isn't just script size either. Looking through the code, it looks like the focus ring is created by appending a <div> to the <body> that has a super high z-index value in which to be seen and pointer-events: none as to not interfere. Then it is absolutely positioned with top and left values and sized with width and height. It looks like new positional information is calculated and then applied to this div, and CSS handles the movement. Last I understood, those aren't particularly performant CSS properties to animate, so I would think a future feature here would be to use animation FLIP to take advantage of only animating transforms.
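For what it's worth, here's a rough sketch of that idea: measure the focused element, then move a fixed-position overlay with a transform instead of animating top, left, width and height. To be clear, this is not FocusOverlay's actual code; the element, styling and transition are all assumptions:

// A rough sketch, not FocusOverlay's actual implementation: move a fixed
// overlay over whatever gains focus using only transforms, which browsers
// can animate cheaply on the compositor.
const overlay = document.createElement('div');
overlay.style.cssText = `
  position: fixed;
  top: 0;
  left: 0;
  width: 100px;
  height: 100px;
  outline: 2px solid dodgerblue;
  pointer-events: none;
  z-index: 9999;
  transform-origin: 0 0;
  transition: transform 0.2s ease;
`;
document.body.appendChild(overlay);

document.addEventListener('focusin', (event) => {
  const rect = event.target.getBoundingClientRect();
  // Translate the 100x100 base box into place, then scale it to the target's
  // size. (Scaling also stretches the outline, which a real implementation
  // would need to compensate for.)
  overlay.style.transform =
    `translate(${rect.left}px, ${rect.top}px) ` +
    `scale(${rect.width / 100}, ${rect.height / 100})`;
});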

The post Animated Position of Focus Ring appeared first on CSS-Tricks.

Bidirectional Horizontal Rules in CSS

Css Tricks - Fri, 10/25/2019 - 11:53am

Say you have a <blockquote> and the design calls for a thick border along the left side. Well, you might not necessarily mean left side, but actually mean on the side of the start of the text.

That's exactly what CSS logical properties are meant to address, and Hussein Al Hammad has a nice article laying out some use cases, including the blockquote thing I mentioned above.

By using logical properties, you don't have to mess around with manually writing selectors including [dir="rtl"] or needing to be aware of writing modes and such. The box model stuff (borders, padding, margin...) will adjust where it needs to be.

Hussein's blockquote is a good baby step example for understanding of all this:

blockquote {
  /* Rather than... */
  border-left: 4px solid #aaa;
  padding-left: 1.75rem;

  /* You'd do... */
  border-inline-start: 4px solid #aaa;
  padding-inline-start: 1.75rem;
}

See the Pen
Logical properties demo: blockquote
by Hussein Al Hammad (@hus_hmd)
on CodePen.

Support is pretty good.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 69, Opera 64*, Firefox 41, IE No, Edge 76, Safari 12.1
Mobile / Tablet: iOS Safari 12.2-12.4, Opera Mobile 46*, Opera Mini No, Android 76, Android Chrome 78, Android Firefox 68

One thing that threw me off in the article is the term "Horizontal Rules." First I imagined the <hr> element. Then I imagined wanting to reverse the direction of the design with logical properties. Usually an <hr> is just a line so horizontal direction doesn't matter, but let's say it's like a shorter line along the edge where new lines start after wrapping.

We could draw the shorter line with backgrounds that cover different parts of the box model, then use logical properties where the padding applies. This is a pretty unique edge case, but it's fun to fiddle with:

See the Pen
<hr>s that are direction aware kinda
by Chris Coyier (@chriscoyier)
on CodePen.

It would be even easier if we had logical gradients.

Direct Link to ArticlePermalink

The post Bidirectional Horizontal Rules in CSS appeared first on CSS-Tricks.

The Landscape of Cross-Platform App Development

Css Tricks - Fri, 10/25/2019 - 4:14am

I don't track this stuff very well, but I get it. If you want a native app for Android and iOS, it sure would be nice to only have to write it once rather than two very different languages. Roughly double your reach without doubling the work. More and more of these things are reaching into desktop as well, meaning three targets for one.

Stuff like PhoneGap comes to mind. They say, "Reuse existing web development skills to quickly make hybrid applications built with HTML, CSS and JavaScript." That's obviously compelling for web developers who would have to learn minimal new things. My brain leans more toward, "Well if I'm going to write this thing in HTML, CSS, and JavaScript, why don't I leave it at that?" Progressive Web Apps are doing great things. Still, I'm curious what the flagship PhoneGap apps are. Do I use any great ones and not even know it?

If you're going to layer on a framework, but still stay in JavaScript-land, I'd think the biggest player is React Native. I hear it's almost always used with Expo these days, which apparently has a thing to help React Native work on the web. Plus, there is literally React Native for Web.

In React-land, there is another new player: Ionic React. It targets all three platforms (iOS, Android, and Desktop) right out of the gate. Ionic isn't new though — it's long been a framework that does this in JavaScript (alternatively in Angular) and is apparently coming soon to Vue. Compelling. Nader Dabit has a first-look blog post that is pretty well done.

This all starts to get confusing to me as apparently Ionic ultimately uses Cordova under the hood... just like PhoneGap does? Or something? But now Ionic is moving to their own thing? I guess it makes sense that there are some low-level interpreter things that translate web primitives to native primitives and that people build developer tooling on top of that.

Google's got a stake in this game with Flutter. Flutter is about hitting all three targets and helping you build the UI. Material design, animation and performance are all first-class citizens. It's all in Dart though. Dart can compile to JavaScript (so it can be used for web stuff) but it also compiles to machine code. I imagine Flutter apps are compiled that way when they become native apps for bonus performance. I don't have a good sense of how popular Dart is, but I'd assume web developers really won't care what they're writing in if great performance on all three targets is the outcome.

Ever further outside my wheelhouse is Xamarin, which is Microsoft's take on unifying development on multiple platforms. The languages involved here are .NET and C#. It has all the same promises as everything else: build with this and it works everywhere! This is for developer convenience! It's fast and you will make amazing things with it!

I'm always of two minds with all this stuff. Some part of me is envious of really nice native apps. Most of my favorite apps on my phone feel very native, although I'm not sure I could spot which framework created them, if any. For example, I have a Dribbble app on my phone and I quite like it. It's simple and nice. I open it up and I'm logged in, which is usually not the case when I open a web app. It feels fast and has all the in-page animation stuff you expect from a native app. I totally wish we had an app like that for CodePen. Maybe if we were starting over today we'd write it in some cross-platform framework that targets all three platforms and maybe gives us some cool competitive advantage. Another part of me is like, meh, I'm a web guy on purpose. I think the native open web is the place to be and has the most longevity. A codebase that serves that well will be the least regrettable over time.

The post The Landscape of Cross-Platform App Development appeared first on CSS-Tricks.

Weekly Platform News: WebAPK Limited to Chrome, Discernible Focus Rectangles, Modal Window API

Css Tricks - Thu, 10/24/2019 - 12:07pm

In this week's roundup: "Add to home screen" has different meanings in Android, Chrome and Edge add some pop to focus rectangles on form inputs, and how third-party sites may be coming to a modal near you.

Let's get into the news.

WebAPKs are not available to Firefox on Android

On Android, both Chrome and Firefox have an “Add to home screen” option, but while Firefox merely adds a shortcut for the web app to the user’s home screen, Chrome actually installs the web app (as long as it meets the PWA install criteria) via a WebAPK.

Progressive Web Apps installed in such a way are added to the device’s app drawer, and URLs that are within the PWA’s scope (as specified in its manifest) open in the PWA instead of the default browser.

Tiger Oakes, who is implementing PWA-related features at Mozilla, explains why Firefox cannot install PWAs on Android: “WebAPK is not available to us since we don’t own an app store like Google Play and Galaxy Apps.”

(via Tiger Oakes)

More accessible focus rectangles are coming to Chrome and Edge

Microsoft and Google have made accessibility improvements to various form controls. The two main changes are the larger touch targets on the time and date inputs, and the redesigned focus rectangles that are now easily discernible on any background.

The updated form controls are available in the preview version of Edge. Mac users may have to manually enable the “Web Platform Fluent Controls” flag on the about:flags page.

(via Microsoft Edge Dev)

A newly proposed API for loading third-parties in modal windows

The proposed Modal Window API would allow a website to load another website in a modal window (in a top-level browsing context) for the purposes of authentication, payments, sharing, access to third-party services, etc.

Only a single modal window would be allowed at a time, and the two websites could communicate with each other via message events (postMessage method).

This API is intended as a better alternative to existing methods, such as pop-ups, which can be confusing to users and blocked by browsers, and redirects, which cause the original context to be torn down and recreated (or completely lost in the case of an error in the third-party service).

(via Adrian Hope-Bailie)

More news...

Read even more news in my weekly Sunday issue that can be delivered to you via email every Monday morning.

More News

The post Weekly Platform News: WebAPK Limited to Chrome, Discernible Focus Rectangles, Modal Window API appeared first on CSS-Tricks.

Understanding How Reducers are Used in Redux

Css Tricks - Thu, 10/24/2019 - 4:21am

A reducer is a function that determines changes to an application’s state. It uses the action it receives to determine this change. We have tools, like Redux, that help manage an application’s state changes in a single store so that they behave consistently.

Why do we mention Redux when talking about reducers? Redux relies heavily on reducer functions that take the previous state and an action in order to determine the next state.

We’re going to focus squarely on reducers in this post. Our goal is to get comfortable working with the reducer function so that we can see how it is used to update the state of an application — and ultimately understand the role they play in a state manager, like Redux.

What we mean by “state”

State changes are based on a user’s interaction, or even something like a network request. If the application’s state is managed by Redux, the changes happen inside a reducer function — this is the only place where state changes happen. The reducer function makes use of the initial state of the application and something called action, to determine what the new state will look like.

If we were in math class, we could say:

initial state + action = new state

In terms of an actual reducer function, that looks like this:

const contactReducer = (state = initialState, action) => {
  // Do something
}

Where do we get that initial state and action? Those are things we define.

The state parameter

The state parameter that gets passed to the reducer function has to be the current state of the application. In this case, we’re calling that our initialState because it will be the first (and current) state and nothing will precede it.

contactReducer(initialState, action)

Let’s say the initial state of our app is an empty list of contacts and our action is adding a new contact to the list.

const initialState = { contacts: [] }

That creates our initialState, which is equal to the state parameter we need for the reducer function.

The action parameter

An action is an object that contains two keys and their values. The state update that happens in the reducer is always dependent on the value of action.type. In this scenario, we are demonstrating what happens when the user tries to create a new contact. So, let’s define the action.type as NEW_CONTACT.

const action = {
  type: 'NEW_CONTACT',
  // The payload carries the data the reducer needs to build the new contact
  payload: {
    name: 'John Doe',
    location: 'Lagos Nigeria',
    email: 'johndoe@example.com'
  }
}

There is typically a payload value that contains what the user is sending, and it is used to update the state of the application. It is important to note that action.type is required, but action.payload is optional. Making use of payload brings a level of structure to how the action object looks.

Updating state

The state is meant to be immutable, meaning it shouldn’t be changed directly. To create an updated state, we can make use of Object.assign or opt for the spread operator.

Object.assign

const contactReducer = (state, action) => {
  switch (action.type) {
    case 'NEW_CONTACT':
      return Object.assign({}, state, {
        contacts: [
          ...state.contacts,
          action.payload
        ]
      })
    default:
      return state
  }
}

In the above example, we made use of Object.assign() to make sure that we do not change the state value directly. Instead, it allows us to return a new object which is filled with the state that is passed to it and the payload sent by the user.

To make use of Object.assign(), it is important that the first argument is an empty object. Passing the state as the first argument will cause it to be mutated, which is what we’re trying to avoid in order to keep things consistent.

The spread operator

The alternative to object.assign() is to make use of the spread operator, like so:

const contactReducer = (state, action) => {
  switch (action.type) {
    case 'NEW_CONTACT':
      return {
        ...state,
        contacts: [...state.contacts, action.payload]
      }
    default:
      return state
  }
}

This ensures that the incoming state stays intact as we append the new item to the bottom.

Working with a switch statement

Earlier, we noted that the update that happens depends on the value of action.type. The switch statement conditionally determines the kind of update we're dealing with, based on the value of action.type.

That means that a typical reducer will look like this:

const addContact = (state, action) => {
  switch (action.type) {
    case 'NEW_CONTACT':
      return {
        ...state,
        contacts: [...state.contacts, action.payload]
      }
    case 'UPDATE_CONTACT':
      return {
        // Handle contact update
      }
    case 'DELETE_CONTACT':
      return {
        // Handle contact delete
      }
    case 'EMPTY_CONTACT_LIST':
      return {
        // Handle contact list
      }
    default:
      return state
  }
}

It’s important that we return state as our default for when the value of action.type specified in the action object does not match what we have in the reducer — say, if for some unknown reason, the action looks like this:

const action = {
  type: 'UPDATE_USER_AGE',
  payload: {
    age: 19
  }
}

Since we don’t have this kind of action type, we’ll want to return what we have in the state (the current state of the application) instead. All that means is we’re unsure of what the user is trying to achieve at the moment.

Putting everything together

Here’s a simple example of how I implemented the reducer function in React.

See the Pen
reducer example
by Kingsley Silas Chijioke (@kinsomicrote)
on CodePen.

You can see that I didn’t make use of Redux, but this is very much the same way Redux uses reducers to store and update state changes. The primary state update happens in the reducer function, and the value it returns sets the updated state of the application.
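If you'd like to see the same reducer plugged into Redux itself, a minimal sketch (assuming the contactReducer and initialState from above, with Redux installed) looks like this:

import { createStore } from "redux"

// Create a store from our reducer, seeding it with the initial state from earlier.
const store = createStore(contactReducer, initialState)

// Dispatching an action runs the reducer with the current state and this action...
store.dispatch({
  type: 'NEW_CONTACT',
  payload: {
    name: 'John Doe',
    location: 'Lagos Nigeria',
    email: 'johndoe@example.com'
  }
})

// ...and the value the reducer returns becomes the new state.
console.log(store.getState()) // { contacts: [ { name: 'John Doe', ... } ] }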

Want to give it a try? You can extend the reducer function to allow the user to update the age of a contact. I’d like to see what you come up with in the comment section!

Understanding the role that reducers play in Redux should give you a better understanding of what happens underneath the hood. If you are interested in reading more about using reducers in Redux, it’s worth checking out the official documentation.

The post Understanding How Reducers are Used in Redux appeared first on CSS-Tricks.

ImageKit.io: Image Optimization That Plugs Into Your Infrastructure

Css Tricks - Thu, 10/24/2019 - 4:20am

Images are the most efficient means to showcase a product or service on a website. They make up most of the visual content on our websites.

But the more images a webpage has, the more bandwidth it consumes, which affects the page load speed, a factor with a significant impact not just on our search ranking, but also on our conversion rates.

Image optimization helps serve the right images and improve the page load time. A faster website has a positive impact on our user experience.

With image optimization becoming a standard practice for websites and apps, finding a service that offers the most competent features with a coherent pricing model, one that integrates seamlessly with the existing infrastructure and business needs, is paramount to any website and its efficiency.

So what are the most common criteria for selecting an image optimization tool?

The importance of image optimization isn’t up for debate, but our choice of tool or service for it may just be. And it is a factor we should consider carefully. So where are the challenges?

  • Inability to integrate with existing infrastructure - Image optimization is very fundamental on modern websites. To implement it, we should not have to make any changes to our existing setup. But unfortunately, a lot of these tools or services can only be used with specific CDNs or are incapable of being integrated with our existing image servers or storage.
  • Dependency on their image storage - Some tools require us to move our images to their system, making us more dependent on them — an added hassle. And nobody wants to spend their time and effort on a virtually unnecessary step like migrating assets from one platform to another (and may be migrating to some other service in the future, if this one doesn’t work out). Not when it’s not needed. Not if we have hundreds of thousands of images and can never be sure if all the images have been migrated to the new tool or not.

Apart from the above challenges, it is possible that our choice of tool is expensive or has a complex pricing structure, or that it offers only basic image-related features or fails to deliver excellent performance across the globe.

Enter ImageKit.io.

It is the only tool that we will need for image optimization and transformation with minimal effort and almost no changes to our existing infrastructure.

It is a complete image CDN with optimization and transformation capabilities for images on websites and apps. What does that mean?

Feed ImageKit.io an original, un-optimized image and fetch it using an ImageKit.io URL, and it shall deliver an optimized, rightly resized image to our user’s devices!

For example:

ImageKit.io resizes the image automatically by simply specifying the size in the image URL.

https://ik.imagekit.io/demo/img/tr:w-300,h-100/default-image.jpg

Here, the width and height of the final image are specified with “w” and “h” parameters in the URL. The output image has dimensions 300×100px, scaled down from an original size of 1000×1000px.
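Because the transformations are just URL parameters, generating these URLs from application code is plain string building. Here's a rough sketch, where the helper name and the demo path are purely illustrative:

// Hypothetical helper that follows the URL pattern shown above
const imageKitUrl = (path, { width, height }) =>
  `https://ik.imagekit.io/demo/img/tr:w-${width},h-${height}/${path}`;

console.log(imageKitUrl('default-image.jpg', { width: 300, height: 100 }));
// -> https://ik.imagekit.io/demo/img/tr:w-300,h-100/default-image.jpg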

As we know, the right image formats can have a significant impact on our website’s bandwidth consumption.

For instance, the following PNG image,

https://ik.imagekit.io/demo/img/seo-img_E3VIYyKu6.jpg?tr=f-png

is almost 2.5x the size of its JPG variant.

ImageKit.io automatically uses the correct output image format based on image content, image quality, browser support, and user device capabilities. Also, the above image will be converted and delivered as a WebP for all browsers supporting the WebP image format.

Just turn it on, and we’re good to go!

And not just simple resizing and cropping, ImageKit.io allows for more advanced transformations like smart crop, image and text overlays, image trimming, blurring, contrast and sharpness correction, and many more. It also comes with advanced delivery features like Brotli compression for SVG images and image security. ImageKit.io is the solution to all our image optimization needs. And more.

It also comes bundled with a stellar infrastructure — AWS CloudFront CDN, multi-region core processing servers, and a media library with automatic global backups.

Integrations

It isn’t enough to find the most appropriate tool for our website. It should also provide an optimum experience fitting in just right with our existing setup.

ImageKit.io integration with the existing infrastructure

With S3 bucket or other web servers

It is usually a tedious task to integrate any image optimization tool into our infrastructure. For example, for images, we would already have a CMS in place to upload the images. And a server or storage to store all of those images. We would not want to make any significant changes to the existing setup that could potentially disrupt all our team's processes.

But with ImageKit.io, the whole process from integration to image delivery takes just a few minutes. ImageKit.io allows us to plug our S3 bucket or webserver and start delivering optimized and transformed images instantly. No movement or re-upload of images is required. It takes just a few minutes to get the best optimization results.

We can attach our server or storage using “ADD ORIGIN” in the ImageKit.io dashboard

Such simple integrations with AWS S3 or web servers are not available even with some leading players in the market.

Images fetched through ImageKit.io are automatically optimized and converted to the appropriate format. And we can resize and transform any image by just adding URL parameters.

For example:

Image with a width of 300px and a height that is adjusted automatically to preserve the aspect ratio:

https://ik.imagekit.io/your_imagekit_id/rest-of-the-path.jpg?tr=w-300

or,

https://ik.imagekit.io/your_imagekit_id/tr:w-300/rest-of-the-path.jpg

We are using the https://ik.imagekit.io/your_imagekit_id format for the URLs, but custom domain names are available with ImageKit.io.

Learn more about it here.

Media library

ImageKit.io also provides a media library, a feature missing in many prominent image optimization tools. Media library is a highly available file storage for all ImageKit.io users. It comes with a simple user interface to upload, search, and manage files, images, and folders.

ImageKit.io Media Library

With platforms like Magento and WordPress

Most SMBs use WordPress to build their websites. ImageKit.io proves handy and almost effortless to set up for these websites.

Installing a plugin on WordPress and making some minor changes in the settings from the WP plugin is all we need to do to integrate ImageKit.io with our WordPress website for all essential optimizations. Learn more about it here.

Similarly, we can integrate ImageKit.io with our Magento store. The essential format and quality optimizations do not require any code change and can be set up by making some changes in Settings in the Magento Admin Panel. Learn more about it here.

For advanced use cases like granular image quality control, smart cropping, watermarking, text overlays, and DPR support, ImageKit.io offers a much better solution, with more robust features, more control over the optimizations and more complex transformations, than the ones provided natively by WordPress or Magento. And we can implement them by making some changes in our website's template files.

With other CDNs

Most businesses already use CDNs for their websites, whether because they are under contracts, because they run processes other than (or in addition to) image optimization on those CDNs, or simply because a particular CDN performs better in a specific region.

Even then, it is possible to use ImageKit.io with our choice of CDN. They support integrations with:

  • Akamai
  • Azure
  • Alibaba Cloud CDN
  • CloudFlare (if one is on an Enterprise plan in CloudFlare)
  • CloudFront
  • Fastly
  • Google CDN
  • Zenedge

To ensure correct optimizations and caching on our CDN, their team works closely with their customer’s team to set up the configuration.

Custom domain names

As image optimization has become a norm, services like LetsEncrypt, which make obtaining and deploying an SSL certificate free, have also become standard practice. In such a scenario, enabling a custom domain name should not be charged heavily.

ImageKit.io helps us out here too.

It offers one custom domain name like images.example.com, SSL and HTTP2 enabled, free of charge, with each paid account. Additional domain names are available at a nominal one-time fee of $50. There are no hidden costs after that.

One ImageKit.io account, multiple websites

For image optimization needs for multiple websites or agencies handling hundreds of websites, ImageKit.io would prove to be an ideal solution. We’re free to attach as many “origins” (storages or servers) with ImageKit.io. What that means is each website has its storage or image server, which can be mapped to separate URLs within the same ImageKit.io account.

Configure the origin for a new site in the ImageKit.io dashboard and map it against a URL endpoint

The added advantage of using ImageKit.io for multiple websites and apps is their pricing model. When one consolidates the usage across their many business accounts, and if it crosses the 1TB mark, they can reach out to ImageKit.io’s team for a special quote. Learn more about it here.

Multiple server regions

ImageKit.io is a global image CDN. It has a distributed infrastructure with processing regions around the world, in North Virginia (US East), North California (US West), Frankfurt (EU), Mumbai (India), Sydney (Australia) and Singapore, aiming to reduce latency between their servers and our image origins.

Multiple server regions with processing and delivery nodes across the globe

Additionally, ImageKit.io stores the images (transformed as well as original) in their cache after fetching them from our origins. This intermediate caching is done to avoid processing the images again. Doing so not only reduces the load on our origin servers and storage but also reduces data transfer costs from them.

Custom cache time

There are times when a shorter cache time is required. For cases where one or more images are set to change at intervals, while the others remain the same, ImageKit.io offers the option to set up custom cache times in place of the default 180 days preset in ImageKit.io, giving us more control over how our resources are cached.

Custom cache time

This feature is usually not provided by third-party image CDNs or is available to enterprise customers only, but it is available with ImageKit.io on request.

Deliver non-image files through ImageKit.io CDN

ImageKit.io comes integrated with a CDN to store and serve resources. And while the primary use case is to deliver optimized images, it can do the same for the other static assets on our website, like JS, CSS, and fonts.

There is no point dividing our static resources between two services when one, ImageKit.io, can do the whole job on its own.

It’s worth a mention here that though ImageKit.io provides delivery for non-image assets, it does not process or optimize them.

With ImageKit.io’s pricing model in mind, using their CDN may prove beneficial as it bills only based on one parameter — the total bandwidth consumed.

Cost efficiency

Most image optimization tools charge us for image storage or bill us based on requests or the number of master images or transformations.

ImageKit.io bills its customers on a single parameter, their output bandwidth. With ImageKit.io, we can optimize and transform to our heart’s content, without worrying about the costs they may incur. With our focus no longer on the billing costs, we can now focus on important tasks, like optimizing our images and website.

Usually, when an ImageKit.io account crosses 1TB total bandwidth consumption, that account becomes eligible for special pricing.

One can contact their team for a quote after crossing that threshold.

Final thoughts

Image optimization holds many benefits, but are we doing it right? And are we using the best service in the market for it?

ImageKit.io might prove to be the solution for all image optimization and management needs. But we don’t have to take their word for it. We can know for ourselves by joining the many developers and companies using ImageKit.io to deliver best-in-class image optimizations to their users by signing up for free here.

The post ImageKit.io: Image Optimization That Plugs Into Your Infrastructure appeared first on CSS-Tricks.

Why Are Accessible Websites so Hard to Build?

Css Tricks - Wed, 10/23/2019 - 7:36am

I was chatting with some front-end folks the other day about why so many companies struggle at making accessible websites. Why are accessible websites so hard to build? We learn about HTML, we make sure things are semantic and — voila! — we have an accessible website. During the course of conversation, someone mentioned the Domino's pizza legal case, which is perhaps the most public example of a company being sued because of a lack of accessibility.

Here’s an interesting tidbit from that link:

According to CNBC, the number of lawsuits over inaccessible websites jumped 58 percent last year over 2017, to more than 2,200.

Inaccessible websites are not just a consideration for designers and engineers but a serious problem for a company’s legal team as well. Thankfully, it seems more of these cases will be brought to trial and (my personal hope is) this will get folks to care more about semantics and front-end development best practices. Although I'd like to think that companies would do what’s best for the web and make websites that meet the baseline requirements without a legal threat, we absolutely need to make inaccessible websites illegal for folks to really pay attention to this issue.

However! I also worry about attributing what might simply be a lack of knowledge to malice. I reckon a lot of websites have bad accessibility not because folks don’t care, but because they don’t know there's an issue in the first place. As my conversation with front-end engineers progressed, I realized that the reason accessibility isn’t tackled seriously probably doesn’t have anything to do with bandwidth, or experience, or money.

I reckon the problem is that the accessibility of a website can be invisibly and silently broken.

Here’s an example: when developing a site, JavaScript errors are probably going to be caught because everything breaks if something goes wrong. And CSS bugs are going to get caught because something will look off. But the accessibility or performance of a website can go from okay to terrible overnight and with no warning whatsoever. The only way to fix these invisibly broken things is to first make them visible.

So, here’s an idea: what if our text editors caught accessibility issues and showed them to us during development? It could look something like this:

I’m sure there are a ton of other ways we can make accessibility issues more public and visible. There are tools such as Lighthouse and browser extensions that are already out there, but making accessibility (and even performance, another silent fail) a part of our minute-to-minute workflow ensures that we can’t ignore it. Something like this would encourage us to learn about the problems, give us links to potential solutions, and encourage us all to care for a relatively misunderstood part of front-end development.

The post Why Are Accessible Websites so Hard to Build? appeared first on CSS-Tricks.

What I Like About Writing Styles with Svelte

Css Tricks - Wed, 10/23/2019 - 4:22am

There’s been a lot of well-deserved hype around Svelte recently, with the project accumulating over 24,000 GitHub stars. Arguably the simplest JavaScript framework out there, Svelte was written by Rich Harris, the developer behind Rollup. There’s a lot to like about Svelte (performance, built-in state management, writing proper markup rather than JSX), but the big draw for me has been its approach to CSS.

Single file components

React does not have an opinion about how styles are defined
—React Documentation

A UI framework that doesn't have a built-in way to add styles to your components is unfinished.
—Rich Harris, creator of Svelte

In Svelte, you can write CSS in a stylesheet like you normally would on a typical project. You can also use CSS-in-JS solutions, like styled-components and Emotion, if you'd like. It’s become increasingly common to divide code into components, rather than by file type. React, for example, allows for the collocation of a component's markup and JavaScript. In Svelte, this is taken one logical step further: the JavaScript, markup and styling for a component can all exist together in a single .svelte file. If you’ve ever used single file components in Vue, then Svelte will look familiar.

// button.svelte
<style>
  button {
    border-radius: 0;
    background-color: aqua;
  }
</style>

<button>
  <slot/>
</button>

Styles are scoped by default

By default, styles defined within a Svelte file are scoped. Like CSS-in-JS libraries or CSS Modules, Svelte generates unique class names when it compiles to make sure the styles for one element never conflict with styles from another.

That means you can use simple element selectors like div and button in a Svelte component file without needing to work with class names. If we go back to the button styles in our earlier example, we know that a ruleset for <button> will only be applied to our <Button> component — not to any other HTML button elements within the page. If you were to have multiple buttons within a component and wanted to style them differently, you'd still need classes. Classes will also be scoped by Svelte.

The classes that Svelte generates look like gibberish because they are based on a hash of the component styles (e.g. svelte-433xyz). This is far easier than a naming convention like BEM. Admittedly though, the experience of looking at styles in DevTools is slightly worse as the class names lack meaning.

The markup of a Svelte component in DevTools.

It’s not an either/or situation. You can use Svelte’s scoped styling along with a regular stylesheet. I personally write component specific styles within .svelte files, but make use of utility classes defined in a stylesheet. For global styles to be available across an entire app — CSS custom properties, reusable CSS animations, utility classes, any ‘reset’ styles, or a CSS framework like Bootstrap — I suggest putting them in a stylesheet linked in the head of your HTML document.

It lets us create global styles

As we've just seen, you can use a regular stylesheet to define global styles. Should you need to define any global styles from within a Svelte component, you can do that too by using :global. This is essentially a way to opt out of scoping when and where you need to.

For example, a modal component may want to toggle a class to style the body element:

<style>
  :global(.noscroll) {
    overflow: hidden;
  }
</style>

Unused styles are flagged

Another benefit of Svelte is that it will alert you about any unused styles during compilation. In other words, it searches for places where styles are defined but never used in the markup.
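For instance, in a minimal sketch like this, the .unused rule would be flagged at compile time because nothing in the markup ever matches it:

<style>
  button {
    background-color: aqua;
  }

  /* Flagged by the compiler: no element in this component uses this class */
  .unused {
    color: red;
  }
</style>

<button>
  <slot/>
</button>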

Conditional classes are terse and effortless

If the JavaScript variable name and the class name is the same, the syntax is incredibly terse. In this example, I’m creating modifier props for a full-width button and a ghost button.

<script>
  export let big = false;
  export let ghost = false;
</script>

<style>
  .big {
    font-size: 20px;
    display: block;
    width: 100%;
  }

  .ghost {
    background-color: transparent;
    border: solid currentColor 2px;
  }
</style>

<button class:big class:ghost>
  <slot/>
</button>

A class of ghost will be applied to the element when a ghost prop is used, and a class of big is applied when a big prop is used.

<script> import Button from './Button.svelte'; </script> <Button big ghost>Click Me</Button>

Svelte doesn’t require class names and prop names to be identical.

<script> export let primary = false; export let secondary = false; </script> <button class:c-btn--primary={primary} class:c-btn--secondary={secondary} class="c-btn"> <slot></slot> </button>

The above button component will always have a c-btn class but will include modifier classes only when the relevant prop is passed in, like this:

<Button primary>Click Me</Button>

That will generate this markup:

<button class="c-btn c-btn--primary">Click Me</button>

Any number of arbitrary classes can be passed to a component with a single prop:

<script> let class_name = ''; export { class_name as class }; </script> <button class="c-btn {class_name}"> <slot /> </button>

Then, classes can be used much the same way you would with HTML markup:

<Button class="mt40">Click Me</Button> From BEM to Svelte

Let's see how much easier Svelte makes writing styles compared to a standard CSS naming convention. Here's a simple component coded up using BEM.

.c-card {
  border-radius: 3px;
  border: solid 2px;
}

.c-card__title {
  text-transform: uppercase;
}

.c-card__text {
  color: gray;
}

.c-card--featured {
  border-color: gold;
}

Using BEM, classes get long and ugly. In Svelte, things are a lot simpler.

<style>
  div {
    border-radius: 3px;
    border: solid 2px;
  }

  h2 {
    text-transform: uppercase;
  }

  p {
    color: gray;
  }

  .featured {
    border-color: gold;
  }
</style>

<div class:featured>
  <h2>{title}</h2>
  <p>
    <slot />
  </p>
</div>

It plays well with preprocessors

CSS preprocessors feel a lot less necessary when working with Svelte, but they can work perfectly alongside one another by making use of a package called Svelte Preprocess. Support is available for Less, Stylus and PostCSS, but here we'll look at Sass. The first thing we need to do is to install some dependencies:

npm install -D svelte-preprocess node-sass

Then we need to import autoPreprocess in rollup.config.js at the top of the file.

import autoPreprocess from 'svelte-preprocess';

Next, let’s find the plugins array and add preprocess: autoPreprocess() to Svelte:

export default { plugins: [ svelte({ preprocess: autoPreprocess(), ...other stuff

Then all we need to do is specify that we’re using Sass when we’re working in a component file, by adding type="text/scss" or lang="scss" to the style tag.

<style type="text/scss"> $pink: rgb(200, 0, 220); p { color: black; span { color: $pink; } } </style> Dynamic values without a runtime

We’ve seen that Svelte comes with most of the benefits of CSS-in-JS out-of-the-box — but without external dependencies! However, there’s one thing that third-party libraries can do that Svelte simply can’t: use JavaScript variables in CSS.

The following code is not valid and will not work:

<script>
  export let cols = 4;
</script>

<style>
  ul {
    display: grid;
    width: 100%;
    grid-column-gap: 16px;
    grid-row-gap: 16px;
    grid-template-columns: repeat({cols}, 1fr);
  }
</style>

<ul>
  <slot />
</ul>

We can, however, achieve similar functionality by using CSS variables.

<script>
  export let cols = 4;
</script>

<style>
  ul {
    display: grid;
    width: 100%;
    grid-column-gap: 16px;
    grid-row-gap: 16px;
    grid-template-columns: repeat(var(--columns), 1fr);
  }
</style>

<ul style="--columns:{cols}">
  <slot />
</ul>

I’ve written CSS in all kinds of different ways over the years: Sass, Shadow DOM, CSS-in-JS, BEM, atomic CSS and PostCSS. Svelte offers the most intuitive, approachable and user-friendly styling API. If you want to read more about this topic then check out the aptly titled The Zen of Just Writing CSS by Rich Harris.

The post What I Like About Writing Styles with Svelte appeared first on CSS-Tricks.

Digging Into the Preview Loading Animation in WordPress

Css Tricks - Tue, 10/22/2019 - 4:24am

WordPress shipped the Block Editor (aka Gutenberg) back in version 5.0 and with it came a snazzy new post preview screen that shows the WordPress logo drawing itself while the preview loads.

That's what you get when saving a post draft and clicking the "Preview" button in the editor. How'd they make it? I had a tough time viewing source for the page because the preview loads up pretty quickly, but I did see that SVG is being used for the WordPress logo. That's all I really needed because my mind instantly went back to something Chris wrote in 2014 that uses the stroke-dasharray and stroke-dashoffset properties to create the same effect.

This is the example Chris shared in that post:

See the Pen bGyoz by Chris Coyier (@chriscoyier) on CodePen.

I've since been able to get my hands on the CSS to confirm that the WordPress drawing is indeed using the same technique, but I'll share how my mind broke things out while I was trying to reverse-engineer it.

We're working with a straight-up inlined SVG

The neat thing about the WordPress version is that we're working with two SVG paths instead of one. That means we have two parts that appear to be drawing at the same time. Here's the SVG which is inlined in the HTML. We've got that "Generating preview" text as well, which can live outside the SVG.

<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 96 96" role="img" aria-hidden="true" focusable="false"> <path d="M48 12c19.9 0 36 16.1 36 36S67.9 84 48 84 12 67.9 12 48s16.1-36 36-36" fill="none"></path> <path d="M69.5 46.4c0-3.9-1.4-6.7-2.6-8.8-1.6-2.6-3.1-4.9-3.1-7.5 0-2.9 2.2-5.7 5.4-5.7h.4C63.9 19.2 56.4 16 48 16c-11.2 0-21 5.7-26.7 14.4h2.1c3.3 0 8.5-.4 8.5-.4 1.7-.1 1.9 2.4.2 2.6 0 0-1.7.2-3.7.3L40 67.5l7-20.9L42 33c-1.7-.1-3.3-.3-3.3-.3-1.7-.1-1.5-2.7.2-2.6 0 0 5.3.4 8.4.4 3.3 0 8.5-.4 8.5-.4 1.7-.1 1.9 2.4.2 2.6 0 0-1.7.2-3.7.3l11.5 34.3 3.3-10.4c1.6-4.5 2.4-7.8 2.4-10.5zM16.1 48c0 12.6 7.3 23.5 18 28.7L18.8 35c-1.7 4-2.7 8.4-2.7 13zm32.5 2.8L39 78.6c2.9.8 5.9 1.3 9 1.3 3.7 0 7.3-.6 10.6-1.8-.1-.1-.2-.3-.2-.4l-9.8-26.9zM76.2 36c0 3.2-.6 6.9-2.4 11.4L64 75.6c9.5-5.5 15.9-15.8 15.9-27.6 0-5.5-1.4-10.8-3.9-15.3.1 1 .2 2.1.2 3.3z" fill="none"></path> </svg> <p>Generating preview...</p>

The first path is an ellipse that acts as a border around the second path, which is the shape of the WordPress logo. It's probably a good idea to give each path a class — especially if this isn't the only element on the page — but I decided to leave things class-less since this is the only element in the demo. We can select both paths together in CSS or use pseudo-selectors (e.g. path:nth-child(2)) to select them individually in this instance.

There are a few other housekeeping things happening in there. For example, the SVG gets attributes to make it more accessible, such as identifying it as an image, hiding it from screen readers, and preventing it from being focused.

We need to lightly style the SVG

Very, very light styling. We need a stroke since there is no fill color on the paths. Otherwise we'll be looking at a whole lot of nothing. Well, an invisible shape, but essentially a nothing burger.

svg { stroke: #555; stroke-width: 0.5; width: 250px; }

That gives us the outline for both paths. The nice thing about the stroke-width property is that it accepts decimal values, so we get to make the lines a little thinner. The drawing sorta looks like it's drawn in pencil that way.

The width is pretty arbitrary here, but it's important because the stroke-dasharray and stroke-dashoffset properties we'll be working with rely on it. If those property values are smaller than the size of the SVG, then the drawing will stop short of completing. If it's too large, then the speed of the drawing becomes too fast.

Now that we know our width and can see the path strokes, we can set stroke-dasharray and stroke-dashoffset accordingly.

svg path { stroke-dasharray: 300; stroke-dashoffset: 300; }

A little larger than the SVG and lots of space between the dashes, which should be just about right. Those values can be adjusted to tailor the animation to your liking.
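If you'd rather not eyeball those numbers, you could measure each path's exact length at runtime with getTotalLength() and set both properties from JavaScript. A minimal sketch of that idea:

// Size the dash pattern to the real length of each path
document.querySelectorAll('svg path').forEach((path) => {
  const length = path.getTotalLength();
  path.style.strokeDasharray = length;
  path.style.strokeDashoffset = length;
});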

The rest is merely using Chris' technique

The drawing is a CSS animation using one keyframe. If we start the stroke-dashoffset at zero, then the paths will be invisible on initial load and grow to the 300 value we set earlier when the animation reaches 100%. Again, we set the offset at 300 so that the stroke dashes and the spaces between them will extend beyond the SVG to cover the entire thing.

All the magic is a mere five lines of code:

@keyframes draw { 0% { stroke-dashoffset: 0; } }

Name the animation whatever you'd like. We can get away with only defining the animation at 0% since 100% is implicit.

Oh! And we've gotta attach the animation to the paths, so:

svg path { animation: draw 2s ease-out infinite alternate; stroke-dasharray: 300; stroke-dashoffset: 300; }

You can definitely tweak those values as well to speed things up or down. Easing gives the animation that slight pulsing effect where it starts and ends a little slower than the middle of the movement.

All together now!

See the Pen
WordPress Preview Loading State
by Geoff Graham (@geoffgraham)
on CodePen.

I mentioned it earlier, but I was eventually able to snatch the source code from the actual implementation and it's pretty darn close, using the same principles.

The post Digging Into the Preview Loading Animation in WordPress appeared first on CSS-Tricks.

Netlify Build Plugins Announcement

Css Tricks - Tue, 10/22/2019 - 3:25am

Netlify just dropped a new thing: Build Plugins. (It's in beta, so you have to request access for now.) Here's my crack at explaining it, which is heavily informed from David Well's announcement video.

You might think of Netlify as that service that makes it easy to sling up some static files from a repo and have a production site super fast. You aren't wrong. But let's step back and look at that. Netlify thinks about itself as a platform in three tiers:

  1. Netlify Build
  2. Netlify Dev
  3. Netlify Edge

Most of the stuff that Netlify does falls into those buckets. Connecting your Git repo and letting Netlify build and deploy the site? That's Build. Using Netlify's CLI to spin up the local dev environment and do stuff like test your local functions? That's Dev. The beefed-up CDN that actually runs our production sites? That's Edge. See the product page for that breakdown.

So even if you're just slapping up some files that come out of a static site generator, you're still likely taking advantage of all these layers. Build is taking care of the Git connection and possibly running an npm run build or something. You might run netlify dev locally to run your local dev server. And the live site is handled by Edge.

With this new Build Plugins release, Netlify is opening up access to how Build works. No longer is it just "connect to repo and run this command when the build runs." There is actually a whole lifecycle of things that happen during a build. This is how David described that lifecycle:

  1. Build Starts
  2. Cache is fetched
  3. Dependencies are installed
  4. Build commands are run
  5. Serverless Functions are built
  6. Cache is saved
  7. Deployment
  8. Post processing

What if you could hook into those lifecycle events and run your own code alongside them? That's the whole idea with Build Plugins. In fact, those lifecycle events are literally event hooks. Sarah Drasner listed them out with their official names in her intro blog post:

  • init: when the build starts
  • getCache: fetch the last build’s cache
  • install: when the project’s dependencies are installing
  • preBuild: runs directly before building the functions and running the build commands
  • functionsBuild: runs when the serverless functions are building, if they exist on the site
  • build: when the build commands are executing
  • package: package it to be deployed
  • preDeploy: runs before the built package is deployed
  • saveCache: save cached assets
  • finally: build finished, site deployed 🚀

To use these hooks and run your own code during the build, you write a plugin (in Node JavaScript) and chuck it in a plugins folder at like ./plugins/myPlugin/index.js

function netlifyPlugin(config) {
  return {
    name: 'my-plugin-name',
    init: () => {
      console.log('Hi from init')
    },
  }
}
module.exports = netlifyPlugin

...and adjust your Netlify config (file) to point to it. You're best off reading Sarah's post for the whole low-down and example.

Finished my first @Netlify build plugin. Now all I need is my pre-beta access to run a test.

KISS #jamstackconf pic.twitter.com/VIAnal1SpD

— Tony Alves (@3_Alves) October 17, 2019

OK. What's the point?

This is the crucial part, right? Kind of the only thing that matters. Having control is great and all, but it only matters if it's actually useful. So now that we can hook into parts of the build process on the platform itself, what can we make it do that makes our lives and sites better?

Here are some ideas I've gathered so far.

Sitemaps

David demoed having the build process build a sitemap. Sitemaps are great (for SEO), but I definitely don't need to be wasting time building them locally very often and they don't really need to be in my repo. Let the platform do it and put the file live as "a build artifact." You can do this for everything (e.g. my local build process needs to compile CSS and such, so I can actually work locally), but if production needs files that local doesn't, it's a good fit.

Notifications

Sarah demoed a plugin that hits a Twilio API to send a text message when a build completes. I do this same kind of thing having Buddy send a Slack message when this site's deployment is done. You can imagine how team communication can be facilitated by programmatic messaging like this.

Performance monitoring

Build time is a great time to get performance metrics. Netlify says they are working on a plugin to track your Lighthouse score between deployments. Why not run your SpeedCurve CLI thing or Build Tracker CLI there to see if you've broken your performance budget?

Performance expert @tkadlec wrote a Netlify Build Plugin for @SpeedCurve!
In this post, he breaks down how he used our new private beta feature to create some perf tests for your site. Great writeup! https://t.co/aYU9NmosQQ pic.twitter.com/FnX6KgeLRG

— Netlify (@Netlify) October 22, 2019

Optimizations

Why not use the build time to run all your image optimizations? Image Optim has an API you could hit. SVGO works on the command line and Netlify says they are working on that plugin already. I'd think some of this you'd want to run in your local build process (e.g. drop image in folder, Gulp is watching, image gets optimized) but remember you can run netlify dev locally which will run your build steps locally, and you could also organize your Gulp such that the code that does image optimization can build part of a watch process or called explicitly during a build.

Images are a fantastic target for optimization, but just about any resource can be optimized in some way!

I built a @Netlify build plugin to run subfont on your website. Seems like I don't have access to the build plugins myself yet, so I'd appreciate it if someone would try it out and give me feedback.https://t.co/PrnL65JSwb

— Peter Müller (@_munter_) October 18, 2019

Bailing out of problematic builds

If your build process fails, Netlify already won't deploy it. Clearly useful. But now you could trigger that failure yourself. What if that performance monitoring didn't just report on what is happening, but literally killed the build if a budget wasn't met? All you have to do is throw an error or process.exit, I hear.
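A rough sketch of what that could look like, reusing the plugin shape from earlier (the budget numbers and the check itself are made up for illustration):

function performanceBudgetPlugin(config) {
  return {
    name: 'fail-on-performance-budget',
    build: () => {
      // Stand-in for a real Lighthouse/SpeedCurve check
      const lighthouseScore = 68; // hypothetical result
      const budget = 80;          // hypothetical budget
      if (lighthouseScore < budget) {
        // Throwing (or calling process.exit(1)) marks the build as failed
        throw new Error('Performance budget not met, bailing out of this deploy')
      }
    },
  }
}
module.exports = performanceBudgetPlugin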

Even more baller, how about fail a build on an accessibility regression? Netlify is working on an Axe plugin for audits.

Clearly you could bail if your unit tests (e.g. Jest) fail, or your end-to-end tests (e.g. Cypress) fail, meaning you could watch for 404s and all sorts of user-facing problems and prevent problematic deploys altogether.

Use that build

Netlify is clearly all-in on this JAMstack concept. Some of it is pretty obvious. Chuck some static files on a killer CDN and the site has a wonderfully fast foundation. Some of it is less obvious. If you need server-powered code still, you still have it in the form of cloud functions, which are probably more powerful than most people realize. Some of it requires you to think about your site in a new way, like the fact that pre-building markup is not an all-or-nothing choice. You can build as much as you can, and leave client-side work to do things that are more practical for the client-side to do (e.g. personalized information). If you start thinking of your build process as this powerful and flexible tool to offload as much work as possible to, that's a great place to start.

The post Netlify Build Plugins Announcement appeared first on CSS-Tricks.
