Developer News

Zero hands up.

Css Tricks - Wed, 10/02/2019 - 4:13am

Asked an entire room full of webdevs yesterday if any of them knew that FF/Chrome/Opera/Brave/etc. for iOS weren't allowed to compete on engine quality.

Zero hands up.

— Alex Russell (@slightlylate) September 25, 2019

It's worth making this clear then. On iOS, the only browser engine is WebKit. There are other browsers, but they can't bring their own engine (Blink/Gecko). So, if you're using Chrome or Firefox on iOS, it's really the same engine Safari is using, only slightly less capable (e.g. no third-party content blocker apps work in them).

It's worth knowing that as a developer. While Chrome supports stuff like service workers on their desktop browser and on other platforms, the browser engine made available to non-Safari browsers on iOS does not. You don't have them there. Likewise for Firefox.
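A practical takeaway is to feature-detect capabilities rather than infer them from the browser's name. Here's a minimal sketch of that idea (the `supportsServiceWorker` helper is just for illustration):

```javascript
// Detect support rather than sniffing the user agent: "Chrome" on iOS is
// WebKit underneath and may lack APIs that Chrome has on other platforms.
function supportsServiceWorker(nav) {
  return Boolean(nav) && "serviceWorker" in nav;
}

// In a browser, register the worker only when the API actually exists.
if (typeof navigator !== "undefined" && supportsServiceWorker(navigator)) {
  navigator.serviceWorker.register("/sw.js");
}
```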

The post Zero hands up. appeared first on CSS-Tricks.

The Many Ways to Link Up Shapes and Images with HTML and CSS

Css Tricks - Tue, 10/01/2019 - 4:20am

Different website designs often call for a shape other than a square or rectangle to respond to a click event. Perhaps your site has some kind of tilted or curved banner where the click area would be awkwardly large as a straight rectangle. Or you have a large uniquely shaped logo where you only want that unique shape to be clickable. Or you have an interactive image that responds differently when different regions of it are clicked.

You can surround those assets with an un-styled <a> tag to get a clickable rectangle that's approximately the right size. However, you can also control the shape of that region with different techniques, making sure the target for your click area exactly matches what’s visible on the screen.

SVG shapes

If your click target is an image or a portion of an image, and you have the ability to choose SVG as its format, you already have a great deal of control over how that element will behave on your page. The simplest way to make a portion of an SVG clickable is to add an SVG hyperlink element to the markup. This is as easy as wrapping the target with an <a> tag, just as you would with a nested HTML element. Your <a> tag can surround a simple shape or more complex paths. It can surround a group of SVG elements or just one. In this example, the link for the bullseye wraps a single circle element, but the more complex arrow shape is made up of two polygons and a path element.
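As a minimal sketch of the idea (the coordinates and URL here are made up, not taken from the demo):

```html
<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <!-- Only the circle is clickable, not the whole rectangular viewBox -->
  <a href="https://example.com/bullseye">
    <circle cx="50" cy="50" r="20" fill="crimson" />
  </a>
</svg>
```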

See the Pen
target svg
by Bailey Jones (@bailey_jones)
on CodePen.

Note that I’ve used the deprecated xlink:href property in this demo to ensure that the link will work on Safari. The href alone would have worked in Internet Explorer, Chrome, and Firefox.

The only trick here is to make sure the <a> tag is inside the SVG markup and that the tag wraps the shape you want to be clickable. The viewBox for this SVG is still a rectangle, so wrapping the entire SVG element wouldn't have the same effect.

Image maps

Let’s say you don’t have control over the SVG markup, or that you need to add a clickable area to a raster image instead. It’s possible to apply a clickable target to a portion of an <img> tag using an image map.

Image maps are defined separately from the image source. The map will effectively overlay the entire image element, but it's up to you to define the clickable area. Unlike the hyperlink element in the SVG example, the coordinates in the image map don’t have anything to do with the definition of the source image. Image maps have been around since HTML 3, meaning they have excellent browser support. However, they can’t be styled with CSS alone to provide interactive cues, like we were able to do with SVG on hover — the cursor is the only visual indicator that the target area of the image can be clicked. There are, however, options for styling the areas with JavaScript.
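The basic wiring looks something like this (the image, coordinates, and URLs are hypothetical):

```html
<img src="target.png" alt="A target" usemap="#target-map" width="300" height="200">

<map name="target-map">
  <!-- x, y, radius for a circular region -->
  <area shape="circle" coords="150,100,40" href="/bullseye" alt="Bullseye">
  <!-- x1,y1, x2,y2 for a rectangular region -->
  <area shape="rect" coords="0,0,60,60" href="/corner" alt="Top-left corner">
</map>
```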

W3Schools has an excellent example of an image map using a picture of the solar system where the sun and planets are linked to close-up images of those targets — everywhere else in the image is un-clickable. That’s because the coordinates of the areas defined in their image map match the locations of the sun and planets in the base image.

Here’s another example from Derek Fogge that uses image maps to create more interesting click targets. It does use jQuery to style the areas on click, but notice the way a map overlays the image and coordinates are used to create the targets.

See the Pen
responsive image map demo
by Derek Fogge (@PositionRelativ)
on CodePen.

You can implement image maps on even more complex shapes too. In fact, let’s go back to the same target shape from the SVG example but using a raster image instead. We still want to link up the arrow and the bullseye but this time do not have SVG elements to help us out. For the bullseye, we know the X and Y coordinates and its radius in the underlying image, so it’s fairly easy to define a circle for the region. The arrow shape is more complicated. I used https://www.image-map.net to plot out the shape and generate the area for the image map — it’s made up of one polygon and one circle for the rounded edge at the top.

See the Pen
target image map
by Bailey Jones (@bailey_jones)
on CodePen.

Clip-path

What if you want to use CSS to define the shape of a custom click region without resorting to JavaScript for the styling? The CSS clip-path property provides considerable flexibility for defining and styling target areas on any HTML element.

Here we have a click area in the shape of a five-pointed star. The star is technically a polygon, so we could use a star-shaped base image and an image map with corresponding coordinates like we did in the previous image map example. However, let’s put clip-path to use. The following example shows the same clip-path applied to both a JPG image and an absolutely positioned hyperlink element.
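A rough sketch of that approach, with an approximate five-pointed-star polygon (the class names and percentages are illustrative, not taken from the demo):

```css
/* The same polygon clips both the image and the link stacked over it,
   so the click target matches the visible star. */
.star-image,
.star-link {
  clip-path: polygon(
    50% 0%, 61% 35%, 98% 35%, 68% 57%,
    79% 91%, 50% 70%, 21% 91%, 32% 57%,
    2% 35%, 39% 35%
  );
}

.star-link {
  position: absolute;
  inset: 0; /* cover the image */
}
```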

See the Pen
Clip-path
by Bailey Jones (@bailey_jones)
on CodePen.

Browser support for clip-path has gotten much better, but it can still be inconsistent for some values. Be sure to check support and vendor prefixes before relying on it.

We can also mix and match different approaches depending on what best suits the shape of a particular click target. Here, I’ve combined the "close" shape using Bennett Feely’s Clippy with an SVG hyperlink element to build the start of a clickable tic-tac-toe game. SVG is useful here to make sure the "hole" in the middle of the "O" shape isn’t clickable. For the "X" though, which is a polygon, a single clip-path can style it.

See the Pen
tic tac toe
by Bailey Jones (@bailey_jones)
on CodePen.

Again, beware of browser support especially when mixing and matching techniques. The demo above will not be supported everywhere.

CSS shapes without transparent borders

The clip-path property allowed us to apply a predefined shape to an HTML element of our choice, including hyperlink elements. There are plenty of other options for creating elements with HTML and CSS that aren’t squares and rectangles — you can see some of them in The Shapes of CSS. However, not all techniques will affect the shape of the click area as you might expect. Most of the examples in The Shapes of CSS rely on transparent borders, which the DOM will still recognize as part of your click target even if your users can’t see them. Other tricks like positioning, transform, and pseudo-elements like ::before and ::after will keep your styled hyperlink aligned with its visible shape.

Here’s a CSS heart shape that does not rely on transparent borders. You can see how the red heart shape is the only clickable area of the element.

See the Pen
Clickable heart
by Bailey Jones (@bailey_jones)
on CodePen.

Here’s another example that creates a CSS triangle shape using transparent borders. You can see how the click area winds up being outside the actual shape. Hover over the element and you’ll be able to see the true size of the click area.

See the Pen
clickable triangle
by Bailey Jones (@bailey_jones)
on CodePen.

Hopefully this gives you a good baseline understanding of the many ways to create clickable regions on images and shapes, relying on HTML and CSS alone. You may find that it’s necessary to reach for JavaScript in order to get a more advanced interactive experience. However, the combined powers of HTML, CSS, and SVG provide considerable options for controlling the precise shape of your click target.

The post The Many Ways to Link Up Shapes and Images with HTML and CSS appeared first on CSS-Tricks.

Enhancing The Clickable Area Size

Css Tricks - Tue, 10/01/2019 - 4:20am

Here’s a great post by Ahmad Shadeed on making sure that clickable areas in our interfaces are, well, clickable. He writes about making sure that links, buttons and other elements meet accessibility standards for both touch and mouse, too.

I particularly like the section where Ahmad writes about making a fake circle around an element and where he shows this example:

On a similar note, Andy Bell just wrote up a few notes about creating a semantic “breakout” button to make an entire element clickable. This is a common pattern where you might want a design element that looks like a card but the whole thing is effectively a single button. There are a few accessibility tips that Andy brings up that are very much worth taking a look at!

The same sort of thing might be said about buttons. Using a rounded border-radius trims the corners, creating small un-clickable areas.

Direct Link to ArticlePermalink

The post Enhancing The Clickable Area Size appeared first on CSS-Tricks.

Multi-Million Dollar HTML

Css Tricks - Mon, 09/30/2019 - 9:40am

Two stories:

  • Jason Grigsby finds that Chipotle's online ordering form uses an input-masking technique that chops up a credit card expiration year, making it invalid and thus denying the order. If pattern="\d\d" maxlength="2" (a native browser feature) were used instead, the browser is smart enough to do the right thing and not deny the order. Scratchpad math, based on published data, puts that at $4.4 million.
  • Adrian Roselli recalls an all-too-common form accessibility fail: missing for/id attributes on labels and inputs, resulting in an unusable experience and a scratchpad-math loss of $18 million to campaigns.
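For reference, the native pattern from the first story looks something like this (the id and label text are hypothetical):

```html
<label for="cc-exp-year">Expiration year (YY)</label>
<input id="cc-exp-year" name="cc-exp-year" type="text"
       pattern="\d\d" maxlength="2">
```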

The label/input attribution thing really gets me. I feel like that's an extremely basic bit of HTML knowledge that benefits both accessibility and general UX. It's part of every HTML curriculum I've ever seen, and regularly pointed at as something you need to get right. And hardly a week goes by that I don't find some production website not doing it.

We can do this, everyone!

<label for="name">Name:</label>
<input id="name" name="name" type="text">

<!-- or -->

<label>
  Name:
  <input name="name" type="text">
</label>

The post Multi-Million Dollar HTML appeared first on CSS-Tricks.

How I Learned to Stop Worrying and Love Git Hooks

Css Tricks - Mon, 09/30/2019 - 5:41am

The merits of Git as a version control system are difficult to contest, but while Git will do a superb job in keeping track of the commits you and your teammates have made to a repository, it will not, in itself, guarantee the quality of those commits. Git will not stop you from committing code with linting errors in it, nor will it stop you from writing commit messages that convey no information whatsoever about the nature of the commits themselves, and it will, most certainly, not stop you from committing poorly formatted code.

Fortunately, with the help of Git hooks, we can rectify this state of affairs with only a few lines of code. In this tutorial, I will walk you through how to implement Git hooks that will only let you make a commit provided that it meets all the conditions that have been set for what constitutes an acceptable commit. If it does not meet one or more of those conditions, an error message will be shown that contains information about what needs to be done for the commit to pass the checks. In this way, we can keep the commit histories of our code bases neat and tidy, and in doing so make the lives of our teammates, and not to mention our future selves, a great deal easier and more pleasant.

As an added bonus, we will also see to it that code that passes all the tests is formatted before it gets committed. What is not to like about this proposition? Alright, let us get cracking.

Prerequisites

In order to be able to follow this tutorial, you should have a basic grasp of Node.js, npm and Git. If you have never heard of something called package.json and git commit -m [message] sounds like code for something super-duper secret, then I recommend that you pay this and this website a visit before you continue reading.

Our plan of action

First off, we are going to install the dependencies that make implementing pre-commit hooks a walk in the park. Once we have our toolbox, we are going to set up three checks that our commit will have to pass before it is made:

  • The code should be free from linting errors.
  • Any related unit tests should pass.
  • The commit message should adhere to a pre-determined format.

Then, if the commit passes all of the above checks, the code should be formatted before it is committed. An important thing to note is that these checks will only be run on files that have been staged for commit. This is a good thing, because if this were not the case, linting the whole code base and running all the unit tests would add quite an overhead time-wise.

In this tutorial, we will implement the checks discussed above for some front-end boilerplate that uses TypeScript, Jest for the unit tests, and Prettier for the code formatting. The procedure for implementing pre-commit hooks is the same regardless of the stack you are using, so by all means, do not feel compelled to jump on the TypeScript train just because I am riding it; and if you prefer Mocha to Jest, then do your unit tests with Mocha.

Installing the dependencies

First off, we are going to install Husky, which is the package that lets us do whatever checks we see fit before the commit is made. At the root of your project, run:

npm i husky --save-dev

However, as previously discussed, we only want to run the checks on files that have been staged for commit, and for this to be possible, we need to install another package, namely lint-staged:

npm i lint-staged --save-dev

Last, but not least, we are going to install commitlint, which will let us enforce a particular format for our commit messages. I have opted for one of their pre-packaged formats, namely the conventional one, since I think it encourages commit messages that are simple yet to the point. You can read more about it here.
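To give a feel for the format, conventional commit messages take a type(scope): subject shape, along these lines:

```
feat(cart): add coupon code field
fix: prevent crash when config file is missing
docs: update setup instructions in README
```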

npm install @commitlint/{config-conventional,cli} --save-dev

## If you are on a device that is running Windows
npm install @commitlint/config-conventional @commitlint/cli --save-dev

After the commitlint packages have been installed, you need to create a config that tells commitlint to use the conventional format. You can do this from your terminal using the command below:

echo "module.exports = {extends: ['@commitlint/config-conventional']}" > commitlint.config.js

Great! Now we can move on to the fun part, which is to say implementing our checks!

Implementing our pre-commit hooks

Below is an overview of the scripts that I have in the package.json of my boilerplate project. We are going to run two of these scripts out of the box before a commit is made, namely the lint and prettier scripts. You are probably asking yourself why we will not run the test script as well, since we are going to implement a check that makes sure any related unit tests pass. The answer is that you have to be a little bit more specific with Jest if you do not want all unit tests to run when a commit is made.

"scripts": {
  "start": "webpack-dev-server --config ./webpack.dev.js --mode development",
  "build": "webpack --config ./webpack.prod.js --mode production",
  "test": "jest",
  "lint": "tsc --noEmit",
  "prettier": "prettier --single-quote --print-width 80 \"./**/*.{js,ts}\" --write"
}

As you can tell from the code we added to the package.json file below, creating the pre-commit hooks for the lint and prettier scripts does not get more complicated than telling Husky that before a commit is made, lint-staged needs to be run. Then you tell lint-staged to run the lint and prettier scripts on all staged JavaScript and TypeScript files, and that is it!

"scripts": {
  "start": "webpack-dev-server --config ./webpack.dev.js --mode development",
  "build": "webpack --config ./webpack.prod.js --mode production",
  "test": "jest",
  "lint": "tsc --noEmit",
  "prettier": "prettier --single-quote --print-width 80 \"./**/*.{js,ts}\" --write"
},
"husky": {
  "hooks": {
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "./**/*.{ts}": [
    "npm run lint",
    "npm run prettier"
  ]
}

At this point, if you set out to anger the TypeScript compiler by passing a string to a function that expects a number and then try to commit this code, our lint check will stop your commit in its tracks and tell you about the error and where to find it. This way, you can correct the error of your ways, and while I think that, in itself, is pretty powerful, we are not done yet!

By adding "jest --bail --coverage --findRelatedTests" to our configuration for lint-staged, we also make sure that the commit will not be made if any related unit tests do not pass. Coupled with the lint check, this is the code equivalent of wearing two safety harnesses while fixing broken tiles on your roof.

What about making sure that all commit messages adhere to the commitlint conventional format? Commit messages are not files, so we can not handle them with lint-staged, since lint-staged only works its magic on files staged for commit. Instead, we have to return to our configuration for Husky and add another hook, in which case our package.json will look like so:

"scripts": {
  "start": "webpack-dev-server --config ./webpack.dev.js --mode development",
  "build": "webpack --config ./webpack.prod.js --mode production",
  "test": "jest",
  "lint": "tsc --noEmit",
  "prettier": "prettier --single-quote --print-width 80 \"./**/*.{js,ts}\" --write"
},
"husky": {
  "hooks": {
    "commit-msg": "commitlint -E HUSKY_GIT_PARAMS", // Our new hook!
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "./**/*.{ts}": [
    "npm run lint",
    "jest --bail --coverage --findRelatedTests",
    "npm run prettier"
  ]
}

If your commit message does not follow the commitlint conventional format, you will not be able to make your commit: so long, poorly formatted and obscure commit messages!

If you get your house in order and write some code that passes both the linting and unit test checks, and your commit message is properly formatted, lint-staged will run the Prettier script on the files staged for commit before the commit is made, which feels like the icing on the cake. At this point, I think we can feel pretty good about ourselves; a bit smug even.

Implementing pre-commit hooks is not more difficult than that, but the gains of doing so are tremendous. While I am always skeptical of adding yet another step to my workflow, using pre-commit hooks has saved me a world of bother, and I would never go back to making my commits in the dark, if I am allowed to end this tutorial on a somewhat pseudo-poetical note.

The post How I Learned to Stop Worrying and Love Git Hooks appeared first on CSS-Tricks.

Preloading Pages Just Before They are Needed

Css Tricks - Fri, 09/27/2019 - 11:08am

The typical journey for a person browsing a website: view a page, click a link, browser loads new page. That's assuming no funny business like a Single Page App, which still follows that journey, but the browser doesn't load a new page — the client fakes it for the sake of a snappier transition.

What if you could load that new page before the person clicks the link so that, when they do, the loading of that next page is much faster? There are two notable projects that try to help with that:

  • quicklink: detects visible links, waits for the browser to be idle and, if the user isn't on a slow connection, prefetches those links.
  • instant.page: if you hover over a link for 65ms, it preloads that link. The new Version 2 allows you to configure the time delay, or whether to wait for a click or press before preloading.
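As a sketch of how little wiring quicklink needs (the CDN URL is an assumption; check the project's README for current install instructions):

```html
<script src="https://unpkg.com/quicklink/dist/quicklink.umd.js"></script>
<script>
  // Once the page has loaded, start observing in-viewport links
  // and prefetch them when the browser is idle.
  window.addEventListener("load", () => {
    quicklink.listen();
  });
</script>
```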

Combine those things with technological improvements like paint holding, and building a SPA architecture just for the speed benefits may become unnecessary (though it may still be desirable for other reasons, like code-splitting, putting the onus of routing onto front-end developers, etc.).

The post Preloading Pages Just Before They are Needed appeared first on CSS-Tricks.

A Codebase and a Community

Css Tricks - Fri, 09/27/2019 - 4:09am

I woke up one morning and realized that I had it all wrong. I discovered that code and design are unable to solve every problem on a design systems team, even if many problems can be solved by coding and designing in a dark room all day. Wait, huh? How on earth does that make any sense? Well, that’s because good design systems work is about hiring, too.

Let me explain.

First, let’s take a look at some common design systems issues. One might be that your components are a thick div soup which is causing issues for users and is all-round bad for accessibility. Another issue might be that you have a large number of custom components that are fragile and extremely difficult to use. Or maybe you have eight different illustration styles and four different modal components. Maybe you have a thousand different color values that are used inconsistently.

Everyone in an organization can typically feel these problems but they’re really hard to describe. Folks can see that it takes way longer to build things than it should and miscommunication is rampant. Our web app might have bad web performance, major accessibility issues and wildly inconsistent design. But why is this? What’s the root cause of all these dang problems?

The strange thing about design systems is it’s difficult to see what the root cause of all these inconsistencies and issues might be. And even the answer isn’t always entirely obvious once you see the problem.

A design systems team can write component documentation to fix these issues, or perhaps refactor things, audit patterns, refactor even more things, redesign components, and provide training and guidance. But when apps get to a certain size then one person (or even a whole team of people) tackling these problems isn’t enough to solve them.

Sure a design systems team can spend a whole bunch of time helping fix an issue but is that really the best use of their time? What if they convinced another team in the company to instead hire a dedicated front-end engineer to build a sustainable working environment? What if they hired an illustrator to make things consistent and ensure high quality across the entire app?

This is why design systems work is also about hiring.

A design systems team is in the perfect place to provide guidance around hiring because they’ll be the first to spot issues across an organization. They’ll see how components are being hacked together or where there are too many unnecessary custom components that are not part of a library or style guide. The design systems team will see weaknesses in the codebase that no one else can see and they can show which teams are missing which particular skill sets — and fix that issue by hiring folks with skills in those specific areas.

If you're in management and don’t see all those inconsistencies every day, then it’s likely nothing will get done about it. We're unlikely to fix the issues we cannot see.

So as design systems folks, we ultimately need to care about hiring because of this: a codebase is a neighborhood and a community.

And the only way we can fix the codebase is by fixing the community.

The post A Codebase and a Community appeared first on CSS-Tricks.

What happens when you open a new install of browsers for the 1st time?

Css Tricks - Fri, 09/27/2019 - 4:09am

Interesting research from Jonathan Sampson, where he watches the network requests a browser makes the very first time you launch it on a fresh install, and otherwise do nothing. This gives you a little insight into what kind of information that browser wants to collect and disseminate.

This was all shared as tweets, but I'm linking to an unrolled thread if there's one available:

Looks like Brave is the cleanest and the most questionable is... Opera?

Direct Link to ArticlePermalink

The post What happens when you open a new install of browsers for the 1st time? appeared first on CSS-Tricks.

Weekly Platform News: Layout Shifts, Stalled High-Bitrate Videos, Screenshots in Firefox

Css Tricks - Thu, 09/26/2019 - 9:33am

In this week's roundup: fighting shifty layouts, some videos might be a bit stalled, and a new way to take screenshots in Firefox.

Let's get into the news!

Identifying the causes of layout shifts during page load

You can now use WebPageTest to capture any layout shifts that occur on your website during page load, and identify what caused them.

Step 1: Paste a snippet

Paste the following snippet into the “Custom Metrics” field in the Custom tab (under Advanced Settings) on webpagetest.org and make sure that a Chrome browser is selected.

[LayoutShifts]
return new Promise(resolve => {
  new PerformanceObserver(list => {
    resolve(JSON.stringify(list.getEntries().filter(entry => !entry.hadRecentInput)));
  }).observe({type: "layout-shift", buffered: true});
});

Step 2: Inspect entries

After completing the test, inspect the captured LayoutShifts entries on the Custom Metrics page, which is linked from the Details section.

Step 3: Check the filmstrip

Based on the "startTime" and "value" numbers in the data, use WebPageTest’s filmstrip view to pinpoint the individual layout shifts and identify their causes.

(via Rick Viscomi)

A high bitrate can cause your videos to stall

If you serve videos for your website from your own web server, keep an eye on the video bitrate (the author suggests FFmpeg and streamclarity.com). If your video has a bitrate of over 1.5 Mbps, playback may stall one or more times for people on 3G connections, depending on the video’s length.
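The arithmetic behind that guideline is simple: average bitrate is just total bits divided by duration. A quick sketch with made-up numbers:

```javascript
// Average bitrate in kilobits per second, from file size and duration.
function avgBitrateKbps(fileSizeBytes, durationSeconds) {
  return (fileSizeBytes * 8) / durationSeconds / 1000;
}

// A hypothetical 30 MB, 120-second video comes out above
// the ~1.5 Mbps (1500 kbps) 3G guideline:
console.log(avgBitrateKbps(30 * 1024 * 1024, 120));
```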

50% of videos in this study have a bitrate that is greater than the downlink speed of a 3G connection — meaning that video playback will be delayed and contain stalls.

(via Doug Sillars)

Firefox’s :screenshot command

Firefox’s DevTools console includes a powerful command for capturing screenshots of the current web page. Like in Chrome DevTools, you can capture a screenshot of an individual element, the current viewport, or the full page, but Firefox’s :screenshot command also provides advanced options for adjusting the device pixel ratio and setting a delay.

// capture a full-page screenshot at a device pixel ratio of 2
:screenshot --fullpage --dpr 2

// capture a screenshot of the viewport with a 5-second delay
:screenshot --delay 5

(via Reddit)

Read even more news in my weekly Sunday issue, which can be delivered to you via email every Monday morning.

Web Platform News: Sunday issue

The post Weekly Platform News: Layout Shifts, Stalled High-Bitrate Videos, Screenshots in Firefox appeared first on CSS-Tricks.

Meeting GraphQL at a Cocktail Mixer

Css Tricks - Thu, 09/26/2019 - 4:15am

GraphQL and REST are two specifications used when building APIs for websites to use. REST defines a series of unique identifiers (URLs) that applications use to request and send data. GraphQL defines a query language that allows client applications to specify precisely the data they need from a single endpoint. They are related technologies and used for largely the same things (in fact, they can and often do co-exist), but they are also very different.

That’s a little dry, eh? Let’s explain it a much more entertaining way that might help you understand better, and maybe just get you a little excited about GraphQL!

🍹 You’re at a cocktail mixer

You’re attending this mixer to help build your professional network, so naturally, you want to collect some data about the people around you. Close by, there are four other attendees.

Their name tags read:

  • Richy REST
  • Friend of Richy REST
  • Employer of Richy REST
  • Georgia GraphQL

Being the dynamic, social, outgoing animal that you are, you walk right up to Richy REST and say, "Hi, I’m Adam Application, who are you?" Richy REST responds:

{
  name: "Richy REST",
  age: 33,
  married: false,
  hometown: "Circuits-ville",
  employed: true
  // ... 20 other things about Richy REST
}

"Whoa, that was a lot to take in," you think to yourself. In an attempt to avoid any awkward silence, you remember Richy REST specifying that he was employed and ask, "Where do you work?"

Strangely, Richy REST has no idea where he works. Maybe the Employer of Richy REST does?

You ask the same question to the Employer of Richy REST, who is delighted to answer your inquiry! He responds like so:

{
  company: "Mega Corp",
  employee_count: 11230,
  head_quarters: "1 Main Avenue, Big City, 10001, PL",
  year_founded: 2005,
  revenue: 100000000,
  // ... 20 other things about Richy REST Employer
}

At this point, you’re exhausted. You don’t even want to meet the Friend of Richy REST! That might take forever, use up all your energy, and you don’t have the time.

However, Georgia GraphQL has been standing there politely, so you decide to engage her.

"Hi, what’s your name?"

{ name: "Georgia GraphQL" }

"Where are you from, and how old are you?"

{ hometown: "Pleasant-Ville", age: 28 }

"How many hobbies and friends do you have and what are your friends' names?"

{
  hobbies_count: 12,
  friends_count: 50,
  friends: [
    { name: "Steve" },
    { name: "Anjalla" },
    // ...etc
  ]
}

Georgia GraphQL is incredible, articulate, concise, and to the point. You 100% want to swap business cards and work together on future projects with Georgia.

This anecdote encapsulates the developer's experience of working with GraphQL over REST. GraphQL allows developers to articulate what they want with a tidy query and, in response, only receive what they specified. Those queries are fully dynamic, so only a single endpoint is required. REST, on the other hand, has predefined responses and often requires applications to utilize multiple endpoints to satisfy a full data requirement.

Metaphor over! Let’s talk brass tacks.

To expand further upon essential concepts presented in the cocktail mixer metaphor, let’s specifically address two limitations that often surface when using REST.

1. Several trips when fetching related resources

Data-driven mobile and web applications often require related resources and data sets. Thus, retrieving data using a REST API can entail multiple requests to numerous endpoints. For instance, requesting a Post entity and related Author might be completed by two requests to different endpoints:

someServer.com/authors/:id
someServer.com/posts/:id

Multiple trips to an API impacts the performance and readiness of an application. It is also a more significant issue for low bandwidth devices (e.g. smart-watches, IoT, older mobile devices, and others).

2. Over-fetching and under-fetching

Over/under-fetching is inevitable with RESTful APIs. Using the example above, the endpoint someServer.com/posts/:id fetches data for a specific Post. Every Post is made up of attributes, such as id, body, title, publishingDate, authorId, and others. In REST, the same data object always gets returned; the response is predefined.

In situations where only a Post title and body are needed, over-fetching happens — as more data gets sent over the network than data that is actually utilized. When an entire Post is required, along with related data about its Author, under-fetching gets experienced — as less data gets sent over the network than is actually utilized. Under-fetching leads to bandwidth overuse from multiple requests to the API.

Client-side querying with GraphQL

GraphQL introduces a genuinely unique approach that provides a tremendous amount of flexibility to client-apps. With GraphQL, a query gets sent to your API and precisely what you need is returned — nothing more and nothing less — in a single request. Query results are returned in the same shape as your query, ensuring that response structures are always predictable. These factors allow apps to run faster and be more stable because they are in control of the data they get, and not the server.

"Results are returned in the same shape as queries."

/* Query */
{
  myFriends(first: 2) {
    items {
      name
      age
    }
  }
}

/* Response */
{
  "data": {
    "myFriends": {
      "items": [
        { "name": "Steve", "age": 27 },
        { "name": "Kelly", "age": 31 }
      ]
    }
  }
}

Reality Check ✅

Now, at this point, you probably think GraphQL is as easy as slicing warm butter with a samurai sword. That might be the reality for front-end developers — specifically developers consuming GraphQL APIs. However, when it comes to server-side setup, someone has to make the sausage. Our friend Georgia GraphQL put in some hard work to become the fantastic professional she is!

Building a GraphQL API, server-side, is something that takes time, energy, and expertise. That said, it's nothing that someone ready for a challenge can't handle! There are many different ways to hop in at all levels of abstraction. For example:

  • Un-assisted: If you really want to get your hands dirty, try building a GraphQL API with the help of packages. For instance, like Rails? Check out graphql-ruby. Love Node.js? Try express-graphql.
  • Assisted: If fully maintaining your server/self-hosting is a priority, something like Graph.cool can help you get going on a GraphQL project.
  • Instant: Tired of writing CRUD boilerplate and want to get going fast? 8base provides an instant GraphQL API and serverless backend that's fully extensible.
Wrapping up

REST was a significant leap forward for web services in enabling highly available resource-specific APIs. That said, its design didn't account for today's proliferation of connected devices, all with differing data constraints and requirements. This oversight has quickly led to the spread of GraphQL — open-sourced by Facebook in 2015 — for the tremendous flexibility that it gives front-end developers. Working with GraphQL is a fantastic developer experience, both for individual developers and for teams.

The post Meeting GraphQL at a Cocktail Mixer appeared first on CSS-Tricks.

Paperform

Css Tricks - Thu, 09/26/2019 - 4:15am

Buy or build is a classic debate in technology. Building things yourself might feel less expensive because there is no line item on your credit card bill, but it has a cost in the form of time. Buying things, believe it or not, is usually less expensive when it comes to technology that isn't your core focus. Build your core technology, buy everything else.

That's what I think of with a tool like Paperform. A powerful form builder like Paperform costs me a few bucks, but saves me countless hours in building something that might become extremely complex and a massive distraction from my more important goals.

Paperform is a form builder, but disguised within a page builder

Imagine you're building a registration form for a conference. (That's a perfect fit for Paperform, by the way, as Paperform has payment features that can even handle the money part.) The page that explains the conference and the registration form for the conference can be, and maybe should be, the same thing.

The Paperform designer makes it quite intuitive to build out a page of content.

It is equally intuitive to sprinkle questions into the page as you write and design it, making it an interactive form.

Block editing

As a little aside, I dig how the editor is block-based. That's a trend I can get behind lately, with some of my favorite apps like Notion really embracing it and huge projects like Gutenberg in WordPress.

This feels lighter than both of those, a little bit more like the Dropbox Paper editor. Building a form is just like editing a document is a principle they have and I think they are right on. It really is just like working in a document with perhaps a bit more configuration possibilities when you get into the details.

You've got a lot of power at the question level

With a form builder, you really want that power. You don't want to get knee-deep into building a form only to find out you can't do the thing you need to with the functionality and flow of the form. Here's a bunch of yes's:

  • Can you make a field be required? Yes.
  • Can you apply conditional logic to hide/show fields? Yes.
  • Can you set default and placeholder text? Yes.
  • Can you set minimum and maximums? Yes.
  • Can you control the layout? Yes.
  • Can you control the design? Yes.
  • Can you programmatically prefill fields? Yes.
  • Can you have hidden fields? Yes.
  • Can you have complex fields like address and signatures? Yes.

Features like logic on questions, I happen to know, are particularly tricky to nail the UX on, and Paperform does a great job with it. You control the logic from options built naturally into the form builder where you would expect to find them.

Based on a "yes" answer to a yes/no question, I reveal an extra address field. Even the address field has powerful features, like limiting the countries you use.

Theming is very natural

Controlling color and typography are right up front and very obvious. Color pickers for all the major typographic elements, and the entire kitchen sink of Google Fonts for you to pick from.

I really like that at no point does it feel like you are leaving "the place where you're building the form". You can easily flip around between building the content and design and theme and logic and all that while it's saving your work as you go.

All the most common stuff has UI controls for you, and other big features are easy to find, like uploading background images and controlling buttons. Then if you really need to exert 100% control, their highest plan allows you to inject your own CSS into the form.

"After Submission"

What an obvious thing to call it! I love that. This is where you configure everything that happens with the form data after you've captured it. But instead of calling it something dry and obtuse like "Form Data Configuration Options" or something, it's named after what you are thinking: "What happens after the form is submitted? That's what I'm trying to control."

There are three things that I tend to think about after form submission:

  1. Where is the confirmation email going to go?
  2. What does the success message say to the user?
  3. What integrations can I use?

A nice touch? By default, the form is set up to email you all submissions. Do nothing, and you get that. That's probably what you want 90% of the time anyway. But if you want to get in there and manipulate that email with custom destinations, subject lines, and even entirely reformatted content, you've got it.

In the same fashion, you can create custom PDFs from the submitted data, which is a pretty unique feature and I imagine quite important for some folks.

Customizing that success message, which I find to be appropriate pretty much 100% of the time, is just a matter of changing two fields.

Integrations-wise, for me, the big ones I find myself using are:

  1. Make a Trello card from this.
  2. Put the person on a MailChimp list.
  3. Send a Slack notification.

They've got all those, plus webhooks (hit this arbitrary URL with the data after submission) and Zapier, which covers just about every use case under the sun.

Of course, when you're done building your form, you get a URL you can send people to. For me, the vast majority of the time, what I want to do is embed the form right onto another site, and they have all the options you could want for that as well.

New feature: Calculations

Just a few days ago, they added the ability to essentially do math with form data. There are some fairly straightforward use-cases, like calculating the price of a product based on questions on the form. But the feature was built to be rather open-ended in what you can do with it. Imagine using a calculation as part of a Buzzfeed style quiz that tells you what kind of dragon you are, or disabling submission of the form until a calculation can be made that is satisfactory, or the notification from the form goes to a different person based on the calculation.

I can't cover everything

Paperform is too big for one blog post. I barely mentioned payments, which is a massive feature they handle very well. I'd just end by saying it's a very impressive product and if you're picking a form builder, picking one as feature-rich as Paperform isn't likely to be one you'll regret.

Go Try Paperform

The post Paperform appeared first on CSS-Tricks.

A Dark Mode Toggle with React and ThemeProvider

Css Tricks - Wed, 09/25/2019 - 4:10am

I like when websites have a dark mode option. Dark mode makes web pages easier for me to read and helps my eyes feel more relaxed. Many websites, including YouTube and Twitter, have implemented it already, and we’re starting to see it trickle onto many other sites as well.

In this tutorial, we’re going to build a toggle that allows users to switch between light and dark modes, using the ThemeProvider wrapper from the styled-components library. We’ll create a useDarkMode custom hook, which supports the prefers-color-scheme media query to set the mode according to the user’s OS color scheme settings.

If that sounds hard, I promise it’s not! Let’s dig in and make it happen.

See the Pen
Day/night mode switch toggle with React and ThemeProvider
by Maks Akymenko (@maximakymenko)
on CodePen.

Let’s set things up

We’ll use create-react-app to initiate a new project:

npx create-react-app my-app
cd my-app
yarn start

Next, open a separate terminal window and install styled-components:

yarn add styled-components

Next thing to do is create two files. The first is global.js, which will contain our base styling, and the second is theme.js, which will include variables for our dark and light themes:

// theme.js
export const lightTheme = {
  body: '#E2E2E2',
  text: '#363537',
  toggleBorder: '#FFF',
  gradient: 'linear-gradient(#39598A, #79D7ED)',
}

export const darkTheme = {
  body: '#363537',
  text: '#FAFAFA',
  toggleBorder: '#6B8096',
  gradient: 'linear-gradient(#091236, #1E215D)',
}

Feel free to customize variables any way you want, because this code is used just for demonstration purposes.

// global.js
// Source: https://github.com/maximakymenko/react-day-night-toggle-app/blob/master/src/global.js#L23-L41
import { createGlobalStyle } from 'styled-components';

export const GlobalStyles = createGlobalStyle`
  *,
  *::after,
  *::before {
    box-sizing: border-box;
  }

  body {
    align-items: center;
    background: ${({ theme }) => theme.body};
    color: ${({ theme }) => theme.text};
    display: flex;
    flex-direction: column;
    justify-content: center;
    height: 100vh;
    margin: 0;
    padding: 0;
    font-family: BlinkMacSystemFont, -apple-system, 'Segoe UI', Roboto, Helvetica, Arial, sans-serif;
    transition: all 0.25s linear;
  }
`;

Go to the App.js file. We’re going to delete everything in there and add the layout for our app. Here’s what I did:

import React from 'react';
import { ThemeProvider } from 'styled-components';
import { lightTheme, darkTheme } from './theme';
import { GlobalStyles } from './global';

function App() {
  return (
    <ThemeProvider theme={lightTheme}>
      <>
        <GlobalStyles />
        <button>Toggle theme</button>
        <h1>It's a light theme!</h1>
        <footer></footer>
      </>
    </ThemeProvider>
  );
}

export default App;

This imports our light and dark themes. The ThemeProvider component also gets imported and is passed the light theme (lightTheme) styles inside. We also import GlobalStyles to tighten everything up in one place.

Here’s roughly what we have so far:

Now, the toggling functionality

There is no magic switching between themes yet, so let’s implement toggling functionality. We are only going to need a couple lines of code to make it work.

First, import the useState hook from react:

// App.js
import React, { useState } from 'react';

Next, use the hook to create a local state which will keep track of the current theme and add a function to switch between themes on click:

// App.js
const [theme, setTheme] = useState('light');

// The function that toggles between themes
const toggleTheme = () => {
  // if the theme is light, then set it to dark
  if (theme === 'light') {
    setTheme('dark');
  // otherwise, it should be light
  } else {
    setTheme('light');
  }
}

After that, all that’s left is to pass this function to our button element and conditionally change the theme. Take a look:

// App.js
import React, { useState } from 'react';
import { ThemeProvider } from 'styled-components';
import { lightTheme, darkTheme } from './theme';
import { GlobalStyles } from './global';

function App() {
  const [theme, setTheme] = useState('light');

  // The function that toggles between themes
  const toggleTheme = () => {
    if (theme === 'light') {
      setTheme('dark');
    } else {
      setTheme('light');
    }
  }

  // Return the layout based on the current theme
  return (
    <ThemeProvider theme={theme === 'light' ? lightTheme : darkTheme}>
      <>
        <GlobalStyles />
        {/* Pass the toggle functionality to the button */}
        <button onClick={toggleTheme}>Toggle theme</button>
        <h1>It's a light theme!</h1>
        <footer></footer>
      </>
    </ThemeProvider>
  );
}

export default App;

How does it work?

// global.js
background: ${({ theme }) => theme.body};
color: ${({ theme }) => theme.text};
transition: all 0.25s linear;

Earlier in our GlobalStyles, we assigned background and color properties to values from the theme object, so now, every time we switch the toggle, values change depending on the darkTheme and lightTheme objects that we are passing to ThemeProvider. The transition property allows us to make this change a little more smoothly than working with keyframe animations.

Now we need the toggle component

We’re generally done here because you now know how to create toggling functionality. However, we can always do better, so let’s improve the app by creating a custom Toggle component and make our switch functionality reusable. That’s one of the key benefits to making this in React, right?

We’ll keep everything inside one file for simplicity’s sake, so let’s create a new one called Toggle.js and add the following:

// Toggle.js
import React from 'react'
import { func, string } from 'prop-types';
import styled from 'styled-components';
// Import a couple of SVG files we'll use in the design: https://www.flaticon.com
import { ReactComponent as MoonIcon } from 'icons/moon.svg';
import { ReactComponent as SunIcon } from 'icons/sun.svg';

const Toggle = ({ theme, toggleTheme }) => {
  const isLight = theme === 'light';
  return (
    <button onClick={toggleTheme}>
      <SunIcon />
      <MoonIcon />
    </button>
  );
};

Toggle.propTypes = {
  theme: string.isRequired,
  toggleTheme: func.isRequired,
}

export default Toggle;

You can download the icons from here and here. Also, if we want to use the icons as components, remember to import them as React components.

We passed two props inside: the theme will provide the current theme (light or dark) and toggleTheme function will be used to switch between them. Below we created an isLight variable, which will return a boolean value depending on our current theme. We’ll pass it later to our styled component.

We’ve also imported a styled function from styled-components, so let’s use it. Feel free to add this on top your file after the imports or create a dedicated file for that (e.g. Toggle.styled.js) like I have below. Again, this is purely for presentation purposes, so you can style your component as you see fit.

// Toggle.styled.js
const ToggleContainer = styled.button`
  background: ${({ theme }) => theme.gradient};
  border: 2px solid ${({ theme }) => theme.toggleBorder};
  border-radius: 30px;
  cursor: pointer;
  display: flex;
  font-size: 0.5rem;
  justify-content: space-between;
  margin: 0 auto;
  overflow: hidden;
  padding: 0.5rem;
  position: relative;
  width: 8rem;
  height: 4rem;

  svg {
    height: auto;
    width: 2.5rem;
    transition: all 0.3s linear;

    // sun icon
    &:first-child {
      transform: ${({ lightTheme }) => lightTheme ? 'translateY(0)' : 'translateY(100px)'};
    }

    // moon icon
    &:nth-child(2) {
      transform: ${({ lightTheme }) => lightTheme ? 'translateY(-100px)' : 'translateY(0)'};
    }
  }
`;

Importing icons as components allows us to directly change the styles of the SVG icons. We’re checking if the lightTheme is an active one, and if so, we move the appropriate icon out of the visible area — sort of like the moon going away when it’s daytime and vice versa.

Don’t forget to replace the button with the ToggleContainer component in Toggle.js, regardless of whether you’re styling in separate file or directly in Toggle.js. Be sure to pass the isLight variable to it to specify the current theme. I called the prop lightTheme so it would clearly reflect its purpose.

The last thing to do is import our component inside App.js and pass the required props to it. Also, to add a bit more interactivity, I’ve passed a condition to toggle between "light" and "dark" in the heading when the theme changes:

// App.js
<Toggle theme={theme} toggleTheme={toggleTheme} />
<h1>It's a {theme === 'light' ? 'light theme' : 'dark theme'}!</h1>

Don’t forget to credit the flaticon.com authors for providing the icons.

// App.js
<span>Credits:</span>
<small><b>Sun</b> icon made by <a href="https://www.flaticon.com/authors/smalllikeart">smalllikeart</a> from <a href="https://www.flaticon.com">www.flaticon.com</a></small>
<small><b>Moon</b> icon made by <a href="https://www.freepik.com/home">Freepik</a> from <a href="https://www.flaticon.com">www.flaticon.com</a></small>

Now that’s better:

The useDarkMode hook

While building an application, we should keep in mind that the app must be scalable, meaning reusable, so we can use it in many places, or even in different projects.

That is why it would be great if we moved our toggle functionality to a separate place — so, why not create a dedicated custom hook for that?

Let’s create a new file called useDarkMode.js in the project src directory and move our logic into this file with some tweaks:

// useDarkMode.js
import { useEffect, useState } from 'react';

export const useDarkMode = () => {
  const [theme, setTheme] = useState('light');

  const toggleTheme = () => {
    if (theme === 'light') {
      window.localStorage.setItem('theme', 'dark')
      setTheme('dark')
    } else {
      window.localStorage.setItem('theme', 'light')
      setTheme('light')
    }
  };

  useEffect(() => {
    const localTheme = window.localStorage.getItem('theme');
    localTheme && setTheme(localTheme);
  }, []);

  return [theme, toggleTheme]
};

We’ve added a couple of things here. We want our theme to persist between sessions in the browser, so if someone has chosen a dark theme, that’s what they’ll get on the next visit to the app. That’s a huge UX improvement. For this reason, we use localStorage.

We’ve also implemented the useEffect hook to check localStorage on component mounting. If the user has previously selected a theme, we will pass it to our setTheme function. In the end, we return the theme value, which holds the chosen theme, and the toggleTheme function to switch between modes.

Now, let’s implement the useDarkMode hook. Go into App.js, import the newly created hook, destructure our theme and toggleTheme properties from the hook, and, put them where they belong:

// App.js
import React from 'react';
import { ThemeProvider } from 'styled-components';
import { useDarkMode } from './useDarkMode';
import { lightTheme, darkTheme } from './theme';
import { GlobalStyles } from './global';
import Toggle from './components/Toggle';

function App() {
  const [theme, toggleTheme] = useDarkMode();
  const themeMode = theme === 'light' ? lightTheme : darkTheme;

  return (
    <ThemeProvider theme={themeMode}>
      <>
        <GlobalStyles />
        <Toggle theme={theme} toggleTheme={toggleTheme} />
        <h1>It's a {theme === 'light' ? 'light theme' : 'dark theme'}!</h1>
        <footer>
          Credits:
          <small>Sun icon made by smalllikeart from www.flaticon.com</small>
          <small>Moon icon made by Freepik from www.flaticon.com</small>
        </footer>
      </>
    </ThemeProvider>
  );
}

export default App;

This works almost perfectly, but there is one small thing we can do to make the experience better. Switch to the dark theme and reload the page. Do you see that the sun icon loads before the moon icon for a brief moment?

That happens because our useState hook initiates the light theme initially. After that, useEffect runs, checks localStorage and only then sets the theme to dark.

So far, I found two solutions. The first is to check if there is a value in localStorage in our useState:

// useDarkMode.js
const [theme, setTheme] = useState(window.localStorage.getItem('theme') || 'light');

However, I am not sure if it’s a good practice to do checks like that inside useState, so let me show you a second solution, that I’m using.

This one will be a bit more complicated. We will create another state and call it componentMounted. Then, inside the useEffect hook, where we check our localTheme, we’ll add an else statement, and if there is no theme in localStorage, we’ll add it. After that, we’ll call setComponentMounted(true). In the end, we add componentMounted to our return statement.

// useDarkMode.js
import { useEffect, useState } from 'react';

export const useDarkMode = () => {
  const [theme, setTheme] = useState('light');
  const [componentMounted, setComponentMounted] = useState(false);

  const toggleTheme = () => {
    if (theme === 'light') {
      window.localStorage.setItem('theme', 'dark');
      setTheme('dark');
    } else {
      window.localStorage.setItem('theme', 'light');
      setTheme('light');
    }
  };

  useEffect(() => {
    const localTheme = window.localStorage.getItem('theme');
    if (localTheme) {
      setTheme(localTheme);
    } else {
      setTheme('light')
      window.localStorage.setItem('theme', 'light')
    }
    setComponentMounted(true);
  }, []);

  return [theme, toggleTheme, componentMounted]
};

You might have noticed that we’ve got some pieces of code that are repeated. We always try to follow the DRY principle while writing code, and right here we’ve got a chance to use it. We can create a separate function that will set our state and pass the theme to localStorage. I believe the best name for it would be setTheme, but we’ve already used that, so let’s call it setMode:

// useDarkMode.js
const setMode = mode => {
  window.localStorage.setItem('theme', mode)
  setTheme(mode)
};

With this function in place, we can refactor our useDarkMode.js a little:

// useDarkMode.js
import { useEffect, useState } from 'react';

export const useDarkMode = () => {
  const [theme, setTheme] = useState('light');
  const [componentMounted, setComponentMounted] = useState(false);

  const setMode = mode => {
    window.localStorage.setItem('theme', mode)
    setTheme(mode)
  };

  const toggleTheme = () => {
    if (theme === 'light') {
      setMode('dark');
    } else {
      setMode('light');
    }
  };

  useEffect(() => {
    const localTheme = window.localStorage.getItem('theme');
    if (localTheme) {
      setTheme(localTheme);
    } else {
      setMode('light');
    }
    setComponentMounted(true);
  }, []);

  return [theme, toggleTheme, componentMounted]
};

We’ve only changed code a little, but it looks so much better and is easier to read and understand!

Did the component mount?

Getting back to the componentMounted property. We will use it to check if our component has mounted, because this is what happens in the useEffect hook.

If it hasn’t happened yet, we will render an empty div:

// App.js
if (!componentMounted) {
  return <div />
};

Here is the complete code for App.js:

// App.js
import React from 'react';
import { ThemeProvider } from 'styled-components';
import { useDarkMode } from './useDarkMode';
import { lightTheme, darkTheme } from './theme';
import { GlobalStyles } from './global';
import Toggle from './components/Toggle';

function App() {
  const [theme, toggleTheme, componentMounted] = useDarkMode();
  const themeMode = theme === 'light' ? lightTheme : darkTheme;

  if (!componentMounted) {
    return <div />
  };

  return (
    <ThemeProvider theme={themeMode}>
      <>
        <GlobalStyles />
        <Toggle theme={theme} toggleTheme={toggleTheme} />
        <h1>It's a {theme === 'light' ? 'light theme' : 'dark theme'}!</h1>
        <footer>
          <span>Credits:</span>
          <small><b>Sun</b> icon made by <a href="https://www.flaticon.com/authors/smalllikeart">smalllikeart</a> from <a href="https://www.flaticon.com">www.flaticon.com</a></small>
          <small><b>Moon</b> icon made by <a href="https://www.freepik.com/home">Freepik</a> from <a href="https://www.flaticon.com">www.flaticon.com</a></small>
        </footer>
      </>
    </ThemeProvider>
  );
}

export default App;

Using the user’s preferred color scheme

This part is not required, but it will let you achieve an even better user experience. This media feature is used to detect if the user has requested the page to use a light or dark color theme based on the settings in their OS. For example, if a user’s default color scheme on a phone or laptop is set to dark, your website will change its color scheme accordingly. It’s worth noting that this media query is still a work in progress and is included in the Media Queries Level 5 specification, which is in Editor’s Draft.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 76, Opera 62, Firefox 67, IE No, Edge 76, Safari 12.1
Mobile / Tablet: iOS Safari 13, Opera Mobile No, Opera Mini No, Android 76, Android Chrome No, Android Firefox 68

The implementation is pretty straightforward. Because we’re working with a media query, we need to check if the browser supports it in the useEffect hook and set appropriate theme. To do that, we’ll use window.matchMedia to check if it exists and whether dark mode is supported. We also need to remember about the localTheme because, if it’s available, we don’t want to overwrite it with the dark value unless, of course, the value is set to light.

If all checks are passed, we will set the dark theme.

// useDarkMode.js
useEffect(() => {
  if (
    window.matchMedia &&
    window.matchMedia('(prefers-color-scheme: dark)').matches &&
    !localTheme
  ) {
    setTheme('dark')
  }
})

As mentioned before, we need to remember about the existence of localTheme — that’s why we need to implement our previous logic where we’ve checked for it.

Here’s what we had from before:

// useDarkMode.js
useEffect(() => {
  const localTheme = window.localStorage.getItem('theme');
  if (localTheme) {
    setTheme(localTheme);
  } else {
    setMode('light');
  }
})

Let’s mix it up. I’ve replaced the if and else statements with ternary operators to make things a little more readable as well:

// useDarkMode.js
useEffect(() => {
  const localTheme = window.localStorage.getItem('theme');
  window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches && !localTheme
    ? setMode('dark')
    : localTheme
      ? setTheme(localTheme)
      : setMode('light');
})

Here’s the useDarkMode.js file with the complete code:

// useDarkMode.js
import { useEffect, useState } from 'react';

export const useDarkMode = () => {
  const [theme, setTheme] = useState('light');
  const [componentMounted, setComponentMounted] = useState(false);

  const setMode = mode => {
    window.localStorage.setItem('theme', mode)
    setTheme(mode)
  };

  const toggleTheme = () => {
    if (theme === 'light') {
      setMode('dark')
    } else {
      setMode('light')
    }
  };

  useEffect(() => {
    const localTheme = window.localStorage.getItem('theme');
    window.matchMedia && window.matchMedia('(prefers-color-scheme: dark)').matches && !localTheme
      ? setMode('dark')
      : localTheme
        ? setTheme(localTheme)
        : setMode('light');
    setComponentMounted(true);
  }, []);

  return [theme, toggleTheme, componentMounted]
};

Give it a try! It changes the mode, persists the theme in localStorage, and also sets the default theme according to the OS color scheme, if it’s available.

Congratulations, my friend! Great job! If you have any questions about implementation, feel free to send me a message!

The post A Dark Mode Toggle with React and ThemeProvider appeared first on CSS-Tricks.

Thinking in React Hooks

Css Tricks - Wed, 09/25/2019 - 4:09am

Amelia Wattenberger has written this wonderful and interactive piece about React Hooks and details how they can clean up code and remove all those troubling lifecycle events:

React introduced hooks one year ago, and they've been a game-changer for a lot of developers. There are tons of how-to introduction resources out there, but I want to talk about the fundamental mindset change when switching from React class components to function components + hooks.

Make sure you check out that lovely animation when you select the code, too. It’s a pretty smart effect to show the old way of doing things versus the new and fancy way. Also make sure to check out our very own Intro to React Hooks once you’re finished to find more examples of this pattern in use.

Direct Link to ArticlePermalink

The post Thinking in React Hooks appeared first on CSS-Tricks.

Filtering Data Client-Side: Comparing CSS, jQuery, and React

Css Tricks - Tue, 09/24/2019 - 9:43am

Say you have a list of 100 names:

<ul>
  <li>Randy Hilpert</li>
  <li>Peggie Jacobi</li>
  <li>Ethelyn Nolan Sr.</li>
  <!-- and then some -->
</ul>

...or file names, or phone numbers, or whatever. And you want to filter them client-side, meaning you aren't making a server-side request to search through data and return results. You just want to type "rand" and have it filter the list to include "Randy Hilpert" and "Danika Randall" because they both have that string of characters in them. Everything else isn't included in the results.

Let's look at how we might do that with different technologies.

CSS can sorta do it, with a little help.

CSS can't select things based on the content they contain, but it can select on attributes and the values of those attributes. So let's move the names into attributes as well.

<ul>
  <li data-name="Randy Hilpert">Randy Hilpert</li>
  <li data-name="Peggie Jacobi">Peggie Jacobi</li>
  <li data-name="Ethelyn Nolan Sr.">Ethelyn Nolan Sr.</li>
  ...
</ul>

Now to filter that list for names that contain "rand", it's very easy:

li {
  display: none;
}
li[data-name*="rand" i] {
  display: list-item;
}

Note the i on Line 4. That means "case insensitive" which is very useful here.

To make this work dynamically with a filter <input>, we'll need to get JavaScript involved to not only react to the filter being typed in, but generate CSS that matches what is being searched.

Say we have a <style> block sitting on the page:

<style id="cssFilter">
  /* dynamically generated CSS will be put in here */
</style>

We can watch for changes on our filter input and generate that CSS:

filterElement.addEventListener("input", e => {
  let filter = e.target.value;
  let css = filter ? `
    li {
      display: none;
    }
    li[data-name*="${filter}" i] {
      display: list-item;
    }
  ` : ``;
  window.cssFilter.innerHTML = css;
});

Note that we're emptying out the style block when the filter is empty, so all results show.

See the Pen
Filtering Technique: CSS
by Chris Coyier (@chriscoyier)
on CodePen.

I'll admit it's a smidge weird to leverage CSS for this, but Tim Carry once took it way further if you're interested in the concept.

jQuery makes it even easier.

Since we need JavaScript anyway, perhaps jQuery is an acceptable tool. There are two notable changes here:

  • jQuery can select items based on the content they contain. It has a selector API just for this. We don't need the extra attribute anymore.
  • This keeps all the filtering to a single technology.

We still watch the input for typing, then if we have a filter term, we hide all the list items and reveal the ones that contain our filter term. Otherwise, we reveal them all again:

const listItems = $("li");

$("#filter").on("input", function() {
  let filter = $(this).val();
  if (filter) {
    listItems.hide();
    $(`li:contains('${filter}')`).show();
  } else {
    listItems.show();
  }
});

It's takes more fiddling to make the filter case-insensitive than CSS does, but we can do it by overriding the default method:

jQuery.expr[':'].contains = function(a, i, m) { return jQuery(a).text().toUpperCase() .indexOf(m[3].toUpperCase()) >= 0; };

See the Pen
Filtering Technique: jQuery
by Chris Coyier (@chriscoyier)
on CodePen.

React can do it with state and rendering only what it needs.

There is no one-true-way to do this in React, but I would think it's React-y to keep the list of names as data (like an Array), map over them, and only render what you need. Changes in the input filter the data itself and React re-renders as necessary.

If we have an names = [array, of, names], we can filter it pretty easily:

filteredNames = names.filter(name => { return name.includes(filter); });

This time, case sensitivity can be done like this:

filteredNames = names.filter(name => { return name.toUpperCase().includes(filter.toUpperCase()); });

Then we'd do the typical .map() thing in JSX to loop over our array and output the names.

See the Pen
Filtering Technique: React
by Chris Coyier (@chriscoyier)
on CodePen.

I don't have any particular preference

This isn't the kind of thing you choose a technology for. You do it in whatever technology you already have. I also don't think any one approach is particularly heavier than the rest in terms of technical debt.

The post Filtering Data Client-Side: Comparing CSS, jQuery, and React appeared first on CSS-Tricks.

Filtering Data Client-Side: Comparing CSS, jQuery, and React

Css Tricks - Tue, 09/24/2019 - 9:43am

Say you have a list of 100 names:

<ul>
  <li>Randy Hilpert</li>
  <li>Peggie Jacobi</li>
  <li>Ethelyn Nolan Sr.</li>
  <!-- and then some -->
</ul>

...or file names, or phone numbers, or whatever. And you want to filter them client-side, meaning you aren't making a server-side request to search through data and return results. You just want to type "rand" and have it filter the list to include "Randy Hilpert" and "Danika Randall" because they both have that string of characters in them. Everything else isn't included in the results.

Let's look at how we might do that with different technologies.

CSS can sorta do it, with a little help.

CSS can't select things based on the content they contain, but it can select on attributes and the values of those attributes. So let's move the names into attributes as well.

<ul>
  <li data-name="Randy Hilpert">Randy Hilpert</li>
  <li data-name="Peggie Jacobi">Peggie Jacobi</li>
  <li data-name="Ethelyn Nolan Sr.">Ethelyn Nolan Sr.</li>
  ...
</ul>

Now to filter that list for names that contain "rand", it's very easy:

li {
  display: none;
}
li[data-name*="rand" i] {
  display: list-item;
}

Note the i on Line 4. That means "case insensitive" which is very useful here.

To make this work dynamically with a filter <input>, we'll need to get JavaScript involved to not only react to the filter being typed in, but generate CSS that matches what is being searched.

Say we have a <style> block sitting on the page:

<style id="cssFilter">
  /* dynamically generated CSS will be put in here */
</style>

We can watch for changes on our filter input and generate that CSS:

filterElement.addEventListener("input", e => {
  let filter = e.target.value;
  let css = filter ? `
    li {
      display: none;
    }
    li[data-name*="${filter}" i] {
      display: list-item;
    }
  ` : ``;
  window.cssFilter.innerHTML = css;
});

Note that we're emptying out the style block when the filter is empty, so all results show.

See the Pen Filtering Technique: CSS by Chris Coyier (@chriscoyier) on CodePen.

I'll admit it's a smidge weird to leverage CSS for this, but Tim Carry once took it way further if you're interested in the concept.

jQuery makes it even easier.

Since we need JavaScript anyway, perhaps jQuery is an acceptable tool. There are two notable changes here:

  • jQuery can select items based on the content they contain. It has a selector API just for this. We don't need the extra attribute anymore.
  • This keeps all the filtering to a single technology.

We still watch the input for typing, then if we have a filter term, we hide all the list items and reveal the ones that contain our filter term. Otherwise, we reveal them all again:

const listItems = $("li");

$("#filter").on("input", function() {
  let filter = $(this).val();
  if (filter) {
    listItems.hide();
    $(`li:contains('${filter}')`).show();
  } else {
    listItems.show();
  }
});

It takes more fiddling to make the filter case-insensitive than it does in CSS, but we can do it by overriding the default :contains method:

jQuery.expr[':'].contains = function(a, i, m) {
  return jQuery(a).text().toUpperCase()
    .indexOf(m[3].toUpperCase()) >= 0;
};

See the Pen Filtering Technique: jQuery by Chris Coyier (@chriscoyier) on CodePen.

React can do it with state and rendering only what it needs.

There is no one-true-way to do this in React, but I would think it's React-y to keep the list of names as data (like an Array), map over them, and only render what you need. Changes in the input filter the data itself and React re-renders as necessary.

If we have a names = [array, of, names], we can filter it pretty easily:

filteredNames = names.filter(name => {
  return name.includes(filter);
});

This time, case insensitivity can be handled like this:

filteredNames = names.filter(name => {
  return name.toUpperCase().includes(filter.toUpperCase());
});

Then we'd do the typical .map() thing in JSX to loop over our array and output the names.
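To make the logic concrete, here's a hedged, plain-JS sketch of the React approach: filter the data, then map it to list items. In real JSX the map would return <li> elements; here we emit markup strings so the logic stands on its own (the names array is illustrative).

```javascript
const names = ["Randy Hilpert", "Peggie Jacobi", "Danika Randall"];

function filterNames(names, filter) {
  // Case-insensitive match, same as the toUpperCase() version above
  return names.filter(name =>
    name.toUpperCase().includes(filter.toUpperCase())
  );
}

function renderList(names, filter) {
  // Stand-in for: filteredNames.map(name => <li key={name}>{name}</li>)
  return filterNames(names, filter)
    .map(name => `<li>${name}</li>`)
    .join("");
}

console.log(renderList(names, "rand"));
// → <li>Randy Hilpert</li><li>Danika Randall</li>
```

In a component, the filter string would live in state and React would re-render the mapped list whenever it changes.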

See the Pen Filtering Technique: React by Chris Coyier (@chriscoyier) on CodePen.

I don't have any particular preference

This isn't the kind of thing you choose a technology for. You do it in whatever technology you already have. I also don't think any one approach is particularly heavier than the rest in terms of technical debt.

The post Filtering Data Client-Side: Comparing CSS, jQuery, and React appeared first on CSS-Tricks.

An Explanation of How the Intersection Observer Watches

Css Tricks - Tue, 09/24/2019 - 4:18am

There have been several excellent articles exploring how to use this API, from authors such as Phil Hawksworth, Preethi, and Mateusz Rybczonek, just to name a few. But I'm aiming to do something a bit different here. I had an opportunity earlier in the year to present the VueJS transition component, which my first article on CSS-Tricks was based on, to the Dallas VueJS Meetup. During the question-and-answer session of that presentation I was asked about triggering the transitions based on scroll events. You can, of course, but a member of the audience suggested I look into the Intersection Observer instead.

This got me thinking. I knew the basics of Intersection Observer and how to make a simple example of using it. Did I know how to explain not only how to use it but how it works? What exactly does it provide to us as developers? As a "senior" dev, how would I explain it to someone right out of a bootcamp who possibly doesn’t even know it exists?

I decided I needed to know. After spending some time researching, testing, and experimenting, I’ve decided to share a good bit of what I’ve learned. Hence, here we are.

A brief explanation of the Intersection Observer

The abstract of the W3C public working draft (first draft dated September 14, 2017) describes the Intersection Observer API as:

This specification describes an API that can be used to understand the visibility and position of DOM elements ("targets") relative to a containing element or to the top-level viewport ("root"). The position is delivered asynchronously and is useful for understanding the visibility of elements and implementing pre-loading and deferred loading of DOM content.

The general idea being a way to watch a child element and be informed when it enters the bounding box of one of its parents. This is most commonly going to be used in relation to the target element scrolling into view in the root element. Up until the introduction of the Intersection Observer, this type of functionality was accomplished by listening for scroll events.

Although the Intersection Observer is a more performant solution for this type of functionality, I do not suggest we necessarily look at it as a replacement to scroll events. Instead, I suggest we look at this API as an additional tool that has a functional overlap with scroll events. In some cases, the two can work together to solve specific problems.

A basic example

I know I risk repeating what's already been explained in other articles, but let’s see a basic example of an Intersection Observer and what it gives us.

The Observer is made up of four parts:

  1. the "root," which is the parent element the observer is tied to, which can be the viewport
  2. the "target," which is a child element being observed and there can be more than one
  3. the options object, which defines certain aspects of the observer’s behavior
  4. the callback function, which is invoked each time an intersection change is observed

The code of a basic example could look something like this:

const options = {
  root: document.body,
  rootMargin: '0px',
  threshold: 0
}

function callback (entries, observer) {
  console.log(observer);

  entries.forEach(entry => {
    console.log(entry);
  });
}

let observer = new IntersectionObserver(callback, options);
observer.observe(targetElement);

The first section in the code is the options object which has root, rootMargin, and threshold properties.

The root is the parent element, often a scrolling element, that contains the observed elements. This can be just about any single element on the page as needed. If the property isn’t provided at all or the value is set to null, the viewport is set to be the root element.

The rootMargin is a string of values describing what can be called the margin of the root element, which affects the resulting bounding box that the target element scrolls into. It behaves much like the CSS margin property. You can have values like 10px 15px 20px which gives us a top margin of 10px, left and right margins of 15px, and a bottom margin of 20px. Only the bounding box is affected and not the element itself. Keep in mind that the only lengths allowed are pixels and percentage values, which can be negative or positive. Also note that the rootMargin does not work if the root element is not an actual element on the page, such as the viewport.
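Since rootMargin follows CSS margin shorthand rules, a small illustrative helper (not part of the API) shows how a shorthand string expands to the four-value form the observer reports back:

```javascript
// Expand a rootMargin shorthand to "top right bottom left",
// mirroring how "0px" is reported as "0px 0px 0px 0px".
function expandRootMargin(margin) {
  const parts = margin.trim().split(/\s+/);
  const [top, right = top, bottom = top, left = right] = parts;
  return [top, right, bottom, left].join(" ");
}

console.log(expandRootMargin("10px 15px 20px"));
// → 10px 15px 20px 15px
```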

The threshold is the value used to determine when an intersection change should be observed. More than one value can be included in an array so that the same target can trigger the intersection multiple times. The different values are a percentage using zero to one, much like opacity in CSS, so a value of 0.5 would be considered 50% and so on. These values relate to the target’s intersection ratio, which will be explained in just a moment. A threshold of zero triggers the intersection when the first pixel of the target element intersects the root element. A threshold of one triggers when the entire target element is inside the root element.
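One way to build an evenly spaced threshold list, if you want the callback to fire at regular visibility steps, is a small generator like this (a sketch; buildThresholds is our own name, not part of the API):

```javascript
// Build thresholds at every 1/steps increment, inclusive of 0 and 1.
function buildThresholds(steps) {
  const thresholds = [];
  for (let i = 0; i <= steps; i++) {
    thresholds.push(i / steps);
  }
  return thresholds;
}

const options = {
  root: null,                    // null falls back to the viewport
  rootMargin: "0px",
  threshold: buildThresholds(4)  // [0, 0.25, 0.5, 0.75, 1]
};
```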

The second section in the code is the callback function that is called whenever an intersection change is observed. Two parameters are passed: the entries are stored in an array and represent each target element that triggers the intersection change. This provides a good bit of information that can be used for the bulk of any functionality that a developer might create. The second parameter is information about the observer itself, which is essentially the data from the provided options object. This provides a way to identify which observer is in play in case a target is tied to multiple observers.

The third section in the code is the creation of the observer itself and where it is observing the target. When creating the observer, the callback function and options object can be external to the observer, as shown. A developer could write the code inline, but the observer is very flexible. For example, the callback and options can be used across multiple observers, if needed. The observe() method is then passed the target element that needs to be observed. It can only accept one target but the method can be repeated on the same observer for multiple targets. Again, very flexible.

Notice the console logs in the code. Here is what those output.

The observer object

Logging the observer data passed into the callback gets us something like this:

IntersectionObserver
  root: null
  rootMargin: "0px 0px 0px 0px"
  thresholds: Array [ 0 ]
  <prototype>: IntersectionObserverPrototype { }

...which is essentially the options object passed into the observer when it was created. This can be used to determine the root element that the intersection is tied to. Notice that even though the original options object had 0px as the rootMargin, this object reports it as 0px 0px 0px 0px, which is what would be expected when considering the rules of CSS margins. Then there’s the array of thresholds the observer is operating under.

The entry object

Logging the entry data passed into the callback gets us something like this:

IntersectionObserverEntry
  boundingClientRect: DOMRect
    bottom: 923.3999938964844, top: 771
    height: 152.39999389648438, width: 411
    left: 9, right: 420
    x: 9, y: 771
    <prototype>: DOMRectPrototype { }
  intersectionRatio: 0
  intersectionRect: DOMRect
    bottom: 0, top: 0
    height: 0, width: 0
    left: 0, right: 0
    x: 0, y: 0
    <prototype>: DOMRectPrototype { }
  isIntersecting: false
  rootBounds: null
  target: <div class="item">
  time: 522
  <prototype>: IntersectionObserverEntryPrototype { }

Yep, lots of things going on here.

For most devs, the two properties that are most likely to be useful are intersectionRatio and isIntersecting. The isIntersecting property is a boolean that is exactly what one might think it is — the target element is intersecting the root element at the time of the intersection change. The intersectionRatio is the percentage of the target element that is currently intersecting the root element. This is represented by a percentage of zero to one, much like the threshold provided in the observer’s option object.

Three properties — boundingClientRect, intersectionRect, and rootBounds — represent specific data about three aspects of the intersection. The boundingClientRect property provides the bounding box of the target element with bottom, left, right, and top values from the top-left of the viewport, just like with Element.getBoundingClientRect(). The target element's height and width are also provided, along with its x and y coordinates. The rootBounds property provides the same form of data for the root element. The intersectionRect provides similar data, but it describes the box formed by the intersection area of the target element inside the root element, which corresponds to the intersectionRatio value. Traditional scroll events would require this math to be done manually.

One thing to keep in mind is that all these shapes that represent the different elements are always rectangles. No matter the actual shape of the elements involved, they are always reduced down to the smallest rectangle containing the element.

The target property refers to the target element that is being observed. In cases where an observer contains multiple targets, this is the easy way to determine which target element triggered this intersection change.

The time property provides the time (in milliseconds) from when the observer is first created to the time this intersection change is triggered. This is how you can track the time it takes for a viewer to come across a particular target. Even if the target is scrolled into view again at a later time, this property will have the new time provided. This can be used to track the time of a target entering and leaving the root element.
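As a sketch of that tracking idea, here's some hypothetical bookkeeping around the entry's time property: record the timestamp when a target starts intersecting and compute how long it was visible once it leaves. Only the isIntersecting, target, and time properties of the entry are used; trackVisibility is our own name.

```javascript
const visibleSince = new Map();

function trackVisibility(entry) {
  if (entry.isIntersecting) {
    // Target just entered the root; remember when
    visibleSince.set(entry.target, entry.time);
    return null;
  }
  // Target just left; report the elapsed milliseconds, if we saw it enter
  const enteredAt = visibleSince.get(entry.target);
  visibleSince.delete(entry.target);
  return enteredAt === undefined ? null : entry.time - enteredAt;
}
```

Inside a callback you'd call trackVisibility(entry) for each entry and log or report any non-null duration.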

While all this information is provided to us whenever an intersection change is observed, it's also provided to us when the observer is first started. For example, on page load the observers on the page will immediately invoke the callback function and provide the current state of every target element it is observing.

This is a wealth of data about the relationships of elements on the page provided in a very performant way.

Intersection Observer methods

Intersection Observer has three methods of note: observe(), unobserve(), and disconnect().

  • observe(): The observe method takes in a DOM reference to a target element to be added to the list of elements to be watched by the observer. An observer can have more than one target element, but this method can only accept one target at a time.
  • unobserve(): The unobserve method takes in a DOM reference to a target element to be removed from the list of elements watched by the observer.
  • disconnect(): The disconnect method causes the observer to stop watching all of its target elements. The observer itself is still active, but has no targets. After disconnect(), target elements can still be passed to the observer with observe().

These methods provide the ability to watch and unwatch target elements, but there's no way to change the options passed to the observer once it is created. You'll have to manually recreate the observer if different options are required.
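Since the options are fixed at creation time, one pattern is a small helper that tears the old observer down and builds a replacement. This is a sketch, not an API feature; recreateObserver is a hypothetical name, and the constructor is injectable only so the sketch isn't tied to the browser environment.

```javascript
function recreateObserver(oldObserver, callback, options, targets, ObserverCtor = IntersectionObserver) {
  if (oldObserver) {
    oldObserver.disconnect(); // stop watching every previous target
  }
  const next = new ObserverCtor(callback, options);
  targets.forEach(target => next.observe(target)); // re-register each target
  return next;
}
```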

Performance: Intersection Observer versus scroll events

In my exploration of the Intersection Observer and how it compares to using scroll events, I knew that I needed to do some performance testing. A totally unscientific effort was thus created using Puppeteer. For the sake of time, I only wanted a general idea of what the performance difference is between the two. Therefore, three simple tests were created.

First, I created a baseline HTML file that included one hundred divs with a bit of height to create a long scrolling page. With a basic http-server active, I loaded the HTML file with Puppeteer, started a trace, forced the page to scroll downward in preset increments to the bottom, stopped the trace once the bottom is reached, and finally saved the results of the trace. I also made it so the test can be repeated multiple times and output data each time. Then I duplicated the baseline HTML and wrote my JavaScript in a script tag for each type of test I wanted to run. Each test has two files: one for the Intersection Observer and the other for scroll events.

The purpose of all the tests is to detect when a target element scrolls upward through the viewport at 25% increments. At each increment, a CSS class is applied that changes the background color of the element. In other words, each element has DOM changes applied to it that would cause repaints. Each test was run five times on two different machines: my development Mac that’s rather up-to-date hardware-wise and my personal Windows 7 machine that’s probably average these days. The results of the trace summary of scripting, rendering, painting, and system were recorded and then averaged. Again, nothing too scientific about all this — just a general idea.

The first test has one observer or one scroll event with one callback each. This is a fairly standard setup for both the observer and scroll event. Although, in this case, the scroll event has a bit more work to do because it attempts to mimic the data that the observer provides by default. Once all those calculations are done, the data is stored in an entry array just like the observer does. Then the functionality for removing and applying classes between the two is exactly the same. I do throttle the scroll event a bit with requestAnimationFrame.
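Roughly the kind of requestAnimationFrame throttle described above looks like this (a sketch, not the actual test harness code). The scheduler is injectable only to keep the sketch environment-agnostic; in the browser it defaults to requestAnimationFrame.

```javascript
function rafThrottle(fn, schedule = requestAnimationFrame) {
  let scheduled = false;
  return function (...args) {
    if (scheduled) return; // at most one invocation per frame
    scheduled = true;
    schedule(() => {
      scheduled = false;
      fn.apply(this, args);
    });
  };
}

// Usage in a scroll handler:
// window.addEventListener("scroll", rafThrottle(onScroll));
```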

The second test has 100 observers or 100 scroll events with one callback for each type. Each element is assigned its own observer and event but the callback function is the same. This is actually inefficient because each observer and event behaves exactly the same, but I wanted a simple stress test without having to create 100 unique observers and events — though I have seen many examples of using the observer this way.

The third test has 100 observers or 100 scroll events with 100 callbacks for each type. This means each element has its own observer, event, and callback function. This, of course, is horribly inefficient since this is all duplicated functionality stored in huge arrays. But this inefficiency is the point of this test.

Intersection Observer versus Scroll Events stress tests

In the charts above, you'll see the first column represents our baseline where no JavaScript was run at all. The next two columns represent the first type of test. The Mac ran both quite well, as I would expect from a top-end development machine. The Windows machine gave us a different story. For me, the main point of interest is the scripting results in red. On the Mac, the difference was around 88ms for the observer versus around 300ms for the scroll event. The overall result on the Mac is fairly close for each, but scripting took a beating with the scroll event. For the Windows machine, it's far, far worse. The observer was around 150ms versus around 1400ms for the first and easiest test of the three.

For the second test, we start to see the inefficiency of the scroll test more clearly. Both the Mac and Windows machines ran the observer test with much the same results as before. For the scroll event test, the scripting gets more bogged down completing the tasks given. The Mac jumped to almost a full second of scripting while the Windows machine jumped to a staggering 3200ms or so.

For the third test, things thankfully did not get worse. The results are roughly the same as the second test. One thing to note is that across all three tests the results for the observer were consistent for both computers. Despite no efforts in efficiency for the observer tests, the Intersection Observer outperformed the scroll events by a strong margin.

So, after my non-scientific testing on my own two machines, I felt I had a decent idea of the differences in performance between scroll events and the Intersection Observer. I’m sure with some effort I could make the scroll events more efficient but is it worth doing? There are cases where the precision of the scroll events is necessary, but in most cases, the Intersection Observer will suffice nicely — especially since it appears to be far more efficient with no effort at all.

Understanding the intersectionRatio property

The intersectionRatio property, given to us by IntersectionObserverEntry, represents the percentage of the target element that is within the boundaries of the root element on an intersection change. I found I didn’t quite understand what this value actually represented at first. For some reason I was thinking it was a straightforward zero to 100 percent representation of the appearance of the target element, which it sort of is. It is tied to the thresholds passed to the observer when it is created. It could be used to determine which threshold was the cause of the intersection change just triggered, as an example. However, the values it provides are not always straightforward.

Take this demo for instance:

See the Pen Intersection Observer: intersectionRatio by Travis Almand (@talmand) on CodePen.

In this demo, the observer has been assigned the parent container as the root element. The child element with the target background has been assigned as the target element. The threshold array has been created with 100 entries with the sequence 0, 0.01, 0.02, 0.03, and so on, until 1. The observer triggers every one percent of the target element appearing or disappearing inside the root element so that, whenever the ratio changes by at least one percent, the output text below the box is updated. In case you’re curious, this threshold was accomplished with this code:

[...Array(100).keys()].map(x => x / 100)

I don’t recommend you set your thresholds in this way for typical use in projects.

At first, the target element is completely contained within the root element and the output above the buttons will show a ratio of one. It should be one on first load but we’ll soon see that the ratio is not always precise; it’s possible the number will be somewhere between 0.99 and 1. That does seem odd, but it can happen, so keep that in mind if you create any checks against the ratio equaling a particular value.

Clicking the "left" button will cause the target element to be transformed to the left so that half of it is in the root element and the other half is out. The intersectionRatio should then change to 0.5, or something close to that. We now know that half of the target element is intersecting the root element, but we have no idea where it is. More on that later.

Clicking the "top" button does much the same. It transforms the target element to the top of the root element with half of it in and half of it out again. And again, the intersectionRatio should be somewhere around 0.5. Even though the target element is in a completely different location than before, the resulting ratio is the same.

Clicking the "corner" button again transforms the target element to the upper-right corner of the root element. At this point only a quarter of the target element is within the root element. The intersectionRatio should reflect this with a value of around 0.25. Clicking "center" will transform the target element back to the center and fully contained within the root element.

If we click the "large" button, that changes the height of the target element to be taller than the root element. The intersectionRatio should be somewhere around 0.8, give or take a few ten-thousandths of a percent. This is the tricky part of relying on intersectionRatio. Creating code based on the thresholds given to the observer makes it possible to have thresholds that will never trigger. In this "large" example, any code based on a threshold of 1 will fail to execute. Also consider situations where the root element can be resized, such as the viewport being rotated from portrait to landscape.

Finding the position

So then, how do we know where the target element is in relation to the root element? Thankfully, the data for this calculation is provided by IntersectionObserverEntry, so we only have to do simple comparisons.

Consider this demo:

See the Pen Intersection Observer: Finding the Position by Travis Almand (@talmand) on CodePen.

The setup for this demo is much the same as the one before. The parent container is the root element and the child inside with the target background is the target element. The threshold is an array of 0, 0.5, and 1. As you scroll inside the root element, the target will appear and its position will be reported in the output above the buttons.

Here's the code that performs these checks:

const output = document.querySelector('#output pre');

function io_callback (entries) {
  const ratio = entries[0].intersectionRatio;
  const boundingRect = entries[0].boundingClientRect;
  const intersectionRect = entries[0].intersectionRect;

  if (ratio === 0) {
    output.innerText = 'outside';
  } else if (ratio < 1) {
    if (boundingRect.top < intersectionRect.top) {
      output.innerText = 'on the top';
    } else {
      output.innerText = 'on the bottom';
    }
  } else {
    output.innerText = 'inside';
  }
}

I should point out that I’m not looping over the entries array as I know there will always only be one entry because there’s only one target. I’m taking a shortcut by making use of entries[0] instead.

You’ll see that a ratio of zero puts the target on the "outside." A ratio of less than one puts it either at the top or bottom. That lets us see if the target’s "top" is less than the intersectionRect’s top, which actually means it’s higher on the page and is seen as "on the top." In fact, checking against the root element’s "top" would work for this as well. Logically, if the target isn’t at the top, then it must be at the bottom. If the ratio happens to equal one, then it is "inside" the root element. Checking the horizontal position is the same except it's done with the left or right property.

This is part of the efficiency of using the Intersection Observer. Developers don’t need to request this information from various places on a throttled scroll event (which fires quite a lot regardless) and then calculate the related math to figure all this out. It’s provided by the observer and all that’s needed is a simple if check.

At first, the target element is taller than the root element, so it is never reported as being "inside." Click the "toggle target size" button to make it smaller than the root. Now, the target element can be inside the root element when scrolling up and down.

Restore the target element to its original size by clicking on "toggle target size" again, then click on the "toggle root size" button. This resizes the root element so that it is taller than the target element. Once again, while scrolling up and down, it is possible for the target element to be "inside" the root element.

This demo demonstrates two things about the Intersection Observer: how to determine the position of the target element in relation to the root element and what happens when resizing the two elements. This reaction to resizing is another advantage over scroll events — no need for code to adjust to a resize event.

Creating a position sticky event

The "sticky" value for the CSS position property can be a useful feature, yet it’s a bit limiting in terms of CSS and JavaScript. The styling of the sticky element can only be of one design, whether in its normal state or within its sticky state. There’s no easy way to know the state for JavaScript to react to these changes. So far, there’s no pseudo-class or JavaScript event that makes us aware of the changing state of the element.

I’ve seen examples of having an event of sorts for sticky positioning using both scroll events and the Intersection Observer. The solutions using scroll events always have issues similar to using scroll events for other purposes. The usual solution with an observer is with a "dummy" element that serves little purpose other than being a target for the observer. I like to avoid using single purpose elements like that, so I decided to tinker with this particular idea.

In this demo, scroll up and down to see the section titles reacting to being "sticky" to their respective sections.

See the Pen Intersection Observer: Position Sticky Event by Travis Almand (@talmand) on CodePen.

This is an example of detecting when a sticky element is at the top of the scrolling container so a class name can be applied to the element. This is accomplished by making use of an interesting quirk of the DOM when giving a specific rootMargin to the observer. The values given are:

rootMargin: '0px 0px -100% 0px'

This pushes the bottom margin of the root’s boundary to the top of the root element, which leaves a small sliver of area available for intersection detection that's zero pixels. A target element touching this zero pixel area triggers the intersection change, even though it doesn’t exist by the numbers, so to speak. Consider that we can have elements in the DOM that exist with a collapsed height of zero.

This solution takes advantage of this by recognizing the sticky element is always in its "sticky" position at the top of the root element. As scrolling continues, the sticky element eventually moves out of view and the intersection stops. Therefore, we add and remove the class based on the isIntersecting property of the entry object.

Here’s the HTML:

<section>
  <div class="sticky-container">
    <div class="sticky-content">
      <span>&sect;</span>
      <h2>Section 1</h2>
    </div>
  </div>
  {{ content here }}
</section>

The outer div with the class sticky-container is the target for our observer. This div will be set as the sticky element and acts as the container. The element used to style and change the element based on the sticky state is the sticky-content div and its children. This ensures that the actual sticky element is always in contact with the shrunken rootMargin at the top of the root element.

Here’s the CSS:

.sticky-content {
  position: relative;
  transition: 0.25s;
}

.sticky-content span {
  display: inline-block;
  font-size: 20px;
  opacity: 0;
  overflow: hidden;
  transition: 0.25s;
  width: 0;
}

.sticky-content h2 {
  display: inline-block;
}

.sticky-container {
  position: sticky;
  top: 0;
}

.sticky-container.active .sticky-content {
  background-color: rgba(0, 0, 0, 0.8);
  color: #fff;
  padding: 10px;
}

.sticky-container.active .sticky-content span {
  opacity: 1;
  transition: 0.25s 0.5s;
  width: 20px;
}

You’ll see that .sticky-container creates our sticky element at the top of zero. The rest is a mixture of styles for the regular state in .sticky-content and the sticky state with .active .sticky-content. Again, you can pretty much do anything you want inside the sticky content div. In this demo, there’s a hidden section symbol that appears from a delayed transition when the sticky state is active. This effect would be difficult without something to assist, like the Intersection Observer.

The JavaScript:

const stickyContainers = document.querySelectorAll('.sticky-container');

const io_options = {
  root: document.body,
  rootMargin: '0px 0px -100% 0px',
  threshold: 0
};

const io_observer = new IntersectionObserver(io_callback, io_options);

stickyContainers.forEach(element => {
  io_observer.observe(element);
});

function io_callback (entries, observer) {
  entries.forEach(entry => {
    entry.target.classList.toggle('active', entry.isIntersecting);
  });
}

This is actually a very straightforward example of using the Intersection Observer for this task. The only oddity is the -100% value in the rootMargin. Take note that this can be repeated for the other three sides as well; it just requires a new observer with its own unique rootMargin with -100% for the appropriate side. There will have to be more unique sticky containers with their own classes such as sticky-container-top and sticky-container-bottom.

The limitation of this is that the top, right, bottom, or left property for the sticky element must always be zero. Technically, you could use a different value, but then you'd have to do the math to figure out the proper value for the rootMargin. This can be done easily, but if things get resized, not only does the math need to be done again, the observer has to be stopped and restarted with the new value. It's easier to set the position property to zero and use the interior elements to style things how you want them.
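To illustrate that math, here's a hypothetical helper for the top-sticky case (the function name and approach are mine, not part of the API). The -100% shorthand from the demo only works when the sticky offset is zero; any other offset forces pixel values tied to the root's current height, which is exactly why a resize would mean recreating the observer.

```javascript
// Hypothetical helper: compute the rootMargin needed to detect a sticky
// element pinned at `top: offsetPx` inside a root of height rootHeightPx.
// The demo's '0px 0px -100% 0px' is the special case where offsetPx is 0.
function stickyTopRootMargin(offsetPx, rootHeightPx) {
  const px = v => `${v === 0 ? 0 : -v}px`;
  // Pull the top boundary down by offsetPx and the bottom boundary up by
  // the remainder, leaving a zero-height detection sliver at y = offsetPx.
  return `${px(offsetPx)} 0px ${px(rootHeightPx - offsetPx)} 0px`;
}

console.log(stickyTopRootMargin(0, 600));  // '0px 0px -600px 0px'
console.log(stickyTopRootMargin(20, 600)); // '-20px 0px -580px 0px'
```

Note that the non-zero case bakes the root's height into the margin string, so the observer would need to be rebuilt whenever that height changes.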

Combining with Scrolling Events

As we've seen in some of the demos so far, the intersectionRatio can be imprecise and somewhat limiting. Using scroll events can be more precise but at the cost of inefficiency in performance. What if we combined the two?

See the Pen
Intersection Observer: Scroll Events
by Travis Almand (@talmand)
on CodePen.

In this demo, we've created an Intersection Observer and the callback function serves the sole purpose of adding and removing an event listener that listens for the scroll event on the root element. When the target first enters the root element, the scroll event listener is created and then is removed when the target leaves the root. As the scrolling happens, the output simply shows each event’s timestamp to show it changing in real time — far more precise than the observer alone.

The setup for the HTML and CSS is fairly standard at this point, so here’s the JavaScript.

const root = document.querySelector('#root');
const target = document.querySelector('#target');
const output = document.querySelector('#output pre');

const io_options = {
  root: root,
  rootMargin: '0px',
  threshold: 0
};

let io_observer;

function scrollingEvents (e) {
  output.innerText = e.timeStamp;
}

function io_callback (entries) {
  if (entries[0].isIntersecting) {
    root.addEventListener('scroll', scrollingEvents);
  } else {
    root.removeEventListener('scroll', scrollingEvents);
    output.innerText = 0;
  }
}

io_observer = new IntersectionObserver(io_callback, io_options);
io_observer.observe(target);

This is a fairly standard example. Take note that we’ll want the threshold to be zero because we’ll get multiple event listeners at the same time if there’s more than one threshold. The callback function is what we’re interested in and even that is a simple setup: add and remove the event listener in an if-else block. The event’s callback function simply updates the div in the output. Whenever the target triggers an intersection change and is not intersecting with the root, we set the output back to zero.

This gives the benefits of both Intersection Observer and scroll events. Consider having a scrolling animation library in place that works only when the section of the page that requires it is actually visible. The library and scroll events are not inefficiently active throughout the entire page.

Interesting differences with browsers

You're probably wondering how much browser support there is for Intersection Observer. Quite a bit, actually!

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 58, Opera 45, Firefox 55, IE No, Edge 16, Safari 12.1
Mobile / Tablet: iOS Safari 12.2-12.3, Opera Mobile 46, Opera Mini No, Android 76, Android Chrome 76, Android Firefox 68

All the major browsers have supported it for some time now. As you might expect, Internet Explorer doesn’t support it at any level, but there’s a polyfill available from the W3C that takes care of that.
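If you need to support those browsers, a feature check before loading the polyfill might look something like this sketch. The function name is mine, and how you actually load the polyfill script is up to your build setup.

```javascript
// Returns true when the given environment lacks a usable Intersection
// Observer implementation. Pass in `window`; it is a parameter here only
// so the check is easy to exercise in isolation.
function needsIntersectionObserverPolyfill(win) {
  return !('IntersectionObserver' in win) ||
    !('IntersectionObserverEntry' in win) ||
    !('intersectionRatio' in win.IntersectionObserverEntry.prototype);
}

// if (needsIntersectionObserverPolyfill(window)) {
//   // inject the W3C polyfill script here before creating any observers
// }
```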

As I was experimenting with different ideas using the Intersection Observer, I did come across a couple of examples that behave differently between Firefox and Chrome. I wouldn't use these examples on a production site, but the behaviors are interesting.

Here's the first example:

See the Pen
Intersection Observer: Animated Transform
by Travis Almand (@talmand)
on CodePen.

The target element is moving within the root element by way of the CSS transform property. The demo has a CSS animation that transforms the target element in and out of the root element on the horizontal axis. When the target element enters or leaves the root element, the intersectionRatio is updated.

If you view this demo in Firefox, you should see the intersectionRatio update properly as the target element slides back and forth. Chrome behaves differently. The default behavior there does not update the intersectionRatio display at all. It seems that Chrome doesn't keep tabs on a target element that is transformed with CSS. However, if we were to move the mouse around in the browser as the target element moves in and out of the root element, the intersectionRatio display does indeed update. My guess is that Chrome only "activates" the observer when there is some form of user interaction.

Here's the second example:

See the Pen
Intersection Observer: Animated Clip-Path
by Travis Almand (@talmand)
on CodePen.

This time we're animating a clip-path that morphs a square into a circle in a repeating loop. The square is the same size as the root element so the intersectionRatio we get will always be less than one. As the clip-path animates, Firefox does not update the intersectionRatio display at all. And this time moving the mouse around does not work. Firefox simply ignores the changing size of the element. Chrome, on the other hand, actually updates the intersectionRatio display in real time. This happens even without user interaction.

This seems to happen because of a section of the spec that says the boundaries of the intersection area (the intersectionRect) should include clipping the target element.

If container has overflow clipping or a css clip-path property, update intersectionRect by applying container’s clip.

So, when a target is clipped, the boundaries of the intersection area are recalculated. Firefox apparently hasn’t implemented this yet.

Intersection Observer, version 2

So what does the future hold for this API?

There are proposals from Google that will add an interesting feature to the observer. Even though the Intersection Observer tells us when a target element crosses into the boundaries of a root element, it doesn’t necessarily mean that element is actually visible to the user. It could have a zero opacity or it could be covered by another element on the page. What if the observer could be used to determine these things?

Please keep in mind that we're still in the early days for such a feature and that it shouldn't be used in production code. Here’s the updated proposal with the differences from the spec's first version highlighted.

If you've been viewing the demos in this article with Chrome, you might have noticed a couple of things in the console — such as entries object properties that don't appear in Firefox. Here's an example of what Firefox logs in the console:

IntersectionObserver
  root: null
  rootMargin: "0px 0px 0px 0px"
  thresholds: Array [ 0 ]
  <prototype>: IntersectionObserverPrototype { }

IntersectionObserverEntry
  boundingClientRect: DOMRect { x: 9, y: 779, width: 707, ... }
  intersectionRatio: 0
  intersectionRect: DOMRect { x: 0, y: 0, width: 0, ... }
  isIntersecting: false
  rootBounds: null
  target: <div class="item">
  time: 261
  <prototype>: IntersectionObserverEntryPrototype { }

Now, here’s the same output from the same console code in Chrome:

IntersectionObserver
  delay: 500
  root: null
  rootMargin: "0px 0px 0px 0px"
  thresholds: [0]
  trackVisibility: true
  __proto__: IntersectionObserver

IntersectionObserverEntry
  boundingClientRect: DOMRectReadOnly {x: 9, y: 740, width: 914, height: 146, top: 740, ...}
  intersectionRatio: 0
  intersectionRect: DOMRectReadOnly {x: 0, y: 0, width: 0, height: 0, top: 0, ...}
  isIntersecting: false
  isVisible: false
  rootBounds: null
  target: div.item
  time: 355.6550000066636
  __proto__: IntersectionObserverEntry

There are some differences in how a few of the properties are displayed, such as target and prototype, but they operate the same in both browsers. What’s different is that Chrome has a few extra properties that don’t appear in Firefox. The observer object has a boolean called trackVisibility, a number called delay, and the entry object has a boolean called isVisible. These are the newly proposed properties that attempt to determine whether the target element is actually visible to the user.

I’ll give a brief explanation of these properties but please read this article if you want more details.

The trackVisibility property is the boolean given to the observer in the options object. This informs the browser to take on the more expensive task of determining the true visibility of the target element.

The delay property does what you might guess: it delays the intersection change callback by the specified amount of time in milliseconds. This is sort of the same as if your callback function's code were wrapped in setTimeout. For trackVisibility to work, this value is required and must be at least 100. If a proper amount is not given, the console will display this error and the observer will not be created.

Uncaught DOMException: Failed to construct 'IntersectionObserver': To enable the 'trackVisibility' option, you must also use a 'delay' option with a value of at least 100. Visibility is more expensive to compute than the basic intersection; enabling this option may negatively affect your page's performance. Please make sure you really need visibility tracking before enabling the 'trackVisibility' option.

The isVisible property in the target’s entry object is the boolean that reports the output of the visibility tracking. This can be used as part of any code much the same way isIntersecting can be used.
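Putting those proposed pieces together, a v2-style observer setup might look like the following sketch. The property names come from the proposal above, but remember this is experimental, Chrome-only behavior at the time of writing.

```javascript
// Sketch of the proposed v2 options. trackVisibility requires a delay of
// at least 100, or the observer will refuse to construct.
const v2Options = {
  root: null,            // defaults to the viewport
  rootMargin: '0px',
  threshold: 0,
  trackVisibility: true, // opt in to the more expensive visibility check
  delay: 100             // minimum value allowed with trackVisibility
};

// In a supporting browser, wiring it up to a hypothetical .target element:
// const observer = new IntersectionObserver((entries) => {
//   entries.forEach(entry => {
//     // isVisible is the new visibility verdict; isIntersecting still works
//     console.log(entry.isIntersecting, entry.isVisible);
//   });
// }, v2Options);
// observer.observe(document.querySelector('.target'));
```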

In all my experiments with these features, seeing it actually working seems to be hit or miss. For example, delay works consistently but isVisible doesn’t always report true — for me at least — when the element is clearly visible. Sometimes this is by design as the spec does allow for false negatives. That would help explain the inconsistent results.

I, for one, can’t wait for this feature to be more feature complete and working in all browsers that support Intersection Observer.

Now thy watch is over

So comes an end to my research on Intersection Observer, an API that I definitely look forward to using in future projects. I spent a good number of nights researching, experimenting, and building examples to understand how it works. But the result was this article, plus some new ideas for how to leverage different features of the observer. Plus, at this point, I feel I can effectively explain how the observer works when asked. Hopefully, this article helps you do the same.

The post An Explanation of How the Intersection Observer Watches appeared first on CSS-Tricks.

An Explanation of How the Intersection Observer Watches

Css Tricks - Tue, 09/24/2019 - 4:18am

There have been several excellent articles exploring how to use this API, including choices from authors such as Phil Hawksworth, Preethi, and Mateusz Rybczonek, just to name a few. But I'm aiming to do something a bit different here. I had an opportunity earlier in the year to present the VueJS transition component to the Dallas VueJS Meetup, which my first article on CSS-Tricks was based on. During the question-and-answer session of that presentation, I was asked about triggering the transitions based on scroll events — which of course you can do, but a member of the audience suggested looking into the Intersection Observer.

This got me thinking. I knew the basics of Intersection Observer and how to make a simple example of using it. Did I know how to explain not only how to use it but how it works? What exactly does it provide to us as developers? As a "senior" dev, how would I explain it to someone right out of a bootcamp who possibly doesn’t even know it exists?

I decided I needed to know. After spending some time researching, testing, and experimenting, I’ve decided to share a good bit of what I’ve learned. Hence, here we are.

A brief explanation of the Intersection Observer

The abstract of the W3C public working draft (first draft dated September 14, 2017) describes the Intersection Observer API as:

This specification describes an API that can be used to understand the visibility and position of DOM elements ("targets") relative to a containing element or to the top-level viewport ("root"). The position is delivered asynchronously and is useful for understanding the visibility of elements and implementing pre-loading and deferred loading of DOM content.

The general idea being a way to watch a child element and be informed when it enters the bounding box of one of its parents. This is most commonly going to be used in relation to the target element scrolling into view in the root element. Up until the introduction of the Intersection Observer, this type of functionality was accomplished by listening for scroll events.

Although the Intersection Observer is a more performant solution for this type of functionality, I do not suggest we necessarily look at it as a replacement to scroll events. Instead, I suggest we look at this API as an additional tool that has a functional overlap with scroll events. In some cases, the two can work together to solve specific problems.

A basic example

I know I risk repeating what's already been explained in other articles, but let’s see a basic example of an Intersection Observer and what it gives us.

The Observer is made up of four parts:

  1. the "root," which is the parent element the observer is tied to, which can be the viewport
  2. the "target," which is a child element being observed and there can be more than one
  3. the options object, which defines certain aspects of the observer’s behavior
  4. the callback function, which is invoked each time an intersection change is observed

The code of a basic example could look something like this:

const options = {
  root: document.body,
  rootMargin: '0px',
  threshold: 0
}

function callback (entries, observer) {
  console.log(observer);

  entries.forEach(entry => {
    console.log(entry);
  });
}

let observer = new IntersectionObserver(callback, options);
observer.observe(targetElement);

The first section in the code is the options object which has root, rootMargin, and threshold properties.

The root is the parent element, often a scrolling element, that contains the observed elements. This can be just about any single element on the page as needed. If the property isn’t provided at all or the value is set to null, the viewport is set to be the root element.

The rootMargin is a string of values describing what can be called the margin of the root element, which affects the resulting bounding box that the target element scrolls into. It behaves much like the CSS margin property. You can have values like 10px 15px 20px which gives us a top margin of 10px, left and right margins of 15px, and a bottom margin of 20px. Only the bounding box is affected and not the element itself. Keep in mind that the only lengths allowed are pixels and percentage values, which can be negative or positive. Also note that the rootMargin does not work if the root element is not an actual element on the page, such as the viewport.

The threshold is the value used to determine when an intersection change should be observed. More than one value can be included in an array so that the same target can trigger the intersection multiple times. The different values are a percentage using zero to one, much like opacity in CSS, so a value of 0.5 would be considered 50% and so on. These values relate to the target’s intersection ratio, which will be explained in just a moment. A threshold of zero triggers the intersection when the first pixel of the target element intersects the root element. A threshold of one triggers when the entire target element is inside the root element.
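Since each threshold is just a ratio from zero to one, it can help to see how a reported intersectionRatio maps back onto a threshold list. This hypothetical helper (not part of the API) returns the highest threshold a given ratio has reached, assuming the list is sorted ascending — which is how the observer reports its thresholds:

```javascript
// Given a reported intersectionRatio and a sorted ascending threshold list,
// return the highest threshold that has been reached.
function lastReachedThreshold(ratio, thresholds) {
  return thresholds.filter(t => t <= ratio).pop();
}

const thresholds = [0, 0.25, 0.5, 0.75, 1];
console.log(lastReachedThreshold(0.6, thresholds)); // 0.5
console.log(lastReachedThreshold(1, thresholds));   // 1
```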

The second section in the code is the callback function that is called whenever an intersection change is observed. Two parameters are passed: the entries are stored in an array and represent each target element that triggers the intersection change. This provides a good bit of information that can be used for the bulk of any functionality that a developer might create. The second parameter is information about the observer itself, which is essentially the data from the provided options object. This provides a way to identify which observer is in play in case a target is tied to multiple observers.

The third section in the code is the creation of the observer itself and where it is observing the target. When creating the observer, the callback function and options object can be external to the observer, as shown. A developer could write the code inline, but the observer is very flexible. For example, the callback and options can be used across multiple observers, if needed. The observe() method is then passed the target element that needs to be observed. It can only accept one target but the method can be repeated on the same observer for multiple targets. Again, very flexible.

Notice the console logs in the code. Here is what those output.

The observer object

Logging the observer data passed into the callback gets us something like this:

IntersectionObserver
  root: null
  rootMargin: "0px 0px 0px 0px"
  thresholds: Array [ 0 ]
  <prototype>: IntersectionObserverPrototype { }

...which is essentially the options object passed into the observer when it was created. This can be used to determine the root element that the intersection is tied to. Notice that even though the original options object had 0px as the rootMargin, this object reports it as 0px 0px 0px 0px, which is what would be expected when considering the rules of CSS margins. Then there’s the array of thresholds the observer is operating under.

The entry object

Logging the entry data passed into the callback gets us something like this:

IntersectionObserverEntry
  boundingClientRect: DOMRect
    bottom: 923.3999938964844, top: 771
    height: 152.39999389648438, width: 411
    left: 9, right: 420
    x: 9, y: 771
    <prototype>: DOMRectPrototype { }
  intersectionRatio: 0
  intersectionRect: DOMRect
    bottom: 0, top: 0
    height: 0, width: 0
    left: 0, right: 0
    x: 0, y: 0
    <prototype>: DOMRectPrototype { }
  isIntersecting: false
  rootBounds: null
  target: <div class="item">
  time: 522
  <prototype>: IntersectionObserverEntryPrototype { }

Yep, lots of things going on here.

For most devs, the two properties that are most likely to be useful are intersectionRatio and isIntersecting. The isIntersecting property is a boolean that is exactly what one might think it is — the target element is intersecting the root element at the time of the intersection change. The intersectionRatio is the percentage of the target element that is currently intersecting the root element. This is represented by a percentage of zero to one, much like the threshold provided in the observer’s option object.

Three properties — boundingClientRect, intersectionRect, and rootBounds — represent specific data about three aspects of the intersection. The boundingClientRect property provides the bounding box of the target element with bottom, left, right, and top values measured from the top-left of the viewport, just like Element.getBoundingClientRect(). The height and width of the target element are also provided, along with its x and y coordinates. The rootBounds property provides the same form of data for the root element. The intersectionRect provides similar data, but it describes the box formed by the intersection area of the target element inside the root element, which corresponds to the intersectionRatio value. Traditional scroll events would require this math to be done manually.

One thing to keep in mind is that all these shapes that represent the different elements are always rectangles. No matter the actual shape of the elements involved, they are always reduced down to the smallest rectangle containing the element.
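To appreciate what the observer hands us for free, here is a sketch of the rectangle math a scroll-event approach would have to do by hand. The rects here are plain objects shaped like what Element.getBoundingClientRect() returns; the function itself is illustrative, not part of any API.

```javascript
// Compute the intersection rectangle and ratio of a target rect against a
// root rect, the same reduction-to-rectangles the observer performs.
function intersectionOf(targetRect, rootRect) {
  const top = Math.max(targetRect.top, rootRect.top);
  const bottom = Math.min(targetRect.bottom, rootRect.bottom);
  const left = Math.max(targetRect.left, rootRect.left);
  const right = Math.min(targetRect.right, rootRect.right);
  const width = Math.max(0, right - left);   // clamp: no overlap means zero
  const height = Math.max(0, bottom - top);
  const targetArea =
    (targetRect.right - targetRect.left) * (targetRect.bottom - targetRect.top);
  return {
    intersectionRect: { top, bottom, left, right, width, height },
    intersectionRatio: targetArea ? (width * height) / targetArea : 0
  };
}

const result = intersectionOf(
  { top: 550, bottom: 650, left: 0, right: 100 }, // target, half scrolled in
  { top: 0, bottom: 600, left: 0, right: 100 }    // root (e.g. the viewport)
);
console.log(result.intersectionRatio); // 0.5
```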

The target property refers to the target element that is being observed. In cases where an observer contains multiple targets, this is the easy way to determine which target element triggered this intersection change.

The time property provides the time (in milliseconds) from when the observer is first created to the time this intersection change is triggered. This is how you can track the time it takes for a viewer to come across a particular target. Even if the target is scrolled into view again at a later time, this property will have the new time provided. This can be used to track the time of a target entering and leaving the root element.
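As a sketch of that bookkeeping, the function below totals how long a target stays in view using the time and isIntersecting values of successive entries. The entries here are simplified plain objects; in real use this would be called from the observer's callback for a single target.

```javascript
// Illustrative dwell-time tracker driven by successive intersection entries.
function createDwellTimer() {
  let enteredAt = null; // time of the most recent entry into the root
  let total = 0;        // accumulated milliseconds in view
  return function record (entry) {
    if (entry.isIntersecting && enteredAt === null) {
      enteredAt = entry.time;          // target just entered the root
    } else if (!entry.isIntersecting && enteredAt !== null) {
      total += entry.time - enteredAt; // target just left; bank the time
      enteredAt = null;
    }
    return total;
  };
}

const record = createDwellTimer();
record({ isIntersecting: true, time: 100 });
console.log(record({ isIntersecting: false, time: 400 })); // 300
```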

While all this information is provided to us whenever an intersection change is observed, it's also provided to us when the observer is first started. For example, on page load the observers on the page will immediately invoke the callback function and provide the current state of every target element it is observing.

This is a wealth of data about the relationships of elements on the page provided in a very performant way.

Intersection Observer methods

Intersection Observer has three methods of note: observe(), unobserve(), and disconnect().

  • observe(): The observe method takes in a DOM reference to a target element to be added to the list of elements to be watched by the observer. An observer can have more than one target element, but this method can only accept one target at a time.
  • unobserve(): The unobserve method takes in a DOM reference to a target element to be removed from the list of elements watched by the observer.
  • disconnect(): The disconnect method causes the observer to stop watching all of its target elements. The observer itself is still active, but has no targets. After disconnect(), target elements can still be passed to the observer with observe().

These methods provide the ability to watch and unwatch target elements, but there's no way to change the options passed to the observer once it is created. You'll have to manually recreate the observer if different options are required.
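A common use of unobserve() is a "run once" observer, such as lazy loading an image the first time it scrolls into view. Here's a sketch of that pattern; the onFirstView handler is a placeholder for whatever work you need to do.

```javascript
// Build a callback that acts on each target's first intersection and then
// stops watching it, using the observer passed as the second argument.
function makeOnceCallback(onFirstView) {
  return function (entries, observer) {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        onFirstView(entry.target);        // do the one-time work
        observer.unobserve(entry.target); // then stop watching this target
      }
    });
  };
}
```

In a browser this would be wired up with `new IntersectionObserver(makeOnceCallback(loadImage), options)` — where loadImage is your own function — and one observe() call per target.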

Performance: Intersection Observer versus scroll events

In my exploration of the Intersection Observer and how it compares to using scroll events, I knew that I needed to do some performance testing. A totally unscientific effort was thus created using Puppeteer. For the sake of time, I only wanted a general idea of what the performance difference is between the two. Therefore, three simple tests were created.

First, I created a baseline HTML file that included one hundred divs with a bit of height to create a long scrolling page. With a basic http-server active, I loaded the HTML file with Puppeteer, started a trace, forced the page to scroll downward in preset increments to the bottom, stopped the trace once the bottom is reached, and finally saved the results of the trace. I also made it so the test can be repeated multiple times and output data each time. Then I duplicated the baseline HTML and wrote my JavaScript in a script tag for each type of test I wanted to run. Each test has two files: one for the Intersection Observer and the other for scroll events.

The purpose of all the tests is to detect when a target element scrolls upward through the viewport at 25% increments. At each increment, a CSS class is applied that changes the background color of the element. In other words, each element has DOM changes applied to it that would cause repaints. Each test was run five times on two different machines: my development Mac that’s rather up-to-date hardware-wise and my personal Windows 7 machine that’s probably average these days. The results of the trace summary of scripting, rendering, painting, and system were recorded and then averaged. Again, nothing too scientific about all this — just a general idea.

The first test has one observer or one scroll event with one callback each. This is a fairly standard setup for both the observer and scroll event. Although, in this case, the scroll event has a bit more work to do because it attempts to mimic the data that the observer provides by default. Once all those calculations are done, the data is stored in an entry array just like the observer does. Then the functionality for removing and applying classes between the two is exactly the same. I do throttle the scroll event a bit with requestAnimationFrame.
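That throttling pattern can be sketched generically. This is my own version rather than the exact test code; the schedule parameter exists only so the idea runs outside a browser, where it would default to requestAnimationFrame.

```javascript
// Run a scroll handler at most once per animation frame. Note that the
// first event in each frame wins; later events in the same frame are dropped.
function rafThrottle(handler, schedule = cb => requestAnimationFrame(cb)) {
  let ticking = false;
  return function (event) {
    if (ticking) return; // a frame is already queued; skip this event
    ticking = true;
    schedule(() => {
      handler(event);
      ticking = false;
    });
  };
}

// Usage in a browser might look like:
// root.addEventListener('scroll', rafThrottle(scrollingEvents));
```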

The second test has 100 observers or 100 scroll events with one callback for each type. Each element is assigned its own observer and event but the callback function is the same. This is actually inefficient because each observer and event behaves exactly the same, but I wanted a simple stress test without having to create 100 unique observers and events — though I have seen many examples of using the observer this way.

The third test has 100 observers or 100 scroll events with 100 callbacks for each type. This means each element has its own observer, event, and callback function. This, of course, is horribly inefficient since this is all duplicated functionality stored in huge arrays. But this inefficiency is the point of this test.

Intersection Observer versus Scroll Events stress tests

In the charts above, you'll see the first column represents our baseline where no JavaScript was run at all. The next two columns represent the first type of test. The Mac ran both quite well, as I would expect from a top-end development machine. The Windows machine gave us a different story. For me, the main point of interest is the scripting results in red. On the Mac, the difference was around 88ms for the observer versus around 300ms for the scroll event. The overall result on the Mac is fairly close for each, but the scripting took a beating with the scroll event. For the Windows machine it's far, far worse. The observer was around 150ms versus around 1400ms for the first and easiest test of the three.

For the second test, we start to see the inefficiency of the scroll test made clearer. Both the Mac and Windows machines ran the observer test with much the same results as before. For the scroll event test, the scripting gets more bogged down completing the tasks given. The Mac jumped to almost a full second of scripting while the Windows machine jumped to a staggering 3200ms or so.

For the third test, things thankfully did not get worse. The results are roughly the same as the second test. One thing to note is that across all three tests the results for the observer were consistent for both computers. Despite no efforts in efficiency for the observer tests, the Intersection Observer outperformed the scroll events by a strong margin.

So, after my non-scientific testing on my own two machines, I felt I had a decent idea of the differences in performance between scroll events and the Intersection Observer. I’m sure with some effort I could make the scroll events more efficient but is it worth doing? There are cases where the precision of the scroll events is necessary, but in most cases, the Intersection Observer will suffice nicely — especially since it appears to be far more efficient with no effort at all.

Understanding the intersectionRatio property

The intersectionRatio property, given to us by IntersectionObserverEntry, represents the percentage of the target element that is within the boundaries of the root element on an intersection change. I found I didn’t quite understand what this value actually represented at first. For some reason I was thinking it was a straightforward zero to 100 percent representation of the appearance of the target element, which it sort of is. It is tied to the thresholds passed to the observer when it is created. It could be used to determine which threshold was the cause of the intersection change just triggered, as an example. However, the values it provides are not always straightforward.

Take this demo for instance:

See the Pen
Intersection Observer: intersectionRatio
by Travis Almand (@talmand)
on CodePen.

In this demo, the observer has been assigned the parent container as the root element. The child element with the target background has been assigned as the target element. The threshold array has been created with 100 entries with the sequence 0, 0.01, 0.02, 0.03, and so on, until 1. The observer triggers every one percent of the target element appearing or disappearing inside the root element so that, whenever the ratio changes by at least one percent, the output text below the box is updated. In case you’re curious, this threshold was accomplished with this code:

[...Array(100).keys()].map(x => x / 100)

I don’t recommend you set your thresholds in this way for typical use in projects.

At first, the target element is completely contained within the root element and the output above the buttons will show a ratio of one. It should be one on first load but we’ll soon see that the ratio is not always precise; it’s possible the number will be somewhere between 0.99 and 1. That does seem odd, but it can happen, so keep that in mind if you create any checks against the ratio equaling a particular value.

Clicking the "left" button will cause the target element to be transformed to the left so that half of it is in the root element and the other half is out. The intersectionRatio should then change to 0.5, or something close to that. We now know that half of the target element is intersecting the root element, but we have no idea where it is. More on that later.

Clicking the "top" button does much the same. It transforms the target element to the top of the root element with half of it in and half of it out again. And again, the intersectionRatio should be somewhere around 0.5. Even though the target element is in a completely different location than before, the resulting ratio is the same.

Clicking the "corner" button again transforms the target element to the upper-right corner of the root element. At this point only a quarter of the target element is within the root element. The intersectionRatio should reflect this with a value of around 0.25. Clicking "center" will transform the target element back to the center and fully contained within the root element.

If we click the "large" button, that changes the height of the target element to be taller than the root element. The intersectionRatio should be somewhere around 0.8, give or take a few ten-thousandths of a percent. This is the tricky part of relying on intersectionRatio. Creating code based on the thresholds given to the observer makes it possible to have thresholds that will never trigger. In this "large" example, any code based on a threshold of 1 will fail to execute. Also consider situations where the root element can be resized, such as the viewport being rotated from portrait to landscape.

Finding the position

So then, how do we know where the target element is in relation to the root element? Thankfully, the data for this calculation is provided by IntersectionObserverEntry, so we only have to do simple comparisons.

Consider this demo:

See the Pen
Intersection Observer: Finding the Position
by Travis Almand (@talmand)
on CodePen.

The setup for this demo is much the same as the one before. The parent container is the root element and the child inside with the target background is the target element. The threshold is an array of 0, 0.5, and 1. As you scroll inside the root element, the target will appear and its position will be reported in the output above the buttons.

Here's the code that performs these checks:

const output = document.querySelector('#output pre');

function io_callback (entries) {
  const ratio = entries[0].intersectionRatio;
  const boundingRect = entries[0].boundingClientRect;
  const intersectionRect = entries[0].intersectionRect;

  if (ratio === 0) {
    output.innerText = 'outside';
  } else if (ratio < 1) {
    if (boundingRect.top < intersectionRect.top) {
      output.innerText = 'on the top';
    } else {
      output.innerText = 'on the bottom';
    }
  } else {
    output.innerText = 'inside';
  }
}

I should point out that I’m not looping over the entries array as I know there will always only be one entry because there’s only one target. I’m taking a shortcut by making use of entries[0] instead.

You’ll see that a ratio of zero puts the target on the "outside." A ratio of less than one puts it either at the top or bottom. That lets us see if the target’s "top" is less than the intersectionRect’s top, which actually means it’s higher on the page and is seen as "on the top." In fact, checking against the root element’s "top" would work for this as well. Logically, if the target isn’t at the top, then it must be at the bottom. If the ratio happens to equal one, then it is "inside" the root element. Checking the horizontal position is the same except it's done with the left or right property.
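Pulled out of the callback, those comparisons are simple enough to express as plain functions. This is just a sketch; the function names are mine, and the rect arguments stand in for the boundingClientRect and intersectionRect values an entry provides.

```javascript
// Classify the target's vertical position relative to the root
// element using only values reported by an IntersectionObserverEntry.
function classifyVertical (ratio, boundingRect, intersectionRect) {
  if (ratio === 0) return 'outside';
  if (ratio < 1) {
    // A smaller top means the target sits higher on the page than
    // the visible intersection, so it is poking out of the top.
    return boundingRect.top < intersectionRect.top ? 'on the top' : 'on the bottom';
  }
  return 'inside';
}

// The horizontal check is identical, only with the left property.
function classifyHorizontal (ratio, boundingRect, intersectionRect) {
  if (ratio === 0) return 'outside';
  if (ratio < 1) {
    return boundingRect.left < intersectionRect.left ? 'on the left' : 'on the right';
  }
  return 'inside';
}

console.log(classifyVertical(0.5, { top: -20 }, { top: 0 }));    // "on the top"
console.log(classifyHorizontal(0.5, { left: 40 }, { left: 0 })); // "on the right"
```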

This is part of the efficiency of using the Intersection Observer. Developers don’t need to request this information from various places on a throttled scroll event (which fires quite a lot regardless) and then calculate the related math to figure all this out. It’s provided by the observer and all that’s needed is a simple if check.

At first, the target element is taller than the root element, so it is never reported as being "inside." Click the "toggle target size" button to make it smaller than the root. Now, the target element can be inside the root element when scrolling up and down.

Restore the target element to its original size by clicking on "toggle target size" again, then click on the "toggle root size" button. This resizes the root element so that it is taller than the target element. Once again, while scrolling up and down, it is possible for the target element to be "inside" the root element.

This demo demonstrates two things about the Intersection Observer: how to determine the position of the target element in relation to the root element and what happens when resizing the two elements. This reaction to resizing is another advantage over scroll events — no need for code to adjust to a resize event.

Creating a position sticky event

The "sticky" value for the CSS position property can be a useful feature, yet it’s a bit limiting in terms of CSS and JavaScript. The styling of the sticky element can only be of one design, whether in its normal state or within its sticky state. There’s no easy way to know the state for JavaScript to react to these changes. So far, there’s no pseudo-class or JavaScript event that makes us aware of the changing state of the element.

I’ve seen examples of having an event of sorts for sticky positioning using both scroll events and the Intersection Observer. The solutions using scroll events always have issues similar to using scroll events for other purposes. The usual solution with an observer is with a "dummy" element that serves little purpose other than being a target for the observer. I like to avoid using single purpose elements like that, so I decided to tinker with this particular idea.

In this demo, scroll up and down to see the section titles reacting to being "sticky" to their respective sections.

See the Pen
Intersection Observer: Position Sticky Event
by Travis Almand (@talmand)
on CodePen.

This is an example of detecting when a sticky element is at the top of the scrolling container so a class name can be applied to the element. This is accomplished by making use of an interesting quirk of the DOM when giving a specific rootMargin to the observer. The values given are:

rootMargin: '0px 0px -100% 0px'

This pushes the bottom margin of the root’s boundary to the top of the root element, which leaves a small sliver of area available for intersection detection that's zero pixels. A target element touching this zero pixel area triggers the intersection change, even though it doesn’t exist by the numbers, so to speak. Consider that we can have elements in the DOM that exist with a collapsed height of zero.

This solution takes advantage of this by recognizing the sticky element is always in its "sticky" position at the top of the root element. As scrolling continues, the sticky element eventually moves out of view and the intersection stops. Therefore, we add and remove the class based on the isIntersecting property of the entry object.

Here’s the HTML:

<section>
  <div class="sticky-container">
    <div class="sticky-content">
      <span>&sect;</span>
      <h2>Section 1</h2>
    </div>
  </div>
  {{ content here }}
</section>

The outer div with the class sticky-container is the target for our observer. This div will be set as the sticky element and acts as the container. The element used to style and change the element based on the sticky state is the sticky-content div and its children. This ensures that the actual sticky element is always in contact with the shrunken rootMargin at the top of the root element.

Here’s the CSS:

.sticky-content {
  position: relative;
  transition: 0.25s;
}

.sticky-content span {
  display: inline-block;
  font-size: 20px;
  opacity: 0;
  overflow: hidden;
  transition: 0.25s;
  width: 0;
}

.sticky-content h2 {
  display: inline-block;
}

.sticky-container {
  position: sticky;
  top: 0;
}

.sticky-container.active .sticky-content {
  background-color: rgba(0, 0, 0, 0.8);
  color: #fff;
  padding: 10px;
}

.sticky-container.active .sticky-content span {
  opacity: 1;
  transition: 0.25s 0.5s;
  width: 20px;
}

You’ll see that .sticky-container creates our sticky element with top set to zero. The rest is a mixture of styles for the regular state in .sticky-content and the sticky state with .active .sticky-content. Again, you can pretty much do anything you want inside the sticky content div. In this demo, there’s a hidden section symbol that appears from a delayed transition when the sticky state is active. This effect would be difficult without something to assist, like the Intersection Observer.

The JavaScript:

const stickyContainers = document.querySelectorAll('.sticky-container');

const io_options = {
  root: document.body,
  rootMargin: '0px 0px -100% 0px',
  threshold: 0
};

const io_observer = new IntersectionObserver(io_callback, io_options);

stickyContainers.forEach(element => {
  io_observer.observe(element);
});

function io_callback (entries, observer) {
  entries.forEach(entry => {
    entry.target.classList.toggle('active', entry.isIntersecting);
  });
}

This is actually a very straightforward example of using the Intersection Observer for this task. The only oddity is the -100% value in the rootMargin. Take note that this can be repeated for the other three sides as well; it just requires a new observer with its own unique rootMargin with -100% for the appropriate side. There will have to be more unique sticky containers with their own classes such as sticky-container-top and sticky-container-bottom.
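Since each side only differs by where the -100% lands in rootMargin, the option objects for those extra observers could be generated with a small helper. This is hypothetical and not part of the demo, but it shows the pattern:

```javascript
// Hypothetical helper: collapse the root's boundary so only a
// zero-pixel sliver remains on the requested side, which is where
// a sticky element pinned to that side will keep intersecting.
function stickyObserverOptions (side, root) {
  const margins = { top: '0px', right: '0px', bottom: '0px', left: '0px' };
  const opposite = { top: 'bottom', right: 'left', bottom: 'top', left: 'right' };
  // Pulling the opposite edge in by the root's full size leaves the
  // detection area at the requested edge.
  margins[opposite[side]] = '-100%';
  return {
    root: root,
    rootMargin: `${margins.top} ${margins.right} ${margins.bottom} ${margins.left}`,
    threshold: 0
  };
}

// For the top-stuck containers this reproduces the options above
// (root would be document.body in the browser; null here for display):
console.log(stickyObserverOptions('top', null).rootMargin);  // "0px 0px -100% 0px"
console.log(stickyObserverOptions('left', null).rootMargin); // "0px -100% 0px 0px"
```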

The limitation of this is that the top, right, bottom, or left property for the sticky element must always be zero. Technically, you could use a different value but then you’d have to do the math to figure out the proper value for the rootMargin. This can be done easily, but if things get resized, not only does the math need to be done again, the observer has to be stopped and restarted with the new value. It’s easier to set the position property to zero and use the interior elements to style things how you want them.

Combining with Scrolling Events

As we've seen in some of the demos so far, the intersectionRatio can be imprecise and somewhat limiting. Using scroll events can be more precise but at the cost of inefficiency in performance. What if we combined the two?

See the Pen
Intersection Observer: Scroll Events
by Travis Almand (@talmand)
on CodePen.

In this demo, we've created an Intersection Observer and the callback function serves the sole purpose of adding and removing an event listener that listens for the scroll event on the root element. When the target first enters the root element, the scroll event listener is created and then is removed when the target leaves the root. As the scrolling happens, the output simply shows each event’s timestamp to show it changing in real time — far more precise than the observer alone.

The setup for the HTML and CSS is fairly standard at this point, so here’s the JavaScript.

const root = document.querySelector('#root');
const target = document.querySelector('#target');
const output = document.querySelector('#output pre');

const io_options = {
  root: root,
  rootMargin: '0px',
  threshold: 0
};

let io_observer;

function scrollingEvents (e) {
  output.innerText = e.timeStamp;
}

function io_callback (entries) {
  if (entries[0].isIntersecting) {
    root.addEventListener('scroll', scrollingEvents);
  } else {
    root.removeEventListener('scroll', scrollingEvents);
    output.innerText = 0;
  }
}

io_observer = new IntersectionObserver(io_callback, io_options);
io_observer.observe(target);

This is a fairly standard example. Take note that we’ll want the threshold to be zero because we’ll get multiple event listeners at the same time if there’s more than one threshold. The callback function is what we’re interested in and even that is a simple setup: add and remove the event listener in an if-else block. The event’s callback function simply updates the div in the output. Whenever the target triggers an intersection change and is not intersecting with the root, we set the output back to zero.

This gives the benefits of both Intersection Observer and scroll events. Consider having a scrolling animation library in place that works only when the section of the page that requires it is actually visible. The library and scroll events are not inefficiently active throughout the entire page.

Interesting differences with browsers

You're probably wondering how much browser support there is for Intersection Observer. Quite a bit, actually!

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 58, Opera 45, Firefox 55, IE No, Edge 16, Safari 12.1
Mobile / Tablet: iOS Safari 12.2-12.3, Opera Mobile 46, Opera Mini No, Android 76, Android Chrome 76, Android Firefox 68

All the major browsers have supported it for some time now. As you might expect, Internet Explorer doesn’t support it at any level, but there’s a polyfill available from the W3C that takes care of that.

As I was experimenting with different ideas using the Intersection Observer, I did come across a couple of examples that behave differently between Firefox and Chrome. I wouldn't use these examples on a production site, but the behaviors are interesting.

Here's the first example:

See the Pen
Intersection Observer: Animated Transform
by Travis Almand (@talmand)
on CodePen.

The target element is moving within the root element by way of the CSS transform property. The demo has a CSS animation that transforms the target element in and out of the root element on the horizontal axis. When the target element enters or leaves the root element, the intersectionRatio is updated.

If you view this demo in Firefox, you should see the intersectionRatio update properly as the target element slides back and forth. Chrome behaves differently. The default behavior there does not update the intersectionRatio display at all. It seems that Chrome doesn’t keep tabs on a target element that is transformed with CSS. However, if we were to move the mouse around in the browser as the target element moves in and out of the root element, the intersectionRatio display does indeed update. My guess is that Chrome only "activates" the observer when there is some form of user interaction.

Here's the second example:

See the Pen
Intersection Observer: Animated Clip-Path
by Travis Almand (@talmand)
on CodePen.

This time we're animating a clip-path that morphs a square into a circle in a repeating loop. The square is the same size as the root element so the intersectionRatio we get will always be less than one. As the clip-path animates, Firefox does not update the intersectionRatio display at all. And this time moving the mouse around does not work. Firefox simply ignores the changing size of the element. Chrome, on the other hand, actually updates the intersectionRatio display in real time. This happens even without user interaction.

This seems to happen because of a section of the spec that says the boundaries of the intersection area (the intersectionRect) should include clipping the target element.

If container has overflow clipping or a css clip-path property, update intersectionRect by applying container’s clip.

So, when a target is clipped, the boundaries of the intersection area are recalculated. Firefox apparently hasn’t implemented this yet.

Intersection Observer, version 2

So what does the future hold for this API?

There are proposals from Google that will add an interesting feature to the observer. Even though the Intersection Observer tells us when a target element crosses into the boundaries of a root element, it doesn’t necessarily mean that element is actually visible to the user. It could have a zero opacity or it could be covered by another element on the page. What if the observer could be used to determine these things?

Please keep in mind that we're still in the early days for such a feature and that it shouldn't be used in production code. Here’s the updated proposal with the differences from the spec's first version highlighted.

If you've been viewing the demos in this article with Chrome, you might have noticed a couple of things in the console — such as entries object properties that don’t appear in Firefox. Here's an example of what Firefox logs in the console:

IntersectionObserver
  root: null
  rootMargin: "0px 0px 0px 0px"
  thresholds: Array [ 0 ]
  <prototype>: IntersectionObserverPrototype { }

IntersectionObserverEntry
  boundingClientRect: DOMRect { x: 9, y: 779, width: 707, ... }
  intersectionRatio: 0
  intersectionRect: DOMRect { x: 0, y: 0, width: 0, ... }
  isIntersecting: false
  rootBounds: null
  target: <div class="item">
  time: 261
  <prototype>: IntersectionObserverEntryPrototype { }

Now, here’s the same output from the same console code in Chrome:

IntersectionObserver
  delay: 500
  root: null
  rootMargin: "0px 0px 0px 0px"
  thresholds: [0]
  trackVisibility: true
  __proto__: IntersectionObserver

IntersectionObserverEntry
  boundingClientRect: DOMRectReadOnly {x: 9, y: 740, width: 914, height: 146, top: 740, ...}
  intersectionRatio: 0
  intersectionRect: DOMRectReadOnly {x: 0, y: 0, width: 0, height: 0, top: 0, ...}
  isIntersecting: false
  isVisible: false
  rootBounds: null
  target: div.item
  time: 355.6550000066636
  __proto__: IntersectionObserverEntry

There are some differences in how a few of the properties are displayed, such as target and prototype, but they operate the same in both browsers. What’s different is that Chrome has a few extra properties that don’t appear in Firefox. The observer object has a boolean called trackVisibility, a number called delay, and the entry object has a boolean called isVisible. These are the newly proposed properties that attempt to determine whether the target element is actually visible to the user.

I’ll give a brief explanation of these properties but please read this article if you want more details.

The trackVisibility property is the boolean given to the observer in the options object. This informs the browser to take on the more expensive task of determining the true visibility of the target element.

The delay property does what you might guess: it delays the intersection change callback by the specified amount of time in milliseconds. This is sort of the same as if your callback function's code were wrapped in setTimeout. For trackVisibility to work, this value is required and must be at least 100. If a proper amount is not given, the console will display this error and the observer will not be created.

Uncaught DOMException: Failed to construct 'IntersectionObserver': To enable the 'trackVisibility' option, you must also use a 'delay' option with a value of at least 100. Visibility is more expensive to compute than the basic intersection; enabling this option may negatively affect your page's performance. Please make sure you really need visibility tracking before enabling the 'trackVisibility' option.

The isVisible property in the target’s entry object is the boolean that reports the output of the visibility tracking. This can be used as part of any code much the same way isIntersecting can be used.
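As a sketch of how the proposed options fit together, here's a plain options object with trackVisibility and delay, plus a small helper of mine (validV2Options is a hypothetical name, not part of any API) that mirrors the constructor's delay check quoted above:

```javascript
// Sketch of the proposed v2 observer options. trackVisibility asks
// the browser to compute true visibility; it requires a delay of at
// least 100ms or the constructor throws the error shown above.
const io_v2_options = {
  root: null,
  rootMargin: '0px',
  threshold: 0,
  trackVisibility: true,
  delay: 100
};

// Hypothetical helper mirroring the browser's own validation rule.
function validV2Options (options) {
  return !options.trackVisibility || options.delay >= 100;
}

console.log(validV2Options(io_v2_options));                       // true
console.log(validV2Options({ trackVisibility: true, delay: 50 })); // false

// In a supporting browser (currently Chrome behind the proposal),
// the entry's isVisible boolean is then checked like isIntersecting:
// const io_observer = new IntersectionObserver(entries => {
//   entries.forEach(entry => {
//     console.log(entry.isIntersecting, entry.isVisible);
//   });
// }, io_v2_options);
```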

In all my experiments with these features, seeing it actually working seems to be hit or miss. For example, delay works consistently but isVisible doesn’t always report true — for me at least — when the element is clearly visible. Sometimes this is by design as the spec does allow for false negatives. That would help explain the inconsistent results.

I, for one, can’t wait for this feature to be more feature complete and working in all browsers that support Intersection Observer.

Now thy watch is over

So comes an end to my research on Intersection Observer, an API that I definitely look forward to using in future projects. I spent a good number of nights researching, experimenting, and building examples to understand how it works. But the result was this article, plus some new ideas for how to leverage different features of the observer. Plus, at this point, I feel I can effectively explain how the observer works when asked. Hopefully, this article helps you do the same.

The post An Explanation of How the Intersection Observer Watches appeared first on CSS-Tricks.

Browser Engine Diversity

Css Tricks - Tue, 09/24/2019 - 4:18am

We lost Opera when they went Chrome in 2013. Same deal with Edge when it also went Chrome earlier this year. Mike Taylor called these changes a "Decreasingly Diverse Browser Engine World" in a talk I'd like to see.

So all we've got left is Chrome-stuff, Firefox-stuff, and Safari-stuff. Chrome and Safari share the same lineage but have diverged enough, evolve separately enough, and are walled away from each other enough that it makes sense to think of them as different from one another.

I know there are fancier words to articulate this. For example, browser engines themselves have names that are distinct and separate from the names of the browsers.

Take Chrome, which is based on the open-source project Chromium, which uses the rendering engine Blink and the JavaScript engine V8.

Firefox uses Gecko as its browser engine, which is turning into Quantum, which has sub-parts like Servo for CSS and rendering.

Safari uses WebKit as a browser engine, which has parts like WebCore and JavaScriptCore.

It's all kinda complicated and I'm not even sure I quite understand it all. My brain just thinks of it as everything under the umbrella of the main browser name.

The two extremes of looking at this from the perspective of decreasing diversity:

  • This is bad. Decreased diversity may hinder ecosystems from competing and innovating.
  • This is good. Cross-engine problems are a major productivity loss for the world. Getting down to one ecosystem would be even better.

Whichever it is, the ship has sailed. All we can do is look forward.

Random thoughts:

  • Perhaps diversity has just moved scope. Rather than the browser engines themselves representing diversity, maybe forks of the engines we have left can compete against each other. Maybe starting from a strong foundation is a good place to start innovating?
  • If, god forbid, we got down to one browser engine, what happens to the web standards process? The fear would be that the last-engine-standing doesn't have to worry about interop anymore and they run wild with implementations. But does running wild mean the playing field can never be competitive again?
  • It's awesome when browsers compete on features that are great for users but don't affect web standards. Great password managers, user protection features, clever bookmarking ideas, reader modes, clean integrations with payment APIs, free VPNs, etc. That was Opera's play, and now we see many more in the same vein. Vivaldi is all about customization, Brave doubles down on privacy and security, and Puma is about monetization.

Brian Kardell wrote about some of this stuff recently in his "Beyond Browser Vendors" post. An interesting point is that the remaining browser engines are all open source. That means they can and do take outside contributions, which is exactly how CSS Grid came to exist.

Most of the work on CSS Grid in both WebKit and Chromium (Blink) was done, not by Google or Apple, but by teams at Igalia.

Think about that for a minute: The prioritization of its work was determined in 2 browsers not by a vendor, but by an investment from Bloomberg who had the foresight to fund this largely uncontroversial work.

And now, that idea continues:

This isn't a unique story, it's just a really important and highly visible one that's fun to hold up. In fact, just in the last 6 months engineers at Igalia have worked on CSS Containment, ResizeObserver, BigInt, private fields and methods, responsive image preloading, CSS Text Level 3, bringing MathML to Chromium, normalizing SVG and MathML DOMs and a lot more.

What we may have lost in browser engine diversity we may gain back in the openness of browser engines and outside players stepping up.

The post Browser Engine Diversity appeared first on CSS-Tricks.


Get Geographic Information from an IP Address for Free

Css Tricks - Tue, 09/24/2019 - 4:17am

Say you need to know what country someone visiting your website is from, because you have an internationalized site and display different things based on that country. You could ask the user. You might want to have that functionality anyway to make sure your visitors have control, but surely they will appreciate it just being correct out of the gate, to the best of your ability.

There are no native web technologies that have this information. JavaScript has geolocation, but users would have to approve that, and even then you'd have to use some library to convert the coordinates into a more usable country. Even back-end languages, which have access to the IP address in a way that JavaScript doesn't, don't just automatically know the country of origin.

CANADA. I JUST WANNA KNOW IF THEY ARE FROM CANADA OR NOT.

You have to ask some kind of service that knows this. The IP Geolocation API is that service, and it's free.

You perform a GET against the API. You can do it right in the browser if you want to test it:

https://api.ipgeolocationapi.com/geolocate/184.149.48.32

But you don't just get the country. You get a whole pile of information you might need to use. I happen to be sitting in Canada and this is what I get for my IP:

{
  "continent": "North America",
  "address_format": "{{recipient}}\n{{street}}\n{{city}} {{region_short}} {{postalcode}}\n{{country}}",
  "alpha2": "CA",
  "alpha3": "CAN",
  "country_code": "1",
  "international_prefix": "011",
  "ioc": "CAN",
  "gec": "CA",
  "name": "Canada",
  "national_destination_code_lengths": [3],
  "national_number_lengths": [10],
  "national_prefix": "1",
  "number": "124",
  "region": "Americas",
  "subregion": "Northern America",
  "world_region": "AMER",
  "un_locode": "CA",
  "nationality": "Canadian",
  "postal_code": true,
  "unofficial_names": ["Canada", "Kanada", "Canadá", "???"],
  "languages_official": ["en", "fr"],
  "languages_spoken": ["en", "fr"],
  "geo": {
    "latitude": 56.130366,
    "latitude_dec": "62.832908630371094",
    "longitude": -106.346771,
    "longitude_dec": "-95.91332244873047",
    "max_latitude": 83.6381,
    "max_longitude": -50.9766,
    "min_latitude": 41.6765559,
    "min_longitude": -141.00187,
    "bounds": {
      "northeast": { "lat": 83.6381, "lng": -50.9766 },
      "southwest": { "lat": 41.6765559, "lng": -141.00187 }
    }
  },
  "currency_code": "CAD",
  "start_of_week": "sunday"
}

With that information, I could easily decide to redirect to the Canadian version of my website, if I have one, or show prices in CAD, or offer a French translation, or whatever else I can think of.
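The same decision can be sketched client-side with fetch. The siteVariant helper below is a hypothetical name of mine; the endpoint is the one shown above.

```javascript
// Hypothetical helper: pick a site variant from the API response.
function siteVariant (data) {
  return data.name === 'Canada' ? 'index-ca' : 'index';
}

// In the browser, the lookup itself would look something like:
// fetch('https://api.ipgeolocationapi.com/geolocate/184.149.48.32')
//   .then(response => response.json())
//   .then(data => console.log(siteVariant(data)));

console.log(siteVariant({ name: 'Canada' })); // "index-ca"
console.log(siteVariant({ name: 'France' })); // "index"
```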

Say you were in PHP. You could get the IP like...

function getUserIpAddr() {
  if (!empty($_SERVER['HTTP_CLIENT_IP'])) {
    $ip = $_SERVER['HTTP_CLIENT_IP'];
  } elseif (!empty($_SERVER['HTTP_X_FORWARDED_FOR'])) {
    $ip = $_SERVER['HTTP_X_FORWARDED_FOR'];
  } else {
    $ip = $_SERVER['REMOTE_ADDR'];
  }
  return $ip;
}

The cURL to get the information:

$url = "https://api.ipgeolocationapi.com/geolocate/184.149.48.32";

$ch = curl_init();
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_URL, $url);

$result = curl_exec($ch);

It comes back as JSON, so:

$json = json_decode($result);

And then $json->{'name'}; will be "Canada" if I'm in Canada.

So I can do like:

if ($json->{'name'} == "Canada") {
  // serve index-ca.php
} else {
  // serve index.php
}

The post Get Geographic Information from an IP Address for Free appeared first on CSS-Tricks.

©2003 - Present Akamai Design & Development.