Front End Web Development

How to Implement Logging in a Node.js Application With Pino-logger

Css Tricks - Wed, 09/22/2021 - 4:33am

Logging is a key aspect of any application. It helps developers understand what their code is doing, and it can save hours of debugging work. This tutorial is about implementing logging in a Node.js application using Pino-logger.

With logging, you can store every bit of information about the flow of the application. With Pino as a dependency for a Node.js application, it becomes effortless to implement logging, and even storing these logs in a separate log file. And its 7.8K stars on GitHub are a testament to that.

In this guide:

  • You will study how to configure logging services with different logging levels.
  • You will learn how to prettify the logs in your terminal as well as whether or not to include the JSON response in your logs.
  • You will see how to save these logs in a separate log file.

When you’re done, you’ll be able to implement logging with coding best practices in your Node.js application using Pino-logger.


Before following this tutorial make sure you have:

  • Familiarity with using Express for a server.
  • Familiarity with setting up a REST API without any authentication.
  • An understanding of command-line tools or integrated terminals in code editors.

Downloading and installing a tool like Postman is recommended for testing API endpoints.

Step 1: Setting up the project

In this step, you set up a basic Node.js CRUD application using Express and Mongoose. You do this because it is better to implement logging functionality in a codebase that mimics a real-world application.

Since this article is about implementing the logger, you can follow “How To Perform CRUD Operations with Mongoose and MongoDB Atlas” to create your basic CRUD application in Node.js.

After completing that tutorial, you should be ready with a Node.js application that includes create, read, update, and delete routes.

Also, at this point, you can install nodemon so that each time you save changes in your codebase, the server restarts automatically and you don’t have to manually start it again with node server.js.

So, write this command in your terminal:

npm install -g --force nodemon

The -g flag indicates that the dependency should be installed globally, and the --force flag tells npm to proceed with the global installation even if a conflicting version is already present.

Step 2: Installing Pino

In this step, you install the latest versions of the dependencies required for logging. These include Pino, Express-Pino-logger, and Pino-pretty. Run the following command in your command-line tool from the project’s root directory:

npm install pino@6.11.3 express-pino-logger@6.0.0 pino-pretty@5.0.2

At this point, you are ready to create a logger service with Pino.

Step 3: Creating the logger service

In this step, you create a Pino-logger service with different levels of logs, like warning, error, info, etc.

After that, you configure this logger-service in your app using Node.js middleware. Start by creating a new services directory in the root folder:

mkdir services

Inside of this new directory, create a new loggerService.js file and add the following code:

const pino = require('pino')

module.exports = pino({})

This code defines the most basic logger service that you can create using Pino-logger. The exported pino function takes two optional arguments, options and destination, and returns a logger instance.

However, you are not passing any options currently because you will configure this logger service in later steps. This creates a small problem, though: the JSON log that you will see in a minute is not readable. To change it into a readable format, add the prettyPrint option to the exported pino function. After that, your loggerService.js file should look something like this:

const pino = require('pino')

module.exports = pino({
  prettyPrint: true,
})

Configuring your loggerService is covered in later steps.

The next step to complete this logger service is to add the following lines of code in your server.js file in the root directory:

const expressPinoLogger = require('express-pino-logger');
const logger = require('./services/loggerService');

In this code, you are importing the logger service that you just made as well as the express-pino-logger npm package that you installed earlier.

The last step is to configure the express-pino-logger with the logger service that you made. Add this piece of code after const app = express(); in the same file:

// ...
const loggerMiddleware = expressPinoLogger({
  logger: logger,
  autoLogging: true,
});

app.use(loggerMiddleware);
// ...

This code creates a loggerMiddleware using expressPinoLogger. The first option passed to the function is logger, which refers to the loggerService that you created earlier. The second option is autoLogging, which takes either true or false as its value and specifies whether you want the JSON response in your logs. That’s coming up.

Now, finally, to test the loggerService, revisit your foodRoutes.js file. Import the loggerService with this code at the top:

const logger = require('../services/loggerService')

Then, in the GET route controller method that you created earlier, put this line of code at the start of the callback function:

// ...
app.get("/food", async (request, response) => {
  logger.info('GET route is accessed')
  // ...
});
// ...

The info method is one of the default levels that comes with Pino-logger. Other methods are: fatal, error, warn, debug, trace or silent.

You can use any of these by passing a message string as the argument in it.
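To make the level mechanics concrete, here is a minimal plain-JavaScript sketch (not Pino’s actual implementation) of how level thresholds gate output. The numeric values match Pino’s documented defaults; the shouldLog helper is purely illustrative:

```javascript
// Pino's default level values: trace 10, debug 20, info 30, warn 40, error 50, fatal 60.
const levels = { trace: 10, debug: 20, info: 30, warn: 40, error: 50, fatal: 60 };

// Illustrative helper: a logger only emits messages whose level is at
// or above its configured minimum level.
function shouldLog(method, minLevel) {
  return levels[method] >= levels[minLevel];
}

console.log(shouldLog('debug', 'info')); // false: debug (20) < info (30)
console.log(shouldLog('error', 'info')); // true: error (50) >= info (30)
```

This is why setting level: 'info' on a logger silences debug and trace calls while keeping warnings and errors.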

Now, before testing the logging service, here is the complete code for the server.js file up to this point:

const express = require("express");
const expressPinoLogger = require('express-pino-logger');
const logger = require('./services/loggerService');
const mongoose = require("mongoose");
const foodRouter = require("./routes/foodRoutes.js");

const app = express();

// ...
const loggerMiddleware = expressPinoLogger({
  logger: logger,
  autoLogging: true,
});

app.use(loggerMiddleware);
// ...

app.use(express.json());

mongoose.connect(
  "mongodb+srv://madmin:<password><dbname>?retryWrites=true&w=majority",
  {
    useNewUrlParser: true,
    useFindAndModify: false,
    useUnifiedTopology: true
  }
);

app.use(foodRouter);

app.listen(3000, () => {
  console.log("Server is running...");
});

Also, don’t forget to restart your server:

nodemon server.js

Now, you can see the log in your terminal. Test this API route endpoint in Postman (or a similar tool) to see it. After testing the API, you should see something like this in your terminal:

This provides a lot of information:

  • The first piece of the information is the log’s timestamp, which is displayed in the default format, but we can change it into something more readable in later steps.
  • Next is the info which is one of the default levels that comes with Pino-logger.
  • Next is a little message saying that the request has been completed.
  • At last, you can see the whole JSON response for that particular request in the very next line.
Step 4: Configuring the logs

In this step, you learn how to configure the Logger service and how to prettify the logs in your terminal using pino-pretty along with built-in options from the pino package you installed earlier.

Custom levels

At this point, you know that Pino-logger comes with default logging levels that you can use as methods to display logs, as you did in the previous step.

But, pino-logger gives you the option to use custom levels. Start by revisiting the loggerService.js file in your services directory. Add the following lines of code after you have imported the pino package at the top:

// ...
const levels = {
  http: 10,
  debug: 20,
  info: 30,
  warn: 40,
  error: 50,
  fatal: 60,
};
// ...

This code is a plain JavaScript object defining additional logging levels. The keys of this object correspond to the namespace of the log level, and the values should be the numerical value of that level.

Now, to use this, you have to specify all that in the exported Pino function that you defined earlier. Remember that the first argument it takes is an object with some built-in options.

Rewrite that function like this:

module.exports = pino({
  prettyPrint: true,
  customLevels: levels, // our defined levels
  useOnlyCustomLevels: true,
  level: 'http',
})

In the above code:

  • The first option, customLevels: levels, specifies that our custom log levels should be used as additional log methods.
  • The second option, useOnlyCustomLevels: true, specifies that you only want to use your customLevels and omit Pino’s levels.

Note: To use the second option, useOnlyCustomLevels, the logger’s default level must be changed to a value from customLevels. That is why you specified the third option, level: 'http'.

Now, you can again test your loggerService and try using it with one of your customLevels. Try it with something like this in your foodRoutes.js file:

// ...
app.get("/foods", async (request, response) => {
  logger.http('GET route is accessed')
});
// ...

Note: Don’t forget to set autoLogging: false in your server.js file, since there is no real need for the irrelevant JSON response that comes with it.

Here is the complete loggerService.js file so far:

const pino = require('pino')

const levels = {
  http: 10,
  debug: 20,
  info: 30,
  warn: 40,
  error: 50,
  fatal: 60,
};

module.exports = pino(
  {
    prettyPrint: true,
    customLevels: levels, // our defined levels
    useOnlyCustomLevels: true,
    level: 'http',
  },
)

You should get something like this in your terminal:

And, all the unnecessary information should be gone.

Pretty printing the Logs

Now you can move ahead and prettify the logs. In other words, you are adding some style to the terminal output that makes it easier (or “prettier”) to read.

Start by passing another option in the exported pino function. Your pino function should look something like this once that option is added:

module.exports = pino({
  customLevels: levels, // our defined levels
  useOnlyCustomLevels: true,
  level: 'http',
  prettyPrint: {
    colorize: true, // colorizes the log
    levelFirst: true,
    translateTime: 'yyyy-dd-mm, h:MM:ss TT',
  },
})

You have added another option, prettyPrint, which is a JavaScript object that enables pretty-printing. Now, inside this object, there are other properties as well:

  • colorize: This adds colors to the terminal logs. Different levels of logs are assigned different colors.
  • levelFirst: This displays the log level name before the logged date and time.
  • translateTime: This translates the timestamp into a human-readable date and time format.

Now, try the API endpoint again, but before that, make sure to put more than one logging statement to take a look at different types of logs in your terminal.

// ...
app.get("/foods", async (request, response) => {
  logger.info('GET route is accessed')
  logger.debug('GET route is accessed')
  logger.warn('GET route is accessed')
  logger.fatal('GET route is accessed')
  // ...

You should see something like this in your terminal:

At this point, you have configured your logger service enough to be used in a production-grade application.

Step 5: Storing logs in a file

In this last step, you learn how to store these logs in a separate log file. Storing logs in a separate file is pretty easy. All you have to do is make use of the destination option in your exported pino-function.

You can start by editing the pino-function by passing the destination option to it like this:

module.exports = pino(
  {
    customLevels: levels, // the defined levels
    useOnlyCustomLevels: true,
    level: 'http',
    prettyPrint: {
      colorize: true, // colorizes the log
      levelFirst: true,
      translateTime: 'yyyy-dd-mm, h:MM:ss TT',
    },
  },
  pino.destination(`${__dirname}/logger.log`)
)

pino.destination takes the path for the log file as the argument. The __dirname variable points to the current directory, which is the services directory for this file.

Note: You added the logger.log file to the path even though it doesn’t exist yet. That’s because the file is created automatically when you save this file. If, for some reason, it is not created, you can create one manually and add it to the folder.

Here is the complete loggerService.js file:

const pino = require('pino')

const levels = {
  http: 10,
  debug: 20,
  info: 30,
  warn: 40,
  error: 50,
  fatal: 60,
};

module.exports = pino(
  {
    customLevels: levels, // our defined levels
    useOnlyCustomLevels: true,
    level: 'http',
    prettyPrint: {
      colorize: true, // colorizes the log
      levelFirst: true,
      translateTime: 'yyyy-dd-mm, h:MM:ss TT',
    },
  },
  pino.destination(`${__dirname}/logger.log`)
)

Test your API again, and you should see your logs in your log file instead of your terminal.


In this article, you learned how to create a logging service that you can use in production-grade applications. You learned how to configure logs and how you can store those logs in a separate file for your future reference.

You can still experiment with various configuration options by reading the official Pino-logger documentation.

Here are a few best practices you can keep in mind when creating a new logging service:

  • Context: A log should always have some context about the data, the application, the time, etc.
  • Purpose: Each log should have a specific purpose. For example, if the given log is used for debugging, then you can make sure to delete it before making a commit.
  • Format: The format for all the logs should always be easy to read.

The post How to Implement Logging in a Node.js Application With Pino-logger appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

An Event Apart Fall Summit 2021! (Use Coupon AEACSST21)

Css Tricks - Tue, 09/21/2021 - 4:37am

(This is a sponsored post.)

The web’s premier conference is online this fall, October 11–13, 2021: An Event Apart Fall Summit. If you already know how good of a conference this is (i.e. that some of the web’s biggest ideas debut at AEA) then just go buy tickets and please enjoy yourself. You can buy literally any combination of the three days. That coupon code, AEACSST21, is good for $100 off if you buy two or more days.

That’s only half!

If you’d like to know more, just have a peek at the speaker list — every name there has changed the game in this industry for the better in their own way, including five speakers hitting the AEA stage for the first time ever. Or, read up on why you should attend.

Spanning the spectrum from climate-conscious development to design beyond the screen, and from advanced CSS to inclusive design and development, An Event Apart Online Together: Fall Summit 2021 will give you deep insights into where we are now, and where things are going next.

Direct Link to ArticlePermalink

The post An Event Apart Fall Summit 2021! (Use Coupon AEACSST21) appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Resources aren’t requested by CSS, but by the DOM

Css Tricks - Mon, 09/20/2021 - 1:16pm

This is a good tweet from Harry:

Simple yet significant thing all developers should keep in mind: CSS resources (fonts, background images) are not requested by your CSS, but by the DOM node that needs them [Note: slight oversimplification, but the correct way to think about it.]

— Harry Roberts (@csswizardry) September 10, 2021

I like it because, as he says, it’s the correct way to think about it. It helps form a mental model of how websites work.

Just to spell it out a bit more…

/* Just because I'm in the CSS, doesn't mean I'll load! In order for `myfont.woff2` to load, a selector needs to set `font-family: 'MyWebFont';` AND something in the DOM needs to match that selector for that file to be requested. */ @font-face { font-family: 'MyWebFont'; src: url('myfont.woff2') format('woff2'); } /* Just because I'm in the CSS, doesn't mean I'll load! In order for `whatever.jpg` to load, the selector `.some-element` needs to be in the DOM. */ .some-element { background-image: url(whatever.jpg); }

The post Resources aren’t requested by CSS, but by the DOM appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Embracing Asymmetrical Design

Css Tricks - Mon, 09/20/2021 - 9:17am

I’ll never forget one of Karen McGrane’s great lessons to the world: truncation is not a content strategy. The idea is that just clipping off text programmatically is a sledgehammer, and avoids the kind of real thinking and planning that makes for good experiences.

Truncation is not a content strategy.

— Karen McGrane (@karenmcgrane) October 10, 2014

Truncation is not a content strategy

— Karen McGrane (@karenmcgrane) July 29, 2020

You certainly can truncate text with CSS. A bit of overflow: hidden; will clip anything, and you can class it up with text-overflow: ellipsis. Even multiple-line clamping is extremely easy these days. The web is a big place. I’m glad we have these tools.

But a better approach is a combination of actual content strategy (i.e. planning text to be of a certain length and using that human touch to get it right) and embracing asymmetrical design. On the latter, Ben Nadel had a nice shout to that idea recently:

Unfortunately, data is not symmetrical. Which is why every Apple product demo is mocked for showcasing users that all have four-letter names: Dave, John, Anna, Sara, Bill, Jill, etc.. Apple uses this type of symmetrical data because it fits cleanly into their symmetrical user interface (UI) design.

Once you release a product into “the real world”, however, and users start to enter “real world data” into it, you immediately see that asymmetrical data, shoe-horned into a symmetrical design, can start to look terrible. Well, actually, it may still look good; but, it provides a terrible user experience.

To fix this, we need to lean into an asymmetric reality. We need to embrace the fact that data is asymmetric and we need to design user interfaces that can expand and contract to work with the asymmetry, not against it.

Ben Nadel, “Embracing Asymmetrical Design And Overcoming The Harmful Effects Of “text-overflow: ellipsis” In CSS”

Fortunately, these days, CSS has so many tools to help do that embracing of the asymmetric. We’ve got CSS grid, which can do things like overlap areas easily, position image and text such that the text can grow upwards, and align them with siblings, even if they aren’t the same size.

Combine that with things like aspect-ratio and object-fit and we have all the tools we need to embrace asymmetry without suffering problems like awkward white space and misalignment.

Direct Link to ArticlePermalink

The post Embracing Asymmetrical Design appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

imba
Css Tricks - Mon, 09/20/2021 - 9:13am

It’s not every day you see a new processor for building websites that reinvents the syntax for HTML and CSS and JavaScript. That’s what imba is doing.

That’s an awful lot of vendor lock-in, but I guess if you get over the learning curve and it helps you build performant websites quickly, then it’s no different than picking any other stack of processing languages.

I would hope their ultimate goal is to compile to native apps across platforms, but if not, if a developer wants to learn an entirely new way to craft an app, they might as well pick Flutter. As far as I understand it, the Flutter syntax is also quite a learning curve, but if you build your app that way, it makes good on the promise that it runs natively across all the major native mobile and desktop platforms, including the web.

Direct Link to ArticlePermalink

The post imba appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Exploring the CSS Paint API: Polygon Border

Css Tricks - Mon, 09/20/2021 - 4:36am

Nowadays, creating complex shapes is an easy task using clip-path, but adding a border to the shapes is always a pain. There is no robust CSS solution and we always need to produce specific “hacky” code for each particular case. In this article, I will show you how to solve this problem using the CSS Paint API.

Exploring the CSS Paint API series:

Before we dig into this third experiment, here is a small overview of what we are building. And, please note that everything we’re doing here is only supported in Chromium-based browsers, so you’ll want to view the demos in Chrome, Edge, or Opera. See caniuse for the latest support.

Live demo

You will find no complex CSS code there but rather a generic code where we only adjust a few variables to control the shape.

The main idea

In order to achieve the polygon border, I am going to rely on a combination of the CSS clip-path property and a custom mask created with the Paint API.

Live Demo
  1. We start with a basic rectangular shape.
  2. We apply clip-path to get our polygon shape.
  3. We apply the custom mask to get our polygon border
The CSS setup

Here’s the CSS for the clip-path step we’ll get to:

.box {
  --path: 50% 0,100% 100%,0 100%;
  width: 200px;
  height: 200px;
  background: red;
  display: inline-block;
  clip-path: polygon(var(--path));
}

Nothing complex so far but note the use of the CSS variable --path. The entire trick relies on that single variable. Since I will be using a clip-path and a mask, both need to use the same parameters, hence the --path variable. And, yes, the Paint API will use that same variable to create the custom mask.

The CSS code for the whole process becomes:

.box {
  --path: 50% 0,100% 100%,0 100%;
  --border: 5px;
  width: 200px;
  height: 200px;
  background: red;
  display: inline-block;
  clip-path: polygon(var(--path));
  -webkit-mask: paint(polygon-border);
}

In addition to the clip-path, we apply the custom mask, plus we add an extra variable, --border, to control the thickness of the border. As you can see, everything is still pretty basic and generic CSS so far. After all, this is one of the things that makes the CSS Paint API so great to work with.

The JavaScript setup

I highly recommend reading the first part of my previous article to understand the structure of the Paint API.

Now, let’s see what is happening inside the paint() function as we jump into JavaScript:

const points = properties.get('--path').toString().split(',');
const b = parseFloat(properties.get('--border').value);
const w = size.width;
const h = size.height;

const cc = function(x,y) {
  // ...
}

var p = points[0].trim().split(" ");
p = cc(p[0],p[1]);

ctx.beginPath();
ctx.moveTo(p[0],p[1]);
for (var i = 1; i < points.length; i++) {
  p = points[i].trim().split(" ");
  p = cc(p[0],p[1]);
  ctx.lineTo(p[0],p[1]);
}
ctx.closePath();

ctx.lineWidth = 2*b;
ctx.strokeStyle = '#000';
ctx.stroke();

The ability to get and set CSS custom properties is one of the reasons they’re so great. We can reach for JavaScript to first read the value of the --path variable, then convert it into an array of points (seen on the very first line above). So, that means 50% 0,100% 100%,0 100% become the points for the mask, i.e. points = ["50% 0","100% 100%","0 100%"].
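You can verify this conversion in plain JavaScript outside the worklet. This sketch mirrors that first line, assuming the same --path value from the CSS above:

```javascript
// The --path custom property arrives as a single string.
const path = '50% 0,100% 100%,0 100%';

// Splitting on commas yields one string per point.
const points = path.split(',');
console.log(points); // ["50% 0", "100% 100%", "0 100%"]

// Each point is then trimmed and split on spaces into an [x, y] pair.
const pairs = points.map(p => p.trim().split(' '));
console.log(pairs[0]); // ["50%", "0"]
```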

Then we loop through the points to draw a polygon using moveTo and lineTo. This polygon is exactly the same as the one drawn in CSS with the clip-path property.

Finally, and after drawing the shape, I add a stroke to it. I define the thickness of the stroke using lineWidth and I set a solid color using strokeStyle. In other words, only the stroke of the shape is visible since I am not filling the shape with any color (i.e. it’s transparent).

Now all we have to do is to update the path and the thickness to create any polygon border. It’s worth noting that we are not limited to solid color here since we are using the CSS background property. We can consider gradients or images.

Live Demo

In case we need to add content, we have to consider a pseudo-element. Otherwise, the content gets clipped in the process. It’s not incredibly tough to support content. We move the mask property to the pseudo-element. We can keep the clip-path declaration on the main element.

CodePen Embed Fallback

Questions so far?

I know you probably have some burning questions you want to ask after looking over that last script. Allow me to preemptively answer a couple things I bet you have in mind.

What is that cc() function?

I am using that function to convert the value of each point into pixel values. For each point, I get both x and y coordinates — using points[i].trim().split(" ") — and then I convert those coordinates to make them usable inside the canvas element that allows us to draw with those points.

const cc = function(x,y) {
  var fx=0,fy=0;
  if (x.indexOf('%') > -1) {
    fx = (parseFloat(x)/100)*w;
  } else if(x.indexOf('px') > -1) {
    fx = parseFloat(x);
  }
  if (y.indexOf('%') > -1) {
    fy = (parseFloat(y)/100)*h;
  } else if(y.indexOf('px') > -1) {
    fy = parseFloat(y);
  }
  return [fx,fy];
}

The logic is simple: if it’s a percentage value, I use the width (or the height) to find the final value. If it’s a pixel value, I simply get the value without the unit. If, for example, we have [50% 20%] where the width is equal to 200px and the height is equal to 100px, then we get [100 20]. If it’s [20px 50px], then we get [20 50]. And so on.
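Here is a standalone sketch of that logic you can run outside the worklet, assuming the same 200px-by-100px box from the example above:

```javascript
// Hypothetical box dimensions matching the example: 200px wide, 100px tall.
const w = 200, h = 100;

// Simplified stand-in for cc(): percentages resolve against the box
// dimensions, pixel values pass through with the unit stripped.
const cc = function (x, y) {
  const conv = (v, ref) =>
    v.indexOf('%') > -1 ? (parseFloat(v) / 100) * ref : parseFloat(v);
  return [conv(x, w), conv(y, h)];
};

console.log(cc('50%', '20%'));   // [100, 20]
console.log(cc('20px', '50px')); // [20, 50]
```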

Why are you using CSS clip-path if the mask is already clipping the element to the stroke of the shape?

Using only the mask was the first idea I had in mind, but I stumbled upon two major issues with that approach. The first is related to how stroke() works. From MDN:

Strokes are aligned to the center of a path; in other words, half of the stroke is drawn on the inner side, and half on the outer side.

That “half inner side, half outer side” gave me a lot of headaches, and I always ended up with a strange overflow when putting everything together. That’s where CSS clip-path helps; it clips the outer part and only keeps the inner side. No more overflow!

You will notice the use of ctx.lineWidth = 2*b. I am adding double the border thickness because I will clip half of it to end with the right thickness needed around the entire shape.

The second issue is related to the shape’s hover-able area. It’s known that masking does not affect that area and we can still hover/interact with the whole rectangle. Again, reaching for clip-path fixes the issue, plus we limit the interaction just to the shape itself.

The following demo illustrates these two issues. The first element has both a mask and clip-path, while the second only has the mask. We can clearly see the overflow issue. Try to hover the second one to see that we can change the color even if the cursor is outside the triangle.

CodePen Embed Fallback

Why are you using @property with the border value?

This is an interesting — and pretty tricky — part. By default, custom properties (like --border) are considered a “CSSUnparsedValue” which means they are treated as strings. From the CSS spec:

CSSUnparsedValue objects represent property values that reference custom properties. They are comprised of a list of string fragments and variable references.

With @property, we can register the custom property and give it a type so that it can be recognized by the browser and handled as a valid type instead of a string. In our case, we are registering the border as a <length> type so later it becomes a CSSUnitValue. This also allows us to use any length unit (px, em, ch, vh, etc.) for the border value.

This may sound a bit complex but let me try to illustrate the difference with a DevTools screenshot.

I am using console.log() on a variable where I defined 5em. The first one is registered but the second one is not.

In the first case, the browser recognizes the type and makes the conversion into a pixel value, which is useful since we only need pixel values inside the paint() function. In the second case, we get the variable as a string which is not very useful since we cannot convert em units into px units inside the paint() function.

Try all the units. It will always result in the computed pixel value inside the paint() function.

What about the --path variable?

I wanted to use the same approach with the --path variable but, unfortunately, I think I pushed CSS right up to the limits of what it can do here. Using @property, we can register complex types, even multi-value variables. But that’s still not enough for the path we need.

We can use the + and # symbols to define a space-separated or comma-separated list of values, but our path is a comma-separated list of space-separated percentage (or length) values. I would use something like [<length-percentage>+]#, but it doesn’t exist.

For the path, I am obliged to manipulate it as a string value. That limits us just to percentage and pixel values for now. For this reason, I defined the cc() function to convert the string values into pixel values.

We can read in the CSS spec:

The internal grammar of the syntax strings is a subset of the CSS Value Definition Syntax. Future levels of the specification are expected to expand the complexity of the allowed grammar, allowing custom properties that more closely resemble the full breadth of what CSS properties allow.

Even if the grammar is extended so that the path can be registered, we will still face an issue if we need to include calc() inside our path:

--path: 0 0,calc(100% - 40px) 0,100% 40px,100% 100%,0 100%;

In the above, calc(100% - 40px) is a value that the browser considers a <length-percentage>, but the browser cannot compute that value until it knows the reference for the percentage. In other words, we cannot get the equivalent pixel value inside the paint() function since the reference can only be known when the value gets used within var().

To overcome this, we can extend the cc() function to do the conversion. We already did the conversion of a percentage value and a pixel value, so let’s combine those into one conversion. We will consider two cases: calc(P% - Xpx) and calc(P% + Xpx). Our script becomes:

const cc = function(x,y) {
  var fx=0,fy=0;
  if (x.indexOf('calc') > -1) {
    var tmp = x.replace('calc(','').replace(')','');
    if (tmp.indexOf('+') > -1) {
      tmp = tmp.split('+');
      fx = (parseFloat(tmp[0])/100)*w + parseFloat(tmp[1]);
    } else {
      tmp = tmp.split('-');
      fx = (parseFloat(tmp[0])/100)*w - parseFloat(tmp[1]);
    }
  } else if (x.indexOf('%') > -1) {
    fx = (parseFloat(x)/100)*w;
  } else if(x.indexOf('px') > -1) {
    fx = parseFloat(x);
  }
  if (y.indexOf('calc') > -1) {
    var tmp = y.replace('calc(','').replace(')','');
    if (tmp.indexOf('+') > -1) {
      tmp = tmp.split('+');
      fy = (parseFloat(tmp[0])/100)*h + parseFloat(tmp[1]);
    } else {
      tmp = tmp.split('-');
      fy = (parseFloat(tmp[0])/100)*h - parseFloat(tmp[1]);
    }
  } else if (y.indexOf('%') > -1) {
    fy = (parseFloat(y)/100)*h;
  } else if(y.indexOf('px') > -1) {
    fy = parseFloat(y);
  }
  return [fx,fy];
}

We’re using indexOf() to test the existence of calc, then, with some string manipulation, we extract both values and find the final pixel value.

And, as a result, we also need to update this line:

p = points[i].trim().split(" ");

to this:

p = points[i].trim().split(/(?!\(.*)\s(?![^(]*?\))/g);

Since we need to consider calc(), using the space character won’t work for splitting. That’s because calc() also contains spaces. So we need a regex. Don’t ask me about it — it’s the one that worked after trying a lot from Stack Overflow.
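If you want to convince yourself the regex behaves, you can test it directly in Node; it splits on spaces only when they fall outside a pair of parentheses:

```javascript
// Split on whitespace, but skip whitespace that sits inside parentheses,
// so calc() expressions stay in one piece.
const re = /(?!\(.*)\s(?![^(]*?\))/g;

console.log('calc(100% - 40px) 0'.split(re)); // ["calc(100% - 40px)", "0"]
console.log('50% 0'.split(re));               // ["50%", "0"]
```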

Here is a basic demo to illustrate the updates we have made so far to support calc():

CodePen Embed Fallback

Notice that we have stored the calc() expression within the variable --v that we registered as a <length-percentage>. This is also a part of the trick because if we do this, the browser uses the correct format. Whatever the complexity of the calc() expression, the browser always converts it to the format calc(P% +/- Xpx). For this reason, we only have to deal with that format inside the paint() function.

Below are different examples, each using a different calc() expression:

CodePen Embed Fallback

If you inspect the code of each box and see the computed value of --v, you will always find the same format, which is super useful because we can have any kind of calculation we want.

It should be noted that using the variable --v is not mandatory. We can include the calc() directly inside the path; we simply need to make sure we insert the correct format, since the browser will not handle it for us (remember that we cannot register the path variable, so it's just a string to the browser). This can be useful when we need many calc() values inside the path and creating a variable for each one would make the code too lengthy. We will see a few examples at the end.

Can we have a dashed border?

We can! And it only takes one instruction. The <canvas> element already has a built-in function for drawing dashed strokes, setLineDash():

The setLineDash() method of the Canvas 2D API’s CanvasRenderingContext2D interface sets the line dash pattern used when stroking lines. It uses an array of values that specify alternating lengths of lines and gaps which describe the pattern.

All we have to do is introduce another variable to define our dash pattern.

Live Demo

In the CSS, we simply added a CSS variable, --dash, and within the paint() function we have the following:

// ...
const d = properties.get('--dash').toString().split(',');
// ...
ctx.setLineDash(d);

We can also control the offset using lineDashOffset. We will see later how controlling the offset can help us reach some cool animations.

Why not use @property instead to register the dash variable?

Technically, we can register the dash variable as a <length># since it’s a comma-separated list of length values. It does work, but I wasn’t able to retrieve the values inside the paint() function. I don’t know if it’s a bug, a lack of support, or I’m just missing a piece of the puzzle.

Here is a demo to illustrate the issue:

CodePen Embed Fallback

I am registering the --dash variable using this:

@property --dash {
  syntax: '<length>#';
  inherits: true;
  initial-value: 0;
}

…and later declaring the variable as this:

--dash: 10em,3em;

If we inspect the element, we can see that the browser is handling the variable correctly, since the computed values are pixel ones.

But we only get the first value inside the paint() function.

Until I find a fix for this, I am stuck using the --dash variable as a string, like --path. Not a big deal in this case, as I don't think we will need more than pixel values.

Use cases!

After exploring the behind-the-scenes of this technique, let's now focus on the CSS part and check out a few use cases for our polygon border.

A collection of buttons

We can easily generate custom-shaped buttons with cool hover effects.

CodePen Embed Fallback

Notice how calc() is used inside the path of the last button, the way we described it earlier. It works fine since I am following the correct format.

Breadcrumbs

No more headaches when creating a breadcrumb system! Below, you will find no “hacky” or complex CSS code, but rather something that’s pretty generic and easy to understand where all we have to do is adjust a few variables.

CodePen Embed Fallback

Card reveal animation

If we apply some animation to the thickness, we can get a fancy hover effect:

CodePen Embed Fallback

We can use that same idea to create an animation that reveals the card:

CodePen Embed Fallback

Callout & speech bubble

“How the hell can we add a border to that small arrow???” I think everyone has stumbled over this issue when dealing with either a callout or a speech bubble sort of design. The Paint API makes this trivial.

CodePen Embed Fallback

In that demo, you will find a few examples that you can extend. You only need to find the path for your speech bubble, then adjust some variables to control the border thickness and the size/position of the arrow.

Animating dashes

One last one before we end. This time, we will focus on the dashed border to create more animations. We already did one in the button collection, where we transformed a dashed border into a solid one. Let's tackle two others.

Hover the demo below and see the nice effect we get:

CodePen Embed Fallback

Those who have worked with SVG for some time are likely familiar with the sort of effect we achieve by animating stroke-dasharray. Chris even tackled the concept a while back. Thanks to the Paint API, we can do this directly in CSS. The idea is almost the same one we use with SVG. We define the dash variable:

--dash: var(--a),1000;

The variable --a starts at 0, so our pattern is a solid line of length 0 followed by a gap of length 1000; hence, no border. We animate --a to a big value to draw our border.

We also talked about using lineDashOffset, which we can use for another kind of animation. Hover the demo below and see the result:

CodePen Embed Fallback

Finally, a CSS solution to animate the position of dashes that works with any kind of shape!

What I did is pretty simple. I added an extra variable, --offset, to which I apply a transition from 0 to N. Then, inside the paint() function, I do the following:

const o = properties.get('--offset'); ctx.lineDashOffset=o;

As simple as that! Let’s not forget an infinite animation using keyframes:

CodePen Embed Fallback

We can make the animation run continuously by animating the offset from 0 to N, where N is the sum of the values used in the dash variable (which, in our case, is 10 + 15 = 25). We use a negative value to go in the opposite direction.

I have probably missed a lot of use cases that I'll let you discover!

Exploring the CSS Paint API series:

The post Exploring the CSS Paint API: Polygon Border appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Designing Beautiful Shadows in CSS

Css Tricks - Fri, 09/17/2021 - 12:49pm

My favorite kind of blog post is when someone takes a subject that I’ve spent all of five minutes considering and then says—no!—this is an enormous topic worthy of a dissertation. Look at all the things you can do with this tiny CSS property!

I was reminded of this when I spotted this post by Josh Comeau about designing beautiful shadows in CSS:

In my humble opinion, the best websites and web applications have a tangible “real” quality to them. There are lots of factors involved to achieve this quality, but shadows are a critical ingredient.

When I look around the web, though, it’s clear that most shadows aren’t as rich as they could be. The web is covered in fuzzy grey boxes that don’t really look much like shadows.

Josh shows the regular old boring shadow approaches and then explores all the ways to improve and optimize them into shadows with real depth. It all comes down to taking a closer look at color and exploring the box-shadow CSS property. And speaking of depth, Rob O’Leary’s “Getting Deep Into Shadows” is another comprehensive look at shadows.

I had also completely forgotten about filter: drop-shadow;, which is particularly useful for adding shadows to images that you want to throw onto a page. Great stuff all round.

Direct Link to ArticlePermalink

The post Designing Beautiful Shadows in CSS appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Shadow Roots and Inheritance

Css Tricks - Thu, 09/16/2021 - 10:14am

There is a helluva gotcha with styling a <details> element, as documented here by Kitty Guiraudel. It’s obscure enough that you might never run into it, but if you do, I could see it being very confusing (it would confuse me, at least).

Perhaps you’re aware of the shadow DOM? It’s talked about a lot in terms of web components and comes up when thinking in terms of <svg> and <use>. But <details> has a shadow DOM too:

<details>
  #shadow-root (user-agent)
    <slot name="user-agent-custom-assign-slot" id="details-summary">
      <!-- <summary> reveal -->
    </slot>
    <slot name="user-agent-default-slot" id="details-content">
      <!-- <p> reveal -->
    </slot>
  <summary>System Requirements</summary>
  <p>
    Requires a computer running an operating system. The computer must have
    some memory and ideally some kind of long-term storage. An input device as
    well as some form of output device is recommended.
  </p>
</details>

As Amelia explains, the <summary> is inserted in the first shadow root slot, while the rest of the content (called “light DOM”, or the <p> tag in our case) is inserted in the second slot.

The thing is, none of these slots or the shadow root are matched by the universal selector *, which only matches elements from the light DOM. 

So the <slot> is kind of “in the way” there. That <p> is actually a child of the <slot>, in the end. It’s extra weird, because a selector like details > p will still select it just fine. Presumably, that selector gets resolved in the light DOM and then continues to work after it gets slotted in.

But if you tell a property to inherit, things break down. If you did something like…

<div>
  <p></p>
</div>

div {
  border-radius: 8px;
}
div p {
  border-radius: inherit;
}

…that <p> is going to have an 8px border radius.

But if you do…

<details>
  <summary>Summary</summary>
  <p>Lorem ipsum...</p>
</details>

details {
  border-radius: 8px;
}
details p {
  border-radius: inherit;
}

That <p> is going to be as square as a doorknob. I guess that’s either because you can’t force inheritance through the shadow DOM, or the inherit only happens from the parent, which is a <slot>? Whatever the case, it doesn’t work.

CodePen Embed Fallback

Direct Link to ArticlePermalink

The post Shadow Roots and Inheritance appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Static Site Generators vs. CMS-powered Websites: How to Keep Marketers and Devs Happy

Css Tricks - Thu, 09/16/2021 - 4:31am

(This is a sponsored post.)

Many developers love working with static site generators like Gatsby and Hugo. These powerful yet flexible systems help create beautiful websites using familiar tools like Markdown and React. Nearly every popular modern programming language has at least one actively developed, fully-featured static site generator.

Static site generators boast a number of advantages, including fast page loads. Quickly rendering web pages isn’t just a technical feat, it improves audience attraction, retention, and conversion. But as much as developers love these tools, marketers and other less technical end users may struggle with unfamiliar workflows and unclear processes.

The templates, easy automatic deploys, and convenient asset management provided by static site generators all free up developers to focus on creating more for their audiences to enjoy. However, while developers take the time to build and maintain static sites, it is the marketing teams that use them daily, creating and updating content. Unfortunately, many of the features that make static site generators awesome for developers make them frustrating to marketers.

Let’s explore some of the disadvantages of using a static site generator. Then, see how switching to a dynamic content management system (CMS) — especially one powered by a CRM (customer relationship management) platform — can make everyone happy, from developers to marketers to customers.

Static Site Generator Disadvantages

Developers and marketers typically thrive using different workflows. Marketers don’t usually want to learn Markdown just to write a blog post or update site copy — and they shouldn’t need to. 

Frankly, it isn’t reasonable to expect marketers to learn complex systems for everyday tasks like embedding graphs or adjusting image sizes just to complete simple tasks. Marketers should have tools that make it easier to create and circulate content, not more complicated.

Developers tend to dedicate most of their first week on a project to setting up a development environment and getting their local and staging tooling up and running. When a development team decides that a static site generator is the right tool, they also commit to either configuring and maintaining local development environments for each member of the marketing team or providing a build server to preview changes.

Both approaches have major downsides. When marketers change the site, they want to see their updates instantly. They don’t want to commit their changes to a Git repository then wait for a CI/CD pipeline to rebuild and redeploy the site every time. Local tooling enabling instant updates tends to be CLI-based and therefore inaccessible for less technical users.

This does not have to devolve into a prototypical development-versus-marketing power struggle. A dynamic website created with a next-generation tool like HubSpot’s CMS Hub can make everyone happy.

A New Generation of Content Management Systems

One reason developers hold static site generators in such high regard is the deficiency of the systems they replaced. Content management systems of the past were notorious for slow performance, security flaws, and poor user experiences for both developers and content creators. However, some of today’s CMS platforms have learned from these mistakes and deficiencies and incorporated the best static site generator features while developing their own key advantages.

A modern, CMS-based website gives developers the control they need to build the features their users demand while saving implementation time. Meanwhile, marketing teams can create content with familiar, web-based, what-you-see-is-what-you-get tools that integrate directly with existing data and software.

For further advantages, consider a CRM-powered solution, like HubSpot’s CMS Hub. Directly tied to your customer data, a CRM-powered site builder allows you to create unique and highly personalized user experiences, while also giving you greater visibility into the customer journey.

Content Management Systems Can Solve for Developers

Modern content management systems like CMS Hub allow developers to build sites locally with the tools and frameworks they prefer, then easily deploy them to their online accounts. Once deployed, marketers can create and edit content using drag-and-drop and visual design tools within the guardrails set by the developers. This gives both teams the flexibility they need and streamlines workflows.

Solutions like CMS Hub also replace the need for unreliable plugins with powerful serverless functions. Serverless functions, which are written in JavaScript and use the NodeJS runtime, allow for more complex user interactions and dynamic experiences. Using these tools, developers can build out light web applications without ever configuring or managing a server. This elevates websites from static flyers to a modern, personalized customer experience without piling on excess developer work. 

While every content management system has its advantages, CMS Hub also includes a built-in relational database, multi-language support, and the ability to build dynamic content and login pages based on CRM data. All of these features are designed to make life easier for developers.

Modern CMS-Based Websites Make Marketers Happy, Too

Marketing teams can immediately take advantage of CMS features, especially when using a CRM-powered solution. They can add pages, edit copy, and even alter styling using a drag-and-drop editor, without needing help from a busy developer. This empowers the marketing team and reduces friction when making updates. It also reduces the volume of support requests that developers have to manage.

Marketers can also benefit from built-in tools for search engine optimization (SEO), A/B testing, and specialized analytics. In addition to standard information like page views, a CRM-powered website offers contact attribution reporting. This end-to-end measurement reveals which initiatives generate actual leads via the website. These leads then flow seamlessly into the CRM for the sales team to close deals.

CRM-powered websites also support highly customized experiences for site users. The CRM behind the website already holds the customer data. This data automatically synchronizes because it lives within one system as a single source of truth for both marketing pages and sales workflows. This default integration saves development teams time that they would otherwise spend building data pipelines.

Next Steps

Every situation is unique, and in some cases, a static site generator is the right decision. But if you are building a site for an organization and solving for the needs of developers and marketers, a modern CMS may be the way to go. 

Options like CMS Hub offer all the benefits of a content management system while coming close to matching static site generators’ marquee features: page load speed, simple deployment, and stout reliability. But don’t take my word for it. Create a free CMS Hub developer test account and take it for a test drive.

The post Static Site Generators vs. CMS-powered Websites: How to Keep Marketers and Devs Happy appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

2021 Scroll Survey Report

Css Tricks - Wed, 09/15/2021 - 12:39pm

Here’s a common thought and question: how do browsers prioritize what they work on? We get little glimpses of it sometimes. We’re told to “star issues” in bug trackers to signal interest. We’re told to get involved in GitHub threads for spec issues. We’re told they do read the blog posts. And, sometimes, we get to see the results of surveys. Chrome ran a survey about scrolling on the web back in April and has published the results with an accompanying blog post.

“Scrolling” is a big landscape:

From our research, these difficulties come from the multitude of use cases for scroll. When we talk about scrolling, that might include:

According to the results, dang near half of developers are dissatisfied with scrolling on the web, so this is a metric Google devs want to change and they will prioritize it.

To add to the list above, I think even smooth scrolling is a little frustrating in how you can’t control the speed or other behaviors of it. For example, you can’t say “smooth scroll an on-page jump-down link, but don’t smooth scroll a find-on-page jump.”

And that’s not to mention scroll snapping, which is another whole thing with the occasional bug. Speaking of which, Dave had an idea on the show the other day that was pretty interesting. Now that scroll snapping is largely supported, even on desktop, and feels pretty smooth for the most part, should we start using it more liberally, like on whole page sections? Maybe even like…

/* Reset stylesheet */
main, section, article, footer {
  scroll-snap-align: start;
}

I’ve certainly seen scroll snapping in more places. Like this example from Scott Jehl where he was playing with scroll snapping on fixed table headers and columns. It’s a very nice touch:

CodePen Embed Fallback

Direct Link to ArticlePermalink

The post 2021 Scroll Survey Report appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

kbar
Css Tricks - Wed, 09/15/2021 - 8:51am

It’s not every day that a new pattern emerges across the web, but I think cmd + k is here to stay. It’s a keyboard shortcut that usually pops open a search UI and it lets you toggle settings on or off, such as dark mode. And lots of apps support it now—Slack, Notion, Linear, and Sentry (my current gig) are the ones that I’ve noticed lately, but I’m sure tons of others have started picking up on this pattern.

Speaking of which, this looks like a great project:

kbar is a fully extensible command+k interface for your site

My only hope is that more websites and applications start to support it in the future—with kbar being a great tool to help spread the good word about this shortcut.

Direct Link to ArticlePermalink

The post kbar appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

An Intro to JavaScript Proxy

Css Tricks - Wed, 09/15/2021 - 4:21am

Have you ever been in a situation where you wish you could have some control over the values in an object or array? Maybe you wanted to prevent certain types of data or even validate the data before storing it in the object. Suppose you wanted to react to the incoming data in some way, or even the outgoing data? For example, maybe you wanted to update the DOM by displaying results or swap classes for styling changes as data changes. Ever wanted to work on a simple idea or section of page that needed some of the features of a framework, like Vue or React, but didn’t want to start up a new app?

Then JavaScript Proxy might be what you’re looking for!

A brief introduction

I’ll say up front: when it comes to front-end technologies, I’m more of a UI developer, much like the non-JavaScript-focused side described in The Great Divide. I’m happy just creating nice-looking projects that are consistent in browsers, with all the quirks that go with that. So when it comes to more pure JavaScript features, I tend not to go too deep.

Yet I still like to do research and I’m always looking for something to add to that list of new things to learn. Turns out JavaScript proxies are an interesting subject because just going over the basics opens up many possible ideas of how to leverage this feature. Despite that, at first glance, the code can get heavy quick. Of course, that all depends on what you need.

The concept of the proxy object has been with us for quite some time now. I could find references to it in my research going back several years. Yet it was not high on my list because it has never had support in Internet Explorer. In comparison, it has had excellent support across all the other browsers for years. This is one reason why Vue 3 isn’t compatible with Internet Explorer 11, because of the use of the proxy within the newest Vue project.

So, what is the proxy object exactly?

The Proxy object

MDN describes the Proxy object as something that:

[…] enables you to create a proxy for another object, which can intercept and redefine fundamental operations for that object.

The general idea is that you can create an object that has functionality that lets you take control of typical operations that happen while using an object. The two most common would be getting and setting values stored in the object.

const myObj = { mykey: 'value' }

console.log(myObj.mykey); // "gets" value of the key, outputs 'value'
myObj.mykey = 'updated';  // "sets" value of the key, makes it 'updated'

So, in our proxy object we would create “traps” to intercept these operations and perform whatever functionality we might wish to accomplish. There are up to thirteen of these traps available. I’m not necessarily going to cover all these traps as not all of them are necessary for my simple examples that follow. Again, this depends on what you’re needing for the particular context of what you’re trying to create. Trust me, you can go a long way with just the basics.

To expand on our example above to create a proxy, we would do something like this:

const myObj = { mykey: 'value' }

const handler = {
  get: function (target, prop) {
    return target[prop];
  },
  set: function (target, prop, value) {
    target[prop] = value;
    return true;
  }
}

const proxy = new Proxy(myObj, handler);

console.log(proxy.mykey); // "gets" value of the key, outputs 'value'
proxy.mykey = 'updated';  // "sets" value of the key, makes it 'updated'

First we start with our standard object. Then we create a handler object that holds the handler functions, often called traps. These represent the operations that can be done on a traditional object which, in this case, are the get and set that just pass things along with no changes. After that, we create our proxy using the constructor with our target object and the handler object. At that point, we can reference the proxy object in getting and setting values which will be a proxy to the original target object, myObj.

Note the return true at the end of the set trap. That's intended to inform the proxy that setting the value should be considered successful. In situations where you wish to prevent a value from being set (think of a validation error), you would return false instead. In strict-mode code, that failed set also throws a TypeError.
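As a hedged sketch of that validation idea (the person object and its age key are invented for illustration), returning false from the trap rejects the write:

```javascript
// Only accept finite numbers; anything else is rejected by returning false.
const person = new Proxy({ age: 0 }, {
  set: function (target, prop, value) {
    if (!Number.isFinite(value)) {
      return false; // rejected: throws a TypeError in strict-mode code
    }
    target[prop] = value;
    return true;
  }
});

person.age = 30;
console.log(person.age); // → 30

(function () {
  'use strict';
  try {
    person.age = 'thirty'; // rejected by the trap
  } catch (err) {
    console.log(err instanceof TypeError); // → true
  }
})();

console.log(person.age); // still 30
```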

Now one thing to keep in mind with this pattern is that the original target object is still available. That means you could bypass the proxy and alter values of the object without the proxy. In my reading about using the Proxy object, I found useful patterns that can help with that.

let myObj = { mykey: 'value' }

const handler = {
  get: function (target, prop) {
    return target[prop];
  },
  set: function (target, prop, value) {
    target[prop] = value;
    return true;
  }
}

myObj = new Proxy(myObj, handler);

console.log(myObj.mykey); // "gets" value of the key, outputs 'value'
myObj.mykey = 'updated';  // "sets" value of the key, makes it 'updated'

In this pattern, we’re using the target object as the proxy object while referencing the target object within the proxy constructor. Yeah, that happened. This works, but I found it somewhat easy to get confused over what’s happening. So let’s create the target object inside the proxy constructor instead:

const handler = {
  get: function (target, prop) {
    return target[prop];
  },
  set: function (target, prop, value) {
    target[prop] = value;
    return true;
  }
}

const proxy = new Proxy({ mykey: 'value' }, handler);

console.log(proxy.mykey); // "gets" value of the key, outputs 'value'
proxy.mykey = 'updated';  // "sets" value of the key, makes it 'updated'

For that matter, we could create both the target and handler objects inside the constructor if we prefer:

const proxy = new Proxy({ mykey: 'value' }, {
  get: function (target, prop) {
    return target[prop];
  },
  set: function (target, prop, value) {
    target[prop] = value;
    return true;
  }
});

console.log(proxy.mykey); // "gets" value of the key, outputs 'value'
proxy.mykey = 'updated';  // "sets" value of the key, makes it 'updated'

In fact, this is the most common pattern I use in my examples below. Thankfully, there is flexibility in how to create a proxy object. Just use whatever pattern suits you.
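The same constructor pattern works for any of the other traps, too. For example, here is a hedged sketch of the has trap, which intercepts the in operator (the object and its keys are invented for illustration):

```javascript
const proxy = new Proxy({ secret: 1, visible: 2 }, {
  has: function (target, prop) {
    if (prop === 'secret') return false; // pretend this key doesn't exist
    return prop in target;
  }
});

console.log('secret' in proxy);  // → false
console.log('visible' in proxy); // → true
```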

The following are some examples covering usage of the JavaScript Proxy, from basic data validation up to updating form data with a fetch. Keep in mind these examples really do cover the basics of JavaScript Proxy; it can go deeper quickly if you wish. In some cases, they are just regular JavaScript code doing regular JavaScript things within the proxy object. Look at them as ways to extend some common JavaScript tasks with more control over data.

A simple example for a simple question

My first example covers what I’ve always felt was a rather simplistic and strange coding interview question: reverse a string. I’ve never been a fan and never ask it when conducting an interview. Being someone that likes to go against the grain in this kind of thing, I played with outside-the-box solutions. You know, just to throw it out there sometimes for fun and one of these solutions is a good bit of front end fun. It also makes for a simple example showing a proxy in use.

CodePen Embed Fallback

If you type into the input you will see whatever is typed is printed out below, but reversed. Obviously, any of the many ways to reverse a string could be used here. Yet, let’s go over my strange way to do the reversal.
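For reference, a conventional reversal could be a one-liner; here is a sketch (using spread so astral characters such as emoji survive the reversal):

```javascript
const reverseString = str => [...str].reverse().join('');

console.log(reverseString('hello')); // → 'olleh'
```

The demo takes a stranger, DOM-based route instead.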

const reverse = new Proxy(
  { value: '' },
  {
    set: function (target, prop, value) {
      target[prop] = value;
      document.querySelectorAll('[data-reverse]').forEach(item => {
        let el = document.createElement('div');
        el.innerHTML = '\u{202E}' + value;
        item.innerText = el.innerHTML;
      });
      return true;
    }
  }
)

document.querySelector('input').addEventListener('input', e => {
  reverse.value =;
});

First, we create our new proxy and the target object is a single key value that holds whatever is typed into the input. The get trap isn’t there since we would just need a simple pass-through as we don’t have any real functionality tied to it. There’s no need to do anything in that case. We’ll get to that later.

For the set trap we do have a small bit of functionality to perform. There is still a simple pass-through where the value is set to the value key in the target object like normal. Then there is a querySelectorAll that finds all elements with a data-reverse data attribute on the page. This allows us to target multiple elements on the page and update them all in one go. This gives us our framework-like binding action that everybody likes to see. This could also be updated to target inputs to allow for a proper two-way binding type of situation.

This is where my little fun oddball way of reversing a string kicks in. A div is created in memory, and then the innerHTML of that element is updated with a string. The first part of the string is a special Unicode character (U+202E, the right-to-left override) that reverses the rendering of everything after it. The innerText of the actual element on the page is then given the innerHTML of the div in memory. This runs each time something is entered into the input; therefore, all elements with the data-reverse attribute are updated.

Lastly, we set up an event listener on the input that sets the value key in our target object to the value of the input targeted by the event.

In the end, a very simple example of performing a side effect on the page’s DOM through setting a value to the object.

Live-formatting an input value

A common UI pattern is to format the value of an input into a more exact sequence than just a string of letters and numbers. An example of this is a telephone input. Sometimes it just looks and feels better if the phone number being typed actually looks like a phone number. The trick, though, is that when we format the input’s value, we probably still want an unformatted version of the data.

This is an easy task for a JavaScript Proxy.

CodePen Embed Fallback

As you type numbers into the input, they’re formatted into a standard U.S. phone number (e.g. (123) 456-7890). Notice, too, that the phone number is displayed in plain text underneath the input just like the reverse string example above. The button outputs both the formatted and unformatted versions of the data to the console.

So here’s the code for the proxy:

const phone = new Proxy(
  {
    _clean: '',
    number: '',
    get clean() {
      return this._clean;
    }
  },
  {
    get: function (target, prop) {
      if (!prop.startsWith('_')) {
        return target[prop];
      } else {
        return 'entry not found!'
      }
    },
    set: function (target, prop, value) {
      if (!prop.startsWith('_')) {
        target._clean = value.replace(/\D/g, '').substring(0, 10);
        const sections = {
          area: target._clean.substring(0, 3),
          prefix: target._clean.substring(3, 6),
          line: target._clean.substring(6, 10)
        }
        target.number =
          target._clean.length > 6 ? `(${sections.area}) ${sections.prefix}-${sections.line}` :
          target._clean.length > 3 ? `(${sections.area}) ${sections.prefix}` :
          target._clean.length > 0 ? `(${sections.area}` : '';
        document.querySelectorAll('[data-phone_number]').forEach(item => {
          if (item.tagName === 'INPUT') {
            item.value = target.number;
          } else {
            item.innerText = target.number;
          }
        });
        return true;
      } else {
        return false;
      }
    }
  }
);

There’s more code in this example, so let’s break it down. The first part is the target object that we are initializing inside the proxy itself. It has three things happening.

{
  _clean: '',
  number: '',
  get clean() {
    return this._clean;
  }
},

The first key, _clean, is our variable that holds the unformatted version of our data. It starts with an underscore, following the traditional naming pattern for “private” variables, since we would like to make it unavailable under normal circumstances. There will be more on this as we go.

The second key, number, simply holds the formatted phone number value.

The third "key" is a get function using the name clean. This returns the value of our private _clean variable. In this case, we’re simply returning the value, but this provides the opportunity to do other things with it if we wish. This is like a proxy getter for the get function of the proxy. It seems strange but it makes for an easy way to control our data. Depending on your specific needs, this might be a rather simplistic way to handle this situation. It works for our simple example here but there could be other steps to take.

Now for the get trap of the proxy.

get: function (target, prop) {
  if (!prop.startsWith('_')) {
    return target[prop];
  } else {
    return 'entry not found!';
  }
},

First, we check the incoming prop, or object key, to determine whether it starts with an underscore. If it does not, we simply return it. If it does, we return a string saying the entry was not found. This type of negative return could be handled in different ways, depending on what is needed: return a string, return an error, or run code with different side effects. It all depends on the situation.
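As a quick check of that behavior, here is a stripped-down sketch of the same get trap, with the DOM work omitted and the values hard-coded for the demo:

```javascript
// Minimal sketch of the get trap's "privacy" check; DOM updates omitted.
const phone = new Proxy(
  {
    _clean: '1234567890',
    number: '(123) 456-7890',
    get clean() {
      return this._clean;
    }
  },
  {
    get: function (target, prop) {
      // Underscore-prefixed keys are treated as private.
      if (typeof prop === 'string' && prop.startsWith('_')) {
        return 'entry not found!';
      }
      return target[prop];
    }
  }
);

console.log(phone._clean); // 'entry not found!'
console.log(phone.clean);  // '1234567890'
console.log(phone.number); // '(123) 456-7890'
```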

One thing to note in my example is that I’m not handling other proxy traps that may come into play with what would be considered a private variable in the proxy. For a more complete protection of this data, you would have to consider other traps, such as defineProperty, deleteProperty, or ownKeys — typically anything about manipulating or referring to object keys. Whether you go this far could depend on who would be making use of the proxy. If it’s for you, then you know how you are using the proxy. But if it’s someone else, you may want to consider locking things down as much as possible.

Now for where most of the magic happens for this example — the set trap:

set: function (target, prop, value) {
  if (!prop.startsWith('_')) {
    target._clean = value.replace(/\D/g, '').substring(0, 10);
    const sections = {
      area: target._clean.substring(0, 3),
      prefix: target._clean.substring(3, 6),
      line: target._clean.substring(6, 10)
    };
    target.number = target._clean.length > 6 ?
      `(${sections.area}) ${sections.prefix}-${sections.line}` :
      target._clean.length > 3 ?
      `(${sections.area}) ${sections.prefix}` :
      target._clean.length > 0 ?
      `(${sections.area}` : '';
    document.querySelectorAll('[data-phone_number]').forEach(item => {
      if (item.tagName === 'INPUT') {
        item.value = target.number;
      } else {
        item.innerText = target.number;
      }
    });
    return true;
  } else {
    return false;
  }
}

First, the same check against the private variable we have in the proxy. I don’t really test for other types of props, but you might consider doing that here. I’m assuming only that the number key in the proxy target object will be adjusted.

The incoming value, the input’s value, is stripped of everything but number characters and saved to the _clean key. This value is then used throughout to rebuild into the formatted value. Basically, every time you type, the entire string is being rebuilt into the expected format, live. The substring method keeps the number locked down to ten digits.
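That rebuild logic can be pulled out into a pure function to see it in isolation (a sketch for illustration, not part of the original demo):

```javascript
// The rebuild-on-every-keystroke logic as a pure function.
function formatPhone(raw) {
  // Strip everything but digits and cap at ten characters.
  const clean = raw.replace(/\D/g, '').substring(0, 10);
  const area = clean.substring(0, 3);
  const prefix = clean.substring(3, 6);
  const line = clean.substring(6, 10);
  return clean.length > 6 ? `(${area}) ${prefix}-${line}`
       : clean.length > 3 ? `(${area}) ${prefix}`
       : clean.length > 0 ? `(${area}`
       : '';
}

console.log(formatPhone('123'));          // '(123'
console.log(formatPhone('123456'));       // '(123) 456'
console.log(formatPhone('123-456-7890')); // '(123) 456-7890'
```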

Then a sections object is created to hold the different sections of our phone number based on the breakdown of a U.S. phone number. As the _clean variable increases in length, we update number to a formatting pattern we wish to see at that point in time.

A querySelectorAll looks for any element that has the data-phone_number data attribute and runs them through a forEach loop. If the element is an input, its value is updated; for anything else, the innerText is updated. This is how the text appears underneath the input. If we were to place another input element with that data attribute, we would see its value updated in real time. This is a way to create one-way or two-way binding, depending on the requirements.

In the end, true is returned to let the proxy know everything went well. If the incoming prop, or key, starts with an underscore, then false is returned instead.

Finally, the event listeners that make this work:

document.querySelectorAll('input[data-phone_number]').forEach(item => {
  item.addEventListener('input', (e) => {
    phone.number =;
  });
});

document.querySelector('#get_data').addEventListener('click', (e) => {
  console.log(phone.number); // (123) 456-7890
  console.log(phone.clean); // 1234567890
});

The first set finds all the inputs with our specific data attribute and adds an event listener to them. For each input event, the proxy’s number key value is updated with the current input’s value. Since we’re formatting the value of the input that gets sent along each time, we strip out any characters that are not numbers.

The second set finds the button that outputs both sets of data, as requested, to the console. This shows how we could write code that requests the data that is needed at any time. Hopefully it is clear that phone.clean is referring to our get proxy function that’s in the target object that returns the _clean variable in the object. Notice that it isn’t invoked as a function, like phone.clean(), since it behaves as a get proxy in our proxy.

Storing numbers in an array

Instead of an object, you could use an array as the target “object” in the proxy. Since it would be an array, there are some things to consider. Features of an array, such as push(), are treated in certain ways by the setter trap of the proxy. Plus, creating a custom function inside the target object doesn’t really work in this case. Yet, there are some useful things to be done with an array as the target.
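For example, here is a tiny sketch showing how a single push() actually hits the set trap twice, once for the new index and once for the length property:

```javascript
// Sketch: logging what the set trap sees when push() is called.
const log = [];
const watched = new Proxy([], {
  set: function (target, prop, value) {
    log.push(`${String(prop)} = ${value}`);
    target[prop] = value;
    return true;
  }
});

watched.push(7);
console.log(log); // ['0 = 7', 'length = 1']
```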

Sure, storing numbers in an array isn’t a new thing. Obviously. Yet I’m going to attach a few rules to this number-storing array, such as no repeating values and allowing only numbers. I’ll also provide some output options, such as sort, sum, average, and clearing the values. Then we’ll update a small user interface that controls it all.


Here’s the proxy object:

const numbers = new Proxy([], {
  get: function (target, prop) {
    message.classList.remove('error');
    if (prop === 'sort') return [...target].sort((a, b) => a - b);
    if (prop === 'sum') return [...target].reduce((a, b) => a + b);
    if (prop === 'average') return [...target].reduce((a, b) => a + b) / target.length;
    if (prop === 'clear') {
      message.innerText = `${target.length} number${target.length === 1 ? '' : 's'} cleared!`;
      target.splice(0, target.length);
      collection.innerText = target;
    }
    return target[prop];
  },
  set: function (target, prop, value) {
    if (prop === 'length') return true;
    dataInput.value = '';
    message.classList.remove('error');
    if (!Number.isInteger(value)) {
      console.error('Data provided is not a number!');
      message.innerText = 'Data provided is not a number!';
      message.classList.add('error');
      return false;
    }
    if (target.includes(value)) {
      console.error(`Number ${value} has already been submitted!`);
      message.innerText = `Number ${value} has already been submitted!`;
      message.classList.add('error');
      return false;
    }
    target[prop] = value;
    collection.innerText = target;
    message.innerText = `Number ${value} added!`;
    return true;
  }
});

With this example, I’ll start with the setter trap.

The first thing to do is check whether the length property is being set on the array. We just return true so that it happens the normal way. We could always put code in place here to react to the length being set, if we needed to.

The next two lines of code refer to two HTML elements on the page stored with a querySelector. The dataInput is the input element and we wish to clear it on every entry. The message is the element that holds responses to changes to the array. Since it has the concept of an error state, we make sure it is not in that state on every entry.

The first if checks to see if the entry is in fact a number. If it is not, then it does several things. It emits a console error stating the problem. The message element gets the same statement. Then the message is placed into an error state via a CSS class. Finally, it returns false which also causes the proxy to emit its own error to the console.

The second if checks to see if the entry already exists within the array; remember we do not want repeats. If there is a repeat, then the same messaging happens as in the first if. The messaging is a bit different as it’s a template literal so we can see the repeated value.

The last section assumes everything has gone well and things can proceed. The value is set as usual and then we update the collection list. The collection is referring to another element on the page that shows us the current collection of numbers in the array. Again, the message is updated with the entry that was added. Finally, we return true to let the proxy know all is well.
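Stripped of the DOM messaging, the validation rules boil down to this sketch. Note that push() throws a TypeError when the trap returns false, since the built-in runs in strict mode:

```javascript
// DOM-free sketch of the rules: only integers, no repeats.
const numbers = new Proxy([], {
  set: function (target, prop, value) {
    if (prop === 'length') return true;          // let array bookkeeping through
    if (!Number.isInteger(value)) return false;  // reject non-numbers
    if (target.includes(value)) return false;    // reject repeats
    target[prop] = value;
    return true;
  }
});

numbers.push(5);                              // accepted
try { numbers.push(5); } catch (e) { }        // repeat rejected, TypeError caught
try { numbers.push('nope'); } catch (e) { }   // non-number rejected, TypeError caught
console.log([...numbers]); // [5]
```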

Now, the get trap is a bit different than the previous examples.

get: function (target, prop) {
  message.classList.remove('error');
  if (prop === 'sort') return [...target].sort((a, b) => a - b);
  if (prop === 'sum') return [...target].reduce((a, b) => a + b);
  if (prop === 'average') return [...target].reduce((a, b) => a + b) / target.length;
  if (prop === 'clear') {
    message.innerText = `${target.length} number${target.length === 1 ? '' : 's'} cleared!`;
    target.splice(0, target.length);
    collection.innerText = target;
  }
  return target[prop];
},

What’s going on here is taking advantage of “props” that are not normal array methods; each one gets passed along to the get trap as the prop. Take, for instance, the first one, which is triggered by this event listener:

dataSort.addEventListener('click', () => {
  message.innerText = numbers.sort;
});

So when the sort button is clicked, the message element’s innerText is updated with whatever numbers.sort returns. It acts as a getter that the proxy intercepts and returns something other than typical array-related results.

After removing the potential error state of the message element, we then figure out if something other than a standard array get operation is expected to happen. Each one returns a manipulation of the original array data without altering the original array. This is done by using the spread operator on the target to create a new array and then standard array methods are used. Each name should suggest what it does: sort, sum, average, and clear. Well, OK, clear isn’t exactly a standard array method, but it sounds good. Since the entries can be in any order, we can have it give us the sorted list or do math functions on the entries. Clearing simply wipes out the array as you might expect.
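Here is that derived-value pattern in a condensed, DOM-free sketch; note that the reads never mutate the original array:

```javascript
// Sketch: derived reads via the get trap, using spread to avoid mutation.
const stats = new Proxy([3, 1, 2], {
  get: function (target, prop) {
    if (prop === 'sort') return [...target].sort((a, b) => a - b);
    if (prop === 'sum') return [...target].reduce((a, b) => a + b);
    if (prop === 'average') return [...target].reduce((a, b) => a + b) / target.length;
    return target[prop];
  }
});

console.log(stats.sort);    // [1, 2, 3]
console.log(stats.sum);     // 6
console.log(stats.average); // 2
console.log(stats[0]);      // 3 (original order untouched)
```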

Here are the other event listeners used for the buttons:

dataForm.addEventListener('submit', (e) => {
  e.preventDefault();
  numbers.push(Number.parseInt(dataInput.value));
});

dataSubmit.addEventListener('click', () => {
  numbers.push(Number.parseInt(dataInput.value));
});

dataSort.addEventListener('click', () => {
  message.innerText = numbers.sort;
});

dataSum.addEventListener('click', () => {
  message.innerText = numbers.sum;
});

dataAverage.addEventListener('click', () => {
  message.innerText = numbers.average;
});

dataClear.addEventListener('click', () => {
  numbers.clear;
});

There are many ways we could extend and add features to an array. I’ve seen examples of an array that allows selecting an entry with a negative index that counts from the end, finding an entry in an array of objects based on a property value within an object, or returning a message on trying to get a nonexistent value within the array instead of undefined. There are lots of ideas that can be leveraged and explored with a proxy on an array.
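One of those ideas, negative indexes that count from the end, takes only a few lines with a get trap (a sketch, not from the demo above):

```javascript
// Sketch: list[-1] reads from the end of the array.
const list = new Proxy(['a', 'b', 'c'], {
  get: function (target, prop) {
    // Property keys arrive as strings (or symbols), so convert carefully.
    const index = typeof prop === 'string' ? Number(prop) : NaN;
    if (Number.isInteger(index) && index < 0) {
      return target[target.length + index];
    }
    return target[prop];
  }
});

console.log(list[-1]); // 'c'
console.log(list[-3]); // 'a'
console.log(list[1]);  // 'b'
```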

Interactive address form

An address form is a fairly standard thing to have on a web page. Let’s add a bit of interactivity to it for fun (and non-standard) confirmation. It can also act as a data collection of the values of the form within a single object that can be requested on demand.


Here’s the proxy object:

const model = new Proxy(
  {
    name: '',
    address1: '',
    address2: '',
    city: '',
    state: '',
    zip: '',
    getData() {
      return {
        name: || 'no entry!',
        address1: this.address1 || 'no entry!',
        address2: this.address2 || 'no entry!',
        city: || 'no entry!',
        state: this.state || 'no entry!',
        zip: || 'no entry!'
      };
    }
  },
  {
    get: function (target, prop) {
      return target[prop];
    },
    set: function (target, prop, value) {
      target[prop] = value;

      if (prop === 'zip' && value.length === 5) {
        fetch(`${value}`)
          .then(response => response.json())
          .then(data => {
   = data.places[0]['place name'];
            document.querySelector('[data-model="city"]').value =;
            model.state = data.places[0]['state abbreviation'];
            document.querySelector('[data-model="state"]').value = target.state;
          });
      }

      document.querySelectorAll(`[data-model="${prop}"]`).forEach(item => {
        if (item.tagName === 'INPUT' || item.tagName === 'SELECT') {
          item.value = value;
        } else {
          item.innerText = value;
        }
      });

      return true;
    }
  }
);

The target object is quite simple: an entry for each input in the form. The getData function returns the object, but if a property holds an empty string, its value is changed to “no entry!” This is optional, but the function gives a cleaner object than what we would get by just reading the state of the proxy object.

The getter function simply passes things along as usual. You could probably do without that, but I like to include it for completeness.

The setter function sets the value to the prop. The if, however, checks whether the prop being set happens to be the zip code. If it is, we check whether the length of the value is five. When the evaluation is true, we perform a fetch that hits an address-finder API using the zip code. Any values that are returned are inserted into the object properties, the city input, and the state select element. This is an example of a handy shortcut to let people skip having to type those values. The values can be changed manually, if needed.

For the next section, let’s look at an example of an input element:

<input class="in__input" id="name" data-model="name" placeholder="name" />

The proxy has a querySelectorAll that looks for any elements that have a matching data attribute. This is the same as the reverse string example we saw earlier. If it finds a match, it updates either the input’s value or element’s innerText. This is how the rotated card is updated in real-time to show what the completed address will look like.

One thing to note is the data-model attribute on the inputs. The value of that data attribute informs the proxy which key to latch onto during its operations. The proxy finds the elements involved based on that key. The event listener does much the same by letting the proxy know which key is in play. Here’s what that looks like:

document.querySelector('main').addEventListener('input', (e) => {
  model[] =;
});

So, all the inputs within the main element are targeted and, when the input event is fired, the proxy is updated. The value of the data-model attribute is used to determine what key to target in the proxy. In effect, we have a model-like system in play. Think of ways such a thing could be leveraged even further.

As for the “get data” button? It’s a simple console log of the getData function…

getDataBtn.addEventListener('click', () => {
  console.log(model.getData());
});

This was a fun example to build and use to explore the concept. This is the kind of example that gets me thinking about what I could build with the JavaScript Proxy. Sometimes, you just want a small widget that has some data collection/protection and ability to manipulate the DOM just by interacting with data. Yes, you could go with Vue or React, but sometimes even they can be too much for such a simple thing.

That’s all, for now

“For now” meaning that could depend on each of you and whether you’ll dig a bit deeper into the JavaScript Proxy. Like I said at the beginning of this article, I only cover the basics of this feature. There is a great deal more it can offer, and it can go bigger than the examples I’ve provided. In some cases it could provide the basis of a small helper for a niche solution. It’s obvious that the examples could easily be recreated with basic functions providing much the same functionality. Even most of my example code is regular JavaScript mixed with the proxy object.

The point though is to offer examples of using the proxy to show how one could react to interactions to data — even control how to react to those interactions to protect data, validate data, manipulate the DOM, and fetch new data — all based on someone trying to save or get the data. In the long run, this can be very powerful and allow for simple apps that may not warrant a larger library or framework.

So, if you’re a front-end developer that focuses more on the UI side of things, like myself, you can explore a bit of the basics to see if there are smaller projects that could benefit from JavaScript Proxy. If you’re more of a JavaScript developer, then you can start digging deeper into the proxy for larger projects. Maybe a new framework or library?

Just a thought…

The post An Intro to JavaScript Proxy appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

On the `dl`

Css Tricks - Tue, 09/14/2021 - 12:37pm

Blogging about HTML elements¹? *chefs kiss*

Here’s Ben Myers on the (aptly described) “underrated” Definition List (<dl>) element in HTML:

You might have also seen lists of name–value pairs to describe lodging amenities, or to list out individual charges in your monthly rent, or in glossaries of technical terms. Each of these is a candidate to be represented with the <dl> element.

(Chart from the original post: Definition List vs. coolness factor.)

Ben says he’s satisfied with HTML semantics, even when the benefits of using them are theoretical. But in the case of <dl>, there are at least some tangible screen reader benefits, like the fact that the number of items in the list is announced, as expected (for the most part), like ordered and unordered lists. Although that makes you curious what number it announces, doesn’t it? Is it the number of children, regardless of type? Just the <dt> elements?

Speaking of children, this might look weird:

<dl>
  <div>
    <dt>Title</dt>
    <dd>Designing with Web Standards</dd>
  </div>
  <div>
    <dt>Author</dt>
    <dd>Jeffrey Zeldman</dd>
    <dd>Ethan Marcotte</dd>
  </div>
  <div>
    <dt>Publisher</dt>
    <dd>New Riders Pub; 3rd edition (October 19, 2009)</dd>
  </div>
</dl>

But those intermediary <div>s that group things together are cool now. They’re awfully handy when you want to style the groupings as “rows” or do something like add a border below each group. No <div>s for ordered or unordered lists though, just definition lists. Lucky sacks. What’s next? Is <hgroup> gonna make a comeback?

  1. I remember Jen Kramer did 30 days of HTML not long ago, and that was fun.

The post On the `dl` appeared first on CSS-Tricks.

Jamstack Conf 2021

Css Tricks - Tue, 09/14/2021 - 4:32am

(This is a sponsored post.)

What? Jamstack Conf! It’s the best! Learn what’s happening and what’s next for this hot ecosystem.

When? October 6–7, 2021

Where? Virtual / online.

How much? It’s free! There are workshops as well though, at $100 a seat.

Who? You! Oh you mean speakers? Netlify’s CEO Matt Biilmann gives the opening talk and I’d expect some zingers in there (I’ve been surprised at stuff in this talk three years in a row now). Oh look, Ben Holmes is there — remember me mentioning Slinkity the other day? And Alex Riviere — remember his CSS-Trickz that I riffed off with Astro, which Netlify is now supporting. Those are just some names I recognize. I’m equally excited about hearing from people I don’t know (yet!) and their interesting topics.

Why? Because conferences focused around important of-the-time technologies are the best. And because you can make a cool badge.

Thanks for the support Netlify!

Ooooo looks like that interesting image situation Zach was blogging about the other day is the header for this very conference.

Direct Link to ArticlePermalink

The post Jamstack Conf 2021 appeared first on CSS-Tricks.

Developers and Designers Work on a Single Source of Truth With UXPin

Css Tricks - Mon, 09/13/2021 - 9:21am

(This is a sponsored post.)

There is a conversation that has been percolating for as long as I’ve been in the web design and development industry. It’s centered around the conflict between design tools and development tools. The final product of web design is often a mockup. The old joke was that web developers make websites and web designers make paintings of websites. That disconnect is a source of immense friction. Which is the source of truth?

What if there really could be a single source of truth. What if the design tool works on the same exact code as the production website? The latest chapter in this epic conversation is UXPin.

Let’s set up the facts so you can see this all play out.

UXPin is an in-browser & code-based design tool.

UXPin is a powerful design tool with all the features you’d expect, particularly focused on digital screen-based design and advanced prototyping.

The fact that it is code-based is extra great here. Designing websites with all the visual components actually rooted in code brings the design much closer to the real end-product. What you design won’t only look like a website or app but also work like it. For example, an input field is not a static box with an outline, but it’ll give you the real experience of filling it with text.

Code-based design already provides all the specs for each element, like with this card component: exact colors (in the right formats) as well as exact pixel dimensions. In some cases, even the exact code of the UI component can be pulled for your developer.

This is laid out nicely by Ania Kubów in a video about UXPin.

Over a decade ago, Jason Santa Maria thought a lot about what a next-gen design tool would look like. Could we just use the browser directly?

I don’t think the browser is enough. A web designer jumping into the browser before tackling the creative and messaging problems is akin to an architect hammering pieces of wood together and then measuring afterwards. The imaginative process is cut short by the tools at hand; and it’s that imagination—or spark—at the beginning of a design that lays the path for everything that follows.

Jason Santa Maria, “A Real Web Design Application”

Perhaps not the browser directly, but a code-based tool that makes UI work like your website or app could be the best of both worlds:

Webpages are living, dynamic spaces where the smallest interaction from a visitor can change the scope of an entire site. […] Because we’re not dealing with a static medium, we need to be able to design for interactions and the shifting landscapes of a webpage […] an application needs to see elements rather than blocks of color or text. Photoshop, Illustrator, and Fireworks have some low-level functionality in this regard, but the need for more dynamic and non-destructive handling is clear.

You can work on your own React components in UXPin.

This is where the single source of truth magic can happen. It’s one thing if a design tool can output a React (or any other framework) component. That’s a neat trick. But it’s likely to be a one-way trip. Components in real-world projects are full of other things that aren’t entirely the domain of design. Perhaps a component uses a hook to return the current user’s permissions and disable a button if they don’t have access. The disabled button has an element of design to it, but most of that code does not.

It’s impractical to have a design tool that can’t respect other code in that component and essentially just leave it alone. Essentially, the design tool is not that useful if it exports components as code but doesn’t allow designers to import those UI components in the first place.

This is where UXPin Merge comes in.

Now, fair is fair, this is going to take a little work to set up. Might just be a couple of hours, or it might take a few weeks for a complete design system. UXPin, for now, only works with React and uses a webpack configuration to integrate it.

Once you’ve gotten it going, the components you use in UXPin are very literally the components you use to build your production website.

It’s pretty impressive really, to see a design tool digest pre-built components and allow them to be used on an entirely new canvas for prototyping.

UXPin helps you with implementing this in your project.

As it should, it’s likely to influence how you build components.

Components tend to have props, and props control things like design and content inside. UXPin gives you a UI for the props, meaning you have total control over the component.

<LineChart
  barColor="green"
  height="200"
  width="500"
  showXAxis="false"
  showYAxis="true"
  data={[ ... ]}
/>

Knowing that, you might give yourself a prop interface for your components that provides you with lots of design control. For example, integrating theme switching.
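As a hypothetical sketch (none of these names come from UXPin or any real design system), a themeable prop interface might look like this in plain JavaScript:

```javascript
// Hypothetical prop interface: theme is just another prop a design tool can flip.
const themes = {
  light: { background: '#ffffff', color: '#111111' },
  dark: { background: '#111111', color: '#eeeeee' }
};

// Returns the resolved props/styles a Button-like component would render with.
function buttonProps({ label = 'Submit', theme = 'light' } = {}) {
  return {
    label,
    style: { padding: '0.5rem 1rem', ...themes[theme] }
  };
}

console.log(buttonProps({ theme: 'dark' }).style.background); // '#111111'
console.log(buttonProps().style.background);                  // '#ffffff'
```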

This is all even faster with Storybook.

Another awfully popular tool in JavaScript-components-land to test and build your components is Storybook. It’s not a design tool like UXPin—it’s more like a zoo for your components. You might already have it set up, or you might find value in using Storybook as well.

The great news? UXPin Merge works together awesomely with Storybook. It makes integration super quick and easy. Plus then it supports any framework, like Angular, Svelte, Vue, etc—in addition to React.

Look how fast:

UXPin CEO Marcin Treder had a strong vision:

What if designers could use the very same components used by engineers and they’re all stored in a shared design system (with accurate documentation and tests)? Many of the frustrating and expensive misunderstandings between designers and engineers would stop happening.

And a plan:

  1. Connect to Git repo or Storybook library.
  2. Import components from there to UXPin design tool.
  3. All the changes in the repo will be synced automatically in UXPin: it watches for any changes to the repo and syncs those changes in the visual editor.
  4. Let designers design and deliver accurate specs and fully functional design to developers.

And that’s what they’ve pulled off here.

Try UXPin Merge

The post Developers and Designers Work on a Single Source of Truth With UXPin appeared first on CSS-Tricks.

Social Image Generator + Jetpack

Css Tricks - Mon, 09/13/2021 - 4:20am

I feel like my quest to make sure this site had pretty sweet (and automatically-generated) social media images (e.g. Open Graph) came to a close once I found Social Image Generator.

The trajectory there was that I ended up talking about it far too much on ShopTalk, to the point it became a common topic in our Discord (join via Patreon). Andy Bell pointed me at Daniel Post’s Social Image Generator, and I immediately bought and installed it. I heard from Daniel over Twitter, and we ended up having long conversations about the plugin and my desires for it. Ultimately, Daniel helped me code up some custom designs and write logic to create different social media image designs depending on the information it had (for example, if we provide quote text, it uses a special design for that).

As you likely know, Automattic has been an awesome and long time sponsor for this site, and we often promote Jetpack as a part of that (as I’m a heavy user of it, it’s easy to talk about). One of Jetpack’s many features is helping out with social media. (I did a video on how we do it.) So, it occurred to me… maybe this would be a sweet feature for Jetpack. I mentioned it to the Automattic team and they were into the idea of talking to Daniel. I introduced them back in May, and now it’s September and… Jetpack Acquires WordPress Plugin Social Image Generator

“When I initially saw Social Image Generator, the functionality looked like an ideal fit with our existing social media tools,” said James Grierson, General Manager of Jetpack. “I look forward to the future functionality and user experience improvements that will come out of this acquisition. The goal of our social product is to help content creators expand their audience through increased distribution and engagement. Social Image Generator will be a key component of helping us deliver this to our customers.”

Daniel will also be joining Jetpack to continue developing Social Image Generator and integrating it with Jetpack’s social media features.

Rob Pugh

Heck yeah, congrats Daniel. My dream for this thing is that, eventually, we could start building social media images via regular WordPress PHP templates. The trick is that you need something to screenshot them, like Puppeteer or Playwright. An average WordPress install doesn’t have that available, but because Jetpack is fundamentally a service that leverages the great WordPress cloud to do above-and-beyond things, this is in the realm of possibility.

WP Tavern also covered the news:

Automattic is always on the prowl for companies that are doing something interesting in the WordPress ecosystem. The Social Image Generator plugin expertly captured a new niche with an interface that feels like a natural part of WordPress and impressed our chief plugin critic, Justin Tadlock, in a recent review.

“Automattic approached me and let me know they were fans of my plugin,” Post said. “And then we started talking to see what it would be like to work together. We were actually introduced by Chris Coyier from CSS-Tricks, who uses both our products.”

Sarah Gooding

Just had to double-toot my own horn there, you understand.

The post Social Image Generator + Jetpack appeared first on CSS-Tricks.

Improve Largest Contentful Paint (LCP) on Your Website With Ease

Css Tricks - Thu, 09/09/2021 - 9:43am

(This is a sponsored post.)

Optimizing the user experience you offer on your website is essential for the success of any online business. Google does use different user experience-related metrics to rank web pages for SEO and has continued to provide multiple tools to measure and improve web performance.

In its recent attempt to simplify the measurement and understanding of what qualifies as a good user experience, Google standardized the page’s user experience metrics.

These standardized metrics are called Core Web Vitals and help evaluate the real-world user experience on your web page.

Largest Contentful Paint or LCP is one of the Core Web Vitals metrics, which measures when the largest content element in the viewport becomes visible. While other metrics like TTFB and First Contentful Paint also help measure the page experience, they do not represent when the page has become “meaningful” for the user.

Usually, unless the largest element on the page becomes completely visible, the page may not provide much context for the user. LCP is, therefore, more representative of the user’s expectations. As a Core Web Vitals metric, LCP accounts for 25% of the Performance Score, making it one of the most important metrics to optimize.

Checking your LCP time

As per Google, the types of elements considered for Largest Contentful Paint are:

  • <img> elements
  • <image> elements inside an <svg> element
  • <video> elements (the poster image is used)
  • An element with a background image loaded via the url() function (as opposed to a CSS gradient)
  • Block-level elements containing text nodes or other inline-level text element children.

Now, there are multiple ways to measure the LCP of your page.

The easiest ways to measure it are PageSpeed Insights, Lighthouse, Search Console (Core Web Vitals Report), and the Chrome User Experience Report.

For example, Google PageSpeed Insights in its report indicates the element considered for calculating the LCP.
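For field measurement in your own pages, the standard PerformanceObserver API reports LCP candidates as they render. This sketch is guarded so it is simply a no-op outside a browser:

```javascript
// Sketch: observing LCP candidates in the browser (no-op elsewhere).
function observeLCP(report) {
  if (typeof PerformanceObserver === 'undefined') return false;
  new PerformanceObserver((entryList) => {
    const entries = entryList.getEntries();
    const latest = entries[entries.length - 1]; // most recent LCP candidate
    report(latest.startTime, latest.element);
  }).observe({ type: 'largest-contentful-paint', buffered: true });
  return true;
}

observeLCP((time, element) => {
  console.log(`LCP candidate at ${Math.round(time)}ms:`, element);
});
```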

What is a good LCP time?

To provide a good user experience, you should strive to have a Largest Contentful Paint of 2.5 seconds or less on your website. A majority of your page loads should be happening under this threshold.

Now that we know what LCP is and what our target should be, let’s look at ways to improve LCP on our website.

How to optimize Largest Contentful Paint (LCP)

The underlying principle of reducing LCP in all of the techniques mentioned below is to reduce the data downloaded on the user’s device and reduce the time it takes to send and execute that content.

1. Optimize your images

On most websites, the above-the-fold content usually contains a large image which gets considered for LCP. It could either be a hero image, a banner, or a carousel. It is, therefore, crucial that you optimize these images for a better LCP.

To optimize your images, you should use a third-party image CDN like ImageKit. The advantage of using a third-party image CDN is that you can focus on your actual business and leave image optimization to the image CDN.

The image CDN would stay at the edge of technology evolution, and you always get the best possible features with minimum ongoing investment.

ImageKit is a complete real-time image CDN that integrates with any existing cloud storage like AWS S3, Azure, Google Cloud Storage, etc. It even comes with its integrated image storage and manager called the Media Library.

Here is how ImageKit can help you improve your LCP score.

1. Deliver your images in lighter formats

ImageKit detects if the user’s browser supports modern lighter formats like WebP or AVIF and automatically delivers the image in the lightest possible format in real-time. Formats like WebP are over 30% lighter compared to their JPEG equivalents.

2. Automatically compress your images

Beyond converting the image to the correct format, ImageKit also compresses your image to a smaller size. In doing so, it balances the image’s visual quality and the output size.

You get the option to alter the compression level (or quality) in real-time by just changing a URL parameter, thereby balancing your business requirements of visual quality and load time.

3. Provide real-time transformations for responsive images

Google uses mobile-first indexing for almost all websites. It is therefore essential to optimize LCP for mobile even more than for desktop. Every image needs to be scaled down as per the layout’s requirements.

For example, you would need the image in a smaller size on the product listing page and a larger size on the product detail page. This resizing ensures that you are not sending any additional bytes than what is required for that particular page.

ImageKit allows you to transform responsive images in real-time just by adding the corresponding transformation in the image URL. For example, the following image is resized to width 200px and height 300px by adding the height and width transformation parameters in its URL.
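As a sketch, building such a URL is just string composition. The endpoint and file name below are placeholders, while the `tr:w-…,h-…` path segment follows ImageKit's real-time transformation syntax:

```javascript
// Build an ImageKit URL with width/height transformations added as a
// path segment. The endpoint and image path are placeholder values.
function imagekitResize(endpoint, imagePath, width, height) {
  return `${endpoint}/tr:w-${width},h-${height}/${imagePath}`;
}

const url = imagekitResize('https://ik.imagekit.io/demo', 'default-image.jpg', 200, 300);
console.log(url); // https://ik.imagekit.io/demo/tr:w-200,h-300/default-image.jpg
```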

4. Cache images and improve delivery time

Image CDNs use a global Content Delivery Network (CDN) to deliver the images. Using a CDN ensures that images load from a location closer to the user instead of your server, which could be halfway across the globe.

ImageKit, for example, uses AWS CloudFront as its CDN, which has over 220 delivery nodes globally. A vast majority of the images get loaded in less than 50ms. Additionally, it uses the proper caching directives to cache the images on the user’s device, CDN nodes, and even its processing network for a faster load time.

This helps to improve LCP on your website.

2. Preload critical resources

There are certain cases where the browser may not prioritize loading a visually important resource that impacts LCP. For example, a banner image above the fold could be specified as a background image inside a CSS file. Since the browser would never know about this image until the CSS file is downloaded and parsed along with the DOM tree, it will not prioritize loading it.

For such resources, you can preload them by adding a <link> tag with a rel="preload" attribute to the head section of your HTML document.

<!-- Example of preloading -->
<link rel="preload" href="banner_image.jpg" as="image" />

While you can preload multiple resources in a document, you should always restrict it to above-the-fold images or videos, page-wide font files, or critical CSS and JS files.

3. Reduce server response times

If your server takes a long time to respond to a request, the time it takes to render the page on the screen also goes up. It therefore negatively affects every page speed metric, including LCP. To improve your server response times, here is what you should do.

1. Analyze and optimize your servers

A lot of computation, DB queries, and page construction happens on the server. You should analyze the requests going to your servers and identify the possible bottlenecks for responding to the requests. It could be a DB query slowing things down or the building of the page on your server.

You can apply best practices like caching of DB responses, pre-rendering of pages, amongst others, to reduce the time it takes for your server to respond to requests.

Of course, if the above does not improve the response time, you might need to increase your server capacity to handle the number of requests coming in.

2. Use a Content Delivery Network

We have already seen above that using an image CDN like ImageKit improves the loading time for your images. Your users get the content delivered from a CDN node close to their location in milliseconds.

You should extend the same to other content on your website. Using a CDN for your static content like JS, CSS, and font files will significantly speed up their load time. ImageKit does support the delivery of static content through its systems.

You can also try to use a CDN for your HTML and APIs to cache those responses on the CDN nodes. Given the dynamic nature of such content, using a CDN for HTML or APIs can be a lot more complex than using a CDN for static content.

3. Preconnect to third-party origins

If you use third-party domains to deliver critical above-the-fold content like JS, CSS, or images, then you would benefit by indicating to the browser that a connection to that third-party domain needs to be made as soon as possible. This is done using the rel="preconnect" attribute of the <link> tag.

<link rel="preconnect" href="https://images.example.com" />

With preconnect in place, the browser can save the domain connection time when it downloads the actual resource later.

Subdomains of your main website domain are also third-party domains in this context.

You can also use the dns-prefetch as a fallback in browsers that don’t support preconnect. This directive instructs the browser to complete the DNS resolution to the third-party domain even if it cannot establish a proper connection.
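Putting both hints together might look like this (the domain is a placeholder):

```html
<!-- Preconnect to a third-party origin, with dns-prefetch as a
     fallback for browsers that do not support preconnect -->
<link rel="preconnect" href="https://images.example.com" crossorigin>
<link rel="dns-prefetch" href="https://images.example.com">
```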

4. Serve content cache-first using a Service Worker

Service workers can intercept requests originating from the user’s browser and serve cached responses for them. This allows us to cache static assets and HTML responses on the user’s device and serve them without going to the network.

While the service worker cache serves the same purpose as the HTTP or browser cache, it offers fine-grained control and can work even if the user is offline. You can also use service workers to serve precached content from the cache to users on slow network speeds, thereby bringing down LCP time.

5. Compress text files

Any text-based data you load on your webpage should be compressed when transferred over the network using a compression algorithm like gzip or Brotli. SVGs, JSONs, API responses, JS and CSS files, and your main page’s HTML are good candidates for compression using these algorithms. This compression significantly reduces the amount of data that will get downloaded on page load, therefore bringing down the LCP.

4. Remove render-blocking resources

When the browser receives the HTML page from your server, it parses the DOM tree. If there is any external stylesheet or JS file in the DOM, the browser has to pause to fetch and process it before moving ahead with the parsing of the remaining DOM tree.

These JS and CSS files are called render-blocking resources and delay the LCP time. Here are some ways to reduce the blocking time for JS and CSS files:

1. Do not load unnecessary bundles

Avoid shipping huge bundles of JS and CSS files to the browser if they are not needed. If the CSS can be downloaded a lot later, or a JS functionality is not needed on a particular page, there is no reason to load it up front and block the render in the browser.

Suppose you cannot split a particular file into smaller bundles, but it is not critical to the functioning of the page either. In that case, you can use the defer attribute of the script tag to indicate to the browser that it can go ahead with the DOM parsing and continue to execute the JS file at a later stage. Adding the defer attribute removes any blocker for DOM parsing. The LCP, therefore, goes down.
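A minimal sketch of the defer attribute in use (the script name is a placeholder):

```html
<!-- With defer, the browser downloads the script in parallel but
     executes it only after the DOM is parsed, so it never blocks
     rendering -->
<script src="analytics.js" defer></script>
```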

2. Inline critical CSS

Critical CSS comprises the style definitions needed for the DOM that appears in the first fold of your page. If the style definitions for this part of the page are inline, i.e., in each element’s style attribute, the browser has no dependency on the external CSS to style these elements. Therefore, it can render the page quickly, and the LCP goes down.
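One common way to apply this is a `<style>` block in the document head. This is a sketch with placeholder selectors; the preload-then-swap pattern for the full stylesheet is an optional, widely used addition:

```html
<head>
  <!-- Critical, above-the-fold styles inlined so the first paint
       does not wait on an external stylesheet -->
  <style>
    .hero { min-height: 60vh; background: #003049; color: #fff; }
  </style>
  <!-- The full stylesheet loads without blocking render -->
  <link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">
</head>
```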

3. Minify and compress the content

You should always minify the CSS and JS files before loading them in the browser. CSS and JS files contain whitespace to make them legible, but they are unnecessary for code execution. So, you can remove them, which reduces the file size on production. Smaller file size means that the files can load quickly, thereby reducing your LCP time.

Compression techniques, as discussed earlier, use data compression algorithms to bring down the file size delivered over the network. Gzip and Brotli are two compression algorithms. Brotli compression offers a superior compression ratio compared to Gzip and is now supported on all major browsers, servers, and CDNs.

5. Optimize LCP for client-side rendering

Any client-side rendered website requires a considerable amount of Javascript to load in the browser. If you do not optimize the Javascript sent to the browser, then the user may not see or be able to interact with any content on the page until the Javascript has been downloaded and executed.

We discussed a few JS-related optimizations above, like optimizing the bundles sent to the browser and compressing the content. There are a couple of more things you can do to optimize the rendering on client devices.

1. Using server-side rendering

Instead of shipping the entire JS to the client side and doing all the rendering there, you can generate the page dynamically on the server and then send it to the client’s device. This increases the time it takes to generate the page, but it decreases the time it takes to make the page active in the browser.

However, maintaining both client-side and server-side frameworks for the same page can be time-consuming.

2. Using pre-rendering

Pre-rendering is a different technique where a headless browser mimics a regular user’s request and gets the server to render the page. This rendered page is stored during the build cycle once, and then every subsequent request uses that pre-rendered page without any computation on the server, resulting in a fast load time.

This improves the TTFB compared to server-side rendering because the page is prepared beforehand. But the time to interactive might still take a hit as it has to wait for the JS to download for the page to become interactive. Also, since this technique requires pre-rendering of pages, it may not be scalable if you have a large number of pages.


Core Web Vitals, which include LCP, have become a significant search ranking factor and strongly correlate with the user experience. Therefore, if you run an online business, you should optimize these vitals to ensure its success.

The above techniques have a significant impact on optimizing LCP. Using ImageKit as your image CDN will give you a quick headstart.

Sign up for a forever-free account, upload your images to the ImageKit storage, or connect your origin, and start delivering optimized images in minutes.

The post Improve Largest Contentful Paint (LCP) on Your Website With Ease appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Don’t attach tooltips to document.body

Css Tricks - Wed, 09/08/2021 - 9:08am

Here’s Atif Afzal on using a <div> that is permanently on the page where tooltips are added/removed and how they perform vastly better than plopping those same tooltips right into the <body>. It’s not really discussed, but the reason you put them that high up in the DOM is so you can absolutely position them exactly where you need to on the page without having to deal with hidden overflow or relative parents and the like.

To my amazement, just having a separate container without even adding the [CSS] contain property fixed the performance. The main problem now, was to explain it. First I thought this might be some internal browser heuristic optimizing the Recalculate Style, but there is no black magic and I discovered the reason.

The trick is to avoid forced recalculations of style:

[…] The tooltip container is not visible in the page, so modifying it doesn’t invalidate the complete page render tree. If the tooltip container would have been visible in the page, then the complete render tree would be invalidated but in this case only an independent subtree was invalidated. Recalculating Style for a small subtree of 3 doesn’t take a lot of time and hence is faster.

Looks like popper.js was used here, so you have to be smart about it. We use toast messages on CodePen, and it’s the only third-party component we use at the moment: react-hot-toast. I checked it, and not only do we tuck the messages in a <div> of our own, but the library itself does that, so I think we’re in the clear.

The post Don’t attach tooltips to document.body appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

position: sticky, draft 1

QuirksBlog - Wed, 09/08/2021 - 7:44am

I’m writing the position: sticky part of my book, and since I never worked with sticky before I’m not totally sure if what I’m saying is correct.

This is made worse by the fact that there are no very clear tutorials on sticky. That’s partly because it works pretty intuitively in most cases, and partly because the details can be complicated.

So here’s my draft 1 of position: sticky. There will be something wrong with it; please correct me where needed.

The inset properties are top, right, bottom and left. (I already introduced this terminology earlier in the chapter.)

Introduction

position: sticky is a mix of relative and fixed. A sticky box takes its normal position in the flow, as if it had position: relative, but if that position scrolls out of view the sticky box remains in a position defined by its inset properties, as if it had position: fixed. A sticky box never escapes its container, though. If the container’s start or end scrolls past it, the sticky box abandons its fixed position and sticks to the top or the bottom of its container.

It is typically used to make sure that headers remain in view no matter how the user scrolls. It is also useful for tables on narrow screens: you can keep headers or the leftmost table cells in view while the user scrolls.

Scroll box and container

A sticky box needs a scroll box: a box that is able to scroll. By default this is the browser window — or, more correctly, the layout viewport — but you can define another scroll box by setting overflow on the desired element. The sticky box takes the first ancestor that could scroll as its scroll box and calculates all its coordinates relative to it.

A sticky box needs at least one inset property. These properties contain vital instructions, and if the sticky box doesn’t receive them it doesn’t know what to do.

A sticky box may also have a container: a regular HTML element that contains the sticky box. The sticky box will never be positioned outside this container, which thus serves as a constraint.

The first example shows this set-up. The sticky <h2> is in a perfectly normal <div>, its container, and that container is in a <section> that is the scroll box because it has overflow: auto. The sticky box has an inset property to provide instructions. The relevant styles are:

section.scroll-container {
  border: 1px solid black;
  width: 300px;
  height: 300px;
  overflow: auto;
  padding: 1em;
}

div.container {
  border: 1px solid black;
  padding: 1em;
}

section.scroll-container h2 {
  position: sticky;
  top: 0;
}

The rules

Now let’s see exactly what’s going on.

A sticky box never escapes its containing box. If it cannot obey the rules that follow without escaping from its container, it instead remains at the edge. Scroll down until the container disappears to see this in action.

A sticky box starts in its natural position in the flow, as if it has position: relative. It thus participates in the default flow: if it becomes higher it pushes the paragraphs below it downwards, just like any other regular HTML element. Also, the space it takes in the normal flow is kept open, even if it is currently in fixed position. Scroll down a little bit to see this in action: an empty space is kept open for the header.

A sticky box compares two positions: its natural position in the flow and its fixed position according to its inset properties. It does so in the coordinate frame of its scroll box. That is, any given coordinate such as top: 20px, as well as its default coordinates, is resolved against the content box of the scroll box. (In other words, the scroll box’s padding also constrains the sticky box; it will never move up into that padding.)

A sticky box with top takes the higher value of its top and its natural position in the flow, and positions its top border at that value. Scroll down slowly to see this in action: the sticky box starts at its natural position (let’s call it 20px), which is higher than its defined top (0). Thus it rests at its position in the natural flow. Scrolling up a few pixels doesn’t change this, but once its natural position becomes less than 0, the sticky box switches to a fixed layout and stays at that position.
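This comparison can be modeled as a small pure function. A simplified sketch: all values are in scroll-box coordinates, and padding and margins are ignored:

```javascript
// Simplified model of the sticky "top" rule: the rendered top is the
// larger of the inset `top` and the natural flow position after
// scrolling, clamped so the box never escapes its container.
function stickyTop(naturalTop, scrollY, insetTop, containerBottom, boxHeight) {
  const flowTop = naturalTop - scrollY;                 // natural flow position after scrolling
  const pinned = Math.max(flowTop, insetTop);           // rule: take the higher value
  const maxTop = containerBottom - scrollY - boxHeight; // cannot escape the container's end
  return Math.min(pinned, maxTop);
}

// In natural flow before any scrolling:
console.log(stickyTop(20, 0, 0, 500, 30));   // 20
// After scrolling 50px, pinned at top: 0:
console.log(stickyTop(20, 50, 0, 500, 30));  // 0
// Near the container's end, pushed out along with it:
console.log(stickyTop(20, 490, 0, 500, 30)); // -20
```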

It does the same for bottom, but remember that a bottom is calculated relative to the scroll box’s bottom, and not its top. Thus, a larger bottom coordinate means the box is positioned more to the top. Now the sticky box compares its default bottom with the defined bottom and uses the higher value to position its bottom border, just as before.

With left, it uses the higher value of its natural position and its left value to position its left border; with right, it does the same for its right border, bearing in mind once more that a higher right value positions the box more to the left.

If any of these steps would position the sticky box outside its containing box it takes the position that just barely keeps it within its containing box.

Details

The four inset properties act independently of one another. For instance the following box will calculate the position of its top and left edge independently. They can be relative or fixed, depending on how the user scrolls.

p.testbox {
  position: sticky;
  top: 0;
  left: 0;
}

Setting both a top and a bottom, or both a left and a right, gives the sticky box a bandwidth to move in. It will always attempt to obey all the rules described above. So the following box will vary between 0 from the top of the screen to 0 from the bottom, taking its default position in the flow between these two positions.

p.testbox {
  position: sticky;
  top: 0;
  bottom: 0;
}

No container

So far we put the sticky box in a container separate from the scroll box. But that’s not necessary. You can also make the scroll box itself the container if you wish. The sticky element is still positioned with respect to the scroll box (which is now also its container) and everything works fine.

Several containers

Or the sticky item can be several containers removed from its scroll box. That’s fine as well; the positions are still calculated relative to the scroll box, and the sticky box will never leave its innermost container.

Changing the scroll box

One feature that catches many people (including me) unaware is giving the container an overflow: auto or hidden. All of a sudden it seems the sticky header doesn’t work any more.

What’s going on here? An overflow value of auto, hidden, or scroll makes an element into a scroll box. So now the sticky box’s scroll box is no longer the outer element, but the inner one, since that is now the closest ancestor that is able to scroll.

The sticky box appears to be static, but it isn’t. The crux here is that the scroll box could scroll, thanks to its overflow value, but doesn’t actually do so because we didn’t give it a height, and therefore it stretches to accommodate all of its contents.

Thus we have a non-scrolling scroll box, and that is the root cause of our problems.

As before, the sticky box calculates its position by comparing its natural position relative to its scroll box with the one given by its inset properties. Point is: the sticky box doesn’t scroll relative to its scroll box, so its position always remains the same. Where in earlier examples the position of the sticky element relative to the scroll box changed when we scrolled, it no longer does so, because the scroll box doesn’t scroll. Thus there is no reason for it to switch to fixed positioning, and it stays where it is relative to its scroll box.

The fact that the scroll box itself scrolls upward is irrelevant; this doesn’t influence the sticky box in the slightest.

One solution is to give the new scroll box a height that is too little for its contents. Now the scroll box generates a scrollbar and becomes a scrolling scroll box. When we scroll it the position of the sticky box relative to its scroll box changes once more, and it switches from fixed to relative or vice versa as required.
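A sketch of that fix (class names are placeholders):

```css
/* Giving the inner scroll box a height smaller than its content
   forces it to actually scroll, so the sticky box switches between
   relative and fixed positioning again */
.container {
  overflow: auto;
  height: 200px;
}
```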

Minor items

Finally a few minor items:

  • It is no longer necessary to use position: -webkit-sticky. All modern browsers support regular position: sticky. (But if you need to cater to a few older browsers, retaining the double syntax doesn’t hurt.)
  • Chrome (Mac) does weird things to the borders of the sticky items in these examples. I don’t know what’s going on and am not going to investigate.

The Story Behind TryShape, a Showcase for the CSS clip-path property

Css Tricks - Wed, 09/08/2021 - 4:30am

I love shapes, especially colorful ones! Shapes on websites are in the same category of helpfulness as background colors, images, banners, section separators, artwork, and many more: they can help us understand context and inform our actions through affordances.

A few months back, I built an application to engage my 7-year-old daughter with mathematics. Apart from basic addition and subtraction, my aim was to present questions using shapes. That’s when I got familiar with the CSS clip-path property, a reliable way to make shapes on the web. Then, I ended up building another app called TryShape using the power of clip-path.

I’ll walk you through the story behind TryShape and how it helps create, manage, share, and export shapes. We’ll cover a lot about CSS clip-path along the way and how it helped me quickly build the app.

Here are a few important links:

First, the CSS clip-path property and shapes

Imagine you have a plain piece of paper and a pencil to draw a shape (say, a square) on it. How will you proceed? Most likely, you will start from a point, then draw a line to reach another point, then repeat it exactly three more times to come back to the initial point. You also have to make sure the opposite lines are parallel and of the same length.

So, the essential ingredients for a shape are points, lines, directions, curves, angles, and lengths, among many others. The CSS clip-path property helps specify many of these properties to clip a region of an HTML element and show only that region. The part that is inside the clipped region is shown, and the rest is hidden. It gives developers an ocean of opportunities to create various shapes using the clip-path property.

Learn more about clipping and how it is different from masking.

The clip-path values for shape creation

The clip-path property accepts the following values for creating shapes:

  • circle()
  • ellipse()
  • inset()
  • polygon()
  • A clip source using url() function
  • path()

We need to understand the basic coordinate system a bit to use these values. When applying the clip-path property on an element to create shapes, we must consider the x-axis, y-axis, and the initial coordinates (0,0) at the element’s top-left corner.

Here is a div element with its x-axis, y-axis, and initial coordinates (0,0).

Initial coordinates(0,0) with x-axis and y-axis

Now let’s use the circle() value to create a circular shape. We can specify the position and radius of the circle using this value. For example, to clip a circular shape at the coordinate position (70, 70) with a radius of 70px, we can specify the clip-path property value as:

clip-path: circle(70px at 70px 70px)

So, the center of the circle is placed at the coordinate (70, 70) with a 70px radius. Now, only this circular region is clipped and shown on the element. The rest of the portion of the element is hidden to create the impression of a circle shape.

The center of the circle is placed at (70, 70) coordinates with a 70px x 70px area clipped. Hence the full circle is shown.

Next, what if we want to specify the position at (0,0)? In this case, the circle’s center is placed at the (0,0) position with a radius of 70px. That makes only a portion of the circle visible inside the element.

The center of the circle is placed at (0, 0) coordinates with a 70px x 70px area clipping the bottom-left region of the circle.

Let’s move on to use the other two essential values, inset() and polygon(). We use an inset to define a rectangular shape. We can specify the gap that each of the four edges may have to clip a region from an element. For example:

clip-path: inset(30px)

The above clip-path value clips a region by leaving out 30px from each of the element’s edges. We can see that in the image below. We can also specify a different inset value for each of the edges.

The inset() function allows us to clip an area from the outside edge of a shape.

Next is the polygon() value. We can create a polygonal shape using a set of vertices. Take this example:

clip-path: polygon(10% 10%, 90% 10%, 90% 90%, 10% 80%)

Here we are specifying a set of vertices to create a region for clipping. The image below shows the position of each vertex to create a polygonal shape. We can specify as many vertices as we want.

The polygon() function allows us to create polygonal shapes using the set of vertices passed to it.
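Since polygon() is just a comma-separated list of vertex pairs, generating one from data is simple string work. This helper is illustrative only, not taken from TryShape's source:

```javascript
// Build a polygon() clip-path value from an array of [x, y] vertex
// pairs expressed in percentages.
function toPolygon(vertices) {
  const points = vertices.map(([x, y]) => `${x}% ${y}%`).join(', ');
  return `polygon(${points})`;
}

const formula = toPolygon([[10, 10], [90, 10], [90, 90], [10, 80]]);
console.log(formula); // polygon(10% 10%, 90% 10%, 90% 90%, 10% 80%)
```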

Next, let’s take a look at the ellipse() and the url() values. The ellipse() value helps create shapes by specifying two radii values and a position. In the image below, we see an ellipse at the position where the radii is at (50%,50%) and the shape is 70px wide and 100px tall.

We need to specify two radii values and a position to create an ellipse.

url() is a CSS function to specify the clipPath element’s ID value to render an SVG shape. Please take a look at the image below. We have defined an SVG shape using clipPath and path elements. You can use the ID value of the clipPath element as an argument to the url() function to render this shape.

Here, we are creating a heart shape using the url() function

Additionally, we can use the path values directly in the path() function to draw the shape.

Here we are creating a curvy shape using the path() function.
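A sketch of path() usage. The path data here is a made-up curve, not the shape from the image, and the class name is a placeholder:

```css
/* clip-path with the path() function takes SVG path data directly */
.clipped {
  clip-path: path('M 0 100 Q 50 0 100 100 Z');
}
```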

Alright. I hope you now have an understanding of the different clip-path property values. With this understanding, let’s take a look at some implementations and play around with them. Here is a Pen for you. Please use it to try adding and modifying values to create new shapes.

Let’s talk about TryShape

It’s time to talk about TryShape and its background story. TryShape is an open-source application that helps create, export, share, and use any shapes of your choice. You can create banners, circles, arts, polygons and export them as SVG, PNG, and JPEG files. You can also create a CSS code snippet to copy and use in your application.

TryShape is built using the following framework and libraries (and clip-path, of course):

  • CSS clip-path: We’ve already discussed the power of this awesome CSS property.
  • Next.js: The coolest React-based framework around. It helped me create pages, components, interactions, and APIs to connect to the back-end database.
  • HarperDB: A flexible database to store and query data using both SQL and NoSQL interactions. TryShape has its schema and tables created in the HarperDB cloud. The Next.js APIs interact with the schema and tables to perform the required CRUD operations from the user interface.
  • Firebase: Authentication services from Google. TryShape uses it to get the social login working using Google, GitHub, Twitter, and other accounts.
  • react-icons: A one-stop shop for all the icons in a React-based application
  • date-fns: The modern, lightweight library for date formatting
  • axios: Making the API calls easy from the React components
  • styled-components: A structured way to create CSS rules from React components
  • react-clip-path: A homegrown module to handle clip-path property in a React app
  • react-draggable: Make an HTML element draggable in a React app. TryShape uses it to adjust the position of shape vertices.
  • downloadjs: Trigger a download from JavaScript
  • html-to-image: Converts an HTML element to image (including SVG, JPEG, and PNG)
  • Vercel: Best for hosting a Next.js app
Creating shapes in TryShape using CSS clip-path

Let me highlight the source code that helps create a shape using the CSS clip-path property. The code snippet below defines the user interface structure for a container element (Box) that’s 300px square. The Box element has two child elements, Shadow and Component.

<Box 
  height="300px" 
  width="300px" 
  onClick={(e) => props.handleChange(e)}>
  {
    props.shapeInformation.showShadow && 
      <Shadow 
        backgroundColor={props.shapeInformation.backgroundColor}
        id="shapeShadow" />
  }
  <Component 
    formula={props.shapeInformation.formula} 
    backgroundColor={props.shapeInformation.backgroundColor}
    id="clippedShape" />
</Box>

The Shadow component defines the area that is hidden by the clip-path clipping. We give it a light background color to keep this area partially visible to the end user. The Component element is assigned the clip-path value to show the clipped area.
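As a rough illustration of what such a clip-path "formula" might look like when built in code (this helper is hypothetical, not taken from the TryShape codebase), a polygon() value can be assembled from a list of vertices:

```javascript
// Hypothetical helper: build a polygon() clip-path value ("formula")
// from an array of [x, y] vertices expressed as percentages.
function toPolygonFormula(points) {
  const coords = points.map(([x, y]) => `${x}% ${y}%`).join(', ');
  return `polygon(${coords})`;
}

// A triangle: top-center, bottom-left, bottom-right
console.log(toPolygonFormula([[50, 0], [0, 100], [100, 100]]));
// → polygon(50% 0%, 0% 100%, 100% 100%)
```

A string like this is exactly the kind of value a component can drop into the clip-path CSS property.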

See the styled-component definitions of Box, Shadow, and Component below:

// The styled-components code to create the UI components using CSS properties

// The container div
const Box = styled.div`
  width: ${props => props.width || '100px'};
  height: ${props => props.height || '100px'};
  margin: 0 auto;
  position: relative;
`;

// Shadow defines the area that is hidden by the `clip-path` clipping.
// We show a light color background to make this area partially visible.
const Shadow = styled.div`
  background-color: ${props => props.backgroundColor || '#00c4ff'};
  opacity: 0.25;
  position: absolute;
  top: 10px;
  left: 10px;
  right: 10px;
  bottom: 10px;
`;

// The actual component that takes the `clip-path` value (formula) and sets
// it on the `clip-path` property.
const Component = styled.div`
  clip-path: ${props => props.formula}; // the formula is the clip-path value
  background-color: ${props => props.backgroundColor || '#00c4ff'};
  position: absolute;
  top: 10px;
  left: 10px;
  right: 10px;
  bottom: 10px;
`;

The components that show a shape (both visible and hidden areas) after the clipping.

Please feel free to look into the entire codebase in the GitHub repo.

The future scope of TryShape

TryShape works well with the creation and management of basic shapes using CSS clip-path in the background. It is helpful to export the shapes and the CSS code snippets to use in your web applications. It has the potential to grow with many more valuable features. The primary one will be the ability to create shapes with curvy edges.

To support curvy shapes, we need support for the following values in TryShape:

  • a clip source using url() and
  • path().

With the help of these values, we can define shapes in SVG and reference them from CSS. Here is an example of the url() CSS function creating a shape with SVG support.

<div class="heart">Heart</div>

<svg>
  <clipPath id="heart-path" clipPathUnits="objectBoundingBox">
    <path d="M0.5,1 C 0.5,1,0,0.7,0,0.3 A 0.25,0.25,1,1,1,0.5,0.3 A 0.25,0.25,1,1,1,1,0.3 C 1,0.7,0.5,1,0.5,1 Z" />
  </clipPath>
</svg>

Then, the CSS:

.heart {
  clip-path: url(#heart-path);
}

Now, let’s create a shape using the path() value. The HTML should have an element like a div:

<div class="curve">Curve</div>


And the CSS:

.curve {
  clip-path: path("M 10 80 C 40 10, 65 10, 95 80 S 150 150, 180 80");
}

Before we end…

I hope you enjoyed meeting my TryShape app and learning about the idea that led to it, the strategies I considered, the technology under the hood, and its future potential. Please consider trying it and looking through the source code. And, of course, feel free to contribute to it with issues, feature requests, and code.

Before we end, I want to leave you with this short video prepared for the Hashnode hackathon, where TryShape was an entry and ultimately one of the winners. I hope you enjoy it.

Let’s connect. You can @ me on Twitter (@tapasadhikary) with comments, or feel free to follow.

The post The Story Behind TryShape, a Showcase for the CSS clip-path property appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.
