Web Standards

Using Web Components in WordPress is Easier Than You Think

Css Tricks - Thu, 08/12/2021 - 4:38am

Now that we’ve seen that web components and interactive web components are both easier than you think, let’s take a look at adding them to a content management system, namely WordPress.

There are three major ways we can add them. First, through manual input into the site, putting them directly into widgets or text blocks, basically anywhere we can place other HTML. Second, we can add them as the output of a theme in a theme file. And, finally, we can add them as the output of a custom block.

Loading the web component files

Whichever way we end up adding web components, there are a few things we have to ensure:

  1. our custom element’s template is available when we need it,
  2. any JavaScript we need is properly enqueued, and
  3. any un-encapsulated styles we need are enqueued.

We’ll be adding the <zombie-profile> web component from my previous article on interactive web components. Check out the code over at CodePen.

Let’s hit that first point. Once we have the template it’s easy enough to add that to the WordPress theme’s footer.php file, but rather than adding it directly in the theme, it’d be better to hook into wp_footer so that the component is loaded independent of the footer.php file and independent of the overall theme—assuming that the theme uses wp_footer, which most do. If the template doesn’t appear in your theme when you try it, double check that wp_footer is called in your theme’s footer.php template file.

<?php function diy_ezwebcomp_footer() { ?>
  <!-- print/echo Zombie profile template code. -->
  <!-- It's available at https://codepen.io/undeadinstitute/pen/KKNLGRg -->
<?php }
add_action( 'wp_footer', 'diy_ezwebcomp_footer');
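
For reference, what gets printed there is just a regular <template> element sitting at the end of the page. Here’s a trimmed-down sketch of its shape using the slot names that appear later in this article (the id and inner markup are illustrative; the real template lives in the CodePen):

<template id="zombie-profile-template">
  <div class="profile-wrapper">
    <slot name="profile-image"></slot>
    <h2><slot name="zombie-name">Zombie Bob</slot></h2>
    <span>Age: <slot name="z-age">Unknown</slot></span>
    <span>Infected: <slot name="idate">Unknown</slot></span>
    <slot name="z-interests"><ul><li>Brains</li></ul></slot>
    <p><slot name="statement">Braaains.</slot></p>
  </div>
</template>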

Next is to enqueue our component’s JavaScript. We can add the JavaScript via wp_footer as well, but enqueueing is the recommended way to link JavaScript to WordPress. So let’s put our JavaScript in a file called ezwebcomp.js (that name is totally arbitrary), stick that file in the theme’s JavaScript directory (if there is one), and enqueue it (in the functions.php file).

wp_enqueue_script( 'ezwebcomp_js', get_template_directory_uri() . '/js/ezwebcomp.js', '', '1.0', true );

We’ll want to make sure that last parameter is set to true, i.e. it loads the JavaScript before the closing body tag. If we load it in the head instead, it won’t find our HTML template and will get super cranky (throw a bunch of errors).
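
To see why that order matters, here’s roughly the pattern the component follows when it’s defined (a sketch, not the exact code from the CodePen; the template id matches the illustrative one above):

class ZombieProfile extends HTMLElement {
  constructor() {
    super();
    // This lookup returns null if the script runs before the
    // template exists in the DOM, hence loading in the footer.
    const template = document.getElementById('zombie-profile-template');
    this.attachShadow({ mode: 'open' })
      .appendChild(template.content.cloneNode(true));
  }
}
customElements.define('zombie-profile', ZombieProfile);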

If you can fully encapsulate your web component, then you can skip this next step. But if you (like me) are unable to do it, you’ll need to enqueue those un-encapsulated styles so that they’re available wherever the web component is used. (Similar to JavaScript, we could add this directly to the footer, but enqueuing the styles is the recommended way to do it). So we’ll enqueue our CSS file:

wp_enqueue_style( 'ezwebcomp_style', get_template_directory_uri() . '/ezwebcomp.css', '', '1.0', 'screen' );

That wasn’t too tough, right? And if you don’t plan to have any users other than Administrators use it, you should be all set for adding these wherever you want them. But that’s not always the case, so we’ll keep moving ahead!

Don’t filter out your web component

WordPress has a few different ways to both help users create valid HTML and prevent your Uncle Eddie from pasting that “hilarious” picture he got from Shady Al directly into the editor (complete with scripts to pwn every one of your visitors).

So when adding web components directly into blocks or widgets, we’ll need to be careful about WordPress’s built-in code filtering. Disabling it altogether would let Uncle Eddie (and, by extension, Shady Al) run wild, but we can modify it to let our awesome web component through the gate that (thankfully) keeps Uncle Eddie out.

First, we can use the wp_kses_allowed_html filter to add our web component to the list of elements not to filter out. It’s sort of like we’re whitelisting the component, and we do that by adding it to the allowed tags array that’s passed to the filter function.

function add_diy_ezwebcomp_to_kses_allowed( $the_allowed_tags ) {
  $the_allowed_tags['zombie-profile'] = array();
  // Filters must hand the (modified) array back.
  return $the_allowed_tags;
}
add_filter( 'wp_kses_allowed_html', 'add_diy_ezwebcomp_to_kses_allowed');

We’re adding an empty array to the <zombie-profile> component because WordPress filters out attributes in addition to elements—which brings us to another problem: the slot attribute (as well as part and any other web-component-ish attribute you might use) are not allowed by default. So, we have to explicitly allow them on every element on which you anticipate using them, and, by extension, any element your user might decide to add them to. (Wait, those element lists aren’t the same even though you went over it six times with each user… who knew?) Thus, below I have set slot to true on <span>, <img> and <ul>, the three elements I’m putting into slots in the <zombie-profile> component. (I also set part to true on span elements so that I could let that attribute through too.)

function add_diy_ezwebcomp_to_kses_allowed( $the_allowed_tags ) {
  $the_allowed_tags['zombie-profile'] = array();
  $the_allowed_tags['span']['slot'] = true;
  $the_allowed_tags['span']['part'] = true;
  $the_allowed_tags['ul']['slot'] = true;
  $the_allowed_tags['img']['slot'] = true;
  return $the_allowed_tags;
}
add_filter( 'wp_kses_allowed_html', 'add_diy_ezwebcomp_to_kses_allowed');

We could also enable the slot (and part) attribute in all allowed elements with something like this:

function add_diy_ezwebcomp_to_kses_allowed($the_allowed_tags) {
  $the_allowed_tags['zombie-profile'] = array();
  foreach ($the_allowed_tags as &$tag) {
    $tag['slot'] = true;
    $tag['part'] = true;
  }
  return $the_allowed_tags;
}
add_filter('wp_kses_allowed_html', 'add_diy_ezwebcomp_to_kses_allowed');

Sadly, there is one more possible wrinkle with this. You may not run into this if all the elements you’re putting in your slots are inline/phrase elements, but if you have a block level element to put into your web component, you’ll probably get into a fistfight with the block parser in the Code Editor. You may be a better fist fighter than I am, but I always lost.

The code editor is an option that allows you to inspect and edit the markup for a block.

For reasons I can’t fully explain, the client-side parser assumes that the web component should only have inline elements within it, and if you put a <ul> or <div>, <h1> or some other block-level element in there, it’ll move the closing web component tag to just after the last inline/phrase element. Worse yet, according to a note in the WordPress Developer Handbook, it’s currently “not possible to replace the client-side parser.”

While this is frustrating and something you’ll have to train your web editors on, there is a workaround. If we put the web component in a Custom HTML block directly in the Block Editor, the client-side parser won’t leave us weeping on the sidewalk, rocking back and forth, and questioning our ability to code… Not that that’s ever happened to anyone… particularly not people who write articles…

Component up the theme

Outputting our fancy web component in our theme file is straightforward as long as it isn’t updated outside the HTML block. We add it the way we would add it in any other context, and, assuming we have the template, scripts and styles in place, things will just work.

But let’s say we want to output the contents of a WordPress post or custom post type in a web component. You know, write a post and that post is the content for the component. This allows us to use the WordPress editor to pump out an archive of <zombie-profile> elements. This is great because the WordPress editor already has most of the UI we need to enter the content for one of the <zombie-profile> components:

  • The post title can be the zombie’s name.
  • A regular paragraph block in the post content can be used for the zombie’s statement.
  • The featured image can be used for the zombie’s profile picture.

That’s most of it! But we’ll still need fields for the zombie’s age, infection date, and interests. We’ll create these with WordPress’s built-in Custom Fields feature.

We’ll use the template part that handles printing each post, e.g. content.php, to output the web component. First, we’ll print out the opening <zombie-profile> tag followed by the post thumbnail (if it exists).

<zombie-profile>
  <?php
  // If the post featured image exists...
  if (has_post_thumbnail()) {
    $src = wp_get_attachment_image_url(get_post_thumbnail_id()); ?>
    <img src="<?php echo $src; ?>" slot="profile-image">
  <?php } ?>

Next, we’ll print the title for the name:

<?php
// If the post title field exists...
if (get_the_title()) { ?>
  <span slot="zombie-name"><?php echo get_the_title(); ?></span>
<?php } ?>

In my code, I have tested whether these fields exist before printing them for two reasons:

  1. It’s just good programming practice (in most cases) to hide the labels and elements around empty fields.
  2. If we end up outputting an empty <span> for the name (e.g. <span slot="zombie-name"></span>), then the field will show as empty in the final profile rather than use our web component’s built-in default text, image, etc. (If you want, for instance, the text fields to be empty if they have no content, you can either put in a space in the custom field or skip the if statement in the code).

Next, we will grab the custom fields and place them into the slots they belong to. Again, this goes into the theme template that outputs the post content.

<?php
// Zombie age
$temp = get_post_meta(get_the_ID(), 'Age', true);
if ($temp) { ?>
  <span slot="z-age"><?php echo $temp; ?></span>
<?php }

// Zombie infection date
$temp = get_post_meta(get_the_ID(), 'Infection Date', true);
if ($temp) { ?>
  <span slot="idate"><?php echo $temp; ?></span>
<?php }

// Zombie interests
$temp = get_post_meta(get_the_ID(), 'Interests', true);
if ($temp) { ?>
  <ul slot="z-interests"><?php echo $temp; ?></ul>
<?php } ?>

One of the downsides of using the WordPress custom fields is that you can’t do any special formatting. A non-technical web editor who’s filling this out would need to write out the HTML for the list items (<li>) for each and every interest in the list. (You can probably get around this interface limitation by using a more robust custom field plugin, like Advanced Custom Fields, Pods, or similar.)

Lastly, we add the zombie’s statement and the closing <zombie-profile> tag.

<?php
$temp = get_the_content();
if ($temp) { ?>
  <span slot="statement"><?php echo $temp; ?></span>
<?php } ?>
</zombie-profile>

Because we’re using the body of the post for our statement, we’ll get a little extra code in the bargain, like paragraph tags around the content. Putting the profile statement in a custom field will mitigate this, but depending on your purposes, it may also be intended/desired behavior.

You can then add as many posts/zombie profiles as you need simply by publishing each one as a post!

Block party: web components in a custom block

Creating a custom block is a great way to add a web component. Your users will be able to fill out the required fields and get that web component magic without needing any code or technical knowledge. Plus, blocks are completely independent of themes, so really, we could use this block on one site and then install it on other WordPress sites—sort of like how we’d expect a web component to work!

There are two main parts to a custom block: PHP and JavaScript. We’ll also add a little CSS to improve the editing experience.

First, the PHP:

function ez_webcomp_register_block() {
  // Enqueues the JavaScript needed to build the custom block
  wp_register_script(
    'ez-webcomp',
    plugins_url('block.js', __FILE__),
    array('wp-blocks', 'wp-element', 'wp-editor'),
    filemtime(plugin_dir_path(__FILE__) . 'block.js')
  );
  // Enqueues the component's CSS file
  wp_register_style(
    'ez-webcomp',
    plugins_url('ezwebcomp-style.css', __FILE__),
    array(),
    filemtime(plugin_dir_path(__FILE__) . 'ezwebcomp-style.css')
  );
  // Registers the custom block within the ez-webcomp namespace
  register_block_type('ez-webcomp/zombie-profile', array(
    // We already have the external styles; these are only for when we are in the WordPress editor
    'editor_style'  => 'ez-webcomp',
    'editor_script' => 'ez-webcomp',
  ));
}
add_action('init', 'ez_webcomp_register_block');

The CSS isn’t necessary, but it does help prevent the zombie’s profile image from overlapping the content in the WordPress editor.

/* Sets the width and height of the image.
 * Your mileage will likely vary, so adjust as needed.
 * "pic" is a class we'll add to the editor in block.js */
#editor .pic img {
  width: 300px;
  height: 300px;
}

/* This CSS ensures that the correct space is allocated for the image,
 * while also preventing the button from resizing before an image is selected. */
#editor .pic button.components-button {
  overflow: visible;
  height: auto;
}

The JavaScript we need is a bit more involved. I’ve endeavored to simplify it as much as possible and make it as accessible as possible to everyone, so I’ve written it in ES5 to remove the need to compile anything.

(function (blocks, editor, element, components) {
  // The function that creates elements
  var el = element.createElement;
  // Handles text input for block fields
  var RichText = editor.RichText;
  // Handles uploading images/media
  var MediaUpload = editor.MediaUpload;

  // Harkens back to register_block_type in the PHP
  blocks.registerBlockType('ez-webcomp/zombie-profile', {
    title: 'Zombie Profile', // User-friendly name shown in the block selector
    icon: 'id-alt', // The icon to use in the block selector
    category: 'layout',
    // The attributes are all the different fields we'll use.
    // We're defining what they are and how the block editor grabs data from them.
    attributes: {
      name: {
        // The content type
        type: 'string',
        // Where the info is available to grab
        source: 'text',
        // Selectors are how the block editor selects and grabs the content.
        // These should be unique within an instance of a block.
        // If you only have one img or one <ul> etc., you can use element selectors.
        selector: '.zname',
      },
      mediaID: {
        type: 'number',
      },
      mediaURL: {
        type: 'string',
        source: 'attribute',
        selector: 'img',
        attribute: 'src',
      },
      age: {
        type: 'string',
        source: 'text',
        selector: '.age',
      },
      infectdate: {
        type: 'date',
        source: 'text',
        selector: '.infection-date'
      },
      interests: {
        type: 'array',
        source: 'children',
        selector: 'ul',
      },
      statement: {
        type: 'array',
        source: 'children',
        selector: '.statement',
      },
    },
    // The edit function handles how things are displayed in the block editor.
    edit: function (props) {
      var attributes = props.attributes;
      var onSelectImage = function (media) {
        return props.setAttributes({
          mediaURL: media.url,
          mediaID: media.id,
        });
      };
      // The return statement is what will be shown in the editor.
      // el() creates an element and sets the different attributes of it.
      return el(
        // Using a div here instead of the zombie-profile web component for simplicity.
        'div',
        { className: props.className },
        // The zombie's name
        el(RichText, {
          tagName: 'h2',
          inline: true,
          className: 'zname',
          placeholder: 'Zombie Name…',
          value: attributes.name,
          onChange: function (value) {
            props.setAttributes({ name: value });
          },
        }),
        el(
          // Zombie profile picture
          'div',
          { className: 'pic' },
          el(MediaUpload, {
            onSelect: onSelectImage,
            allowedTypes: 'image',
            value: attributes.mediaID,
            render: function (obj) {
              return el(
                components.Button,
                {
                  className: attributes.mediaID ? 'image-button' : 'button button-large',
                  onClick: obj.open,
                },
                !attributes.mediaID
                  ? 'Upload Image'
                  : el('img', { src: attributes.mediaURL })
              );
            },
          })
        ),
        // We'll include a heading for the zombie's age in the block editor
        el('h3', {}, 'Age'),
        // The age field
        el(RichText, {
          tagName: 'div',
          className: 'age',
          placeholder: 'Zombie\'s Age…',
          value: attributes.age,
          onChange: function (value) {
            props.setAttributes({ age: value });
          },
        }),
        // Infection date heading
        el('h3', {}, 'Infection Date'),
        // Infection date field
        el(RichText, {
          tagName: 'div',
          className: 'infection-date',
          placeholder: 'Zombie\'s Infection Date…',
          value: attributes.infectdate,
          onChange: function (value) {
            props.setAttributes({ infectdate: value });
          },
        }),
        // Interests heading
        el('h3', {}, 'Interests'),
        // Interests field
        el(RichText, {
          tagName: 'ul',
          // Creates a new <li> every time `Enter` is pressed
          multiline: 'li',
          placeholder: 'Write a list of interests…',
          value: attributes.interests,
          onChange: function (value) {
            props.setAttributes({ interests: value });
          },
          className: 'interests',
        }),
        // Zombie statement heading
        el('h3', {}, 'Statement'),
        // Zombie statement field
        el(RichText, {
          tagName: 'div',
          className: 'statement',
          placeholder: 'Write statement…',
          value: attributes.statement,
          onChange: function (value) {
            props.setAttributes({ statement: value });
          },
        })
      );
    },
    // Stores content in the database and defines what is shown on the front end.
    // This is where we have to make sure the web component is used.
    save: function (props) {
      var attributes = props.attributes;
      return el(
        // The <zombie-profile> web component
        'zombie-profile',
        // This is empty because the web component does not need any HTML attributes
        {},
        // Ensure a URL exists before it prints
        attributes.mediaURL &&
          // Print the image
          el('img', { src: attributes.mediaURL, slot: 'profile-image' }),
        attributes.name &&
          // Print the name
          el(RichText.Content, {
            tagName: 'span',
            slot: 'zombie-name',
            className: 'zname',
            value: attributes.name,
          }),
        attributes.age &&
          // Print the zombie's age
          el(RichText.Content, {
            tagName: 'span',
            slot: 'z-age',
            className: 'age',
            value: attributes.age,
          }),
        attributes.infectdate &&
          // Print the infection date
          el(RichText.Content, {
            tagName: 'span',
            slot: 'idate',
            className: 'infection-date',
            value: attributes.infectdate,
          }),
        // Need to verify something is in the first element since the interests type is an array
        attributes.interests[0] &&
          // Print the interests
          el(RichText.Content, {
            tagName: 'ul',
            slot: 'z-interests',
            value: attributes.interests,
          }),
        attributes.statement[0] &&
          // Print the statement
          el(RichText.Content, {
            tagName: 'span',
            slot: 'statement',
            className: 'statement',
            value: attributes.statement,
          })
      );
    },
  });
})(
  // Import the dependencies
  window.wp.blocks,
  window.wp.blockEditor,
  window.wp.element,
  window.wp.components
);

Plugging in to web components

Now, wouldn’t it be great if some kind-hearted, article-writing, and totally-awesome person created a template that you could just plug your web component into and use on your site? Well, that guy wasn’t available (he was off helping charity or something), so I did it. It’s up on GitHub:

Do It Yourself – Easy Web Components for WordPress

The plugin is a coding template that registers your custom web component, enqueues the scripts and styles the component needs, provides examples of the custom block fields you might need, and even makes sure things are styled nicely in the editor. Put this in a new folder in /wp-content/plugins like you would manually install any other WordPress plugin, make sure to update it with your particular web component, then activate it in WordPress on the “Installed Plugins” screen.

Not that bad, right?

Even though it looks like a lot of code, we’re really doing a few pretty standard WordPress things to register and render a custom web component. And, since we packaged it up as a plugin, we can drop this into any WordPress site and start publishing zombie profiles to our heart’s content.

I’d say that the balancing act is trying to make the component work as nicely in the WordPress block editor as it does on the front end. We would have been able to knock this out with a lot less code without that consideration.

Still, we managed to get the exact same component we made in my previous articles into a CMS, which allows us to plop as many zombie profiles on the site as we want. We combined our knowledge of web components with WordPress blocks to develop a reusable block for our reusable web component.

What sort of components will you build for your WordPress site? I imagine there are lots of possibilities here and I’m interested to see what you wind up making.

Article series
  1. Web Components Are Easier Than You Think
  2. Interactive Web Components Are Easier Than You Think
  3. Using Web Components in WordPress is Easier Than You Think


Wanna see a whiter white?

Css Tricks - Wed, 08/11/2021 - 9:09am

Heck of a CSS trick here from Dongsung Kim.

There are hidden HDR videos playing at the corners of this page. When a HDR-capable browser encounters one, it switches to HDR mode. For some reason, CSS backdrop-filter + brightness >100% combo seems to behave like HDR—reaching beyond the user-controlled display brightness, up to the maximum HDR brightness—while the everything in between follow[s] along. At least that’s the overall idea, but I still don’t know exactly why it works; especially why with those two CSS properties.

As I look at that demo in Chrome, I see an extra-white text-shadow. In Safari, I see extra-white text. In Firefox, the whites match so I see nothing. Probably a bug.

I wouldn’t recommend actually using the trick, as I’d think the extra-whiteness almost certainly takes extra battery power that a user isn’t opting into, even without the video playing—even though it does feel like a bummer that our screens are capable of whiter whites than we normally have access to. The good news is that the gamut of color on the web is expanding, generally.



Static vs. Dynamic vs. Jamstack: Where’s The Line?

Css Tricks - Wed, 08/11/2021 - 4:31am

You’ll often hear developers talking about “static” vs. “dynamic” sites, or you may have heard someone use the term Jamstack. What do these terms mean, and when does a “static” site become either a Jamstack or dynamic site? These questions sound simple, but they’re more nuanced than they appear. Let’s explore these terms to gain a deeper understanding of Jamstack.

Finding the line

What’s the difference between a chair and a stool? Most people will respond that a chair has four legs and back support, whereas a stool has three legs with no back support.

Credit: Rumman Amin

OK, that’s a great starting point, but what about these?

Credit: Valerii Zorin. Credit: Krisztian Tabori.

The more stool-like a chair becomes, the fewer people will unequivocally agree that it’s a chair. Eventually, we’ll reach a point where most people agree it’s a stool rather than a chair. It may sound like a silly exercise, but if we want to have a deep appreciation of what it means to be a chair, it’s a valuable one. We find out where the limits of a chair are for most people. We also build an understanding of the gray area beyond. Eventually, we get to the point where even the biggest die-hard chair fans concede and admit there’s a stool in front of them.

As interesting as chairs are, this is an article about website delivery technology. Let’s perform this same exercise for static, dynamic, and Jamstack websites.

At a high level

When you go to a website in your browser, there’s a lot going on behind the scenes:

  1. Your browser performs a DNS lookup to turn the domain name into an IP address.
  2. It requests an HTML file from that IP address.
  3. The webserver sends back the requested file.
  4. As the browser renders the web page, it may come across a reference for an asset, such as a CSS, JavaScript, or image file. The browser then performs a request for this asset.
  5. This cycle continues until the browser has all the files for the web page. It’s not unusual for a single webpage to make 50+ requests.

For every request, the response from the webserver is always a static file, even on a dynamic website. You could save these files to a USB drive or email them to a friend, just like any other file on your computer.

When comparing static and dynamic, we’re talking about what the webserver is doing. On a static site, the files the browser requests already exist on the webserver. The webserver sends them back exactly as they are. On a dynamic site, the response gets generated by software. This software might connect to a database to retrieve data, build a layout from template files, and add today’s date to the footer. It does all of this for every request.

That’s the foundational difference between static and dynamic websites.
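
If it helps to see that difference in code, here’s a minimal sketch of each kind of server in Node.js (purely illustrative; the file path and markup are made up):

const http = require('http');
const fs = require('fs');

// Static: the response is a file on disk, sent back byte for byte.
http.createServer((req, res) => {
  fs.createReadStream('./site/index.html').pipe(res);
}).listen(8080);

// Dynamic: the response is generated by software on every request,
// e.g. stamping today's date into the footer.
http.createServer((req, res) => {
  const today = new Date().toDateString();
  res.end('<html><body><p>Hello!</p><footer>' + today + '</footer></body></html>');
}).listen(8081);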

Where does Jamstack fit in?

Static websites are restrictive. They’re great for informational websites; however, you can’t have any dynamic content or behavior by definition. Jamstack blurs the line between static and dynamic. The idea is to take advantage of all the things that make static websites awesome while enabling dynamic functionality where necessary.

The ‘stack’ in Jamstack is a misnomer. The truth is, Jamstack is not a stack at all. It’s a philosophy that exhibits a striking resemblance to The 5 Pillars of the AWS Well-Architected Framework. The ambiguity in the term has led to extensive community discussion about what it means to be Jamstack.

What is Jamstack?

Jamstack is a superset of static. But to truly understand Jamstack, let’s start with the seeds that led to the coining of the term.

In 2002, the late Aaron Swartz published a blog post titled “Bake, Don’t Fry.” While Aaron didn’t coin “Bake, Don’t Fry,” it’s the first time I can find someone recognizing the benefits of static websites while breaking out perceived constraints of the word.

I care about not having to maintain cranky AOLserver, Postgres and Oracle installs. I care about being able to back things up with scp. I care about not having to do any installation or configuration to move my site to a new server. I care about being platform and server independent.

If we trawl through history, we can find similar frustrations that led to Jamstack seeds:

  • Ben and Mena Trott created MovableType because of a [d]issatisfaction with existing blog CMSes — performance, stability.
  • Tom Preston-Werner created Jekyll to move away from complexity: “I already knew a lot about what I didn’t want. I was tired of complicated blogging engines like WordPress and Mephisto. I wanted to write great posts, not style a zillion template pages, moderate comments all day long, and constantly lag behind the latest software release.”
  • Steve Francia created Hugo for performance: “The past few years this blog has [been] powered by wordpress [sic] and drupal prior to that. Both are are fine pieces of software, but over time I became increasingly disappointed with how they are both optimized for writing content even though significantly most common usage is reading content. Due to the need to load the PHP interpreter on each request it could never be considered fast and consumed a lot of memory on my VPS.”

The same themes surface as you look at the origins of many early Jamstack tools:

  • Reduce complexity
  • Improve performance
  • Reduce vendor lock-in
  • Better workflows for developers

In the past 20 years, JavaScript has evolved from a language for adding small interactions to a website to becoming a platform for building rich web applications in the browser. In parallel, we’ve seen a movement of splitting large applications into smaller microservices. These two developments gave rise to a new way of building websites where you could have a static front-end decoupled from a dynamic back-end.

In 2015, Mathias Biilmann wanted to talk about this modern way of building websites but was struggling with the constricting definition of static:

We were in this space of modern static websites. That’s a really bad description of what we’re doing, right? And we kept having that problem that, talking to people about static sites, they would think about something very static. They would think about a brochure or something with no moving parts. A little one-pager or something like that.

To break out of these constraints, he coined the term “Jamstack” to talk about this new approach, and it caught on like wildfire. What was old static technology from the 90s became new again and pushed to new limits. Many developers caught on to the benefits of the Jamstack approach, which helped Jamstack grow into the thriving ecosystem it is today.

Aaron Swartz put it nicely, 13 years before Jamstack was coined: keep a strict separation between input (which needs dynamic code to be processed) and output (which can usually be baked). In other words, decouple the front end from the back end. Prerender content whenever possible. Layer on dynamic functionality where necessary. That’s the crux of Jamstack.

The reasons you might want to build a Jamstack site over a dynamic site come down to the six pillars of Jamstack:

Security

Jamstack sites have fewer moving parts and less surface area for malicious exploitation from outside sources.

Scale

Jamstack sites are static where possible. Static sites can live entirely in a CDN, making them much easier and cheaper to scale.

Performance

Serving a web page from a CDN rather than generating it from a centralized server on-demand improves the page load speed.

Maintainability

Static websites are simple. You need a webserver capable of serving files. With a dynamic site, you might need an entire team to keep a website online and fast.

Portability

Again, a static website is made up of files. As long as you find a webserver capable of serving website files, you can move your site anywhere.

Developer experience

Git workflows are a core part of software development today. With many legacy CMSs, it’s difficult to have Git development workflows. With a Jamstack site, everything is a file, making it seamless to use Git.

Chris touches on some of these points in a deep-dive comparison between Jamstack and WordPress. He also compares the reasons for choosing a Jamstack architecture versus a server-side one in “Static or Not?”.

Let’s use these pillars to evaluate Jamstack use cases.

Where is the edge of static and Jamstack?

Now that we have the basics of static and Jamstack, let’s dive in and see what lies at the edge of each definition. We have four categories each edge case can fall under.

  • Static – This strictly adheres to the definition of static.
  • Basically static – While not precisely static, most people would call it a static site.
  • Jamstack – A static frontend decoupled from a dynamic backend.
  • Dynamic – Renders web pages on-demand.

Many of these use cases can be placed in multiple categories. In this exercise, we’re putting them in the most restrictive category they fit.

JavaScript interaction: Static

Let’s start with an easy one. I have a static site that uses JavaScript to create a slideshow of images.

The HTML page, JavaScript, and images are all static files. All of the HTML manipulation required for the slideshow to function happens in the browser with no external influence.

Cookies: Static

I have a static site that adds a banner to the top of the page using JavaScript if a cookie exists. A cookie is just a header. The rest of the files are static.
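
As a sketch of that pattern (the cookie name here is made up):

// Runs in the browser on an otherwise fully static page.
if (document.cookie.split('; ').includes('returning_visitor=true')) {
  const banner = document.createElement('div');
  banner.textContent = 'Welcome back!';
  document.body.prepend(banner);
}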

External assets: Basically Static

On a web page, we can load images or JavaScript from an external source. This external source may generate these assets dynamically on request. Would that mean we have a dynamic site?

Most people, including myself, would consider this a static site because it basically is. But if we’re strict to the definition, it doesn’t fit the bill. Having any part of the page generated dynamically defiles the sacred harmony of static.

iFrames: Basically Static

An inline frame allows you to embed an HTML page within another HTML page. iFrames are commonly used for embedding Google Maps, Facebook Like buttons, and YouTube videos on a webpage.

Again, most people would still consider this a static site. However, these embeds are almost always from a dynamically-generated source.

Forms: Basically Static

A static site can undoubtedly have a form on it. The dilemma comes when you submit it. If you want to do something with the data, you almost certainly need a dynamic back-end. There are plenty of form submission services you can use as the action for your form.

I can see two ways to argue this:

  1. You’re submitting a form to an external website, and it happens to redirect back afterward. This separation means the definition of static remains intact.
  2. This external service is a core workflow on your website, so the definition of static no longer works.

In reality, most people would still consider this a static site.
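
For example, a form like this can live on a completely static page; the action just points at whatever external form-handling service you use (the URL here is a placeholder):

<form action="https://forms.example.com/submit" method="POST">
  <label for="email">Email</label>
  <input type="email" id="email" name="email">
  <button type="submit">Subscribe</button>
</form>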

Ajax requests: Jamstack

An Ajax request allows a developer to request data from an external source without reloading the page. We’re in the same boat as the above situations of relying on a third party. It’s possible the endpoint for the Ajax call is a static JSON file, but it’s more likely that it’s dynamically-generated.

The nature of how Ajax data is typically used on a website pushes it past a static website into Jamstack territory. It fits well with Jamstack as you can have a site where you prerender everything you can, then use Ajax to layer on any dynamic functionality or content on the site.
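
A sketch of that layering (the endpoint is hypothetical): the page itself is prerendered, and only the dynamic bit is requested at runtime.

fetch('https://api.example.com/stock-levels')
  .then((response) => response.json())
  .then((data) => {
    // Drop the live value into the otherwise prerendered page.
    document.querySelector('#stock').textContent = data.count + ' left in stock';
  });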

Embedded eCommerce: Jamstack

There are services that allow you to add eCommerce, even to static websites. Behind the scenes, they’re essentially making Ajax requests to manage items in a shopping cart and collect payment details.

Single page application (SPA): Jamstack

The title alone puts it out of static site contention. A SPA uses Ajax calls to request data. The presentation layer lives entirely in the front end, making it Jamtastic.

Ajax call to a serverless function: Jamstack

Whether the endpoint of an Ajax call is serverless with something like AWS Lambda, goes to your Kubernetes clustered Node.js back-end, or a simple PHP back-end, it doesn’t matter. The key for Jamstack is the front end is independent of the back end.

Reverse proxy in front of a webserver: Static

Adding a reverse proxy in front of the webserver for a static site must make it dynamic, right? Well, not so fast. While a proxy is software that adds a dynamic element to the network, as long as the file on the server is precisely the file the browser receives, it’s still static.

A webserver, modem, and every piece of network infrastructure in between are running software. If adding a proxy makes a static site dynamic, then nothing is static.

CDN: Static

A CDN is a globally-distributed reverse proxy, so it falls into the same category as a reverse proxy. CDNs often add their own headers. This still doesn’t impact the prestigious static status as the headers aren’t part of the file sitting on the server’s hard drive.

CDN in front of a dynamic site with a 200-year cache expiration time: Dynamic

OK, 200 years is a long expiry time, I’ll give you that. There are two reasons this is neither a static nor Jamstack site:

  1. The first request isn’t cached, so it generates on demand.
  2. CDNs aren’t designed for persistent storage. If, after one week, you’ve only had five hits on your website, the CDN might purge your web page from the cache. It can always retrieve the web page from the origin server, which would dynamically render the response.

WordPress with a static output: Static

Using a WordPress plugin like WP2Static lets you create and manage your website in WordPress and output a static website whenever something changes.

When you do this, the files the browser requests already exist on the webserver, making it a static website—a subtle but important distinction from having a CDN in front of a dynamic site.

Edge computing: Dynamic

Many companies are now offering the ability to run dynamic code at the edge of a CDN. It’s a powerful concept because you can have dynamic functionality without adding latency to the user. You can even use edge computation to manipulate HTML before sending it to the client.

It comes down to how you’re using edge functions. You could use an edge function to add a header to particular requests. I would argue this is still a static site. Push much beyond this, where you’re manipulating the HTML, and you’ve crossed the dynamic boundary.

It’s hard to argue it’s a Jamstack site as it doesn’t adhere to some of the fundamental benefits: scale, maintainability, and portability. Now, you have a piece of your core infrastructure that’s changing HTML on every request, and it will only work on that particular hosting infrastructure. That’s getting pretty far away from the blissful simplicity of a static site.

One of the elegant things about Jamstack is the front end and back end are decoupled. The backend is made up of APIs that output data. They don’t know or care how the data is used. The front end is the presentation layer. It knows where to get dynamic data from and how to render it. When you break this separation of concerns, you’ve crossed into a dynamic world.

Distributed Persistent Rendering (DPR): Dynamic

DPR is a strategy to reduce long build times on large static site generator (SSG) sites. The idea is the SSG builds a subset of the most popular pages. For the rest of the pages, the SSG builds them on-demand the first time they’re requested and saves them to persistent storage. After the initial request, the page behaves precisely like the rest of the built static pages.

Long build times limit large-scale use cases from choosing Jamstack. If all the SSG tooling were GoLang-based, we probably wouldn’t need DPR. However, that’s not the direction most Jamstack tooling has taken, and build performance can be excruciatingly long on big websites.

DPR is a means to an end and a necessity for Jamstack to grow. While it allows you to use Jamstack workflows on massive websites, ironically, I don’t think you can call a site using DPR a Jamstack site. Running software on-demand to generate a web page certainly sounds dynamicy. After the first request, a page served using DPR is a static page which makes DPR “more static” than putting a CDN in front of a dynamic site. However, it’s still a dynamic site as there isn’t a separation between frontend and backend, and it’s not portable, one of the pillars of a Jamstack site.

Incremental Static Regeneration (ISR): Dynamic

ISR is a similar but subtly different strategy to DPR to reduce long build times on large SSG sites. The difference is you can revalidate individual pages periodically to mimic a dynamic site without doing an entire site build.

Requests to a page without a cached version fall back to a stale version of that page or a generic loading page.

Again, it’s an exciting technology that expands what you can do with Jamstack workflows, but dynamically generating a page on-demand sounds like something a dynamic site would do.

Flat file CMS: Dynamic

A flat file CMS uses text files for content rather than a database. While flat file CMSs remove a dynamic element from the stack, it’s still dynamically rendering the response.

The lines have been drawn

Exploring and debating these edge cases gives us a better understanding of the limits of all of these terms. The point of this exercise isn’t to be dogmatic about creating static or Jamstack websites. It’s to give us a common language to talk about the tradeoffs you make as you cross the boundary from one concept to another.

There’s absolutely nothing wrong with tradeoffs either. Not everything can be a purely static website. In many cases, the trade-offs make sense. For example, let’s say the front end needs to know the country of the visitor. There are two ways to do this:

  1. On page load, perform an Ajax call to query the country from an API. (Jamstack)
  2. Use an edge function to dynamically insert a country code into the HTML on response. (Dynamic)

If having the country code is a nice-to-have and the web page doesn’t need it immediately, then the first approach is a good option. The page can be static and the API call can fail gracefully if it doesn’t work. However, if the country code is required for the page, dynamically adding it using an edge function might make more sense. It’ll be faster as you don’t need to perform a second request/response cycle.
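
A sketch of that first approach (the geolocation endpoint is hypothetical); note how it fails gracefully:

fetch('https://geo.example.com/country')
  .then((response) => response.json())
  .then(({ countryCode }) => {
    document.querySelector('#country').textContent = countryCode;
  })
  .catch(() => {
    // Nice-to-have only: the static page still works without it.
  });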

The key is understanding the problem you’re solving and thinking through the trade-offs you’re making with different approaches. You might end up with the majority of your site Jamstack and a portion dynamic. That’s totally fine and might be necessary for your use case. Typically, the closer you can get to static, the faster, more secure, and more scalable your site will be.

This is only the beginning of the discussion, and I’d love to hear your take. Where would you draw the lines? What do static and Jamstack mean to you? Are you sitting on a chair or stool right now?


Napkin

Css Tricks - Wed, 08/11/2021 - 4:31am

We took a surface level look at Pipedream the other day, which really does look cool. It’s like a much more modern and fancy version of what Yahoo Pipes was. A better comparison might be Zapier, except you write code (if you want to) to make easy-to-build cloud functions that can be triggered by anything from RSS to HTTP requests to Slack messages. I wouldn’t say Pipedream itself is complicated to learn (although, admittedly, I haven’t exactly dug deep), but it does embrace complexity. Lots of inputs, lots of processing possibilities, and lots of outputs. Unlimited combinations, you might say.

I saw and bookmarked Napkin.io the other day, which, so far (as it’s brand new) seems to push away the complexity.

Computing tools should be made for humans. They should allow us to be more creative, more free, and more inspired. We believe everyone should have access to computing and tap into its full potential.

We’re making Napkin to change the status quo, to build a new kind of tool – a tool that gets out of your way, that lets you code, and that’s a joy to use.

Philosophy

It’s like a cleaner version of how I remember Webtask. You write a function and… that’s it. It’s available at a URL you can hit.

Each function has environment variables, so you can chuck API keys in there for proxying, auth if you need it, logs for debugging, plus you can write in Node or Python. It’s a healthy amount of features, with more on the way, but it really does feel like embracing simplicity rather than complexity.



View Source (on Mobile)

Css Tricks - Tue, 08/10/2021 - 11:26am

Have you ever wished you could see the HTML source of a web page while on a mobile browser, which generally doesn’t offer that feature? If you have a desktop machine around, there are ways, but what I mean is getting the source without anything but the device itself.

The little View Source tool by Neatnik does the trick.

You enter the URL in the little bar to see the source of that URL. Or add the URL to the tool’s URL itself to link right to it. Here’s CSS-Tricks (without line wrapping and tidied up!):



Research for Embedded GUI Design: What You Need to Know

Usability Geek - Tue, 08/10/2021 - 8:30am
When it comes to embedded GUI, the device you’re designing for shapes the entire process. Its technical and contextual characteristics define the bounds between which the designer must work to...

Responsible Markdown in Next.js

Css Tricks - Tue, 08/10/2021 - 4:55am

Markdown truly is a great format. It’s close enough to plain text so that anyone can quickly learn it, and it’s structured enough that it can be parsed and eventually converted to you name it.

That being said: parsing, processing, enhancing, and converting Markdown needs code. Shipping all that code in the client comes at a cost. It’s not huge per se, but it’s still a few dozen kilobytes of code that are used only to deal with Markdown and nothing else.

In this article, I want to explain how to keep Markdown out of the client in a Next.js application, using the Unified/Remark ecosystem (genuinely not sure which name to use, this is all super confusing).

General idea

The idea is to only use Markdown in the getStaticProps functions from Next.js so this is done during a build (or in a Next serverless function if using Vercel’s incremental builds), but never in the client. I guess getServerSideProps would also be fine, but I think getStaticProps is more likely to be the common use case.

This would return an AST (Abstract Syntax Tree, which is to say a big nested object describing our content) resulting from parsing and processing the Markdown content, and the client would only be responsible for rendering that AST into React components.

I guess we could even render the Markdown as HTML directly in getStaticProps and return that to render with dangerouslySetInnerHTML, but we’re not that kind of people. Security matters. And also, flexibility of rendering Markdown the way we want with our components instead of it rendering as plain HTML. Seriously folks, do not do that. 😅

export const getStaticProps = async () => {
  // Get the Markdown content from somewhere, like a CMS or whatnot. It doesn’t
  // matter for the sake of this article, really. It could also be read from a
  // file.
  const markdown = await getMarkdownContentFromSomewhere()
  const ast = parseMarkdown(markdown)

  return { props: { ast } }
}

const Page = props => {
  // This would usually have your layout and whatnot as well, but omitted here
  // for sake of simplicity of course.
  return <MarkdownRenderer ast={props.ast} />
}

export default Page

Parsing Markdown

We are going to use the Unified/Remark ecosystem. We need to install unified and remark-parse and that’s about it. Parsing the Markdown itself is relatively straightforward:

import unified from 'unified'
import markdown from 'remark-parse'

const parseMarkdown = content => unified().use(markdown).parse(content)

export default parseMarkdown

Now, what took me a long while to understand is why my extra plugins, like remark-prism or remark-slug, did not work like this. This is because the .parse(..) method from Unified does not process the AST with plugins. As the name suggests, it only parses the string of Markdown content into a tree.

If we want Unified to apply our plugins, we need Unified to go through what they call the “run” phase. Normally, this is done by using the .process(..) method instead of the .parse(..) method. Unfortunately, .process(..) not only parses Markdown and applies plugins, but also stringifies the AST into another format (like HTML via remark-html, or JSX with remark-react). And this is not what we want, as we want to preserve the AST, but after it’s been processed by plugins.

| ........................ process ........................... |
| .......... parse ... | ... run ... | ... stringify ..........|

          +--------+                     +----------+
Input ->- | Parser | ->- Syntax Tree ->- | Compiler | ->- Output
          +--------+          |          +----------+
                              X
                              |
                       +--------------+
                       | Transformers |
                       +--------------+

So what we need to do is run both the parsing and running phases, but not the stringifying phase. Unified does not provide a method to do these 2 out of 3 phases, but it provides individual methods for every phase, so we can do it manually:

import unified from 'unified'
import markdown from 'remark-parse'
import prism from 'remark-prism'

const parseMarkdown = content => {
  const engine = unified().use(markdown).use(prism)
  const ast = engine.parse(content)

  // Unified‘s *process* contains 3 distinct phases: parsing, running and
  // stringifying. We do not want to go through the stringifying phase, since we
  // want to preserve an AST, so we cannot call `.process(..)`. Calling
  // `.parse(..)` is not enough though as plugins (so Prism) are executed during
  // the running phase. So we need to manually call the run phase (synchronously
  // for simplicity).
  // See: https://github.com/unifiedjs/unified#description
  return engine.runSync(ast)
}

Tada! We parsed our Markdown into a syntax tree. And then we ran our plugins on that tree (done here synchronously for sake of simplicity, but you could use .run(..) to do it asynchronously). But we did not convert our tree into some other syntax like HTML or JSX. We can do that ourselves, in the render.
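
As an aside, the asynchronous version is nearly identical, since .run(..) returns a promise when it isn’t given a callback; it could look something like this:

const parseMarkdown = async content => {
  const engine = unified().use(markdown).use(prism)

  // Same parse + run phases as before, just awaited by the caller.
  return engine.run(engine.parse(content))
}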

Rendering Markdown

Now that we have our cool tree at the ready, we can render it the way we intend to. Let’s have a MarkdownRenderer component that receives the tree as an ast prop, and renders it all with React components.

const getComponent = node => {
  switch (node.type) {
    case 'root':
      return React.Fragment
    case 'paragraph':
      return 'p'
    case 'emphasis':
      return 'em'
    case 'heading':
      return ({ children, depth = 2 }) => {
        const Heading = `h${depth}`
        return <Heading>{children}</Heading>
      }
    /* Handle all types here … */
    default:
      console.log('Unhandled node type', node)
      return React.Fragment
  }
}

const Node = node => {
  const Component = getComponent(node)
  const { children } = node

  return children ? (
    <Component {...node}>
      {children.map((child, index) => (
        <Node key={index} {...child} />
      ))}
    </Component>
  ) : (
    <Component {...node} />
  )
}

const MarkdownRenderer = props => <Node {...props.ast} />

export default React.memo(MarkdownRenderer)

Most of the logic of our renderer lives in the Node component. It finds out what to render based on the type key of the AST node (this is our getComponent method handling every type of node), and then renders it. If the node has children, it recursively goes into the children; otherwise it just renders the component as a final leaf.

Cleaning up the tree

Depending on which Remark plugins we use, we might encounter the following problem when trying to render our page:

Error: Error serializing .content[0].content.children[3].data.hChildren[0].data.hChildren[0].data.hChildren[0].data.hChildren[0].data.hName returned from getStaticProps in “/”. Reason: undefined cannot be serialized as JSON. Please use null or omit this value.

This happens because our AST contains keys whose values are undefined, which is not something that can be safely serialized as JSON. Next gives us the solution: either we omit the value entirely, or if we need it somewhat, replace it with null.

We’re not going to fix every path by hand though, so we need to walk that AST recursively and clean it up. I found out that this happened when using remark-prism, a plugin to enable syntax highlighting for code blocks. The plugin indeed adds a [data] object to nodes.

What we can do is walk our AST before returning it to clean up these nodes:

const cleanNode = node => {
  if (node.value === undefined) delete node.value
  if (node.tagName === undefined) delete node.tagName
  if (node.data) {
    delete node.data.hName
    delete node.data.hChildren
    delete node.data.hProperties
  }
  if (node.children) node.children.forEach(cleanNode)

  return node
}

const parseMarkdown = content => {
  const engine = unified().use(markdown).use(prism)
  const ast = engine.parse(content)
  const processedAst = engine.runSync(ast)

  cleanNode(processedAst)

  return processedAst
}

One last thing we can do to ship less data to the client is remove the position object which exists on every single node and holds the original position in the Markdown string. It’s not a big object (it has only two keys), but when the tree gets big, it adds up quickly.

const cleanNode = node => {
  delete node.position

  // … the rest of the cleaning logic from above stays the same
}

Wrapping up

That’s it folks! We managed to restrict Markdown handling to the build-/server-side code so we don’t ship a Markdown runtime to the browser, which is unnecessarily costly. We pass a tree of data to the client, which we can walk and convert into whatever React components we want.

I hope this helps. :)


WooCommerce With Apple Pay and Google Pay

Css Tricks - Tue, 08/10/2021 - 4:54am

(This is a sponsored post.)

Got a WooCommerce store? It behooves you to offer a variety of payment methods. Just anecdotally, I’m sure both you and I have been annoyed and have even abandoned purchases when a merchant, online or otherwise, doesn’t take the payment method we want to pay with. That’s just straight-up lost sales for the merchant. But you don’t have to entirely trust anecdotal evidence; there is data you can pore over suggesting 7% of abandonment is from missing payment methods.

I’d suggest, at a minimum, you take credit cards and PayPal. There are a variety of payment gateways you can explore (and it’s worth doing so), including a number that take credit cards. The best bet there is WooCommerce Payments — supported in many big countries. It’s Stripe-backed, so it’s a lot like using the Stripe gateway anyway, except way better as it’s loaded with useful features like the fact that you manage all your payments directly in your WordPress dashboard, and Instant Deposits.

The PayPal plugin is free, so that’s kind of a no-brainer, and I’m just talking the basic integration that kicks people over to PayPal.com to pay. Some people like that, as it lets them use their PayPal account online where they may already carry a balance for online purchases and transfers.

The very next step? Apple Pay and Google Pay. Why? Like PayPal, some people strongly prefer it (including me) because of how quick and familiar it makes the checkout process. The Apple Pay and Google Pay functionality in WooCommerce goes so far as to even allow skipping the whole traditional cart and checkout process. That might allow you to make up even more than that 7% based on improved UX.

How does Apple Pay and Google Pay work on WooCommerce? Well if you’re already using WooCommerce Payments, like you should, you’re already almost there.

Enabling Apple Pay and Google Pay on WooCommerce

Apple Pay is supported via the Stripe plugin or the Square plugin, but I’d say it’s easiest with WooCommerce Payments. Under Settings > Payments, you’ll see a checkbox for “Enable express checkouts” — flip that on and you’ll be enabling both Apple Pay and Google Pay — and will have an opportunity to pick where you want them to appear.

There are a handful of prerequisites, like having an HTTPS site, but with eCommerce in general, that is not optional and you’ve probably already got it in place.

One thing I experienced when activating it is this warning:

I was able to download the domain association file from the Stripe docs, give it to my WordPress host (Flywheel), and they manually installed it for me and it worked fine.

No “account”

With PayPal, you need a PayPal account for yourself to make it work. That’s not the case with Apple Pay and Google Pay where you don’t have an account and they don’t keep a balance — they just kick that money directly over to WooCommerce Payments and you have access to that money like you would any other WooCommerce Payments transaction.

Example transaction

Here’s an order that came in (I get email notifications for orders):

Notice I can see right in the email that Apple Pay was used.

I can see the order in my dashboard like any other, and have the ability to refund it directly from there and other actions:

I barely even notice it. What payment gateway someone chooses is of little consequence to me once it’s all set up.

The user experience

Apple Pay works on Safari, both on iOS and macOS. If a user both is using one of those browsers and has Apple Pay set up, they’ll see the special buttons show up on your store:

Press that button, and the user sees this immediate checkout step:

The user can change credit cards (that they have set up in Apple Pay), changing shipping address, and then if they approve it, it’s instantly done.

It’s a pretty satisfying user experience, I must say.

Even moreso on a mobile phone, where it feels like things like Apple Pay and Google Pay were really designed to shine. Here’s Apple Pay:

Google Pay works on Android phones nicely, but also works in desktop Chrome.

I did learn one super weird little caveat with Google Pay and desktop Chrome though! Cards that are in your desktop Chrome autofill area in settings that literally say “Google Pay” next to them don’t actually work for the WooCommerce Google Pay buttons. Only credit cards that are kinda manually added in there without that little label work. Just a little thing to be aware of when testing:

This is a rather compelling reason to use WooCommerce for eCommerce. I feel like I got this feature for free. I basically checked a box in settings, and it makes a material positive impact on my business.

The post WooCommerce With Apple Pay and Google Pay appeared first on CSS-Tricks.

CSS Nesting, specificity, and you

Css Tricks - Tue, 08/10/2021 - 4:51am

Here’s Kilian Valkhof on CSS nesting, which isn’t available in browsers yet but will be soon. There are a few differences he notes between CSS nesting and nesting in Sass or Less, though. Take, for example, the following code:

div {
  background: #fff;

  & p {
    color: red;
  }

  border: 1px solid;
}

When CSS nesting lands, that last line border: 1px solid; won’t be applied to the div like it would be in, say, Sass. That’s because with CSS nesting, any styles you want applied to that div have to be written before any nesting styles are written. I think this makes a ton of sense because I tend to enforce that style in any Sass codebases I work on (it’s just much easier to read), but I can imagine people getting confused about this the first time around.
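
For instance (this is just an illustration of the rule, not code from Kilian’s post), the safe way to write the example above is to keep all of the div’s own declarations together, before any nested rules:

div {
  background: #fff;
  border: 1px solid; /* the div's own declarations come first */

  & p {
    color: red; /* nested rules follow */
  }
}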

One of the smaller and, yet for some reason, super exciting things about CSS nesting is how we’ll be able to nest media queries, as Kilian notes, just like this:

body {
  background: red;

  @media (min-width: 40rem) {
    & {
      background: blue;
    }
  }
}

This is very exciting!

Direct Link to Article

The post CSS Nesting, specificity, and you appeared first on CSS-Tricks.

Choice Words about the Upcoming Deprecation of JavaScript Dialogs

Css Tricks - Mon, 08/09/2021 - 11:23am

It might be the very first thing a lot of people learn in JavaScript:

alert("Hello, World");

One day at CodePen, we woke up to a ton of customer support tickets about their Pens being broken, which ultimately boiled down to a new version of Chrome that shipped with alert() ripped out of cross-origin iframes. And all other native “JavaScript Dialogs” like confirm(), prompt() and I-don’t-know-what-else (onbeforeunload?, .htpasswd protected assets?).

Cross-origin iframes are essentially the heart of how CodePen works. You write code, and we execute it for you in an iframe that doesn’t share the same domain as CodePen itself, as the very first line of security defense. We didn’t hear any heads up or anything, but I’m sure the plans were on display.

I tweeted out of dismay. I get that there are potential security concerns here. JavaScript dialogs look the same whether they are triggered by an iframe or not, so apparently it’s confusing-at-best when they’re triggered by an iframe, particularly a cross-origin iframe where the parent page likely has little control. Well, outside of, ya know, a website like CodePen. Chrome cites performance concerns as well, as the nature of these JavaScript dialogs is that they block the main thread when open, which essentially halts everything.

There are all sorts of security and UX-annoyance issues that can come from iframes though. That’s why sandboxing is a thing. I can do this:

<iframe sandbox></iframe>

And that sucker is locked down. If some form tried to submit something in there: nope, won’t work. What if it tries to trigger a download? Nope. Ask for device access? No way. It can’t even load any JavaScript at all. That is unless I let it:

<iframe sandbox="allow-scripts allow-downloads ...etc"></iframe>

So why not an attribute for JavaScript dialogs? Ironically, there already is one: allow-modals. I’m not entirely sure why that isn’t good enough, but as I understand it, nuking JavaScript dialogs in cross-origin iframes is just a stepping stone on the ultimate goal: removing them from the web platform entirely.

Daaaaaang. Entirely? That’s the word. Imagine the number of programming tutorials that will just be outright broken.

For now, even the cross-origin removal is delayed until January 2022, but as far as we know this is going to proceed, and then subsequent steps will happen to remove them entirely. This is spearheaded by Chrome, but the status reports that both Firefox and Safari are on board with the change. Plus, this is a specced change, so I guess we can waggle our fingers literally everywhere here, if you, like me, feel like this wasn’t particularly well-handled.

From what we’ve been told so far, the solution is to use postMessage if you really, absolutely need to keep this functionality for cross-origin iframes. That sends the string the user passes to window.alert up to the parent page and triggers the alert from there (a rough sketch of that approach follows the list below). I’m not the biggest fan here, because:

  1. postMessage is not blocking like JavaScript dialogs are. This changes application flow.
  2. I have to inject code into users’ code for this. This is new technical debt, and it can harm the user’s expected output (e.g. an extra <script> in their HTML has weird implications, like changing what :nth-child and friends select).
  3. I’m generally concerned about passing anything user-generated to a parent to execute. I’m sure there are theoretical ways to do it safely, but XSS attack vectors are always surprising in their ingenuity.
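
For what it’s worth, here is a minimal sketch of the kind of shim that suggestion implies. The message shape and the handlers are my own illustration, not an official API:

// Inside the cross-origin iframe: replace the native dialog
// with a message to the parent page.
window.alert = function (message) {
  window.parent.postMessage({ type: 'alert', message: String(message) }, '*');
};

// On the parent page: listen for that message and trigger the
// dialog from there. Note that this does NOT block the iframe's
// code the way a real alert() does.
window.addEventListener('message', function (event) {
  // A real implementation would check event.origin against
  // a list of trusted origins before doing anything.
  if (event.data && event.data.type === 'alert') {
    window.alert(event.data.message);
  }
});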

Even lower-key suggestions, like window.alert = console.log, have essentially the same issues.

Allow me to hand the mic over to others for their opinions.

Couldn’t the alert be contained to the iframe instead of showing up in the parent window?

Jaden Baptista, Twitter

Yes, please! Doesn’t that solve a big part of this? While making the UX of these dialogs more useful? Put the dang dialogs inside the <iframe>.

“Don’t break the web.” to “Don’t break 90% of the web.” and now “Don’t break the web whose content we agree with.”

Matthew Phillips, Twitter

I respect the desire to get rid of inelegant parts [of the HTML spec] that can be seen as historical mistakes and that cause implementation complexity, but I can’t shake the feeling that the existing use cases are treated with very little respect or curiosity.

Dan Abramov, Twitter

It’s weird to me this is part of the HTML spec, not the JavaScript spec. Right?!

I always thought there was a sort of “prime directive” not to break the web? I’ve literally seen web-based games that used alert as a “pause”, leveraging the blocking nature as a feature. Like: <button onclick="alert('paused')">Pause</button>[.] Funny, but true.

Ben Lesh, Twitter

A metric was cited that only 0.006% of all page views contain a cross-origin iframe that uses these functions, yet:

Seems like a misleading metric for something like confirm(). E.g. if account deletion flow is using confirm() and breaks because of a change to it, this doesn’t mean account deletion flow wasn’t important. It just means people don’t hit it on every session.

Dan Abramov, Twitter

That’s what’s extra concerning to me: alert() is one thing, but confirm() literally returns true or false, meaning it is a logical control structure in a program. Removing that breaks websites, no question. Chris Ferdinandi showed me this little obscure website that uses it:

Speaking of Chris:

The condescending “did you actually read it, it’s so clear” refrain is patronizing AF. It’s the equivalent of “just” or “simply” in developer documentation.

I read it. I didn’t understand it. That’s why I asked someone whose literal job is communicating with developers about changes Chrome makes to the platform.

This is not isolated to one developer at Chrome. The entire message thread where this change was surfaced is filled with folks begging Chrome not to move forward with this proposal because it will break all-the-things.

Chris Ferdinandi, “Google vs. the web”

And here’s Jeremy:

[…] breaking changes don’t happen often on the web. They are—and should be—rare. If that were to change, the web would suffer massively in terms of predictability.

Secondly, the onus is not on web developers to keep track of older features in danger of being deprecated. That’s on the browser makers. I sincerely hope we’re not expected to consult a site called canistilluse.com.

Jeremy Keith, “Foundations”

I’ve painted a pretty bleak picture here. To be fair, there were some tweets with the Yes!! Finally!! vibe, but they didn’t feel like critical assessments to me as much as random Google cheerleading.

Believe it or not, I generally am a fan of Google and think they do a good job of pushing the web forward. I also think it’s appropriate to waggle fingers when I see problems and request they do better. “Better” here means way more developer and user outreach to spell out the situation, way more conversation about the potential implications and transition ideas, and way more openness to bending the course ahead.

The post Choice Words about the Upcoming Deprecation of JavaScript Dialogs appeared first on CSS-Tricks.

The Large, Small, and Dynamic Viewports

Css Tricks - Mon, 08/09/2021 - 10:37am

We’ve got viewport units (e.g. vw, vh, vmin, vmax), and they are mostly pretty great. It’s cool to always have a unit available that is relative to the entire screen. But when you ask people what they want fixed up in CSS, viewport units are always on the list. The problem is that people use them to do things like position important buttons along the bottom of the screen on mobile devices. Do something like that wrong and it might cost you $8 million.

What’s “wrong”? Well, assuming that 100vh is the visible/usable area in the viewport. Whaaaat? Isn’t that the point of those units? There are tricks like this and this, but that’s why people are unhappy. None of that is intuitive and huge mistakes are all too common. Even though Safari 15 is going to make this a little better, I’d say it’s still not particularly intuitive how you have to handle it.

Bramus Van Damme covers that the spec now includes some new values:

  • The “Large Viewport”: lvh / lvw / lvmin / lvmax
  • The “Small Viewport”: svh / svw / svmin / svmax
  • The “Baby Bear Viewport”
  • The “Dynamic Viewport”: dvh / dvw / dvmin / dvmax

It seems to me the dynamic ones are the useful ones, because they will be intuitive: the units that represent the currently usable space, be it large or small.

The Dynamic Viewport is the viewport sized with dynamic consideration of any UA interfaces. It will automatically adjust itself in response to UA interface elements being shown or not: the value will be anything within the limits of 100lvh (maximum) and 100svh (minimum).

Bramus Van Damme, “The Large, Small, and Dynamic Viewports”
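
If the units ship as specced, the bottom-of-the-screen button case reduces to something like this (hypothetical usage: at the time of writing, no browser supports these units yet):

/* 100vh can leave content hidden behind mobile browser chrome; */
/* 100dvh tracks the space that is actually usable as the UA    */
/* interface shows and hides.                                   */
.full-screen-panel {
  height: 100dvh;
}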

Direct Link to Article

The post The Large, Small, and Dynamic Viewports appeared first on CSS-Tricks.

Exploring the CSS Paint API: Image Fragmentation Effect

Css Tricks - Mon, 08/09/2021 - 4:27am

In my previous article, I created a fragmentation effect using CSS mask and custom properties. It was a neat effect but it has one drawback: it uses a lot of CSS code (generated using Sass). This time I am going to redo the same effect but rely on the new Paint API. This drastically reduces the amount of CSS and completely removes the need for Sass.

Here is what we are making. Like in the previous article, only Chrome and Edge support this for now.

CodePen Embed Fallback

See that? No more than five CSS declarations and yet we get a pretty cool hover animation.

What is the Paint API?

The Paint API is part of the Houdini project. Yes, “Houdini” the strange term that everyone is talking about. A lot of articles already cover the theoretical aspect of it, so I won’t bother you with more. If I have to sum it up in a few words, I would simply say: it’s the future of CSS. The Paint API (and the other APIs that fall under the Houdini umbrella) allow us to extend CSS with our own functionalities. We no longer need to wait for the release of new features because we can do it ourselves!

From the specification:

An API for allowing web developers to define a custom CSS <image> with javascript [sic], which will respond to style and size changes.

And from the explainer:

The CSS Paint API is being developed to improve the extensibility of CSS. Specifically this allows developers to write a paint function which allows us to draw directly into an elements [sic] background, border, or content.

I think the idea is pretty clear. We can draw what we want. Let’s start with a very basic demo of background coloration:

CodePen Embed Fallback
  1. We add the paint worklet using CSS.paintWorklet.addModule('your_js_file').
  2. We register a new paint method called draw.
  3. Inside that, we create a paint() function where we do all the work. And guess what? Everything is like working with <canvas>. That ctx is the 2D context, and I simply used some well-known functions to draw a red rectangle covering the whole area.
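
The working code lives in the embedded Pen, so here is a minimal sketch of those three steps (the file name and the red rectangle are arbitrary choices):

/* Step 1, in the page: load the paint worklet. */
CSS.paintWorklet.addModule('draw.js');

/* Steps 2 and 3, in draw.js: register the paint method
   and draw with the canvas-like 2D context. */
registerPaint('draw', class {
  paint(ctx, size) {
    /* a red rectangle covering the whole painted area */
    ctx.fillStyle = 'red';
    ctx.fillRect(0, 0, size.width, size.height);
  }
});

/* And in the CSS: background: paint(draw); */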

This may look unintuitive at first glance, but notice that the main structure is always the same: the three steps above are the “copy/paste” part that you repeat for each project. The real work is the code we write inside the paint() function.

Let’s add a variable:

CodePen Embed Fallback

As you can see, the logic is pretty simple. We define the getter inputProperties with our variables as an array. We add properties as a third parameter to paint() and later we get our variable using properties.get().
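
Again, the full code is in the embedded Pen; stripped down, the pattern looks like this (the variable name is only for illustration):

registerPaint('draw', class {
  /* declare which CSS variables the worklet can read */
  static get inputProperties() {
    return ['--my-color'];
  }
  /* they arrive through the third parameter of paint() */
  paint(ctx, size, properties) {
    /* read the token back with properties.get() */
    ctx.fillStyle = properties.get('--my-color').toString();
    ctx.fillRect(0, 0, size.width, size.height);
  }
});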

That’s it! Now we have everything we need to build our complex fragmentation effect.

Building the mask

You may wonder why we’d reach for the Paint API to create a fragmentation effect. We said it’s a tool to draw images, so how will it allow us to fragment an image?

In the previous article, I made the effect with multiple mask layers, where each one is a square defined with a gradient (remember that a gradient is an image), so we got a kind of matrix and the trick was to adjust the alpha channel of each one individually.

This time, instead of using many gradients, we will define only one custom image for our mask, and that custom image will be handled by our Paint API code.

An example please!

CodePen Embed Fallback

In the above, I have created an image having an opaque color covering the left part and a semi-transparent one covering the right part. Applying this image as a mask gives us the logical result of a half-transparent image.

Now all we need to do is to split our image into more parts. Let’s define two variables and update our code:

CodePen Embed Fallback

The relevant part of the code is the following:

const n = properties.get('--f-n');
const m = properties.get('--f-m');
const w = size.width / n;
const h = size.height / m;

for (var i = 0; i < n; i++) {
  for (var j = 0; j < m; j++) {
    ctx.fillStyle = 'rgba(0,0,0,' + (Math.random()) + ')';
    ctx.fillRect(i * w, j * h, w, h);
  }
}

n and m define the dimensions of our matrix of rectangles, and w and h are the size of each rectangle. Then we have a basic nested for loop that fills each rectangle with a random transparent color.

With a little JavaScript, we get a custom mask that we can easily control by adjusting the CSS variables:

CodePen Embed Fallback

Now, we need to control the alpha channel in order to create the fading effect of each rectangle and build the fragmentation effect.

Let’s introduce a third variable that we use for the alpha channel, and that we change on hover.

CodePen Embed Fallback

We defined a CSS custom property as a <number> that we transition from 1 to 0, and that same property is used to define the alpha channel of our rectangles. Nothing fancy will happen on hover because all the rectangles will fade the same way.

We need a trick to prevent fading of all the rectangles at the same time, instead creating a delay between them. Here is an illustration to explain the idea I am going to use:

The above is showing the alpha animation for two rectangles. First we define a variable L that should be greater than or equal to 1. Then, for each rectangle of our matrix (i.e. for each alpha channel), we perform a transition between X and Y, where X - Y = L, so all of the alpha channels share the same overall duration. X should be greater than or equal to 1, and Y smaller than or equal to 0.

Wait, the alpha value should be in the range [1 0], right?

Yes, it should! And all the tricks that we’re working on rely on that. Above, the alpha is animating from 8 to -2, meaning we have an opaque color in the [8 1] range, a transparent one in the [0 -2] range and an animation within [1 0]. In other words, any value bigger than 1 will have the same effect as 1, and any value smaller than 0 will have the same effect as 0.

Animation within [1 0] will not happen at the same time for both our rectangles. Rectangle 2 will reach [1 0] before Rectangle 1 will. We apply this to all the alpha channels to get our delayed animations.

In our code we will update this:

rgba(0,0,0,'+(o)+')

…to this:

rgba(0,0,0,'+((Math.random()*(l-1) + 1) - (1-o)*l)+')

L is the variable illustrated previously, and O is the value of our CSS variable that transitions from 1 to 0.

When O=1, we have (Math.random()*(l-1) + 1). Considering the fact that the random() function gives us a value within the [0 1] range, the final value will be in the [L 1] range.

When O=0, we have (Math.random()*(l-1) + 1 - l) and a value within the [0 1-L] range.

L is our variable to control the delay.
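
To make the formula concrete, here it is as a tiny standalone function you can poke at in a console (l = 7 is just an example value):

const l = 7; // the delay variable L
const alpha = (o) => Math.random() * (l - 1) + 1 - (1 - o) * l;

alpha(1); // somewhere in [1, 7], i.e. fully opaque
alpha(0); // somewhere in [-6, 0], i.e. fully transparent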

Let’s see this in action:

CodePen Embed Fallback

We are getting closer. We have a cool fragmentation effect, but not the one we saw at the beginning of the article. This one isn’t as smooth.

The issue is related to the random() function. We said that each alpha channel needs to animate between X and Y, so logically those values need to remain the same. But the paint() function is called a bunch of times during the transition, so each time, the random() function gives us different X and Y values for each alpha channel; hence the “random” effect we are getting.

To fix this we need to find a way to store the generated values so they are always the same for each call of the paint() function. Let’s consider a pseudo-random function, a function that always generates the same sequence of values. In other words, we want to control the seed.

Unfortunately, we cannot do this with the JavaScript’s built-in random() function, so like any good developer, let’s pick one up from Stack Overflow:

const mask = 0xffffffff;
const seed = 30; /* update this to change the generated sequence */
let m_w = (123456789 + seed) & mask;
let m_z = (987654321 - seed) & mask;

let random = function () {
  m_z = (36969 * (m_z & 65535) + (m_z >>> 16)) & mask;
  m_w = (18000 * (m_w & 65535) + (m_w >>> 16)) & mask;
  var result = ((m_z << 16) + (m_w & 65535)) >>> 0;
  result /= 4294967296;
  return result;
}

And the result becomes:

CodePen Embed Fallback

We have our fragmentation effect without complex code:

  • a basic nested loop to create NxM rectangles
  • a clever formula for the channel alpha to create the transition delay
  • a ready random() function taken from the Net

That’s it! All you have to do is to apply the mask property to any element and adjust the CSS variables.

Fighting the gaps!

If you play with the above demos you will notice, in some particular cases, strange gaps between the rectangles.

To avoid this, we can extend the area of each rectangle with a small offset.

We update this:

ctx.fillRect(i*w, j*h, w, h);

…with this:

ctx.fillRect(i*w-.5, j*h-.5, w+.5, h+.5);

It creates a small overlap between the rectangles that compensates for the gaps between them. There’s no particular logic to the value 0.5 I used. You can go bigger or smaller based on your use case.

CodePen Embed Fallback

Want more shapes?

Can the above be extended to consider shapes other than rectangles? Sure it can! Let’s not forget that we can use Canvas to draw any kind of shape — unlike pure CSS shapes where we sometimes need some hacky code. Let’s try to build that triangular fragmentation effect.

After searching the web, I found something called Delaunay triangulation. I won’t go into the deep theory behind it, but it’s an algorithm for a set of points to draw connected triangles with specific properties. There are lots of ready-to-use implementations of it, but we’ll go with Delaunator because it’s supposed to be the fastest of the bunch.

We first define a set of points (we will use random() here), then run Delaunator to generate the triangles for us. In this case, we only need one variable that defines the number of points.

const n = properties.get('--f-n');
const o = properties.get('--f-o');
const w = size.width;
const h = size.height;
const l = 7;

/* we always include the corners */
var dots = [[0, 0], [w, 0], [0, h], [w, h]];

/* we generate N random points within the area of the element */
for (var i = 0; i < n; i++) {
  dots.push([random() * w, random() * h]);
}

/* we call Delaunator to generate the triangles */
var delaunay = Delaunator.from(dots);
var triangles = delaunay.triangles;

/* we loop over the triangles' points */
for (var i = 0; i < triangles.length; i += 3) {
  /* we draw the path of each triangle */
  ctx.beginPath();
  ctx.moveTo(dots[triangles[i]][0], dots[triangles[i]][1]);
  ctx.lineTo(dots[triangles[i + 1]][0], dots[triangles[i + 1]][1]);
  ctx.lineTo(dots[triangles[i + 2]][0], dots[triangles[i + 2]][1]);
  ctx.closePath();

  /* the alpha value */
  var alpha = (random() * (l - 1) + 1) - (1 - o) * l;

  /* we fill the area of the triangle with a semi-transparent color */
  ctx.fillStyle = 'rgba(0,0,0,' + alpha + ')';
  /* we also stroke it to fight the gaps */
  ctx.strokeStyle = 'rgba(0,0,0,' + alpha + ')';
  ctx.stroke();
  ctx.fill();
}

I have nothing more to add to the comments in the above code. I simply used some basic JavaScript and Canvas stuff and yet we have a pretty cool effect.

CodePen Embed Fallback

We can make even more shapes! All we have to do is to find an algorithm for it.

I cannot move on without doing the hexagon one!

CodePen Embed Fallback

I took the code from this article written by Izan Pérez Cosano. Our variable is now R, which defines the dimension of one hexagon.

What’s next?

Now that we have built our fragmentation effect, let’s focus on the CSS. Notice that the effect is as simple as changing the opacity value (or the value of whichever property you are working with) of an element on its hover state.

Opacity animation

img {
  opacity: 1;
  transition: opacity 1s;
}
img:hover {
  opacity: 0;
}

Fragmentation effect

img {
  -webkit-mask: paint(fragmentation);
  --f-o: 1;
  transition: --f-o 1s;
}
img:hover {
  --f-o: 0;
}
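
A side note on that transition: it only works because --f-o is registered as a <number> (as noted earlier), since browsers cannot interpolate an unregistered custom property. With the Houdini Properties and Values API, that registration looks something like this:

// Register --f-o as a number so the browser can interpolate it
// during the transition (otherwise the change is instantaneous).
CSS.registerProperty({
  name: '--f-o',
  syntax: '<number>',
  inherits: false,
  initialValue: 1
});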

This means we can easily integrate this kind of effect to create more complex animations. Here are a bunch of ideas!

Responsive image slider

CodePen Embed Fallback

Another version of the same slider:

CodePen Embed Fallback

Noise effect

CodePen Embed Fallback

Loading screen

CodePen Embed Fallback

Card hover effect

CodePen Embed Fallback

That’s a wrap

And all of this is just the tip of the iceberg of what can be achieved using the Paint API. I’ll end with two important points:

  • The Paint API is 90% <canvas>, so the more you know about <canvas>, the more fancy things you can do. Canvas is widely used, which means there’s a bunch of documentation and writing about it to get you up to speed. Hey, here’s one right here on CSS-Tricks!
  • The Paint API removes all the complexity from the CSS side of things. There’s no dealing with complex and hacky code to draw cool stuff. This makes CSS code so much easier to maintain, not to mention less prone to error.

The post Exploring the CSS Paint API: Image Fragmentation Effect appeared first on CSS-Tricks.

Custom properties and @property

QuirksBlog - Wed, 07/21/2021 - 3:18am

You’re reading a failed article. I hoped to write about @property and how it is useful for extending CSS inheritance considerably in many different circumstances. Alas, I failed. @property turns out to be very useful for font sizes, but does not even approach the general applicability I hoped for.

Grandparent-inheriting

It all started when I commented on what I thought was an interesting but theoretical idea by Lea Verou: what if elements could inherit the font size of not their parent, but their grandparent? Something like this:

div.grandparent {
  /* font-size could be anything */
}
div.parent {
  font-size: 0.4em;
}
div.child {
  font-size: [inherit from grandparent in some sort of way];
  font-size: [yes, you could do 2.5em to restore the grandparent's font size];
  font-size: [but that's not inheriting, it's just reversing a calculation];
  font-size: [and it will not work if the parent's font size is also unknown];
}

Lea told me this wasn’t a vague idea, but something that can be done right now. I was quite surprised — and I assume many of my readers are as well — and asked for more information. So she wrote Inherit ancestor font-size, for fun and profit, where she explained how the new Houdini @property can be used to do this.

This was seriously cool. Also, I picked up a few interesting bits about how CSS custom properties and Houdini @property work. I decided to explain these tricky bits in simple terms — mostly because I know that by writing an explanation I myself will understand them better — and to suggest other possibilities for using Lea’s idea.

Alas, that last objective is where I failed. Lea’s idea can only be used for font sizes. That’s an important use case, but I had hoped for more. The reasons why it doesn’t work elsewhere are instructive, though.

Tokens and values

Let’s consider CSS custom properties. What if we store the grandparent’s font size in a custom property and use that in the child?

div.grandparent {
  /* font-size could be anything */
  --myFontSize: 1em;
}
div.parent {
  font-size: 0.4em;
}
div.child {
  font-size: var(--myFontSize);
  /* hey, that's the grandparent's font size, isn't it? */
}

This does not work. The child will have the same font size as the parent, and ignore the grandparent. In order to understand why we need to understand how custom properties work. What does this line of CSS do?

--myFontSize: 1em;

It sets a custom property that we can use later. Well duh.

Sure. But what value does this custom property have?

... errr ... 1em?

Nope. The answer is: none. That’s why the code example doesn’t work.

When they are defined, custom properties do not have a value or a type. All that you ordered the browsers to do is to store a token in the variable --myFontSize.

This took me a while to wrap my head around, so let’s go a bit deeper. What is a token? Let’s briefly switch to JavaScript to explain.

let myVar = 10;

What’s the value of myVar in this line? I do not mean: what value is stored in the variable myVar, but: what value does the character sequence myVar have in that line of code? And what type?

Well, none. Duh. It’s not a variable or value, it’s just a token that the JavaScript engine interprets as “allow me to access and change a specific variable” whenever you type it.

CSS custom properties also hold such tokens. They do not have any intrinsic meaning. Instead, they acquire meaning when they are interpreted by the CSS engine in a certain context, just as the myVar token is in the JavaScript example.

So the CSS custom property contains the token 1em without any value, without any type, without any meaning — as yet.

You can use pretty much any bunch of characters in a custom property definition. Browsers make no assumptions about their validity or usefulness because they don’t yet know what you want to do with the token. So this, too, is a perfectly fine CSS custom property:

--myEgoTrip: ppk;

Browsers shrug, create the custom property, and store the indicated token. The fact that ppk is invalid in all CSS contexts is irrelevant: we haven’t tried to use it yet.

It’s when you actually use the custom property that values and types are assigned. So let’s use it:

background-color: var(--myEgoTrip);

Now the CSS parser takes the tokens we defined earlier and replaces the custom property with them:

background-color: ppk;

And only NOW the tokens are read and interpreted. In this case that results in an error: ppk is not a valid value for background-color. So the CSS declaration as a whole is invalid and nothing happens — well, technically it gets the unset value, but the net result is the same. The custom property itself is still perfectly valid, though.

The same happens in our original code example:

div.grandparent {
  /* font-size could be anything */
  --myFontSize: 1em; /* just a token; no value, no meaning */
}
div.parent {
  font-size: 0.4em;
}
div.child {
  font-size: var(--myFontSize);
  /* becomes */
  font-size: 1em; /* hey, this is valid CSS! */
  /* Right, you obviously want the font size to be the same as the parent's */
  /* Sure thing, here you go */
}

In div.child the tokens are read and interpreted by the CSS parser. This results in a declaration font-size: 1em;. This is perfectly valid CSS, and the browsers duly note that the font size of this element should be 1em.

font-size: 1em is relative. To what? Well, to the parent’s font size, of course. Duh. That’s how CSS font-size works.

So now the font size of the child becomes the same as its parent’s, and browsers will proudly display the child element’s text in the same font size as the parent element’s while ignoring the grandparent.

This is not what we wanted to achieve, though. We want the grandparent’s font size. Custom properties — by themselves — don’t do what we want. We have to find another solution.

@property

Lea’s article explains that other solution. We have to use the Houdini @property rule.

@property --myFontSize {
  syntax: "<length>";
  initial-value: 0;
  inherits: true;
}
div {
  border: 1px solid;
  padding: 1em;
}
div.grandparent {
  /* font-size could be anything */
  --myFontSize: 1em;
}
div.parent {
  font-size: 0.4em;
}
div.child {
  font-size: var(--myFontSize);
}

Now it works. Wut? Yep — though only in Chrome so far.


What black magic is this?

Adding the @property rule changes the custom property --myFontSize from a bunch of tokens without meaning to an actual value. Moreover, this value is calculated in the context it is defined in — the grandfather — so that the 1em value now means 100% of the font size of the grandfather. When we use it in the child it still has this value, and therefore the child gets the same font size as the grandfather, which is exactly what we want to achieve.

(The variable uses a value from the context it’s defined in, and not the context it’s executed in. If, like me, you have a grounding in basic JavaScript you may hear “closures!” in the back of your mind. While they are not the same, and you shouldn’t take this apparent equivalency too far, this notion still helped me understand. Maybe it’ll help you as well.)

Unfortunately I do not quite understand what I’m doing here, though I can assure you the code snippet works in Chrome — and will likely work in the other browsers once they support @property.

Mission completed — just don’t ask me how.

Syntax

You have to get the definition right. You need all three lines in the @property rule. See also the specification and the MDN page.

@property --myFontSize {
  syntax: "<length>";
  initial-value: 0;
  inherits: true;
}

The syntax property tells browsers what kind of property it is and makes parsing it easier. Here is the list of possible values for syntax, and in 99% of the cases one of these values is what you need.

You could also create your own syntax, e.g. syntax: "ppk | <length>"

Now the ppk keyword and any sort of length is allowed as a value.

Note that percentages are not lengths — one of the many things I found out during the writing of this article. Still, they are so common that a special value for “length that may be a percentage or may be calculated using percentages” was created:

syntax: "<length-percentage>"

Finally, one special case you need to know about is this one:

syntax: "*"

MDN calls this a universal selector, but it isn’t, really. Instead, it means “I don’t know what syntax we’re going to use” and it tells browsers not to attempt to interpret the custom property. In our case that would be counterproductive: we definitely want the 1em to be interpreted. So our example doesn’t work with syntax: "*".

initial-value and inherits

An initial-value property is required for any syntax value that is not a *. Here that’s simple: just give it an initial value of 0 — or 16px, or any absolute value. The value doesn’t really matter since we’re going to overrule it anyway. Still, a relative value such as 1em is not allowed: browsers don’t know what the 1em would be relative to and reject it as an initial value.

Finally, inherits: true specifies that the custom property value can be inherited. We definitely want the computed 1em value to be inherited by the child — that’s the entire point of this experiment. So we carefully set this flag to true.

Other use cases

So far this article merely rehashed parts of Lea’s. Since I’m not in the habit of rehashing other people’s articles my original plan was to add at least one other use case. Alas, I failed, though Lea was kind enough to explain why each of my ideas fails.

Percentage of what?

Could we grandfather-inherit percentual margins and paddings? They are relative to the width of the parent of the element you define them on, and I was wondering if it might be useful to send the grandparent’s margin on to the child just like the font size. Something like this:

@property --myMargin {
  syntax: "<length-percentage>";
  initial-value: 0;
  inherits: true;
}
div.grandparent {
  --myMargin: 25%;
  margin-left: var(--myMargin);
}
div.parent {
  font-size: 0.4em;
}
div.child {
  margin-left: var(--myMargin);
  /* should now be 25% of the width of the grandfather's parent */
  /* but isn't */
}

Alas, this does not work. Browsers cannot resolve the 25% in the context of the grandparent, as they did with the 1em, because they don’t know what to do.

The most important trick for using percentages in CSS is to always ask yourself: “percentage of WHAT?”

That’s exactly what browsers do when they encounter this @property definition. 25% of what? The parent’s font size? Or the parent’s width? (This is the correct answer, but browsers have no way of knowing that.) Or maybe the width of the element itself, for use in background-position?

Since browsers cannot figure out what the percentage is relative to they do nothing: the custom property gets the initial value of 0 and the grandfather-inheritance fails.

Colours

Another idea I had was using this trick for the grandfather’s text colour. What if we store currentColor, which always has the value of the element’s text colour, and send it on to the grandchild? Something like this:

@property --myColor {
  syntax: "<color>";
  initial-value: black;
  inherits: true;
}
div.grandparent {
  /* color unknown */
  --myColor: currentColor;
}
div.parent {
  color: red;
}
div.child {
  color: var(--myColor);
  /* should now have the same color as the grandfather */
  /* but doesn't */
}

Alas, this does not work either. When the @property blocks are evaluated, and 1em is calculated, currentColor specifically is not touched because it is used as an initial (default) value for some inherited SVG and CSS properties such as fill. Unfortunately I do not fully understand what’s going on, but Tab says this behaviour is necessary, so it is.

Pity, but such is life. Especially when you’re working with new CSS functionalities.

Conclusion

So I tried to find more possibilities for using Lea’s trick, but failed. Relative units are fairly sparse, especially when you leave percentages out of the equation. em and related units such as rem are the only ones, as far as I can see.

So we’re left with a very useful trick for font sizes. You should use it when you need it (bearing in mind that right now it’s only supported in Chromium-based browsers), but extending it to other declarations is not possible at the moment.

Many thanks to Lea Verou and Tab Atkins for reviewing and correcting an earlier draft of this article.

Let’s talk about money

QuirksBlog - Tue, 06/29/2021 - 1:23am

Let’s talk about money!

Let’s talk about how hard it is to pay small amounts online to people whose work you like and who could really use a bit of income. Let’s talk about how Coil aims to change that.

Taking out a subscription to a website is moderately easy, but the person you want to pay must have enabled subscriptions. Besides, do you want to purchase a full subscription in order to read one or two articles per month?

Sending a one-time donation is pretty easy as well, but, again, the site owner must have enabled them. And even then it just gives them ad-hoc amounts that they cannot depend on.

Then there’s Patreon and Kickstarter and similar systems, but Patreon is essentially a subscription service while Kickstarter is essentially a one-time donation service, except that both keep part of the money you donate.

And then there’s ads ... Do we want small content creators to remain dependent on ads and thus support the entire ad ecosystem? I, personally, would like to get rid of them.

The problem today is that all non-ad-based systems require you to make conscious decisions to support someone — and even if you’re serious about supporting them you may forget to send in a monthly donation or to renew your subscription. It sort-of works, but the user experience can be improved rather dramatically.

That’s where Coil and the Web Monetization Standard come in.

Web Monetization

The idea behind Coil is that you pay for what you consume easily and automatically. It’s not a subscription - you only pay for what you consume. It’s not a one-time donation, either - you always pay when you consume.

Payments occur automatically when you visit a website that is also subscribed to Coil, and the amount you pay to a single site owner depends on the time you spend on the site. Coil does not retain any of your money, either — everything goes to the people you support.

In this series of four articles we’ll take a closer look at the architecture of the current Coil implementation, how to work with it right now, the proposed standard, and what’s going to happen in the future.

Overview

So how does Coil work right now?

Both the payer and the payee need a Coil account to send and receive money. The payee has to add a <meta> tag with a Coil payment pointer to all pages they want to monetize. The payer has to install the Coil extension in their browsers. You can see this extension as a polyfill. In the future web monetization will, I hope, be supported natively in all browsers.
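
For reference, the tag itself is tiny. Per the Web Monetization draft it is a single meta element whose content is your payment pointer (the pointer below is a made-up example):

<meta name="monetization" content="$wallet.example.com/alice">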

Once that’s done the process works pretty much automatically. The extension searches for the <meta> tag on any site the user visits. If it finds one it starts a payment stream from payer to payee that continues for as long as the payer stays on the site.

The payee can use the JavaScript API to interact with the monetization stream. For instance, they can show extra content to paying users, or keep track of how much a user paid so far. Unfortunately these functionalities require JavaScript, and the hiding of content is fairly easy to work around. Thus it is not yet suited for serious business purposes, especially in web development circles.
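
To give a flavor of that API (a sketch based on the Web Monetization draft; part 3 of this series covers the details), a page can watch the stream through events on document.monetization:

// Only present when the visitor has a Web Monetization
// provider (such as the Coil extension) active.
if (document.monetization) {
  document.monetization.addEventListener('monetizationstart', function () {
    // The payment stream is running: reveal bonus content, etc.
    document.body.classList.add('paying-visitor');
  });
  document.monetization.addEventListener('monetizationprogress', function (event) {
    // Each event carries the amount and currency of one payment tick.
    console.log(event.detail.amount, event.detail.assetCode);
  });
}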

This is one example of how the current system is still a bit rough around the edges. You’ll find more examples in the subsequent articles. Until the time browsers support the standard natively and you can determine your visitors’ monetization status server-side these rough bits will continue to exist. For the moment we will have to work with the system we have.

This article series will discuss all topics we touched on in more detail.

Start now!

For too long we have accepted free content as our birthright, without considering the needs of the people who create it. This becomes even more curious for articles and documentation that are absolutely vital to our work as web developers.

Take a look at this list of currently-monetized web developer sites. Chances are you’ll find a few people whose work you used in the past. Don’t they deserve your direct support?

Free content is not a right, it’s an entitlement. The sooner we internalize this, and start paying independent voices, the better for the web.

The only alternative is that all articles and documentation that we depend on will be written by employees of large companies. And employees, no matter how well-meaning, will reflect the priorities and point of view of their employer in the long run.

So start now.

In order to support them you should invest a bit of time once and US$5 per month permanently. I mean, that’s not too much to ask, is it?

Continue

I wrote this article and its sequels for Coil, and yes, I’m getting paid. Still, I believe in what they are doing, so I won’t just spread marketing drivel. Initially it was unclear to me exactly how Coil works. So I did some digging, and the remaining parts of this series give a detailed description of how Coil actually works in practice.

For now the other three articles will only be available on dev.to. I just published part 2, which gives a high-level overview of how Coil works right now. Part 3 will describe the meta tag and the JavaScript API, and in part 4 we’ll take a look at the future, which includes a formal W3C standard. Those parts will be published next week and the week after that.
