Front End Web Development

Stay alert

Css Tricks - Thu, 08/12/2021 - 11:10am

A few days ago, Chris wrote up his thoughts about how alert(), confirm(), and prompt() were being deprecated by Chrome and collected a bunch of thoughts from developers. The idea that certain features could essentially be turned off by a major browser got a lot of folks worried about the predictability of the web.

On that note, I really liked this note by Richard Harris:

We can’t normalise the attitude that collateral damage is the price of progress, even if we accept the premise — which I don’t — that removing APIs like alert represents progress. For all its flaws, the web is generally agreed to be a stable platform, where investments made today will stand the test of time. A world in which websites are treated as inherently transient objects, where APIs we commonly rely on today could be cast aside as unwanted baggage by tomorrow’s spec wranglers, is a world in which the web has already lost.

This specific bit of drama isn’t of much interest to me, I must admit. But! I think it brings up a super important distinction between software and the web. Here’s a story.

The other day I was faffing about with Astro (which I like a lot). I was rebuilding my personal site with it and I decided — in a spark of punk rock-ness — to update to the latest version of it. I thought perhaps it might make my build process a bit quicker and give me a chance to explore new features. But alas — everything broke. APIs had been deprecated! My build process broke! Everything crumbled down around me.

This isn’t me dunking on Astro. I love it, still. But it’s important to remember that Astro isn’t the web. Neither is React or any other framework, really. Those teams can feel free to deprecate things, improve things as much as they want. They can burn it all to the ground and start again. But stuff like alert(), old CSS features, and HTML elements aren’t in the same category. They can’t be deprecated in the same way because, as Jeremy said, the web needs to be predictable. And we can’t treat the web like plain ol’ software because no one team or individual owns those features.

Here’s the gist of my rant: alert() and confirm() aren’t features of Chrome, but of the web. But I fear that’s how a lot of folks might think about them.

This is also why standards are so important! Talking about new features in public lets us fix all the bugs and answer all the questions before a new feature ships onto this platform where you can’t just delete it when you realize you goofed up. I’m not even really dunking on Chrome here either, but this distinction between software and the open web is an important one to make. Right?



Breaking the web forward

QuirksBlog - Thu, 08/12/2021 - 5:19am

Safari is holding back the web. It is the new IE, after all. In contrast, Chrome is pushing the web forward so hard that it’s starting to break. Meanwhile web developers do nothing except moan and complain. The only thing left to do is to pick our poison.

Safari is the new IE

Recently there was yet another round of “Safari is the new IE” stories. Once Jeremy’s summary and a short discussion cleared my mind I finally figured out that Safari is not IE, and that Safari’s IE-or-not-IE is not the worst problem the web is facing.

Perry Sun argues that for developers, Safari is crap and outdated, emulating the old IE of fifteen years ago in this respect. He also repeats the theory that Apple is deliberately starving Safari of features in order to protect the app store, and thus its bottom line. We’ll get back to that.

The allegation that Safari is holding back web development by its lack of support for key features is not new, but it’s not true, either. Back fifteen years ago IE held back the web because web developers had to cater to its outdated technology stack. “Best viewed with IE” and all that. But do you ever see a “Best viewed with Safari” notice? No, you don’t. Another browser takes that special place in web developers’ hearts and minds.

Chrome is the new IE, but in reverse

Jorge Arango fears we’re going back to the bad old days with “Best viewed in Chrome.” Chris Krycho reinforces this by pointing out that, even though Chrome is not the standard, it’s treated as such by many web developers.

“Best viewed in Chrome” squares very badly with “Safari is the new IE.” Safari’s sad state does not force web developers to restrict themselves to Safari-supported features, so it does not hold the same position as IE.

So I propose to lay this tired old meme to rest. Safari is not the new IE. If anything it’s the new Netscape 4.

Meanwhile it is Chrome that is the new IE, but in reverse.

Break the web forward

Back in the day, IE was accused of an embrace, extend, and extinguish strategy. After IE6 Microsoft did nothing for ages, assuming it had won the web. Thanks to web developers taking action in their own name for the first (and only) time, IE was updated once more and the web moved forward again.

Google learned from Microsoft’s mistakes and follows a novel embrace, extend, and extinguish strategy by breaking the web and stomping on the bits. Who cares if it breaks as long as we go forward. And to hell with backward compatibility.

Back in 2015 I proposed to stop pushing the web forward, and as expected the Chrome devrels were especially outraged at this idea. It never went anywhere. (Truth to tell: I hadn’t expected it to.)

I still think we should stop pushing the web forward for a while until we figure out where we want to push the web forward to — but as long as Google is in charge that won’t happen. It will only get worse.

On alert

A blog storm broke out over the decision to remove alert(), confirm() and prompt(), first only the cross-origin variants, but eventually all of them. Jeremy and Chris Coyier already summarised the situation, while Rich Harris discusses the uses of the three ancient modals, especially when it comes to learning JavaScript.

With all these articles already written I will only note that, if the three ancient modals are truly as horrendous a security issue as Google says they are, it took everyone a bloody long time to figure that out. I mean, they turn 25 this year.

Although it appears Firefox and Safari are on board with at least the cross-origin part of the proposal, there is no doubt that it’s Google that leads the charge.

From Google’s perspective the ancient modals have one crucial flaw quite apart from their security model: they weren’t invented there. That’s why they have to be replaced by — I don’t know what, but it will likely be a very complicated API.

Complex systems and arrogant priests rule the web

Thus the new embrace, extend, and extinguish is breaking backward compatibility in order to make the web more complicated. Nolan Lawson puts it like this:

we end up with convoluted specs like Service Worker that you need a PhD to understand, and yet we still don't have a working <dialog> element.

In addition, Google can be pretty arrogant and condescending, as Chris Ferdinandi points out.

The condescending “did you actually read it, it’s so clear” refrain is patronizing AF. It’s the equivalent of “just” or “simply” in developer documentation.

I read it. I didn’t understand it. That’s why I asked someone whose literal job is communicating with developers about changes Chrome makes to the platform.

This is not isolated to one developer at Chrome. The entire message thread where this change was surfaced is filled with folks begging Chrome not to move forward with this proposal because it will break all-the-things.

If you write documentation or a technical article and nobody understands it, you’ve done a crappy job. I should know; I’ve been writing this stuff for twenty years.

Extend, embrace, extinguish. And use lots of difficult words.

Patience is a virtue

As a reaction to web dev outcry Google temporarily halted the breaking of the web. That sounds great but really isn’t. It’s just a clever tactical move.

I saw this tactic in action before. Back in early 2016 Google tried to break the de-facto standard for the mobile visual viewport that I worked very hard to establish. I wrote a piece that resonated with web developers, whose complaints made Google abandon the plan — temporarily. They tried again in late 2017, and I again wrote an article, but this time around nobody cared and the changes took effect and backward compatibility was broken.

So the three ancient modals still have about 12 to 18 months to live. Somewhere in late 2022 to early 2023 Google will try again, web developers will be silent, and the modals will be gone.

The pursuit of appiness

But why is Google breaking the web forward at such a pace? And why is Apple holding it back?

Safari is kept dumb to protect the app store and thus revenue. In contrast, the Chrome team is pushing very hard to port every single app functionality to the browser. Ages ago I argued we should give up on this, but of course no one listened.

When performing Valley Kremlinology, it is useful to see Google policies as stemming from a conflict between internal pro-web and anti-web factions. We web developers mainly deal with the pro-web faction, the Chrome devrel and browser teams. On the other hand, the Android team is squarely in the anti-web camp.

When seen in this light the pro-web camp’s insistence on copying everything appy makes excellent sense: if they didn’t Chrome would lag behind apps and the Android anti-web camp would gain too much power. While I prefer the pro-web over the anti-web camp, I would even more prefer the web not to be a pawn in an internal Google power struggle. But it has come to that, no doubt about it.

Solutions?

Is there any good solution? Not really.

Jim Nielsen feels that part of the issue is the lack of representation of web developers in the standardization process. That sounds great but is proven not to work.

Three years ago Fronteers and I attempted to get web developers represented and were met with absolute disinterest. Nobody else cared even one shit, and the initiative sank like a stone.

So a hypothetical web dev representative in W3C is not going to work. Also, the organisational work would involve a lot of unpaid labour, and I, for one, am not willing to do it again. Neither is anyone else. So this is not the solution.

And what about Firefox? Well, what about it? Ten years ago it made a disastrous mistake by ignoring the mobile web for way too long, then it attempted an arrogant and uninformed come-back with Firefox OS that failed, and its history from that point on is one long slide into obscurity. That’s what you get with shitty management.

Pick your poison

So Safari is trying to slow the web down. With Google’s move-fast-break-absofuckinglutely-everything axiom in mind, is Safari’s approach so bad?

Regardless of where you feel the web should be on this spectrum between Google and Apple, there is a fundamental difference between the two.

We have the tools and procedures to manage Safari’s disinterest. They’re essentially the same as the ones we deployed against Microsoft back in the day — though a fundamental difference is that Microsoft was willing to talk while Apple remains its old haughty self, and its “devrels” aren’t actually allowed to do devrelly things such as managing relations with web developers. (Don’t blame them, by the way. If something would ever change they’re going to be our most valuable internal allies — just as the IE team was back in the day.)

On the other hand, we have no process for countering Google’s reverse embrace, extend, and extinguish strategy, since a section of web devs will be enthusiastic about whatever the newest API is. Also, Google devrels talk. And talk. And talk. And provide gigs of data that are hard to make sense of. And refer to their proprietary algorithms that “clearly” show X is in the best interest of the web — and don’t ask questions! And make everything so fucking complicated that we eventually give up and give in.

So pick your poison. Shall we push the web forward until it’s broken, or shall we break it by inaction? What will it be? Privately, my money is on Google. So we should say goodbye to the old web while we still can.

Using Web Components in WordPress is Easier Than You Think

Css Tricks - Thu, 08/12/2021 - 4:38am

Now that we’ve seen that web components and interactive web components are both easier than you think, let’s take a look at adding them to a content management system, namely WordPress.

There are three major ways we can add them. First, through manual input into the site, putting them directly into widgets or text blocks, basically anywhere we can place other HTML. Second, we can add them as the output of a theme in a theme file. And, finally, we can add them as the output of a custom block.

Loading the web component files

Now whichever way we end up adding web components, there are a few things we have to ensure:

  1. our custom element’s template is available when we need it,
  2. any JavaScript we need is properly enqueued, and
  3. any un-encapsulated styles we need are enqueued.

We’ll be adding the <zombie-profile> web component from my previous article on interactive web components. Check out the code over at CodePen.

Let’s hit that first point. Once we have the template, it’s easy enough to add that to the WordPress theme’s footer.php file, but rather than adding it directly in the theme, it’d be better to hook into wp_footer so that the component is loaded independent of the footer.php file and independent of the overall theme (assuming that the theme uses wp_footer, which most do). If the template doesn’t appear in your theme when you try it, double check that wp_footer is called in your theme’s footer.php template file.

<?php
function diy_ezwebcomp_footer() { ?>
  <!-- print/echo Zombie profile template code. -->
  <!-- It's available at https://codepen.io/undeadinstitute/pen/KKNLGRg -->
<?php }
add_action( 'wp_footer', 'diy_ezwebcomp_footer');

Next is to enqueue our component’s JavaScript. We can add the JavaScript via wp_footer as well, but enqueueing is the recommended way to link JavaScript to WordPress. So let’s put our JavaScript in a file called ezwebcomp.js (that name is totally arbitrary), stick that file in the theme’s JavaScript directory (if there is one), and enqueue it (in the functions.php file).

wp_enqueue_script( 'ezwebcomp_js', get_template_directory_uri() . '/js/ezwebcomp.js', '', '1.0', true );

We’ll want to make sure that last parameter is set to true, i.e., it loads the JavaScript before the closing body tag. If we load it in the head instead, it won’t find our HTML template and will get super cranky (throw a bunch of errors).

If you can fully encapsulate your web component, then you can skip this next step. But if you (like me) are unable to do it, you’ll need to enqueue those un-encapsulated styles so that they’re available wherever the web component is used. (Similar to JavaScript, we could add this directly to the footer, but enqueuing the styles is the recommended way to do it). So we’ll enqueue our CSS file:

wp_enqueue_style( 'ezwebcomp_style', get_template_directory_uri() . '/ezwebcomp.css', '', '1.0', 'screen' );

That wasn’t too tough, right? And if you don’t plan to have any users other than Administrators use it, you should be all set for adding these wherever you want them. But that’s not always the case, so we’ll keep moving ahead!

Don’t filter out your web component

WordPress has a few different ways to both help users create valid HTML and prevent your Uncle Eddie from pasting that “hilarious” picture he got from Shady Al directly into the editor (complete with scripts to pwn every one of your visitors).

So when adding web components directly into blocks or widgets, we’ll need to be careful about WordPress’s built-in code filtering. Disabling it altogether would let Uncle Eddie (and, by extension, Shady Al) run wild, but we can modify it to let our awesome web component through the gate that (thankfully) keeps Uncle Eddie out.

First, we can use the wp_kses_allowed_html filter to add our web component to the list of elements not to filter out. It’s sort of like we’re whitelisting the component, and we do that by adding it to the allowed tags array that’s passed to the filter function.

function add_diy_ezwebcomp_to_kses_allowed( $the_allowed_tags ) {
  $the_allowed_tags['zombie-profile'] = array();
  // Filters must hand the (modified) array back, or everything gets stripped.
  return $the_allowed_tags;
}
add_filter( 'wp_kses_allowed_html', 'add_diy_ezwebcomp_to_kses_allowed');

We’re adding an empty array to the <zombie-profile> component because WordPress filters out attributes in addition to elements—which brings us to another problem: the slot attribute (as well as part and any other web-component-ish attribute you might use) isn’t allowed by default. So, we have to explicitly allow them on every element on which you anticipate using them, and, by extension, any element your user might decide to add them to. (Wait, those element lists aren’t the same even though you went over it six times with each user… who knew?) Thus, below I have set slot to true on <span>, <img> and <ul>, the three elements I’m putting into slots in the <zombie-profile> component. (I also set part to true on span elements so that I could let that attribute through too.)

function add_diy_ezwebcomp_to_kses_allowed( $the_allowed_tags ) {
  $the_allowed_tags['zombie-profile'] = array();
  $the_allowed_tags['span']['slot'] = true;
  $the_allowed_tags['span']['part'] = true;
  $the_allowed_tags['ul']['slot'] = true;
  $the_allowed_tags['img']['slot'] = true;
  return $the_allowed_tags;
}
add_filter( 'wp_kses_allowed_html', 'add_diy_ezwebcomp_to_kses_allowed');

We could also enable the slot (and part) attribute in all allowed elements with something like this:

function add_diy_ezwebcomp_to_kses_allowed($the_allowed_tags) {
  $the_allowed_tags['zombie-profile'] = array();
  foreach ($the_allowed_tags as &$tag) {
    $tag['slot'] = true;
    $tag['part'] = true;
  }
  return $the_allowed_tags;
}
add_filter('wp_kses_allowed_html', 'add_diy_ezwebcomp_to_kses_allowed');

Sadly, there is one more possible wrinkle with this. You may not run into this if all the elements you’re putting in your slots are inline/phrase elements, but if you have a block level element to put into your web component, you’ll probably get into a fistfight with the block parser in the Code Editor. You may be a better fist fighter than I am, but I always lost.

The code editor is an option that allows you to inspect and edit the markup for a block.

For reasons I can’t fully explain, the client-side parser assumes that the web component should only have inline elements within it, and if you put a <ul> or <div>, <h1> or some other block-level element in there, it’ll move the closing web component tag to just after the last inline/phrase element. Worse yet, according to a note in the WordPress Developer Handbook, it’s currently “not possible to replace the client-side parser.”

While this is frustrating and something you’ll have to train your web editors on, there is a workaround. If we put the web component in a Custom HTML block directly in the Block Editor, the client-side parser won’t leave us weeping on the sidewalk, rocking back and forth, and questioning our ability to code… Not that that’s ever happened to anyone… particularly not people who write articles…

Component up the theme

Outputting our fancy web component in our theme file is straightforward as long as it isn’t updated outside the HTML block. We add it the way we would add it in any other context, and, assuming we have the template, scripts and styles in place, things will just work.

But let’s say we want to output the contents of a WordPress post or custom post type in a web component. You know, write a post and that post is the content for the component. This allows us to use the WordPress editor to pump out an archive of <zombie-profile> elements. This is great because the WordPress editor already has most of the UI we need to enter the content for one of the <zombie-profile> components:

  • The post title can be the zombie’s name.
  • A regular paragraph block in the post content can be used for the zombie’s statement.
  • The featured image can be used for the zombie’s profile picture.

That’s most of it! But we’ll still need fields for the zombie’s age, infection date, and interests. We’ll create these with WordPress’s built-in Custom Fields feature.

We’ll use the template part that handles printing each post, e.g. content.php, to output the web component. First, we’ll print out the opening <zombie-profile> tag followed by the post thumbnail (if it exists).

<zombie-profile>
  <?php
  // If the post featured image exists...
  if (has_post_thumbnail()) {
    $src = wp_get_attachment_image_url(get_post_thumbnail_id());
  ?>
    <img src="<?php echo $src; ?>" slot="profile-image">
  <?php } ?>

Next, we’ll print the title for the name:

<?php
// If the post title field exists...
if (get_the_title()) { ?>
  <span slot="zombie-name"><?php echo get_the_title(); ?></span>
<?php } ?>

In my code, I have tested whether these fields exist before printing them for two reasons:

  1. It’s just good programming practice (in most cases) to hide the labels and elements around empty fields.
  2. If we end up outputting an empty <span> for the name (e.g. <span slot="zombie-name"></span>), then the field will show as empty in the final profile rather than use our web component’s built-in default text, image, etc. (If you want, for instance, the text fields to be empty if they have no content, you can either put in a space in the custom field or skip the if statement in the code).

Next, we will grab the custom fields and place them into the slots they belong to. Again, this goes into the theme template that outputs the post content.

<?php
// Zombie age
$temp = get_post_meta(get_the_ID(), 'Age', true);
if ($temp) { ?>
  <span slot="z-age"><?php echo $temp; ?></span>
<?php }

// Zombie infection date
$temp = get_post_meta(get_the_ID(), 'Infection Date', true);
if ($temp) { ?>
  <span slot="idate"><?php echo $temp; ?></span>
<?php }

// Zombie interests
$temp = get_post_meta(get_the_ID(), 'Interests', true);
if ($temp) { ?>
  <ul slot="z-interests"><?php echo $temp; ?></ul>
<?php } ?>

One of the downsides of using the WordPress custom fields is that you can’t do any special formatting. A non-technical web editor who’s filling this out would need to write out the HTML for the list items (<li>) for each and every interest in the list. (You can probably get around this interface limitation by using a more robust custom field plugin, like Advanced Custom Fields, Pods, or similar.)

Lastly, we add the zombie’s statement and the closing <zombie-profile> tag.

<?php
$temp = get_the_content();
if ($temp) { ?>
  <span slot="statement"><?php echo $temp; ?></span>
<?php } ?>
</zombie-profile>

Because we’re using the body of the post for our statement, we’ll get a little extra code in the bargain, like paragraph tags around the content. Putting the profile statement in a custom field will mitigate this, but depending on your purposes, it may also be intended/desired behavior.

You can then add as many posts/zombie profiles as you need simply by publishing each one as a post!

Block party: web components in a custom block

Creating a custom block is a great way to add a web component. Your users will be able to fill out the required fields and get that web component magic without needing any code or technical knowledge. Plus, blocks are completely independent of themes, so really, we could use this block on one site and then install it on other WordPress sites—sort of like how we’d expect a web component to work!

There are two main parts to a custom block: PHP and JavaScript. We’ll also add a little CSS to improve the editing experience.

First, the PHP:

function ez_webcomp_register_block() {
  // Enqueues the JavaScript needed to build the custom block
  wp_register_script(
    'ez-webcomp',
    plugins_url('block.js', __FILE__),
    array('wp-blocks', 'wp-element', 'wp-editor'),
    filemtime(plugin_dir_path(__FILE__) . 'block.js')
  );
  // Enqueues the component's CSS file
  wp_register_style(
    'ez-webcomp',
    plugins_url('ezwebcomp-style.css', __FILE__),
    array(),
    filemtime(plugin_dir_path(__FILE__) . 'ezwebcomp-style.css')
  );
  // Registers the custom block within the ez-webcomp namespace
  register_block_type('ez-webcomp/zombie-profile', array(
    // We already have the external styles; these are only for when we are in the WordPress editor
    'editor_style' => 'ez-webcomp',
    'editor_script' => 'ez-webcomp',
  ));
}
add_action('init', 'ez_webcomp_register_block');

The CSS isn’t strictly necessary, but it does help prevent the zombie’s profile image from overlapping the content in the WordPress editor.

/* Sets the width and height of the image.
 * Your mileage will likely vary, so adjust as needed.
 * "pic" is a class we'll add to the editor in block.js */
#editor .pic img {
  width: 300px;
  height: 300px;
}

/* This CSS ensures that the correct space is allocated for the image,
 * while also preventing the button from resizing before an image is selected. */
#editor .pic button.components-button {
  overflow: visible;
  height: auto;
}

The JavaScript we need is a bit more involved. I’ve endeavored to simplify it as much as possible and make it as accessible as possible to everyone, so I’ve written it in ES5 to remove the need to compile anything.

Show code (function (blocks, editor, element, components) { // The function that creates elements var el = element.createElement; // Handles text input for block fields var RichText = editor.RichText; // Handles uploading images/media var MediaUpload = editor.MediaUpload; // Harkens back to register_block_type in the PHP blocks.registerBlockType('ez-webcomp/zombie-profile', { title: 'Zombie Profile', //User friendly name shown in the block selector icon: 'id-alt', //the icon to usein the block selector category: 'layout', // The attributes are all the different fields we'll use. // We're defining what they are and how the block editor grabs data from them. attributes: { name: { // The content type type: 'string', // Where the info is available to grab source: 'text', // Selectors are how the block editor selects and grabs the content. // These should be unique within an instance of a block. // If you only have one img or one <ul> etc, you can use element selectors. selector: '.zname', }, mediaID: { type: 'number', }, mediaURL: { type: 'string', source: 'attribute', selector: 'img', attribute: 'src', }, age: { type: 'string', source: 'text', selector: '.age', }, infectdate: { type: 'date', source: 'text', selector: '.infection-date' }, interests: { type: 'array', source: 'children', selector: 'ul', }, statement: { type: 'array', source: 'children', selector: '.statement', }, }, // The edit function handles how things are displayed in the block editor. edit: function (props) { var attributes = props.attributes; var onSelectImage = function (media) { return props.setAttributes({ mediaURL: media.url, mediaID: media.id, }); }; // The return statement is what will be shown in the editor. // el() creates an element and sets the different attributes of it. return el( // Using a div here instead of the zombie-profile web component for simplicity. 'div', { className: props.className }, // The zombie's name el(RichText, { tagName: 'h2', inline: true, className: 'zname', placeholder: 'Zombie Name…', value: attributes.name, onChange: function (value) { props.setAttributes({ name: value }); }, }), el( // Zombie profile picture 'div', { className: 'pic' }, el(MediaUpload, { onSelect: onSelectImage, allowedTypes: 'image', value: attributes.mediaID, render: function (obj) { return el( components.Button, { className: attributes.mediaID ? 'image-button' : 'button button-large', onClick: obj.open, }, !attributes.mediaID ? 
'Upload Image' : el('img', { src: attributes.mediaURL }) ); }, }) ), // We'll include a heading for the zombie's age in the block editor el('h3', {}, 'Age'), // The age field el(RichText, { tagName: 'div', className: 'age', placeholder: 'Zombie\'s Age…', value: attributes.age, onChange: function (value) { props.setAttributes({ age: value }); }, }), // Infection date heading el('h3', {}, 'Infection Date'), // Infection date field el(RichText, { tagName: 'div', className: 'infection-date', placeholder: 'Zombie\'s Infection Date…', value: attributes.infectdate, onChange: function (value) { props.setAttributes({ infectdate: value }); }, }), // Interests heading el('h3', {}, 'Interests'), // Interests field el(RichText, { tagName: 'ul', // Creates a new <li> every time `Enter` is pressed multiline: 'li', placeholder: 'Write a list of interests…', value: attributes.interests, onChange: function (value) { props.setAttributes({ interests: value }); }, className: 'interests', }), // Zombie statement heading el('h3', {}, 'Statement'), // Zombie statement field el(RichText, { tagName: 'div', className: "statement", placeholder: 'Write statement…', value: attributes.statement, onChange: function (value) { props.setAttributes({ statement: value }); }, }) ); }, // Stores content in the database and what is shown on the front end. // This is where we have to make sure the web component is used. save: function (props) { var attributes = props.attributes; return el( // The <zombie-profile web component 'zombie-profile', // This is empty because the web component does not need any HTML attributes {}, // Ensure a URL exists before it prints attributes.mediaURL && // Print the image el('img', { src: attributes.mediaURL, slot: 'profile-image' }), attributes.name && // Print the name el(RichText.Content, { tagName: 'span', slot: 'zombie-name', className: 'zname', value: attributes.name, }), attributes.age && // Print the zombie's age el(RichText.Content, { tagName: 'span', slot: 'z-age', className: 'age', value: attributes.age, }), attributes.infectdate && // Print the infection date el(RichText.Content, { tagName: 'span', slot: 'idate', className: 'infection-date', value: attributes.infectdate, }), // Need to verify something is in the first element since the interests's type is array attributes.interests[0] && // Pint the interests el(RichText.Content, { tagName: 'ul', slot: 'z-interests', value: attributes.interests, }), attributes.statement[0] && // Print the statement el(RichText.Content, { tagName: 'span', slot: 'statement', className: 'statement', value: attributes.statement, }) ); }, }); })( //import the dependencies window.wp.blocks, window.wp.blockEditor, window.wp.element, window.wp.components );

Plugging in to web components

Now, wouldn’t it be great if some kind-hearted, article-writing, and totally-awesome person created a template that you could just plug your web component into and use on your site? Well that guy wasn’t available (he was off helping charity or something) so I did it. It’s up on GitHub:

Do It Yourself – Easy Web Components for WordPress

The plugin is a coding template that registers your custom web component, enqueues the scripts and styles the component needs, provides examples of the custom block fields you might need, and even makes sure things are styled nicely in the editor. Put this in a new folder in /wp-content/plugins like you would manually install any other WordPress plugin, make sure to update it with your particular web component, then activate it in WordPress on the “Installed Plugins” screen.

Not that bad, right?

Even though it looks like a lot of code, we’re really doing a few pretty standard WordPress things to register and render a custom web component. And, since we packaged it up as a plugin, we can drop this into any WordPress site and start publishing zombie profiles to our heart’s content.

I’d say that the balancing act is trying to make the component work as nicely in the WordPress block editor as it does on the front end. We would have been able to knock this out with a lot less code without that consideration.

Still, we managed to get the exact same component we made in my previous articles into a CMS, which allows us to plop as many zombie profiles on the site as we want. We combined our knowledge of web components with WordPress blocks to develop a reusable block for our reusable web component.

What sort of components will you build for your WordPress site? I imagine there are lots of possibilities here and I’m interested to see what you wind up making.

Article series
  1. Web Components Are Easier Than You Think
  2. Interactive Web Components Are Easier Than You Think
  3. Using Web Components in WordPress is Easier Than You Think


Wanna see a whiter white?

Css Tricks - Wed, 08/11/2021 - 9:09am

Heck of a CSS trick here from Dongsung Kim.

There are hidden HDR videos playing at the corners of this page. When a HDR-capable browser encounters one, it switches to HDR mode. For some reason, CSS backdrop-filter + brightness >100% combo seems to behave like HDR—reaching beyond the user-controlled display brightness, up to the maximum HDR brightness—while the everything in between follow[s] along. At least that’s the overall idea, but I still don’t know exactly why it works; especially why with those two CSS properties.

As I look at that demo in Chrome, I see an extra-white text-shadow. In Safari, I see extra-white text. In Firefox, the whites match so I see nothing. Probably a bug.

I wouldn’t recommend actually using the trick, as I’d think the extra-whiteness almost certainly takes extra battery power that a user isn’t opting into, even without the video playing—even though it does feel like a bummer that our screens are capable of whiter whites than we normally have access to. The good news is that the gamut of color on the web is expanding, generally.



Static vs. Dynamic vs. Jamstack: Where’s The Line?

Css Tricks - Wed, 08/11/2021 - 4:31am

You’ll often hear developers talking about “static” vs. “dynamic” sites, or you may have heard someone use the term Jamstack. What do these terms mean, and when does a “static” site become either a Jamstack or dynamic site? These questions sound simple, but they’re more nuanced than they appear. Let’s explore these terms to gain a deeper understanding of Jamstack.

Finding the line

What’s the difference between a chair and a stool? Most people will respond that a chair has four legs and back support, whereas a stool has three legs with no back support.

Image credits: Rumman Amin

OK, that’s a great starting point, but what about these?

Image credits: Valerii Zorin, Krisztian Tabori

The more stool-like a chair becomes, the fewer people will unequivocally agree that it’s a chair. Eventually, we’ll reach a point where most people agree it’s a stool rather than a chair. It may sound like a silly exercise, but if we want to have a deep appreciation of what it means to be a chair, it’s a valuable one. We find out where the limits of a chair are for most people. We also build an understanding of the gray area beyond. Eventually, we get to the point where even the biggest die-hard chair fans concede and admit there’s a stool in front of them.

As interesting as chairs are, this is an article about website delivery technology. Let’s perform this same exercise for static, dynamic, and Jamstack websites.

At a high level

When you go to a website in your browser, there’s a lot going on behind the scenes:

  1. Your browser performs a DNS lookup to turn the domain name into an IP address.
  2. It requests an HTML file from that IP address.
  3. The webserver sends back the requested file.
  4. As the browser renders the web page, it may come across a reference for an asset, such as a CSS, JavaScript, or image file. The browser then performs a request for this asset.
  5. This cycle continues until the browser has all the files for the web page. It’s not unusual for a single webpage to make 50+ requests.

For every request, the response from the webserver is always a static file, even on a dynamic website. You could save these files to a USB drive or email them to a friend, just like any other file on your computer.

When comparing static and dynamic, we’re talking about what the webserver is doing. On a static site, the files the browser requests already exist on the webserver. The webserver sends them back exactly as they are. On a dynamic site, the response gets generated by software. This software might connect to a database to retrieve data, build a layout from template files, and add today’s date to the footer. It does all of this for every request.

That’s the foundational difference between static and dynamic websites.

Where does Jamstack fit in?

Static websites are restrictive. They’re great for informational websites; however, you can’t have any dynamic content or behavior by definition. Jamstack blurs the line between static and dynamic. The idea is to take advantage of all the things that make static websites awesome while enabling dynamic functionality where necessary.

The ‘stack’ in Jamstack is a misnomer. The truth is, Jamstack is not a stack at all. It’s a philosophy that exhibits a striking resemblance to The 5 Pillars of the AWS Well-Architected Framework. The ambiguity in the term has led to extensive community discussion about what it means to be Jamstack.

What is Jamstack?

Jamstack is a superset of static. But to truly understand Jamstack, let’s start with the seeds that led to the coining of the term.

In 2002, the late Aaron Swartz published a blog post titled “Bake, Don’t Fry.” While Aaron didn’t coin “Bake, Don’t Fry,” it’s the first time I can find someone recognizing the benefits of static websites while breaking out of the perceived constraints of the word.

I care about not having to maintain cranky AOLserver, Postgres and Oracle installs. I care about being able to back things up with scp. I care about not having to do any installation or configuration to move my site to a new server. I care about being platform and server independent.

If we trawl through history, we can find similar frustrations that led to Jamstack seeds:

  • Ben and Mena Trott created MovableType because of a [d]issatisfaction with existing blog CMSes — performance, stability.
  • Tom Preston-Werner created Jekyll to move away from complexity: “I already knew a lot about what I didn’t want. I was tired of complicated blogging engines like WordPress and Mephisto. I wanted to write great posts, not style a zillion template pages, moderate comments all day long, and constantly lag behind the latest software release.”
  • Steve Francia created Hugo for performance: “The past few years this blog has [been] powered by wordpress [sic] and drupal prior to that. Both are fine pieces of software, but over time I became increasingly disappointed with how they are both optimized for writing content even though significantly most common usage is reading content. Due to the need to load the PHP interpreter on each request it could never be considered fast and consumed a lot of memory on my VPS.”

The same themes surface as you look at the origins of many early Jamstack tools:

  • Reduce complexity
  • Improve performance
  • Reduce vendor lock-in
  • Better workflows for developers

In the past 20 years, JavaScript has evolved from a language for adding small interactions to a website to becoming a platform for building rich web applications in the browser. In parallel, we’ve seen a movement of splitting large applications into smaller microservices. These two developments gave rise to a new way of building websites where you could have a static front-end decoupled from a dynamic back-end.

In 2015, Mathias Biilmann wanted to talk about this modern way of building websites but was struggling with the constricting definition of static:

We were in this space of modern static websites. That’s a really bad description of what we’re doing, right? And we kept having that problem that, talking to people about static sites, they would think about something very static. They would think about a brochure or something with no moving parts. A little one-pager or something like that.

To break out of these constraints, he coined the term “Jamstack” to talk about this new approach, and it caught on like wildfire. What was old static technology from the 90s became new again and pushed to new limits. Many developers caught on to the benefits of the Jamstack approach, which helped Jamstack grow into the thriving ecosystem it is today.

Aaron Swartz put it nicely, 13 years before Jamstack was coined: keep a strict separation between input (which needs dynamic code to be processed) and output (which can usually be baked). In other words, decouple the front end from the back end. Prerender content whenever possible. Layer on dynamic functionality where necessary. That’s the crux of Jamstack.

The reasons you might want to build a Jamstack site over a dynamic site come down to the six pillars of Jamstack:

Security

Jamstack sites have fewer moving parts and less surface area for malicious exploitation from outside sources.

Scale

Jamstack sites are static where possible. Static sites can live entirely in a CDN, making them much easier and cheaper to scale.

Performance

Serving a web page from a CDN rather than generating it from a centralized server on-demand improves the page load speed.

Maintainability


Static websites are simple. You need a webserver capable of serving files. With a dynamic site, you might need an entire team to keep a website online and fast.

Portability


Again, a static website is made up of files. As long as you find a webserver capable of serving website files, you can move your site anywhere.

Developer experience

Git workflows are a core part of software development today. With many legacy CMSs, it’s difficult to have Git development workflows. With a Jamstack site, everything is a file making it seamless to use Git.

Chris touches on some of these points in a deep-dive comparison between Jamstack and WordPress. He also compares the reasons for choosing a Jamstack architecture versus a server-side one in “Static or Not?”.

Let’s use these pillars to evaluate Jamstack use cases.

Where is the edge of static and Jamstack?

Now that we have the basics of static and Jamstack, let’s dive in and see what lies at the edge of each definition. We have four categories each edge case can fall under.

  • Static – This strictly adheres to the definition of static.
  • Basically static – While not precisely static, most people would call it a static site.
  • Jamstack – A static frontend decoupled from a dynamic backend.
  • Dynamic – Renders web pages on-demand.

Many of these use cases can be placed in multiple categories. In this exercise, we’re putting them in the most restrictive category they fit.

JavaScript interaction Static

Let’s start with an easy one. I have a static site that uses JavaScript to create a slideshow of images.

The HTML page, JavaScript, and images are all static files. All of the HTML manipulation required for the slideshow to function happens in the browser with no external influence.
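To make that concrete, here is a minimal sketch of the kind of purely client-side slideshow this describes; the image file names and the element ID are hypothetical:

// Cycle through a fixed list of images entirely in the browser.
// Assumes an <img id="slideshow"> element and these image files exist.
const images = ['slide-1.jpg', 'slide-2.jpg', 'slide-3.jpg'];
let current = 0;

setInterval(() => {
  current = (current + 1) % images.length;
  document.getElementById('slideshow').src = images[current];
}, 3000);

Nothing here ever touches a server after the initial file downloads, which is why it stays comfortably inside the static category.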

Cookies Static

I have a static site that adds a banner to the top of the page using JavaScript if a cookie exists. A cookie is just a header. The rest of the files are static.
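A rough sketch of what that could look like, assuming a hypothetical returning_visitor cookie and placeholder banner text:

// Add a banner to the top of the page if a "returning_visitor" cookie exists.
if (document.cookie.split('; ').some((c) => c.startsWith('returning_visitor='))) {
  const banner = document.createElement('div');
  banner.className = 'welcome-banner';
  banner.textContent = 'Welcome back!';
  document.body.prepend(banner);
}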

External assets Basically Static

On a web page, we can load images or JavaScript from an external source. This external source may generate these assets dynamically on request. Would that mean we have a dynamic site?

Most people, including myself, would consider this a static site because it basically is. But if we’re strict to the definition, it doesn’t fit the bill. Having any part of the page generated dynamically defiles the sacred harmony of static.

iFrames Basically Static

An inline frame allows you to embed an HTML page within another HTML page. iFrames are commonly used for embedding Google Maps, Facebook Like buttons, and YouTube videos on a webpage.

Again, most people would still consider this a static site. However, these embeds are almost always from a dynamically-generated source.

Forms Basically Static

A static site can undoubtedly have a form on it. The dilemma comes when you submit it. If you want to do something with the data, you almost certainly need a dynamic back-end. There are plenty of form submission services you can use as the action for your form.

I can see two ways to argue this:

  1. You’re submitting a form to an external website, and it happens to redirect back afterward. This separation means the definition of static remains intact.
  2. Because this external service is a core workflow on your website, the definition of static no longer works.

In reality, most people would still consider this a static site.

Ajax requests Jamstack

An Ajax request allows a developer to request data from an external source without reloading the page. We’re in the same boat as the above situations of relying on a third party. It’s possible the endpoint for the Ajax call is a static JSON file, but it’s more likely that it’s dynamically-generated.

The nature of how Ajax data is typically used on a website pushes it past a static website into Jamstack territory. It fits well with Jamstack as you can have a site where you prerender everything you can, then use Ajax to layer on any dynamic functionality or content on the site.
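For example, a prerendered page might layer on fresh data with a request like this; the endpoint and element ID are hypothetical stand-ins:

// Fetch dynamic data after the static page has loaded and render it client-side.
fetch('https://api.example.com/latest-comments')
  .then((response) => response.json())
  .then((comments) => {
    const list = document.getElementById('comments');
    comments.forEach((comment) => {
      const item = document.createElement('li');
      item.textContent = comment.text;
      list.appendChild(item);
    });
  })
  .catch(() => {
    // The prerendered page still works if the API is unavailable.
  });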

Embedded eCommerce Jamstack

There are services that allow you to add eCommerce, even to static websites. Behind the scenes, they’re essentially making Ajax requests to manage items in a shopping cart and collect payment details.

Single page application (SPA) Jamstack

The title alone puts it out of static site contention. A SPA uses Ajax calls to request data. The presentation layer lives entirely in the front end, making it Jamtastic.

Ajax call to a serverless function Jamstack

Whether the endpoint of an Ajax call is a serverless function like AWS Lambda, your Kubernetes-clustered Node.js back-end, or a simple PHP back-end, it doesn’t matter. The key for Jamstack is that the front end is independent of the back end.
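As a sketch of that separation, here is roughly what a small AWS Lambda-style Node.js endpoint behind such an Ajax call could look like; the front end only knows the URL, not what runs behind it:

// A minimal handler in the AWS Lambda style.
// Its JSON response is all the front end ever sees.
exports.handler = async () => {
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: 'Hello from the back end' }),
  };
};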

Reverse proxy in front of a webserver Static

Adding a reverse proxy in front of the webserver for a static site must make it dynamic, right? Well, not so fast. While a proxy is software that adds a dynamic element to the network, as long as the file on the server is precisely the file the browser receives, it’s still static.

A webserver, modem, and every piece of network infrastructure in between are running software. If adding a proxy makes a static site dynamic, then nothing is static.

CDN Static

A CDN is a globally-distributed reverse proxy, so it falls into the same category as a reverse proxy. CDNs often add their own headers. This still doesn’t impact the prestigious static status as the headers aren’t part of the file sitting on the server’s hard drive.

CDN in front of a dynamic site with a 200-year cache expiration time Dynamic

OK, 200 years is a long expiry time, I’ll give you that. There are two reasons this is neither a static nor Jamstack site:

  1. The first request isn’t cached, so it generates on demand.
  2. CDNs aren’t designed for persistent storage. If, after one week, you’ve only had five hits on your website, the CDN might purge your web page from the cache. It can always retrieve the web page from the origin server, which would dynamically render the response.

WordPress with a static output Static

Using a WordPress plugin like WP2Static lets you create and manage your website in WordPress and output a static website whenever something changes.

When you do this, the files the browser requests already exist on the webserver, making it a static website—a subtle but important distinction from having a CDN in front of a dynamic site.

Edge computing Dynamic

Many companies are now offering the ability to run dynamic code at the edge of a CDN. It’s a powerful concept because you can have dynamic functionality without adding latency to the user. You can even use edge computation to manipulate HTML before sending it to the client.

It comes down to how you’re using edge functions. You could use an edge function to add a header to particular requests. I would argue this is still a static site. Push much beyond this, where you’re manipulating the HTML, and you’ve crossed the dynamic boundary.
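To illustrate the header case, here is a sketch in the service-worker-style fetch handler that some edge platforms use; the exact API differs from provider to provider, so treat this as an assumption-laden example rather than any particular product’s API:

// Add a header to particular requests at the edge, leaving the HTML untouched.
addEventListener('fetch', (event) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request) {
  const response = await fetch(request);
  // Copy the response so its headers can be modified.
  const modified = new Response(response.body, response);
  if (new URL(request.url).pathname.startsWith('/experiment/')) {
    modified.headers.set('X-Experiment', 'beta');
  }
  return modified;
}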

It’s hard to argue it’s a Jamstack site as it doesn’t adhere to some of the fundamental benefits: scale, maintainability, and portability. Now, you have a piece of your core infrastructure that’s changing HTML on every request, and it will only work on that particular hosting infrastructure. That’s getting pretty far away from the blissful simplicity of a static site.

One of the elegant things about Jamstack is the front end and back end are decoupled. The backend is made up of APIs that output data. They don’t know or care how the data is used. The front end is the presentation layer. It knows where to get dynamic data from and how to render it. When you break this separation of concerns, you’ve crossed into a dynamic world.

Distributed Persistent Rendering (DPR) Dynamic

DPR is a strategy to reduce long build times on large static site generator (SSG) sites. The idea is the SSG builds a subset of the most popular pages. For the rest of the pages, the SSG builds them on-demand the first time they’re requested and saves them to persistent storage. After the initial request, the page behaves precisely like the rest of the built static pages.

Long build times limit large-scale use cases from choosing Jamstack. If all the SSG tooling were GoLang-based, we probably wouldn’t need DPR. However, that’s not the direction most Jamstack tooling has taken, and build performance can be excruciatingly long on big websites.

DPR is a means to an end and a necessity for Jamstack to grow. While it allows you to use Jamstack workflows on massive websites, ironically, I don’t think you can call a site using DPR a Jamstack site. Running software on-demand to generate a web page certainly sounds dynamicy. After the first request, a page served using DPR is a static page which makes DPR “more static” than putting a CDN in front of a dynamic site. However, it’s still a dynamic site as there isn’t a separation between frontend and backend, and it’s not portable, one of the pillars of a Jamstack site.

Incremental Static Regeneration (ISR) Dynamic

ISR is a similar but subtly different strategy to DPR to reduce long build times on large SSG sites. The difference is you can revalidate individual pages periodically to mimic a dynamic site without doing an entire site build.

Requests to a page without a cached version fall back to a stale version of that page or a generic loading page.
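For context, opting into ISR in Next.js is roughly a one-line addition to getStaticProps; the data-fetching function here is a hypothetical placeholder:

export async function getStaticProps() {
  const post = await getPostFromSomewhere() // hypothetical data source

  return {
    props: { post },
    // Ask Next.js to regenerate this page in the background,
    // at most once every 60 seconds, when it gets requested.
    revalidate: 60,
  }
}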

Again, it’s an exciting technology that expands what you can do with Jamstack workflows, but dynamically generating a page on-demand sounds like something a dynamic site would do.

Flat file CMS Dynamic

A flat file CMS uses text files for content rather than a database. While flat file CMSs remove a dynamic element from the stack, it’s still dynamically rendering the response.

The lines have been drawn

Exploring and debating these edge cases gives us a better understanding of the limits of all of these terms. The point of this exercise isn’t to be dogmatic about creating static or Jamstack websites. It’s to give us a common language to talk about the tradeoffs you make as you cross the boundary from one concept to another.

There’s absolutely nothing wrong with tradeoffs either. Not everything can be a purely static website. In many cases, the trade-offs make sense. For example, let’s say the front end needs to know the country of the visitor. There are two ways to do this:

  1. On page load, perform an Ajax call to query the country from an API. (Jamstack)
  2. Use an edge function to dynamically insert a country code into the HTML on response. (Dynamic)

If having the country code is a nice-to-have and the web page doesn’t need it immediately, then the first approach is a good option. The page can be static and the API call can fail gracefully if it doesn’t work. However, if the country code is required for the page, dynamically adding it using an edge function might make more sense. It’ll be faster as you don’t need to perform a second request/response cycle.
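A sketch of the first, Jamstack-flavored approach, with the geolocation endpoint as a hypothetical stand-in:

// Ask an API for the visitor's country after the static page loads,
// and fail gracefully if the request doesn't work out.
fetch('https://geo.example.com/country')
  .then((response) => response.json())
  .then((data) => {
    document.querySelector('.country-note').textContent =
      'Showing prices for ' + data.country;
  })
  .catch(() => {
    // Keep the default, country-agnostic text.
  });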

The key is understanding the problem you’re solving and thinking through the trade-offs you’re making with different approaches. You might end up with the majority of your site Jamstack and a portion dynamic. That’s totally fine and might be necessary for your use case. Typically, the closer you can get to static, the faster, more secure, and more scalable your site will be.

This is only the beginning of the discussion, and I’d love to hear your take. Where would you draw the lines? What do static and Jamstack mean to you? Are you sitting on a chair or stool right now?


Napkin

Css Tricks - Wed, 08/11/2021 - 4:31am

We took a surface level look at Pipedream the other day, which really does look cool. It’s like a much more modern and fancy version of what Yahoo Pipes was. A better comparison might be Zapier, except you write code (if you want to) to make easy-to-build cloud functions that can be triggered by anything from RSS to HTTP requests to Slack messages. I wouldn’t say Pipedream itself is complicated to learn (although, admittedly, I haven’t exactly dug deep), but it does embrace complexity. Lots of inputs, lots of processing possibilities, and lots of outputs. Unlimited combinations, you might say.

I saw and bookmarked Napkin.io the other day, which, so far (as it’s brand new) seems to push away the complexity.

Computing tools should be made for humans. They should allow us to be more creative, more free, and more inspired. We believe everyone should have access to computing and tap into its full potential.

We’re making Napkin to change the status quo, to build a new kind of tool – a tool that gets out of your way, that lets you code, and that’s a joy to use.

Philosophy

It’s like a cleaner version of how I remember Webtask. You write a function and… that’s it. It’s available at a URL you can hit.

Each function has environment variables, so you can chuck API keys in there for proxying, auth if you need it, logs for debugging, plus you can write in Node or Python. It’s a healthy amount of features, with more on the way, but it really does feel like embracing simplicity rather than complexity.



View Source (on Mobile)

Css Tricks - Tue, 08/10/2021 - 11:26am

Have you ever wished you could see the HTML source of a web page while on a mobile browser, which generally doesn’t offer that feature? If you have a desktop machine around, there are ways, but what I mean is getting the source without anything but the device itself.

The little View Source tool by Neatnik does the trick.

You enter the URL in the little bar to see the source of that URL. Or add the URL to the tool’s URL itself to link right to it. Here’s CSS-Tricks (without line wrapping and tidied up!):



Responsible Markdown in Next.js

Css Tricks - Tue, 08/10/2021 - 4:55am

Markdown truly is a great format. It’s close enough to plain text so that anyone can quickly learn it, and it’s structured enough that it can be parsed and eventually converted to you name it.

That being said: parsing, processing, enhancing, and converting Markdown needs code. Shipping all that code in the client comes at a cost. It’s not huge per se, but it’s still a few dozen kilobytes of code that are used only to deal with Markdown and nothing else.

In this article, I want to explain how to keep Markdown out of the client in a Next.js application, using the Unified/Remark ecosystem (genuinely not sure which name to use, this is all super confusing).

General idea

The idea is to only use Markdown in the getStaticProps functions from Next.js so this is done during a build (or in a Next serverless function if using Vercel’s incremental builds), but never in the client. I guess getServerSideProps would also be fine, but I think getStaticProps is more likely to be the common use case.

This would return an AST (Abstract Syntax Tree, which is to say a big nested object describing our content) resulting from parsing and processing the Markdown content, and the client would only be responsible for rendering that AST into React components.

I guess we could even render the Markdown as HTML directly in getStaticProps and return that to render with dangerouslySetInnerHTML, but we’re not that kind of people. Security matters. And we also keep the flexibility of rendering Markdown the way we want with our components instead of rendering it as plain HTML. Seriously folks, do not do that. 😅

export const getStaticProps = async () => { // Get the Markdown content from somewhere, like a CMS or whatnot. It doesn’t // matter for the sake of this article, really. It could also be read from a // file. const markdown = await getMarkdownContentFromSomewhere() const ast = parseMarkdown(markdown) return { props: { ast } } } const Page = props => { // This would usually have your layout and whatnot as well, but omitted here // for sake of simplicity of course. return <MarkdownRenderer ast={props.ast} /> } export default Page Parsing Markdown

We are going to use the Unified/Remark ecosystem. We need to install unified and remark-parse and that’s about it. Parsing the Markdown itself is relatively straightforward:

import unified from 'unified' import markdown from 'remark-parse' const parseMarkdown = content => unified().use(markdown).parse(content) export default parseMarkdown

Now, what took me a long while to understand is why my extra plugins, like remark-prism or remark-slug, did not work like this. This is because the .parse(..) method from Unified does not process the AST with plugins. As the name suggests, it only parses the string of Markdown content into a tree.

If we want Unified to apply our plugins, we need Unified to go through what they call the “run” phase. Normally, this is done by using the .process(..) method instead of the .parse(..) method. Unfortunately, .process(..) not only parses Markdown and applies plugins, but also stringifies the AST into another format (like HTML via remark-html, or JSX with remark-react). And this is not what we want, as we want to preserve the AST, but after it’s been processed by plugins.

| ........................ process ........................... |
| .......... parse ... | ... run ... | ... stringify ..........|

          +--------+                     +----------+
Input ->- | Parser | ->- Syntax Tree ->- | Compiler | ->- Output
          +--------+          |          +----------+
                              X
                              |
                       +--------------+
                       | Transformers |
                       +--------------+

So what we need to do is run both the parsing and running phases, but not the stringifying phase. Unified does not provide a method to do these 2 out of 3 phases, but it provides individual methods for every phase, so we can do it manually:

import unified from 'unified' import markdown from 'remark-parse' import prism from 'remark-prism' const parseMarkdown = content => { const engine = unified().use(markdown).use(prism) const ast = engine.parse(content) // Unified‘s *process* contains 3 distinct phases: parsing, running and // stringifying. We do not want to go through the stringifying phase, since we // want to preserve an AST, so we cannot call `.process(..)`. Calling // `.parse(..)` is not enough though as plugins (so Prism) are executed during // the running phase. So we need to manually call the run phase (synchronously // for simplicity). // See: https://github.com/unifiedjs/unified#description return engine.runSync(ast) }

Tada! We parsed our Markdown into a syntax tree. And then we ran our plugins on that tree (done here synchronously for sake of simplicity, but you could use .run(..) to do it asynchronously). But we did not convert our tree into some other syntax like HTML or JSX. We can do that ourselves, in the render.
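If you would rather run the plugins asynchronously, the same engine works; .run() returns a Promise when you do not pass it a callback. A minimal sketch:

const parseMarkdownAsync = async content => {
  const engine = unified().use(markdown).use(prism)
  const ast = engine.parse(content)
  // .run() (without a callback) resolves with the transformed tree
  return engine.run(ast)
}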

Rendering Markdown

Now that we have our cool tree at the ready, we can render it the way we intend to. Let’s have a MarkdownRenderer component that receives the tree as an ast prop, and renders it all with React components.

const getComponent = node => { switch (node.type) { case 'root': return React.Fragment case 'paragraph': return 'p' case 'emphasis': return 'em' case 'heading': return ({ children, depth = 2 }) => { const Heading = `h${depth}` return <Heading>{children}</Heading> } /* Handle all types here … */ default: console.log('Unhandled node type', node) return React.Fragment } } const Node = node => { const Component = getComponent(node) const { children } = node return children ? ( <Component {...node}> {children.map((child, index) => ( <Node key={index} {...child} /> ))} </Component> ) : ( <Component {...node} /> ) } const MarkdownRenderer = props => <Node {...props.ast} /> export default React.memo(MarkdownRenderer)

Most of the logic of our renderer lives in the Node component. It finds out what to render based on the type key of the AST node (this is our getComponent method handling every type of node), and then renders it. If the node has children, it recursively goes into the children; otherwise it just renders the component as a final leaf.

Cleaning up the tree

Depending on which Remark plugins we use, we might encounter the following problem when trying to render our page:

Error: Error serializing .content[0].content.children[3].data.hChildren[0].data.hChildren[0].data.hChildren[0].data.hChildren[0].data.hName returned from getStaticProps in “/”. Reason: undefined cannot be serialized as JSON. Please use null or omit this value.

This happens because our AST contains keys whose values are undefined, which is not something that can be safely serialized as JSON. Next gives us the solution: either we omit the value entirely or, if we need it, replace it with null.

We’re not going to fix every path by hand though, so we need to walk that AST recursively and clean it up. I found out that this happened when using remark-prism, a plugin to enable syntax highlighting for code blocks. The plugin indeed adds a [data] object to nodes.

What we can do is walk our AST before returning it to clean up these nodes:

const cleanNode = node => { if (node.value === undefined) delete node.value if (node.tagName === undefined) delete node.tagName if (node.data) { delete node.data.hName delete node.data.hChildren delete node.data.hProperties } if (node.children) node.children.forEach(cleanNode) return node } const parseMarkdown = content => { const engine = unified().use(markdown).use(prism) const ast = engine.parse(content) const processedAst = engine.runSync(ast) cleanNode(processedAst) return processedAst }

One last thing we can do to ship less data to the client is remove the position object which exists on every single node and holds the original position in the Markdown string. It’s not a big object (it has only two keys), but when the tree gets big, it adds up quickly.

const cleanNode = node => { delete node.position /* … rest of the cleanup from before */ }

Wrapping up

That’s it folks! We managed to restrict Markdown handling to the build-/server-side code so we don’t ship a Markdown runtime to the browser, which is unnecessarily costly. We pass a tree of data to the client, which we can walk and convert into whatever React components we want.

I hope this helps. :)

The post Responsible Markdown in Next.js appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

WooCommerce With Apple Pay and Google Pay

Css Tricks - Tue, 08/10/2021 - 4:54am

(This is a sponsored post.)

Got a WooCommerce store? It behooves you to offer a variety of payment methods. Just anecdotally, I’m sure both you and I have been annoyed and even abandoned purchases when a merchant, online or otherwise, doesn’t take the payment method we want to pay with. That’s just straight-up lost sales for the merchant. But you don’t have to entirely trust anecdotal evidence; there is data you can pore over suggesting 7% of abandonment is from missing payment methods.

I’d suggest, at a minimum, you take credit cards and PayPal. There are a variety of payment gateways you can explore (and it’s worth doing so), including a number that take credit cards. The best bet there is WooCommerce Payments — supported in many big countries. It’s Stripe-backed, so it’s a lot like using the Stripe gateway anyway, except way better as it’s loaded with useful features like the fact that you manage all your payments directly in your WordPress dashboard, and Instant Deposits.

The PayPal plugin is free, so that’s kind of a no-brainer, and I’m just talking the basic integration that kicks people over to PayPal.com to pay. Some people like that, as it lets them use their PayPal account online where they may already carry a balance for online purchases and transfers.

The very next step? Apple Pay and Google Pay. Why? Like PayPal, some people strongly prefer it (including me) because of how quick and familiar it makes the checkout process. The Apple Pay and Google Pay functionality in WooCommerce goes so far as to even allow skipping the whole traditional cart and checkout process. That might allow you to make up even more than that 7% based on improved UX.

How does Apple Pay and Google Pay work on WooCommerce? Well if you’re already using WooCommerce Payments, like you should, you’re already almost there.

Enabling Apple Pay and Google Pay on WooCommerce

Apple Pay is supported via the Stripe plugin or the Square plugin, but I’d say it’s easiest with WooCommerce Payments. Under Settings > Payments, you’ll see a checkbox for “Enable express checkouts” — flip that on and you’ll be enabling both Apple Pay and Google Pay — and will have an opportunity to pick where you want them to appear.

There are a handful of prerequisites, like having an HTTPS site, but with eCommerce in general, that is not optional and you’ve probably already got it in place.

One thing I experienced when activating it is this warning:

I was able to download the domain association file from the Stripe docs, give it to my WordPress host (Flywheel), and they manually installed it for me and it worked fine.

No “account”

With PayPal, you need a PayPal account for yourself to make it work. That’s not the case with Apple Pay and Google Pay where you don’t have an account and they don’t keep a balance — they just kick that money directly over to WooCommerce Payments and you have access to that money like you would any other WooCommerce Payments transaction.

Example transaction

Here’s an order that came in (I get email notifications for orders):

Notice I can see right in the email that Apple Pay was used.

I can see the order in my dashboard like any other, and have the ability to refund it directly from there and other actions:

I barely even notice it. What payment gateway someone chooses is of little consequence to me once it’s all set up.

The user experience

Apple Pay works on Safari, both on iOS and macOS. If a user is using one of those browsers and has Apple Pay set up, they’ll see the special buttons show up on your store:

Press that button, and the user sees this immediate checkout step:

The user can change credit cards (that they have set up in Apple Pay), change the shipping address, and then, if they approve it, it’s instantly done.

It’s a pretty satisfying user experience, I must say.

Even more so on a mobile phone, where it feels like Apple Pay and Google Pay were really designed to shine. Here’s Apple Pay:

Google Pay works on Android phones nicely, but also works in desktop Chrome.

I did learn one super weird little caveat with Google Pay and desktop Chrome though! Cards in your desktop Chrome autofill settings that literally say “Google Pay” next to them don’t actually work for the WooCommerce Google Pay buttons. Only credit cards that are kinda manually added in there without that little label work. Just a little thing to be aware of when testing:

This is a rather compelling reason to use WooCommerce for eCommerce. I feel like I got this feature for free. I basically checked a box in settings, and it makes a material positive impact on my business.

The post WooCommerce With Apple Pay and Google Pay appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

CSS Nesting, specificity, and you

Css Tricks - Tue, 08/10/2021 - 4:51am

Here’s Kilian Valkhof on CSS nesting which isn’t available in browsers yet, but will be soon. There are a few differences he notes between CSS nesting and nesting in Sass or Less though. Take, for example, the following code:

div { background: #fff; & p { color: red; } border: 1px solid; }

When CSS nesting lands, that last line border: 1px solid; won’t be applied to the div like it would be in, say, Sass. That’s because with CSS nesting, any styles you want applied to that div have to be written before any nesting styles are written. I think this makes a ton of sense because I tend to enforce that style in any Sass codebases I work on (it’s just much easier to read), but I can imagine people getting confused about this the first time around.

One of the smaller and, yet for some reason, super exciting things about CSS nesting is how we’ll be able to nest media queries, as Kilian notes, just like this:

body { background: red; @media (min-width: 40rem) { & { background: blue; } } }

This is very exciting!

Direct Link to ArticlePermalink

The post CSS Nesting, specificity, and you appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Choice Words about the Upcoming Deprecation of JavaScript Dialogs

Css Tricks - Mon, 08/09/2021 - 11:23am

It might be the very first thing a lot of people learn in JavaScript:

alert("Hello, World");

One day at CodePen, we woke up to a ton of customer support tickets about their Pens being broken, which ultimately boiled down to a version of Chrome that shipped where they ripped out alert() from functioning in cross-origin iframes. And all other native “JavaScript Dialogs” like confirm(), prompt() and I-don’t-know-what-else (onbeforeunload?, .htpasswd protected assets?).

Cross-origin iframes are essentially the heart of how CodePen works. You write code, and we execute it for you in an iframe that doesn’t share the same domain as CodePen itself, as the very first line of security defense. We didn’t hear any heads up or anything, but I’m sure the plans were on display.

I tweeted out of dismay. I get that there are potential security concerns here. JavaScript dialogs look the same whether they are triggered by an iframe or not, so apparently it’s confusing-at-best when they’re triggered by an iframe, particularly a cross-origin iframe where the parent page likely has little control. Well, outside of, ya know, a website like CodePen. Chrome cites performance concerns as well, as the nature of these JavaScript dialogs is that they block the main thread when open, which essentially halts everything.

There are all sorts of security and UX-annoyance issues that can come from iframes though. That’s why sandboxing is a thing. I can do this:

<iframe sandbox></iframe>

And that sucker is locked down. If some form tried to submit something in there: nope, won’t work. What if it tries to trigger a download? Nope. Ask for device access? No way. It can’t even load any JavaScript at all. That is unless I let it:

<iframe sandbox="allow-scripts allow-downloads ...etc"></iframe>

So why not an attribute for JavaScript dialogs? Ironically, there already is one: allow-modals. I’m not entirely sure why that isn’t good enough, but as I understand it, nuking JavaScript dialogs in cross-origin iframes is just a stepping stone on the ultimate goal: removing them from the web platform entirely.

Daaaaaang. Entirely? That’s the word. Imagine the number of programming tutorials that will just be outright broken.

For now, even the cross-origin removal is delayed until January 2022, but as far as we know this is going to proceed, and then subsequent steps will happen to remove them entirely. This is spearheaded by Chrome, but the status reports that both Firefox and Safari are on board with the change. Plus, this is a specced change, so I guess we can waggle our fingers literally everywhere here, if you, like me, feel like this wasn’t particularly well-handled.

From what we’ve been told so far, the solution is to use postMessage if you really, absolutely need to keep this functionality for cross-origin iframes. That sends the string the user passes to window.alert up to the parent page and triggers the alert from there. I’m not the biggest fan here, because:

  1. postMessage is not blocking like JavaScript dialogs are. This changes application flow.
  2. I have to inject code into users’ code for this. This is new technical debt and it can break users’ expectations about their output (e.g. an extra <script> in their HTML has weird implications, like changing what :nth-child and friends select).
  3. I’m generally concerned about passing anything user-generated to a parent to execute. I’m sure there are theoretical ways to do it safely, but XSS attack vectors are always surprising in their ingenuity.

Even lower-key suggestions, like window.alert = console.log, have essentially the same issues.
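For what it’s worth, here is roughly what that postMessage shim looks like; this is only a sketch with a made-up message type, and note that nothing about it restores the blocking behavior:

// Inside the cross-origin iframe: forward alert() calls to the parent.
window.alert = message => {
  window.parent.postMessage({ type: 'iframe-alert', message: String(message) }, '*')
}

// On the parent page: listen for those messages and show the dialog there.
window.addEventListener('message', event => {
  // Real code should check event.origin against an allowlist first.
  if (event.data && event.data.type === 'iframe-alert') {
    window.alert(event.data.message)
  }
})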

Allow me to hand the mic over to others for their opinions.

Couldn’t the alert be contained to the iframe instead of showing up in the parent window?

Jaden Baptista, Twitter

Yes, please! Doesn’t that solve a big part of this? While making the UX of these dialogs more useful? Put the dang dialogs inside the <iframe>.

“Don’t break the web.” to “Don’t break 90% of the web.” and now “Don’t break the web whose content we agree with.”

Matthew Phillips, Twitter

I respect the desire to get rid of inelegant parts [of the HTML spec] that can be seen as historical mistakes and that cause implementation complexity, but I can’t shake the feeling that the existing use cases are treated with very little respect or curiosity.

Dan Abramov, Twitter

It’s weird to me this is part of the HTML spec, not the JavaScript spec. Right?!

I always thought there was a sort of “prime directive” not to break the web? I’ve literally seen web-based games that used alert as a “pause”, leveraging the blocking nature as a feature. Like: <button onclick="alert('paused')">Pause</button>[.] Funny, but true.

Ben Lesh, Twitter

A metric was cited that only 0.006% of all page views contain a cross-origin iframe that uses these functions, yet:

Seems like a misleading metric for something like confirm(). E.g. if account deletion flow is using confirm() and breaks because of a change to it, this doesn’t mean account deletion flow wasn’t important. It just means people don’t hit it on every session.

Dan Abramov, Twitter

That’s what’s extra concerning to me: alert() is one thing, but confirm() literally returns true or false, meaning it is a logical control structure in a program. Removing that breaks websites, no question. Chris Ferdinandi showed me this little obscure website that uses it:

Speaking of Chris:

The condescending “did you actually read it, it’s so clear” refrain is patronizing AF. It’s the equivalent of “just” or “simply” in developer documentation.

I read it. I didn’t understand it. That’s why I asked someone whose literal job is communicating with developers about changes Chrome makes to the platform.

This is not isolated to one developer at Chrome. The entire message thread where this change was surfaced is filled with folks begging Chrome not to move forward with this proposal because it will break all-the-things.

Chris Ferdinandi, “Google vs. the web”

And here’s Jeremy:

[…] breaking changes don’t happen often on the web. They are—and should be—rare. If that were to change, the web would suffer massively in terms of predictability.

Secondly, the onus is not on web developers to keep track of older features in danger of being deprecated. That’s on the browser makers. I sincerely hope we’re not expected to consult a site called canistilluse.com.

Jeremy Keith, “Foundations”

I’ve painted a pretty bleak picture here. To be fair, there were some tweets with the Yes!! Finally!! vibe, but they didn’t feel like critical assessments to me as much as random Google cheerleading.

Believe it or not, I generally am a fan of Google and think they do a good job of pushing the web forward. I also think it’s appropriate to waggle fingers when I see problems and request they do better. “Better” here means way more developer and user outreach to spell out the situation, way more conversation about the potential implications and transition ideas, and way more openness to bending the course ahead.

The post Choice Words about the Upcoming Deprecation of JavaScript Dialogs appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

The Large, Small, and Dynamic Viewports

Css Tricks - Mon, 08/09/2021 - 10:37am

We’ve got viewport units (e.g. vw, vh, vmin, vmax), and they are mostly pretty great. It’s cool to always have a unit available that is relative to the entire screen. But when you ask people what they want fixed up in CSS, viewport units are always on the list. The problem is that people use them to do things like position important buttons along the bottom of the screen on mobile devices. Do something like that wrong and it might cost you $8 million.

What’s “wrong”? Well, assuming that 100vh is the visible/usable area in the viewport. Whaaaat? Isn’t that the point of those units? There are tricks like this and this, but that’s why people are unhappy. None of that is intuitive and huge mistakes are all too common. Even though Safari 15 is going to make this a little better, I’d say it’s still not particularly intuitive how you have to handle it.

Bramus Van Damme covers that the spec now includes some new values:

  • The “Large Viewport”: lvh / lvw / lvmin / lvmax
  • The “Small Viewport”: svh / svw / svmin / svmax
  • The “Baby Bear Viewport”
  • The “Dynamic Viewport”: dvh / dvw / dvmin / dvmax

It seems to me the dynamic ones are the useful ones, because they will be intuitive: the units that represent the currently usable space, be it large or small.

The Dynamic Viewport is the viewport sized with *dynamic consideration of any UA interfaces*. It will automatically adjust itself in response to UA interface elements being shown or not: the value will be anything within the limits of 100vh (maximum) and 100svh (minimum).

Bramus Van Damme, “The Large, Small, and Dynamic Viewports”

Direct Link to ArticlePermalink

The post The Large, Small, and Dynamic Viewports appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Exploring the CSS Paint API: Image Fragmentation Effect

Css Tricks - Mon, 08/09/2021 - 4:27am

In my previous article, I created a fragmentation effect using CSS mask and custom properties. It was a neat effect but it has one drawback: it uses a lot of CSS code (generated using Sass). This time I am going to redo the same effect but rely on the new Paint API. This drastically reduces the amount of CSS and completely removes the need for Sass.

Here is what we are making. Like in the previous article, only Chrome and Edge support this for now.

CodePen Embed Fallback

See that? No more than five CSS declarations and yet we get a pretty cool hover animation.

What is the Paint API?

The Paint API is part of the Houdini project. Yes, “Houdini,” the strange term that everyone is talking about. A lot of articles already cover the theoretical aspect of it, so I won’t bother you with more. If I have to sum it up in a few words, I would simply say: it’s the future of CSS. The Paint API (and the other APIs that fall under the Houdini umbrella) allows us to extend CSS with our own functionalities. We no longer need to wait for the release of new features because we can do it ourselves!

From the specification:

An API for allowing web developers to define a custom CSS <image> with javascript [sic], which will respond to style and size changes.

And from the explainer:

The CSS Paint API is being developed to improve the extensibility of CSS. Specifically this allows developers to write a paint function which allows us to draw directly into an elements [sic] background, border, or content.

I think the idea is pretty clear. We can draw what we want. Let’s start with a very basic demo of background coloration:

CodePen Embed Fallback
  1. We add the paint worklet using CSS.paintWorklet.addModule('your_js_file').
  2. We register a new paint method called draw.
  3. Inside that, we create a paint() function where we do all the work. And guess what? Everything is like working with <canvas>. That ctx is the 2D context, and I simply used some well-known functions to draw a red rectangle covering the whole area.

This may look unintuitive at first glance, but notice that the main structure is always the same: the three steps above are the “copy/paste” part that you repeat for each project. The real work is the code we write inside the paint() function.
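To make those three steps concrete, here is a minimal sketch of that red-rectangle demo (the file name and the draw name are just what I happen to use here):

// In the page: load the worklet file.
CSS.paintWorklet.addModule('paint-worklet.js');

// In paint-worklet.js: register a paint method called "draw".
registerPaint('draw', class {
  paint(ctx, size) {
    // ctx behaves like a 2D canvas context; fill the whole area with red
    ctx.fillStyle = 'red';
    ctx.fillRect(0, 0, size.width, size.height);
  }
});

// And in the CSS, use it with: background: paint(draw);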

Let’s add a variable:

CodePen Embed Fallback

As you can see, the logic is pretty simple. We define the getter inputProperties with our variables as an array. We add properties as a third parameter to paint() and later we get our variable using properties.get().
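In code, that part looks something like this (a sketch; --my-color is just a stand-in variable name):

registerPaint('draw', class {
  // Tell the worklet which custom properties it should read
  static get inputProperties() { return ['--my-color']; }
  paint(ctx, size, properties) {
    // properties.get() returns the value of the variable on the element
    ctx.fillStyle = properties.get('--my-color').toString();
    ctx.fillRect(0, 0, size.width, size.height);
  }
});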

That’s it! Now we have everything we need to build our complex fragmentation effect.

Building the mask

You may wonder why we’d reach for the Paint API to create a fragmentation effect. We said it’s a tool to draw images, so how will it allow us to fragment an image?

In the previous article, I did the effect using different mask layers, where each one is a square defined with a gradient (remember that a gradient is an image), so we got a kind of matrix, and the trick was to adjust the alpha channel of each one individually.

This time, instead of using many gradients we will define only one custom image for our mask and that custom image will be handled by our paint API.

An example please!

CodePen Embed Fallback

In the above, I have created an image having an opaque color covering the left part and a semi-transparent one covering the right part. Applying this image as a mask gives us the logical result of a half-transparent image.
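The paint() function for that is short; something like this sketch, where only the alpha channel really matters since we are painting a mask:

registerPaint('split', class {
  paint(ctx, size) {
    // Fully opaque on the left half: that part of the element stays visible
    ctx.fillStyle = 'rgba(0, 0, 0, 1)';
    ctx.fillRect(0, 0, size.width / 2, size.height);
    // Semi-transparent on the right half: that part shows at half opacity
    ctx.fillStyle = 'rgba(0, 0, 0, 0.5)';
    ctx.fillRect(size.width / 2, 0, size.width / 2, size.height);
  }
});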

Now all we need to do is to split our image to more parts. Let’s define two variables and update our code:

CodePen Embed Fallback

The relevant part of the code is the following:

const n = properties.get('--f-n'); const m = properties.get('--f-m'); const w = size.width/n; const h = size.height/m; for(var i=0;i<n;i++) { for(var j=0;j<m;j++) { ctx.fillStyle = 'rgba(0,0,0,'+(Math.random())+')'; ctx.fillRect(i*w, j*h, w, h); } }

N and M define the dimension of our matrix of rectangles. W and H are the size of each rectangle. Then we have a basic FOR loop to fill each rectangle with a random transparent color.

With a little JavaScript, we get a custom mask that we can easily control by adjusting the CSS variables:

CodePen Embed Fallback

Now, we need to control the alpha channel in order to create the fading effect of each rectangle and build the fragmentation effect.

Let’s introduce a third variable that we use for the alpha channel that we also change on hover.

CodePen Embed Fallback

We defined a CSS custom property as a <number> that we transition from 1 to 0, and that same property is used to define the alpha channel of our rectangles. Nothing fancy will happen on hover because all the rectangles will fade the same way.

We need a trick to prevent fading of all the rectangles at the same time, instead creating a delay between them. Here is an illustration to explain the idea I am going to use:

The above is showing the alpha animation for two rectangles. First, we define a variable L that should be greater than or equal to 1. Then, for each rectangle of our matrix (i.e. for each alpha channel), we perform a transition between X and Y where X - Y = L, so we have the same overall duration for all the alpha channels. X should be greater than or equal to 1 and Y smaller than or equal to 0.

Wait, shouldn’t the alpha value be in the [1 0] range?

Yes, it should! And all the tricks that we’re working on rely on that. Above, the alpha is animating from 8 to -2, meaning we have an opaque color in the [8 1] range, a transparent one in the [0 -2] range and an animation within [1 0]. In other words, any value bigger than 1 will have the same effect as 1, and any value smaller than 0 will have the same effect as 0.

Animation within [1 0] will not happen at the same time for both our rectangles. Rectangle 2 will reach [1 0] before Rectangle 1 will. We apply this to all the alpha channels to get our delayed animations.

In our code we will update this:

rgba(0,0,0,'+(o)+')

…to this:

rgba(0,0,0,'+((Math.random()*(l-1) + 1) - (1-o)*l)+')

L is the variable illustrated previously, and O is the value of our CSS variable that transitions from 1 to 0.

When O=1, we have (Math.random()*(l-1) + 1). Considering the fact that the random() function gives us a value within the [0 1] range, the final value will be in the [L 1] range.

When O=0, we have (Math.random()*(l-1) + 1 - l), and a value within the [0 1-L] range.

L is our variable to control the delay.

Let’s see this in action:

CodePen Embed Fallback

We are getting closer. We have a cool fragmentation effect but not the one we saw in the beginning of the article. This one isn’t as smooth.

The issue is related to the random() function. We said that each alpha channel needs to animate between X and Y, so logically those values need to remain the same. But the paint() function is called a bunch during the transition, so each time, the random() function gives us different X and Y values for each alpha channel; hence the “random” effect we are getting.

To fix this, we need to find a way to store the generated values so they are always the same for each call of the paint() function. Let’s consider a pseudo-random function: a function that always generates the same sequence of values. In other words, we want to control the seed.

Unfortunately, we cannot do this with the JavaScript’s built-in random() function, so like any good developer, let’s pick one up from Stack Overflow:

const mask = 0xffffffff; const seed = 30; /* update this to change the generated sequence */ let m_w = (123456789 + seed) & mask; let m_z = (987654321 - seed) & mask; let random = function() { m_z = (36969 * (m_z & 65535) + (m_z >>> 16)) & mask; m_w = (18000 * (m_w & 65535) + (m_w >>> 16)) & mask; var result = ((m_z << 16) + (m_w & 65535)) >>> 0; result /= 4294967296; return result; }

And the result becomes:

CodePen Embed Fallback

We have our fragmentation effect without complex code:

  • a basic nested loop to create NxM rectangles
  • a clever formula for the channel alpha to create the transition delay
  • a ready random() function taken from the Net

That’s it! All you have to do is to apply the mask property to any element and adjust the CSS variables.

Fighting the gaps!

If you play with the above demos you will notice, in some particular cases, strange gaps between the rectangles.

To avoid this, we can extend the area of each rectangle with a small offset.

We update this:

ctx.fillRect(i*w, j*h, w, h);

…with this:

ctx.fillRect(i*w-.5, j*h-.5, w+.5, h+.5);

It creates a small overlap between the rectangles that compensates for the gaps between them. There is no particular logic with the value 0.5 I used. You can go bigger or smaller based on your use case.

CodePen Embed Fallback Want more shapes?

Can the above be extended to consider more than rectangular shape? Sure it can! Let’s not forget that we can use Canvas to draw any kind of shape — unlike pure CSS shapes where we sometimes need some hacky code. Let’s try to build that triangular fragmentation effect.

After searching the web, I found something called Delaunay triangulation. I won’t go into the deep theory behind it, but it’s an algorithm for a set of points to draw connected triangles with specific properties. There are lots of ready-to-use implementations of it, but we’ll go with Delaunator because it’s supposed to be the fastest of the bunch.

We first define a set of points (we will use random() here) then run Delaunator to generate the triangles for us. In this case, we only need one variable that defines the number of points.

const n = properties.get('--f-n'); const o = properties.get('--f-o'); const w = size.width; const h = size.height; const l = 7; var dots = [[0,0],[0,w],[h,0],[w,h]]; /* we always include the corners */ /* we generate N random points within the area of the element */ for (var i = 0; i < n; i++) { dots.push([random() * w, random() * h]); } /**/ /* We call Delaunator to generate the triangles*/ var delaunay = Delaunator.from(dots); var triangles = delaunay.triangles; /**/ for (var i = 0; i < triangles.length; i += 3) { /* we loop the triangles points */ /* we draw the path of the triangles */ ctx.beginPath(); ctx.moveTo(dots[triangles[i]][0] , dots[triangles[i]][1]); ctx.lineTo(dots[triangles[i + 1]][0], dots[triangles[i + 1]][1]); ctx.lineTo(dots[triangles[i + 2]][0], dots[triangles[i + 2]][1]); ctx.closePath(); /**/ var alpha = (random()*(l-1) + 1) - (1-o)*l; /* the alpha value */ /* we fill the area of triangle with the semi-transparent color */ ctx.fillStyle = 'rgba(0,0,0,'+alpha+')'; /* we consider stroke to fight the gaps */ ctx.strokeStyle = 'rgba(0,0,0,'+alpha+')'; ctx.stroke(); ctx.fill(); }

I have nothing more to add to the comments in the above code. I simply used some basic JavaScript and Canvas stuff and yet we have a pretty cool effect.

CodePen Embed Fallback

We can make even more shapes! All we have to do is to find an algorithm for it.

I cannot move on without doing the hexagon one!

CodePen Embed Fallback

I took the code from this article written by Izan Pérez Cosano. Our variable is now R, which will define the dimension of one hexagon.

What’s next?

Now that we have built our fragmentation effect, let’s focus on the CSS. Notice that the effect is as simple as changing the opacity value (or the value of whichever property you are working with) of an element on its hover state.

Opacity animation

img { opacity:1; transition:opacity 1s; } img:hover { opacity:0; }

Fragmentation effect

img { -webkit-mask: paint(fragmentation); --f-o:1; transition:--f-o 1s; } img:hover { --f-o:0; }

This means we can easily integrate this kind of effect to create more complex animations. Here are a bunch of ideas!

Responsive image slider CodePen Embed Fallback

Another version of the same slider:

CodePen Embed Fallback Noise effect CodePen Embed Fallback Loading screen CodePen Embed Fallback Card hover effect CodePen Embed Fallback That’s a wrap

And all of this is just the tip of the iceberg of what can be achieved using the Paint API. I’ll end with two important points:

  • The Paint API is 90% <canvas>, so the more you know about <canvas>, the more fancy things you can do. Canvas is widely used, which means there’s a bunch of documentation and writing about it to get you up to speed. Hey, here’s one right here on CSS-Tricks!
  • The Paint API removes all the complexity from the CSS side of things. There’s no dealing with complex and hacky code to draw cool stuff. This makes CSS code so much easier to maintain, not to mention less prone to error.

The post Exploring the CSS Paint API: Image Fragmentation Effect appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

An Interview with Fontfabric

Typography - Sat, 08/07/2021 - 7:08pm

Read the book, Typographic Firsts

We interviewed the brilliantly talented folk at Fontfabric. Their clients include high-profile brands like Nike, Lipton, Hyundai, CNET, and the US national football team. We talked about how they got started, what makes them tick, and about their new release, Audela.

The post An Interview with Fontfabric appeared first on I Love Typography.

SVG Gobbler

Css Tricks - Fri, 08/06/2021 - 9:32am

Great little project from Ross Moody:

SVG Gobbler is a browser extension that finds the vector content on the page you’re viewing and gives you the option to download, optimize, copy, view the code, or export it as an image.

When a site uses SVG as an <img>, you can right-click/save-as like any other image. But when SVG is inline as <svg> (which often makes sense for styling reasons), it’s harder to snag a copy of it. I usually end up opening DevTools, finding the <svg>, right clicking that, using Copy > Copy outerHTML, pasting into a text file, and saving out as whatever.svg. A little more toil than I’d like.

With SVG Gobbler, I click the browser extension and it presents me a nice grid of options:

I can quickly download them from here, but notice it will even optimize them for me if I like, or export as a PNG instead. Neat! I’ve already made use of it, and I only just installed it today.

By way of feedback, I’d say it would be nice to:

  1. Have a way to size the PNG export (might as well allow me to make it huge if I need to).
  2. Export in next-gen formats that might even be better than PNG as far as file size, like WebP or AVIF.
  3. SVG that has a fill of white should be shown on a non-white background so you can see what they are.
  4. Offer, optionally, to let me name the file as I download it rather than always naming it gobbler-original.svg

A stretch goal would be to somehow extract the CSS used on the site into the <svg>. I notice some SVGs it finds look very different when exported, because the page was making use of outside-the-SVG styles to style it, which are lost when exported.

I wonder if the changes to Safari extensions will allow Ross to easily port this to Safari (even Mobile Safari?!).

The post SVG Gobbler appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

New Nuxt Features past v2.10

Css Tricks - Fri, 08/06/2021 - 5:43am

Nuxt offers an incredible developer experience, with a lot of performance and application setup best practices baked in. In recent releases, they’ve been working on taking this developer experience to the next level, with some newer features that speed up and simplify developer processes. Let’s explore some today.

I set up a repo and site for you to explore some of these features! You can check them out here:

Demo Live site Nuxt Content

You no longer have to pair Nuxt with an external headless CMS and do all of the setup, particularly if you’re not looking for something at a huge scale, but something smaller like a blog. Nuxt Content offers a Git-based headless CMS where you can write content in Markdown, CSV, YAML, or XML, based on your preference. There are some out-of-the-box configuration settings available to you, and writing custom configurations is as simple as creating a property.

What this means for development: you can write static Markdown files in a directory, and that can be your blog! We’re using the same dynamic pages API that you would typically use in Nuxt to generate this content.
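For instance, a dynamic page can pull a Markdown file in with the $content helper; here is a rough sketch (the directory and field names are just examples):

// pages/blog/_slug.vue (the script portion)
export default {
  async asyncData({ $content, params }) {
    // Reads content/articles/<slug>.md and returns it as structured data
    const article = await $content('articles', params.slug).fetch()
    return { article }
  }
}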

It also offers full-text search out of the box, which is a lovely feature to add so quickly to a blog without having to integrate a third-party service.

This tutorial by Debbie O’Brien is an incredible guide; it walks you through every piece of setting it up. Highly recommended.

Nuxt Components

One thing I noticed I was doing again and again and again was typing import code in all my components. I do have some snippets to make this a bit faster, but adding them each in every file was still interrupting the flow of my work just a bit.

The Nuxt components module scans, imports, and registers components so that we no longer need to do this. The components must be in the components directory, but we can use them in layouts, pages, and components themselves.

The addition of this module is a small change to our nuxt.config.js:

export default { components: true }

Seriously, that’s it!

If you’d like a deep dive, this incredibly comprehensive guide by Krutie Patel has you covered.

If you use the component repeatedly, Nuxt will do some nice optimizations such as automatically creating a shared chunk for the component. Be mindful when using this on huge projects, though, as it may impact build times. 

Nuxt Image

Nuxt Image is a newer module that offers seamless and quick resizes and transforms for optimized responsive images. You can use their built-in optimizer, or work with 10+ ready-to-use popular providers such as Cloudinary or Fastly.

The code output from using their API are standard <img> and <picture> tags, so there’s no obfuscation when integrating them into your workflow.

After adding the module, you’ll be able to add configuration to the images via an image property in nuxt.config.js, and designate breakpoints, providers, and other configurations:

export default { image: { // The screen sizes predefined by `@nuxt/image`: screens: { xs: 320, sm: 640, md: 768, lg: 1024, xl: 1280, xxl: 1536, '2xl': 1536 }, // Generate images to `/_nuxt/image/file.png` staticFilename: '[publicPath]/images/[name]-[hash][ext]', domains: [ 'images.unsplash.com' ], alias: { unsplash: 'https://images.unsplash.com' } } }

This is just a sampling of some of the options available to you, provided as an example. The full documentation is here.

And then the usage is similar to any Vue component:

<nuxt-img src="/nuxt-icon.png" />

or

<nuxt-picture src="/nuxt-icon.png" />

Further information and all options are documented here. Hat tip to Ben Hong for letting me know this was available. He has a few Nuxt resources out there that are worth exploring, too!

Sample Repo

I’ve created a sample repo for you to explore that uses all of this functionality. It’s a small recipe blog with nuxt-content for the recipe entries, Nuxt components so that I didn’t need to define imports, and nuxt-image for the image transformations.

Demo Live site

You can visit it here to see it all in action, fork it, play around with it, and make it your own.

You can see in it how I used the $img API in Nuxt image for background images here, too, which is not yet fully documented.

Nuxt offers incredible developer experience. Nuxt is even coming out with a new version soon with more updates, always expertly implemented. It’s why using Nuxt is continually such a joy, and proves to be a great framework for teams and single developers alike.

The post New Nuxt Features past v2.10 appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Efficient Infinite Utility Helpers Using Inline CSS Custom Properties and calc()

Css Tricks - Fri, 08/06/2021 - 4:55am

I recently wrote a very basic Sass loop that outputs several padding and margin utility classes. Nothing fancy, really, just a Sass map with 11 spacing values, looped over to create classes for both padding and margin on each side. As we’ll see, this works, but it ends up being a pretty hefty amount of CSS. We’re going to refactor it to use CSS custom properties and make the system much more trim.

Here’s the original Sass implementation:

$space-stops: ( '0': 0, '1': 0.25rem, '2': 0.5rem, '3': 0.75rem, '4': 1rem, '5': 1.25rem, '6': 1.5rem, '7': 1.75rem, '8': 2rem, '9': 2.25rem, '10': 2.5rem, ); @each $key, $val in $space-stops { .p-#{$key} { padding: #{$val} !important; } .pt-#{$key} { padding-top: #{$val} !important; } .pr-#{$key} { padding-right: #{$val} !important; } .pb-#{$key} { padding-bottom: #{$val} !important; } .pl-#{$key} { padding-left: #{$val} !important; } .px-#{$key} { padding-right: #{$val} !important; padding-left: #{$val} !important; } .py-#{$key} { padding-top: #{$val} !important; padding-bottom: #{$val} !important; } .m-#{$key} { margin: #{$val} !important; } .mt-#{$key} { margin-top: #{$val} !important; } .mr-#{$key} { margin-right: #{$val} !important; } .mb-#{$key} { margin-bottom: #{$val} !important; } .ml-#{$key} { margin-left: #{$val} !important; } .mx-#{$key} { margin-right: #{$val} !important; margin-left: #{$val} !important; } .my-#{$key} { margin-top: #{$val} !important; margin-bottom: #{$val} !important; } }

This very much works. It outputs all the utility classes we need. But, it can also get bloated quickly. In my case, they were about 8.6kb uncompressed and under 1kb compressed. (Brotli was 542 bytes, and gzip came in at 925 bytes.)

Since they are extremely repetitive, they compress well, but I still couldn’t shake the feeling that all these classes were overkill. Plus, I hadn’t even done any small/medium/large breakpoints which are fairly typical for these kinds of helper classes.

Here’s a contrived example of what the responsive version might look like with small/medium/large classes added. We’ll re-use the $space-stops map defined previously and throw our repetitious code into a mixin:

@mixin finite-spacing-utils($bp: '') { @each $key, $val in $space-stops { .p-#{$key}#{$bp} { padding: #{$val} !important; } .pt-#{$key}#{$bp} { padding-top: #{$val} !important; } .pr-#{$key}#{$bp} { padding-right: #{$val} !important; } .pb-#{$key}#{$bp} { padding-bottom: #{$val} !important; } .pl-#{$key}#{$bp} { padding-left: #{$val} !important; } .px-#{$key}#{$bp} { padding-right: #{$val} !important; padding-left: #{$val} !important; } .py-#{$key}#{$bp} { padding-top: #{$val} !important; padding-bottom: #{$val} !important; } .m-#{$key}#{$bp} { margin: #{$val} !important; } .mt-#{$key}#{$bp} { margin-top: #{$val} !important; } .mr-#{$key}#{$bp} { margin-right: #{$val} !important; } .mb-#{$key}#{$bp} { margin-bottom: #{$val} !important; } .ml-#{$key}#{$bp} { margin-left: #{$val} !important; } .mx-#{$key}#{$bp} { margin-right: #{$val} !important; margin-left: #{$val} !important; } .my-#{$key}#{$bp} { margin-top: #{$val} !important; margin-bottom: #{$val} !important; } } } @include finite-spacing-utils; @media (min-width: 544px) { @include finite-spacing-utils($bp: '_sm'); } @media (min-width: 768px) { @include finite-spacing-utils($bp: '_md'); } @media (min-width: 1024px) { @include finite-spacing-utils($bp: '_lg'); }

That clocks in at about 41.7kb uncompressed (and about 1kb with Brotli, and 3kb with gzip). It still compresses well, but it’s a bit ridiculous.

I knew it was possible to reference data-* attributes from within CSS using the attr() function, so I wondered if it was possible to use calc() and attr() together to create dynamically-calculated spacing utility helpers via data-* attributes — like data-m="1" or data-m="1@md" — then in the CSS to do something like margin: calc(attr(data-m) * 0.25rem) (assuming I’m using a spacing scale incrementing at 0.25rem intervals). That could be very powerful.

But the end of that story is: no, you (currently) can’t use attr() with any property except the content property. Bummer. But in searching for attr() and calc() information, I found this intriguing Stack Overflow comment by Simon Rigét that suggests setting a CSS variable directly within an inline style attribute. Aha!

So it’s possible to do something like <div style="--p: 4;"> then, in CSS:

:root { --p: 0; } [style*='--p:'] { padding: calc(0.25rem * var(--p)) !important; }

In the case of the style="--p: 4;" example, you’d effectively end up with padding: 1rem !important;.

… and now you have an infinitely scalable spacing utility class monstrosity helper.

Here’s what that might look like in CSS:

:root { --p: 0; --pt: 0; --pr: 0; --pb: 0; --pl: 0; --px: 0; --py: 0; --m: 0; --mt: 0; --mr: 0; --mb: 0; --ml: 0; --mx: 0; --my: 0; } [style*='--p:'] { padding: calc(0.25rem * var(--p)) !important; } [style*='--pt:'] { padding-top: calc(0.25rem * var(--pt)) !important; } [style*='--pr:'] { padding-right: calc(0.25rem * var(--pr)) !important; } [style*='--pb:'] { padding-bottom: calc(0.25rem * var(--pb)) !important; } [style*='--pl:'] { padding-left: calc(0.25rem * var(--pl)) !important; } [style*='--px:'] { padding-right: calc(0.25rem * var(--px)) !important; padding-left: calc(0.25rem * var(--px)) !important; } [style*='--py:'] { padding-top: calc(0.25rem * var(--py)) !important; padding-bottom: calc(0.25rem * var(--py)) !important; } [style*='--m:'] { margin: calc(0.25rem * var(--m)) !important; } [style*='--mt:'] { margin-top: calc(0.25rem * var(--mt)) !important; } [style*='--mr:'] { margin-right: calc(0.25rem * var(--mr)) !important; } [style*='--mb:'] { margin-bottom: calc(0.25rem * var(--mb)) !important; } [style*='--ml:'] { margin-left: calc(0.25rem * var(--ml)) !important; } [style*='--mx:'] { margin-right: calc(0.25rem * var(--mx)) !important; margin-left: calc(0.25rem * var(--mx)) !important; } [style*='--my:'] { margin-top: calc(0.25rem * var(--my)) !important; margin-bottom: calc(0.25rem * var(--my)) !important; }

This is a lot like the first Sass loop above, but there’s no loop going 11 times — and yet it’s infinite. It’s about 1.4kb uncompressed, 226 bytes with Brotli, or 284 bytes gzipped.

If you wanted to extend this for breakpoints, the unfortunate news is that you can’t put the “@” character in CSS variable names (although emojis and other UTF-8 characters are strangely permitted). So you could probably set up variable names like p_sm or sm_p. You’d have to add some extra CSS variables and some media queries to handle all this, but it won’t blow up exponentially the way traditional CSS classnames created with a Sass for-loop do.

Here’s the equivalent responsive version. We’ll use a Sass mixin again to cut down the repetition:

:root { --p: 0; --pt: 0; --pr: 0; --pb: 0; --pl: 0; --px: 0; --py: 0; --m: 0; --mt: 0; --mr: 0; --mb: 0; --ml: 0; --mx: 0; --my: 0; } @mixin infinite-spacing-utils($bp: '') { [style*='--p#{$bp}:'] { padding: calc(0.25rem * var(--p#{$bp})) !important; } [style*='--pt#{$bp}:'] { padding-top: calc(0.25rem * var(--pt#{$bp})) !important; } [style*='--pr#{$bp}:'] { padding-right: calc(0.25rem * var(--pr#{$bp})) !important; } [style*='--pb#{$bp}:'] { padding-bottom: calc(0.25rem * var(--pb#{$bp})) !important; } [style*='--pl#{$bp}:'] { padding-left: calc(0.25rem * var(--pl#{$bp})) !important; } [style*='--px#{$bp}:'] { padding-right: calc(0.25rem * var(--px#{$bp})) !important; padding-left: calc(0.25rem * var(--px)#{$bp}) !important; } [style*='--py#{$bp}:'] { padding-top: calc(0.25rem * var(--py#{$bp})) !important; padding-bottom: calc(0.25rem * var(--py#{$bp})) !important; } [style*='--m#{$bp}:'] { margin: calc(0.25rem * var(--m#{$bp})) !important; } [style*='--mt#{$bp}:'] { margin-top: calc(0.25rem * var(--mt#{$bp})) !important; } [style*='--mr#{$bp}:'] { margin-right: calc(0.25rem * var(--mr#{$bp})) !important; } [style*='--mb#{$bp}:'] { margin-bottom: calc(0.25rem * var(--mb#{$bp})) !important; } [style*='--ml#{$bp}:'] { margin-left: calc(0.25rem * var(--ml#{$bp})) !important; } [style*='--mx#{$bp}:'] { margin-right: calc(0.25rem * var(--mx#{$bp})) !important; margin-left: calc(0.25rem * var(--mx#{$bp})) !important; } [style*='--my#{$bp}:'] { margin-top: calc(0.25rem * var(--my#{$bp})) !important; margin-bottom: calc(0.25rem * var(--my#{$bp})) !important; } } @include infinite-spacing-utils; @media (min-width: 544px) { @include infinite-spacing-utils($bp: '_sm'); } @media (min-width: 768px) { @include infinite-spacing-utils($bp: '_md'); } @media (min-width: 1024px) { @include infinite-spacing-utils($bp: '_lg'); }

That’s about 6.1kb uncompressed, 428 bytes with Brotli, and 563 with gzip.

Do I think that writing HTML like <div style="--px:2; --my:4;"> is pleasing to the eye, or good developer ergonomics… no, not particularly. But could this approach be viable in situations where you (for some reason) need extremely minimal CSS, or perhaps no external CSS file at all? Yes, I sure do.

It’s worth pointing out here that CSS variables assigned in inline styles do not leak out. They’re scoped only to the current element and don’t change the value of the variable globally. Thank goodness! The one oddity I have found so far is that DevTools (at least in Chrome, Firefox, and Safari) do not report the styles using this technique in the “Computed” styles tab.

Also worth mentioning is that I’ve used good old padding  and margin properties with -top, -right, -bottom, and -left, but you could use the equivalent logical properties like padding-block and padding-inline. It’s even possible to shave off just a few more bytes by selectively mixing and matching logical properties with traditional properties. I managed to get it down to 400 bytes with Brotli and 521 with gzip this way.

Other use cases

This seems most appropriate for things that are on a (linear) incremental scale (which is why padding and margin seems like a good use case) but I could see this potentially working for widths and heights in grid systems (column numbers and/or widths). Maybe for typographic scales (but maybe not).

I’ve focused a lot on file size, but there may be some other uses here I’m not thinking of. Perhaps you wouldn’t write your code in this way, but a critical CSS tool could potentially refactor the code to use this approach.

Digging deeper

As I dug deeper, I found that Ahmad Shadeed blogged in 2019 about mixing calc() with CSS variable assignments within inline styles particularly for avatar sizes. Miriam Suzanne’s article on Smashing Magazine in 2019 didn’t use calc() but shared some amazing things you can do with variable assignments in inline styles.

The post Efficient Infinite Utility Helpers Using Inline CSS Custom Properties and calc() appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

gridless.design

Css Tricks - Fri, 08/06/2021 - 4:48am

Donnie D’Amato built a whole site around the thesis that “digital designers still expect to use the grid while experienced layout engineers have moved beyond it.” The idea isn’t that we should never literally use display: grid; but rather that strict adherence to an overall page grid isn’t necessary. Brad’s reaction was interesting, as someone in and out of a lot more projects than I am:

One of the most frequent, confusing conversations w/ designers is “No, the pink lines that overlay design comps aren’t all that helpful for how things actually work in the browser.”

[…] throw your transparent pink 12-column grids in the trash can.

Brad Frost, “Link post to gridless.design”

Donnie feels this is all in the spirit of responsive design, and I’m inclined to agree, except that browser technology has evolved quite a bit since the coining of responsive design and it might be time to call it something new. “Content-driven design” is one of Donnie’s headers and that’s a nice phrase.

This all resonated with Michelle as well:

CSS layout features like flexbox and Grid enable us to build more flexible layouts that prioritise content. We talk about intrinsic and extrinsic sizing in CSS — sizing based on both content and context. The promised container queries specification will put even more power in the hands of developers. But it feels to me like the design process is still stuck in the past.

Michelle Barker, “Is it Time to Ditch the Design Grid?”

When container queries are really here, overall page layouts are really going to be an endangered species. Donnie knows:

[…] you should truly consider all other options before using a [browser window size] breakpoint. Ask, is the component expected to always be related to the page size (headers, modals, etc.)? Then a breakpoint might be acceptable. However, components that are placed deep within the page should not be using breakpoints to inform their layout.

Direct Link to ArticlePermalink

The post gridless.design appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Three Buggy React Code Examples and How to Fix Them

Css Tricks - Thu, 08/05/2021 - 4:24am

There’s usually more than one way to code a thing in React. And while it’s possible to create the same thing different ways, there may be one or two approaches that technically work “better” than others. I actually run into plenty of examples where the code used to build a React component is technically “correct” but opens up issues that are totally avoidable.

So, let’s look at some of those examples. I’m going to provide three instances of “buggy” React code that technically gets the job done for a particular situation, and ways it can be improved to be more maintainable, resilient, and ultimately functional.

This article assumes some knowledge of React hooks. It isn’t an introduction to hooks—you can find a good introduction from Kingsley Silas on CSS Tricks, or take a look at the React docs to get acquainted with them. We also won’t be looking at any of that exciting new stuff coming up in React 18. Instead, we’re going to look at some subtle problems that won’t completely break your application, but might creep into your codebase and can cause strange or unexpected behavior if you’re not careful.

Buggy code #1: Mutating state and props

It’s a big anti-pattern to mutate state or props in React. Don’t do this!

This is not a revolutionary piece of advice—it’s usually one of the first things you learn if you’re getting started with React. But you might think you can get away with it (because it seems like you can in some cases).

I’m going to show you how bugs might creep into your code if you’re mutating props. Sometimes you’ll want a component that will show a transformed version of some data. Let’s create a parent component that holds a count in state and a button that will increment it. We’ll also make a child component that receives the count via props and shows what the count would look like with 5 added to it.

Here’s a Pen that demonstrates a naïve approach:

CodePen Embed Fallback

This example works. It does what we want it to do: we click the increment button and it adds one to the count. Then the child component is re-rendered to show what the count would look like with 5 added on. We changed the props in the child here and it works fine! Why has everybody been telling us mutating props is so bad?

Well, what if later we refactor the code and need to hold the count in an object? This might happen if we need to store more properties in the same useState hook as our codebase grows larger.

Instead of incrementing the number held in state, we increment the count property of an object held in state. In our child component, we receive the object through props and add to the count property to show what the count would look like if we added 5.

Let’s see how this goes. Try incrementing the state a few times in this pen:

CodePen Embed Fallback

Oh no! Now when we increment the count it seems to add 6 on every click! Why is this happening? The only thing that changed between these two examples is that we used an object instead of a number!

More experienced JavaScript programmers will know that the big difference here is that primitive types such as numbers, booleans and strings are immutable and passed by value, whereas objects are passed by reference.

This means that:

  • If you put a number in a variable, assign it to a second variable, then change the second variable, the first variable will not be changed.
  • If you put an object in a variable, assign it to a second variable, then change a property on the second variable, the first variable will reflect that change, because both variables point to the same object.

When the child component changes a property of the state object, it’s adding 5 to the same object React uses when updating the state. This means that when our increment function fires after a click, React uses the same object after it has been manipulated by our child component, which shows as adding 6 on every click.
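If the value vs. reference distinction still feels fuzzy, here’s a quick plain-JavaScript sketch of the same difference outside of React (the variable names are made up purely for illustration):

let a = 1;
let b = a;      // b gets its own copy of the value
b = b + 1;
console.log(a); // 1, the original number is untouched

const first = { count: 1 };
const second = first;            // second points at the very same object
second.count = second.count + 1;
console.log(first.count);        // 2, because there is only one object being shared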

The solution

There are multiple ways to avoid these problems. For a situation as simple as this, you could avoid any mutation and express the change in a render function:

function Child({state}){ return <div><p>count + 5 = {state.count + 5} </p></div> }

However, in a more complicated case, you might need to reuse state.count + 5 multiple times or pass the transformed data to multiple children.

One way to do this is to create a copy of the prop in the child, then transform the properties on the cloned data. There are a couple of different ways to clone objects in JavaScript, each with various tradeoffs. You can use object literal and spread syntax:

function Child({state}){ const copy = {...state}; return <div><p>count + 5 = {copy.count + 5} </p></div> }

But if there are nested objects, they will still reference the old version. Instead, you could convert the object to JSON then immediately parse it:

JSON.parse(JSON.stringify(myobject))
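To make the shallow vs. deep copy difference concrete, here’s a small sketch (with a made-up nested state shape) comparing spread syntax with the JSON round-trip:

const state = { count: 1, user: { name: 'Ada' } };

const shallow = { ...state };  // copies the top level only
shallow.user.name = 'Grace';
console.log(state.user.name);  // 'Grace', the nested object is still shared

const deep = JSON.parse(JSON.stringify(state));
deep.user.name = 'Linus';
console.log(state.user.name);  // still 'Grace', the deep copy is fully independent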

This will work for most simple object types. But if your data uses more exotic types, you might want to use a library. A popular method would be to use lodash’s cloneDeep. Here’s a Pen that shows a fixed version using object literal and spread syntax to clone the object:

CodePen Embed Fallback

One more option is to use a library like Immutable.js. If you have a rule to only use immutable data structures, you’ll be able to trust that your data won’t get unexpectedly mutated. Here’s one more example using the immutable Map class to represent the state of the counter app:

CodePen Embed Fallback

Buggy code #2: Derived state

Let’s say we have a parent and a child component. They both have useState hooks holding a count. And let’s say the parent passes its state down as a prop to the child, which the child uses to initialize its count.

function Parent(){ const [parentCount,setParentCount] = useState(0); return <div> <p>Parent count: {parentCount}</p> <button onClick={()=>setParentCount(c=>c+1)}>Increment Parent</button> <Child parentCount={parentCount}/> </div>; } function Child({parentCount}){ const [childCount,setChildCount] = useState(parentCount); return <div> <p>Child count: {childCount}</p> <button onClick={()=>setChildCount(c=>c+1)}>Increment Child</button> </div>; }

What happens to the child’s state when the parent’s state changes, and the child is re-rendered with different props? Will the child state remain the same or will it change to reflect the new count that was passed to it?

We’re dealing with a function, so the child state should get blown away and replaced right? Wrong! The child’s state trumps the new prop from the parent. After the child component’s state is initialized in the first render, it’s completely independent from any props it receives.

CodePen Embed Fallback

React stores component state for each component in the tree and the state only gets blown away when the component is removed. Otherwise, the state won’t be affected by new props.

Using props to initialize state is called “derived state” and it is a bit of an anti-pattern. It removes the benefit of a component having a single source of truth for its data.

Using the key prop

But what if we have a collection of items we want to edit using the same type of child component, and we want the child to hold a draft of the item we’re editing? We’d need to reset the state of the child component each time we switch items from the collection.

Here’s an example: Let’s write an app where we can keep a daily list of five things we’re thankful for. We’ll use a parent with state initialized as an empty array which we’re going to fill up with five string statements.

Then we’ll have a child component with a text input to enter our statement.

We’re about to use a criminal level of over-engineering in our tiny app, but it’s to illustrate a pattern you might need in a more complicated project: We’re going to hold the draft state of the text input in the child component.

Lowering the state to the child component can be a performance optimization to prevent the parent re-rendering when the input state changes. Otherwise the parent component will re-render every time there is a change in the text input.

We’ll also pass down an example statement as a default value for each of the five notes we’ll write.

Here’s a buggy way to do this:

// These are going to be our default values for each of the five notes // To give the user an idea of what they might write const ideaList = ["I'm thankful for my friends", "I'm thankful for my family", "I'm thankful for my health", "I'm thankful for my hobbies", "I'm thankful for CSS Tricks Articles"] const maxStatements = 5; function Parent(){ const [list,setList] = useState([]); // Handler function for when the statement is completed // Sets state providing a new array combining the current list and the new item function onStatementComplete(payload){ setList(list=>[...list,payload]); } // Function to reset the list back to an empty array function reset(){ setList([]); } return <div> <h1>Your thankful list</h1> <p>A five point list of things you're thankful for:</p> {/* First we list the statements that have been completed*/} {list.map((item,index)=>{return <p>Item {index+1}: {item}</p>})} {/* If the length of the list is under our max statements length, we render the statement form for the user to enter a new statement. We grab an example statement from the ideaList and pass down the onStatementComplete function. Note: This implementation won't work as expected*/} {list.length<maxStatements ? <StatementForm initialStatement={ideaList[list.length]} onStatementComplete={onStatementComplete}/> :<button onClick={reset}>Reset</button> } </div>; } // Our child StatementForm component. This accepts the example statement for its initial state and the onStatementComplete function function StatementForm({initialStatement,onStatementComplete}){ // We hold the current state of the input, and set the default using initialStatement prop const [statement,setStatement] = useState(initialStatement); return <div> {/*On submit we prevent default and fire the onStatementComplete function received via props*/} <form onSubmit={(e)=>{e.preventDefault(); onStatementComplete(statement)}}> <label htmlFor="statement-input">What are you thankful for today?</label><br/> {/* Our controlled input below*/} <input id="statement-input" onChange={(e)=>setStatement(e.target.value)} value={statement} type="text"/> <input type="submit"/> </form> </div> } CodePen Embed Fallback

There’s a problem with this: each time we submit a completed statement, the input incorrectly holds onto the submitted note in the textbox. We want to replace it with an example statement from our list.

Even though we’re passing down a different example string every time, the child remembers the old state and our newer prop is ignored. You could potentially check whether the props have changed on every render in a useEffect, and then reset the state if they have. But that can cause bugs when different parts of your data use the same values and you want to force the child state to reset even though the prop remains the same.
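For reference, a sketch of that useEffect-based reset might look something like the code below. This is the workaround being warned against, not a recommendation, and it assumes the same StatementForm props as the example above:

import { useState, useEffect } from 'react';

function StatementForm({ initialStatement, onStatementComplete }) {
  const [statement, setStatement] = useState(initialStatement);

  // Throw the draft away whenever the example statement prop changes.
  // This breaks down when two consecutive items happen to share the same prop value.
  useEffect(() => {
    setStatement(initialStatement);
  }, [initialStatement]);

  return (
    <form onSubmit={(e) => { e.preventDefault(); onStatementComplete(statement); }}>
      <input value={statement} onChange={(e) => setStatement(e.target.value)} type="text" />
      <input type="submit" />
    </form>
  );
}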

The solution

If you need a child component where the parent needs the ability to reset the child on demand, there is a way to do it: it’s by changing the key prop on the child.

You might have seen this special key prop from when you’re rendering elements based on an array and React throws a warning asking you to provide a key for each element. Changing the key of a child element ensures React creates a brand new version of the element. It’s a way of telling React that you are rendering a conceptually different item using the same component.

Let’s add a key prop to our child component. The value is the index we’re about to fill with our statement:

<StatementForm key={list.length} initialStatement={ideaList[list.length]} onStatementComplete={onStatementComplete}/>

Here’s what this looks like in our list app:

CodePen Embed Fallback

Note the only thing that changed here is that the child component now has a key prop based on the array index we’re about to fill. Yet, the behavior of the component has completely changed.

Now each time we submit and finish writing our statement, the old state in the child component gets thrown away and replaced with the example statement.

Buggy code #3: Stale closure bugs

This is a common issue with React hooks. There’s previously been a CSS-Tricks article about dealing with stale props and states in React’s functional components.

Let’s take a look at a few situations where you might run into trouble. The first crops up when using useEffect. If we’re doing anything asynchronous inside of useEffect, we can get into trouble using old state or props.

Here’s an example. We need to increment a count every second. We set it up on the first render with a useEffect, providing a closure that increments the count as the first argument, and an empty array as the second argument. We’ll give it the empty array as we don’t want React to restart the interval on every render.

function Counter() { let [count, setCount] = useState(0); useEffect(() => { let id = setInterval(() => { setCount(count + 1); }, 1000); return () => clearInterval(id); },[]); return <h1>{count}</h1>; } CodePen Embed Fallback

Oh no! The count gets incremented to 1 but never changes after that! Why is this happening?

It’s to do with two things:

  • the way closures work in JavaScript, and
  • the empty dependency array we passed as the second argument to useEffect.

Having a look at the MDN docs on closures, we can see:

A closure is the combination of a function and the lexical environment within which that function was declared. This environment consists of any local variables that were in-scope at the time the closure was created.

The “lexical environment” in which our useEffect closure is declared is inside our Counter React component. The local variable we’re interested in is count, which is zero at the time of the declaration (the first render).

The problem is, this closure is never declared again. If the count is zero at the time of declaration, it will always be zero. Each time the interval fires, it’s running a function that starts with a count of zero and increments it to 1.

So how might we get the function declared again? This is where the second argument of the useEffect call comes in. We thought we were extremely clever only starting off the interval once by using the empty array, but in doing so we shot ourselves in the foot. If we had left out this argument, the closure inside useEffect would get declared again with a new count every time.

The way I like to think about it is that the useEffect dependency array does two things:

  • It will fire the useEffect function when the dependency changes.
  • It will also redeclare the closure with the updated dependency, keeping the closure safe from stale state or props.

In fact, there’s even a lint rule to keep your useEffect instances safe from stale state and props by making sure you add the right dependencies to the second argument.
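As a rough sketch of what that lint rule would push us toward (not the article’s final solution), adding count to the dependency array redeclares the closure with fresh state, so the counter keeps climbing, but the interval is cleared and recreated every time count changes:

import { useState, useEffect } from 'react';

function Counter() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    const id = setInterval(() => {
      setCount(count + 1); // count is fresh here because the effect re-runs when it changes
    }, 1000);
    return () => clearInterval(id);
  }, [count]);

  return <h1>{count}</h1>;
}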

But we don’t actually want to reset our interval every time the component gets rendered either. How do we solve this problem then?

The solution

Again, there are multiple solutions to our problem here. Let’s start with the easiest: not using the count state at all and instead passing a function into our setCount call:

function Counter() { let [count, setCount] = useState(0); useEffect(() => { let id = setInterval(() => { setCount(prevCount => prevCount + 1); }, 1000); return () => clearInterval(id); },[]); return <h1>{count}</h1>; }

That was easy. Another option is to use the useRef hook like this to keep a mutable reference of the count:

function Counter() { let [count, setCount] = useState(0); const countRef = useRef(count) function updateCount(newCount){ setCount(newCount); countRef.current = newCount; } useEffect(() => { let id = setInterval(() => { updateCount(countRef.current + 1); }, 1000); return () => clearInterval(id); },[]); return <h1>{count}</h1>; } ReactDOM.render(<Counter/>,document.getElementById("root")) CodePen Embed Fallback

To go more in depth on using intervals and hooks you can take a look at this article about creating a useInterval in React by Dan Abramov, who is one of the React core team members. He takes a different route where, instead of holding the count in a ref, he places the entire closure in a ref.

To go more in depth on useEffect you can have a look at his post on useEffect.

More stale closure bugs

But stale closures won’t just appear in useEffect. They can also turn up in event handlers and other closures inside your React components. Let’s have a look at a React component with a stale event handler; we’ll create a scroll progress bar that does the following:

  • increases its width along the screen as the user scrolls
  • starts transparent and becomes more and more opaque as the user scrolls
  • provides the user with a button that randomizes the color of the scroll bar

We’re going to leave the progress bar outside of the React tree and update it in the event handler. Here’s our buggy implementation:

<body> <div id="root"></div> <div id="progress"></div> </body> function Scroller(){ // We'll hold the scroll position in one state const [scrollPosition, setScrollPosition] = useState(window.scrollY); // And the current color in another const [color,setColor] = useState({r:200,g:100,b:100}); // We assign our scroll listener on the first render useEffect(()=>{ document.addEventListener("scroll",handleScroll); return ()=>{document.removeEventListener("scroll",handleScroll);} },[]); // A function to generate a random color. To make sure the contrast is strong enough // each value has a minimum value of 100 function onColorChange(){ setColor({r:100+Math.random()*155,g:100+Math.random()*155,b:100+Math.random()*155}); } // This function gets called on the scroll event function handleScroll(e){ // First we get the value of how far down we've scrolled const scrollDistance = document.body.scrollTop || document.documentElement.scrollTop; // Now we grab the height of the entire document const documentHeight = document.documentElement.scrollHeight - document.documentElement.clientHeight; // And use these two values to figure out how far down the document we are const percentAlong = (scrollDistance / documentHeight); // Then we grab the progress element and update its width const progress = document.getElementById("progress"); progress.style.width = `${percentAlong*100}%`; // Here's where our bug is. Resetting the color here will mean the color will always // be using the original state and never get updated progress.style.backgroundColor = `rgba(${color.r},${color.g},${color.b},${percentAlong})`; setScrollPosition(percentAlong); } return <div className="scroller" style={{backgroundColor:`rgb(${color.r},${color.g},${color.b})`}}> <button onClick={onColorChange}>Change color</button> <span class="percent">{Math.round(scrollPosition* 100)}%</span> </div> } ReactDOM.render(<Scroller/>,document.getElementById("root")) CodePen Embed Fallback

Our bar gets wider and increasingly more opaque as the page scrolls. But if you click the change color button, our randomized colors are not affecting the progress bar. We’re getting this bug because the closure is affected by component state, and this closure is never being re-declared so we only get the original value of the state and no updates.

You can see how setting up closures that call external APIs using React state, or component props might give you grief if you’re not careful.

The solution

Again, there are multiple ways to fix this problem. We could keep the color state in a mutable ref which we could later use in our event handler:

const [color,setColor] = useState({r:200,g:100,b:100}); const colorRef = useRef(color); function onColorChange(){ const newColor = {r:100+Math.random()*155,g:100+Math.random()*155,b:100+Math.random()*155}; setColor(newColor); colorRef.current=newColor; progress.style.backgroundColor = `rgba(${newColor.r},${newColor.g},${newColor.b},${scrollPosition})`; } CodePen Embed Fallback

This works well enough but it doesn’t feel ideal. You may need to write code like this if you’re dealing with third-party libraries and you can’t find a way to pull their API into your React tree. But by keeping one of our elements out of the React tree and updating it inside of our event handler, we’re swimming against the tide.

This is a simple fix though, as we’re only dealing with the DOM API. An easy way to refactor this is to include the progress bar in our React tree and render it in JSX allowing it to reference the component’s state. Now we can use the event handling function purely for updating state.

function Scroller(){ const [scrollPosition, setScrollPosition] = useState(window.scrollY); const [color,setColor] = useState({r:200,g:100,b:100}); useEffect(()=>{ document.addEventListener("scroll",handleScroll); return ()=>{document.removeEventListener("scroll",handleScroll);} },[]); function onColorChange(){ const newColor = {r:100+Math.random()*155,g:100+Math.random()*155,b:100+Math.random()*155}; setColor(newColor); } function handleScroll(e){ const scrollDistance = document.body.scrollTop || document.documentElement.scrollTop; const documentHeight = document.documentElement.scrollHeight - document.documentElement.clientHeight; const percentAlong = (scrollDistance / documentHeight); setScrollPosition(percentAlong); } return <> <div class="progress" id="progress" style={{backgroundColor:`rgba(${color.r},${color.g},${color.b},${scrollPosition})`,width: `${scrollPosition*100}%`}}></div> <div className="scroller" style={{backgroundColor:`rgb(${color.r},${color.g},${color.b})`}}> <button onClick={onColorChange}>Change color</button> <span class="percent">{Math.round(scrollPosition * 100)}%</span> </div> </> } CodePen Embed Fallback

That feels better. Not only have we removed the chance for our event handler to get stale, we’ve also converted our progress bar into a self-contained component which takes advantage of the declarative nature of React.

Also, for a scroll indicator like this, you might not even need JavaScript — take a look at the up-and-coming @scroll-timeline CSS feature or an approach using a gradient from Chris’ book on the greatest CSS tricks!

Wrapping up

We’ve had a look at three different ways you can create bugs in your React applications and some ways to fix them. It can be easy to look at simple counter examples that follow a happy path and don’t show the subtleties in the APIs that might cause problems.

If you still find yourself needing to build a stronger mental model of what your React code is doing, here’s a list of resources which can help:

The post Three Buggy React Code Examples and How to Fix Them appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

How to Build a Full-Stack Mobile Application With Flutter, Fauna, and GraphQL

Css Tricks - Wed, 08/04/2021 - 11:23pm

(This is a sponsored post.)

Flutter is Google’s UI framework used to create flexible, expressive cross-platform mobile applications. It is one of the fastest-growing frameworks for mobile app development. On the other hand, Fauna is a transactional, developer-friendly serverless database that supports native GraphQL. Flutter + Fauna is a match made in Heaven. If you are looking to build and ship a feature-rich full-stack application in record time, Flutter and Fauna are the right tools for the job. In this article, we will walk you through building your very first Flutter application with a Fauna and GraphQL back-end.

You can find the complete code for this article on GitHub.

Learning objective

By the end of this article, you should know how to:

  1. set up a Fauna instance,
  2. compose GraphQL schema for Fauna,
  3. set up GraphQL client in a Flutter app, and
  4. perform queries and mutations against Fauna GraphQL back-end.

Fauna vs. AWS Amplify vs. Firebase: What problems does Fauna solve? How is it different from other serverless solutions? If you are new to Fauna and would like to learn more about how Fauna compares to other solutions, I recommend reading this article.

What are we building?

We will be building a simple mobile application that will allow users to add, delete and update their favorite characters from movies and shows.

Setting up Fauna

Head over to fauna.com and create a new account. Once logged in, you should be able to create a new database.

Give a name to your database. I am going to name mine flutter_demo. Next, we can select a region group. For this demo, we will choose classic. Fauna is a globally distributed serverless database. It is the only database that supports low-latency read and write access from anywhere. Think of it as a CDN (Content Delivery Network), but for your database. To learn more about region groups, follow this guide.

Generating an admin key

Once the database is created, head over to the security tab. Click on the new key button and create a new key for your database. Keep this key secure, as we need it for our GraphQL operations.

We will be creating an admin key for our database. Keys with an admin role are used for managing their associated database, including the database access providers, child databases, documents, functions, indexes, keys, tokens, and user-defined roles. You can learn more about Fauna’s various security keys and access roles in the following link.

Compose a GraphQL schema

We will be building a simple app that will allow the users to add, update, and delete their favorite TV characters.

Creating a new Flutter project

Let’s create a new Flutter project by running the following command.

flutter create my_app

Inside the project directory, we will create a new file called graphql/schema.graphql.

In the schema file, we will define the structure of our collection. Collections in Fauna are similar to tables in SQL. We only need one collection for now. We will call it Character.

### schema.graphql type Character { name: String! description: String! picture: String } type Query { listAllCharacters: [Character] }

As you can see above, we defined a type called Character with several properties (i.e., name, description, picture, etc.). Think of properties as columns in a SQL database, or key-value pairs in a NoSQL database. We have also defined a Query. This query will return a list of the characters.

Now let’s go back to Fauna dashboard. Click on GraphQL and click on import schema to upload our schema to Fauna.

Once the importing is done, we will see that Fauna has generated the GraphQL queries and mutations.

Don’t like auto-generated GraphQL? Want more control over your business logic? In that case, Fauna allows you to define your custom GraphQL resolvers. To learn more, follow this link.

Setup GraphQL client in Flutter app

Let’s open up our pubspec.yaml file and add the required dependencies.

... dependencies: graphql_flutter: ^4.0.0-beta hive: ^1.3.0 flutter: sdk: flutter ...

We added two dependencies here. graphql_flutter is a GraphQL client library for Flutter. It brings all the modern features of GraphQL clients into one easy-to-use package. We also added the hive package as a dependency. Hive is a lightweight key-value database written in pure Dart for local storage. We are using Hive to cache our GraphQL queries.

Next, we will create a new file lib/client_provider.dart. We will create a provider class in this file that will contain our Fauna configuration.

To connect to Fauna’s GraphQL API, we first need to create a GraphQLClient. A GraphQLClient requires a cache and a link to be initialized. Let’s take a look at the code below.

// lib/client_provider.dart import 'package:graphql_flutter/graphql_flutter.dart'; import 'package:flutter/material.dart'; ValueNotifier<GraphQLClient> clientFor({ @required String uri, String subscriptionUri, }) { final HttpLink httpLink = HttpLink( uri, ); final AuthLink authLink = AuthLink( getToken: () async => 'Bearer fnAEPAjy8QACRJssawcwuywad2DbB6ssrsgZ2-2', ); Link link = authLink.concat(httpLink); return ValueNotifier<GraphQLClient>( GraphQLClient( cache: GraphQLCache(store: HiveStore()), link: link, ), ); }

In the code above, we created a ValueNotifier to wrap the GraphQLClient. Notice that we configured an AuthLink and added the admin key from Fauna as part of the bearer token. Here I have hardcoded the admin key. However, in a production application, we must avoid hard-coding any security keys from Fauna.

There are several ways to store secrets in Flutter application. Please take a look at this blog post for reference.

We want to be able to call Query and Mutation from any widget of our application. To do so, we need to wrap our widgets with the GraphQLProvider widget.

// lib/client_provider.dart .... /// Wraps the root application with the `graphql_flutter` client. /// We use the cache for all state management. class ClientProvider extends StatelessWidget { ClientProvider({ @required this.child, @required String uri, }) : client = clientFor( uri: uri, ); final Widget child; final ValueNotifier<GraphQLClient> client; @override Widget build(BuildContext context) { return GraphQLProvider( client: client, child: child, ); } }

Next, we go to our main.dart file and wrap our main widget with the ClientProvider widget. Let’s take a look at the code below.

// lib/main.dart ... void main() async { await initHiveForFlutter(); runApp(MyApp()); } final graphqlEndpoint = 'https://graphql.fauna.com/graphql'; class MyApp extends StatelessWidget { @override Widget build(BuildContext context) { return ClientProvider( uri: graphqlEndpoint, child: MaterialApp( title: 'My Character App', debugShowCheckedModeBanner: false, initialRoute: '/', routes: { '/': (_) => AllCharacters(), '/new': (_) => NewCharacter(), } ), ); } }

At this point, all our downstream widgets will be able to run queries and mutations and interact with the GraphQL API.

Application pages

Demo applications should be simple and easy to follow. Let’s go ahead and create a simple list widget that will show the list of all characters. Let’s create a new file lib/screens/character-list.dart. In this file, we will write a new widget called AllCharacters.

// lib/screens/character-list.dart class AllCharacters extends StatelessWidget { const AllCharacters({Key key}) : super(key: key); @override Widget build(BuildContext context) { return Scaffold( body: CustomScrollView( slivers: [ SliverAppBar( pinned: true, snap: false, floating: true, expandedHeight: 160.0, title: Text( 'Characters', style: TextStyle( fontWeight: FontWeight.w400, fontSize: 36, ), ), actions: <Widget>[ IconButton( padding: EdgeInsets.all(5), icon: const Icon(Icons.add_circle), tooltip: 'Add new entry', onPressed: () { Navigator.pushNamed(context, '/new'); }, ), ], ), SliverList( delegate: SliverChildListDelegate([ Column( children: [ for (var i = 0; i < 10; i++) CharacterTile() ], ) ]) ) ], ), ); } } // Character-tile.dart class CharacterTile extends StatefulWidget { CharacterTile({Key key}) : super(key: key); @override _CharacterTileState createState() => _CharacterTileState(); } class _CharacterTileState extends State<CharacterTile> { @override Widget build(BuildContext context) { return Container( child: Text("Character Tile"), ); } }

As you can see in the code above, we have a for loop to populate the list with some fake data. Eventually, we will make a GraphQL query to our Fauna backend and fetch all the characters from the database. Before we do that, let’s try to run our application as it is. We can run our application with the following command

flutter run

At this point we should be able to see the following screen.

Performing queries and mutations

Now that we have some basic widgets, we can go ahead and hook up GraphQL queries. Instead of hardcoded strings, we would like to get all the characters from our database and view them in the AllCharacters widget.

Let’s go back to Fauna’s GraphQL playground. Notice we can run the following query to list all the characters.

query ListAllCharacters { listAllCharacters(_size: 100) { data { _id name description picture } after } }

To perform this query from our widget we will need to make some changes to it.

import 'package:flutter/material.dart'; import 'package:graphql_flutter/graphql_flutter.dart'; import 'package:todo_app/screens/Character-tile.dart'; String readCharacters = """ query ListAllCharacters { listAllCharacters(_size: 100) { data { _id name description picture } after } } """; class AllCharacters extends StatelessWidget { const AllCharacters({Key key}) : super(key: key); @override Widget build(BuildContext context) { return Scaffold( body: CustomScrollView( slivers: [ SliverAppBar( pinned: true, snap: false, floating: true, expandedHeight: 160.0, title: Text( 'Characters', style: TextStyle( fontWeight: FontWeight.w400, fontSize: 36, ), ), actions: <Widget>[ IconButton( padding: EdgeInsets.all(5), icon: const Icon(Icons.add_circle), tooltip: 'Add new entry', onPressed: () { Navigator.pushNamed(context, '/new'); }, ), ], ), SliverList( delegate: SliverChildListDelegate([ Query(options: QueryOptions( document: gql(readCharacters), // graphql query we want to perform pollInterval: Duration(seconds: 120), // refetch interval ), builder: (QueryResult result, { VoidCallback refetch, FetchMore fetchMore }) { if (result.isLoading) { return Text('Loading'); } return Column( children: [ for (var item in result.data['listAllCharacters']['data']) CharacterTile(Character: item, refetch: refetch), ], ); }) ]) ) ], ), ); } }

First of all, we defined the query string for getting all characters from the database. We have wrapped our list widget with a Query widget from graphql_flutter.

Feel free to take a look at the official documentation for the graphql_flutter library.

In the query options argument, we provide the GraphQL query string itself. We pass a Duration for the pollInterval argument, which defines how often we would like to refetch data from our backend. The widget also has a standard builder function, which we can use to pass the query result, a refetch callback, and a fetchMore callback down the widget tree.

Next, I am going to update the CharacterTile widget to display the character data on screen.

// lib/screens/character-tile.dart ... class CharacterTile extends StatelessWidget { final Character; final VoidCallback refetch; final VoidCallback updateParent; const CharacterTile({ Key key, @required this.Character, @required this.refetch, this.updateParent, }) : super(key: key); @override Widget build(BuildContext context) { return InkWell( onTap: () { }, child: Padding( padding: const EdgeInsets.all(10), child: Row( children: [ Container( height: 90, width: 90, decoration: BoxDecoration( color: Colors.amber, borderRadius: BorderRadius.circular(15), image: DecorationImage( fit: BoxFit.cover, image: NetworkImage(Character['picture']) ) ), ), SizedBox(width: 10), Expanded( child: Column( mainAxisAlignment: MainAxisAlignment.center, crossAxisAlignment: CrossAxisAlignment.start, children: [ Text( Character['name'], style: TextStyle( color: Colors.black87, fontWeight: FontWeight.bold, ), ), SizedBox(height: 5), Text( Character['description'], style: TextStyle( color: Colors.black87, ), maxLines: 2, ), ], ) ) ], ), ), ); } }

Adding new data

We can add new characters to our database by running the mutation below.

mutation CreateNewCharacter($data: CharacterInput!) { createCharacter(data: $data) { _id name description picture } }

To run this mutation from our widget, we can use the Mutation widget from the graphql_flutter library. Let’s create a new widget with a simple form for the users to interact with and input data. Once the form is submitted, the createCharacter mutation will be called.

// lib/screens/new.dart ... String addCharacter = """ mutation CreateNewCharacter(\$data: CharacterInput!) { createCharacter(data: \$data) { _id name description picture } } """; class NewCharacter extends StatelessWidget { const NewCharacter({Key key}) : super(key: key); @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: const Text('Add New Character'), ), body: AddCharacterForm() ); } } class AddCharacterForm extends StatefulWidget { AddCharacterForm({Key key}) : super(key: key); @override _AddCharacterFormState createState() => _AddCharacterFormState(); } class _AddCharacterFormState extends State<AddCharacterForm> { String name; String description; String imgUrl; @override Widget build(BuildContext context) { return Form( child: Padding( padding: EdgeInsets.all(20), child: Column( crossAxisAlignment: CrossAxisAlignment.start, children: [ TextField( decoration: const InputDecoration( icon: Icon(Icons.person), labelText: 'Name *', ), onChanged: (text) { name = text; }, ), TextField( decoration: const InputDecoration( icon: Icon(Icons.post_add), labelText: 'Description', ), minLines: 4, maxLines: 4, onChanged: (text) { description = text; }, ), TextField( decoration: const InputDecoration( icon: Icon(Icons.image), labelText: 'Image Url', ), onChanged: (text) { imgUrl = text; }, ), SizedBox(height: 20), Mutation( options: MutationOptions( document: gql(addCharacter), onCompleted: (dynamic resultData) { print(resultData); name = ''; description = ''; imgUrl = ''; Navigator.of(context).push( MaterialPageRoute(builder: (context) => AllCharacters()) ); }, ), builder: ( RunMutation runMutation, QueryResult result, ) { return Center( child: ElevatedButton( child: const Text('Submit'), onPressed: () { runMutation({ 'data': { "picture": imgUrl, "name": name, "description": description, } }); }, ), ); } ) ], ), ), ); } }

As you can see from the code above, the Mutation widget works very similarly to the Query widget. Additionally, the Mutation widget provides us with an onCompleted callback, which receives the updated result from the database once the mutation is completed.

Removing data

To remove a character from our database we can run the deleteCharacter mutation. We can add this mutation function to our CharacterTile and fire it when a button is pressed.

// lib/screens/character-tile.dart ... String deleteCharacter = """ mutation DeleteCharacter(\$id: ID!) { deleteCharacter(id: \$id) { _id name } } """; class CharacterTile extends StatelessWidget { final Character; final VoidCallback refetch; final VoidCallback updateParent; const CharacterTile({ Key key, @required this.Character, @required this.refetch, this.updateParent, }) : super(key: key); @override Widget build(BuildContext context) { return InkWell( onTap: () { showModalBottomSheet( context: context, builder: (BuildContext context) { print(Character['picture']); return Mutation( options: MutationOptions( document: gql(deleteCharacter), onCompleted: (dynamic resultData) { print(resultData); this.refetch(); }, ), builder: ( RunMutation runMutation, QueryResult result, ) { return Container( height: 400, padding: EdgeInsets.all(30), child: Center( child: Column( mainAxisAlignment: MainAxisAlignment.center, mainAxisSize: MainAxisSize.min, children: <Widget>[ Text(Character['description']), ElevatedButton( child: Text('Delete Character'), onPressed: () { runMutation({ 'id': Character['_id'], }); Navigator.pop(context); }, ), ], ), ), ); } ); } ); }, child: Padding( padding: const EdgeInsets.all(10), child: Row( children: [ Container( height: 90, width: 90, decoration: BoxDecoration( color: Colors.amber, borderRadius: BorderRadius.circular(15), image: DecorationImage( fit: BoxFit.cover, image: NetworkImage(Character['picture']) ) ), ), SizedBox(width: 10), Expanded( child: Column( mainAxisAlignment: MainAxisAlignment.center, crossAxisAlignment: CrossAxisAlignment.start, children: [ Text( Character['name'], style: TextStyle( color: Colors.black87, fontWeight: FontWeight.bold, ), ), SizedBox(height: 5), Text( Character['description'], style: TextStyle( color: Colors.black87, ), maxLines: 2, ), ], ) ) ], ), ), ); } }

Editing data

Editing data works the same way as adding and deleting; it is just another mutation in the GraphQL API. We can create an edit character form widget similar to the new character form widget. The only difference is that the edit form will run the updateCharacter mutation. For editing, I created a new widget, lib/screens/edit.dart. Here’s the code for this widget.

// lib/screens/edit.dart String editCharacter = """ mutation EditCharacter(\$name: String!, \$id: ID!, \$description: String!, \$picture: String!) { updateCharacter(data: { name: \$name description: \$description picture: \$picture }, id: \$id) { _id name description picture } } """; class EditCharacter extends StatelessWidget { final Character; const EditCharacter({Key key, this.Character}) : super(key: key); @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: const Text('Edit Character'), ), body: EditFormBody(Character: this.Character), ); } } class EditFormBody extends StatefulWidget { final Character; EditFormBody({Key key, this.Character}) : super(key: key); @override _EditFormBodyState createState() => _EditFormBodyState(); } class _EditFormBodyState extends State<EditFormBody> { String name; String description; String picture; @override Widget build(BuildContext context) { return Container( child: Padding( padding: const EdgeInsets.all(8.0), child: Column( crossAxisAlignment: CrossAxisAlignment.start, children: [ TextFormField( initialValue: widget.Character['name'], decoration: const InputDecoration( icon: Icon(Icons.person), labelText: 'Name *', ), onChanged: (text) { name = text; } ), TextFormField( initialValue: widget.Character['description'], decoration: const InputDecoration( icon: Icon(Icons.person), labelText: 'Description', ), minLines: 4, maxLines: 4, onChanged: (text) { description = text; } ), TextFormField( initialValue: widget.Character['picture'], decoration: const InputDecoration( icon: Icon(Icons.image), labelText: 'Image Url', ), onChanged: (text) { picture = text; }, ), SizedBox(height: 20), Mutation( options: MutationOptions( document: gql(editCharacter), onCompleted: (dynamic resultData) { print(resultData); Navigator.of(context).push( MaterialPageRoute(builder: (context) => AllCharacters()) ); }, ), builder: ( RunMutation runMutation, QueryResult result, ) { print(result); return Center( child: ElevatedButton( child: const Text('Submit'), onPressed: () { runMutation({ 'id': widget.Character['_id'], 'name': name != null ? name : widget.Character['name'], 'description': description != null ? description : widget.Character['description'], 'picture': picture != null ? picture : widget.Character['picture'], }); }, ), ); } ), ] ) ), ); } }

You can take a look at the complete code for this article below.

GitHub

Where to go from here

The main intention of this article is to get you up and running with Flutter and Fauna. We have only scratched the surface here. The Fauna ecosystem provides a complete, auto-scaling, developer-friendly backend as a service for your mobile applications. If your goal is to ship a production-ready cross-platform mobile application in record time, Fauna and Flutter are the way to go.

I highly recommend checking out Fauna’s official documentation site. If you are interested in learning more about GraphQL clients for Dart/Flutter, check out the official GitHub repo for graphql_flutter.

Happy hacking and see you next time.

The post How to Build a Full-Stack Mobile Application With Flutter, Fauna, and GraphQL appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.
