Front End Web Development

Links on React and JavaScript II

Css Tricks - Fri, 10/01/2021 - 11:09am

The post Links on React and JavaScript II appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Using the platform

Css Tricks - Fri, 10/01/2021 - 8:50am

While I’m a front-end developer at heart, I’ve rarely had the luxury of focusing on it full time. I’ve been dipping in and out of JavaScript, never fully caught up, always trying to navigate the ecosystem all over again each time a project came up. And framework fatigue is real!

So, instead of finally getting into Rollup to replace an ancient Browserify build on one of our codebases (which could also really use that upgrade from Polymer to LitElement…), I decided to go “stackless”.

Elise Hein

I’m certainly not dogmatic about it, but I think it’s a real win if you can pull off a project with literally zero build process. It feels good while working on it and feels very good when you come back to it months/years later: you just pick up and go.

Imports, yo — they go a long way:

Native support for modularity is the most important step towards a build-free codebase. If I had access to only one ES6 feature for the rest of my life, I’m confident that modules would take me most of the way there when it comes to well-structured native JavaScript.

That last sentence in the post is a stinger. I’d say we’re not far off.

Direct Link to ArticlePermalink

The post Using the platform appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Working With Built-in GraphQL Directives

Css Tricks - Fri, 10/01/2021 - 4:35am

Directives are one of GraphQL’s best — and most unspoken — features.

Let’s explore working with GraphQL’s built-in schema and operation directives that all spec-compliant GraphQL APIs must implement. They are extremely useful if you are working on a dynamic front-end because they give you the control to reduce the response payload depending on what the user is interacting with.

An overview of directives

Let’s imagine an application where you have the option to customize the columns shown in a table. If you hide two or three columns then there’s really no need to fetch the data for those cells. With GraphQL directives, though, we can choose to include or skip those fields.

The GraphQL specification defines what directives are and where they can be used. Specifically, directives can be used by consumer operations (such as a query), and by the underlying schema itself. Or, in simple terms, directives are either based on schema or operation. Schema directives are used when the schema is generated, and operation directives run when a query is executed.

In short, directives can be used for the purposes of metadata, runtime hints, runtime parsing (like returning dates in a specific format), and extended descriptions (like deprecated).

Four kinds of directives

GraphQL boasts four main directives defined in the specification, one of which is still unreleased and only part of the working draft.

  • @include
  • @skip
  • @deprecated
  • @specifiedBy (working draft)

If you’re following GraphQL closely, you will also notice two additional directives were merged to the JavaScript implementation that you can try today — @stream and @defer. These aren’t part of the official spec just yet while the community tests them in real world applications.

@include

The @include directive, true to its name, allows us to conditionally include fields by passing an if argument. Since it’s conditional, it makes sense to use a variable in the query to check for truthiness.

For example, if the variable in the following example is truthy, then the name field will be included in the query response.

query getUsers($showName: Boolean) {
  users {
    id
    name @include(if: $showName)
  }
}

Conversely, we can choose not to include the field by passing the variable $showName as false along with the query. We can also specify a default value for the $showName variable so there’s no need to pass it with every request:

query getUsers($showName: Boolean = true) {
  users {
    id
    name @include(if: $showName)
  }
}

@skip

We can express the same sort of thing using the @skip directive instead. If the value is truthy, then it will, as you might expect, skip that field.

query getUsers($hideName: Boolean) {
  users {
    id
    name @skip(if: $hideName)
  }
}

While this works great for individual fields, there are times we may want to include or skip more than one field. We could duplicate the usage of @include and @skip across multiple lines like this:

query getUsers($includeFields: Boolean) {
  users {
    id
    name @include(if: $includeFields)
    email @include(if: $includeFields)
    role @include(if: $includeFields)
  }
}

Both the @skip and @include directives can be used on fields, fragment spreads, and inline fragments which means we can do something else, like this instead with inline fragments:

query getUsers($excludeFields: Boolean) {
  users {
    id
    ... on User @skip(if: $excludeFields) {
      name
      email
      role
    }
  }
}

If a fragment is already defined, we can also use @skip and @include when we spread a fragment into the query:

fragment User on User {
  name
  email
  role
}

query getUsers($excludeFields: Boolean) {
  users {
    id
    ...User @skip(if: $excludeFields)
  }
}

@deprecated

The @deprecated directive appears only in the schema and isn’t something a user would provide as part of a query like we’ve seen above. Instead, the @deprecated directive is specified by the developer maintaining the GraphQL API schema.

As a user, if we try to fetch a field that has been deprecated in the schema, we’ll receive a warning like this that provides contextual help.

In this example, the title field has been marked deprecated and the directive provides a helpful hint to replace it.

To mark a field deprecated, we need to use the @deprecated directive within the schema definition language (SDL), passing a reason inside the arguments, like this:

type User {
  id: ID!
  title: String @deprecated(reason: "Use name instead")
  name: String!
  email: String!
  role: Role
}

If we paired this with the @include directive, we could conditionally fetch the deprecated field based on a query variable:

fragment User on User {
  title @include(if: $includeDeprecatedFields)
  name
  email
  role
}

query getUsers($includeDeprecatedFields: Boolean! = false) {
  users {
    id
    ...User
  }
}

@specifiedBy

@specifiedBy is the fourth of the directives and is currently part of the working draft. It’s set to be used by custom scalar implementations and takes a url argument that should point to a specification for the scalar.

For example, if we add a custom scalar for email address, we will want to pass the URL to the specification for the regex we use as part of that. Using the last example and the proposal defined in RFC #822, a scalar for EmailAddress would be defined in the schema like so:

scalar EmailAddress @specifiedBy(url: "https://www.w3.org/Protocols/rfc822/")

It’s recommended that custom directives have a prefixed name to prevent collisions with other added directives. If you’re looking for an example custom directive, and how it’s created, take a look at GraphQL Public Schema. It is a custom GraphQL directive that has both code-first and schema-first support for annotating which parts of an API can be consumed publicly.
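For a rough sense of what declaring a prefixed custom directive looks like, here’s a minimal SDL sketch (the @myorg_public name and the type are made up for illustration):

directive @myorg_public on FIELD_DEFINITION

type User {
  id: ID! @myorg_public
  name: String! @myorg_public
  email: String!
}

Here, only the fields annotated with @myorg_public would be treated as part of the public API.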

Wrapping up

So that’s a high-level look at GraphQL directives. Again, I believe directives are a sort of unsung hero that gets overshadowed by other GraphQL features. We already have a lot of control with the GraphQL schema, and directives give us even more fine-grained control to get exactly what we want out of queries. That’s the sort of efficiency that makes a GraphQL API so quick and ultimately more friendly to work with.

And if you’re building a GraphQL API, then be sure to expose these directives in the introspection query. Having them there not only gives developers the benefit of extra control, but an overall better developer experience. Just think how helpful it would be to properly @deprecate fields so developers know what to do without ever needing to leave the code. That’s powerful in and of itself.

Header graphic courtesy of Isabel Gonçalves on Unsplash

The post Working With Built-in GraphQL Directives appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Creating the Perfect Commit in Git

Css Tricks - Thu, 09/30/2021 - 4:29am

This article is part of our “Advanced Git” series. Be sure to follow Tower on Twitter or sign up for the Tower newsletter to hear about the next articles!

A commit in Git can be one of two things:

  • It can be a jumbled assortment of changes from all sorts of topics: some lines of code for a bugfix, a stab at rewriting an old module, and a couple of new files for a brand new feature.
  • Or, with a little bit of care, it can be something that helps us stay on top of things. It can be a container for related changes that belong to one and only one topic, and thereby make it easier for us to understand what happened.

In this post, we’re talking about what it takes to produce the latter type of commit or, in other words: the “perfect” commit.

Advanced Git series:
  • Part 1: Creating the Perfect Commit in Git
    You are here!
  • Part 2: Branching Strategies in Git
  • Part 3: Better Collaboration With Pull Requests
    Coming soon!
  • Part 4: Merge Conflicts
  • Part 5: Rebase vs. Merge
  • Part 6: Interactive Rebase
  • Part 7: Cherry-Picking Commits in Git
  • Part 8: Using the Reflog to Restore Lost Commits
Why clean and granular commits matter

Is it really necessary to compose commits in a careful, thoughtful way? Can’t we just treat Git as a boring backup system? Let’s revisit our example from above one more time.

If we follow the first path – where we just cram changes into commits whenever they happen – commits lose much of their value. The separation between one commit and the next becomes arbitrary: there seems to be no reason why changes were put into one and not the other commit. Looking at these commits later, e.g. when your colleagues try to make sense of what happened in that revision, is like going through the “everything drawer” that every household has: there’s everything in here that found no place elsewhere, from crayons to thumbtacks and cash slips. It’s terribly hard to find something in these drawers!

Following the second path – where we put only things (read: changes) that belong together into the same commit – requires a bit more planning and discipline. But in the end, you and your team are rewarded with something very valuable: a clean commit history! These commits help you understand what happened. They help explain the complex changes that were made in a digestible manner.

How do we go about creating better commits?

Composing better commits

One concept is central to composing better commits in Git: the Staging Area.

The Staging Area was made exactly for this purpose: to allow developers to select changes – in a very granular way – that should be part of the next commit. And, unlike other version control systems, Git forces you to make use of this Staging Area.

Unfortunately, however, it’s still easy to ignore the tidying effect of the Staging Area: a simple git add . will take all of our current local changes and mark them for the next commit.

It’s true that this can be a very helpful and valid approach sometimes. But many times, we would be better off stopping for a second and deciding whether all of our changes really are about the same topic, or whether two or three separate commits might be a much better choice.

In most cases, it makes a lot of sense to keep commits rather smaller than larger. Focused on an individual topic (instead of two, three, or four), they tend to be much more readable.

The Staging Area allows us to carefully pick each change that should go into the next commit: 

$ git add file1.ext file2.ext

This will only mark these two files for the next commit and leave the other changes for a future commit or further edits.

This simple act of pausing and deliberately choosing what should make it into the next commit goes a long way. But we can get even more precise than that. Because sometimes, even the changes in a single file belong to multiple topics.

Let’s look at a real-life example and take a look at the exact changes in our “index.html” file. We can either use the “git diff” command or a Git desktop GUI like Tower:

Now, we can add the -p option to git add:

$ git add -p index.html

We’re instructing Git to go through this file on a “patch” level: Git takes us by the hand and walks us through all of the changes in this file. And it asks us, for each chunk, if we want to add it to the Staging Area or not:
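In the terminal, that conversation looks roughly like this (the diff content here is made up for illustration):

$ git add -p index.html
diff --git a/index.html b/index.html
--- a/index.html
+++ b/index.html
@@ -12,6 +12,7 @@
 <header>
+  <nav class="main-nav"></nav>
 </header>
(1/2) Stage this hunk [y,n,q,a,d,j,J,g,/,e,?]?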

By typing [Y] (for “yes”) for the first chunk and [N] (for “no”) for the second chunk, we can include the first part of our changes in this file in the next commit, but leave the other changes for a later time or more edits.

The result? A more granular, more precise commit that’s focused on a single topic.

Testing your code

Since we’re talking about “the perfect commit” here, we cannot ignore the topic of testing. How exactly you “test” your code can certainly vary, but the notion that tests are important isn’t new. In fact, many teams refuse to consider a piece of code completed if it’s not properly tested.

If you’re still on the fence about whether you should test your code or not, let’s debunk a couple of myths about testing:

  • “Tests are overrated”: The fact is that tests help you find bugs more quickly. Most importantly, they help you find them before something goes into production – which is when mistakes hurt the most. And finding bugs early is, without exaggeration, priceless!
  • “Tests cost valuable time”: After some time you will find that well-written tests make you write code faster. You waste less time hunting bugs and find that, more often, a well-structured test primes your thinking for the actual implementation, too.
  • “Testing is complicated”: While this might have been an argument a couple of years ago, this is now untrue. Most professional programming frameworks and languages come with extensive support for setting up, writing, and managing tests.

All in all, adding tests to your development habits is almost guaranteed to make your code base more robust. And, at the same time, they help you become a better programmer.

A valuable commit message

Version control with Git is not a fancy way of backing up your code. And, as we’ve already discussed, commits are not a dump of arbitrary changes. Commits exist to help you and your teammates understand what happened in a project. And a good commit message goes a long way to ensure this.

But what makes a good commit message?

  • A brief and concise subject line that summarizes the changes
  • A descriptive message body that explains the most important facts (and as concisely as possible)

Let’s start with the subject line: the goal is to get a brief summary of what happened. Brevity, of course, is a relative term; but the general rule of thumb is to (ideally) keep the subject under 50 characters. By the way, if you find yourself struggling to come up with something brief, this might be an indicator that the commit tackles too many topics! It could be worthwhile to take another look and see if you have to split it into multiple, separate ones.

If you close the subject with a line break and an additional empty line, Git understands that the following text is the message’s “body.” Here, you have more space to describe what happened. It helps to keep the following questions in mind, which your body text should aim to answer:

  • What changed in your project with this commit?
  • What was the reason for making this change?
  • Is there anything special to watch out for? Anything someone else should know about these changes?

If you keep these questions in mind when writing your commit message body, you are very likely to produce a helpful description of what happened. And this, ultimately, benefits your colleagues (and after some time: you) when trying to understand this commit.
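For example, a commit message along these lines (entirely made up) answers all three questions:

Add retry logic to the payment client

Requests to the payment provider occasionally fail with
transient network errors, which made checkout flaky.

Failed requests are now retried up to three times with
exponential backoff. Note: the retry count is hard-coded
for now; making it configurable is a follow-up task.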

On top of the rules I just described about the content of commit messages, many teams also care about the format: agreeing on character limits, soft or hard line wraps, etc. all help to produce better commits within a team. 

To make it easier to stick by such rules, we recently added some features to Tower, the Git desktop GUI that we make: you can now, for example, configure character counts or automatic line wraps just as you like.

A great codebase consists of great commits

Any developer will admit that they want a great code base. But there’s only one way to achieve this lofty goal: by consistently producing great commits! I hope I was able to demonstrate that (a) it’s absolutely worth pursuing this goal and (b) it’s not that hard to achieve.

If you want to dive deeper into advanced Git tools, feel free to check out my (free!) “Advanced Git Kit”: it’s a collection of short videos about topics like branching strategies, Interactive Rebase, Reflog, Submodules and much more.

Have fun creating awesome commits!

Advanced Git series:
  • Part 1: Creating the Perfect Commit in Git
    You are here!
  • Part 2: Branching Strategies in Git
  • Part 3: Better Collaboration With Pull Requests
    Coming soon!
  • Part 4: Merge Conflicts
  • Part 5: Rebase vs. Merge
  • Part 6: Interactive Rebase
  • Part 7: Cherry-Picking Commits in Git
  • Part 8: Using the Reflog to Restore Lost Commits

The post Creating the Perfect Commit in Git appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Comparing HTML Preprocessor Features

Css Tricks - Thu, 09/30/2021 - 4:00am

(This is a sponsored post.)

Of the languages that browsers speak, I’d wager that the very first one that developers decided needed some additional processing was HTML. Every single CMS in the world (aside from intentionally headless-only CMSs) is essentially an elaborate HTML processor: they take content and squoosh it together with HTML templates. There are dozens of other dedicated HTML processing languages that exist today.

The main needs of HTML processing being:

  • Compose complete HTML documents from parts
  • Template the HTML by injecting variable data

There are plenty of other features they can have, and we’ll get to that, but I think those are the biggies.

This research is brought to you by support from Frontend Masters, CSS-Tricks’ official learning partner.

Need front-end development training?

Frontend Masters is the best place to get it. They have courses on all the most important front-end technologies, from React to CSS, from Vue to D3, and beyond with Node.js and Full Stack.

Consider PHP. It’s literally a “Hypertext Preprocessor.” On this very website, I make use of PHP in order to piece together bits of templated HTML to build the pages and complete content you’re looking at now.

<h2>
  <?php the_title(); // Templating! ?>
</h2>

<?php include("metadata.php"); // Partials! ?>

In the above code, I’ve squooshed some content into an HTML template, which calls another PHP file that likely contains more templated HTML. PHP covers the two biggies for HTML processing and is available with cost-friendly hosting — I’d guess that’s a big reason why PHP-powered websites power a huge chunk of the entire internet.

But PHP certainly isn’t the only HTML preprocessor around, and it requires a server to work. There are many others, some designed specifically to run during a build process before the website is ever requested by users.

  • Pug
  • Ruby ERB
  • Markdown
  • PHP
  • Haml
  • Liquid
  • Slim
  • Go html/template
  • Handlebars
  • Mustache
  • Twig
  • Nunjucks
  • Kit
  • Sergey

Let’s go language-by-language and look at whether or not it supports certain features and how. Where possible, the preprocessor name links to the relevant docs.

Does it allow for templating?

Can you mix in data into the final HTML output?

  • Pug ✅
    - var title = "On Dogs: Man's Best Friend";
    - var author = "enlore";
    h1= title
    p Written with love by #{author}
  • ERB ✅
    <%= title %>
    <%= description %>
    <%= @logged_in_user.name %>
  • Markdown ❌
  • PHP ✅ (also has HEREDOC syntax)
    <?php echo $post.title; ?>
    <?php echo get_post_description($post.id) ?>
  • Slim ✅
    tr
      td.name = item.name
  • Haml ✅
    <h1><%= post.title %></h1>
    <div class="subhead"><%= post.subtitle %></div>
  • Liquid ✅
    Hello {{ user.name }}!
  • Go html/template ✅
    {{ .Title }}
    {{ $address }}
  • Handlebars ✅
    {{firstname}} {{lastname}}
  • Mustache ✅
    Hello {{ firstname }}!
  • Twig ✅
    {{ foo.bar }}
  • Nunjucks ✅
    <h1>{{ title }}</h1>
  • Kit ✅
    <!-- $myVar = We finish each other's sandwiches. -->
    <p> <!-- $myVar --> </p>
  • Sergey ❌

Does it do partials/includes?

Can you compose HTML from smaller parts?

ProcessorExamplePug
include includes/head.pugERB
<%= render 'includes/head' %>Markdown❌PHP
<?php include 'head.php'; ?>
<?php include_once 'meta.php'; ?>Slim⚠️
If you have access to the Ruby code, it looks like it can do it, but you have to write custom helpers.Haml✅
.content
=render 'meeting_info'Liquid✅{% render head.html %}
{% render meta.liquid %}Go html/template✅{{ partial "includes/head.html" . }}Handlebars⚠️
Only through registering a partial ahead of time.Mustache✅{{> next_more}}Twig✅{{ include('page.html', sandboxed = true) }}Nunjucks✅{% include "missing.html" ignore missing %}
{% import "forms.html" as forms %}
{{ forms.label('Username') }}Kit✅<!-- @import "someFile.kit" -->
<!-- @import "file.html" -->Sergey✅<sergey-import src="header" /> Does it do local variables with includes?

As in, can you pass data to the include/partial for it to use specifically? For example, in Liquid, you can pass a second parameter of variables for the partial to use. But in PHP or Twig, there is no such ability—they can only access global variables.

  • PHP ❌
  • ERB ✅
    <%= render(
      partial: "card",
      locals: {
        title: "Title"
      }
    ) %>
  • Markdown ❌
  • Pug ⚠️ Pug has mixins that you can put in a separate file and call. Not quite the same concept but can be used similarly.
  • Slim ❌
  • Haml ✅
    .content
      = render :partial => 'meeting_info', :locals => { :info => @info }
  • Liquid ✅
    {% render "name", my_variable: my_variable, my_other_variable: "oranges" %}
  • Go html/template ✅
    {{ partial "header/site-header.html" . }}
    (The period at the end is “variable scoping.”)
  • Handlebars ✅
    {{> myPartial parameter=favoriteNumber }}
  • Mustache ❌
  • Twig ✅
    {% include 'template.html' with {'foo': 'bar'} only %}
  • Nunjucks ✅
    {% macro field(name, value='', type='text') %}
    <div class="field">
      <input type="{{ type }}" name="{{ name }}" value="{{ value | escape }}" />
    </div>
    {% endmacro %}
  • Kit ❌
  • Sergey ❌

Does it do loops?

Sometimes you just need 100 <div>s, ya know? Or more likely, you need to loop over an array of data and output HTML for each entry. There are lots of different types of loops, but having at least one is nice and you can generally make it work for whatever you need to loop.

  • PHP ✅
    for ($i = 1; $i <= 10; $i++) {
      echo $i;
    }
  • ERB ✅
    <% for i in 0..9 do %>
      <%= @users[i].name %>
    <% end %>
  • Markdown ❌
  • Pug ✅
    for (var x = 1; x < 16; x++)
      div= x
  • Slim ✅
    - for i in (1..15)
      div #{i}
  • Haml ✅
    - (1..16).each do |i|
      %div #{i}
  • Liquid ✅
    {% for i in (1..5) %}
    {% endfor %}
  • Go html/template ✅
    {{ range $i, $sequence := (seq 5) }}
      {{ $i }}: {{ $sequence }}
    {{ end }}
  • Handlebars ✅
    {{#each myArray}}
      <div class="row"></div>
    {{/each}}
  • Mustache ✅
    {{#myArray}}
      {{name}}
    {{/myArray}}
  • Twig ✅
    {% for i in 0..10 %}
      {{ i }}
    {% endfor %}
  • Nunjucks ✅
    {% set points = [0, 1, 2, 3, 4] %}
    {% for x in points %}
      Point: {{ x }}
    {% endfor %}
  • Kit ❌
  • Sergey ❌

Does it have logic?

Mustache is famous for philosophically being “logic-less.” So sometimes it’s desirable to have a templating language that doesn’t mix in any other functionality, forcing you to deal with your business logic in another layer. Sometimes, a little logic is just what you need in a template. And actually, even Mustache has some basic logic.

  • Pug ✅
    #user
      if user.description
        h2.green Description
      else if authorised
        h2.blue Description
  • ERB ✅
    <% if show %>
    <% end %>
  • Markdown ❌
  • PHP ✅
    <?php if ($value > 10) { ?>
    <?php } ?>
  • Slim ✅
    - unless items.empty?
    And if you turn on logic-less mode:
    - article
      h1 = title
    -! article
      p Sorry, article not found
  • Haml ✅
    - if data == true
      %p true
    - else
      %p false
  • Liquid ✅
    {% if user %}
      Hello {{ user.name }}!
    {% endif %}
  • Go html/template ✅
    {{ if isset .Params "title" }}
      <h4>{{ index .Params "title" }}</h4>
    {{ end }}
  • Handlebars ✅
    {{#if author}}
      {{firstName}} {{lastName}}
    {{/if}}
  • Mustache ✅ It’s kind of ironic that Mustache calls itself “Logic-less templates”, but they do kinda have logic in the form of “inverted sections.”
    {{#repo}}
      {{name}}
    {{/repo}}
    {{^repo}}
      No repos :(
    {{/repo}}
  • Twig ✅
    {% if online == false %}
      Our website is in maintenance mode.
    {% endif %}
  • Nunjucks ✅
    {% if hungry %}
      I am hungry
    {% elif tired %}
      I am tired
    {% else %}
      I am good!
    {% endif %}
  • Kit ✅ It can output a variable if it exists, which it calls “optionals”:
    <dd class='<!-- $myVar? -->'> Page 1 </dd>
  • Sergey ❌

Does it have filters?

What I mean by filter here is a way to output content, but change it on the way out. For example, escape special characters or capitalize text.

  • Pug ⚠️ Pug thinks of filters as ways to use other languages within Pug, and doesn’t ship with any out of the box.
  • ERB ✅ Whatever Ruby has, like:
    "hello James!".upcase #=> "HELLO JAMES!"
  • Markdown ❌
  • PHP ✅
    $str = "Mary Had A Little Lamb";
    $str = strtoupper($str);
    echo $str; // Prints MARY HAD A LITTLE LAMB
  • Slim ⚠️ Private only?
  • Haml ⚠️ Very specific one for whitespace removal. Mostly for embedding other languages?
  • Liquid ✅ Lots of them, and you can use multiple.
    {{ "adam!" | capitalize | prepend: "Hello " }}
  • Go html/template ⚠️ Has a bunch of functions, many of which are filter-like.
  • Handlebars ⚠️ Double-brackets do HTML escaping (triple-brackets leave output unescaped), but otherwise, you’d have to register your own block helpers.
  • Mustache ❌
  • Twig ✅
    {% autoescape "html" %}
      {{ var }}
      {{ var|raw }} {# var won't be escaped #}
      {{ var|escape }} {# var won't be doubled-escaped #}
    {% endautoescape %}
  • Nunjucks ✅
    {% filter replace("force", "forth") %}
    may the force be with you
    {% endfilter %}
  • Kit ❌
  • Sergey ❌

Does it have math?

Sometimes math is baked right into the language. Some of these languages are built on top of other languages, and thus use that other language to do the math. Like Pug is written in JavaScript, so you can write JavaScript in Pug, which can do math.

  • PHP ✅
    <?php echo 1 + 1; ?>
  • ERB ✅
    <%= 1 + 1 %>
  • Markdown ❌
  • Pug ✅
    - const x = 1 + 1
    p= x
  • Slim ✅
    - x = 1 + 1
    p= x
  • Haml ✅
    %p= 1 + 1
  • Liquid ✅
    {{ 1 | plus: 1 }}
  • Go html/template ✅
    {{add 1 2}}
  • Handlebars ❌
  • Mustache ❌
  • Twig ✅
    {{ 1 + 1 }}
  • Nunjucks ✅
    {{ 1 + 1 }}
  • Kit ❌
  • Sergey ❌

Does it have slots / blocks?

The concept of a slot is a template that has special areas within it that are filled with content should it be available. It’s conceptually similar to partials, but almost in reverse. Like you could think of a template with partials as the template calling those partials to compose a page, and you almost think of slots like a bit of data calling a template to turn itself into a complete page. Vue is famous for having slots, a concept that made its way to web components.

  • PHP ❌
  • ERB ❌
  • Markdown ❌
  • Pug ✅ You can pull it off with “mixins.”
  • Slim ❌
  • Haml ❌
  • Liquid ❌
  • Go html/template ❌
  • Handlebars ❌
  • Mustache ❌
  • Twig ✅
    {% block footer %}
      © Copyright 2011 by you.
    {% endblock %}
  • Nunjucks ✅
    {% block item %}
      The name of the item is: {{ item.name }}
    {% endblock %}
  • Kit ❌
  • Sergey ✅
    <sergey-slot />

Does it have a special HTML syntax?

HTML has <angle> <brackets> and while whitespace matters a little (a space is a space, but 80 spaces is also… a space), it’s not really a whitespace-dependent language like Pug or Python. Changing these things up is a language choice. If all the language does is add in extra syntax, but otherwise, you write HTML as normal HTML, I’m considering that not a special syntax. If the language changes how you write normal HTML, that’s special syntax.

  • PHP ❌
  • ERB ⚠️ In Ruby, if you want that, you generally do Haml.
  • Markdown ✅ This is pretty much the whole point of Markdown.
    # Title
    Paragraph with [link](#link).

    - List
    - List

    > Quote
  • Pug ✅
  • Slim ✅
  • Haml ✅
  • Liquid ❌
  • Go html/template ❌
  • Handlebars ❌
  • Mustache ❌
  • Twig ❌
  • Nunjucks ❌
  • Kit ⚠️ HTML comment directives.
  • Sergey ⚠️ Some invented HTML tags.

Wait wait — what about stuff like React and Vue?

I’d agree that those technologies are component-based and used to do templating and often craft complete pages. They also can do many/most of the features listed here. They, and the many other JavaScript-based frameworks like them, are also generally capable of running on a server or during a build step and producing HTML, even if it sometimes feels like an afterthought (but not always). They also have other features that can be extremely compelling, like scoped/encapsulated styles, which requires cooperation between the HTML and CSS, which is an enticing feature.

I didn’t include them because they are generally intentionally used to essentially craft the DOM. They are focused on things like data retrieval and manipulation, state management, interactivity, and such. They aren’t really focused on just being an HTML processor. If you’re using a JavaScript framework, you probably don’t need a dedicated HTML processor, although it absolutely can be done. For example, mixing Markdown and JSX or mixing Vue templates and Pug.

I didn’t even put native web components on the list here because they are very JavaScript-focused.

Other considerations
  • Speed — How fast does it process? Do you care?
  • Language — What is it written in? Is it compatible with the machines you need to support?
  • Server or Build — Does it require a web server running to work? Or can it be run once during a build process? Or both?
Superchart

Feature order: Templating · Includes · Local Variables · Loops · Logic · Filters · Math · Slots · Special Syntax

  • PHP: ✅ ✅ ❌ ✅ ✅ ✅ ✅ ❌ ❌
  • ERB: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ⚠️
  • Markdown: ❌ ❌ ❌ ❌ ❌ ❌ ❌ ❌ ✅
  • Pug: ✅ ✅ ❌ ✅ ✅ ⚠️ ✅ ✅ ✅
  • Slim: ✅ ⚠️ ❌ ✅ ✅ ⚠️ ✅ ❌ ✅
  • Haml: ✅ ✅ ✅ ✅ ✅ ⚠️ ✅ ❌ ✅
  • Liquid: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌ ❌
  • Go html/template: ✅ ✅ ✅ ✅ ✅ ⚠️ ✅ ❌ ❌
  • Handlebars: ✅ ⚠️ ✅ ✅ ✅ ❌ ❌ ❌ ❌
  • Mustache: ✅ ✅ ❌ ✅ ✅ ❌ ❌ ❌ ❌
  • Twig: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌
  • Nunjucks: ✅ ✅ ✅ ✅ ✅ ✅ ✅ ✅ ❌
  • Kit: ✅ ✅ ❌ ❌ ✅ ❌ ❌ ❌ ⚠️
  • Sergey: ❌ ✅ ❌ ❌ ❌ ❌ ❌ ✅ ⚠️

The post Comparing HTML Preprocessor Features appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

New business wanted

QuirksBlog - Thu, 09/30/2021 - 12:22am

Last week Krijn and I decided to cancel performance.now() 2021. Although it was the right decision it leaves me in financially fairly dire straits. So I’m looking for new jobs and/or donations.

Even though the Corona trends in NL look good, and we could probably have brought 350 people together in November, we cannot be certain: there might be a new flare-up. More serious is the fact that it’s very hard to figure out how to apply the Corona checks the Dutch government requires, especially for non-EU citizens. We couldn’t figure out how UK and US people should be tested, and for us that was the straw that broke the camel’s back. Cancelling the conference relieved us of a lot of stress.

Still, it also relieved me of a lot of money. This is the fourth conference in a row we cannot run, and I have burned through all my reserves. That’s why I thought I’d ask for help.

So ...

Has QuirksMode.org ever saved you a lot of time on a project? Did it advance your career? If so, now would be a great time to make a donation to show your appreciation.

I am trying my hand at CSS coaching. Though I’ve had only a few clients so far, I found that I like it and would like to do it more. As an added bonus, because I’m still writing my CSS for JavaScripters book, I currently have most of the CSS layout modules in my head and can explain them straight away — even stacking contexts.

Or if there’s any job you know of that requires a technical documentation writer with a solid knowledge of web technologies and the browser market, drop me a line. I’m interested.

Anyway, thanks for listening.

Web Streams Everywhere (and Fetch for Node.js)

Css Tricks - Wed, 09/29/2021 - 4:51am

Chrome developer advocate Jake Archibald called 2016 “the year of web streams.” Clearly, his prediction was somewhat premature. The Streams Standard was announced back in 2014. It’s taken a while, but there’s now a consistent streaming API implemented in modern browsers (still waiting on Firefox…) and in Node (and Deno).

What are streams?

Streaming involves splitting a resource into smaller pieces called chunks and processing each chunk one at a time. Rather than needing to wait to complete the download of all the data, with streams you can process data progressively as soon as the first chunk is available.

There are three kinds of streams: readable streams, writable streams, and transform streams. Readable streams are where the chunks of data come from. The underlying data sources could be a file or HTTP connection, for example. The data can then (optionally) be modified by a transform stream. The chunks of data can then be piped to a writable stream.
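In code, that whole pipeline can be expressed in a single chain. A minimal sketch, assuming the three streams already exist:

// readable -> transform -> writable
readableStream
  .pipeThrough(transformStream)
  .pipeTo(writableStream)
  .then(() => console.log('all chunks processed'));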

Web streams everywhere

Node has always had its own type of streams. They are generally considered to be difficult to work with. The Web Hypertext Application Technology Working Group (WHATWG) web standard for streams came later, and it is largely considered an improvement. The Node docs call them “web streams,” which sounds a bit less cumbersome. The original Node streams aren’t being deprecated or removed, but they will now co-exist with the web standard stream API. This makes it easier to write cross-platform code and means developers only need to learn one way of doing things.

Deno, another attempt at server-side JavaScript by Node’s original creator, has always closely aligned with browser APIs and has full support for web streams. Cloudflare workers (which are a bit like service workers but running on CDN edge locations) and Deno Deploy (a serverless offering from Deno) also support streams.

fetch() response as a readable stream

There are multiple ways to create a readable stream, but calling fetch() is bound to be the most common. The response body of fetch() is a readable stream.

fetch('data.txt')
  .then(response => console.log(response.body));

If you look at the console log you can see that a readable stream has several useful methods. As the spec says, “A readable stream can be piped directly to a writable stream, using its pipeTo() method, or it can be piped through one or more transform streams first, using its pipeThrough() method.”

Unlike browsers, Node core doesn’t currently implement fetch. node-fetch, a popular dependency that tries to match the API of the browser standard, returns a node stream, not a WHATWG stream. Undici, an improved HTTP/1.1 client from the Node.js team, is a modern alternative to the Node.js core http.request (which things like node-fetch and Axios are built on top of). Undici has implemented fetch — and response.body does return a web stream. 🎉

Undici might end up in Node.js core eventually, and it looks set to become the recommended way to handle HTTP requests in Node. Once you npm install undici and import fetch, it works the same as in the browser. In the following example, we pipe the stream through a transform stream. Each chunk of the stream is a Uint8Array. Node core provides a TextDecoderStream to decode binary data.

import { fetch } from 'undici';
import { TextDecoderStream } from 'node:stream/web';

async function fetchStream() {
  const response = await fetch('https://example.com')
  const stream = response.body;
  const textStream = stream.pipeThrough(new TextDecoderStream());
}

response.body is synchronous so you don’t need to await it. In the browser, fetch and TextDecoderStream are available on the global object so you wouldn’t include any import statements. Other than that, the code is exactly the same for Node and web browsers. Deno also has built-in support for fetch and TextDecoderStream.

Async iteration

The for-await-of loop is an asynchronous version of the for-of loop. A regular for-of loop is used to loop over arrays and other iterables. A for-await-of loop can be used to iterate over an array of promises, for example.

const promiseArray = [Promise.resolve("thing 1"), Promise.resolve("thing 2")];

for await (const thing of promiseArray) {
  console.log(thing);
}

Importantly for us, this can also be used to iterate streams.

async function fetchAndLogStream() {
  const response = await fetch('https://example.com')
  const stream = response.body;
  const textStream = stream.pipeThrough(new TextDecoderStream());

  for await (const chunk of textStream) {
    console.log(chunk);
  }
}

fetchAndLogStream();

Async iteration of streams works in Node and Deno. All modern browsers have shipped for-await-of loops but they don’t work on streams just yet.
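Until then, a manual reader loop does the same job in browsers. A sketch using the textStream from the example above, inside an async function:

const reader = textStream.getReader();
while (true) {
  const { value, done } = await reader.read();
  if (done) break; // the stream is finished
  console.log(value);
}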

Some other ways to get a readable stream

Fetch will be one of the most common ways to get hold of a stream, but there are other ways. Blob and File both have a .stream() method that returns a readable stream. The following code works in modern browsers as well as in Node and in Deno — although, in Node, you will need to import { Blob } from 'buffer'; before you can use it:

const blobStream = new Blob(['Lorem ipsum'], { type: 'text/plain' }).stream();

Here is a front-end browser-based example: If you have a <input type="file"> in your markup, it’s easy to get the user-selected file as a stream.

const fileStream = document.querySelector('input').files[0].stream();

Shipping in Node 17, the FileHandle object returned by the fs/promises open() function has a .readableWebStream() method.

import { open } from 'node:fs/promises';

const file = await open('./some/file/to/read');

for await (const chunk of file.readableWebStream())
  console.log(chunk);

await file.close();

Streams work nicely with promises

If you need to do something after the stream has completed, you can use promises.

someReadableStream
  .pipeTo(someWritableStream)
  .then(() => console.log("all data successfully written"))
  .catch(error => console.error("something went wrong", error))

Or, you can optionally await the result:

await someReadableStream.pipeTo(someWritableStream)

Creating your own transform stream

We already saw TextDecoderStream (there’s also a TextEncoderStream). You can also create your own transform stream from scratch. The TransformStream constructor can accept an object. You can specify three methods in the object: start, transform and flush. They’re all optional, but transform is what actually does the transformation.

As an example, let’s pretend that TextDecoderStream() doesn’t exist and implement the same functionality (be sure to use TextDecoderStream in production though as the following is an over-simplified example):

const decoder = new TextDecoder();
const decodeStream = new TransformStream({
  transform(chunk, controller) {
    controller.enqueue(decoder.decode(chunk, {stream: true}));
  }
});

Each received chunk is modified and then forwarded on by the controller. In the above example, each chunk is some encoded text that gets decoded and then forwarded. Let’s take a quick look at the other two methods:

const transformStream = new TransformStream({
  start(controller) {
    // Called immediately when the TransformStream is created
  },
  flush(controller) {
    // Called when chunks are no longer being forwarded to the transformer
  }
});

A transform stream is a readable stream and a writable stream working together, usually to transform some data. Every object made with new TransformStream() has a property called readable, which is a ReadableStream, and a property called writable, which is a writable stream. Calling someReadableStream.pipeThrough() writes the data from someReadableStream to transformStream.writable, possibly transforms the data, then pushes the data to transformStream.readable.

Some people find it helpful to create a transform stream that doesn’t actually transform data. This is known as an “identity transform stream” — created by calling new TransformStream() without passing in any object argument, or by leaving off the transform method. It forwards all chunks written to its writable side to its readable side, without any changes. As a simple example of the concept, “hello” is logged by the following code:

const {readable, writable} = new TransformStream();
writable.getWriter().write('hello');
readable.getReader().read().then(({value, done}) => console.log(value))

Creating your own readable stream

It’s possible to create a custom stream and populate it with your own chunks. The new ReadableStream() constructor takes an object that can contain a start function, a pull function, and a cancel function. The start function is invoked immediately when the ReadableStream is created. Inside the start function, use controller.enqueue to add chunks to the stream.

Here’s a basic “hello world” example:

import { ReadableStream } from "node:stream/web";

const readable = new ReadableStream({
  start(controller) {
    controller.enqueue("hello");
    controller.enqueue("world");
    controller.close();
  },
});

const allChunks = [];
for await (const chunk of readable) {
  allChunks.push(chunk);
}
console.log(allChunks.join(" "));

Here’s a more real-world example taken from the streams specification that turns a web socket into a readable stream:

function makeReadableWebSocketStream(url, protocols) {
  let websocket = new WebSocket(url, protocols);
  websocket.binaryType = "arraybuffer";

  return new ReadableStream({
    start(controller) {
      websocket.onmessage = event => controller.enqueue(event.data);
      websocket.onclose = () => controller.close();
      websocket.onerror = () => controller.error(new Error("The WebSocket errored"));
    }
  });
}

Node streams interoperability

In Node, the old Node-specific way of working with streams isn’t being removed. The old node streams API and the web streams API will coexist. It might therefore sometimes be necessary to turn a Node stream into a web stream, and vice versa, using .fromWeb() and .toWeb() methods, which are being added in Node 17.
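Going from a Node stream to a web stream uses Readable.toWeb(). A minimal sketch, with a made-up file path:

import { Readable } from 'node:stream';
import { createReadStream } from 'node:fs';

// classic Node readable stream -> WHATWG web stream
const webStream = Readable.toWeb(createReadStream('./some-file.txt'));

And here’s the .fromWeb() direction: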

import {Readable} from 'node:stream';
import {fetch} from 'undici';

const response = await fetch(url);
const readableNodeStream = Readable.fromWeb(response.body);

Conclusion

ES modules, EventTarget, AbortController, URL parser, Web Crypto, Blob, TextEncoder/Decoder: increasingly more browser APIs are ending up in Node.js. The knowledge and skills are transferable. Fetch and streams are an important part of that convergence.

Domenic Denicola, a co-author of the streams spec, has written that the goal of the streams API is to provide an efficient abstraction and unifying primitive for I/O, like promises have become for asynchronicity. To become truly useful on the front end, more APIs need to actually support streams. At the moment a MediaStream, despite its name, is not a readable stream. If you’re working with video or audio (at least at the moment), a readable stream can’t be assigned to srcObject. Or let’s say you want to get an image and pass it through a transform stream, then insert it onto the page. At the time of writing, the code for using a stream as the src of an image element is somewhat verbose:

const response = await fetch('cute-cat.png');
const bodyStream = response.body;
const newResponse = new Response(bodyStream);
const blob = await newResponse.blob();
const url = URL.createObjectURL(blob);
document.querySelector('img').src = url;

(CodePen demo embedded in the original post.)

Over time, though, more APIs in both the browser and Node (and Deno) will make use of streams, so they’re worth learning about. There’s already a stream API for working with Web Sockets in Deno and Chrome, for example. Chrome has implemented Fetch request streams. Node and Chrome have implemented transferable streams to pipe data to and from a worker to process the chunks in a separate thread. People are already using streams to do interesting things for products in the real world: the creators of file-sharing web app Wormhole have open-sourced code to encrypt a stream, for example.

Perhaps 2022 will be the year of web streams…

The post Web Streams Everywhere (and Fetch for Node.js) appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Front-End Dev Shortcuts in iOS 15

Css Tricks - Wed, 09/29/2021 - 4:51am

I was pretty stoked when Chris shared a way to “View Source” on mobile. Sure, it’s not the same as a built-in feature but it allows iOS users like myself a way to peek at a site’s code the same way folks on Android can by prepending view-source: to a URL.

I was curious what sorts of dev-related tools might be baked right into my iPhone, so I dug around. It’s actually a perfect time to do that with iOS 15 fresh out of the oven and all.

The iOS Shortcuts app might be the most underrated app of all. It’s sorta like IFTTT or Zapier for iOS in that you get these hooks you can play around with to make one app respond to another and do things. Like dim the lights in my house when Marvin Gaye hits the HomePod. Useful stuff like that.

And guess what? Turns out there are a few pre-made Shortcuts recipes that can be useful for front-end developers. And you can grab them right in the Shortcuts app’s Gallery.

View source (for realzzz)

Of course, there’s a pre-fab shortcut to view the source of any webpage.

Now, when I’m on a webpage, I can ask Siri to view source, or open the Share Sheet to trigger the shortcut.

The option is added to the Share Sheet. Notice the option to share the source.

This would be perfect if we have line numbers, syntax highlighting, a mono font (seriously!), zooming, and… OK, maybe it’s far from perfect.

Grab all the images on a page

This is another Shortcuts gem. Open a webpage and ask Siri to “get images from page.” Now, I’m no fan of scraping assets off webpages, especially in bulk, but not all image collection has to be nefarious.

It’s also available in the Share Sheet. Sorry, not sorry.

Wayback Machine

How fun is this?! Someone had the idea to create a shortcut that opens the current page in the Wayback Machine to see past versions.

So cool to see that name in a list of options. Saved 7,599 times since 2007? That’s more than once per day!

Edit webpage

I actually think this one is legitimately useful. Say you want to test content in a design and want to see exactly how it looks on mobile. This shortcut basically drops contenteditable on every element on the page.

That’s all! Again, all of these are available right in the iOS Shortcuts app in the Gallery tab. Maybe there are others. Or maybe someone’s made a cool shortcut to share with the rest of us. 😉

The post Front-End Dev Shortcuts in iOS 15 appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

So many little design helper sites!

Css Tricks - Tue, 09/28/2021 - 11:00am

I had one of those little single-serving designer helper sites bookmarked the other day: getwaves.io. Randomized SVG waves! Lots of cool options! Easy to customize! Easy to copy and paste! Well played, z creative labs.

But then I saw the little link at the top of the page, that it was part of something called Haikei. So I checked that out, and holy mackerel, it’s great! There are a dozen or more similar “generators” within one app, each just as well done as the SVG waves one.

Random scattering of vector stars.

Kind of reminds me of Omatsuri, which is a similar collection of useful little one-off tools.

But heck, those are just two apps, even if they are collections of mini apps in and of themselves. Around the same time, I became aware of tiny-helpers.dev, which is a roundup site of all sorts of these little one-off helper sites. Haikei and Omatsuri are both on there, along with many hundreds more. Just the SVG area alone is super:

I’m sure y’all find these things just as useful as I do. They don’t make us lazy, they make us efficient. I know how to make a pattern. I know how to draw a curve with a Pen Tool. I know how to convert SVG into JSX. But using a dedicated tool makes me faster and better at it. And sometimes I don’t know how to do those things, but that doesn’t mean I can’t take advantage. Fake it ’til you make it, right?

The post So many little design helper sites! appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

iOS Browser Choice

Css Tricks - Tue, 09/28/2021 - 4:26am

Just last week I got one of those really?! 🤨 faces when this fact came up in conversation amongst smart and engaged fellow web developers: there is no browser choice on iOS. It’s all Safari. You can download apps that are named Chrome or Firefox, or anything else, but they are just veneer over Safari. If you’re viewing a website on iOS, it’s Safari.

I should probably call it what the App Store Review Guidelines call it: WebKit. I usually think it’s more clear to refer to browsers by their common names rather than the engine behind it, since each of The Big Three web browsers have distinct engines (for now anyway), but in this case, the engine is the important bit.

I’ll say how I feel: that sucks. I have this expensive computer in my pocket and it feels unfair that it is hamstrung in this very specific way of not allowing other browser engines. I also have an Apple laptop and it’s not hamstrung in that way, and I really hope it never is.

There is, of course, all sorts of nuance to this. My Apple laptop is hamstrung in that I can’t just install whatever OS I want on it unless I do it a sanctioned way. I also like the fact that there is some gatekeeping in iOS apps, and sometimes wish it was more strict. Like when I try to download simple games for my kid, and I end up downloading some game that is so laden with upsells, ads, and dark patterns that I think the developer should be in prison. I wish Apple just wouldn’t allow that garbage on the App Store at all. So that’s me wishing for more and less gatekeeping at the same time.

But what sucks about this lack of browser choice on iOS isn’t just the philosophy of gatekeeping, it’s that WebKit on iOS just isn’t that great. See Dave’s post for a rundown of just some of the problems from a day-to-day web developer perspective that I relate to. And because WebKit has literally zero competition on iOS, because Apple doesn’t allow competition, the incentive to make Safari better is much lighter than it could (should) be.

It’s not something like Google’s AMP, where if you really dislike it you can both not use it on your own sites and redirect yourself away from them on other sites. This choice is made for you.

My ability to talk intelligently about this is dwarfed by many others though, so what I really want to do is point out some of that recent writing. Allow me to pull a quote from a bunch of them…

iOS Engine Choice In Depth — Alex Russell

None of this is theoretical; needing to re-develop features through a straw, using less-secure, more poorly tested and analyzed mechanisms, has led to serious security issues in alternative iOS browsers. Apple’s policy, far from insulating responsible WebKit browsers from security issues, is a veritable bug farm for the projects wrenched between the impoverished feature set of Apple’s WebKit and the features they can securely deliver with high fidelity on every other platform.

This is, of course, a serious problem for Apple’s argument as to why it should be exclusively responsible for delivering updates to browser engines on iOS.

Chrome is the new Safari. And so are Edge and Firefox. — Niels Leenheer

The Safari and Chrome team both want to make the web safer and work hard to improve the web. But they do have different views on what the web should be.

Google is focussing on improving the web by making it more capable. To expand the relevance of the web, to go beyond what is possible today. And that also means allowing it to compete with native apps, with which the Android team surely does not always agree.

Safari seems to focus on improving the web as it currently is. To let it be a safer place, much faster and more beautiful. And if you want something more, you can use an app for that.

Browser choice on Apple’s iOS: privacy and security aspects — Stuart Langridge

Alternative browsers on iOS aren’t just restricted to WebKit, they’re restricted to the version of WebKit which is in the current version of Safari. Not even different or more modern versions of WebKit itself are allowed.

Even motivated users who work hard to get out of the browser choice they’re forced into don’t actually get a choice; if they choose a different browser, they still get the same one. If there’s a requirement from people for something, the market can’t provide it because competition is not permitted.

Briefing to the UK Competition and Markets Authority on Apple’s iOS browser monopoly and Progressive Web Apps — Bruce Lawson

[…] these people at Echo Pharmacy, not only have they got a really great website, but they also have to build an app for iOS just because they want to send push notifications. And, perhaps ironically, given Apple’s insistence that they do all of this for security and privacy, is that if I did choose to install this app, I would also be giving it permission to access my health and fitness data, my contact info, my identifiers sensitive info, financial info, user content, user data and diagnostics. Whereas, if I had push notifications and I were using a PWA, I’d be leaking none of this data.

So, we can see that despite Apple’s claims, I cannot recommend a PWA as being an equal experience on iOS, simply here because of push notifications. But it’s not just hurting current business, it’s also holding back future business.

I’ve heard precious few arguments defending Apple’s choice to only allow Safari on iOS. Vague Google can’t be trusted sentiment is the bulk of it, privacy-focused, performance-focused, or both. All in all, nobody wants this complete lack of choice but Apple.

As far as I know, there isn’t any super clear language from Apple on why this requirement is in place. That would be nice to hear, because maybe then whatever the reasons are could be addressed.

We hear mind-blowing tech news all the time. I’d love to wake up one morning and have the news be “Apple now allows other browser engines on iOS.” You’ll hear a faint yesssssss in the air because I’ve screamed it so loud from my office in Bend, Oregon, you can hear it at your house.

The post iOS Browser Choice appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Web Unleashed is Back! (Use Coupon Code “CSS-Tricks” for 50% Off)

Css Tricks - Tue, 09/28/2021 - 4:24am

(This is a sponsored post.)

Now in its tenth year (!!!), Web Unleashed is one of the top events for web devs. It’s coming up quick: October 20-22, 2021.

The lineup is amazing. You’ll hear from leaders in the industry, including Ethan Marcotte, Rachel Andrew, Harry Roberts, Addy Osmani, Tracy Lee, Ire Aderinokun, Suz Hinton, Shawn Wang, Jen Looper, and many more.

Register Now

And don’t forget the coupon code CSS-Tricks as 50% off is a massively good deal! (Here’s where you apply it)

So many hot topics will be covered, including responsive design, optimizing images, open source projects, performance metrics, Gatsby, GraphQL, Applied ML, growing your career, and much much more.

But, seriously folks, take it from someone who has gone to this event a bunch of years… it’s really well done, you’re going to learn a lot, and the price right now is unbeatable. The entire conference clocks in at CA $179.16, which is CA $89.91 with the CSS-Tricks coupon code. That’s… just go.

Direct Link to ArticlePermalink

The post Web Unleashed is Back! (Use Coupon Code “CSS-Tricks” for 50% Off) appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Steven Heller’s Font of the Month: Oposta

Typography - Tue, 09/28/2021 - 12:00am

Read the book, Typographic Firsts

In this second episode of our new monthly column, the legendary Steven Heller takes a closer look at one of his favorite fonts. Oposta, designed by Pedro Leal for DSType, with swashes aplenty, is an unabashed and oversized slab serif that seamlessly melds flamboyance and finesse.

The post Steven Heller’s Font of the Month: Oposta appeared first on I Love Typography.

Tonic (Component Framework)

Css Tricks - Mon, 09/27/2021 - 8:52am

I enjoy little frameworks like Tonic. It’s essentially syntactic sugar over <web-components /> to make them feel easier to use. Define a Class, template literal an HTML template, probably some other fancy helpers, and you’ve got a component that doesn’t feel terribly different to something like a React component, except you need no build process or other exotic tooling.

Here’s a Hello World + Counter example:

(CodePen demo embedded in the original post.)
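Since the embedded demo doesn’t travel well in text, here’s a rough idea of the baseline that Tonic is sugaring over: a counter written as a vanilla custom element. This is plain web components, not Tonic’s actual API:

class CounterButton extends HTMLElement {
  connectedCallback() {
    this.count = 0;
    this.render();
    // the listener lives on the host element, so re-rendering
    // the inner button doesn't lose it
    this.addEventListener('click', () => {
      this.count++;
      this.render();
    });
  }

  render() {
    this.innerHTML = `<button>Count: ${this.count}</button>`;
  }
}

customElements.define('counter-button', CounterButton);

Drop <counter-button></counter-button> into the markup and it works with no build step, which is the baseline these little frameworks improve on.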

They have a whole bunch of examples (in a separate repo). You can snag and use them, and they are pretty nice! So that makes Tonic a bit like a design system as well as a web component framework.

To be fair, it’s not that different from Lit, which Google is behind and pushing pretty actively.

Here’s a Hello, World + Counter with Lit:

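Sketched out the same way, in place of the embed (again, the naming is mine):

import { LitElement, html } from 'lit';

class MyCounter extends LitElement {
  // Reactive property: updating it schedules a re-render
  static properties = {
    count: { type: Number },
  };

  constructor() {
    super();
    this.count = 0;
  }

  render() {
    return html`
      <button @click=${() => this.count++}>Count: ${this.count}</button>
    `;
  }
}

customElements.define('my-counter', MyCounter);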

And Dave was just showing me petite-vue the other day, so I figured I might as well do that one, too:

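And a sketch of the petite-vue version, straight from the pattern in its README:

<!-- The init attribute tells petite-vue to scan the page on load -->
<script src="https://unpkg.com/petite-vue" defer init></script>

<div v-scope="{ count: 0 }">
  <button @click="count++">Count: {{ count }}</button>
</div>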

I’d say the petite-vue example wins for just how easy that is to pull off in just declarative HTML. But of course, there are a bunch of other considerations, from specific features to syntax, philosophy, and size. Just looking at size, if I pop open the Network tab in DevTools and look at the over-the-wire JavaScript for each demo…

  • Tonic = 5.1 KB
  • Lit = 12.6 KB
  • petite-vue = 8.1 KB

They are all basically the same: tiny.

I’ve never actually built anything real in any of them, so I’m not the best to judge one from the other. But they all seem pretty neat to me, particularly because they require no build step.

Like Video?

Dave and I went through all this:

The post Tonic (Component Framework) appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Systems for z-index

Css Tricks - Fri, 09/24/2021 - 11:19am

Say you have a z-index bug. Something is being covered up by something else. In my experience, a typical solution is to put position: relative on the thing so z-index works in the first place, and maybe rejigger the z-index value until the right thing is on top.

The danger here is that this sets off a little z-index war. Raising a z-index value over here fixes one bug, and then causes another by covering up some other element somewhere else when you didn’t want to. Hopefully, you can reason about the situation and fix the bugs, even if it means refactoring more z-index values than you thought you might need to.

If the “stack” of z-index values is complicated enough, you might consider an abstraction. One of the problems is that your z-index values are probably sprinkled rather randomly throughout your CSS. So, instead of simply letting that be, you can give yourself a central location for z-index values to reference later.

I’ve covered doing that as a Sass map in the past:

$zindex: (
  modal: 9000,
  overlay: 8000,
  dropdown: 7000,
  header: 6000,
  footer: 5000
);

.header {
  z-index: map-get($zindex, header);
}

Now you’ve got a central place to manage those values.

Rafi Strauss has taken that a step further with OZMap. It’s also a Sass map situation, but it’s configured like this:

$globalZIndexes: (
  a: null,
  b: null,
  c: 2000,
  d: null,
);

The idea here is that most values are auto-generated by the tool (the null values), but you can specify them if you want. That way, if you have a third-party component with a z-index value you can’t change, you plug that into the map, and then the auto-generated numbers will factor that in when you make layers on top. It also means it’s very easy to slip layers in between others.

I think all that is clever and useful stuff — but I also think it doesn’t help with another common z-index bug: stacking contexts. It’s always the stacking context, as I’ve written. If some element is in a stacking context that is under some other stacking context, there is no z-index value possible that will bring it on top. That’s a much harder bug to fix.
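To make the trap concrete, here’s a contrived sketch (class names are mine). The transform on .content quietly creates a stacking context, so the dropdown’s enormous z-index only competes inside .content and can never climb above the header:

<style>
  .header {
    position: relative;
    z-index: 2; /* its own stacking context at level 2 */
  }
  .content {
    transform: translateX(0); /* creates a stacking context */
  }
  .dropdown {
    position: absolute;
    z-index: 99999; /* only matters within .content's context */
  }
</style>

<header class="header">Site header</header>
<main class="content">
  <div class="dropdown">Still renders under the header</div>
</main>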

Andy Blum wrote about this recently.

One of the great frustrations of front-end development is the unexpected interaction and overlapping of those same elements. Struggling to arrange elements along the z-axis, which extends perpendicularly through the computer screen towards and away from the viewer, is such a shared front-end experience that an element’s z-index can sometimes be used as a frustrate-o-meter gauging the developer’s mood.

The key to maintainable z-index values is understanding that z-index values can’t always be directly compared. They’re not an absolute measurement along an imaginary ruler extending out of the viewport; rather, they are a relative order between elements within the same stacking context.

Turns out there is a nice little debugging tool for stacking contexts in the form of a browser extension (Chrome and Firefox). Andy gets into a very tricky situation where an RTL version of a website has a rule that uses a transform to move a video on the page. That transform triggers a new stacking context, and hence a bug with an unclickable button. Gnarly.

Kinda makes me think that there could be some kinda built-in DevTools way of seeing/understanding stacking contexts. I don’t know if this is the right answer, but I’ve been seeing a leaning into “badges” in DevTools more and more: those little clickable labels, like flex and grid, that sit next to elements in the Elements panel.

In fact, Chrome 94 recently dropped a new one for container queries, so they are being used more and more.

Maybe there could be a badge for stacking contexts? Typically what happens with badges is that you click it on and it shows a special UI. For example, for flexbox or grid it will show the overlay for all the grid lines. The special UI for stacking contexts could be color/texture coded and labelled areas showing all the stacking contexts and how they are layered.

The post Systems for z-index appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

The Self Provisioning Runtime

Css Tricks - Fri, 09/24/2021 - 8:56am

Big thoughts on where the industry is headed from Shawn Wang:

Advancements in two fields — programming languages and cloud infrastructure — will converge in a single paradigm: where all resources required by a program will be automatically provisioned, and optimized, by the environment that runs it.

I can’t articulate it like Shawn, but this all feels right.

I think of how front-end development has exploded over time with JavaScript being everywhere (see: “ooops, I guess we’re full-stack developers now”). Services have also exploded to help. Oh hiiii front-end developers! I see you can write a little JavaScript! Come over here and we’ll give you a complete database with GraphQL endpoints! And we’ll run your cloud functions! But that means there are a lot more people doing this who, in some sense, have no business doing it (points at self). I just have to trust that these services are going to protect me from myself as best they can.

Follow this trend line, and it will get easier and easier to run everything you need to run. Maybe I’ll write code like:

/*
  - Be a cloud function
  - Run at the edge
  - Get data from my data store I named "locations", require JWT auth
  - Return no slower than 250ms
  - I'm willing to pay $8/month for this, alert me if we're on target to exceed that
*/
exports.hello = (message) => {
  const name = message.data;
  const location = locations.get("location").where(message.id);
  return `Hello, ${name} from ${location}`;
};

That’s just some fake code, but you can see what I mean. Right by your code, you explain what infrastructure you need to have it work and it just does it. I saw a demo of cloudcompiler.run the other day and it was essentially like this. Even the conventions Netlify offers point highly in this direction, e.g. put your .js files in a functions folder, and we’ll take care of the rest. You just hit it with a local relative URL.
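For instance, a Netlify function is roughly this shape (the handler signature is per Netlify’s docs; the file name and greeting are mine):

// functions/hello.js, picked up by Netlify's build automatically
exports.handler = async () => {
  return {
    statusCode: 200,
    body: JSON.stringify({ message: "Hello from a cloud function" }),
  };
};

// ...which the front end can then hit at /.netlify/functions/hello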

I’d actually bet the future is even more magical than this, guessing what you need and making it happen.

Direct Link to ArticlePermalink

The post The Self Provisioning Runtime appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

What is Your Page Title on a Google Search Engine Results Page?

Css Tricks - Thu, 09/23/2021 - 8:11am

Whatever Google wants it to be. I always thought it was exactly what your <title> element was. Perhaps in lieu of that, what the first <h1> on the page is. But recently I noticed some pages on this site that were showing a title on SERPs that was a string that appeared nowhere at all in the source of the page.

When I first noticed it, I tweeted my basic findings…

Where does that title come from?

It's not the <h1> on the page, that reads:
backdrop-filter

It's not the <title> of the page, that reads:
backdrop-filter | CSS-Tricks pic.twitter.com/xzoPcge9nT

— Chris Coyier (@chriscoyier) August 20, 2021

The thread has more info than what unfurled here.

This is a known thing. Apparently, they’ve been doing this for a long time (~10 years), but it’s the first I’ve noticed it. And it’s undergone a recent change:

Last week, we introduced a new system of generating titles for web pages. Here’s the rundown on how it works: https://t.co/tgOGR8y8Ww

— Google Search Central (@googlesearchc) August 25, 2021

The article is pretty clear about things:

[…] we think our new system is producing titles that work better for documents overall, to describe what they are about, regardless of the particular query.

Also, while we’ve gone beyond HTML text to create titles for over a decade, our new system is making even more use of such text. In particular, we are making use of text that humans can visually see when they arrive at a web page. We consider the main visual title or headline shown on a page, content that site owners often place within <H1> tags or other header tags, and content that’s large and prominent through the use of style treatments.

Other text contained in the page might be considered, as might be text within links that point at pages.

The change is in response to people having sucky <title> text. Like it’s too long, too jacked up with SEO garbage (irony!), or just plain non-descriptive.

I’m not entirely sure how much I care just yet.

Part of me thinks, well, google.com isn’t the web. As important as it is, it’s a proprietary product by a private company and they can do whatever they want within the bounds of the law. In this case, it’s clear the intention is to help: to provide titles that are more clear than what the original page has.

Part of me thinks, well, that sucks that, as site owners, we have no control. If Google wanted to change the SERP title for every result pointing to this website to “CSS-Tricks is a stupid website, never visit it,” they could, and that’s that.

Part of me connects this kind of work to AMP. AMP was basically saying, “Y’all are absolutely horrible at building performant mobile websites, so we’re going to build a strict set of rules such that you can’t screw it up anymore, and dangling a carrot of better SERP placement if you buy into the rules.” This way of creating page titles is basically saying, “Y’all are absolutely horrible at providing good titles, so we’re going to title your pages for you so you can’t screw it up anymore and we can improve our SERPs.”

Except with AMP, you had to put in the development hours to make it happen. It was opt-in, even if the carrot was un-ignorable by content companies. This doesn’t carry the risk of burning development hours, but it’s also not something we get to opt into.

The post What is Your Page Title on a Google Search Engine Results Page? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Working With GraphQL Caching

Css Tricks - Thu, 09/23/2021 - 4:32am

If you’ve recently started working with GraphQL, or reviewed its pros and cons, you’ve no doubt heard things like “GraphQL doesn’t support caching” or “GraphQL doesn’t care about caching.” And for most, that is a big deal.

The official GraphQL documentation refers to caching techniques so, clearly, the folks behind it do care about caching and its performance benefits.

The perception that GraphQL is at odds with caching is something I want to address in this post. I want to walk through different caching techniques and how to leverage cache for GraphQL queries.

Getting GraphQL to cache automatically

Consider the following query to fetch a post and the post author:

query getPost {
  post(slug: "working-with-graphql-caching") {
    id
    title
    author {
      id
      name
      avatar
    }
  }
}

The one “magic trick” that makes automatic GraphQL caching work is the __typename meta field that all GraphQL APIs expose.

As the name suggests, __typename returns the name of the object’s type. This field can even be added manually to existing queries, and most of the time a GraphQL client or CDN will do it for you (urql is one such GraphQL client). The server might receive a query like this:

query getPost {
  post(slug: "working-with-graphql-caching") {
    __typename
    id
    title
    author {
      __typename
      id
      name
    }
  }
}

The response with __typename might look a little something like this:

{
  data: {
    __typename: "Post",
    id: 5,
    title: "Working with GraphQL Caching",
    author: {
      __typename: "User",
      id: 1,
      name: "Jamie Barton"
    }
  }
}

The __typename is a critical piece of the GraphQL caching puzzle because we can now cache this result and know it contains Post ID 5 and User ID 1.

Then there are libraries, like Apollo and Relay, that also have some level of built-in caching we can use automatically. Since they already know what’s in the cache, they can defer to the cache instead of remote APIs to fetch what the client asks for in a query.

Automatically invalidate the cache when there are changes

Imagine the post author edits the post’s title with the editPost mutation:

mutation {
  editPost(input: { id: 5, title: "Working with GraphQL Caching" }) {
    id
    title
  }
}

Since the GraphQL client automatically adds the __typename field, the result of this mutation immediately tells the cache that Post ID 5 has changed, and any cached query result containing that post needs to be invalidated:

{
  data: {
    __typename: "Post",
    id: 5,
    title: "Working with GraphQL Caching"
  }
}

The next time a user sends the same query, the query fetches the new data from the origin rather than serving the stale result from the cache. Magic!

Normalized GraphQL caching

Many GraphQL clients won’t cache entire query results.

Instead, they normalize the cached data into two data structures: one that associates each object with its data (e.g. Post #5: { … }, User #1: { … }, etc.), and one that associates each query with the objects it contains (e.g. getPost: { Post #5, User #1 }, etc.).
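As a rough illustration, those two structures might look like this (the shapes are simplified and made up for the example, not any particular client’s internals):

// One structure keyed by object identity (__typename + id)...
const entities = {
  "Post:5": { id: 5, title: "Working with GraphQL Caching", author: "User:1" },
  "User:1": { id: 1, name: "Jamie Barton" },
};

// ...and one keyed by query, listing the objects each result touched
const queryResults = {
  'getPost(slug: "working-with-graphql-caching")': ["Post:5", "User:1"],
};

// When a mutation returns { __typename: "Post", id: 5 }, the client
// updates entities["Post:5"], and every query listed against it
// re-reads fresh data with no network round trip.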

See urql’s documentation on normalized caching or Apollo’s “Demystifying Cache Normalization” for specific examples and use cases.

Caching edge cases with GraphQL

The one major edge case that GraphQL caches are unable to handle automatically is adding items to a list. So, if a createPost mutation passes through the cache, it doesn’t know which specific list to add that item to.

The easiest “fix” for this is to query the parent type in the mutation if it exists. For example, in the query below, we query the community relation on post:

query getPost {
  post(slug: "working-with-graphql-caching") {
    id
    title
    author {
      id
      name
      avatar
    }
    # Also query the community of the post
    community {
      id
      name
    }
  }
}

Then we can also query that community from the createPost mutation, and invalidate any cached query results that contain that community:

mutation createPost {
  createPost(input: { ... }) {
    id
    title
    # Also query and thus invalidate the community of the post
    community {
      id
      name
    }
  }
}

While it’s imperfect, the typed schema and __typename meta field are the keys that make GraphQL APIs ideal for caching.

You might be thinking by now that all of this is a hack and that GraphQL still doesn’t support traditional caching. And you wouldn’t be wrong. Since GraphQL operates via a POST request, you’ll need to offload to the application client and use the “tricks” above to take advantage of modern browser caching with GraphQL.

That said, this sort of thing isn’t always possible to do, nor does it make a lot of sense for managing cache on the client. GraphQL clients make you manually update the cache for even trickier cases, but a service like GraphCDN provides a “server-like” caching experience, which also exposes a manual purging API that lets you do things like this for greater cache control:

# Purge all occurrences of a specific object
mutation {
  purgeUser(id: [5])
}

# Purge by query name
mutation {
  _purgeQuery(queries: [listUsers, listPosts])
}

# Purge all occurrences of a type
mutation {
  purgeUser
}

Now, no matter where you consume your GraphCDN endpoint, there’s no longer a need to re-implement cache strategies in all of your client-side logic on mobile, web, etc. Edge caching makes your API feel super fast, and reduces load by sharing the cache amongst your users, and keeping it away from each of their clients.

Having recently used GraphCDN on a project, I found it took care of configuring a cache on the client or server for me, letting me get on with my project. For instance, I can swap out my endpoint with GraphCDN and get analytics, query complexity (which is coming soon), and more for free.

So, does GraphQL care about caching? It most certainly does! Not only does it provide some automatic caching methods baked right into it, but a number of GraphQL libraries offer additional ways to do it and even manage it.

Hopefully this article has given you some insight into the GraphQL caching story, and how to go about implementing it on the client, as well as taking advantage of CDNs to do it all for you. My goal here isn’t to sell you on using GraphQL on all your projects or anything, but if you are choosing between query languages and caching is a big concern, know that GraphQL is more than up for the task.

The post Working With GraphQL Caching appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Learn How to Build True Edge Apps With Cloudflare Workers and Fauna

Css Tricks - Thu, 09/23/2021 - 4:31am

(This is a sponsored post.)

There is a lot of buzz in web development around apps running on the edge instead of on a centralized server. Running your app on the edge allows your code to be closer to your users, which makes it faster. However, there is a spectrum of edge apps. Many apps only have some parts, usually static content, on the edge. But you can move even more to the edge, like compute and databases. This article describes how to do that.

Intro to the edge

First, let’s look at what the edge really is.

The “edge” refers to locations designed to be close to users instead of being at one centralized place. Edge servers are smaller servers placed on the edge. Traditionally, servers have been centralized, so there was only one server available. This made websites slower and less reliable. They were slower because the server can often be far away from the user. Say you have two users, one in Singapore and one in the U.S., and your server is in the U.S. For the customer in the U.S., the server would be close by, but for the person in Singapore, the signal would have to travel across the entire Pacific. This adds latency, which makes your app slower and less responsive for the user. Placing your servers on the edge mitigates this latency problem.

Normal server architecture

With an edge server design, your servers have lighter-weight versions in multiple different areas, so a user in Singapore would be able to access a server in Singapore, and a user in the U.S. would also be able to access a close server. Multiple servers on the edge also make an app more reliable because if the server in Singapore went offline, the user in Singapore would still be able to access the U.S. server.

Edge architecture

Many apps have more than 100 different server locations on the edge. However, multiple server locations can add significant cost. To make it cheaper and easier for developers to harness the power of the edge, many services offer the ability to easily deploy to the edge without having to spend a lot of money or time managing multiple servers. There are many different types of these. The most basic and widely used is an edge Content Delivery Network (CDN), which allows static content to be served from the edge. However, CDNs cannot do anything more complicated than serving content. If you need databases or custom code on the edge, you will need a more advanced service.

Introducing edge functions and edge databases

Luckily, there are solutions to this. The first, for running code on the edge, is edge functions. These are small pieces of code, automatically provisioned when needed, that are designed to respond to HTTP requests. They are also commonly called serverless functions, although not all serverless functions run on the edge. Some edge function providers are Lambda@Edge, Cloudflare Workers, and Deno Deploy. In this article, we will focus on Cloudflare Workers.

We can also take databases to the edge to ensure that our serverless functions run fast even when querying a database. The easiest solution here is Fauna. With traditional databases, it is very hard, or almost impossible, to scale to multiple regions: you have to manage different servers and how database updates are replicated between them. Fauna abstracts all of that away, allowing you to use a cross-region database with the click of a button. It also provides an easy-to-use GraphQL interface and its own query language if you need more. By using Cloudflare Workers and Fauna together, we can build a true edge app where everything runs on the edge.
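To give a feel for the shape of an edge function before we dive in, here is the classic minimal Cloudflare Worker, using the service-worker-style API Workers used at the time (the greeting is mine):

// Runs in every Cloudflare edge location, close to whoever asks
addEventListener("fetch", (event) => {
  event.respondWith(new Response("Hello from the edge!"));
});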

Using Cloudflare Workers and Fauna to build a URL shortener

Setting up Cloudflare Workers and the code

URL shorteners need to be fast, which makes Cloudflare Workers and Fauna perfect for the job. To get started, clone the repository at github.com/AsyncBanana/url-shortener and move into the generated folder:

git clone https://github.com/AsyncBanana/url-shortener.git
cd url-shortener

Then, install wrangler, the CLI needed for Cloudflare Workers. After that, install all npm dependencies.

npm install -g @cloudflare/wrangler
npm install

Then, sign up for Cloudflare Workers at https://dash.cloudflare.com/sign-up/workers and run wrangler login. Finally, to finish the Cloudflare Workers setup, run wrangler whoami, take the account ID from the output, and put it inside wrangler.toml in the root of the URL shortener.

Setting up Fauna

Good job, now we need to set up Fauna, which will provide the edge database for our URL shortener.

First, register for a Fauna account. Once you have finished that, create a new database by clicking “create database” on the dashboard. Enter URL-Shortener for the name, click classic for the region, and uncheck use demo data.

This is what it should look like

Once you create the database, click Collections on the dashboard sidebar and click “create new collection.” Name the collection urls (lowercase, to match the code we will write) and click save.

Next, click the Security tab on the sidebar and click “New key.” Next, click Save on the modal and copy the resulting API key. You can also name the key, but it is not required. Finally, copy the key into the file named .env in the code under FAUNA_KEY.

This is what the .env file should look like, except with API_KEY_HERE replaced with your key
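In other words, the file is a single line, with the placeholder swapped for your real key:

FAUNA_KEY=API_KEY_HERE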

Good job! Now we can start coding.

Create the URL shortener

There are two main folders in the code, public and src. The public folder is where all of the user-facing files are stored. src is the folder where the server code is. You can look through and edit the HTML, CSS, and client-side JavaScript if you want, but we will be focusing on the server-side code right now. If you look in src, you should see a file called urlManager.js. This is where our URL Shortening code will go.

This is the URL manager

First, we need to write the code that creates shortened URLs. In the function createUrl, create a database query by running FaunaClient.query(). We will use Fauna Query Language (FQL) to structure the query. FQL is composed of functions, which are all available under q in this case. When you execute a query, you pass the composed functions as arguments to FaunaClient.query(). Inside FaunaClient.query(), add:

q.Create(q.Collection("urls"), {
  data: {
    url: url
  }
})

This creates a new document in the urls collection containing an object with the URL to redirect to. Now we need the id of the document so we can return it as a redirection point. To get the document id in the Fauna query, pass q.Create as the second argument of q.Select, with the first argument being ["ref", "id"]; this selects the id of the new document. Then, return the value you get from awaiting FaunaClient.query(). The function should now look like this:

// Inside createUrl(url)
return await FaunaClient.query(
  q.Select(
    ["ref", "id"],
    q.Create(q.Collection("urls"), {
      data: {
        url: url,
      },
    })
  )
);

Now, if you run wrangler dev and go to localhost:8787, you should see the URL shortener page. You can enter in a URL and click submit, and you should see another URL generated. However, if you go to that URL it will not do anything. Now we need to add the second part of this, the URL redirect.

Look back in urlManager.js. You should see a function called processUrl. In that function, put:

const res = await FaunaClient.query(q.Get(q.Ref(q.Collection("urls"), id)));

This executes a Fauna query that gets the document in the urls collection with the specified id. You can use it to look up the destination URL for the id embedded in the short URL. Next, return the stored URL. Since we saved the document as data: { url: url }, the field to return is res.data.url:

const res = await FaunaClient.query(q.Get(q.Ref(q.Collection("urls"), id)));
return res.data.url;

Now you should be all set! Just run wrangler publish, go to your workers.dev domain, and try it out!

Conclusion

You now have a URL shortener that runs entirely on the edge! If you want to add more features or learn more about Fauna or Cloudflare Workers, check out the links below. I hope you have learned something from this, and thank you for reading.

Next steps
  • Further improve the speed of your URL shortener by adding caching
  • Add analytics
  • Read more about Fauna
  • Read more about Cloudflare Workers

The post Learn How to Build True Edge Apps With Cloudflare Workers and Fauna appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Container Units Should Be Pretty Handy

Css Tricks - Wed, 09/22/2021 - 1:09pm

Container queries are going to solve this long-standing issue in web design where we want to make design choices based on the size of an element (the container) rather than the size of the entire page. So, if a container is 600px wide, perhaps it has a row-like design, but any narrower than that it has a column-like design, and we’ll have that kind of control. That’s much different than transitioning between layouts based on screen size.

We can already size some things based on the size of an element, thanks to the % unit. For example, all these containers are 50% as wide as their parent container.
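That looks like this (the class name is made up):

.container {
  width: 50%; /* resolved against the parent's width */
}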

The % here is 1-to-1 with the property in use, so width is a % of width. Likewise, I could use % for font-size, but it will be a % of the parent container’s font-size. There is nothing that lets me cross properties and set the font-size as a % of a container’s width.

That is, unless we get container units! Here’s the table of units per the draft spec:

  • qw: 1% of a query container’s width
  • qh: 1% of a query container’s height
  • qi: 1% of a query container’s inline size
  • qb: 1% of a query container’s block size
  • qmin: the smaller value of qi or qb
  • qmax: the larger value of qi or qb

With these, I could easily set the font-size to a percentage of the parent container’s width. Or line-height! Or gap! Or margin! Or whatever!
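For instance, something along these lines; treat it as a sketch, since the exact property names for establishing a container were still settling in the draft:

.card-wrapper {
  container-type: inline-size; /* makes this element a query container */
}

.card h2 {
  font-size: 4qw;  /* 4% of the query container's width */
  padding: 1qmin;  /* 1% of the container's smaller dimension */
}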

Miriam notes that we can actually play with these units right now in Chrome Canary, as long as the container queries flag is on.

https://twitter.com/TerribleMia/status/1436087037040414720

I had a quick play too. I’ll just put a video here as that’ll be easier to see in these super early days.

And some great exploratory work from Scott here as well:

Developed a demo using the new container units and it’s really powerful stuff! Just two font size declarations and <100 lines of CSS in this demo and it has a ton of flexibility.https://t.co/LlrHFhVND0

+ Thanks @TerribleMia for the nudge! https://t.co/5cs8JE75nW

— Scott Kellum (@ScottKellum) September 10, 2021

Ahmad Shadeed is also all over this!

Query units can save us effort and time when dealing with things like font-size, padding, and margin within a component. Instead of manually increasing the font size, we can use query units instead.

Ahmad Shadeed, “CSS Container Query Units”

Maybe container queries and container units will drop for real at the same time. 🤷

The post Container Units Should Be Pretty Handy appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Twitter’s div Soup and Uglyfied CSS, Explained

Css Tricks - Wed, 09/22/2021 - 11:09am

When I came up in web development (2005-2010 were formative years for me), one of the first lessons I learned was to have a clean foundation of HTML. “What Beautiful HTML Code Looks Like” is actually one of the most popular posts on this very site. The image in that post made its way to popular pages on subreddits every once in a while.

Now, while I still generally write HTML like that by default when working on sites like this one, I also work on projects that don’t have HTML output like that at all. I don’t work on Twitter, but here’s what you might see when opening up DevTools and inspecting the DOM there:

Nobody would accuse that of being “clean” HTML. In fact, it’s not hard to imagine criticism being thrown at it. Why all the divs! divitis!! Is that seriously a <div role="button">? C’mon now. Those are awful class names! Robot barf!

What’s probably closer to the truth is that it doesn’t actually matter that much. It’s not that semantics don’t matter. It’s not that accessibility doesn’t matter. It’s not that performance doesn’t matter. It’s that this output actually does those things fairly well, or at least as well as they intend to do them.

Giuseppe Gurgone gets into the details.

React Native for Web provides cross platform primitives that normalize inconsistencies and allow to build web applications that are, among other things, touch friendly.

To the eyes of somebody who’s not familiar with the framework, the HTML produced by React Native for Web might look utterly ugly and full of bad practices.

That DOM actually does produce an accessibility tree that is expected and usable. The <div>s with roles are to overcome certain cross-platform styling limitations. Those classes are from a styling framework that helps with scoping CSS. It looks wacky, but it’s all for a reason.

That’s not to say all this is above criticism. You could argue that robotic class names don’t allow for user stylesheets that may assist with accessibility. You could argue the superfluous divs make for an unnecessarily heavy DOM. You could argue that shipping robot barf makes the web less learnable, particularly without sourcemaps.

There are things to talk about, but just seeing a bunch of divs with weird class names doesn’t mean it’s bad code. And it’s not limited to React Native either; loads of frameworks have their own special twists in what they actually ship to browsers, and it’s almost always in service of making the site work better in some fashion, not in service of teaching or readability.

Direct Link to ArticlePermalink

The post Twitter’s div Soup and Uglyfied CSS, Explained appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.
