Front End Web Development

Scrollbar Reflowing

CSS-Tricks - Tue, 08/24/2021 - 9:13am

This is a bit of advice for developers on Macs I’ve heard quite a few times, and I’ll echo it: go into System Preferences > General > Show scroll bars and set to always. This isn’t about you, it’s about the web. See, the problem is that without this setting on, you’ll never experience scrollbar-triggered layout shifts, but everyone else with this setting on will. Since you want to design around not causing this type of jank, you should use this setting yourself.

Here’s Stefan Judis demonstrating that usage of viewport units can be one of the causes:

Today I learned a little CSS fact I wasn't aware of. Viewport units are not taking scrollbars into consideration and can trigger overflows easily.

Found on the @polypane blog: https://t.co/RpTnwzrYcA

Video alt: Example showing that 100vw can trigger overflows. pic.twitter.com/JXakqV3Vna

— Stefan Judis (@stefanjudis) August 22, 2021

There, 100vw causes horizontal overflow, because the vertical scrollbar was already in play, taking up some of that space. Feels incredibly wrong to me somehow, but here we are.

Stefan points to Kilian Valkhof’s article about dealing with this. The classic fixes:

The easy fix is to use width: 100% instead. Percentages don’t include the width of the scrollbar, so will automatically fit.

If you can’t do that, or you’re setting the width on another element, add overflow-x: hidden or overflow: hidden to the surrounding element to prevent the scrollbar.

Kilian Valkhof, “How to find the cause of horizontal scrollbars”

Those are hacks, I’d say, since neither is an exact match for what you were actually trying to do.
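Sketched out, those classic fixes look something like this (the selectors here are made up purely for illustration):

/* Instead of width: 100vw, size against the parent */
.full-width {
  width: 100%; /* percentages don't include the scrollbar */
}

/* Or, if the width has to stay as-is, clip the overflow on the surrounding element */
.page-wrapper {
  overflow-x: hidden;
}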

Fortunately, there is an incoming spec-based solution. Bramus has the scoop:

A side-effect when showing scrollbars on the web is that the layout of the content might change depending on the type of scrollbar. The scrollbar-gutter CSS property —which will soon ship with Chromium — aims to give us developers more control over that.

Bramus Van Damme, “Prevent unwanted Layout Shifts caused by Scrollbars with the scrollbar-gutter CSS property”

Sounds like the trick, and I wouldn’t be surprised if this becomes a very common line in reset stylesheets:

body {
  scrollbar-gutter: stable both-edges;
}

That makes me wonder though… it’s the <body> when dealing with this at the whole-page level, right? Not the <html>? That’s been weird in the past with scrolling-related things.

Are we actually going to get it across all browsers? Who knows. Seems somewhat likely, but even if it gets close, and the behavior is specced, I’d go for it. Feels progressive-enhancement-friendly.


Architecting With Next.js

CSS-Tricks - Tue, 08/24/2021 - 9:12am

(This is a sponsored post.)

Free event hosted by Netlify coming up next week (Wednesday, August 25th): Architecting with Next.js. It’s just a little half-day thing. No brainer.

Join us for a special event where we’ll highlight business teams using Next.js in production, including architecture deep dives, best practices and challenges. Next.js is the fastest-growing framework for Jamstack developers. With a compelling developer experience and highly performant results, it’s an emerging choice for delivering customer-facing sites and apps.

Next.js is such a nice framework, it’s no surprise to me it’s blowing up. It’s built on React, a library familiar to tons of people, enabling component-based front-ends with common niceties built right in, like CSS modules. It produces HTML output, so it’s fast and good for SEO. It has smart defaults, so you’re rarely doing stuff like schlubbing your way through webpack config (unless you need that control, in which case you can). It does basic routing without you having to code it. Good stuff.



Introduction to the Solid JavaScript Library

CSS-Tricks - Tue, 08/24/2021 - 4:30am

Solid is a reactive JavaScript library for creating user interfaces without a virtual DOM. It compiles templates down to real DOM nodes once and wraps updates in fine-grained reactions so that when state updates, only the related code runs.

This way, the compiler can optimize initial render and the runtime optimizes updates. This focus on performance makes it one of the top-rated JavaScript frameworks.

I got curious about it and wanted to give it a try, so I spent some time creating a small to-do app to explore how this framework handles rendering components, updating state, setting up stores, and more.

Here’s the final demo if you just can’t wait to see the final code and result:

Getting started

Like most frameworks, we can start by installing the npm package. To use the framework with JSX, run:

npm install solid-js babel-preset-solid

Then, we need to add babel-preset-solid to our Babel, webpack, or Rollup config file with:

"presets": ["solid"]

Or if you’d like to scaffold a small app, you can also use one of their templates:

# Create a small app from a Solid template
npx degit solidjs/templates/js my-app

# Change directory to the project created
cd my-app

# Install dependencies
npm i # or yarn or pnpm

# Start the dev server
npm run dev

There is TypeScript support so if you’d like to start a TypeScript project, change the first command to npx degit solidjs/templates/ts my-app.

Creating and rendering components

To render components, the syntax is similar to React.js, so it might seem familiar:

import { render } from "solid-js/web";

const HelloMessage = props => <div>Hello {props.name}</div>;

render(
  () => <HelloMessage name="Taylor" />,
  document.getElementById("hello-example")
);

We need to start by importing the render function, then we create a div with some text and a prop, and we call render, passing the component and the container element.

This code then compiles down to real DOM expressions. For example, the code sample above, once compiled by Solid, looks something like this:

import { render, template, insert, createComponent } from "solid-js/web";

const _tmpl$ = template(`<div>Hello </div>`);

const HelloMessage = props => {
  const _el$ = _tmpl$.cloneNode(true);
  insert(_el$, () => props.name);
  return _el$;
};

render(
  () => createComponent(HelloMessage, { name: "Taylor" }),
  document.getElementById("hello-example")
);

The Solid Playground is pretty cool and shows that Solid has different ways to render, including client-side, server-side, and client-side with hydration.

Tracking changing values with Signals

Solid uses a hook called createSignal that returns two functions: a getter and a setter. If you’re used to a framework like React.js, this might seem a little weird. You’d normally expect the first element to be the value itself; however, in Solid, we need to explicitly call the getter to intercept where the value is read in order to track its changes.

For example, if we’re writing the following code:

const [todos, addTodos] = createSignal([]);

Logging todos will not return the value, but a function instead. If we want to use the value, we need to call the function, as in todos().
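Here’s a minimal sketch of that distinction (the counter is made up for illustration):

import { createSignal } from "solid-js";

const [count, setCount] = createSignal(0);

console.log(count);   // logs the getter function itself, not a value
console.log(count()); // logs 0: calling the getter reads (and tracks) the value

setCount(count() + 1);
console.log(count()); // logs 1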

For a small todo list, this would be:

import { createSignal } from "solid-js";

const TodoList = () => {
  let input;
  const [todos, addTodos] = createSignal([]);

  const addTodo = value => {
    return addTodos([...todos(), value]);
  };

  return (
    <section>
      <h1>To do list:</h1>
      <label for="todo-item">Todo item</label>
      <input type="text" ref={input} name="todo-item" id="todo-item" />
      <button onClick={() => addTodo(input.value)}>Add item</button>
      <ul>
        {todos().map(item => (
          <li>{item}</li>
        ))}
      </ul>
    </section>
  );
};

The code sample above would display a text field and, upon clicking the “Add item” button, would update the todos with the new item and display it in a list.

This can seem pretty similar to using useState, so how is using a getter different? Consider the following code sample:

console.log("Create Signals"); const [firstName, setFirstName] = createSignal("Whitney"); const [lastName, setLastName] = createSignal("Houston"); const [displayFullName, setDisplayFullName] = createSignal(true); const displayName = createMemo(() => { if (!displayFullName()) return firstName(); return `${firstName()} ${lastName()}`; }); createEffect(() => console.log("My name is", displayName())); console.log("Set showFullName: false "); setDisplayFullName(false); console.log("Change lastName "); setLastName("Boop"); console.log("Set showFullName: true "); setDisplayFullName(true);

Running the above code would result in:

Create Signals
My name is Whitney Houston
Set showFullName: false
My name is Whitney
Change lastName
Set showFullName: true
My name is Whitney Boop

The main thing to notice is how My name is ... is not logged after setting a new last name. This is because, at that point, nothing is listening for changes on lastName(). The new value of displayName() is only computed when the value of displayFullName() changes, which is why we see the new last name displayed when setDisplayFullName is set back to true.

This gives us a safer way to track value updates.

Reactivity primitives

In that last code sample, I introduced createSignal, but also a couple of other primitives: createEffect and createMemo.

createEffect

createEffect tracks dependencies and runs after each render where a dependency has changed.

// Don't forget to import it first with 'import { createEffect } from "solid-js";'
const [count, setCount] = createSignal(0);

createEffect(() => {
  console.log("Count is at", count());
});

Count is at... logs every time the value of count() changes.

createMemo

createMemo creates a read-only signal that recalculates its value whenever the executed code’s dependencies update. You would use it when you want to cache some values and access them without re-evaluating them until a dependency changes.

For example, if we wanted to display a counter 100 times and update the value when clicking on a button, using createMemo would allow the recalculation to happen only once per click:

function Counter() {
  const [count, setCount] = createSignal(0);

  // Calling `counter` without wrapping it in `createMemo` would result in calling it 100 times.
  // const counter = () => {
  //   return count();
  // }

  // Calling `counter` wrapped in `createMemo` results in calling it once per update.
  // Don't forget to import it first with 'import { createMemo } from "solid-js";'
  const counter = createMemo(() => {
    return count();
  });

  return (
    <>
      <button onClick={() => setCount(count() + 1)}>Count: {count()}</button>
      <div>1. {counter()}</div>
      <div>2. {counter()}</div>
      <div>3. {counter()}</div>
      <div>4. {counter()}</div>
      {/* 96 more times */}
    </>
  );
}

Lifecycle methods

Solid exposes a few lifecycle methods, such as onMount, onCleanup and onError. If we want some code to run after the initial render, we need to use onMount:

// Don't forget to import it first with 'import { onMount } from "solid-js";'
onMount(() => {
  console.log("I mounted!");
});

onCleanup is similar to componentWillUnmount in React — it runs when there is a recalculation of the reactive scope.

onError executes when there’s an error in the nearest child’s scope. For example, we could use it when fetching data fails.
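Here’s a hedged sketch of how these lifecycle methods might fit together in one component (the interval timer is made up for illustration):

import { createSignal, onCleanup, onError, onMount } from "solid-js";

const Ticker = () => {
  const [ticks, setTicks] = createSignal(0);

  // Runs once, after the initial render
  onMount(() => console.log("I mounted!"));

  const timer = setInterval(() => setTicks(ticks() + 1), 1000);
  // Runs when this reactive scope is disposed or recalculated
  onCleanup(() => clearInterval(timer));

  // Catches errors thrown in the nearest child scope
  onError(err => console.error("Something went wrong:", err));

  return <div>Ticks: {ticks()}</div>;
};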

Stores

To create stores for data, Solid exposes createStore, whose return value is a read-only proxy object and a setter function.

For example, if we changed our todo example to use a store instead of state, it would look something like this:

const [todos, addTodos] = createStore({
  list: []
});

createEffect(() => {
  console.log(todos.list);
});

onMount(() => {
  addTodos("list", [
    ...todos.list,
    { item: "a new todo item", completed: false }
  ]);
});

The code sample above would start by logging a proxy object with an empty array, followed by a proxy object with an array containing the object {item: "a new todo item", completed: false}.

One thing to note is that the top level state object cannot be tracked without accessing a property on it — this is why we’re logging todos.list and not todos.

If we only logged todos in createEffect, we would see the initial value of the list, but not the one after the update made in onMount.

To change values in stores, we can update them using the setter function we defined when using createStore. For example, if we wanted to update a todo list item to “completed,” we could update the store this way:

const [todos, setTodos] = createStore({
  list: [{ item: "new item", completed: false }]
});

const markAsComplete = text => {
  setTodos(
    "list",
    i => i.item === text,
    "completed",
    c => !c
  );
};

return (
  <button onClick={() => markAsComplete("new item")}>Mark as complete</button>
);

Control Flow

To avoid wastefully recreating all the DOM nodes on every update when using methods like .map(), Solid lets us use template helpers.

A few of them are available, such as For to loop through items, Show to conditionally show and hide elements, Switch and Match to show elements that match a certain condition, and more!

Here are some examples showing how to use them:

<For each={todos.list} fallback={<div>Loading...</div>}>
  {(item) => <div>{item}</div>}
</For>

<Show when={todos.list[0].completed} fallback={<div>Loading...</div>}>
  <div>1st item completed</div>
</Show>

<Switch fallback={<div>No items</div>}>
  <Match when={todos.list[0].completed}>
    <CompletedList />
  </Match>
  <Match when={!todos.list[0].completed}>
    <TodosList />
  </Match>
</Switch>

Demo project

This was a quick introduction to the basics of Solid. If you’d like to play around with it, I made a starter project you can automatically deploy to Netlify and clone to your GitHub by clicking on the button below!

Deploy to Netlify

This project includes the default setup for a Solid project, as well as a sample Todo app with the basic concepts I’ve mentioned in this post to get you going!

There is much more to this framework than what I covered here so feel free to check the docs for more info!


Detecting Media Query Support in CSS and JavaScript

CSS-Tricks - Mon, 08/23/2021 - 7:16am

You can’t just do @media (prefers-reduced-data: no-preference) alone because, as Kilian Valkhof says:

[…] that would be false if either there was no support (since the browser wouldn’t understand the media query) or if it was supported but the user wanted to preserve data.

Usually @supports is the tool for this in CSS, but that doesn’t work with @media queries. Turns out there is a solution though:

@media not all and (prefers-reduced-data), (prefers-reduced-data) {
  /* ... */
}

This is a somewhat complex logic puzzle involving media query syntax and how browsers evaluate these things. It’s nice you can ultimately handle a no-support fallback situation in one expression.
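On the JavaScript side of the title, one way to do the equivalent check (a hedged sketch, not quoted from Kilian’s article) relies on matchMedia() normalizing any query the browser doesn’t understand to "not all":

const query = "(prefers-reduced-data)";
const mql = window.matchMedia(query);

// Unknown queries serialize to "not all", so this doubles as a support check
const isSupported = mql.media === query;

if (isSupported && mql.matches) {
  // supported, and the user prefers reduced data
}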



Native JavaScript Routing?

CSS-Tricks - Mon, 08/23/2021 - 4:28am

We can update the URL in JavaScript. We’ve got the APIs pushState and replaceState:

// Adds to browser history
history.pushState({}, "About Page", "/about");

// Doesn't
history.replaceState({}, "About Page", "/about");

JavaScript is also capable of replacing any content in the DOM.

// Hardcore
document.body.innerHTML = `
  <div>New body who dis.</div>
`;

So with those powers combined, we can build a website where we navigate to different “pages” but the browser never refreshes. That’s literally what “Single Page App” (SPA) means.

But routing can get a bit complicated. We’re really on our own implementing it outside these somewhat low-level APIs. I’m most familiar with reaching for something like React Router, which allows the expression of routes in JSX. Something like this:

<Router>
  <Switch>
    <Route path="/about">
      <About />
    </Route>
    <Route path="/users">
      <Users />
    </Route>
    <Route path="/user/:id">
      <User id={id} />
    </Route>
    <Route path="/">
      <Home />
    </Route>
  </Switch>
</Router>

The docs describe this bit like:

A <Switch> looks through its children <Route> and renders the first one that matches the current URL.

So it’s a little bit like a RegEx matcher with API niceties, like the ability to make a “token” with something like :id that acts as a wildcard you can pass to components to use in queries and such.
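For instance, here’s a hedged sketch of how the component behind that route might read the :id token with React Router’s useParams hook (rather than receiving it as a prop):

import { useParams } from "react-router-dom";

function User() {
  // `id` corresponds to the `:id` token in the route path
  const { id } = useParams();
  return <h1>User: {id}</h1>;
}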

This is work! Hence the reason we have libraries to help us. But it looks like the web platform is doing what it does best and stepping in to help where it can. Over on the Google webdev blog, this is explained largely the same way:

Routing is a key piece of every web application. At its heart, routing involves taking a URL, applying some pattern matching or other app-specific logic to it, and then, usually, displaying web content based on the result. Routing might be implemented in a number of ways: it’s sometimes code running on a server that maps a path to files on disk, or logic in a single-page app that waits for changes to the current location and creates a corresponding piece of DOM to display.

While there is no one definitive standard, web developers have gravitated towards a common syntax for expressing URL routing patterns that share a lot in common with regular expressions, but with some domain-specific additions like tokens for matching path segments.

Jeff Posnick, “URLPattern brings routing to the web platform”

New tech!

const p = new URLPattern({
  pathname: '/foo/:image.jpg',
  baseURL: 'https://example.com',
});

We can set up a pattern like that, and then run tests against it by shooting it a URL (probably the currently navigated-to one):

let result = p.test('https://example.com/foo/cat.jpg');
// true

result = p.exec('https://imagecdn1.example.com/foo/cat.jpg');
// result.hostname.groups.subdomain will be 'imagecdn1'
// result.pathname.groups[0] will be 'foo', corresponding to *
// result.pathname.groups.image will be 'cat'

I would think the point of all this is perhaps being able to build routing into SPAs without having to reach for libraries, making for lighter/faster websites. Or that the libraries that help us with routing can leverage it, making the libraries smaller, and ultimately websites that are lighter and faster.
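To make that concrete, here’s a hedged sketch (my own, not from the post) of what a tiny library-free router built on URLPattern and the History API might look like:

const routes = [
  { pattern: new URLPattern({ pathname: "/" }), view: () => "<h1>Home</h1>" },
  { pattern: new URLPattern({ pathname: "/user/:id" }), view: ({ id }) => `<h1>User ${id}</h1>` },
];

function render() {
  for (const { pattern, view } of routes) {
    const match = pattern.exec(location.href);
    if (match) {
      document.body.innerHTML = view(match.pathname.groups);
      return;
    }
  }
  document.body.innerHTML = "<h1>Not found</h1>";
}

function navigate(path) {
  history.pushState({}, "", path);
  render();
}

// Re-render when the user hits back/forward
window.addEventListener("popstate", render);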

This is not solid tech yet, so probably best to just read the blog post to get the gist. And use the polyfill if you want to try it out.

And speaking of the web platform showing love on SPAs lately, check out Shared Element Transitions which seems to be re-gaining momentum.


“Disambiguating Tailwind”

CSS-Tricks - Fri, 08/20/2021 - 9:59am

I appreciated this bit of nuance from a post on Viget’s blog:

There could be a whole article written about the many flavours of Tailwind, but broadly speaking those flavours are:

1. Stock tailwind, ie. no changes to the configuration,
2. Tailwind that heavily relies on @apply in CSS files but still follows BEM or some other component organization,
3. Tailwind UI, and
4. heavily customizing Tailwind’s configuration and writing custom plugins.

Leo Bauza, “How does Viget CSS?”

The way you use some particular technologies can be super different from how someone else does, to the point that they bear little resemblance, even if they share the same core.

Bootstrap is similar. You can link up Bootstrap off a CDN, the entire untouched built version of everything it offers. You can download the Sass/JavaScript source files, include them in your own project, and bring-your-own build process. This gives you the ability to customize them, but then that complicates the upgrade path. Or you could use Bootstrap from a package manager, meaning you’re referencing the source files from your own build process, but never touching them directly. Either way, if you’re using the source, you can then do things like customize it (change colors, fonts, etc.), and even trim down what parts of it you want to use.

React is similar. Vue is similar. You can link them up right off a CDN and use them right in the browser with no build process. Or they can be at the heart of your build process, and pulled from npm. Or they can be the foundation of a framework like Next or Nuxt.

When you multiply the fact that any given single technology can be used so many different ways by how many different technologies are in use on any given project, it’s no wonder developers’ experiences on projects are so wildly different, and you hear a lot of people talking past each other in debate.


Creating a Headless WordPress Site With Frontity

CSS-Tricks - Fri, 08/20/2021 - 4:59am

Frontity is a WordPress-focused React-based server-side dynamic-rendering framework (phew!) that allows us to create fast headless websites. Chris has a good introduction to Frontity. I guess you could think of it like Next.js for WordPress. And while the demand for headless WordPress sites may be a niche market at the moment, the Frontity showcase page demonstrates that there is excitement for it.

Frontity’s documentation, tutorials and guides focus on creating headless blog sites, and its showcase page lists more than 60 sites, including CNBC Africa, Forbes Africa, Hoffmann Academy, Aleteia, Diariomotor and others. In that list, five headless WordPress sites made the cut as production-level showcase studies.

Frontity’s official website itself is a very interesting production-level use case that demonstrates how to successfully link the WordPress Block Editor to Frontity’s framework.

So what I’m going to do is walk you through the steps to create a Frontity site in this article, then follow it up with another article on using and customizing Frontity’s default Mars theme. We’ll start with this post, where we’ll cover the basics of setting up a headless WordPress site on the Frontity framework.


This is not an expert guide but rather a headless WordPress site enthusiast’s journey toward learning the Frontity experience. For a more detailed and authoritative guide, please refer to Frontity’s documentation.

Prerequisites and requirements

Because Frontity is a React-based framework, I’d recommend that you have a working knowledge of React and of JavaScript ES6 features. Frontity’s tutorial doc details some additional requirements, including:

  • Proficiency in HTML and CSS
  • Experience using the command line
  • A Node.js server
  • And, of course, a code editor

Ready? Let’s go!

First, let’s get to know Frontity

Chris has already explained what Frontity is and how it works. Frontity is a WordPress-focused, opinionated React framework with its own state manager and CSS styling solutions. The recently updated Frontity architecture describes how a Frontity project can be run in either decoupled mode or embedded mode.

In the decoupled mode (see below), Frontity fetches REST API data from the WordPress PHP server and returns the final HTML to users as an isomorphic React app (used in the custom theme). In this mode, the main domain points to Frontity, while a subdomain points to the WordPress site.

In the embedded mode, the Frontity theme package (an isomorphic React app) replaces the WordPress PHP theme via the required Frontity Embedded Mode plugin. The plugin makes an internal HTTP request to the Frontity/Node.js server to retrieve the HTML pages. In this mode, the main domain points to WordPress, so both site visitors and content editors use the same domain, while Frontity uses a secondary domain (i.e., a subdomain).

Frontity’s built-in AMP feature generates a stripped-down version of HTML pages for faster server-side rendering, avoiding multiple requests to WordPress. It provides a more dynamic static-site-like experience that is fast, with built-in server extensibility that can be further improved using a Serverless Pre-rendering (SPR) technique (also called stale-while-revalidate caching) through KeyCDN and StackPath.

There’s more on Frontity mode in the Frontity architecture documentation.

Frontity site installation

To start our project, we need a Frontity project site and a WordPress installation for the data source endpoint. In the following sections, we will learn how to set up our Frontity site and connect it to our WordPress installation. The Frontity quick start guide is a handy step-by-step reference for this setup.

First, check whether Node.js and npm are already installed on your machine. If not, download and install them.

#! check node version
node --version
v14.9.0 #! output if installed

#! check npm version
npm --version
6.14.7 #! output if installed

#! to upgrade npm to the latest version
npm install npm@latest -g

Step 1: Creating a Frontity project

Let’s run the following command using the Frontity CLI to create a new my-frontity project.

### creating a frontity project
npx frontity create my-frontity

Running this command kicks off a short series of prompts.

Step 2: Select the Frontity mars-theme

Frontity provides two themes, twentytwenty-theme and mars-theme. For starters, Frontity recommends selecting the mars-theme.

If you answer the email prompt, a valid email address should be entered. I found it useful to enter my email the first time so I could stay in contact with the Frontity developers, but I haven’t seen much use for it since.

Step 3: Frontity project installation

The Frontity CLI installs the project and its dependencies, and reports once the installation has completed successfully.

Step 4: Change directory and restart development server

To get into the project folder, change directory with the following command and start the server to view the newly-created project:

### change dir to project folder
cd my-frontity

The Frontity development server can be started with the following command:

### start development server with npx
npx frontity dev

### starting dev server with yarn
yarn frontity dev

When the development server successfully completes, the project can be viewed at http://localhost:3000 and should display the following screen in the browser:

The screenshot above shows a completed Frontity-powered WordPress site front end with the mars-theme. The site is not connected to our own WordPress site yet, which we will cover in the next section.

Section 2: WordPress site installation

We need a WordPress site for our data source. We can either use an existing site or install a fresh test site on our local machine. For this project, I installed the latest version of WordPress locally with Local and imported the theme test data, which includes test data for block editor styling as well.

In recent versions of WordPress, the REST API is built right into core, so we can check whether our content is publicly exposed by appending /wp-json to the site URL (e.g., http://mytestsite.local/wp-json). If this returns the content in JSON format, we are good to proceed.
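As a quick sanity check (a hedged example using the test site URL above), we can hit that endpoint from the browser console:

// Logs the site's name and URL if the REST API is reachable
fetch("http://mytestsite.local/wp-json")
  .then(res => res.json())
  .then(data => console.log(data.name, data.url))
  .catch(err => console.error("REST API not reachable:", err));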

Select pretty permalinks

One other condition Frontity requires of our WordPress installation is that pretty permalinks (Post name) be activated in Settings > Permalinks.

Section 3: Connecting the Frontity project to WordPress

To connect our WordPress site to our Frontity project, we need to update the frontity.settings.js file:

// change source URL in frontity.settings.js
const settings = {
  ...,
  packages: [
    ...,
    {
      name: "@frontity/wp-source",
      state: {
        source: {
          // Change this url to point to your WordPress site.
          api: "http://frontitytest.local/wp-json"
        }
      }
    }
  ]
}

Take note that while updating the URL to point at our WordPress install, we also need to change the state.source key from url to api, then save the file. Restart the development server, and we will see that the Frontity site is now connected to our own WordPress site.

In the screenshot above, you will notice that the menu items (Nature, Travel, Japan, About Us) are still displayed from the Frontity demo site, which we will fix in the next step.

Step 1: Updating our menu in Frontity site

WordPress treats menu items as private custom post types that are visible only to those who are logged into WordPress. Until version 2 of the WordPress REST API is released, menu items are not exposed as visible endpoints, but registered menus can be extended using the WP-REST-API V2 Menu plugin.

Because menu items change infrequently, the Frontity mars-theme’s menu items are hard-coded in the frontity.settings.js file, stored as state, and then exported to the index.js file. For this demo project, I created the WordPress site menu as described in the Frontity mars-theme, with categories and tags.

Next, let’s add our menu items to frontity-settings.js file as described in the Frontity Mars theme.

// add menu items in frontity-settings.js
{
  name: "@frontity/mars-theme",
  state: {
    theme: {
      menu: [
        ["Home", "/"],
        ["Block", "/category/block/"],
        ["Classic", "/category/classic/"],
        ["Alignments", "/tag/alignment-2/"],
        ["About", "/about/"]
      ],
      featured: {
        showOnList: true,
        showOnPost: true
      }
    }
  }
},

Let’s save our updates and restart the development server as before. We should now see the menu items (Block, Classic, Alignments, About) from our own site in the header section.

The featured object defines whether we’d like to show the featured image on the list view (e.g., the index page) or on a post (e.g., a single page).

Step 2: Frontity project folder structure

Our frontity-demo project (we changed the project folder name from my-frontity) should contain two files, package.json and frontity.settings.js, plus the node_modules/ and packages/mars-theme folders.

### File structure
frontity-demo/
|__ node_modules/
|__ package.json
|__ frontity.settings.js
|__ favicon.ico
|__ packages/
   |__ mars-theme/

A brief descriptions of the files/folders as described in the Frontity doc:

  • node_modules: where the Frontity project dependencies are installed (aren’t meant to be modified).
  • packages/: a folder with mars-theme installed. The theme folder contains a src folder holding custom packages, and perhaps some core packages from Frontity, which can be edited and customized as desired. Everything in Frontity is a package.
  • frontity.settings.js: the most important file, where the basic setup for our app is already populated. These settings are currently the Frontity defaults, but any desired settings and extensions are configured here. For example, the data source URL (e.g., the WordPress site URL) and the packages and libraries required to run the project are defined under the Frontity state package.
  • package.json: the file where the dependencies needed for our app to work are declared.

We’ll get into Frontity theme packages and other dependencies, but in a later article since they warrant a deeper explanation.

Step 3: Modifying styles

Frontity uses the popular CSS-in-JS library Emotion for styling its components. Frontity’s default mars-theme is styled with styled components available from @emotion/styled, whose syntax is very similar to regular CSS. In a later section, we will deep-dive into styling a Frontity project, with a use case example of modifying the entire mars-theme’s styling.

For now, let’s do a quick demonstration of changing the color of our site title and description. The header and description styles are defined as the Title and Description styled components at the bottom of the header.js component. Let’s change the title color to yellow and the description color to a sort of aqua. We’ll see the changes reflected in our site header.
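As a hedged sketch, the bottom of packages/mars-theme/src/components/header.js would end up looking something like this (the exact declarations in your copy of the theme may differ):

import styled from "@emotion/styled";

const Title = styled.h2`
  margin: 0;
  margin-bottom: 16px;
  color: yellow; /* our new title color */
`;

const Description = styled.h4`
  margin: 0;
  color: aquamarine; /* our new description color */
`;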

Section 4: Deploying the site to Vercel

Frontity lists three popular hosting service providers for hosting a Frontity project, including Vercel, Moovweb XDN, and Heroku. However, in practice it appears that most Frontity projects are hosted at Vercel, as Chris writes, “it’s a perfect match for Vercel.“ Frontity highly recommends Vercel and has prepared a handy step-by-step deployment guide.

Step 1: Create a production version of frontity project

While developing our Frontity project, we use the npx frontity dev command. For deployment, we create a production version of the project from the root of our Frontity project:

#! create production version of project
npx frontity build

This creates a build folder “containing both our Frontity project (isomorphic) React app and Frontity (Node.js) server,” whose contents are used by the npx frontity serve command.

Step 2: Create an account at Vercel

First, we create a Vercel account using the signup form, which we can do with our GitHub credentials. Then we log in from our Frontity project’s root folder in the terminal:

#! login to vercel
npx vercel login

Step 3: Create vercel.json file

To deploy our site to Vercel, we need the following vercel.json file at the root of our project:

{ "version": 2, "builds": [ { "src": "package.json", "use": "@frontity/now" } ] } Step 4: Deploying to Vercel

Finally, let’s deploy our project using the vercel command from the root of our project folder:

#! deployment vercel
npx vercel

Next, we answer a few brief deployment-related questions in the terminal, and the project deploys.

Wrapping up

If you have been reading my other articles on headless WordPress sites using the Gatsby framework, you know I had an instructive but frustrating experience, primarily because of the technical skills needed to learn and maintain advanced frameworks as a one-man team. Then I came across the Frontity React framework while reading an article on CSS-Tricks.

As we learned from this and Chris’s article, creating a headless WordPress site with Frontity is pretty simple, all things considered. I am very impressed with its easy setup and streamlined UI, and it appears to be a better option for less tech-savvy users. For example, you get all of the WordPress content without writing a single query.

In a follow-up article, we will do a deep dive on the default Frontity Mars theme and learn how to customize it to make it our own.



The Big Gotcha With Custom Properties

CSS-Tricks - Thu, 08/19/2021 - 9:04am

I’ve seen this confuse more than a handful of people recently, including myself, so I’m making sure it’s written down.

Let’s chuck a couple of custom properties into CSS:

html {
  --color-1: red;
  --color-2: blue;
}

Let’s use them right away to make a background gradient:

html {
  --color-1: red;
  --color-2: blue;
  --bg: linear-gradient(to right, var(--color-1), var(--color-2));
}

Now say there are a couple of divs sitting on the page:

<div></div>
<div class="variation"></div>

Lemme style them up:

div {
  background: var(--bg);
}

That totally works! Hell yes!

Now lemme style that variation. I don’t want it to go from red to blue, I want it to go from green to blue. Easy cheesy, I’ll update red to green:

html {
  --color-1: red;
  --color-2: blue;
  --bg: linear-gradient(to right, var(--color-1), var(--color-2));
}
div {
  background: var(--bg);
}
.variation {
  --color-1: green;
}

Nope! (Sirens blaring, horns honking, farm animals taking cover).

That doesn’t work, friends.

The problem, as best I understand it, is that --bg was never declared on either of the divs. It can use --bg, because it was declared higher up, but by the time it is being used there, the value of it is locked. Just because you change some other property that --bg happens to use at the time it was declared, it doesn’t mean that property goes out searching for places it was used and updating everything that’s used it as a dependency.

Ugh, that explanation doesn’t feel quite right. But it’s the best I got.

The solution? Well, there are a few.

Solution 1: Scope the variable to where you’re using it.

You could do this:

html {
  --color-1: red;
  --color-2: blue;
}
div {
  --bg: linear-gradient(to right, var(--color-1), var(--color-2));
  background: var(--bg);
}
.variation {
  --color-1: green;
}

Now that --bg is declared on both divs, the change to the --color-1 dependency does work.

Solution 2: Comma-separate the selector where you set most of the variables.

Say you do the common thing where you set a bunch of variables at the :root. Then you run into this problem. You can just add extra selectors to that main declaration to make sure you hit the right scope.

html, div {
  --color-1: red;
  --color-2: blue;
  --bg: linear-gradient(to right, var(--color-1), var(--color-2));
}
div {
  background: var(--bg);
}
.variation {
  --color-1: green;
}

In some other perhaps less-contrived example, it might look something like this:

:root,
.button,
.whatever-it-is-a-bandaid {
  --padding-inline: 1rem;
  --padding-block: 1rem;
  --padding: var(--padding-block) var(--padding-inline);
}
.button {
  padding: var(--padding);
}
.button.less-wide {
  --padding-inline: 0.5rem;
}

Solution 3: Blanket Mode

Screw it — put the variables everywhere.

* {
  --access: me;
  --whereever: you;
  --want: to;
  --hogwild: var(--access) var(--whereever);
}

This is not a good plan. I overheard a chat recently in which a medium-sized site experienced a 500ms page rendering delay because every draw to the page needed to compute all the properties. It “works” but it’s one of the rare cases where you can cause legit performance problems with a selector.

Solution 4: Introduce a new “default” property and fallback

All credit here to Stephen Shaw, whose exploration of all this is one of the places I saw this confusion in the first place.

Let’s go back to our first demonstration of this problem:

html {
  --color-1: red;
  --color-2: blue;
  --bg: linear-gradient(to right, var(--color-1), var(--color-2));
}

What we want to do is give ourselves two things:

  1. A way to override the entire background
  2. A way to override a part of the gradient background

So we’re gonna do it this way:

html {
  --color-1: red;
  --color-2: blue;
}
div {
  --bg-default: linear-gradient(to right, var(--color-1), var(--color-2));
  background: var(--bg, var(--bg-default));
}

Notice that we haven’t declared --bg at all. It’s just sitting there waiting for a value, and if it ever gets one, that’s the value that “wins.” But without one, it’ll fall back to our --bg-default. Now…

  1. If I set --color-1 or --color-2, it replaces that part of the gradient as expected (so long as I do it on a selector that touches one of the divs).
  2. Or, I can set --bg to reset the entire background to whatever I want.

Feels like a nice way to handle things.
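For example, with hypothetical class names, the two overrides look like this:

.variation {
  --color-1: green; /* swaps just the first stop of the gradient */
}
.totally-different {
  --bg: lightgray; /* replaces the entire background */
}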


Sometimes there are actual bugs with CSS custom properties. This isn’t one of them. Even though it sort of feels like a bug to me, apparently it’s not. Just one of those things you gotta know about.


Using Nuxt and Supabase for a Multi-User Blogging App

CSS-Tricks - Thu, 08/19/2021 - 4:32am

Nuxt is a JavaScript framework that extends the existing functionality of Vue.js with features like server-side rendering, static page generation, file-based routing, and automatic code splitting among other things.

I’ve been enjoying using frameworks like Nuxt and Next because they offer not only more features, but better performance and a better developer experience than the underlying libraries alone, without having to learn a lot of new concepts. Because of this, many developers are starting to default to these frameworks when creating new projects, as opposed to the single-page application (SPA) ancestors that initially paved the way for their success in the first place.

In the spirit of these abstractions, I’m also a big fan of serverless/managed services that do a lot of the heavy lifting of building out back-end features and functionality for things like authentication, file storage, data, compute, and an API layer. Services and tools like Supabase, Firebase, Netlify, AWS Amplify, and Hasura all enable traditionally front-end developers to extend their personal capabilities and skillsets to add these various important pieces of back-end functionality without having to become back-end developers themselves.

In this tutorial, we’ll be building a multi-user app from scratch with Nuxt and Supabase, while pulling in Tailwind CSS for styling.

GitHub Repo

Why I’ve been liking Supabase

Supabase is an open source alternative to Firebase that lets you create a real-time back-end in minutes. At the time of this writing, Supabase has support for features like file storage, real-time API + Postgres database, authentication, and soon, serverless functions.

Postgres

One of the reasons I like Supabase is that it’s easy to set up. Plus, it offers Postgres as its data layer.

I’ve been building apps for 10 years. One of the biggest limitations that I’ve encountered in NoSQL Backend-as-a-Service (BaaS) offerings is how tough it is for developers to scale their apps and be successful. With NoSQL, it is much harder to model data, do migrations, and modify data access patterns after you’ve started to build your app. Enabling things like relationships is also much tougher to grok in the NoSQL world.

Supabase leverages Postgres to enable an extremely rich set of performant querying capabilities out of the box without having to write any additional back-end code. Real time is also baked in by default.

Auth

It’s really easy to set up authorization rules on specific tables, enabling fine-grained access controls without a lot of effort.

When you create a project, Supabase automatically gives you a Postgres SQL database, user authentication, and an API endpoint. From there you can easily implement additional features, like real-time subscriptions and file storage.

Multiple authentication providers

Another thing I like about Supabase is the variety of authentication providers that come ready to use with it right out of the box. Supabase enables all of the following types of authentication mechanisms:

  • Username and password
  • Magic email link
  • Google
  • Facebook
  • Apple
  • Discord
  • GitHub
  • Twitter
  • Azure
  • GitLab
  • Bitbucket
The app ingredients

Most applications, while having varying characteristics in their implementation details, often leverage a similar set of functionality tied together. These usually are:

  • user authentication
  • client-side identity management
  • routing
  • file storage
  • database
  • API layer
  • API authorization

Understanding how to build a full-stack app that implements all of these features lays the ground for developers to then continue building out many other different types of applications that rely on this same or similar set of functionality. The app that we’re building in this tutorial implements most of these features.

Unauthenticated users can view others posts in a list and then view the post details by clicking and navigating to that individual post. Users can sign up with their email address and receive a magic link to sign in. Once they are signed in, they are able to view links to create and edit their own posts as well. We will also provide a profile view for users to see their user profile and sign out.

Now that we’ve reviewed the app, let’s start building!

Starting our Supabase app

The first thing we’ll need to do is create the Supabase app. Head over to Supabase.io and click Start Your Project. Authenticate and create a new project under the organization that is provided to you in your account.

Give the project a Name and Password and click Create new project. It will take approximately two minutes for your project to spin up.

Creating the table

Once the project is ready, we create the table for our app along with all of the permissions we’ll need. To do so, click on the SQL link in the left-hand menu.

Click on Query-1 under Open queries and paste the following SQL query into the provided text area and click Run:

CREATE TABLE posts (
  id bigint generated by default as identity primary key,
  user_id uuid references auth.users not null,
  user_email text,
  title text,
  content text,
  inserted_at timestamp with time zone default timezone('utc'::text, now()) not null
);

alter table posts enable row level security;

create policy "Individuals can create posts." on posts for
  insert with check (auth.uid() = user_id);

create policy "Individuals can update their own posts." on posts for
  update using (auth.uid() = user_id);

create policy "Individuals can delete their own posts." on posts for
  delete using (auth.uid() = user_id);

create policy "Posts are public." on posts for
  select using (true);

This creates the posts table for the database of our app. It also enables some row-level permissions on the database:

  • Any user can query for a list of posts or individual posts.
  • Only signed in users can create a post. Authorization rules state that their user ID must match the user ID passed into the arguments.
  • Only the owner of a post can update or delete it.

Now, if we click on the Table editor link, we should see our new table created with the proper schema.

That’s all we need for the Supabase project! We can move on to our local development environment to begin building out the front end with Nuxt.

Project setup

Let’s get started building the front end. Open up a terminal in an empty directory and create the Nuxt app:

yarn create nuxt-app nuxt-supabase

Here, we’re prompted with the following questions:

? Project name: nuxt-supabase
? Programming language: JavaScript
? Package manager: (your preference)
? UI framework: Tailwind CSS
? Nuxt.js modules: n/a
? Linting tools: n/a
? Testing framework: None
? Rendering mode: Universal (SSR / SSG)
? Deployment target: Server (Node.js hosting)
? Development tools: n/a
? What is your GitHub username? (your username)
? Version control system: Git

Once the project has been created, change into the new directory:

cd nuxt-supabase

Configuration and dependencies

Now that the project has been initialized, we need to install some dependencies for both Supabase, as well as Tailwind CSS. We also need to configure the Nuxt project to recognize and use these tools.

Tailwind CSS

Let’s start with Tailwind. Install the Tailwind dependencies using either npm or Yarn:

npm install -D tailwindcss@latest postcss@latest autoprefixer@latest @tailwindcss/typography

Next, run the following command to create a tailwind.config.js file:

npx tailwindcss init

Next, add a new folder named assets/css to the project directory and a file in it named tailwind.css. Here’s some code we can throw in there to import what we need from Tailwind:

/* assets/css/tailwind.css */
@tailwind base;
@tailwind components;
@tailwind utilities;

Next, add the @nuxtjs/tailwindcss module to the buildModules section of the nuxt.config.js file (this may have already been updated by the Tailwind CLI):

buildModules: [
  '@nuxtjs/tailwindcss'
],

Tailwind is now set up and we can begin using the utility classes directly in our HTML! 🎉

Markdown editor and parser

Next, let’s install and configure a Markdown editor and parser that allows users to write blog posts with formatting and rich text editing features. We’re using marked along with the Vue SimpleMDE library to make this happen.

npm install vue-simplemde marked

Next, we need to define a new Vue component to use the new Markdown editor in our HTML. So, create a new plugins folder and add a new file in it named simplemde.js. Here’s the code we need in there to import what we need:

/* plugins/simplemde.js */
import Vue from 'vue'
import VueSimplemde from 'vue-simplemde'
import 'simplemde/dist/simplemde.min.css'

Vue.component('vue-simplemde', VueSimplemde)

Next, open nuxt.config.js and update the css globals so that they include the simplemde CSS as well as the plugins array:

css: [
  'simplemde/dist/simplemde.min.css',
],
plugins: [
  { src: '~plugins/simplemde.js', mode: 'client' },
],

Now, we can use vue-simplemde directly in our HTML any time we’d like to use the component!
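For example, a minimal (hypothetical) usage inside a component template, wrapped in client-only since the editor only runs in the browser:

<template>
  <client-only>
    <!-- binds the editor's Markdown text to a `content` data property -->
    <vue-simplemde v-model="content" />
  </client-only>
</template>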

Configuring Supabase

The last thing we need to configure is the Supabase client. This is the API we use to interact with the Supabase back end for authentication and data access.

First, install the Supabase JavaScript library:

npm install @supabase/supabase-js

Next, let’s create another plugin that injects a $supabase variable into the scope of our app so we can access it any time and anywhere we need it. We need to get the API endpoint and public API key for our project, which we can get from the Supabase dashboard in the Settings tab.

Click the Settings icon in the Supabase menu, then select API to locate the information.

Now let’s create a new client.js file in the plugins folder with the following code in there:

/* plugins/client.js */
import { createClient } from '@supabase/supabase-js'

const supabase = createClient(
  "https://yoururl.supabase.co",
  "your-api-key"
)

export default (_, inject) => {
  inject('supabase', supabase)
}

Now we can update the plugins array in nuxt.config.js with the new plugin:

plugins: [
  { src: '~plugins/client.js' },
  { src: '~plugins/simplemde.js', mode: 'client' },
],

That’s the last thing we need to do to set up our project. Now we can start writing some code!

Creating the layout

Our app needs a good layout component to hold the navigation as well as some basic styling that will be applied to all of the other pages.

To use a layout, Nuxt looks for a layouts directory for a default layout that’s applied to all pages. We can override layouts on a page-by-page basis if we need to customize something specific. We’re sticking to the default layout for everything in this tutorial for the sake of simplicity.

We need that layouts folder, so add it to the project directory and add a default.vue file in it with the following markup for the default layout:

<!-- layouts/default.vue -->
<template>
  <div>
    <nav class="p-6 border-b border-gray-300">
      <NuxtLink to="/" class="mr-6">
        Home
      </NuxtLink>
      <NuxtLink to="/profile" class="mr-6">
        Profile
      </NuxtLink>
      <NuxtLink to="/create-post" class="mr-6" v-if="authenticated">
        Create post
      </NuxtLink>
      <NuxtLink to="/my-posts" class="mr-6" v-if="authenticated">
        My Posts
      </NuxtLink>
    </nav>
    <div class="py-8 px-16">
      <Nuxt />
    </div>
  </div>
</template>

<script>
export default {
  data: () => ({ authenticated: false, authListener: null }),
  async mounted() {
    /* When the app loads, check to see if the user is signed in */
    /* also create a listener for when someone signs in or out */
    const { data: authListener } = this.$supabase.auth.onAuthStateChange(
      () => this.checkUser()
    )
    this.authListener = authListener
    this.checkUser()
  },
  methods: {
    async checkUser() {
      const user = await this.$supabase.auth.user()
      if (user) {
        this.authenticated = true
      } else {
        this.authenticated = false
      }
    }
  },
  beforeUnmount() {
    this.authListener?.unsubscribe()
  }
}
</script>

The layout has two links that are shown by default, and two others that are only displayed if a user is signed in.

To fetch the signed in user at any time (or to see if they are authenticated), we are using the supabase.auth.user() method. If a user is signed in, their profile is returned. If they are not, the return value is null.

The home page

Next, let’s update the home page. When the user opens the app, we want to show a list of posts and allow them to click on and navigate to read the post. If there are no posts, we show them a message instead.

In this component, we’re making our first call to the Supabase back end to fetch data — in this case, querying for an array that contains all posts. Notice how the Supabase API interacts with your data, which, to me, is very intuitive:

/* example of how to fetch data from Supabase */
const { data: posts, error } = await this.$supabase
  .from('posts')
  .select('*')

Supabase offers filters and modifiers that make it straightforward to implement a rich set of data access patterns and selection sets on your data. For instance, if we want to update that last query to only return posts with a specific user ID, we could do this:

const { data: posts, error } = await this.$supabase
  .from('posts')
  .select('*')
  .filter('user_id', 'eq', 'some-user-id')

Update the template file for the homepage, pages/index.vue, with the following markup and query for displaying a loop of posts:

<!-- pages/index.vue -->
<template>
  <main>
    <div v-for="post in posts" :key="post.id">
      <NuxtLink :to="`/posts/${post.id}`">
        <div class="cursor-pointer border-b border-gray-300 mt-8 pb-4">
          <h2 class="text-xl font-semibold">{{ post.title }}</h2>
          <p class="text-gray-500 mt-2">Author: {{ post.user_email }}</p>
        </div>
      </NuxtLink>
    </div>
    <h1 v-if="loaded && !posts.length" class="text-2xl">No posts...</h1>
  </main>
</template>

<script>
export default {
  async created() {
    const { data: posts, error } = await this.$supabase
      .from('posts')
      .select('*')
    this.posts = posts
    this.loaded = true
  },
  data() {
    return {
      loaded: false,
      posts: []
    }
  }
}
</script>

User profile

Now let’s create the profile page with a new profile.vue file in the pages directory, with the following code:

<!-- pages/profile.vue -->
<template>
  <main class="m-auto py-20" style="width: 700px">
    <!-- if the user is not signed in, show the sign in form -->
    <div v-if="!profile && !submitted" class="flex flex-col">
      <h2 class="text-2xl">Sign up / sign in</h2>
      <input v-model="email" placeholder="Email" class="border py-2 px-4 rounded mt-4" />
      <button
        @click="signIn"
        class="mt-4 py-4 px-20 w-full bg-blue-500 text-white font-bold"
      >Submit</button>
    </div>
    <!-- if the user is signed in, show them their profile -->
    <div v-if="profile">
      <h2 class="text-xl">Hello, {{ profile.email }}</h2>
      <p class="text-gray-400 my-3">User ID: {{ profile.id }}</p>
      <button
        @click="signOut"
        class="mt-4 py-4 px-20 w-full bg-blue-500 text-white font-bold"
      >Sign Out</button>
    </div>
    <div v-if="submitted">
      <h1 class="text-xl text-center">Please check your email to sign in</h1>
    </div>
  </main>
</template>

<script>
export default {
  data: () => ({
    profile: null,
    submitted: false,
    email: ''
  }),
  methods: {
    async signOut() {
      /* signOut deletes the user's session */
      await this.$supabase.auth.signOut()
      this.profile = null
    },
    async signIn() {
      /* signIn sends the user a magic link */
      const { email } = this
      if (!email) return
      const { error, data } = await this.$supabase.auth.signIn({ email })
      this.submitted = true
    },
  },
  async mounted() {
    /* when the component loads, fetch the user's profile */
    const profile = await this.$supabase.auth.user()
    this.profile = profile
  }
}
</script>

In the template, we have a few different view states:

  1. If the user is not signed in, show them the sign in form.
  2. If the user is signed in, show them their profile information and a sign out button.
  3. If the user has submitted the sign in form, show them a message to check their email.

This app utilizes magic link authentication because of its simplicity. There is no separate process for signing up and signing in. All the user needs to do is submit their email address and they are sent a link to sign in. Once they click on the link, a session is set in their browser by Supabase, and they are redirected to the app.

Creating a post

Next, let’s create the page with the form that allows users to create and save new posts. That means a new create-post.vue file in the pages directory with some code for the post editor:

<!-- pages/create-post.vue -->
<template>
  <main>
    <div id="editor">
      <h1 class="text-3xl font-semibold tracking-wide mt-6">Create new post</h1>
      <input
        name="title"
        placeholder="Title"
        v-model="post.title"
        class="border-b pb-2 text-lg my-4 focus:outline-none w-full font-light text-gray-500 placeholder-gray-500 y-2"
      />
      <client-only>
        <vue-simplemde v-model="post.content"></vue-simplemde>
      </client-only>
      <button
        type="button"
        class="mb-4 w-full bg-blue-500 text-white font-semibold px-8 py-4"
        @click="createPost"
      >Create Post</button>
    </div>
  </main>
</template>

<script>
export default {
  data() {
    return {
      post: {}
    }
  },
  methods: {
    async createPost() {
      const { title, content } = this.post
      if (!title || !content) return
      const user = this.$supabase.auth.user()
      const { data } = await this.$supabase
        .from('posts')
        .insert([
          { title, content, user_id: user.id, user_email: user.email }
        ])
        .single()
      this.$router.push(`/posts/${data.id}`)
    }
  }
}
</script>

This code is using the vue-simplemde component we registered as a plugin in an earlier step! It is wrapped in a client-only component so that it renders only on the client side — vue-simplemde is a client-side-only plugin, so there’s no need for it to run on the server.

The createPost function creates a new post in the Supabase database, and then redirects us to view the individual post in a page we have yet to create. Let’s create it now!

Dynamic routes for viewing individual posts

To create a dynamic route in Nuxt, we prefix the name of the .vue file (or the name of the directory) with an underscore.

If a user navigates to, say, /posts/123, we want to use the post ID 123 to fetch the data for the post. In the app, we can then access the route parameters in the page by referencing route.params.

So, let’s add yet another new folder, pages/posts, with a new file in it named _id.vue:

<!-- pages/posts/_id.vue --> <template> <main> <div> <h1 class="text-5xl mt-4 font-semibold tracking-wide">{{ post.title }}</h1> <p class="text-sm font-light my-4">by {{ post.user_email }}</p> <div class="mt-8 prose" > <div v-html="compiledMarkdown"></div> </div> </div> </main> </template> <script> import marked from 'marked' export default { computed: { compiledMarkdown: function () { return marked(this.post.content, { sanitize: true }) } }, async asyncData({ route, $supabase }) { /* use the ID from the route parameter to fetch the post */ const { data: post } = await $supabase .from('posts') .select() .filter('id', 'eq', route.params.id) .single() return { post } } } </script>

When the page loads, the route parameter is used to fetch the corresponding post.

Managing posts

The last piece of functionality we want is to allow users to edit and delete their own posts. In order to do that, we should provide them with a page that displays their own posts instead of everyone’s.

That’s right, we need another new file, this time called my-posts.vue, in the pages directory. It fetches only the posts of the currently authenticated user:

<!-- pages/my-posts.vue --> <template> <main> <div v-for="post in posts" :key="post.id"> <div class="cursor-pointer border-b border-gray-300 mt-8 pb-4"> <h2 class="text-xl font-semibold">{{ post.title }}</h2> <p class="text-gray-500 mt-2">Author: {{ post.user_email }}</p> <NuxtLink :to="`/edit-post?id=${post.id}`" class="text-sm mr-4 text-blue-500">Edit Post</NuxtLink> <NuxtLink :to="`/posts/${post.id}`" class="text-sm mr-4 text-blue-500">View Post</NuxtLink> <button class="text-sm mr-4 text-red-500" @click="deletePost(post.id)" >Delete Post</button> </div> </div> <h1 v-if="loaded && !posts.length" class="text-2xl">No posts...</h1> </main> </template> <script> export default { async created() { this.fetchPosts() }, data() { return { posts: [], loaded: false } }, methods: { async fetchPosts() { const user = this.$supabase.auth.user() if (!user) return /* fetch only the posts for the signed in user */ const { data: posts, error } = await this.$supabase .from('posts') .select('*') .filter('user_id', 'eq', user.id) this.posts = posts this.loaded = true }, async deletePost(id) { await this.$supabase .from('posts') .delete() .match({ id }) this.fetchPosts() } } } </script>

The query on this page fetches the posts using a filter, passing in the ID of the signed-in user. There is also a button for deleting a post and a button for editing a post. If a post is deleted, we then refetch the posts to update the UI. If a user wants to edit a post, we redirect them to the edit-post.vue page that we’re creating next.

Editing a post

The last page we want to create allows users to edit a post. This page is very similar to the create-post.vue page, the main difference being we fetch the post using the id retrieved from the query string. So, create that file and drop it into the pages folder with this code:

<!-- pages/edit-post.vue --> <template> <main> <div id="editor"> <h1 class="text-3xl font-semibold tracking-wide mt-6">Edit post</h1> <input name="title" placeholder="Title" v-model="post.title" class="border-b pb-2 text-lg my-4 focus:outline-none w-full font-light text-gray-500 placeholder-gray-500 y-2" /> <client-only> <vue-simplemde v-model="post.content"></vue-simplemde> </client-only> <button type="button" class="mb-4 w-full bg-blue-500 text-white font-semibold px-8 py-4" @click="editPost" >Edit Post</button> </div> </main> </template> <script> export default { async created() { /* when the page loads, fetch the post using the id query parameter */ const id = this.$route.query.id const { data: post } = await this.$supabase .from('posts') .select() .filter('id', 'eq', id) .single() if (!post) this.$router.push('/') this.post = post }, data() { return { post: {} } }, methods: { async editPost() { /* when the user edits a post, redirect them back to their posts */ const { title, content } = this.post if (!title || !content) return await this.$supabase .from('posts') .update({ title, content }) .match({ id: this.post.id }) this.$router.push('/my-posts') } } } </script>
Testing it out

That’s all of the code, so we should be able to test it out! We can test locally with the following command:

npm run dev

When the app loads, sign up for a new account using the magic link enabled in the profile page. Once you’ve signed up, test everything out by adding, editing, and deleting posts.

Wrapping up

Pretty nice, right? This is the sort of ease and simplicity I was talking about at the beginning of this tutorial. We spun up a new app with Supabase, and with a few dependencies, a little configuration, and a handful of templates, we made a fully-functional app that lets folks create and manage blog posts — complete with a back end that supports authentication, identity management, and routing!

What we have is baseline functionality, but you can probably see what a high ceiling there is to do more here. And I hope you do! With all the right ingredients in place, you can take what we made and extend it with your own enhancements and styling.

GitHub Repo

The post Using Nuxt and Supabase for a Multi-User Blogging App appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Implementing a single GraphQL across multiple data sources

Css Tricks - Thu, 08/19/2021 - 4:30am

(This is a sponsored post.)

In this article, we will discuss how we can apply schema stitching across multiple Fauna instances. We will also discuss how to combine other GraphQL services and data sources with Fauna in one graph.

Get the code

What is Schema Stitching?

Schema stitching is the process of creating a single GraphQL API from multiple underlying GraphQL APIs.

Where is it useful?

While building large-scale applications, we often break down various functionalities and business logic into microservices, which ensures the separation of concerns. However, there will come a time when our client applications need to query data from multiple sources. The best practice is to expose one unified graph to all your client applications, but this can be challenging: we do not want to end up with a tightly coupled, monolithic GraphQL server. If you are using Fauna, each database has its own native GraphQL API. Ideally, we would want to leverage that native GraphQL as much as possible and avoid writing application-layer code. But if we are using multiple databases, our front-end application would have to connect to multiple GraphQL instances. Such an arrangement creates tight coupling, which we want to avoid in favor of one unified GraphQL server.

To remedy these problems, we can use schema stitching. Schema stitching will allow us to combine multiple GraphQL services into one unified schema. In this article, we will discuss

  1. Combining multiple Fauna instances into one GraphQL service
  2. Combining Fauna with other GraphQL APIs and data sources
  3. Building a serverless GraphQL gateway with AWS Lambda
Combining multiple Fauna instances into one GraphQL service

First, let’s take a look at how we can combine multiple Fauna instances into one GraphQL service. Imagine we have three Fauna database instances Product, Inventory, and Review. Each is independent of the other. Each has its graph (we will refer to them as subgraphs). We want to create a unified graph interface and expose it to the client applications. Clients will be able to query any combination of the downstream data sources.

We will call the unified graph interface our gateway service. Let’s go ahead and write this service.

We’ll start with a fresh node project. We will create a new folder, navigate inside it, and initialize a new node app with the following commands.

mkdir my-gateway cd my-gateway npm init --yes

Next, we will create a simple express GraphQL server. So let’s go ahead and install the express and express-graphql packages with the following command.

npm i express express-graphql graphql --save
Creating the gateway server

We will create a file called gateway.js. This is our main entry point to the application. We will start by creating a very simple GraphQL server.

const express = require('express'); const { graphqlHTTP } = require('express-graphql'); const { buildSchema } = require('graphql'); // Construct a schema, using GraphQL schema language const schema = buildSchema(` type Query { hello: String } `); // The root provides a resolver function for each API endpoint const rootValue = { hello: () => 'Hello world!', }; const app = express(); app.use( '/graphql', graphqlHTTP((req) => ({ schema, rootValue, graphiql: true, })), ); app.listen(4000); console.log('Running a GraphQL API server at http://localhost:4000/graphql');

In the code above, we created a bare-bones express-graphql server with a sample query and a resolver. Let’s test our app by running the following command.

node gateway.js

Navigate to http://localhost:4000/graphql and you will be able to interact with the GraphQL playground.

Creating Fauna instances

Next, we will create three Fauna databases. Each of them will act as a GraphQL service. Let’s head over to fauna.com and create our databases. I will name them Product, Inventory, and Review.

Once the databases are created we will generate admin keys for them. These keys are required to connect to our GraphQL APIs.

Let’s create three distinct GraphQL schemas and upload them to the respective databases. Here’s how our schemas will look.

# Schema for Inventory database type Inventory { name: String description: String sku: Float availableLocation: [String] } # Schema for Product database type Product { name: String description: String price: Float } # Schema for Review database type Review { email: String comment: String rating: Float }

Head over to the respective databases, select GraphQL from the sidebar, and import the schemas for each database.

Now we have three GraphQL services running on Fauna. We can go ahead and interact with these services through the GraphQL playground inside Fauna. Feel free to enter some dummy data if you are following along. It will come in handy later while querying multiple data sources.
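For example, Fauna auto-generates a create mutation for each type in the schema, so seeding a product from the playground can look something like this (the field values here are made up):

mutation {
  createProduct(data: {
    name: "Mechanical keyboard"
    description: "Tenkeyless, hot-swappable switches"
    price: 129.99
  }) {
    _id
    name
  }
}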

Setting up the gateway service

Next, we will combine these into one graph with schema stitching. To do so, we return to our gateway server’s gateway.js file. We will be using a couple of libraries from graphql-tools to stitch the graphs.

Let’s go ahead and install these dependencies on our gateway server.

npm i @graphql-tools/schema @graphql-tools/stitch @graphql-tools/wrap cross-fetch --save

In our gateway, we are going to create a new generic function called makeRemoteExecutor. It is a factory function that returns an asynchronous function, which makes the GraphQL API call.

// gateway.js const express = require('express'); const { graphqlHTTP } = require('express-graphql'); const { buildSchema } = require('graphql'); function makeRemoteExecutor(url, token) { return async ({ document, variables }) => { const query = print(document); const fetchResult = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token }, body: JSON.stringify({ query, variables }), }); return fetchResult.json(); } } // Construct a schema, using GraphQL schema language const schema = buildSchema(` type Query { hello: String } `); // The root provides a resolver function for each API endpoint const rootValue = { hello: () => 'Hello world!', }; const app = express(); app.use( '/graphql', graphqlHTTP(async (req) => { return { schema, rootValue, graphiql: true, } }), ); app.listen(4000); console.log('Running a GraphQL API server at http://localhost:4000/graphql');

As you can see above, makeRemoteExecutor takes two arguments. The url argument specifies the remote GraphQL endpoint and the token argument specifies the authorization token.

We will create another function called makeGatewaySchema. In this function, we will make the proxy calls to the remote GraphQL APIs using the previously created makeRemoteExecutor function.

// gateway.js const express = require('express'); const { graphqlHTTP } = require('express-graphql'); const { introspectSchema } = require('@graphql-tools/wrap'); const { stitchSchemas } = require('@graphql-tools/stitch'); const { fetch } = require('cross-fetch'); const { print } = require('graphql'); function makeRemoteExecutor(url, token) { return async ({ document, variables }) => { const query = print(document); const fetchResult = await fetch(url, { method: 'POST', headers: { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' + token }, body: JSON.stringify({ query, variables }), }); return fetchResult.json(); } } async function makeGatewaySchema() { const reviewExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQZPUejACQ2xuvfi50APAJ397hlGrTjhdXVta'); const productExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQbI02HACQwTaUF9iOBbGC3fatQtclCOxZNfp'); const inventoryExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQbI02HACQwTaUF9iOBbGC3fatQtclCOxZNfp'); return stitchSchemas({ subschemas: [ { schema: await introspectSchema(reviewExecutor), executor: reviewExecutor, }, { schema: await introspectSchema(productExecutor), executor: productExecutor }, { schema: await introspectSchema(inventoryExecutor), executor: inventoryExecutor } ], typeDefs: 'type Query { heartbeat: String! }', resolvers: { Query: { heartbeat: () => 'OK' } } }); } // ...

We are using the makeRemoteExecutor function to make our remote GraphQL executors. We have three remote executors here, one each for the Product, Inventory, and Review services. As this is a demo application, I have hardcoded the admin API keys from Fauna directly in the code. Avoid doing this in a real application; these secrets should not be exposed in code at any time. Please use environment variables or a secrets manager to pull these values at runtime.

As you can see from the code above, we are returning the output of the stitchSchemas function from @graphql-tools. The function has an argument property called subschemas. In this property, we can pass in an array of all the subgraphs we want to fetch and combine. We are also using a function called introspectSchema from graphql-tools. It runs an introspection query through the given executor and builds a local copy of the remote schema; the executor itself is then used to proxy requests from the gateway to the downstream services.

You can learn more about these functions on the graphql-tools documentation site.

Finally, we need to call the makeGatewaySchema. We can remove the previously hardcoded schema from our code and replace it with the stitched schema.

// gateway.js // ... const app = express(); app.use( '/graphql', graphqlHTTP(async (req) => { const schema = await makeGatewaySchema(); return { schema, context: { authHeader: req.headers.authorization }, graphiql: true, } }), ); // ...

When we restart our server and go back to localhost we will see that queries and mutations from all Fauna instances are available in our GraphQL playground.

Let’s write a simple query that will fetch data from all Fauna instances simultaneously.
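Something like this should do it, assuming the default find-by-ID queries that Fauna generates from our schemas (the IDs are placeholders for documents you created earlier):

query {
  # each field below resolves against a different Fauna database
  findProductByID(id: "YOUR_PRODUCT_ID") { name price }
  findInventoryByID(id: "YOUR_INVENTORY_ID") { sku availableLocation }
  findReviewByID(id: "YOUR_REVIEW_ID") { rating comment }
  # and this one resolves in the gateway itself
  heartbeat
}

One request, three databases, plus the gateway-level heartbeat field.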

Stitch third party GraphQL APIs

We can stitch third-party GraphQL APIs into our gateway as well. For this demo, we are going to stitch the SpaceX open GraphQL API with our services.

The process is the same as above. We create a new executor and add it to our sub-graph array.

// ... async function makeGatewaySchema() { const reviewExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdRZVpACRMEEM1GKKYQxH2Qa4TzLKusTW2gN'); const productExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdSdXiACRGmgJgAEgmF_ZfO7iobiXGVP2NzT'); const inventoryExecutor = await makeRemoteExecutor('https://graphql.fauna.com/graphql', 'fnAEQdR0kYACRWKJJUUwWIYoZuD6cJDTvXI0_Y70'); const spacexExecutor = await makeRemoteExecutor('https://api.spacex.land/graphql/') return stitchSchemas({ subschemas: [ { schema: await introspectSchema(reviewExecutor), executor: reviewExecutor, }, { schema: await introspectSchema(productExecutor), executor: productExecutor }, { schema: await introspectSchema(inventoryExecutor), executor: inventoryExecutor }, { schema: await introspectSchema(spacexExecutor), executor: spacexExecutor } ], typeDefs: 'type Query { heartbeat: String! }', resolvers: { Query: { heartbeat: () => 'OK' } } }); } // ...
Deploying the gateway

To make this a true serverless solution, we should deploy our gateway to a serverless function. For this demo, I am going to deploy the gateway to an AWS Lambda function. Netlify and Vercel are two other alternatives to AWS Lambda.

I am going to use the serverless framework to deploy the code to AWS. Let’s install the dependencies for it.

npm i -g serverless # if you don't have the serverless framework installed already npm i serverless-http body-parser --save

Next, we need to make a configuration file called serverless.yaml

# serverless.yaml
service: my-graphql-gateway
provider:
  name: aws
  runtime: nodejs14.x
  stage: dev
  region: us-east-1
functions:
  app:
    handler: gateway.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'

Inside the serverless.yaml we define information such as the cloud provider, runtime, and the path to our lambda function. Feel free to take a look at the official documentation for the serverless framework for more in-depth information.

We will need to make some minor changes to our code before we can deploy it to AWS.

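Here is a minimal sketch of how gateway.js changes, assuming the standard serverless-http wrapper (only the new pieces are shown; makeRemoteExecutor and makeGatewaySchema stay exactly as before):

// gateway.js (Lambda-ready)
const express = require('express');
const bodyParser = require('body-parser');
const serverless = require('serverless-http');
const { graphqlHTTP } = require('express-graphql');

/* makeRemoteExecutor and makeGatewaySchema as before */

const app = express();
app.use(bodyParser.json());
app.use(
  '/graphql',
  graphqlHTTP(async (req) => {
    const schema = await makeGatewaySchema();
    return { schema, graphiql: true };
  }),
);

// wrapping the express app produces a Lambda-compatible handler
// (the export name matches "gateway.handler" in serverless.yaml)
module.exports.handler = serverless(app);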

Notice the changes above. We added the body-parser library to parse the JSON body, and we added the serverless-http library. Wrapping the express app instance with the serverless function takes care of all the underlying Lambda configuration.

We can run the following command to deploy this to AWS Lambda.

serverless deploy

This will take a minute or two to deploy. Once the deployment is complete we will see the API URL in our terminal.

Make sure you put /graphql at the end of the generated URL (e.g., https://gy06ffhe00.execute-api.us-east-1.amazonaws.com/dev/graphql).

There you have it. We have achieved complete serverless nirvana 😉. We are now running three Fauna instances independent of each other, stitched together with a GraphQL gateway.

Feel free to check out the code for this article here.

Conclusion

Schema stitching is one of the most popular solutions for breaking down monoliths and achieving separation of concerns between data sources. However, there are other solutions, such as Apollo Federation, which works in much the same way. If you would like to see an article like this about Apollo Federation, please let us know in the comment section. That’s it for today, see you next time.

The post Implementing a single GraphQL across multiple data sources appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

“We had 90% unused CSS because everybody was afraid to touch the old stuff”

Css Tricks - Wed, 08/18/2021 - 12:16pm

Over at the JS Party podcast:

[Kent C. Dodds]: […] ask anybody who’s done regular, old CSS and they’ll tell you that “I don’t know if it’s okay for me to change this, so I’m gonna duplicate it.” And now we’ve got – at PayPal (this is not made up) we had 90% unused CSS on the project I was using, because everybody was afraid to touch the old stuff. So we just duplicated something new and called it something else. And you might just say that we’re bad at CSS, but maybe CSS was bad at us, I don’t know… [laughter]

[Emma Bostian]: Well, that’s why styled-components and CSS-in-JS was so pivotal; it was like “Oh, hey, we can actually encapsulate all of this logic inside the component that it’s touching and don’t have to worry about bleeding code anymore.” It’s so much easier to delete things, and add things, and all of those things.

[Kent C. Dodds]: Yeah, you’re precisely right. That was the problem that those things were made to solve.

I’ve heard this exact story before several times, usually from large companies. Lots of developers, typical developer turnover… nobody knows what CSS is actually used and what isn’t because that is a very hard problem.

That’s one of the reasons I sometimes like component-based-styling solutions (CSS-in-JS, if you’re nasty). Not because I love complex tooling. Not because I like JavaScript syntax better than CSS. Because of the co-location of styles and componentry. Because nobody is afraid of the styles anymore — they are tightly coupled to what they are styling. It’s not needed on every project, but if you’re building with components anyway (an awfully nice way to architect front-ends that doesn’t require JavaScript), you might as well style this way.

For this reason, I’m excited that “scoped styles” are making a bit of a comeback in standards discussions.

I remember an ancient idea (that maybe even shipped in browsers for a minute?) where you’d just chuck a <style scoped> block right in the HTML and whatever the parent was, the styles were scoped to that parent. That was so cool, I wish we could have that again.

But it seems like the newer stuff (here’s Miriam’s original proposal) has some more clever stuff that that basic concept doesn’t cover — like being able to set a lower-boundary in addition to an upper-boundary, making it possible to scope “donut-shaped” styles in the DOM (a Nicole Sullivan term). Whatever happens, shadow DOM-free scoped styles with zero tooling is huge.

The post “We had 90% unused CSS because everybody was afraid to touch the old stuff” appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Native Search vs. Jetpack Instant Search in Headless WordPress With Gatsby

Css Tricks - Wed, 08/18/2021 - 4:46am

Have you already tried using WordPress headlessly with Gatsby? If you haven’t, you might check this article about the new Gatsby source plugin for WordPress; gatsby-source-wordpress is the official source plugin introduced in March 2021 as a part of the Gatsby 3 release. It significantly improves the integration with WordPress. Also, the WordPress plugin WPGraphQL providing the GraphQL API is now available via the official WordPress repository.

With stable and maintained tools, developing Gatsby websites powered by WordPress becomes easier and more interesting. I got myself involved in this field and, together with Alexandra Spalato, co-founded and recently launched Gatsby WP Themes — a niche marketplace for developers building WordPress-powered sites with Gatsby. In this article, I would love to share my insights and, in particular, discuss the search functionality.

Search does not come out of the box, but there are many options to consider. I will focus on two distinct possibilities — taking advantage of WordPress native search (WordPress search query) vs. using Jetpack Instant Search.

Getting started

Let’s start by setting up a WordPress-powered Gatsby website. For the sake of simplicity, I will follow the getting started instructions and install the gatsby-starter-wordpress-blog starter.

gatsby new gatsby-wordpress-w-search https://github.com/gatsbyjs/gatsby-starter-wordpress-blog

This simple, bare-bones starter creates routes exclusively for individual posts and blog pages. But we can keep it that simple here. Let’s imagine that we don’t want to include pages within the search results.

For the moment, I will leave the WordPress source website as it is and pull the content from the starter author’s WordPress demo. If you use your own source, just remember that there are two plugins required on the WordPress end (both available via the plugin repository):

  • WPGraphQL – a plugin that runs a GraphQL server on the WordPress instance
  • WPGatsby – a plugin that modifies the WPGraphQL schema in Gatsby-specific ways (it also adds some mechanism to optimize the build process)
Setting up Apollo Client

With Gatsby, we usually either use the data from queries run on page creation (page queries) or call the useStaticQuery hook. The latter is available in components and does not allow dynamic query parameters; its role is to retrieve GraphQL data at build time. Neither of those query solutions works for a user-initiated search. Instead, we will ask WordPress to run a search query and send us back the results. Can we send a GraphQL search query? Yes! WPGraphQL provides search; you can search posts in WPGraphQL like so:

posts(where: {search: "gallery"}) { nodes { id title content } }

In order to communicate directly with our WPGraphQL API, we will install Apollo Client; it takes care of requesting and caching the data as well as updating our UI components.

yarn add @apollo/client cross-fetch

To access Apollo Client anywhere in our component tree, we need to wrap our app with ApolloProvider. Gatsby does not expose the App component that wraps around the whole application. Instead, it provides the wrapRootElement API. It’s a part of the Gatsby Browser API and needs to be implemented in the gatsby-browser.js file located at the project’s root.

// gatsby-browser.js import React from "react" import fetch from "cross-fetch" import { ApolloClient, HttpLink, InMemoryCache, ApolloProvider } from "@apollo/client" const cache = new InMemoryCache() const link = new HttpLink({ /* Set the endpoint for your GraphQL server, (same as in gatsby-config.js) */ uri: "https://wpgatsbydemo.wpengine.com/graphql", /* Use fetch from cross-fetch to provide replacement for server environment */ fetch }) const client = new ApolloClient({ link, cache, }) export const wrapRootElement = ({ element }) => ( <ApolloProvider client={client}>{element}</ApolloProvider> )
SearchForm component

Now that we’ve set up ApolloClient, let’s build our Search component.

touch src/components/search.js src/components/search-form.js src/components/search-results.js src/css/search.css

The Search component wraps SearchForm and SearchResults:

// src/components/search.js import React, { useState } from "react" import SearchForm from "./search-form" import SearchResults from "./search-results" const Search = () => { const [searchTerm, setSearchTerm] = useState("") return ( <div className="search-container"> <SearchForm setSearchTerm={setSearchTerm} /> {searchTerm && <SearchResults searchTerm={searchTerm} />} </div> ) } export default Search

<SearchForm /> is a simple form with controlled input and a submit handler that sets the searchTerm state value to the user submission.

// src/components/search-form.js import React, { useState } from "react" const SearchForm = ({ searchTerm, setSearchTerm }) => { const [value, setValue] = useState(searchTerm) const handleSubmit = e => { e.preventDefault() setSearchTerm(value) } return ( <form role="search" onSubmit={handleSubmit}> <label htmlFor="search">Search blog posts:</label> <input id="search" type="search" value={value} onChange={e => setValue(e.target.value)} /> <button type="submit">Submit</button> </form> ) } export default SearchForm

The SearchResults component receives the searchTerm via props, and that’s where we use Apollo Client.

For each searchTerm, we would like to display the matching posts as a list containing the post’s title, excerpt, and a link to this individual post. Our query will be like so:

const GET_RESULTS = gql` query($searchTerm: String) { posts(where: { search: $searchTerm }) { edges { node { id uri title excerpt } } } } `

We will use the useQuery hook from @apollo/client to run the GET_RESULTS query with a search variable.

// src/components/search-results.js import React from "react" import { Link } from "gatsby" import { useQuery, gql } from "@apollo/client" const GET_RESULTS = gql` query($searchTerm: String) { posts(where: { search: $searchTerm }) { edges { node { id uri title excerpt } } } } ` const SearchResults = ({ searchTerm }) => { const { data, loading, error } = useQuery(GET_RESULTS, { variables: { searchTerm } }) if (loading) return <p>Searching posts for {searchTerm}...</p> if (error) return <p>Error - {error.message}</p> return ( <section className="search-results"> <h2>Found {data.posts.edges.length} results for {searchTerm}:</h2> <ul> {data.posts.edges.map(el => { return ( <li key={el.node.id}> <Link to={el.node.uri}>{el.node.title}</Link> </li> ) })} </ul> </section> ) } export default SearchResults

The useQuery hook returns an object that contains loading, error, and data properties. We can render different UI elements according to the query’s state. As long as loading is truthy, we display <p>Searching posts...</p>. If loading and error are both falsy, the query has completed and we can loop over the data.posts.edges and display the results.

if (loading) return <p>Searching posts...</p> if (error) return <p>Error - {error.message}</p> // else return ( //... )

For the moment, I am adding the <Search /> to the layout component. (I’ll move it somewhere else a little bit later.) Then, with some styling and a visible state variable, I made it feel more like a widget, opening on click and fixed-positioned in the top right corner.

Paginated queries

Without the number of entries specified, the WPGraphQL posts query returns the first ten posts; we need to take care of the pagination. WPGraphQL implements the pagination following the Relay Specification for GraphQL Schema Design. I will not go into the details; let’s just note that it is a standardized pattern. Within the Relay specification, in addition to posts.edges (which is a list of { cursor, node } objects), we have access to the posts.pageInfo object that provides:

  • endCursor – cursor of the last item in posts.edges,
  • startCursor – cursor of the first item in posts.edges,
  • hasPreviousPage – boolean for “are there more results available (backward),” and
  • hasNextPage – boolean for “are there more results available (forward).”

We can modify the slice of the data we want to access with the additional query variables:

  • first – the number of returned entries
  • after – the cursor we should start after

How do we deal with pagination queries with Apollo Client? The recommended approach is to use the fetchMore function, which is part of the object returned by the useQuery hook (together with loading, error, and data).

// src/components/search-results.js import React from "react" import { Link } from "gatsby" import { useQuery, gql } from "@apollo/client" const GET_RESULTS = gql` query($searchTerm: String, $after: String) { posts(first: 10, after: $after, where: { search: $searchTerm }) { edges { node { id uri title } } pageInfo { hasNextPage endCursor } } } ` const SearchResults = ({ searchTerm }) => { const { data, loading, error, fetchMore } = useQuery(GET_RESULTS, { variables: { searchTerm, after: "" }, }) if (loading && !data) return <p>Searching posts for {searchTerm}...</p> if (error) return <p>Error - {error.message}</p> const loadMore = () => { fetchMore({ variables: { after: data.posts.pageInfo.endCursor, }, // with notifyOnNetworkStatusChange our component re-renders while a refetch is in flight so that we can mark loading state when waiting for more results (see lines 42, 43) notifyOnNetworkStatusChange: true, }) } return ( <section className="search-results"> {/* as before */} {data.posts.pageInfo.hasNextPage && ( <button type="button" onClick={loadMore} disabled={loading}> {loading ? "Loading..." : "More results"} </button> )} </section> ) } export default SearchResults

The first variable has a default value, but it is necessary here to indicate that we are sending a paginated request. Without first, pageInfo.hasNextPage will always be false, no matter the search keyword.

Calling fetchMore fetches the next slice of results but we still need to tell Apollo how it should merge the “fetch more” results with the existing cached data. We specify all the pagination logic in a central location as an option passed to the InMemoryCache constructor (in the gatsby-browser.js file). And guess what? With the Relay specification, we’ve got it covered — Apollo Client provides the relayStylePagination function that does all the magic for us.

// gatsby-browser.js import { ApolloClient, HttpLink, InMemoryCache, ApolloProvider } from "@apollo/client" import { relayStylePagination } from "@apollo/client/utilities" const cache = new InMemoryCache({ typePolicies: { Query: { fields: { posts: relayStylePagination(["where"]), }, }, }, }) /* as before */

Just one important detail: we don’t paginate all posts, but instead the posts that correspond to a specific where condition. Adding ["where"] as an argument to relayStylePagination creates a distinct storage key for different search terms.

Making search persistent

Right now my Search component lives in the Layout component. It’s displayed on every page but gets unmounted every time the route changes. What if we could keep the search results while navigating? We can take advantage of the Gatsby wrapPageElement browser API to set persistent UI elements around pages.

Let’s move <Search /> from the layout component to the wrapPageElement:

// gatsby-browser.js import Search from "./src/components/search" /* as before */ export const wrapPageElement = ({ element }) => { return <><Search />{element}</> }

The APIs wrapPageElement and wrapRootElement exist in both the browser and Server-Side Rendering (SSR) APIs. Gatsby recommends that we implement wrapPageElement and wrapRootElement in both gatsby-browser.js and gatsby-ssr.js. Let’s create the gatsby-ssr.js (in the root of the project) and re-export our elements:

// gatsby-ssr.js export { wrapRootElement, wrapPageElement } from "./gatsby-browser"

I deployed a demo where you can see it in action. You can also find the code in this repo.

The wrapPageElement approach may not be ideal in all cases. Our search widget is “detached” from the layout component. It works well with the position “fixed” like in our working example or within an off-canvas sidebar like in this Gatsby WordPress theme.

But what if you want to have “persistent” search results displayed within a “classic” sidebar? In that case, you could move the searchTerm state from the Search component to a search context provider placed within the wrapRootElement:

// gatsby-browser.js import SearchContextProvider from "./src/search-context" /* as before */ export const wrapRootElement = ({ element }) => ( <ApolloProvider client={client}> <SearchContextProvider> {element} </SearchContextProvider> </ApolloProvider> )

…with the SearchContextProvider defined as below:

// src/search-context.js import React, {createContext, useState} from "react" export const SearchContext = createContext() export const SearchContextProvider = ({ children }) => { const [searchTerm, setSearchTerm] = useState("") return ( <SearchContext.Provider value={{ searchTerm, setSearchTerm }}> {children} </SearchContext.Provider> ) }
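Any component in the tree can then read and update the shared term with useContext. A quick sketch of what the form side might look like (the component shape here is illustrative, not the exact theme code):

// src/components/search-form.js (sketch)
import React, { useContext } from "react"
import { SearchContext } from "../search-context"

const SearchForm = () => {
  const { searchTerm, setSearchTerm } = useContext(SearchContext)
  const handleSubmit = e => {
    e.preventDefault()
    // read the input value straight from the form on submit
    setSearchTerm(e.target.elements.search.value)
  }
  return (
    <form role="search" onSubmit={handleSubmit}>
      <input id="search" name="search" type="search" defaultValue={searchTerm} />
      <button type="submit">Submit</button>
    </form>
  )
}

export default SearchForm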

You can see it in action in another Gatsby WordPress theme:

Note how, since Apollo Client caches the search results, we immediately get them on the route change.

Results from posts and pages

If you checked the theme examples above, you might have noticed how I deal with querying more than just posts. My approach is to replicate the same logic for pages and display results for each post type separately.

Alternatively, you could use the Content Node interface to query nodes of different post types in a single connection:

const GET_RESULTS = gql` query($searchTerm: String, $after: String) { contentNodes(first: 10, after: $after, where: { search: $searchTerm }) { edges { node { id uri ... on Page { title } ... on Post { title excerpt } } } pageInfo { hasNextPage endCursor } } } `
Going beyond the default WordPress search

Our solution seems to work, but let’s remember that the underlying mechanism that actually does the search for us is the native WordPress search query. And the default WordPress search function isn’t great. Its problems include limited search fields (in particular, taxonomies are not taken into account), no fuzzy matching, and no control over the order of results. Big websites can also suffer from performance issues — there is no prebuilt search index, and the search query is performed directly on the website’s SQL database.

There are a few WordPress plugins that enhance the default search. Plugins like WP Extended Search add the ability to include selected meta keys and taxonomies in search queries.

The Relevanssi plugin replaces the standard WordPress search with its search engine using the full-text indexing capabilities of the database. Relevanssi deactivates the default search query which breaks the WPGraphQL where: {search : …}. There is some work already done on enabling Relevanssi search through WPGraphQL; the code might not be compatible with the latest WPGraphQL version, but it seems to be a good start for those who opt for Relevanssi search.

In the second part of this article, we’ll take one more possible path and have a closer look at the premium service from Jetpack — an advanced search powered by Elasticsearch. By the way, Jetpack Instant search is the solution adopted by CSS-Tricks.

Using Jetpack Instant Search with Gatsby

Jetpack Search is a per-site premium solution by Jetpack. Once installed and activated, it will take care of building an Elasticsearch index. The search queries no longer hit the SQL database. Instead, the search query requests are sent to the cloud Elasticsearch server, more precisely to:

https://public-api.wordpress.com/rest/v1.3/sites/{your-blog-id}/search

There are a lot of search parameters to specify within the URL above. In our case, we will add the following:

  • filter[bool][must][0][term][post_type]=post: We only need results that are posts here, simply because our Gatsby website is limited to posts. In real-life use, you might need to spend some time configuring the boolean queries.
  • size=10 sets the number of returned results (maximum 20).
  • With highlight_fields[0]=title, we get the title string (or a part of it) with the searchTerm within the <mark> tags.
  • highlight_fields[1]=content does the same, but for the post’s content.

There are three more search parameters depending on the user’s action (a sketch of the full request follows the list):

  • query: The search term from the search input, e.g. gallery
  • sort: how the results should be ordered; the default is by score "score_default" (relevance), but there are also "date_asc" (newest) and "date_desc" (oldest)
  • page_handle: something like the “after” cursor for paginated results. We only request 10 results at once, and we will have a “load more” button.
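Putting the pieces together, assembling the request can look something like this (a sketch; the buildSearchUrl helper and MY_BLOG_ID are illustrative, and the parameter set matches the list above):

// a sketch of building the Jetpack Search request URL
const buildSearchUrl = ({ searchTerm, sort = "score_default", pageHandle }) => {
  const params = new URLSearchParams({
    query: searchTerm,
    sort,
    size: "10",
  })
  params.append("filter[bool][must][0][term][post_type]", "post")
  params.append("highlight_fields[0]", "title")
  params.append("highlight_fields[1]", "content")
  if (pageHandle) params.append("page_handle", pageHandle)
  // MY_BLOG_ID is a placeholder for your site's numeric blog ID
  return `https://public-api.wordpress.com/rest/v1.3/sites/MY_BLOG_ID/search?${params}`
}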

Now, let’s see how a successful response is structured:

{ total: 9, corrected_query: false, page_handle: false, // or a string if the total value > 10 results: [ { _score: 196.51814, fields: { date: '2018-11-03 03:55:09', 'title.default': 'Block: Gallery', 'excerpt.default': '', post_id: 1918, // we can configure what fields we want to add here with the query search parameters }, result_type: 'post', railcar: {/* we will not use this data */}, highlight: { title: ['Block: <mark>Gallery</mark>'], content: [ 'automatically stretch to the width of your <mark>gallery</mark>. ... A four column <mark>gallery</mark> with a wide width:', '<mark>Gallery</mark> blocks have two settings: the number of columns, and whether or not images should be cropped', ], }, }, /* more results */ ], suggestions: [], // we will not use suggestions here aggregations: [], // nor the aggregations }

The results field provides an array containing the database post IDs. To display the search results within a Gatsby site, we need to extract the corresponding post nodes (in particular their uri) from the Gatsby data layer. My approach is to implement an instant search with asynchronous calls to the REST API and intersect the results with those of the static GraphQL query that returns all post nodes.

Let’s start by building an instant search widget that communicates with the search API. Since this is not specific to Gatsby, let’s first see it in action in a standalone Pen.


Here, useDebouncedInstantSearch is a custom hook responsible for fetching the results from the Jetpack Search API. My solution uses the awesome-debounce-promise library, which lets us take some extra care with the fetching mechanism. An instant search responds to the input directly, without waiting for an explicit “Go!” from the user. If I’m typing fast, the request may change several times before even the first response arrives, wasting network bandwidth. So awesome-debounce-promise waits a given time interval (say, 300ms) before making a call to the API; if there is a new call within this interval, the previous one will never be executed. It also resolves only the last promise returned from the call, which prevents concurrency issues.
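The core of the hook is just a debounced wrapper around the fetcher, roughly like this (a sketch; searchJetpack stands in for the actual fetch call used in the Pen, and buildSearchUrl is the illustrative helper from the earlier sketch):

import AwesomeDebouncePromise from "awesome-debounce-promise"

// plain fetcher hitting the Jetpack endpoint described earlier
const searchJetpack = params =>
  fetch(buildSearchUrl(params)).then(res => res.json())

// wait 300ms of inactivity before firing; only the latest promise resolves
const searchJetpackDebounced = AwesomeDebouncePromise(searchJetpack, 300)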

Now, with the search results available, let’s move back to Gatsby and build another custom hook:

import {useStaticQuery, graphql} from "gatsby" export const useJetpackSearch = (params) => { const { allWpPost: { nodes }, } = useStaticQuery(graphql` query AllPostsQuery { allWpPost { nodes { id databaseId uri title excerpt } } } `) const { error, loading, data } = useDebouncedInstantSearch(params) return { error, loading, data: { ...data, // map the results results: data.results.map(el => { // for each result find a node that has the same databaseId as the result field post_id const node = nodes.find(item => item.databaseId === el.fields.post_id) return { // spread the node ...node, // keep the highlight info highlight: el.highlight } }), } } }

I will call useJetpackSearch within <SearchResults />. The Gatsby version of <SearchResults /> is almost identical to the one in the Pen above. The differences: the useDebouncedInstantSearch hook is replaced by useJetpackSearch (which calls the former internally), a Gatsby Link replaces the h2, and el.fields["title.default"] and el.fields["excerpt.default"] are replaced by el.title and el.excerpt.

const SearchResults = ({ params, setParams }) => { const { loading, error, data } = useJetpackSearch(params) const { searchTerm } = params if (error) { return <p>Error - {error}</p> } return ( <section className="search-results"> {loading ? ( <p className="info">Searching posts .....</p> ) : ( <> {data.total !== undefined && ( <p> Found {data.total} results for{" "} {data.corrected_query ? ( <> <del>{searchTerm}</del> <span>{data.corrected_query}</span> </> ) : ( <span>{searchTerm}</span> )} </p> )} </> )} {data.results?.length > 0 && ( <ul> {data.results.map((el) => { return ( <li key={el.id}> <Link to={el.uri}> {el.highlight.title[0] ? el.highlight.title.map((item, index) => ( <React.Fragment key={index}> {parse(item)} </React.Fragment> )) : parse(el.title)} </Link> <div className="post-excerpt"> {el.highlight.content[0] ? el.highlight.content.map((item, index) => ( <div key={index}>{parse(item)}</div> )) : parse(el.excerpt)} </div> </li> ); })} </ul> )} {data.page_handle && ( <button type="button" disabled={loading} onClick={() => setParams({ pageHandle: data.page_handle })} > {loading ? "loading..." : "load more"} </button> )} </section> ) }

You can find the complete code in this repo and see it in action in this demo. Note that I no longer source WordPress data from the generic WordPress demo used by Gatsby starter. I need to have a website with Jetpack Search activated.

Wrapping up

We’ve just seen two ways of dealing with search in headless WordPress. Besides a few Gatsby-specific technical details (like using the Gatsby Browser API), you can implement both discussed approaches within other frameworks. We’ve seen how to make use of the native WordPress search. I guess that it is an acceptable solution in many cases.

But if you need something more, there are better options available. One of them is Jetpack Search. Jetpack Instant Search does a great job on CSS-Tricks and, as we’ve just seen, can work with headless WordPress as well. There are probably other ways of implementing it. You can also go further with the query configuration, the filter functionalities, and how you display the results.

The post Native Search vs. Jetpack Instant Search in Headless WordPress With Gatsby appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Native Search vs. Jetpack Instant Search in Headless WordPress With Gatsby

Css Tricks - Wed, 08/18/2021 - 4:46am

Have you already tried using WordPress headlessly with Gatsby? If you haven’t, you might check this article around the new Gatsby source plugin for WordPress; gatsby-source-wordpress is the official source plugin introduced in March 2021 as a part of the Gatsby 3 release. It significantly improves the integration with WordPress. Also, the WordPress plugin WPGraphQL providing the GraphQL API is now available via the official WordPress repository.

With stable and maintained tools, developing Gatsby websites powered by WordPress becomes easier and more interesting. I got myself involved in this field, I co-founded (with Alexandra Spalato), and recently launched Gatsby WP Themes — a niche marketplace for developers building WordPress-powered sites with Gatsby. In this article, I would love to share my insights and, in particular, discuss the search functionality.

Search does not come out of the box, but there are many options to consider. I will focus on two distinct possibilities — taking advantage of WordPress native search (WordPress search query) vs. using Jetpack Instant Search.

Getting started

Let’s start by setting up a WordPress-powered Gatsby website. For the sake of simplicity, I will follow the getting started instructions and install the gatsby-starter-wordpress-blog starter.

gatsby new gatsby-wordpress-w-search https://github.com/gatsbyjs/gatsby-starter-wordpress-blog

This simple, bare-bone starter creates routes exclusively for individual posts and blog pages. But we can keep it that simple here. Let’s imagine that we don’t want to include pages within the search results.

For the moment, I will leave the WordPress source website as it is and pull the content from the starter author’s WordPress demo. If you use your own source, just remember that there are two plugins required on the WordPress end (both available via the plugin repository):

  • WPGraphQL – a plugin that runs a GraphQL server on the WordPress instance
  • WPGatsby – a plugin that modifies the WPGraphQL schema in Gatsby-specific ways (it also adds some mechanism to optimize the build process)
Setting up Apollo Client

With Gatsby, we usually either use the data from queries run on page creation (page queries) or call the useStaticQuery hook. The latter is available in components and does not allow dynamic query parameters; its role is to retrieve GraphQL data at build time. None of those two query solutions works for a user’s-initiated search. Instead, we will ask WordPress to run a search query and send us back the results. Can we send a graphQL search query? Yes! WPGraphQL provides search; you can search posts in WPGraphQL like so:

posts(where: {search: "gallery"}) { nodes { id title content } }

In order to communicate directly with our WPGraphQL API, we will install Apollo Client; it takes care of requesting and caching the data as well as updating our UI components.

yarn add @apollo/client cross-fetch

To access Apollo Client anywhere in our component tree, we need to wrap our app with ApolloProvider. Gatsby does not expose the App component that wraps around the whole application. Instead, it provides the wrapRootElement API. It’s a part of the Gatsby Browser API and needs to be implemented in the gatsby-browser.js file located at the project’s root.

// gatsby-browser.js import React from "react" import fetch from "cross-fetch" import { ApolloClient, HttpLink, InMemoryCache, ApolloProvider } from "@apollo/client" const cache = new InMemoryCache() const link = new HttpLink({ /* Set the endpoint for your GraphQL server, (same as in gatsby-config.js) */ uri: "https://wpgatsbydemo.wpengine.com/graphql", /* Use fetch from cross-fetch to provide replacement for server environment */ fetch }) const client = new ApolloClient({ link, cache, }) export const wrapRootElement = ({ element }) => ( <ApolloProvider client={client}>{element}</ApolloProvider> ) SearchForm component

Now that we’ve set up ApolloClient, let’s build our Search component.

touch src/components/search.js src/components/search-form.js src/components/search-results.js src/css/search.css

The Search component wraps SearchForm and SearchResults

// src/components/search.js import React, { useState } from "react" import SearchForm from "./search-form" import SearchResults from "./search-results" const Search = () => { const [searchTerm, setSearchTerm] = useState("") return ( <div className="search-container"> <SearchForm setSearchTerm={setSearchTerm} /> {searchTerm && <SearchResults searchTerm={searchTerm} />} </div> ) } export default Search

<SearchForm /> is a simple form with controlled input and a submit handler that sets the searchTerm state value to the user submission.

// src/components/search-form.js import React, { useState } from "react" const SearchForm = ({ searchTerm, setSearchTerm }) => { const [value, setValue] = useState(searchTerm) const handleSubmit = e => { e.preventDefault() setSearchTerm(value) } return ( <form role="search" onSubmit={handleSubmit}> <label htmlFor="search">Search blog posts:</label> <input id="search" type="search" value={value} onChange={e => setValue(e.target.value)} /> <button type="submit">Submit</button> </form> ) } export default SearchForm

The SearchResults component receives the searchTerm via props, and that’s where we use Apollo Client.

For each searchTerm, we would like to display the matching posts as a list containing the post’s title, excerpt, and a link to this individual post. Our query will be like so:

const GET_RESULTS = gql` query($searchTerm: String) { posts(where: { search: $searchTerm }) { edges { node { id uri title excerpt } } } } `

We will use the useQuery hook from @apollo-client to run the GET_RESULTS query with a search variable.

// src/components/search-results.js import React from "react" import { Link } from "gatsby" import { useQuery, gql } from "@apollo/client" const GET_RESULTS = gql` query($searchTerm: String) { posts(where: { search: $searchTerm }) { edges { node { id uri title excerpt } } } } ` const SearchResults = ({ searchTerm }) => { const { data, loading, error } = useQuery(GET_RESULTS, { variables: { searchTerm } }) if (loading) return <p>Searching posts for {searchTerm}...</p> if (error) return <p>Error - {error.message}</p> return ( <section className="search-results"> <h2>Found {data.posts.edges.length} results for {searchTerm}:</h2> <ul> {data.posts.edges.map(el => { return ( <li key={el.node.id}> <Link to={el.node.uri}>{el.node.title}</Link> </li> ) })} </ul> </section> ) } export default SearchResults

The useQuery hook returns an object that contains loading, error, and data properties. We can render different UI elements according to the query’s state. As long as loading is truthy, we display <p>Searching posts...</p>. If loading and error are both falsy, the query has completed and we can loop over the data.posts.edges and display the results.

if (loading) return <p>Searching posts...</p> if (error) return <p>Error - {error.message}</p> // else return ( //... )

For the moment, I am adding the <Search /> to the layout component. (I’ll move it somewhere else a little bit later.) Then, with some styling and a visible state variable, I made it feel more like a widget, opening on click and fixed-positioned in the top right corner.

Paginated queries

Without the number of entries specified, the WPGraphQL posts query returns ten first posts; we need to take care of the pagination. WPGraphQL implements the pagination following the Relay Specification for GraphQL Schema Design. I will not go into the details; let’s just note that it is a standardized pattern. Within the Relay specification, in addition to posts.edges (which is a list of { cursor, node } objects), we have access to the posts.pageInfo object that provides:

  • endCursor – cursor of the last item in posts.edges,
  • startCursor – cursor of the first item in posts.edges,
  • hasPreviousPage – boolean for “are there more results available (backward),” and
  • hasNextPage – boolean for “are there more results available (forward).”

We can modify the slice of the data we want to access with the additional query variables:

  • first – the number of returned entries
  • after – the cursor we should start after

How do we deal with pagination queries with Apollo Client? The recommended approach is to use the fetchMore function, that is (together with loading, error and data) a part of the object returned by the useQuery hook.

// src/components/search-results.js import React from "react" import { Link } from "gatsby" import { useQuery, gql } from "@apollo/client" const GET_RESULTS = gql` query($searchTerm: String, $after: String) { posts(first: 10, after: $after, where: { search: $searchTerm }) { edges { node { id uri title } } pageInfo { hasNextPage endCursor } } } ` const SearchResults = ({ searchTerm }) => { const { data, loading, error, fetchMore } = useQuery(GET_RESULTS, { variables: { searchTerm, after: "" }, }) if (loading && !data) return <p>Searching posts for {searchTerm}...</p> if (error) return <p>Error - {error.message}</p> const loadMore = () => { fetchMore({ variables: { after: data.posts.pageInfo.endCursor, }, // with notifyOnNetworkStatusChange our component re-renders while a refetch is in flight so that we can mark loading state when waiting for more results (see lines 42, 43) notifyOnNetworkStatusChange: true, }) } return ( <section className="search-results"> {/* as before */} {data.posts.pageInfo.hasNextPage && ( <button type="button" onClick={loadMore} disabled={loading}> {loading ? "Loading..." : "More results"} </button> )} </section> ) } export default SearchResults

The first argument is set to its default value (ten entries), but it is necessary here to indicate that we are sending a paginated request. Without first, pageInfo.hasNextPage will always be false, no matter the search keyword.

Calling fetchMore fetches the next slice of results but we still need to tell Apollo how it should merge the “fetch more” results with the existing cached data. We specify all the pagination logic in a central location as an option passed to the InMemoryCache constructor (in the gatsby-browser.js file). And guess what? With the Relay specification, we’ve got it covered — Apollo Client provides the relayStylePagination function that does all the magic for us.

// gatsby-browser.js
import { ApolloClient, HttpLink, InMemoryCache, ApolloProvider } from "@apollo/client"
import { relayStylePagination } from "@apollo/client/utilities"

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        posts: relayStylePagination(["where"]),
      },
    },
  },
})

/* as before */

Just one important detail: we don’t paginate all posts, but instead the posts that correspond to a specific where condition. Adding ["where"] as an argument to relayStylePagination creates a distinct storage key for different search terms.

Making search persistent

Right now my Search component lives in the Layout component. It’s displayed on every page but gets unmounted every time the route changes. What if we could keep the search results while navigating? We can take advantage of the Gatsby wrapPageElement browser API to set persistent UI elements around pages.

Let’s move <Search /> from the layout component to the wrapPageElement:

// gatsby-browser.js
import Search from "./src/components/search"

/* as before */

export const wrapPageElement = ({ element }) => {
  return <><Search />{element}</>
}

Both wrapPageElement and wrapRootElement exist in the browser and the Server-Side Rendering (SSR) APIs. Gatsby recommends implementing them in both gatsby-browser.js and gatsby-ssr.js. Let’s create gatsby-ssr.js (in the root of the project) and re-export our elements:

// gatsby-ssr.js
export { wrapRootElement, wrapPageElement } from "./gatsby-browser"

I deployed a demo where you can see it in action. You can also find the code in this repo.

The wrapPageElement approach may not be ideal in all cases. Our search widget is “detached” from the layout component. It works well with the position “fixed” like in our working example or within an off-canvas sidebar like in this Gatsby WordPress theme.

But what if you want to have “persistent” search results displayed within a “classic” sidebar? In that case, you could move the searchTerm state from the Search component to a search context provider placed within the wrapRootElement:

// gatsby-browser.js
import SearchContextProvider from "./src/search-context"

/* as before */

export const wrapRootElement = ({ element }) => (
  <ApolloProvider client={client}>
    <SearchContextProvider>
      {element}
    </SearchContextProvider>
  </ApolloProvider>
)

…with the SearchContextProvider defined as below:

// src/search-context.js
import React, { createContext, useState } from "react"

export const SearchContext = createContext()

export const SearchContextProvider = ({ children }) => {
  const [searchTerm, setSearchTerm] = useState("")
  return (
    <SearchContext.Provider value={{ searchTerm, setSearchTerm }}>
      {children}
    </SearchContext.Provider>
  )
}

You can see it in action in another Gatsby WordPress theme:

Note how, since Apollo Client caches the search results, we immediately get them on the route change.

Results from posts and pages

If you checked the theme examples above, you might have noticed how I deal with querying more than just posts. My approach is to replicate the same logic for pages and display results for each post type separately.

Alternatively, you could use the Content Node interface to query nodes of different post types in a single connection:

const GET_RESULTS = gql`
  query($searchTerm: String, $after: String) {
    contentNodes(first: 10, after: $after, where: { search: $searchTerm }) {
      edges {
        node {
          id
          uri
          ... on Page {
            title
          }
          ... on Post {
            title
            excerpt
          }
        }
      }
      pageInfo {
        hasNextPage
        endCursor
      }
    }
  }
`

Going beyond the default WordPress search

Our solution seems to work, but let’s remember that the underlying mechanism that actually does the search for us is the native WordPress search query. And the WordPress default search function isn’t great. Its problems include limited search fields (in particular, taxonomies are not taken into account), no fuzzy matching, and no control over the order of results. Big websites can also suffer from performance issues — there is no prebuilt search index, and the search query is performed directly on the website’s SQL database.

There are a few WordPress plugins that enhance the default search. Plugins like WP Extended Search add the ability to include selected meta keys and taxonomies in search queries.

The Relevanssi plugin replaces the standard WordPress search with its search engine using the full-text indexing capabilities of the database. Relevanssi deactivates the default search query which breaks the WPGraphQL where: {search : …}. There is some work already done on enabling Relevanssi search through WPGraphQL; the code might not be compatible with the latest WPGraphQL version, but it seems to be a good start for those who opt for Relevanssi search.

In the second part of this article, we’ll take one more possible path and have a closer look at the premium service from Jetpack — an advanced search powered by Elasticsearch. By the way, Jetpack Instant search is the solution adopted by CSS-Tricks.

Using Jetpack Instant Search with Gatsby

Jetpack Search is a per-site premium solution by Jetpack. Once installed and activated, it will take care of building an Elasticsearch index. The search queries no longer hit the SQL database. Instead, the search query requests are sent to the cloud Elasticsearch server, more precisely to:

https://public-api.wordpress.com/rest/v1.3/sites/{your-blog-id}/search

There are a lot of search parameters to specify within the URL above. In our case, we will add the following:

  • filter[bool][must][0][term][post_type]=post: We only need results that are posts here, simply because our Gatsby website is limited to posts. In real-life use, you might need to spend some time configuring the boolean queries.
  • size=10 sets the number of returned results (maximum 20).
  • with highlight_fields[0]=title, we get the title string (or a part of it) with the searchTerm within the <mark> tags.
  • highlight_fields[1]=content does the same as above, but for the post’s content.

There are three more search parameters depending on the user’s action:

  • query: The search term from the search input, e.g. gallery
  • sort: how the results should be ordered; the default is by score "score_default" (relevance), but there is also "date_asc" (newest) and "date_desc" (oldest)
  • page_handle: something like the “after” cursor for paginated results. We only request 10 results at once, and we will have a “load more” button.
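Putting those parameters together, the full request is just a GET to a long URL. Here is a rough sketch in plain JavaScript; YOUR_BLOG_ID and the "gallery" search term are placeholders:

// A sketch of the assembled request, using only the parameters listed above.
const url =
  "https://public-api.wordpress.com/rest/v1.3/sites/YOUR_BLOG_ID/search" +
  "?filter[bool][must][0][term][post_type]=post" +
  "&size=10" +
  "&highlight_fields[0]=title" +
  "&highlight_fields[1]=content" +
  "&query=gallery" +
  "&sort=score_default"

fetch(url)
  .then((response) => response.json())
  .then((data) => console.log(data))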

Now, let’s see how a successful response is structured:

{
  total: 9,
  corrected_query: false,
  page_handle: false, // or a string if the total value > 10
  results: [
    {
      _score: 196.51814,
      fields: {
        date: '2018-11-03 03:55:09',
        'title.default': 'Block: Gallery',
        'excerpt.default': '',
        post_id: 1918,
        // we can configure what fields we want to add here with the query search parameters
      },
      result_type: 'post',
      railcar: {/* we will not use this data */},
      highlight: {
        title: ['Block: <mark>Gallery</mark>'],
        content: [
          'automatically stretch to the width of your <mark>gallery</mark>. ... A four column <mark>gallery</mark> with a wide width:',
          '<mark>Gallery</mark> blocks have two settings: the number of columns, and whether or not images should be cropped',
        ],
      },
    },
    /* more results */
  ],
  suggestions: [], // we will not use suggestions here
  aggregations: [], // nor the aggregations
}

The results field provides an array containing the database post IDs. To display the search results within a Gatsby site, we need to extract the corresponding post nodes (in particular, their uri) from the Gatsby data layer. My approach is to implement an instant search with asynchronous calls to the REST API and intersect the results with those of the static GraphQL query that returns all post nodes.

Let’s start by building an instant search widget that communicates with the search API. Since this is not specific to Gatsby, let’s see it in action in this Pen.


Here, useDebouncedInstantSearch is a custom hook responsible for fetching the results from the Jetpack Search API. My solution uses the awesome-debounce-promise library, which allows us to take some extra care with the fetching mechanism. An instant search responds to the input directly, without waiting for an explicit “Go!” from the user. If I’m typing fast, the request may change several times before even the first response arrives, wasting network bandwidth. The awesome-debounce-promise waits a given time interval (say, 300ms) before making a call to the API; if there is a new call within this interval, the previous one will never be executed. It also resolves only the last promise returned from the call, which prevents concurrency issues.
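To make that concrete, here is a simplified sketch of what such a hook could look like. It only handles the search term (the real hook also deals with sort and page_handle), and the state shape and error handling are my own assumptions:

// A simplified sketch of a debounced instant-search hook.
import { useState, useEffect, useMemo } from "react"
import AwesomeDebouncePromise from "awesome-debounce-promise"

// YOUR_BLOG_ID is a placeholder.
const searchApi = (searchTerm) =>
  fetch(
    `https://public-api.wordpress.com/rest/v1.3/sites/YOUR_BLOG_ID/search?query=${encodeURIComponent(searchTerm)}`
  ).then((response) => response.json())

export const useDebouncedInstantSearch = ({ searchTerm }) => {
  const [state, setState] = useState({
    loading: false,
    error: null,
    data: { results: [] },
  })

  // Wait 300ms after the last keystroke; only the last pending promise resolves.
  const debouncedSearch = useMemo(
    () => AwesomeDebouncePromise(searchApi, 300),
    []
  )

  useEffect(() => {
    if (!searchTerm) return
    setState((previous) => ({ ...previous, loading: true }))
    debouncedSearch(searchTerm)
      .then((data) => setState({ loading: false, error: null, data }))
      .catch((error) =>
        setState((previous) => ({
          ...previous,
          loading: false,
          error: error.message,
        }))
      )
  }, [searchTerm, debouncedSearch])

  return state
}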

Now, with the search results available, let’s move back to Gatsby and build another custom hook:

import { useStaticQuery, graphql } from "gatsby"
// assumption: the custom hook from the Pen above lives in its own module
import { useDebouncedInstantSearch } from "./useDebouncedInstantSearch"

export const useJetpackSearch = (params) => {
  const {
    allWpPost: { nodes },
  } = useStaticQuery(graphql`
    query AllPostsQuery {
      allWpPost {
        nodes {
          id
          databaseId
          uri
          title
          excerpt
        }
      }
    }
  `)
  const { error, loading, data } = useDebouncedInstantSearch(params)
  return {
    error,
    loading,
    data: {
      ...data,
      // map the results
      results: data.results.map(el => {
        // for each result, find a node that has the same databaseId as the result field post_id
        const node = nodes.find(item => item.databaseId === el.fields.post_id)
        return {
          // spread the node
          ...node,
          // keep the highlight info
          highlight: el.highlight,
        }
      }),
    },
  }
}

I will call useJetpackSearch within <SearchResults />. The Gatsby version of <SearchResults /> is almost identical to the one in the Pen above. The differences are highlighted in the code block below: the useDebouncedInstantSearch hook is replaced by useJetpackSearch (which calls the former internally), a Gatsby Link replaces the h2, and el.fields["title.default"] and el.fields["excerpt.default"] are replaced by el.title and el.excerpt.

// `parse` renders the <mark>-highlighted HTML strings
// (assumption: the html-react-parser package)
import parse from "html-react-parser"

const SearchResults = ({ params, setParams }) => {
  const { loading, error, data } = useJetpackSearch(params)
  const { searchTerm } = params
  if (error) {
    return <p>Error - {error}</p>
  }
  return (
    <section className="search-results">
      {loading ? (
        <p className="info">Searching posts .....</p>
      ) : (
        <>
          {data.total !== undefined && (
            <p>
              Found {data.total} results for{" "}
              {data.corrected_query ? (
                <>
                  <del>{searchTerm}</del> <span>{data.corrected_query}</span>
                </>
              ) : (
                <span>{searchTerm}</span>
              )}
            </p>
          )}
        </>
      )}
      {data.results?.length > 0 && (
        <ul>
          {data.results.map((el) => {
            return (
              <li key={el.id}>
                <Link to={el.uri}>
                  {el.highlight.title[0]
                    ? el.highlight.title.map((item, index) => (
                        <React.Fragment key={index}>
                          {parse(item)}
                        </React.Fragment>
                      ))
                    : parse(el.title)}
                </Link>
                <div className="post-excerpt">
                  {el.highlight.content[0]
                    ? el.highlight.content.map((item, index) => (
                        <div key={index}>{parse(item)}</div>
                      ))
                    : parse(el.excerpt)}
                </div>
              </li>
            )
          })}
        </ul>
      )}
      {data.page_handle && (
        <button
          type="button"
          disabled={loading}
          onClick={() => setParams({ pageHandle: data.page_handle })}
        >
          {loading ? "loading..." : "load more"}
        </button>
      )}
    </section>
  )
}

You can find the complete code in this repo and see it in action in this demo. Note that I no longer source WordPress data from the generic WordPress demo used by the Gatsby starter. I need to have a website with Jetpack Search activated.

Wrapping up

We’ve just seen two ways of dealing with search in headless WordPress. Besides a few Gatsby-specific technical details (like using the Gatsby Browser API), you can implement both of the discussed approaches within other frameworks. We’ve seen how to make use of the native WordPress search. I guess it is an acceptable solution in many cases.

But if you need something more robust, there are better options available. One of them is Jetpack Search. Jetpack Instant Search does a great job on CSS-Tricks and, as we’ve just seen, can work with headless WordPress as well. There are probably other ways of implementing it. You can also go further with the query configuration, the filter functionalities, and how you display the results.

The post Native Search vs. Jetpack Instant Search in Headless WordPress With Gatsby appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

DX, to Whom?

Css Tricks - Tue, 08/17/2021 - 11:09am

Dave points to Sarah’s post on Developer Experience (DX) at Netlify. Part of what Sarah did there is lay out what the role means. It’s a three-part thing:

  1. Integrations Engineering (e.g. features)
  2. Developer Experience Engineering (e.g. building integrations to ensure quality end-to-end for customers)
  3. Documentation (e.g. … uh, documentation)

I like it. You gotta define the thing to do the thing. Dave, though, writes about being a consumer of DX rather than a creator of DX. Another three-parter:

  1. Is it easy? Does this technology solve a problem I have better than I’m currently doing it.
  2. Can I get help? If I have a problem, can I talk to someone? Will I talk to someone helpful or someone shitty?
  3. Is the community healthy? If I do go all-in on this, is the community toxic or nice? If applicable, do good community extensions exist?

Another favorite of mine on this subject is Shawn Wang’s Developer Exception Engineering, which agrees with the basic premise of DX, but then digs a little deeper into the “uncomfortable” (yet honest and candid) aspects. Here’s one example:

Is your pricing predictable or do your users need a spreadsheet to figure out what you are going to charge them? If charges are unexpectedly high, can developers use your software to figure out why or do they have to beg for help? Are good defaults in place to get advance warning?

I like that good DX can be born out of clarity in the uncomfortable bits. Where are the rough edges? Tell me, and you earn my trust. Hide it, and you lose it.

The post DX, to Whom? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

From a Single Repo, to Multi-Repos, to Monorepo, to Multi-Monorepo

Css Tricks - Tue, 08/17/2021 - 4:53am

I’ve been working on the same project for several years. Its initial version was a huge monolithic app containing thousands of files. It was poorly architected and non-reusable, but was hosted in a single repo making it easy to work with. Later, I “fixed” the mess in the project by splitting the codebase into autonomous packages, hosting each of them on its own repo, and managing them with Composer. The codebase became properly architected and reusable, but being split across multiple repos made it a lot more difficult to work with.

As the code was reformatted time and again, its hosting in the repo also had to adapt, going from the initial single repo, to multiple repos, to a monorepo, to what may be called a “multi-monorepo.”

Let me take you on the journey of how this took place, explaining why and when I felt I had to switch to a new approach. The journey consists of four stages (so far!) so let’s break it down like that.

Stage 1: Single repo

The project is leoloso/PoP and it’s been through several hosting schemes, following how its code was re-architected at different times.

It was born as this WordPress site, comprising a theme and several plugins. All of the code was hosted together in the same repo.

Some time later, I needed another site with similar features so I went the quick and easy way: I duplicated the theme and added its own custom plugins, all in the same repo. I got the new site running in no time.

I did the same for another site, and then another one, and another one. Eventually the repo was hosting some 10 sites, comprising thousands of files.

A single repository hosting all our code.

Issues with the single repo

While this setup made it easy to spin up new sites, it didn’t scale well at all. The big thing is that a single change involved searching for the same string across all 10 sites. That was completely unmanageable. Let’s just say that copy/paste/search/replace became a routine thing for me.

So it was time to start coding PHP the right way.

Stage 2: Multirepo

Fast forward a couple of years. I completely split the application into PHP packages, managed via Composer and dependency injection.

Composer uses Packagist as its main PHP package repository. In order to publish a package, Packagist requires a composer.json file placed at the root of the package’s repo. That means we are unable to host multiple PHP packages, each with its own composer.json, on the same repo.

As a consequence, I had to switch from hosting all of the code in the single leoloso/PoP repo, to using multiple repos, with one repo per PHP package. To help manage them, I created the organization “PoP” in GitHub and hosted all repos there, including getpop/root, getpop/component-model, getpop/engine, and many others.

In the multirepo, each package is hosted on its own repo.

Issues with the multirepo

Handling a multirepo can be easy when you have a handful of PHP packages. But in my case, the codebase comprised over 200 PHP packages. Managing them was no fun.

The reason that the project was split into so many packages is because I also decoupled the code from WordPress (so that these could also be used with other CMSs), for which every package must be very granular, dealing with a single goal.

Now, 200 packages is not ordinary. But even if a project comprises only 10 packages, it can be difficult to manage across 10 repositories. That’s because every package must be versioned, and every version of a package depends on some version of another package. When creating pull requests, we need to configure the composer.json file on every package to use the corresponding development branch of its dependencies. It’s cumbersome and bureaucratic.

I ended up not using feature branches at all and simply pointed every package to the dev-master version of its dependencies (i.e., I was not versioning packages). I wouldn’t be surprised to learn that this is a common practice.
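In composer.json terms, that shortcut looks something like this on each package (the package names are from this project; the constraints are the point):

{
  "require": {
    "getpop/root": "dev-master",
    "getpop/component-model": "dev-master"
  }
}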

There are tools to help manage multiple repos, like meta. It creates a project composed of multiple repos and doing git commit -m "some message" on the project executes a git commit -m "some message" command on every repo, allowing them to be in sync with each other.
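As a rough sketch, the day-to-day usage looks something like this (commands as documented by the meta CLI; the meta-repo URL is a placeholder):

# Install the meta CLI and clone the meta repo along with all of its child repos.
npm install -g meta
meta git clone git@github.com:example/meta-repo.git

# Run a git command across every repo at once.
meta git status

# Or run any arbitrary command in each repo.
meta exec "composer update"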

However, meta will not help manage the versioning of each dependency on their composer.json file. Even though it helps alleviate the pain, it is not a definitive solution.

So, it was time to bring all packages to the same repo.

Stage 3: Monorepo

The monorepo is a single repo that hosts the code for multiple projects. Since it hosts different packages together, we can version control them together too. This way, all packages can be published with the same version, and linked across dependencies. This makes pull requests very simple.

The monorepo hosts multiple packages.

As I mentioned earlier, we are not able to publish PHP packages to Packagist if they are hosted on the same repo. But we can overcome this constraint by decoupling development and distribution of the code: we use the monorepo to host and edit the source code, and multiple repos (at one repo per package) to publish them to Packagist for distribution and consumption.

The monorepo hosts the source code, multiple repos distribute it.

Switching to the Monorepo

Switching to the monorepo approach involved the following steps:

First, I created the folder structure in leoloso/PoP to host the multiple projects. I decided to use a two-level hierarchy, first under layers/ to indicate the broader project, and then under packages/, plugins/, clients/ and whatnot to indicate the category.

The monorepo layers indicate the broader project.

Then, I copied all source code from all repos (getpop/engine, getpop/component-model, etc.) to the corresponding location for that package in the monorepo (i.e., layers/Engine/packages/engine, layers/Engine/packages/component-model, etc.).

I didn’t need to keep the Git history of the packages, so I just copied the files with Finder. Otherwise, we can use hraban/tomono or shopsys/monorepo-tools to port repos into the monorepo, while preserving their Git history and commit hashes.

Next, I updated the description of all downstream repos, to start with [READ ONLY], such as this one.

The downstream repo’s “READ ONLY” is located in the repo description.

I executed this task in bulk via GitHub’s GraphQL API. I first obtained all of the descriptions from all of the repos, with this query:

{
  repositoryOwner(login: "getpop") {
    repositories(first: 100) {
      nodes {
        id
        name
        description
      }
    }
  }
}

…which returned a list like this:

{ "data": { "repositoryOwner": { "repositories": { "nodes": [ { "id": "MDEwOlJlcG9zaXRvcnkxODQ2OTYyODc=", "name": "hooks", "description": "Contracts to implement hooks (filters and actions) for PoP" }, { "id": "MDEwOlJlcG9zaXRvcnkxODU1NTQ4MDE=", "name": "root", "description": "Declaration of dependencies shared by all PoP components" }, { "id": "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk=", "name": "engine", "description": "Engine for PoP" } ] } } } }

From there, I copied all descriptions, added [READ ONLY] to them, and for every repo generated a new query executing the updateRepository GraphQL mutation:

mutation {
  updateRepository(
    input: {
      repositoryId: "MDEwOlJlcG9zaXRvcnkxODYyMjczNTk="
      description: "[READ ONLY] Engine for PoP"
    }
  ) {
    repository {
      description
    }
  }
}

Finally, I introduced tooling to help “split the monorepo.” Using a monorepo relies on synchronizing the code between the upstream monorepo and the downstream repos, triggered whenever a pull request is merged. This action is called “splitting the monorepo.” Splitting the monorepo can be achieved with a git subtree split command but, because I’m lazy, I’d rather use a tool.
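For reference, the manual version of a split looks roughly like this for a single package (the path is from the monorepo structure above; the branch name is illustrative):

# Extract the package's subfolder history into its own branch...
git subtree split --prefix=layers/Engine/packages/engine --branch engine-only

# ...and push that branch to the package's downstream repo.
git push git@github.com:getpop/engine.git engine-only:master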

I chose Monorepo builder, which is written in PHP. I like this tool because I can customize it with my own functionality. Other popular tools are the Git Subtree Splitter (written in Go) and Git Subsplit (bash script).

What I like about the Monorepo

I feel at home with the monorepo. The speed of development has improved because dealing with 200 packages feels pretty much like dealing with just one. The boost is most evident when refactoring the codebase, i.e. when executing updates across many packages.

The monorepo also allows me to release multiple WordPress plugins at once. All I need to do is provide a configuration to GitHub Actions via PHP code (when using the Monorepo builder) instead of hard-coding it in YAML.

To generate a WordPress plugin for distribution, I had created a generate_plugins.yml workflow that triggers when creating a release. With the monorepo, I have adapted it to generate not just one, but multiple plugins, configured via PHP through a custom command in plugin-config-entries-json, and invoked like this in GitHub Actions:

- id: output_data
  run: |
    echo "::set-output name=plugin_config_entries::$(vendor/bin/monorepo-builder plugin-config-entries-json)"

This way, I can generate my GraphQL API plugin and other plugins hosted in the monorepo all at once. The configuration defined via PHP is this one.

class PluginDataSource
{
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API for WordPress
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-for-wp',
        'zip_file' => 'graphql-api.zip',
        'main_file' => 'graphql-api.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'graphql-api-for-wp-dist',
      ],
      // GraphQL API - Extension Demo
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/extension-demo',
        'zip_file' => 'graphql-api-extension-demo.zip',
        'main_file' => 'graphql-api-extension-demo.php',
        'dist_repo_organization' => 'GraphQLAPI',
        'dist_repo_name' => 'extension-demo-dist',
      ],
    ];
  }
}

When creating a release, the plugins are generated via GitHub Actions.

This figure shows plugins generated when a release is created.

If, in the future, I add the code for yet another plugin to the repo, it will also be generated without any trouble. Investing some time and energy producing this setup now will definitely save plenty of time and energy in the future.

Issues with the Monorepo

I believe the monorepo is particularly useful when all packages are coded in the same programming language, tightly coupled, and relying on the same tooling. If instead we have multiple projects based on different programming languages (such as JavaScript and PHP), composed of unrelated parts (such as the main website code and a subdomain that handles newsletter subscriptions), or tooling (such as PHPUnit and Jest), then I don’t believe the monorepo provides much of an advantage.

That said, there are downsides to the monorepo:

  • We must use the same license for all of the code hosted in the monorepo; otherwise, we’re unable to add a LICENSE.md file at the root of the monorepo and have GitHub pick it up automatically. Indeed, leoloso/PoP initially provided several libraries using MIT and the plugin using GPLv2. So, I decided to simplify it using the lowest common denominator between them, which is GPLv2.
  • There is a lot of code, a lot of documentation, and plenty of issues, all from different projects. As such, potential contributors who were attracted to a specific project can easily get confused.
  • When tagging the code, all packages are versioned independently with that tag whether their particular code was updated or not. This is an issue with the Monorepo builder and not necessarily with the monorepo approach (Symfony has solved this problem for its monorepo).
  • The issues board needs proper management. In particular, it requires labels to assign issues to the corresponding project, or risk it becoming chaotic.
The issues board can become chaotic without labels that are associated with projects.

All these issues are not roadblocks though. I can cope with them. However, there is an issue that the monorepo cannot help me with: hosting both public and private code together.

I’m planning to create a “PRO” version of my plugin, which I will host in a private repo. However, a repo’s code is either public or private, so I’m unable to host my private code in the public leoloso/PoP repo. At the same time, I want to keep using my setup for the private repo too, particularly the generate_plugins.yml workflow (which already scopes the plugin and downgrades its code from PHP 8.0 to 7.1) and the possibility to configure it via PHP. And I want to keep it DRY, avoiding copy/pastes.

It was time to switch to the multi-monorepo.

Stage 4: Multi-monorepo

The multi-monorepo approach consists of different monorepos sharing their files with each other, linked via Git submodules. At its most basic, a multi-monorepo comprises two monorepos: an autonomous upstream monorepo, and a downstream monorepo that embeds the upstream repo as a Git submodule that’s able to access its files:

The upstream monorepo is contained within the downstream monorepo.

This approach satisfies my requirements by:

  • having the public repo leoloso/PoP be the upstream monorepo, and
  • creating a private repo leoloso/GraphQLAPI-PRO that serves as the downstream monorepo.
A private monorepo can access the files from a public monorepo.

leoloso/GraphQLAPI-PRO embeds leoloso/PoP under subfolder submodules/PoP (notice how GitHub links to the specific commit of the embedded repo):

This figure shows how the public monorepo is embedded within the private monorepo in the GitHub project.

Now, leoloso/GraphQLAPI-PRO can access all the files from leoloso/PoP. For instance, script ci/downgrade/downgrade_code.sh from leoloso/PoP (which downgrades the code from PHP 8.0 to 7.1) can be accessed under submodules/PoP/ci/downgrade/downgrade_code.sh.

In addition, the downstream repo can load the PHP code from the upstream repo and even extend it. This way, the configuration to generate the public WordPress plugins can be overridden to produce the PRO plugin versions instead:

class PluginDataSource extends UpstreamPluginDataSource
{
  public function getPluginConfigEntries(): array
  {
    return [
      // GraphQL API PRO
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/graphql-api-pro',
        'zip_file' => 'graphql-api-pro.zip',
        'main_file' => 'graphql-api-pro.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-pro-dist',
      ],
      // GraphQL API Extensions
      // Google Translate
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/google-translate',
        'zip_file' => 'graphql-api-google-translate.zip',
        'main_file' => 'graphql-api-google-translate.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-google-translate-dist',
      ],
      // Events Manager
      [
        'path' => 'layers/GraphQLAPIForWP/plugins/events-manager',
        'zip_file' => 'graphql-api-events-manager.zip',
        'main_file' => 'graphql-api-events-manager.php',
        'dist_repo_organization' => 'GraphQLAPI-PRO',
        'dist_repo_name' => 'graphql-api-events-manager-dist',
      ],
    ];
  }
}

GitHub Actions will only load workflows from under .github/workflows, and the upstream workflows are under submodules/PoP/.github/workflows; hence we need to copy them. This is not ideal, though we can avoid editing the copied workflows and treat the upstream files as the single source of truth.

To copy the workflows over, a simple Composer script can do:

{ "scripts": { "copy-workflows": [ "php -r \"copy('submodules/PoP/.github/workflows/generate_plugins.yml', '.github/workflows/generate_plugins.yml');\"", "php -r \"copy('submodules/PoP/.github/workflows/split_monorepo.yaml', '.github/workflows/split_monorepo.yaml');\"" ] } }

Then, each time I edit the workflows in the upstream monorepo, I also copy them to the downstream monorepo by executing the following command:

composer copy-workflows

Once this setup is in place, the private repo generates its own plugins by reusing the workflow from the public repo:

This figure shows the PRO plugins generated in GitHub Actions.

I am extremely satisfied with this approach. I feel it has removed all of the burden from my shoulders concerning the way projects are managed. I read about a WordPress plugin author complaining that managing the releases of his 10+ plugins was taking a considerable amount of time. That doesn’t happen here—after I merge my pull request, both public and private plugins are generated automatically, like magic.

Issues with the multi-monorepo

First off, it leaks. Ideally, leoloso/PoP should be completely autonomous and unaware that it is used as an upstream monorepo in a grander scheme—but that’s not the case.

When doing git checkout, the downstream monorepo must pass the --recurse-submodules option in order to also check out the submodules. In the GitHub Actions workflows for the private repo, the checkout must be done like this:

- uses: actions/checkout@v2
  with:
    submodules: recursive

As a result, we have to input submodules: recursive to the downstream workflow, but not to the upstream one even though they both use the same source file.

To solve this while maintaining the public monorepo as the single source of truth, the workflows in leoloso/PoP receive the value for submodules via an environment variable, CHECKOUT_SUBMODULES, like this:

env: CHECKOUT_SUBMODULES: ""; jobs: provide_data: steps: - uses: actions/checkout@v2 with: submodules: ${{ env.CHECKOUT_SUBMODULES }}

The environment value is empty for the upstream monorepo, so doing submodules: "" works well. And then, when copying over the workflows from upstream to downstream, I replace the value of the environment variable to "recursive" so that it becomes:

env: CHECKOUT_SUBMODULES: "recursive"

(I have a PHP command to do the replacement, but we could also pipe sed in the copy-workflows composer script.)
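The sed variant could look something like this (a sketch; GNU sed shown, while the macOS sed needs -i ''):

sed -i 's/CHECKOUT_SUBMODULES: ""/CHECKOUT_SUBMODULES: "recursive"/' \
  .github/workflows/generate_plugins.yml \
  .github/workflows/split_monorepo.yaml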

This leakage reveals another issue with this setup: I must review all contributions to the public repo before they are merged, or they could break something downstream. The contributors would also be completely unaware of those leakages (and they couldn’t be blamed for it). This situation is specific to the public/private-monorepo setup, where I am the only person who is aware of the full setup. While I share access to the public repo, I am the only one accessing the private one.

As an example of how things could go wrong, a contributor to leoloso/PoP might remove CHECKOUT_SUBMODULES: "" since it is superfluous. What the contributor doesn’t know is that, while that line is not needed, removing it will break the private repo.

I guess I need to add a warning!

env:
  ### ☠️ Do not delete this line! Or bad things will happen! ☠️
  CHECKOUT_SUBMODULES: ""

Wrapping up

My repo has gone through quite a journey, being adapted to the new requirements of my code and application at different stages:

  • It started as a single repo, hosting a monolithic app.
  • It became a multirepo when splitting the app into packages.
  • It was switched to a monorepo to better manage all the packages.
  • It was upgraded to a multi-monorepo to share files with a private monorepo.

Context means everything, so there is no “best” approach here—only solutions that are more or less suitable to different scenarios.

Has my repo reached the end of its journey? Who knows? The multi-monorepo satisfies my current requirements, but it hosts all of the private plugins together. If I ever need to grant contractors access to a specific private plugin while preventing them from accessing other code, then the monorepo may no longer be the ideal solution for me, and I’ll need to iterate again.

I hope you have enjoyed the journey. And, if you have any ideas or examples from your own experiences, I’d love to hear about them in the comments.

The post From a Single Repo, to Multi-Repos, to Monorepo, to Multi-Monorepo appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Tabs in HTML?

Css Tricks - Mon, 08/16/2021 - 1:31pm
You know what tabs are, Brian.

I mean… You use them every day, on every OS. Everybody knows they exist in every toolbox. All that’s left is to “just pave the cowpaths!” But when you get right down to it, it’s a lot more complicated than that.

Brian Kardell shares a bit about the progress of bringing “Tabs” to HTML. We kinda think we know what they are, but you have to be really specific when dealing with specs and defining them. It’s tricky. Then, even if you settle on a solid definition, an HTML expression of that isn’t exactly clear. There are all kinds of expressions of tabs that all make sense in their own way. Imagine marking up tabs where you put all the tabs as a row of links or buttons up top, and then a bunch of panels below that. They call that a “Table of Contents” style of markup, and it makes some kind of logical sense (“the markup looks like tabs already”). But it also has some problems, and it looks like sections-with-headers is more practical (“If you have the heading, you can build the TOC, but not vice-versa”). Spicy sections are a totally different pattern. And that’s just one problem they are facing.
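To make the two flavors concrete, here is a rough sketch of each; the element choices are mine for illustration, not what the spec work proposes:

<!-- "Table of Contents" style: the tab labels up top, the panels below -->
<ul class="tab-links">
  <li><a href="#panel-1">Tab 1</a></li>
  <li><a href="#panel-2">Tab 2</a></li>
</ul>
<div class="tab-panels">
  <div id="panel-1">First panel content</div>
  <div id="panel-2">Second panel content</div>
</div>

<!-- Sections-with-headings style: each label lives with its own content -->
<section>
  <h2>Tab 1</h2>
  <p>First panel content</p>
</section>
<section>
  <h2>Tab 2</h2>
  <p>Second panel content</p>
</section>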

I don’t envy the work, but I look forward to the progress in no small part because authoring tabs is tricky business. Not hard to do, but very hard to do right. I’ve talked in the past about how I’ve built tabs many times in jQuery where just a click handler on a row of links hides or shows some matching divs below. That “works” if you ignore accessibility entirely (e.g. how you navigate between tabs, focus management, ARIA expectations, etc).
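That naive version is only a few lines of jQuery (the selectors here are assumptions), which is exactly why it is so tempting:

// Show the matching panel, hide the rest. No keyboard support, no focus
// management, no ARIA: the accessibility gaps described above.
$(".tab-links a").on("click", function (event) {
  event.preventDefault();
  $(".tab-panels > div").hide();
  $($(this).attr("href")).show();
});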

Here’s the ShopTalk discussion and here’s a different perspective in a chat I had with Stephen on CodePen Radio where we get into our <Tabs /> React component on CodePen.

Direct Link to ArticlePermalink

The post Tabs in HTML? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Cutouts

Css Tricks - Mon, 08/16/2021 - 10:49am

Ahmad Shadeed dug into shape “cutouts” the other day. Imagine a shape with another smaller shape carved out of it. In his typical comprehensive way, Ahmad laid out the situation well—looking at tricky situations that complicate things.

The first thing I’d think of is CSS’ clip-path, since it has that circle() syntax that seems like a good fit, but no!, we need the opposite of what clip-path: circle() does, as we aren’t drawing a circle to be the clipping path here, but drawing all the way around the shape and then up into that second smaller circle and back out, like a bite out of a cookie. That puts us in clip-path: path() territory, which mercifully exists, and yet!, doesn’t quite get there because the path() syntax in CSS only works with fixed-pixel units, which is often too limiting in fluid-width layouts.
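To illustrate, a sketch of the path() approach with a circular bite taken out of the top-right corner; note that every coordinate is a pixel value, which is exactly the limitation:

/* A 200px square with a 50px quarter-circle "bite" at the top-right corner.
   The coordinates are illustrative and only line up at exactly 200px wide. */
.cookie {
  width: 200px;
  height: 200px;
  clip-path: path("M 0 0 L 150 0 A 50 50 0 0 0 200 50 L 200 200 L 0 200 Z");
}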

So that puts us at clip-path: url("#my-path"); (referencing an <svg> path), which is exactly where Ahmad starts this journey. But then he explores other options like a clever use of mask-image and a direct use of SVG <mask> and <image>, which turns out to be the winner.

Ideas like this have a weird way of entering the collective front-end developer consciousness somehow. Jay wrote up a very similar journey of wanting to do a shape cutout. Again, the problem:

clip-path defines a visible region, meaning that if you want all but a tiny chunk of the button to be visible, you need to define a path or polygon which is the inverse of the original. Here’s a demo of what I mean, using Clippy:

Jay Freestone, “Cutouts with CSS Masks”

In this case, polygon() has potential because it supports % units for flexibility (also, don’t miss Ana’s idea where the unit types are mixed within the polygon for a some-fixed-some-fluid concept).

Jay’s conclusion is that SVG has the most benefits of all the options:

[…] my overall impression is that mask-composite remains the more flexible solution, since it becomes trivial to use any SVG shape as the mask, not just a triangle or a simple polygon. The likelihood is that you’ll want to simply export an SVG and drop it in. Engineering the inverse result as clip-path is likely to get pretty hairy quickly.


Direct Link to ArticlePermalink

The post Cutouts appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

HTML is Not a Programming Language?

Css Tricks - Mon, 08/16/2021 - 4:44am

HTML is not a programming language.

I’ve heard that sentence so many times and it’s tiring. Normally, it is followed by something like, It doesn’t have logic, or, It is not Turing complete… so, obviously, it is not a programming language. Like it’s case-closed and should be the end of the conversation.

Should it be, though?

I want to look at typical arguments I hear used to belittle HTML and offer my own rebuttals to show how those claims are not completely correct.

My goal is not to prove that HTML is or is not a programming language, but to show that the three main arguments used for claiming it is not are flawed or incorrect, thus invalidating the conclusion from a logical point of view.

“HTML is a markup language, not a programming language”

This statement, by itself, sounds great… but it is wrong: markup languages can be programming languages. Not all of them are (most are not) but they can be. If we drew a Venn diagram of programming languages and markup languages, it would not be two separate circles, but two circles that slightly intersect.


A markup language that operates with variables, has control structures, loops, etc., would also be a programming language. They are not mutually exclusive concepts.

TeX and LaTeX are examples of markup languages that are also considered programming languages. It may not be practical to develop with them, but it is possible. And we can find examples online, like a BASIC interpreter or a Mars Rover controller (which won the Judges’ prize in the ICFP 2008 programming contest).
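For instance, this tiny fragment of plain TeX (which also runs in LaTeX) declares a counter variable and loops over it:

% Prints "Iteration 1." through "Iteration 3.", one per paragraph.
\newcount\i
\i=1
\loop
  Iteration \the\i.\par
  \advance\i by 1
\ifnum\i<4
\repeat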

While some markup languages might be considered programming languages, I’m not saying that HTML is one of them. The point is that the original statement is wrong: markup languages can be programming languages. Therefore, saying that HTML is not a programming language because it is a markup language is based on a false statement, and whatever conclusion you arrive at from that premise will be categorically wrong.

“HTML doesn’t have logic”

This claim demands that we clarify what “logic” means because the definition might just surprise you.

As with Turing-completeness (which we’ll definitely get to), those who bring this argument to the table seem to misunderstand what it is exactly. I’ve asked people to tell me what they mean by “logic” and have gotten interesting answers back like:

Logic is a sensible reason or way of thinking.

That’s nice if what we’re looking for is a dictionary definition of logic. But we are talking about programming logic, not just logic as a general term. I’ve also received answers like:

Programming languages have variables, conditions, loops, etc. HTML is not a programming language because you can’t use variables or conditions. It has no logic.

This is fine (and definitely better than getting into true/false/AND/OR/etc.), but also incorrect. HTML does have variables — in the form of attributes — and there are control structures that can be used along with those variables/attributes to determine what is displayed.

But how do you control those variables? You need JavaScript!

Wrong again. There are some HTML elements that have internal control logic and don’t require JavaScript or CSS to work. And I’m not talking about things like <link> or <noscript> – which are rudimentary control structures and have been part of the standard for decades. I’m referring to elements that will respond to the user input and perform conditional actions depending on the current state of the element and the value of a variable. Take the <details>/<summary> tuple or the <dialog> element as examples: when a user clicks on them, they will close if the open attribute is present, and they will open if it is not. No JavaScript required.
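For instance, this snippet is fully interactive on its own; the open attribute is the state being toggled:

<details>
  <summary>Is JavaScript required here?</summary>
  <p>No. Clicking the summary toggles the open attribute on the element.</p>
</details>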


So just saying alone that HTML isn’t a programming language because it lacks logic is misleading. We know that HTML is indeed capable of making decisions based on user input. HTML has logic, but it is inherently different from the logic of other languages that are designed to manipulate data. We’re going to need a stronger argument than that to prove that HTML isn’t a form of programming.

“HTML is not ‘Turing complete’”

OK, this is the one we see most often in this debate. It’s technically correct (the best kind of correct) to say HTML is not Turing complete, but it should spark a bigger debate than just using it as a case-closing statement.

I’m not going to get into the weeds on what it means to be Turing complete because there are plenty of resources on the topic. In fact, Lara Schenck summarizes it nicely in a post where she argues that CSS is Turing complete:

In the simplest terms, for a language or machine to be Turing complete, it means that it is capable of doing what a Turing machine could do: perform any calculation, a.k.a. universal computation. After all, programming was invented to do math although we do a lot more with it now, of course!

Because most modern programming languages are Turing complete, people use that as the definition of a programming language. But Turing-completeness is not that. It is a criterion to identify if a system (or its ruleset) can simulate a Turing machine. It can be used to classify programming languages; it doesn’t define them. It doesn’t even apply exclusively to programming languages. Take, for example, the game Minecraft (which meets that criterion) or the card game Magic: The Gathering (which also meets the criterion). Both are Turing complete but I doubt anyone would classify them as programming languages.

Turing-completeness is fashionable right now, the same way that, in the past, some considered the difference between compiled and interpreted languages to be a good criterion. We don’t have to make a big memory effort to remember when developers (mainly back-end) downplayed front-end programming (including JavaScript and PHP) as not “real programming.” You still hear it sometimes, although now faded, mumbled, and muttered.

The definition of what programming is (or is not) changes with time. I bet someone sorting through punched cards complained about how typing code in assembly was not real programming. There’s nothing universal or written in stone. There’s no actual definition.

Turing-completeness is a fair standard, I must say, but one that is biased and subjective — not in its form but in the way it is picked. Why is it that a language capable of generating a Turing-complete machine gets celebrated as a “programming language,” while another capable of generating a finite-state machine does not? It is subjective. It is an excuse like any other to differentiate between “real developers” (the ones making the claim) and those deemed inferior to them.

To add insult to injury, it is obvious that many of the people parroting the “HTML is not Turing complete” mantra don’t even know or understand what Turing-completeness means. It is not an award or a seal of quality. It is not a badge of honor. It is just a way to categorize programming languages — to group them, not define them. A programming language could be Turing complete or not in the same way that it could be interpreted or compiled, imperative or declarative, procedural or object-oriented.

So, is HTML a programming language?

If we can debase the main arguments claiming that HTML is not a programming language, does that actually mean that HTML is a programming language? No, it doesn’t. And so, the debate will live on until the HTML standard evolves or the “current definition” of programming language changes.

But as developers, we must be wary of this question as, in many cases, it is not used to spark a serious debate but to stir controversy while hiding ulterior motives: from getting easy Internet reactions, to dangerously diminishing the contribution of a group of people to the development ecosystem.

Or, as Ashley Kolodziej beautifully sums it up in her ode to HTML:

They say you’re not a real programming language like the others, that you’re just markup, and technically speaking, I suppose that’s right. Technically speaking, JavaScript and PHP are scripting languages. I remember when it wasn’t cool to know JavaScript, when it wasn’t a “real” language too. Sometimes, I feel like these distinctions are meaningless, like we built a vocabulary to hold you (and by extension, ourselves as developers) back. You, as a markup language, have your own unique value and strengths. Knowing how to work with you best is a true expertise, one that is too often overlooked.

Independent of the stance that we take on the “HTML is/isn’t a programming language” discussion, let’s celebrate it and not deny its importance: HTML is the backbone of the Internet. It’s a beautiful language with vast documentation and extensive syntax, yet so simple that it can be learned in an afternoon, and so complex that it takes years to master. Programming language or not, what really matters is that we have HTML in the first place.

The post HTML is Not a Programming Language? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Building a Cool Front End Thing Generator

Css Tricks - Fri, 08/13/2021 - 4:41am

Whether you are just starting out on the front end, or you’ve been doing it for a long time, building a tool that can generate some cool front-end magic can help you learn something new, develop your skills and maybe even get you a little notoriety.

You might have run across some of these popular online generators:

I’ve had fun building a few of these myself over the years. Basically, any time you run across some cool front-end thing, there might be an opportunity to make an interactive generator for that thing.

In this case, we are going to make an Animated Background Gradient Generator.

Scaffolding the project in Next

A nice thing about these projects is that they’re all yours. Choose whatever stack you want and get going. I’m a big fan of Next.js, so for this project, I’m going to start as a basic Create Next App project.

npx create-next-app animated-gradient-background-generator

This generates all the files we need to get started. We can edit pages/index.js to be the shell for our project.

import Head from "next/head" import Image from "next/image" export default function Home() { return ( <> <Head> <title>Animated CSS Gradient Background Generator</title> <meta name="description" content="A tool for creating animated background gradients in pure CSS." /> <link rel="icon" href="/favicon.ico" /> </Head> <main> <h1> Animated CSS Gradient Background Generator </h1> </main> </> ) } Animated gradients?

At the time I’m writing this article, if you do a search for animated CSS gradient background, the first result is this Pen by Manuel Pinto.

Let’s take a look at the CSS:

body {
  background: linear-gradient(-45deg, #ee7752, #e73c7e, #23a6d5, #23d5ab);
  background-size: 400% 400%;
  animation: gradient 15s ease infinite;
}

@keyframes gradient {
  0% {
    background-position: 0% 50%;
  }
  50% {
    background-position: 100% 50%;
  }
  100% {
    background-position: 0% 50%;
  }
}

This is a great example that we can use as the foundation for the generated animation.

A React component to describe an animated gradient

We can break out a few possible configurable options for the generator:

  • An array of gradient colors
  • The angle of the gradient
  • The speed of the animation

To put things in context, we want to provide these settings as data throughout our little app using a React context provider, context/SettingsContext.js, along with some defaults.

import React, { useState, createContext } from "react"

const SettingsContext = createContext({ colorSelection: [] })

const SettingsProvider = ({ children }) => {
  const [colorSelection, setColorSelection] = useState([
    "deepskyblue",
    "darkviolet",
    "blue",
  ])
  const [angle, setAngle] = useState(300)
  const [speed, setSpeed] = useState(5)

  return (
    <SettingsContext.Provider
      value={{
        colorSelection,
        setColorSelection,
        angle,
        setAngle,
        speed,
        setSpeed,
      }}
    >
      {children}
    </SettingsContext.Provider>
  )
}

export { SettingsContext, SettingsProvider }

For our generator’s components, we want to create:

  • a control components to adjust these settings,
  • a visual display component for generated animated gradient, and
  • a component for the CSS code output.

Let’s start with a Controls component that contains the various inputs we used to adjust the settings.

import Colors from "./Colors" const Controls = (props) => ( <> <Colors /> </> ) export default Controls

We can add our SettingsProvider and Controls components to pages/index.js:

import Head from "next/head" import Image from "next/image" import { SettingsProvider } from "../context/SettingsContext" import Controls from "../components/Controls" import Output from "../components/Output" export default function Home() { return ( <> <Head> ... </Head> <SettingsProvider> <main style={{ textAlign: "center", padding: "64px" }}> <h1>Animated CSS Gradient Background Generator</h1> <Controls /> <Output /> </main> </SettingsProvider> </> ) }

Our SettingsProvider begins with the three colors from our CodePen example as defaults. We can verify that we are getting the color settings via our SettingsContext in a new Colors component.

import React, { useContext } from "react" import { SettingsContext } from "../context/SettingsContext" const Colors = () => { const { colorSelection } = useContext(SettingsContext) return ( <> {colorSelection.map((color) => ( <div>{color}</div> ))} </> ) } export default Colors

Let’s use the Colors component to display individual color swatches with a small button to delete via our SettingsContext.

import React, { useContext } from "react" import { SettingsContext } from "../context/SettingsContext" const Colors = () => { const { colorSelection, setColorSelection } = useContext(SettingsContext) const onDelete = (deleteColor) => { setColorSelection(colorSelection.filter((color) => color !== deleteColor)) } return ( <div> {colorSelection.map((color) => ( <div key={color} style={{ background: color, display: "inline-block", padding: "32px", margin: "16px", position: "relative", borderRadius: "4px", }} > <button onClick={() => onDelete(color)} style={{ background: "crimson", color: "white", display: "inline-block", borderRadius: "50%", position: "absolute", top: "-8px", right: "-8px", border: "none", fontSize: "18px", lineHeight: 1, width: "24px", height: "24px", cursor: "pointer", boxShadow: "0 0 1px #000", }} > × </button> </div> ))} </div> ) } export default Colors

You may notice that we have been using inline styles for CSS at this point. Who cares! We’re having fun here, so we can do whatever floats our boats.

Handling colors

Next, we create an AddColor component with a button that opens a color picker used to add more colors to the gradient.

For the color picker, we will install react-color and use the ChromePicker option.

npm install react-color

Once again, we will utilize SettingsContext to update the gradient color selection.

import React, { useState, useContext } from "react"
import { ChromePicker } from "react-color"
import { SettingsContext } from "../context/SettingsContext"

const AddColor = () => {
  const [color, setColor] = useState("white")
  const { colorSelection, setColorSelection } = useContext(SettingsContext)

  return (
    <>
      <div style={{ display: "inline-block", paddingBottom: "32px" }}>
        <ChromePicker
          header="Pick Colors"
          color={color}
          onChange={(newColor) => {
            setColor(newColor.hex)
          }}
        />
      </div>
      <div>
        <button
          onClick={() => {
            setColorSelection([...colorSelection, color])
          }}
          style={{
            background: "royalblue",
            color: "white",
            padding: "12px 16px",
            borderRadius: "8px",
            border: "none",
            fontSize: "16px",
            cursor: "pointer",
            lineHeight: 1,
          }}
        >
          + Add Color
        </button>
      </div>
    </>
  )
}

export default AddColor

Handling angle and speed

Now that our color controls are finished, let’s add some components with range inputs for setting the angle and animation speed.

Here’s the code for AngleRange, with SpeedRange being very similar.

import React, { useContext } from "react" import { SettingsContext } from "../context/SettingsContext" const AngleRange = () => { const { angle, setAngle } = useContext(SettingsContext) return ( <div style={{ padding: "32px 0", fontSize: "18px" }}> <label style={{ display: "inline-block", fontWeight: "bold", width: "100px", textAlign: "right", }} htmlFor="angle" > Angle </label> <input type="range" id="angle" name="angle" min="-180" max="180" value={angle} onChange={(e) => { setAngle(e.target.value) }} style={{ margin: "0 16px", width: "180px", position: "relative", top: "2px", }} /> <span style={{ fontSize: "14px", padding: "0 8px", position: "relative", top: "-2px", width: "120px", display: "inline-block", }} > {angle} degrees </span> </div> ) } export default AngleRange

Now for the fun part: rendering the animated background. Let’s apply this to the entire background of the page with an AnimatedBackground wrapper component.

import React, { useContext } from "react" import { SettingsContext } from "../context/SettingsContext" const AnimatedBackground = ({ children }) => { const { colorSelection, speed, angle } = useContext(SettingsContext) const background = "linear-gradient(" + angle + "deg, " + colorSelection.toString() + ")" const backgroundSize = colorSelection.length * 60 + "%" + " " + colorSelection.length * 60 + "%" const animation = "gradient-animation " + colorSelection.length * Math.abs(speed - 11) + "s ease infinite" return ( <div style={{ background, "background-size": backgroundSize, animation, color: "white" }}> {children} </div> ) } export default AnimatedBackground

We named the gradient’s CSS animation gradient-animation, so we need to define those keyframes in styles/globals.css for the animation to run:

@keyframes gradient-animation {
  0% {
    background-position: 0% 50%;
  }
  50% {
    background-position: 100% 50%;
  }
  100% {
    background-position: 0% 50%;
  }
}

Making it useful to users

Next, let’s add some code output so people can copy the generated CSS and use it in their own projects.

import React, { useContext, useState } from "react"
import { SettingsContext } from "../context/SettingsContext"

const Output = () => {
  const [copied, setCopied] = useState(false)
  const { colorSelection, speed, angle } = useContext(SettingsContext)

  const background =
    "linear-gradient(" + angle + "deg," + colorSelection.toString() + ")"
  const backgroundSize =
    colorSelection.length * 60 + "% " + colorSelection.length * 60 + "%"
  const animation =
    "gradient-animation " +
    colorSelection.length * Math.abs(speed - 11) +
    "s ease infinite"

  const code = `.gradient-background {
  background: ${background};
  background-size: ${backgroundSize};
  animation: ${animation};
}

@keyframes gradient-animation {
  0% {
    background-position: 0% 50%;
  }
  50% {
    background-position: 100% 50%;
  }
  100% {
    background-position: 0% 50%;
  }
}`

  return (
    <div
      style={{ position: "relative", maxWidth: "640px", margin: "64px auto" }}
    >
      <pre
        style={{
          background: "#fff",
          color: "#222",
          padding: "32px",
          width: "100%",
          borderRadius: "4px",
          textAlign: "left",
          whiteSpace: "pre",
          boxShadow: "0 2px 8px rgba(0,0,0,.33)",
          overflowX: "scroll",
        }}
      >
        <code>{code}</code>
        <button
          style={{
            position: "absolute",
            top: "8px",
            right: "8px",
            background: "royalblue",
            color: "white",
            padding: "8px 12px",
            borderRadius: "8px",
            border: "none",
            fontSize: "16px",
            cursor: "pointer",
            lineHeight: 1,
          }}
          onClick={() => {
            setCopied(true)
            navigator.clipboard.writeText(code)
          }}
        >
          {copied ? "copied" : "copy"}
        </button>
      </pre>
    </div>
  )
}

export default Output

Making it fun

It is sometimes fun (and useful) to add a button that sets random values on a generator like this. That gives people a way to quickly experiment and see what kinds of results they can get out of the tool. It is also an opportunity to look up cool stuff like how to generate random hex colors.

import React, { useContext } from "react"
import { SettingsContext } from "../context/SettingsContext"

const Random = () => {
  const { setColorSelection, setAngle, setSpeed } = useContext(SettingsContext)

  const goRandom = () => {
    const numColors = 3 + Math.round(Math.random() * 3)
    const colors = [...Array(numColors)].map(() => {
      // 16777215 is 0xffffff; padStart keeps colors with leading
      // zeros (like 0x00ff00) from producing a too-short hex string.
      return (
        "#" +
        Math.floor(Math.random() * 16777215)
          .toString(16)
          .padStart(6, "0")
      )
    })
    setColorSelection(colors)
    // Stay inside the -180 to 180 range used by the angle slider.
    setAngle(Math.floor(Math.random() * 361) - 180)
    setSpeed(Math.floor(Math.random() * 10) + 1)
  }

  return (
    <div style={{ padding: "48px 0 16px" }}>
      <button
        onClick={goRandom}
        style={{
          fontSize: "24px",
          fontWeight: 200,
          background: "rgba(255,255,255,.9)",
          color: "blue",
          padding: "24px 48px",
          borderRadius: "8px",
          cursor: "pointer",
          boxShadow: "0 0 4px #000",
          border: "none",
        }}
      >
        RANDOM
      </button>
    </div>
  )
}

export default Random

Wrapping up

There are a few final things you’ll want to do to wrap up your project for its initial release:

  • Update package.json with your project information.
  • Add links to your personal site and the project’s repository, and give credit where it’s due.
  • Update the README.md file that was generated with default content by Create Next App.

That’s it! We’re ready to release our new cool front end thing generator and reap the rewards of fame and fortune that await us!

You can see the code for this project on GitHub and the demo is hosted on Netlify.

The post Building a Cool Front End Thing Generator appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.
