Front End Web Development

AnimXYZ

CSS-Tricks - Mon, 01/18/2021 - 5:41am

There are quite a few CSS animation libraries. They tend to be a pile of class names that you can apply as needed like “bounce” or “slide-right” and it’ll… do those things. They tend to be pretty opinionated with nice defaults, and not particularly designed around customization.

It looks like AnimXYZ is designed to be highly customizable, calling itself “the first composable CSS animation toolkit.”

You use as many of the different composable bits as you need to get the in/out animation you want. Play with their builder and you’ll see output like:

<div class="square-group" xyz="tall-2 duration-6 ease-out-back stagger-1 skew-left-2 big-25% fade-50% right-5" > <div class="square xyz-out"></div> <div class="square xyz-out"></div> <div class="square xyz-out"></div> </div>

The class name xyz-out becomes xyz-in to trigger the opposite animation.

I don’t love it when libraries use made up HTML attributes to control themselves. It’s unlikely that web standards will use xyz in the future, but who knows, and if this goes on enough production sites, that door is closed forever. But worse, it encourages other libraries to do the same.

All those attribute values are reminiscent of Tailwind. To use Tailwind effectively, the build process runs PurgeCSS to remove all unused classes, which will serve a tiny fraction of the complete set of classes Tailwind offers. I think of that because the processed stylesheet of AnimXYZ is ~9.7 kB compressed, which is larger than the file size Tailwind uses as an example on their marketing page. The point being, if classes were used, there would probably be a more straightforward way of purging the unused classes, which I bet would make the size almost negligible. Perhaps the JavaScript framework-specific usage is more clever.

But those criticisms aside, it’s cool! Not only are there smart defaults that are highly composable, you have 100% control via CSS Custom Properties.
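For example, overriding a property on a specific element might look something like this. The variable names below are illustrative assumptions, not AnimXYZ's documented API, so check their docs for the real ones:

/* Hypothetical overrides; consult the AnimXYZ docs for the actual variable names */
.my-element {
  --xyz-translate-y: 50%;  /* travel distance for the in/out animation */
  --xyz-opacity: 0;        /* start fully transparent */
  --xyz-duration: 0.8s;    /* slow the whole thing down */
}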

[CodePen demo embed]

Don’t miss the XYZ-ray button on the lower right of the website that lets you see what animations are powering what elements. It’s also on the docs which are super nice.

There is just something nice about declarative animations. I remember chatting with Matt Perry about Framer Motion and enjoying its approach.


State of JavaScript 2020

CSS-Tricks - Mon, 01/18/2021 - 5:40am

We rounded up a bunch of published 2020 annual reports right before the year ended and compiled them into a big ol’ list. The end of the list called out a couple of in-progress surveys, one of which was the 2020 State of JavaScript. Well, the results are in and available to check out!

Just shy of 24,000 folks participated in this year’s survey… almost exactly 2,000 more than 2019.

I love charts like this:

Notice how quickly some technologies take off then start to gain negative opinions, even as the rate of adoption increases.

What I like about this particular survey (and the State of CSS) is how the data is readily available to export in addition to all the great and telling charts. That opens up the possibility of creating your own reports and gleaning your own insights. But here’s what I’ve found interesting in the short time I’ve spent looking at it:

  • React’s facing negative opinions. It’s not so much that everybody’s fleeing from it, but the “shiny” factor may be waning (coming in at 88% satisfaction versus a 93% peak in 2017). Is that because it suffers from the same frustration that devs expressed with a lack of documentation in other surveys? I don’t know, but the fact that we see both growth and a sway toward negative opinions is interesting enough that I’ll want to see where it goes in 2021.
  • Awwwww, Gulp, what happened?! Wow, what a change in perception. Its usage has dipped a bit, but the impression of it is now solidly in “negative opinions” territory. I still use it personally on a number of projects and it does exactly what I need, but I get that there are better build processes these days that don’t require writing a bunch of pipes and whatnot.
  • Hello, Svelte. It’s the fourth most used framework (15%) but enjoys the highest level of satisfaction (89%). I’m already interested in giving it a go, but this makes me want to dive into it even more — which is consistent with the fact that it also garners the most interest of all frameworks (68%).
  • JavaScript is sorta overused and sorta overly complex. Well, according to the polls. It’s just so interesting that the distribution of opinions is almost perfectly even. At the same time, the vast majority of folks (80.6%) believe JavaScript is heading in the right direction.



On Auto-Generated Atomic CSS

CSS-Tricks - Fri, 01/15/2021 - 10:48am

Robin Weser’s “The Shorthand-Longhand Problem in Atomic CSS” is an interesting journey through a tricky problem. The point is that when you take on the job of converting something HTML- and CSS-like into actual HTML and CSS, there are edge cases that you’ll have to program yourself out of, if you even can at all. In this case, Fela (which we just mentioned) turns CSS into “atomic” classes, but when you mix together shorthand and longhand, the resulting classes, mixed with the cascade, can cause mistakes.

I think this whole idea of CSS-in-JS that produces Atomic CSS is pretty interesting, so let’s take a quick step back and look at that.

Atomic CSS means one class = one job

Like this:

.mb-8 { margin-bottom: 2rem; }

Now imagine, like, thousands of those that are available to use and can do just about anything CSS can do.
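To build a design out of them, you stack the classes in your markup. A hypothetical example:

<!-- Each class does exactly one thing -->
<div class="mb-8 p-4 text-center">
  <h2 class="mb-2 text-xl">Atomic CSS</h2>
  <p class="text-sm">One class, one job.</p>
</div>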

Why would you do that?

Here’s some reasons:

  • If you go all-in on that idea, it means that you’ll ship less CSS because there are no property/value pairs that are repeated, and there are no made-up-for-authoring-reasons class names. I would guess an all-atomic stylesheet (trimmed for usage, which we’ll get to) is a quarter the size of a hand-authored stylesheet, or smaller. Shipping less CSS is significant because CSS is a blocking resource.
  • You get to avoid naming things.
  • You get some degree of design consistency “for free” if you limit the available classes.
  • Some people just prefer it and say it makes them faster.
How do you get Atomic CSS?

There is nothing stopping you from just doing it yourself. That’s what GitHub did with Primer and Facebook did in FB5 (not that you should do what mega corporations do!). They decided on a bunch of utility styles and shipped it (to themselves, largely) as a package.

Perhaps the originator of the whole idea was Tachyons, which is just a big ol’ opinionated pile of classes you can grab and use as-is.

But for the most part…

Tailwind is the big player.

Tailwind has a bunch of nice defaults, but it does some very smart things beyond being a collection of atomic styles:

  • It’s configurable. You tell it what you want all those classes to do.
  • It encourages you to “purge” the unused classes. You really need to get this part right, as you aren’t really getting the benefit of Atomic CSS if you don’t (a minimal config sketch follows this list).
  • It’s got a UI library so you can get moving right away.
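Here’s what that configuration and purging might look like in a Tailwind v2 tailwind.config.js. A sketch; point the globs at wherever your templates actually live:

// tailwind.config.js
module.exports = {
  // PurgeCSS scans these files and drops any class they don't use
  purge: ['./src/**/*.html', './src/**/*.js'],
  theme: {
    extend: {},
  },
  variants: {},
  plugins: [],
};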
Wait weren’t we talking about automatically-generated Atomic CSS?

Oh right.

It’s worth mentioning that Yahoo was also an early player here. Their big idea is that you’d essentially use functions as class names (e.g. class="P(20px)") and that would be processed into a class (both in the HTML and CSS) during a build step. I’m not sure how popular that really got, but you can see how it’s not terribly dissimilar to Tailwind + PurgeCSS.

These days, you don’t have to write Atomic CSS to get Atomic CSS. From Robin’s article:

It allows us to write our styles in a familiar “monolithic” way, but get Atomic CSS out. This increases reusability and decreases the final CSS bundle size. Each property-value pair is only rendered once, namely on its first occurrence. From there on, every time we use that specific pair again, we can reuse the same class name from a cache. Some libraries that do that are:

Fela
Styletron
React Native Web
Otion
StyleSheet

In my honest opinion, I think that this is the only reasonable way to actually use Atomic CSS as it does not impact the developer experience when writing styles. I would not recommend to write Atomic CSS by hand.

Johan Holmerin wrote about style9 here on CSS-Tricks too which does the same.
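To make that concrete, here is roughly what this looks like with Fela’s createRenderer and renderRule. A sketch based on Fela’s documented API, not code from Robin’s article:

import { createRenderer } from 'fela';

const renderer = createRenderer();

// One "monolithic" rule, written the familiar way
const rule = () => ({
  marginBottom: '2rem',
  color: 'tomato',
});

// Fela renders each property-value pair as its own atomic class
// (e.g. "a b") and caches the pair so later repeats reuse the class
const className = renderer.renderRule(rule);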

I think that’s neat. I’ve tried writing Atomic CSS directly a number of times and I just don’t like it. Who knows why. I’ve learned lots of new things in my life, and this one just doesn’t click with me. But I definitely like the idea of computers doing whatever they have to do to boost web performance in production. If a build step turns my authored CSS into Atomic CSS… hey that’s cool. There are five libraries above that do it, so the concept certainly has legs.

It makes sense that the approaches are based on CSS-in-JS, as they absolutely need to process both the markup and the CSS — so that’s the context that makes the most sense.

What do y’all think?


3 Approaches to Integrate React with Custom Elements

CSS-Tricks - Fri, 01/15/2021 - 9:26am

In my role as a web developer who sits at the intersection of design and code, I am drawn to Web Components because of their portability. It makes sense: custom elements are fully-functional HTML elements that work in all modern browsers, and the shadow DOM encapsulates the right styles with a decent surface area for customization. It’s a really nice fit, especially for larger organizations looking to create consistent user experiences across multiple frameworks, like Angular, Svelte and Vue.

In my experience, however, there is an outlier where many developers believe that custom elements don’t work, specifically those who work with React, which is, arguably, the most popular front-end library out there right now. And it’s true, React does have some definite opportunities for increased compatibility with the web components specifications; however, the idea that React cannot integrate deeply with Web Components is a myth.

In this article, I am going to walk through how to integrate a React application with Web Components to create a (nearly) seamless developer experience. We will look at React best practices and its limitations, then create generic wrappers and custom JSX pragmas in order to more tightly couple our custom elements and today’s most popular framework.

Coloring in the lines

If React is a coloring book — forgive the metaphor, I have two small children who love to color — there are definitely ways to stay within the lines to work with custom elements. To start, we’ll write a very simple custom element that attaches a text input to the shadow DOM and emits an event when the value changes. For the sake of simplicity, we’ll be using LitElement as a base, but you can certainly write your own custom element from scratch if you’d like.

[CodePen demo embed]

Our super-cool-input element is basically a wrapper with some styles for a plain ol’ <input> element that emits a custom event. It has a reportValue method for letting users know the current value in the most obnoxious way possible. While this element might not be the most useful, the techniques we will illustrate while plugging it into React will be helpful for working with other custom elements.
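The embedded demo isn’t reproduced here, but a minimal sketch of such an element, assuming LitElement as the base (styles omitted, internals hypothetical), could look like this:

import { LitElement, html } from 'lit-element';

class SuperCoolInput extends LitElement {
  render() {
    // Re-dispatch the native input event as a composed custom event
    return html`<input @input=${this._onInput} />`;
  }

  _onInput(event) {
    this.value = event.target.value;
    this.dispatchEvent(new CustomEvent('custom-input', {
      detail: { value: this.value },
      bubbles: true,
      composed: true, // let the event cross the shadow boundary
    }));
  }

  // Announce the current value in the most obnoxious way possible
  reportValue() {
    alert(this.value);
  }
}

customElements.define('super-cool-input', SuperCoolInput);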

Approach 1: Use ref

According to React’s documentation for Web Components, “[t]o access the imperative APIs of a Web Component, you will need to use a ref to interact with the DOM node directly.”

This is necessary because React currently doesn’t have a way to listen to native DOM events (preferring, instead, to use its own proprietary SyntheticEvent system), nor does it have a way to declaratively access the current DOM element without using a ref.

We will make use of React’s useRef hook to create a reference to the native DOM element we have defined. We will also use React’s useEffect and useState hooks to gain access to the input’s value and render it to our app. We will also use the ref to call our super-cool-input’s reportValue method if the value is ever a variant of the word “rad.”

[CodePen demo embed]
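A condensed sketch of what that demo does (the exact markup in the embed may differ):

import React, { useEffect, useRef, useState } from 'react';

function App() {
  const coolInput = useRef(null);
  const [value, setValue] = useState('');

  const eventListener = (event) => {
    setValue(event.detail.value);
    // Call the element's imperative API for rad-adjacent values
    if (/rad/i.test(event.detail.value)) {
      coolInput.current.reportValue();
    }
  };

  useEffect(() => {
    coolInput.current.addEventListener('custom-input', eventListener);
    return () => {
      coolInput.current.removeEventListener('custom-input', eventListener);
    };
  });

  return (
    <div>
      <super-cool-input ref={coolInput}></super-cool-input>
      <p>Value: {value}</p>
    </div>
  );
}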

One thing to take note of in the example above is our React component’s useEffect block.

useEffect(() => {
  coolInput.current.addEventListener('custom-input', eventListener);
  return () => {
    coolInput.current.removeEventListener('custom-input', eventListener);
  }
});

The useEffect block creates a side effect (adding an event listener not managed by React), so we have to be careful to remove the event listener when the component unmounts or re-renders so that we don’t have any unintentional memory leaks.

While the above example simply binds an event listener, this is also a technique that can be employed to bind to DOM properties (defined as entries on the DOM object, rather than React props or DOM attributes).

This isn’t too bad. We have our custom element working in React, and we’re able to bind to our custom event, access the value from it, and call our custom element’s methods as well. While this does work, it is verbose and doesn’t really look like React.

Approach 2: Use a wrapper

Our next attempt at using our custom element in our React application is to create a wrapper for the element. Our wrapper is simply a React component that passes down props to our element and creates an API for interfacing with the parts of our element that aren’t typically available in React.

Here, we have moved the complexity into a wrapper component for our custom element. The new CoolInput React component manages creating a ref while adding and removing event listeners for us so that any consuming component can pass props in like any other React component.

function CoolInput(props) {
  const ref = useRef();
  const { children, onCustomInput, ...rest } = props;

  function invokeCallback(event) {
    if (onCustomInput) {
      onCustomInput(event, ref.current);
    }
  }

  useEffect(() => {
    const { current } = ref;
    current.addEventListener('custom-input', invokeCallback);
    return () => {
      current.removeEventListener('custom-input', invokeCallback);
    }
  });

  return <super-cool-input ref={ref} {...rest}>{children}</super-cool-input>;
}

On this component, we have created a prop, onCustomInput, that, when present, triggers an event callback from the parent component. Unlike a normal event callback, we chose to add a second argument that passes along the current value of the CoolInput’s internal ref.
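Consuming it then looks like any other React component. For example (assuming the custom event carries its value on event.detail):

<CoolInput onCustomInput={(event, element) => {
  console.log('Current value:', event.detail.value);
}}>
  Type something cool
</CoolInput>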

[CodePen demo embed]

Using these same techniques, it is possible to create a generic wrapper for a custom element, such as this reactifyLitElement component from Mathieu Puech. This particular component takes on defining the React component and managing the entire lifecycle.

Approach 3: Use a JSX pragma

One other option is to use a JSX pragma, which is sort of like hijacking React’s JSX parser and adding our own features to the language. In the example below, we import the package jsx-native-events from Skypack. This pragma adds an additional prop type to React elements, and any prop that is prefixed with onEvent adds an event listener to the host.

To invoke a pragma, we need to import it into the file we are using and call it using the /** @jsx <PRAGMA_NAME> */ comment at the top of the file. Your JSX compiler will generally know what to do with this comment (and Babel can be configured to make this global). You might have seen this in libraries like Emotion.

An <input> element with the onEventInput={callback} prop will run the callback function whenever an event with the name 'input' is dispatched. Let’s see how that looks for our super-cool-input.
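Sketching that out for our element (the exact pragma export name may differ; check the jsx-native-events README):

/** @jsx jsx */
import { jsx } from 'jsx-native-events'; // assumed export name

function App() {
  // onEventCustomInput listens for the 'custom-input' event on the element
  return (
    <super-cool-input
      onEventCustomInput={(event) => console.log(event.detail.value)}
    />
  );
}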

[CodePen demo embed]

The code for the pragma is available on GitHub. If you want to bind to native properties instead of React props, you can use react-bind-properties. Let’s take a quick look at that:

import React from 'react'

/**
 * Convert a string from camelCase to kebab-case
 * @param {string} string - The base string (ostensibly camelCase)
 * @return {string} - A kebab-case string
 */
const toKebabCase = string => string.replace(/([a-z0-9]|(?=[A-Z]))([A-Z])/g, '$1-$2').toLowerCase()

/** @type {Symbol} - Used to save reference to active listeners */
const listeners = Symbol('jsx-native-events/event-listeners')

const eventPattern = /^onEvent/

export default function jsx (type, props, ...children) {
  // Make a copy of the props object
  const newProps = { ...props }
  if (typeof type === 'string') {
    newProps.ref = (element) => {
      // Merge existing ref prop
      if (props && props.ref) {
        if (typeof props.ref === 'function') {
          props.ref(element)
        } else if (typeof props.ref === 'object') {
          props.ref.current = element
        }
      }

      if (element) {
        if (props) {
          const keys = Object.keys(props)
          /** Get all keys that have the `onEvent` prefix */
          keys
            .filter(key => key.match(eventPattern))
            .map(key => ({
              key,
              eventName: toKebabCase(
                key.replace('onEvent', '')
              ).replace('-', '')
            }))
            .map(({ eventName, key }) => {
              /** Add the listeners Map if not present */
              if (!element[listeners]) {
                element[listeners] = new Map()
              }

              /** If the listener hasn't been attached, attach it */
              if (!element[listeners].has(eventName)) {
                element.addEventListener(eventName, props[key])
                /** Save a reference to avoid listening to the same value twice */
                element[listeners].set(eventName, props[key])
              }
            })
        }
      }
    }
  }
  return React.createElement.apply(null, [type, newProps, ...children])
}

Essentially, this code takes any existing props with the onEvent prefix, transforms each one into an event name, and adds the value passed to that prop (ostensibly a function with the signature (e: Event) => void) as an event listener on the element instance.

Looking forward

As of the time of this writing, React recently released version 17. The React team had initially planned to release improvements for compatibility with custom elements; unfortunately, those plans seem to have been pushed back to version 18.

Until then it will take a little extra work to use all the features custom elements offer with React. Hopefully, the React team will continue to improve support to bridge the gap between React and the web platform.


Proper Tabbing to Interactive Elements in Firefox on macOS

CSS-Tricks - Thu, 01/14/2021 - 2:48pm

I just had to debug an issue with focusable elements in Firefox. Someone reported to me that when tabbing to a certain element within a CodePen embed, it shot the scroll position to the top of the page (WTF?!). So, I went to go debug the problem by tabbing through an example page in Firefox, and this is what I saw:

I didn’t even know what to make of that. It was like some elements you could tab to but not others? You can tab to <button>s but not <a>s? Uhhhhh, that doesn’t seem right that you can’t tab to links in Firefox?

After searching and asking around, it turns out it’s this preference at the OS level on macOS.

System Preferences > Keyboard > Shortcuts > Use keyboard navigation to move focus between controls

If you have to turn that on, you also have to restart Firefox. Once you have, then you can tab to things you’d expect to be able to tab to, like links.

About that bug with the scrolling to the top of the page. See that “Skip Results Iframe” link that shows up when tabbing through the CodePen Embed? It only shows up when :focus-ed (as the point of it is to skip over the <iframe> rather than being forced to tab through it). I “hid” it by doing a position: absolute; top: -9999px; left: -9999px thing (old muscle memory), then removing those values when in focus. For some reason, when tabbed to, Firefox would see those values and instantly jump the page up, even though the focus style moved it back into a normal place. Must have been some kind of race condition thing.

I also found it very silly that Firefox would do that to the parent page when that link was inside an iframe. I fixed it up using a more vetted accessible hiding technique.
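That vetted technique is usually some variant of the classic visually-hidden utility, which keeps the element in the layout flow instead of flinging it off-screen. A sketch (the class name is an assumption):

.skip-link:not(:focus) {
  position: absolute;
  width: 1px;
  height: 1px;
  padding: 0;
  margin: -1px;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  white-space: nowrap;
  border: 0;
}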


Building an Ethereum app using Redwood.js and Fauna

CSS-Tricks - Thu, 01/14/2021 - 2:45pm

With Bitcoin’s price recently climbing over $20k USD, and then breaking $30k, I thought it was worth taking a deep dive back into creating Ethereum applications. Ethereum, as you should know by now, is a public (meaning, open-to-everyone-without-restrictions) blockchain that functions as a distributed consensus and data processing network, with the data being in the canonical form of “transactions” (txns). However, the current capabilities of Ethereum let it store (constrained by gas fees) and process (constrained by block size or the size of the parties participating in consensus) only so many txns and txns/sec. Now, since this is a “how to” article on building with Redwood and Fauna and not an article on “how does […],” I will not go further into the technical details about how Ethereum works, what constraints it has and does not have, et cetera. Instead, I will assume you, as the reader, already have some understanding of Ethereum and how to build on it or with it.

I realized that there will be some new people stumbling onto this post with no prior experience with Ethereum, and it would behoove me to point these readers in some direction. Thankfully, as of the time of this rewriting, Ethereum recently revamped their Developers page with tons of resources and tutorials. I highly recommend newcomers go through it!

That said, I will be providing relevant specifics as we go along so that anyone familiar with building Ethereum apps, Redwood.js apps, or apps that rely on Fauna can easily follow the content in this tutorial. With that out of the way, let’s dive in!

Preliminaries

This project is a fork of the Emanator monorepo, a project that is well described by Patrick Gallagher, one of the creators of the app, in the blog post he wrote for his team’s Superfluid hackathon submission. While Patrick’s app used Heroku for its database, I will be showing how you can use Fauna with this same app!

Since this project is a fork, make sure to have downloaded the MetaMask browser extension before continuing.

Fauna

Fauna is a web-native GraphQL interface, with support for custom business logic and integration with the serverless ecosystem, enabling developers to simplify code and ship faster. The underlying globally-distributed storage and compute fabric is fast, consistent, and reliable, with a modern security infrastructure. Fauna is easy to get started with and offers a 100 percent serverless experience with nothing to manage.

Fauna also provides us with a high-availability solution: each globally located server contains a partition of our database, asynchronously replicating our data with each request and transaction made.

Some of the benefits to using Fauna can be summarized as: 

  • Transactional 
  • Multi-document 
  • Geo-distributed 

In short, Fauna frees the developer from worrying about single- or multi-document solutions. It guarantees consistent data without burdening the developer with modeling their system to avoid consistency issues. To get a good overview of how Fauna does this, see this blog post about the FaunaDB distributed transaction protocol.

There are a few other alternatives that one could choose instead of using Fauna such as: 

  • Firebase 
  • Cassandra 
  • MongoDB 

But these options don’t give us the ACID guarantees that Fauna does without compromising scale. ACID stands for:

  • Atomic: all transactions are a single unit of truth; either they all pass or none do. If we have multiple transactions in the same request, then either both succeed or neither does; one cannot fail while the other succeeds.
  • Consistent: a transaction can only bring the database from one valid state to another. That is, any data written to the database must follow the rules set out by the database, ensuring that all transactions are legal.
  • Isolation: when transactions are made concurrently, they leave the database in the same state it would be in if each request were made sequentially.
  • Durability: any transaction that is made and committed to the database is persisted, regardless of downtime or failure of the system.
Redwood.js

Since I’ve used Fauna several times, I can vouch for its database first-hand, and of all the things I enjoy about it, what I love the most is how simple and easy it is to use! Not only that, but Fauna is also great and easy to pair with GraphQL and GraphQL tools like Apollo Client and Apollo Server! However, we will not be using Apollo Client and Apollo Server directly. We’ll be using Redwood.js instead, a full-stack JavaScript/TypeScript (not production-ready) serverless framework which comes prepackaged with Apollo Client/Server!

You can check out Redwood.js on its site, and the GitHub page.

Redwood.js is a newer framework to come out of the woodwork (lol) and was started by Tom Preston-Werner (one of the founders of GitHub). Even so, do be warned that this is an opinionated web-app framework, coming with a lot of the dev environment decisions already made for you. While some folk may not like this approach, it does offer us a faster way to build Ethereum apps, which is what this post is all about.

Superfluid

One of the challenges of working with Ethereum applications is block confirmations. The corollary to block confirmations is txn confirmations (i.e. data), and confirmations take time, which means time (usually minutes) that the user must wait until a computation they initiated (either directly via a UI or indirectly via another smart contract) is considered truthful or trustworthy. Superfluid is a protocol that aims to address this issue by introducing cashflows or txn streams to enable real-time financial applications; that is, apps where the user no longer needs to wait for txn confirmations and can immediately follow up on the next set of computational actions.

Learn more about Superfluid by reading their documentation.

Emanator

Patrick’s team did something really cool and applied Superfluid’s streaming functionality to NFTs, allowing a user to “mint a continuous supply of NFTs”. This stream of NFTs can then be sold via auctions. Another interesting part of the Emanator app is that these NFTs are for creators, artists 👩‍🎨, or musicians 🎼.

There are a lot more technical details about how this application works, like the use of a Superfluid Instant Distribution Agreement (IDA), revenue split per auction, auction process, and the smart contract itself; however, since this is a “how-to” and not a “how does […]” tutorial, I’ll leave you with a link to the README.md of the original Emanator `monorepo`, if you want to learn more.  

Finally, let’s get to some code!

Setup

1. Download the repo from redwood-eth-with-fauna

Git clone the redwood-eth-with-fauna repo in your terminal, then open it in your favorite text editor or IDE. For greater cognitive ease, I’ll be using VSCode for this tutorial.

2. Install app dependencies and set up environment variables 🔐

To install this project’s dependencies after you’ve cloned the repo, just run:

yarn

…at the root of the directory. Then, we need to get our .env file from our .env.example file. To do that run:

cp .env.example .env

In your .env file, you still need to provide INFURA_ENDPOINT_KEY. Contrary to what you might initially think, this variable is actually the PROJECT ID of your Infura app.

If you don’t have an Infura account, you can create one for free! 🆓 🕺

An example view of the Infura dashboard for my redwood-eth-with-fauna app. Copy the PROJECT ID and paste it in your .env file as the value for INFURA_ENDPOINT_KEY.

3. Update the GraphQL schema and run the database migration

In the schema file found at:

api/prisma/schema.prisma 

…we need to add a field to the Auction model. This is due to a bug in the code where this field is actually missing from the monorepo. So, we must add it to get our app working!

We are adding a contentHash field with the type `String` (line 33 in the diff) so that our auctions can be added to our database and then shown to the user.
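The relevant part of the model would end up looking something like this. A sketch only: the surrounding field types and defaults are assumptions based on the project’s GraphQL schema, and contentHash is the one real addition:

model Auction {
  id          Int      @id @default(autoincrement())
  owner       String
  address     String
  name        String
  winLength   Int
  description String?
  contentHash String?  // the missing field we are adding
  createdAt   DateTime @default(now())
  status      String
  highBid     Int
  generation  Int
  bids        Bid[]
}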

After that, we need to run a database migration using a Redwood.js command that will automatically update some of our project’s code. (How generous of the Redwood devs to abstract this responsibility from us; this command just works!) To do that, run:

yarn rw db save redwood-eth-with-fauna && yarn rw db up

You should see something like the following if this process was successful.

At this point, you could start the app by running

yarn rw dev

…and create, and then mint your first NFT! 🎉🎉

Note: You may get the following error when minting a new NFT:

If you do, just refresh the page to see your new NFT on the right!

You can also click on the name of your new NFT to view its auction details like the one shown below:

You can also notice on your terminal that Redwood updates the API resolver when you navigate to this page.

That’s all for the setup! Unfortunately, I won’t be touching on how to use this part of the UI, but you’re welcome to visit Emanator’s monorepo to learn more.

Now, we want to add Fauna to our app.

Adding Fauna

Before we get to adding Fauna to our Redwood app, let’s make sure to power it down by pressing Ctrl+C (on macOS). Redwood handles hot reloading for us and will automatically re-render pages as we make edits, which can get quite annoying while we make our adjustments. So, we’ll keep our app down for now until we’ve finished adding Fauna.

Next, we want to make sure we have a Fauna secret API key from a Fauna database that we create on Fauna’s dashboard (I will not walk through how to do that, but this helpful article does a good job of covering it!). Once you have copied your key secret, paste it into your .env file by replacing <FAUNA_SECRET_KEY>:

Make sure to leave the quotation marks in place! 

Importing GraphQL Schema to Fauna

To import our project’s GraphQL schema to Fauna, we first need to stitch our three separate schemas together, a process we’ll do manually. Make a new file api/src/graphql/fauna-schema-to-import.gql. In this file, we will add the following:

type Query {
  bids: [Bid!]!
  auctions: [Auction!]!
  auction(address: String!): Auction
  web3Auction(address: String!): Web3Auction!
  web3User(address: String!, auctionAddress: String!): Web3User!
}

# ------ Auction schema ------
type Auction {
  id: Int!
  owner: String!
  address: String!
  name: String!
  winLength: Int!
  description: String
  contentHash: String
  createdAt: String!
  status: String!
  highBid: Int!
  generation: Int!
  revenue: Int!
  bids: [Bid]!
}

input CreateAuctionInput {
  address: String!
  name: String!
  owner: String!
  winLength: Int!
  description: String!
  contentHash: String!
  status: String
  highBid: Int
  generation: Int
}

# Comment out to bypass Fauna 'Import your GraphQL schema' error
# type Mutation {
#   createAuction(input: CreateAuctionInput!): Auction
# }

# ------ Bids ------
type Bid {
  id: Int!
  amount: Int!
  auction: Auction!
  auctionAddress: String!
}

input CreateBidInput {
  amount: Int!
  auctionAddress: String!
}

input UpdateBidInput {
  amount: Int
  auctionAddress: String
}

# ------ Web3 ------
type Web3Auction {
  address: String!
  highBidder: String!
  status: String!
  highBid: Int!
  currentGeneration: Int!
  auctionBalance: Int!
  endTime: String!
  lastBidTime: String!
  # Unfortunately, the Fauna GraphQL API does not support custom scalars.
  # So, we'll omit this field from the app.
  # pastAuctions: JSON!
  revenue: Int!
}

type Web3User {
  address: String!
  auctionAddress: String!
  superTokenBalance: String!
  isSubscribed: Boolean!
}

Using this schema, we can now import it to our Fauna database.

Also, don’t forget to make the necessary changes to our three separate schema files api/src/graphql/auctions.sdl.js, api/src/graphql/bids.sdl.js, and api/src/graphql/web3.sdl.js to correspond to our new Fauna GraphQL schema! This is important to maintain consistency between our app’s GraphQL schema and Fauna’s.

View Complete Project Diffs — Quick Start section

If you want to take a deep dive and learn the necessary changes required to get this project up and running, great! Head on to the next section!!  

Otherwise, if you want to just get up and running quickly, this section is for you. 

You can git checkout the `integrating-fauna` branch at the root directory of this project’s repo. To do that, run the following command:

git checkout integrating-fauna

Then, run yarn again, for a sanity check:

yarn

To start the app, you can then run:

yarn rw dev

Steps to add Fauna

Now for some more steps to get our project going!

1. Install faunadb and graphql-request

First, let’s install the Fauna JavaScript driver faunadb and graphql-request. We will use both of these in our main modifications to the database scripts folder to add Fauna.

To install, run:

yarn workspace api add faunadb graphql-request

2. Edit  api/src/lib/db.js and api/src/functions/graphql.js

Now, we will replace the PrismaClient instance in api/src/lib/db.js with our Fauna instance. You can delete everything in the file and replace it with the following:
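The swap boils down to something like this. A sketch; the exact file is in the branch diff:

// api/src/lib/db.js
import faunadb from 'faunadb'

// Fauna client replaces the PrismaClient instance Redwood generated
export const db = new faunadb.Client({
  secret: process.env.FAUNA_SECRET_KEY,
})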

Then, we must make a small update to our api/src/functions/graphql.js file (see the branch diff for the exact change).

3. Create api/src/lib/fauna-client.js

In this simple file, we will instantiate our client-side instance of the Fauna database with two variables which we will be using in the next step. This file should end up looking like the following:
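Something along these lines, assuming the two variables are the Fauna GraphQL endpoint and an authorized graphql-request client:

// api/src/lib/fauna-client.js: a sketch; the real file is in the branch diff
import { GraphQLClient } from 'graphql-request'

// Fauna's hosted GraphQL endpoint
export const endpoint = 'https://graphql.fauna.com/graphql'

// Client authorized with our secret key, for executing GraphQL against Fauna
export const graphQLClient = new GraphQLClient(endpoint, {
  headers: {
    authorization: `Bearer ${process.env.FAUNA_SECRET_KEY}`,
  },
})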

4. Update our first service under api/src/services/auctions/auctions.js

Here comes the hard part. In order to get our services running, we need to replace all Prisma-related commands with commands using an instance of the Fauna client from the fauna-client.js we just created. This part doesn’t seem straightforward initially, but with some thought, all the necessary changes come down to understanding how Fauna’s FQL commands work.

FQL (Fauna Query Language) is Fauna’s native API for querying Fauna. Since FQL is expression-oriented, using it is as simple as chaining several functional commands. Thus, for the first changes in api/services/auctions/auctions.js, we’ll do the following:
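The read in that diff amounts to something like this. A sketch: the FQL functions are real, but the index name auctions and the import paths are taken from the surrounding text rather than the exact diff:

import { query as q } from 'faunadb'
import { db } from 'src/lib/db'

// Read every auction document referenced by the `auctions` index
const auctionsRaw = await db.query(
  q.Map(
    q.Paginate(q.Match(q.Index('auctions'))),
    q.Lambda('ref', q.Get(q.Var('ref')))
  )
)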

To break this down a bit: first, we import the client variables and `db` instance from the proper project file paths. Then, we remove line 11 and replace it with lines 13 – 28 (you can ignore the comments for now, but if you really want to see the rest of these, you can check out the integrating-fauna branch from this project’s repo to see the complete diffs). Here, all we’re doing is using FQL to query the auctions index of our Fauna database to get all the auctions data. You can test this out by running console.log(auctionsRaw).

From running that console.log(), we see that we need to do some object destructuring to get the data we need to update what was previously line 18:

const auctions = await auctionsRaw.map(async (auction, i) => {

Since we’re dealing with an object, but we want an array, we’ll add the following in the next line after finishing the declaration of const auctionsRaw:

Now we can see that we’re getting the right data format.

Next, let’s update the references to auctionsRaw to use our new auctionsDataObjects:

Here comes the most challenging part of updating this file. We want to update the simple return statements of both the auction and createAuction functions. The changes we make are actually quite similar. So, let’s update our auction function like so:

Again, you can ignore the comments, as they just note the previous return statement that was there prior to our changes.

All this query says is, “in the auction Collection, find one specific auction that has this address.”

This next step to complete the createAuction function is admittedly quite hacky. While making this tutorial, I realized that Fauna’s GraphQL API unfortunately does not support custom scalars (you can read more about that under the Limitations section of their GraphQL documentation). This sadly meant that the GraphQL schema of Emanator’s monorepo would not work directly out of the box. In the end, this resulted in having to make many minor changes to get the app to properly run the creation of an auction. So, instead of walking in detail through this section, I will first show you the diff, then briefly summarize the purpose of the changes.

Looking at the green lines of 100 and 101, we can see that the functional commands we’re using here are not that much different; here, we’re just creating a new document in our Auction collection, instead of reading from the Indexes. 

Turning back to the data fields of this createAuction function, we can see that we are given an input as an argument, which actually refers to the UI input fields of the new NFT auction form on the Home page. Thus, input is an object of six fields, namely address, name, owner, winLength, description, and contentHash. However, the other four fields that are required to fulfill our GraphQL schema for an Auction type are still missing! Therefore, the other variables I created, id, dateTime, status, and highBid, are variables I, more or less, hardcoded so that this function could complete successfully.

Lastly, we need to complete the export of the Auction constant. To do that, we’ll make use of the Fauna client once more to make the following changes:

And, we’re finally done with our first service 🎊, phew!

Completing GraphQL services

By now, you may be feeling a bit tired from all these changes to the GraphQL services (I know I was while learning the necessary changes to make!). So, to save you time getting this app to work, instead of walking through them entirely, I will share the git diffs from the integrating-fauna branch that I already have working in the repo. After sharing them, I will summarize the changes that were made.

First file to update is api/src/services/bids/bids.js:

And, updating our last GraphQL service:

Finally, one final change in web/src/components/AuctionCell/AuctionCell.js:

So, back to Fauna not supporting custom scalars. Since Fauna doesn’t support custom scalars, we had to comment out the pastAuctions field from our web3.js service query (along with commenting it out from our GraphQL schemas). 

The last change that was made in web/src/components/AuctionCell/AuctionCell.js is another hacky change to make the newly created NFT address domains (you can navigate to these when you click on the hyperlink of the NFT name, located on the right of the home page after you create a new NFT) clickable without throwing an error. 😄

Conclusion

Finally, when you run:

yarn rw dev

…and you create a new token, you can now do so using Fauna!! 🎉🎉🎉🎉

Final notes

There are two caveats. First, you will see this annoying error message appear above the create NFT form after you have created one and confirmed the transaction with MetaMask.

Unfortunately, I couldn’t find a solution for this besides refreshing the page. So, we will do this just like we did with our original Emanator monorepo version. 

But when you do refresh the page, you should see your new shiny token displayed on the right! 👏

And, this is with the NFT token data fetched from Fauna! 🙌 🕺 🙌🙌

The second caveat is that the page for a new NFT is still not renderable due to the bug in web/src/components/AuctionCell/AuctionCell.js.

This is another issue I couldn’t solve. However, this is where you, the community, can step in! This repo, redwood-eth-with-fauna, is openly available on GitHub, along with the (currently) finalized integrating-fauna branch that has a working (as it currently does 😅) version of the Emanator app. So, if you’re really interested in this app and would like to explore how to leverage this app further with Fauna, feel free to fork the project and explore or make changes! I can always be reached on GitHub and am always happy to help you! 😊

That’s all for this tut, and I hope you enjoyed! Feel free to reach out with any questions on GitHub!


How to Make GraphQL and DynamoDB Play Nicely Together

CSS-Tricks - Thu, 01/14/2021 - 5:54am

Serverless, GraphQL, and DynamoDB are a powerful combination for building websites. The first two are well-loved, but DynamoDB is often misunderstood or actively avoided. It’s often dismissed by folks who consider it only worth the effort “at scale.”

That was my assumption, too, and I tried to stick with a SQL database for my serverless apps. But after learning and using DynamoDB, I see the benefits of it for projects of any scale.

To show you what I mean, let’s build an API from start to finish — without any heavy Object Relational Mapper (ORM) or GraphQL framework to hide what is really going on. Maybe when we’re done you might consider giving DynamoDB a second look. I think it is worth the effort.

The main objections to DynamoDB and GraphQL

The main objection to DynamoDB is that it is hard to learn, but few people argue about its power. I agree the learning curve feels very steep. But SQL databases are not the best fit with serverless applications. Where do you stand up that SQL database? How do you manage connections to it? These things just don’t mesh with the serverless model very well. DynamoDB is serverless-friendly by design. You are trading the up-front pain of learning something hard to save yourself from future pain. Future pain that only grows if your application grows.

The case against using GraphQL with DynamoDB is a little more nuanced. GraphQL seems to fit well with relational databases partly because that is assumed by a lot of the documentation, tutorials, and examples. Alex Debrie is a DynamoDB expert who wrote The DynamoDB Book, which is a great resource to deeply learn it. Even he recommends against using the two together, mostly because of the way that GraphQL resolvers are often written as sequential independent database calls that can result in excessive database reads.

Another potential problem is that DynamoDB works best when you know your access patterns beforehand. One of the strengths of GraphQL is that, by design, it can handle arbitrary queries more easily than REST. This is more of a problem with a public API where users can write arbitrary queries. In reality, GraphQL is often used for private APIs where you control both the client and the server. In this case, you know and can control the queries you run. With a GraphQL API, it is possible to write queries that clobber any database if you don’t take steps to avoid them.

A basic data model

For this example API, we will model an organization with teams, users, and certifications. The entity relational diagram is shown below. Each team has many users and each user can have many certifications.

Relational database model

Our end goal is to model this data in a DynamoDB table, but if we did model it in a SQL database, it would look like the following diagram:

To represent the many-to-many relationship of users to certifications, we add an intermediate table called “Credential.” The only unique attribute on this table is the expiration date. There would be other attributes for each of the tables, but we reduce it to just a name for each for simplicity.

Access patterns

The key to designing a data model for DynamoDB is to know your access patterns up front. In a relational database you start with normalized data and perform joins across the data to access it. DynamoDB does not have joins, so we build a data model that matches how we intend to access it. This is an iterative process. The goal is to identify the most frequent patterns to start. Most of these will directly map to a GraphQL query, but some may be only used internally to the back end to authenticate or check permissions, etc. An access pattern that is rarely used, like a check run once a week by an administrator, does not need to be designed. Something very inefficient (like a table scan) can handle these queries.

Most frequently accessed:

  • User by ID or name
  • Team by ID or name
  • Certification by ID or name

Frequently accessed:

  • All Users on a Team by Team ID
  • All Certifications for a given User
  • All Teams
  • All Certifications

Rarely accessed

  • All Certifications of users on a Team
  • All Users who have a Certification
  • All Users who have a Certification on a Team
DynamoDB single table design

DynamoDB does not have joins and you can only query based on the primary key or predefined indexes. There is no set schema for items imposed by the database, so many different types of items can be stored in a single table. In fact, the recommended best practice for your data schema is to store all items in a single table so that you can access related items together with a single query. Below is a single table model representing our data. To design this schema, you take the access patterns above and choose attributes for the keys and indexes that match.

The primary key here is a composite of the partition/hash key (pk) and the sort key (sk). To retrieve an item in DynamoDB, you must specify the partition key exactly and either a single value or a range of values for the sort key. This allows you to retrieve more than one item if they share a partition key. The indexes here are shown as gsi1pk, gsi1sk, etc. These generic attribute names are used for the indexes (i.e. gsi1pk) so that the same index can be used to access different types of items with different access patterns. With a composite key, the sort key cannot be empty, so we use “#” as a placeholder when the sort key is not needed.

Access pattern: Query conditions

  • Team, User, or Certification by ID: Primary Key, pk="T#"+ID, sk="#"
  • Team, User, or Certification by name: Index GSI 1, gsi1pk=type, gsi1sk=name
  • All Teams, Users, or Certifications: Index GSI 1, gsi1pk=type
  • All Users on a Team by ID: Index GSI 2, gsi2pk="T#"+teamID
  • All Certifications for a User by ID: Primary Key, pk="U#"+userID, sk="C#"+certID
  • All Users with a Certification by ID: Index GSI 1, gsi1pk="C#"+certID, gsi1sk="U#"+userID

Database schema

We enforce the “database schema” in the application. The DynamoDB API is powerful, but also verbose and complicated. Many people jump directly to using an ORM to simplify it. Here, we will directly access the database using the helper functions below to create the schema for the Team item.

const DB_MAP = {
  TEAM: {
    get: ({ teamId }) => ({
      pk: 'T#' + teamId,
      sk: '#',
    }),
    put: ({ teamId, teamName }) => ({
      pk: 'T#' + teamId,
      sk: '#',
      gsi1pk: 'Team',
      gsi1sk: teamName,
      _tp: 'Team',
      tn: teamName,
    }),
    parse: ({ pk, tn, _tp }) => {
      if (_tp === 'Team') {
        return {
          id: pk.slice(2),
          name: tn,
        };
      } else return null;
    },
    queryByName: ({ teamName }) => ({
      IndexName: 'gsi1pk-gsi1sk-index',
      ExpressionAttributeNames: { '#p': 'gsi1pk', '#s': 'gsi1sk' },
      KeyConditionExpression: '#p = :p AND #s = :s',
      ExpressionAttributeValues: { ':p': 'Team', ':s': teamName },
      ScanIndexForward: true,
    }),
    queryAll: {
      IndexName: 'gsi1pk-gsi1sk-index',
      ExpressionAttributeNames: { '#p': 'gsi1pk' },
      KeyConditionExpression: '#p = :p',
      ExpressionAttributeValues: { ':p': 'Team' },
      ScanIndexForward: true,
    },
  },
  parseList: (list, type) => {
    if (Array.isArray(list)) {
      return list.map(i => DB_MAP[type].parse(i));
    }
    if (Array.isArray(list.Items)) {
      return list.Items.map(i => DB_MAP[type].parse(i));
    }
  },
};

To put a new team item in the database you call:

DB_MAP.TEAM.put({teamId:"t_01",teamName:"North Team"})

This forms the index and key values that are passed to the database API. The parse method takes an item from the database and translates it back to the application model.
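Those helper outputs are what get handed to the DynamoDB API. A sketch of the glue using the AWS SDK’s DocumentClient (the table name here is an assumption):

const AWS = require('aws-sdk')
const docClient = new AWS.DynamoDB.DocumentClient()

// Query "Team by name" using the conditions built by the helper
const params = {
  TableName: 'single-table', // assumed name
  ...DB_MAP.TEAM.queryByName({ teamName: 'North Team' }),
}
const teams = await docClient.query(params).promise()
  .then(data => DB_MAP.parseList(data, 'TEAM'))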

GraphQL schema

type Team {
  id: ID!
  name: String
  members: [User]
}

type User {
  id: ID!
  name: String
  team: Team
  credentials: [Credential]
}

type Certification {
  id: ID!
  name: String
}

type Credential {
  id: ID!
  user: User
  certification: Certification
  expiration: String
}

type Query {
  team(id: ID!): Team
  teamByName(name: String!): [Team]
  user(id: ID!): User
  userByName(name: String!): [User]
  certification(id: ID!): Certification
  certificationByName(name: String!): [Certification]
  allTeams: [Team]
  allCertifications: [Certification]
  allUsers: [User]
}

Bridging the gap between GraphQL and DynamoDB with resolvers

Resolvers are where a GraphQL query is executed. You can get a long way in GraphQL without ever writing a resolver. But to build our API, we’ll need to write some. For each query in the GraphQL schema above there is a root resolver below (only the team resolvers are shown here). This root resolver returns either a promise or an object with part of the query results.

If the query returns a Team type as the result, then execution is passed down to the Team type resolver. That resolver has a function for each of the values in a Team. If there is no resolver for a given value (i.e. id), it will look to see if the root resolver already passed it down.

A resolver takes four arguments. The first, called root or parent, is an object passed down from the resolver above with any partial results. The second, called args, contains the arguments passed to the query. The third, called context, can contain anything the application needs to resolve the query. In this case, we add a reference for the database to the context. The final argument, called info, is not used here. It contains more details about the query (like an abstract syntax tree).

In the resolvers below, ctx.db.singletable is the reference to the DynamoDB table that contains all the data. The get and query methods directly execute against the database and the DB_MAP.TEAM.... translates the schema to the database using the helper functions we wrote earlier. The parse method translates the data back to the form needed for the GraphQL schema.

const resolverMap = {
  Query: {
    team: (root, args, ctx, info) => {
      return ctx.db.singletable.get(DB_MAP.TEAM.get({ teamId: args.id }))
        .then(data => DB_MAP.TEAM.parse(data));
    },
    teamByName: (root, args, ctx, info) => {
      return ctx.db.singletable
        .query(DB_MAP.TEAM.queryByName({ teamName: args.name }))
        .then(data => DB_MAP.parseList(data, 'TEAM'));
    },
    allTeams: (root, args, ctx, info) => {
      return ctx.db.singletable.query(DB_MAP.TEAM.queryAll)
        .then(data => DB_MAP.parseList(data, 'TEAM'));
    },
  },
  Team: {
    name: (root, _, ctx) => {
      if (root.name) {
        return root.name;
      } else {
        return ctx.db.singletable.get(DB_MAP.TEAM.get({ teamId: root.id }))
          .then(data => DB_MAP.TEAM.parse(data).name);
      }
    },
    members: (root, _, ctx) => {
      return ctx.db.singletable
        .query(DB_MAP.USER.queryByTeamId({ teamId: root.id }))
        .then(data => DB_MAP.parseList(data, 'USER'));
    },
  },
  User: {
    name: (root, _, ctx) => {
      if (root.name) {
        return root.name;
      } else {
        return ctx.db.singletable.get(DB_MAP.USER.get({ userId: root.id }))
          .then(data => DB_MAP.USER.parse(data).name);
      }
    },
    credentials: (root, _, ctx) => {
      return ctx.db.singletable
        .query(DB_MAP.CREDENTIAL.queryByUserId({ userId: root.id }))
        .then(data => DB_MAP.parseList(data, 'CREDENTIAL'));
    },
  },
};

Now let’s follow the execution of the query below. First, the team root resolver reads the team by id and returns id and name. Then the Team type resolver reads all the members of that team. Then the User type resolver is called for each user to get all of their credentials and certifications. If there are five members on the team and each member has five credentials, that results in a total of seven reads for the database. You could argue that is too many. In a SQL database this might be reduced to four database calls. I would argue that the seven DynamoDB reads will be cheaper and faster than the four SQL reads in many cases. But this comes with a big dose of “it depends” on a lot of factors.

query { team( id:"t_01" ){ id name members{ id name credentials{ id certification{ id name } } } }} Over-fetching and the N+1 problem

Optimizing a GraphQL API involves balancing a whole lot of tradeoffs that we won’t get into here. But two that weigh heavily in the decision of DynamoDB versus SQL are over-fetching and the N+1 problem. In many ways, these are opposite sides of the same coin. Over-fetching is when a resolver requests more data from the database than it needs to respond to the query. This often happens when you try to make one call to the database in the root resolver or a type resolver (e.g., members in the Team type resolver above) to get as much of the data as you can. If the query did not request the name attribute, it can be seen as wasted effort.

The N+1 problem is almost the opposite. If all the reads are pushed down to the lowest level resolver, then the team root resolver and the members resolver (for Team type) would make only a minimal or no request to the database. They would just pass the IDs down to the Team type and User type resolver. In this case, instead of members making one call to get all five members, it would push down to User to make five separate reads. This would result in potentially 36 or more separate reads for the query above. In practice, this does not happen because an optimized server would use something like the DataLoader library that acts as a middleware to intercept those 36 calls and batch them into probably only four calls to the database. These smaller atomic read requests are needed so that the DataLoader (or similar tool) can efficiently batch them into fewer reads.
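A sketch of that middleware layer with the DataLoader library; the batch helper here is hypothetical:

const DataLoader = require('dataloader')

// Hypothetical batch function: many user IDs become one batched read
async function batchGetUsers(userIds) {
  const users = await fetchUsersFromDynamo(userIds) // assumed helper (e.g., BatchGetItem)
  // DataLoader requires results in the same order as the requested keys
  return userIds.map(id => users.find(u => u.id === id) || null)
}

const userLoader = new DataLoader(batchGetUsers)

// Inside an async resolver: five .load() calls in the same tick
// collapse into a single batchGetUsers call
const user = await userLoader.load('u_01')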

So, to optimize a GraphQL API with SQL, it is usually best to have small resolvers at the lowest levels and use something like DataLoader to optimize them. But for a DynamoDB API it is better to have “smarter” resolvers higher up that better match the access patterns your single table database is written for. The over-fetching that results in this case is usually the lesser of the two evils.

Deploy this example in 60 seconds (GitHub repo)

This is where you realize the full payoff of using DynamoDB together with serverless GraphQL. I built this example with Architect. It is an open-source tool to build serverless apps on AWS without most of the headaches of directly using AWS. Once you clone the repo and run npm install, you can launch the app for local development (including a built-in local version of the database) with a single command. Not only that, you can also deploy it straight to production infrastructure (including DynamoDB) on AWS with a single command when you are ready.


Dynamic, Conditional Imports

CSS-Tricks - Wed, 01/13/2021 - 12:44pm

With ES Modules, you can natively import other JavaScript. Like confetti, duh:

import confetti from 'https://cdn.skypack.dev/canvas-confetti';

confetti();

That import statement is just gonna run. There is a pattern to do it conditionally though. It’s like this:

(async () => {
  if (condition) {
    // await import("stuff.js");

    // Like confetti! Which you have to import this special way because the web
    const { default: confetti } = await import(
      "https://cdn.skypack.dev/canvas-confetti@latest"
    );
    confetti();
  }
})();

Why? Any sort of condition, I suppose. You could check the URL and only load certain things on certain pages. You could only be loading certain web components in certain conditions. I dunno. I’m sure you can think of a million things.

Responsible, conditional loading is another idea. Here’s only loading a module if saveData isn’t on:
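A sketch of that check using the Network Information API; the optional chaining guards browsers that don’t expose navigator.connection:

(async () => {
  // navigator.connection is the Network Information API (not in every browser)
  const saveData = navigator.connection?.saveData ?? false;
  if (!saveData) {
    const { default: confetti } = await import(
      "https://cdn.skypack.dev/canvas-confetti@latest"
    );
    confetti();
  }
})();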

[CodePen demo embed]


Fading in a Page on Load with CSS & JavaScript

CSS-Tricks - Wed, 01/13/2021 - 12:44pm

Louis Lazaris demonstrates a very simple way of doing this.

  1. Hide the body (with JavaScript) right away with a CSS class that declares opacity: 0
  2. Wait for all the JavaScript to execute
  3. Unhide the body by transitioning it back to opacity: 1

Like this:

CodePen Embed Fallback
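Stripped down, the pattern looks something like this (the class names are made up, and Louis’s demo differs in its details):

// Assumed CSS:
//   body.fade-in { opacity: 0; }
//   body.fade-in.visible { opacity: 1; transition: opacity 0.5s; }

document.body.classList.add('fade-in'); // step 1: hide right away

window.addEventListener('load', () => {
  document.body.classList.add('visible'); // steps 2 and 3: everything has run, fade back in
});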

Louis demonstrates a callback method, as well as mentioning you could wait for window.load or a DOM Ready event. I suppose you could also just have the line that sets the className to visible as the very last line of script that runs, like I did above.

Louis knows it’s not particularly en vogue:

I know nowadays we’re obsessed in this industry with gaining every millisecond in page performance. But in a couple of projects that I recently overhauled, I added a subtle and clean loading mechanism that I think makes the experience nicer, even if it does ultimately slightly delay the time that the user is able to start interacting with my page.

I think of stuff like font-display: swap; which is dedicated to rendering your text as absolutely fast as possible, FOUT be damned, rather than chiller options.

Direct Link to ArticlePermalink

The post Fading in a Page on Load with CSS & JavaScript appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Two Issues Styling the Details Element and How to Solve Them

Css Tricks - Wed, 01/13/2021 - 5:56am

In the not-too-distant past, even basic accordion-like interactions required JavaScript event listeners or some CSS… trickery. And, depending on the solution used, editing the underlying HTML could get complicated.

Now, the <details> and <summary> elements (which combine to form what’s called a “disclosure widget”) have made creation and maintenance of these components relatively trivial.

At my job, we use them for things like frequently asked questions.

Pretty standard question/answer format.

There are a couple of issues to consider

Because expand-and-collapse interactivity is already baked into the <details> and <summary> HTML tags, you can now make disclosure widgets without any JavaScript or CSS. But you still might want some. Left unstyled, <details> disclosure widgets present us with two issues.

Issue 1: The <summary> cursor

Though the <summary> section invites interaction, the element’s default cursor is a text selection icon rather than the pointing finger you may expect:

We get the text cursor but might prefer the pointer to indicate interaction instead.

Issue 2: Nested block elements in <summary>

Nesting a block-level element (e.g. a heading) inside a <summary> element pushes that content down below the arrow marker, rather than keeping it inline:

Block-level elements won’t share space with the summary marker.

The CSS Reset fix

To remedy these issues, we can add the following two styles to the reset section of our stylesheets:

details summary {
  cursor: pointer;
}

details summary > * {
  display: inline;
}

Read on for more on each issue and its respective solution.

Changing the <summary> cursor value

When users hover over an element on a page, we always want them to see a cursor “that reflects the expected user interaction on that element.”

We touched briefly on the fact that, although <summary> elements are interactive (like a link or form button), its default cursor is not the pointing finger we typically see for such elements. Instead, we get the text cursor, which we usually expect when entering or selecting text on a page.

To fix this, switch the cursor’s value to pointer:

details summary {
  cursor: pointer;
}

CodePen Embed Fallback

Some notable sites already include this property when they style <details> elements. The MDN Web Docs page on the element itself does exactly that. GitHub also uses disclosure widgets for certain items, like the actions to watch, star and fork a repo.

GitHub uses cursor: pointer on the <summary> element of its disclosure widget menus. 

I’m guessing the default cursor: text value was chosen to indicate that the summary text can (along with the rest of a disclosure widget’s content) be selected by the user. But, in most cases, I feel it’s more important to indicate that the <summary> element is interactive.

Summary text is still selectable, even after we’ve changed the cursor value from text to pointer. Note that changing the cursor only affects appearance, and not its functionality.

Displaying nested <summary> contents inline

Inside each <summary> section of the FAQ entries I shared earlier, I usually enclose the question in an appropriate heading tag (depending on the page outline):

<details>
  <summary>
    <h3>Will my child's 504 Plan be implemented?</h3>
  </summary>
  <p>Yes. Similar to the Spring, case managers will reach out to students.</p>
</details>

Nesting a heading inside <summary> can be helpful for a few reasons:

  • Consistent visual styling. I like my FAQ questions to look like other headings on my pages.
  • Using headings keeps the page structure valid for users of Internet Explorer and pre-Chromium versions of Edge, which don’t support <details> elements. (In these browsers, such content is always visible, rather than interactive.)
  • Proper headings can help users of assistive technologies navigate within pages. (That said, headings within <summary> elements pose a unique case, as explained in detail below. Some screen readers interpret these headings as what they are, but others don’t.)

Headings vs. buttons

Keep in mind that the <summary> element is a bit of an odd duck. It operates like a button in many ways. In fact, it even has implicit role=button ARIA mapping. But, very much unlike buttons, headings are allowed to be nested directly inside <summary> elements.

This poses us — and browser and assistive technology developers — with a contradiction:

  • Headings are permitted in <summary> elements to provide in-page navigational assistance.
  • Buttons strip the semantics out of anything (like headings) nested within them.

Unfortunately, assistive technologies are inconsistent in how they’ve handled this situation. Some screen-reading technologies, like NVDA and Apple’s VoiceOver, do acknowledge headings inside <summary> elements. JAWS, on the other hand, does not.

What this means for us is that, when we place a heading inside a <summary>, we can style the heading’s appearance. But we cannot guarantee our heading will actually be interpreted as a heading!

In other words, it probably doesn’t hurt to put a heading there. It just may not always help.

Inline all the things

When using a heading tag (or another block element) directly inside our <summary>, we’ll probably want to change its display style to inline. Otherwise, we’ll get some undesired wrapping, like the expand/collapse arrow icon displayed above the heading, instead of beside it.

We can use the following CSS to apply a display value of inline to every heading — and to any other element nested directly inside the <summary>:

details summary > * {
  display: inline;
}

CodePen Embed Fallback

A couple notes on this technique. First, I recommend using inline, and not inline-block, as the line wrapping issue still occurs with inline-block when the heading text extends beyond one line.

Second, rather than changing the display value of the nested elements, you might be tempted to replace the <summary> element’s default display: list-item value with display: flex. At least I was! However, if we do this, the arrow marker will disappear. Whoops!

Bonus tip: Excluding Internet Explorer from your styles

I mentioned earlier that Internet Explorer and pre-Chromium (a.k.a. EdgeHTML) versions of Edge don’t support <details> elements. So, unless we’re using polyfills for these browsers, we may want to make sure our custom disclosure widget styles aren’t applied for them. Otherwise, we end up with a situation where all our inline styling garbles the element.

Inline <summary> headings could have odd or undesirable effects in Internet Explorer and EdgeHTML.

Plus, the <summary> element is no longer interactive when this happens, meaning the cursor’s default text style is more appropriate than pointer.

If we decide that we want our reset styles to target only the appropriate browsers, we can add a feature query that prevents IE and EdgeHTML from ever having our styles applied. Here’s how we do that using @supports to detect a feature only those browsers support:

@supports not (-ms-ime-align: auto) {
  details summary {
    cursor: pointer;
  }

  details summary > * {
    display: inline;
  }

  /* Plus any other <details>/<summary> styles you want IE to ignore. */
}

IE actually doesn’t support feature queries at all, so it will ignore everything in the above block, which is fine! EdgeHTML does support feature queries, but it too will not apply anything within the block, as it is the only browser engine that supports -ms-ime-align.

The main caveat here is that there are also a few older versions of Chrome (namely 12-27) and Safari (macOS and iOS versions 6-8) that do support <details> but don’t support feature queries. Using a feature query means that these browsers, which account for about 0.06% of global usage (as of January 2021), will not apply our custom disclosure widget styles, either.

Using a @supports selector(details) block, instead of @supports not (-ms-ime-align: auto), would be an ideal solution. But selector queries have even less browser support than property-based feature queries.
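For the record, that ideal version would read:

@supports selector(details) {
  details summary { cursor: pointer; }
  details summary > * { display: inline; }
}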

Final thoughts

Once we’ve got our HTML structure set and our two CSS reset styles added, we can spruce up all our disclosure widgets however else we like. Even some simple border and background color styles can go a long way for aesthetics and usability. Just know that customizing the <summary> markers can get a little complicated!

CodePen Embed Fallback

The post Two Issues Styling the Details Element and How to Solve Them appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

A (terrible?) way to do footnotes in HTML

Css Tricks - Wed, 01/13/2021 - 5:53am

Terence Eden poked around with a way to do footnotes using the <details>/<summary> elements. I think it’s kind of clever. Rather than a hyperlink that jumps down to explain the footnote elsewhere, the details are right there next to the text. I like that proximity in the code. Plus, you get the native open/close interactivity of the disclosure widget.

It’s got some tricky parts though. The <details> element is block-level, so it needs to become inline to be the footnote, and sized/positioned to look “right.” I think it’s a shame that it won’t sit within a <p> tag, so that makes it impractical for my own usage.

Craig Shoemaker in the comments forked the original to fiddle with the CSS, and that inspired me to do the same.

Rather than display the footnote text itself right inline (which is extra-tricky), I moved that content to a fixed-position location at the bottom of the page:

CodePen Embed Fallback
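The gist of my fork is roughly this (a sketch, not the exact Pen; the .footnote class and the .footnote-text wrapper are assumptions):

/* Keep the footnote marker inline with the surrounding prose */
details.footnote,
details.footnote > summary {
  display: inline;
  cursor: pointer;
}

/* Pin the note text (assumed to be wrapped in its own element)
   to the bottom of the viewport while the widget is open */
details.footnote > .footnote-text {
  position: fixed;
  bottom: 0;
  left: 0;
  right: 0;
  padding: 1rem;
  background: white;
  border-top: 1px solid #ccc;
}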

I’m not 100% convinced it’s a good idea, but I’m also not convinced it’s a terrible one.

The post A (terrible?) way to do footnotes in HTML appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

My Favorite Typefaces of 2020

Typography - Tue, 01/12/2021 - 6:27pm


After a decade, our annual Favorite Fonts list is back. In addition to a top-ten of favorite typefaces, there are now another 50 typefaces in the Honorable Mentions list. There's also a section devoted to my favorite glyphs or characters from fonts released in 2020, and a few words about the magical selection process. Oh, and there's even a typographic Space Invaders Easter egg!

The post My Favorite Typefaces of 2020 appeared first on I Love Typography.

The WordPress.com Business Plan is way more powerful than you think

Css Tricks - Tue, 01/12/2021 - 10:14am

WordPress.com is where you go to use WordPress that is completely hosted for you. You don’t have to worry about anything but building your site. There is a free plan to get started with, and paid plans that offer more features. The Business plan is particularly interesting, and my guess is that most people don’t fully understand everything that it unlocks for you, so let’s dig into that.

You get straight up SFTP access to your site.

Here’s me using Transmit to pop right into one of my sites over SFTP.

What this means is that you can do local WordPress development like you normally would, then use real deployment tools to kick your work out to production (which is your WordPress.com site). That’s what I do with Buddy. (Here’s a screencast demonstrating the workflow.)

That means real control.

I can upload and use whatever plugins I want. I can upload and use whatever themes I want. The database too — I get literal direct MySQL access.

I can even manage what PHP version the site uses. That’s not something I’d normally even need to do, but that’s just how much access there is.

A big jump in storage.

200 GB. You’ll probably never get anywhere near that limit, unless you are uploading video, and if you are, now you’ve got the space to do it.

Backups you’ll probably actually use.

You don’t have to worry about anything nasty happening on WordPress.com, like your server being hacked and losing all your data or anything. So in that sense, WordPress.com is handling your backups for you. But with the Business plan, you’ll see a backup log right in your dashboard:

That’s a backup of your theme, data, assets… everything. You can download it anytime you like.

The clutch feature? You can restore things to any point in time with the click of a button.

Powered by a global CDN

Not every site on WordPress.com is upgraded to the global CDN. Yours will be if it’s on the Business plan. That means speed, and speed is important for every reason, including SEO. And speaking of SEO tools, those are unlocked for you on the Business plan as well.

Some of the best themes unlock at the Premium/Business plan level.

You can buy them one-off, but you don’t have to if you’re on the Business plan because it opens the door for more playing around. This Aquene theme is pretty stylish with a high-end design:

It’s only $300/year.

(Or $33/month billed monthly.)

So it’s not ultra-budget hosting, but the price tag is a lot less if you consider all the things we covered here and how much they cost if you were to cobble something together yourself. And we didn’t even talk about support, which is baked right into the plan.

Hosting, backups, monitoring, performance, security, plugins, themes, and support — toss in a free year of domain registration, and that’s a lot of website for $300.

They have less expensive plans as well. But the Business plan is the level where serious control, speed, and security kick in.

Coupon code CSSTRICKS gets you 15% off the $300/year Business Plan. Valid until the end of February 2021.

The post The WordPress.com Business Plan is way more powerful than you think appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

How to Add Commas Between a List of Items Dynamically with CSS

Css Tricks - Tue, 01/12/2021 - 5:53am

Imagine you have a list of items. Say, fruit: Banana, Apple, Orange, Pear, Nectarine

We could put those commas (,) in the HTML, but let’s look at how we could do that in CSS instead, giving us an extra level of control. We’ll make sure that last item doesn’t have a comma while we’re at it.

I needed this for a real project recently, and part of the requirements were that any of the items in the list could be hidden/revealed via JavaScript. The commas needed to work correctly no matter which items were currently shown.

One solution I found rather elegant is using the general sibling combinator. We’ll get to that in a minute. Let’s start with some example HTML. Say you start out with a list of fruits:

<ul class="fruits">
  <li class="fruit on">Banana</li>
  <li class="fruit on">Apple</li>
  <li class="fruit on">Orange</li>
  <li class="fruit on">Pear</li>
  <li class="fruit on">Nectarine</li>
</ul>

And some basic CSS to make them appear in a list:

.fruits {
  display: flex;
  padding-inline-start: 0;
  list-style: none;
}

.fruit {
  display: none; /* hidden by default */
}

.fruit.on { /* JavaScript-added class to reveal list items */
  display: inline-block;
}

Now say things happen inside this interface, like a user toggles controls that filter out all fruits that grow in cold climates. Now a different set of fruits is shown, so the fruit.on class is manipulated with the classList API.
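That toggling might look something like this (the warm-climate list is purely for illustration):

// Show only the fruits that made the cut; hide the rest
const warmClimateFruits = ['Banana', 'Orange', 'Nectarine'];

document.querySelectorAll('.fruit').forEach((item) => {
  item.classList.toggle('on', warmClimateFruits.includes(item.textContent));
});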

So far, our HTML and CSS would create a list like this:

BananaOrangeNectarine

Now we can reach for that general sibling combinator to apply a comma-and-space between any two on elements:

.fruit.on ~ .fruit.on::before {
  content: ', ';
}

Nice!

You might be thinking: why not just apply commas to all the list items and remove the comma from the last one with something like :last-child or :last-of-type. The trouble with that is the last child might be “off” at any given time. So what we really want is the last item that is “on,” which isn’t easily possible in CSS, since there is nothing like “last of class” available. Hence, the general sibling combinator trick!

In the UI, I used max-width instead of display and toggled that between 0 and a reasonable maximum value so that I could use transitions to push items on and off more naturally, making it easier for the user to see which items are being added or removed from the list. You can add the same effect to the pseudo-element as well to make it super smooth.
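A rough sketch of that variation (the values are guesses):

/* Items slide open and closed instead of popping in and out */
.fruit {
  display: inline-block;
  max-width: 0;
  overflow: hidden;
  white-space: nowrap;
  transition: max-width 0.4s ease;
}

.fruit.on {
  max-width: 10em; /* comfortably wider than the longest item */
}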

Here’s a demo with a couple of examples that are both slight variations. The fruits example uses a hidden class instead of on, and the veggies example has the animations. SCSS is also used here for the nesting:

CodePen Embed Fallback

I hope this helps others looking for something similar!

The post How to Add Commas Between a List of Items Dynamically with CSS appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Building Flexible Components With Transparency

Css Tricks - Tue, 01/12/2021 - 5:44am

Good thinking from Paul Hebert on the Cloudfour blog about colorizing a component. You might look at a design comp and see a card component with a header background of #dddddd, content background of #ffffff, on an overall background of #eeeeee. OK, easy enough. But what if the overall background becomes #dddddd? Now your header looks lost within it.

That darker header? Design-wise, it’s not being exactly #dddddd that’s important; it’s about looking slightly darker than the background. When that’s the case, a background of, say, rgba(0, 0, 0, 0.135) is more resilient.

That will then remain resilient against backgrounds of any kind.
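The whole idea fits in a couple of lines (the class name is assumed):

/* Brittle: only looks right when the page background is #eeeeee */
.card__header { background: #dddddd; }

/* Resilient: always reads as "a bit darker than whatever is behind it" */
.card__header { background: rgba(0, 0, 0, 0.135); }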

CodePen Embed Fallback

Direct Link to ArticlePermalink

The post Building Flexible Components With Transparency appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Flash’s Web Tech Legacy

Css Tricks - Mon, 01/11/2021 - 11:54am

Tiffany B. Brown on how Flash paved the way for some things we might think of as fairly modern web technologies:

Flash wasn’t just good for playing multimedia. It was also good for manipulating it. Using ActionScript, you could pan audio, adjusting the input for the user’s left and right speakers, perhaps when they shifted their mouse from one side of the screen to the other. Now we can do that using the Web Audio API.

Web Storage and the localStorage/sessionStorage APIs are conceptually similar to SharedObjects, or Flash cookies. And the demand for rich web typography, enabled by Flash and sIFR, helped bring us @font-face, WOFF, and web-licensed fonts.

Flash also popularized the idea of the cross-domain policy file, an XML file that specifies whether one domain can read the content and data of another. It’s a precursor to cross-origin resource sharing (CORS), which uses HTTP headers instead of an XML configuration file.
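That audio-panning trick is a handful of lines today. Here’s a sketch (not from the article) using StereoPannerNode, mapping the mouse position to a pan value between -1 (left) and 1 (right):

const context = new AudioContext();
const panner = new StereoPannerNode(context, { pan: 0 });

async function playPanned(url) {
  // Fetch and decode the audio, then route it through the panner
  const response = await fetch(url);
  const buffer = await context.decodeAudioData(await response.arrayBuffer());
  const source = new AudioBufferSourceNode(context, { buffer });
  source.connect(panner).connect(context.destination);
  source.start();
}

document.addEventListener('mousemove', (event) => {
  // Left edge of the screen = -1, right edge = 1
  panner.pan.value = (event.clientX / window.innerWidth) * 2 - 1;
});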

Mike Davidson had some nostalgic thoughts as well:

Most technology is transitional if your window is long enough. Cassette tapes showed us that taking our music with us was possible. Tapes served their purpose until compact discs and then MP3s came along. Then they took their rightful place in history alongside other evolutionary technologies. Flash showed us where we could go, without ever promising that it would be the long-term solution once we got there.

Direct Link to ArticlePermalink

The post Flash’s Web Tech Legacy appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Animating with Lottie

Css Tricks - Mon, 01/11/2021 - 6:09am

I believe animation on the web is not only fun, but engaging in such a way that it has converted site visitors into customers. Think of the “Like” button on Twitter. When you “like” a tweet, tiny colorful bubbles spread around the heart button while it appears to morph into a circle around the button before settling into the final “liked” state, a red fill. It would be much less exciting if the heart just went from being outlined to filled. That excitement and satisfaction is a perfect example of how animation can be used to enhance user experience.

This article is going to introduce the concept of rendering Adobe After Effects animation on the web with Lottie, which can make advanced animations — like that Twitter button — achievable.

Bodymovin is a plugin for Adobe After Effects that exports animations as JSON, and Lottie is the library that renders them natively on mobile and on the web. It was created by Hernan Torrisi. If you’re thinking Oh, I don’t use After Effects, this article is probably not for me, hold on just a moment. I don’t use After Effects either, but I’ve used Lottie in a project.

You don’t have to use Lottie to do animation on the web, of course. An alternative is to design animations from scratch. But that can be time-consuming, especially for the complex types of animations that Lottie is good at. Another alternative is using GIF animations, which are limitless in the types of animation they can display, but are typically double the size of the JSON files that Bodymovin produces.

So let’s jump into it and see how it works.

Get the JSON

To use Lottie, we need a JSON file containing the animation from After Effects. Luckily for us, Icons8 has a lot of free animated icons here in JSON, GIF, and After Effects formats.

Add the script to HTML

We also need to get the Bodymovin player’s JavaScript library in our HTML, and call its loadAnimation() method. The fundamentals are demonstrated here:

<div id="icon-container"></div>

<script src="https://cdnjs.cloudflare.com/ajax/libs/bodymovin/5.7.4/lottie.min.js"></script>

<script>
  var animation = bodymovin.loadAnimation({
    // animationData: { /* ... */ },
    container: document.getElementById('icon-container'), // required
    path: 'data.json', // required
    renderer: 'svg', // required
    loop: true, // optional
    autoplay: true, // optional
    name: "Demo Animation", // optional
  });
</script>

Activate the animation

After the animation has loaded in the container, we can configure how it should be activated, and what action should activate it, with event listeners. Here are the properties we have to work with:

  • container: the DOM element that the animation is loaded into
  • path: the relative path of the JSON file that contains the animation
  • renderer: the format of the animation, including SVG, canvas, and HTML
  • loop: boolean to specify whether or not the animation should loop
  • autoplay: boolean to specify whether or not the animation should play as soon as it’s loaded
  • name: animation name for future referencing

Note in the earlier example that the animationData property is commented out. It is mutually exclusive with the path property and is an object that contains the exported animation data.

Let’s try an example

I’d like to demonstrate how to use Lottie with this animated play/pause control icon from Icons8:

The Bodymovin player library is statically hosted here and can be dropped into the HTML that way, but it is also available as a package:

npm install lottie-web
# or
yarn add lottie-web

And then, in your HTML file, include the script from the dist folder in the installed package. You could also import the library as a module from Skypack:

import lottieWeb from "https://cdn.skypack.dev/lottie-web";

For now, our pause button is in a loop and it also plays automatically:

CodePen Embed Fallback

Let’s change that so the animation is triggered by an action.

Animating on a trigger

If we turn autoplay off, we get a static pause icon because that was how it was exported from After Effects.

CodePen Embed Fallback

But, worry not! Lottie provides some methods that can be applied to animation instances. That said, the documentation of the npm package is more comprehensive.

We need to do a couple things here:

  • Make it show as the “play” state initially.
  • Animate it to the “paused” state on click
  • Animate between the two on subsequent clicks.

The goToAndStop(value, isFrame) method is appropriate here. When the animation has loaded in the container, this method sets the animation to go to the provided value, then stop there. In this situation, we have to find the frame value where the icon shows its “play” state and set the animation to it. The second parameter specifies whether the value provided is based on time or frame. It’s a boolean type and the default is false (i.e., a time-based value). Since we want to set the animation to the play frame, we set it to true.

A time-based value sets the animation to a particular point in the timeline. For example, the time value at the beginning of the animation, when it’s paused, is 1. However, a frame-based value sets the animation to a particular frame value. A frame, according to TechTerms, is an individual picture in a sequence of images. So, if I set the frame value of the animation to 5, the animation goes to the fifth frame in the animation (the “sequence of images” in this situation).

CodePen Embed Fallback

After trying different values, I found out the animation plays from frame values 11 through 16. Hence, I chose 14 to be on the safe side.

Now we have to set the animation to change to pause when the user clicks it, and play when the user clicks it again. Next, we need the playSegments(segments, forceFlag) method. The segments parameter is an array type containing two numbers. The first and second numbers represent the first and last frame that the method should read, respectively. The forceFlag is a boolean that indicates whether or not the method should be fired immediately. If set to false, it will wait until the animation plays to the value specified as the first frame in the segments array before it is triggered. If true, it plays the segments immediately.

CodePen Embed Fallback

Here, I created a flag to indicate when to play the segments from play to pause, and from pause to play. I also set the forceFlag boolean to true because I want an immediate transition.
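Condensed, the toggle boils down to something like this, assuming the animation instance from earlier with loop and autoplay turned off. The playSegments frame values here are placeholders; the right numbers depend on the exported animation:

// Segment values are assumptions for illustration
const PLAY_TO_PAUSE = [14, 27];
const PAUSE_TO_PLAY = [0, 14];

const container = document.getElementById('icon-container');

animation.goToAndStop(14, true); // start on the "play" frame

let showingPlay = true;
container.addEventListener('click', () => {
  // forceFlag set to true for an immediate transition
  animation.playSegments(showingPlay ? PLAY_TO_PAUSE : PAUSE_TO_PLAY, true);
  showingPlay = !showingPlay;
});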

So there we have it! We rendered an animation from After Effects to the browser! Thanks Lottie!

Canvas?

I prefer to use SVG as my renderer because it supports scaling and I think it renders the sharpest animations. Canvas doesn’t render quite as nicely, and also doesn’t support scaling. However, if you want to use an existing canvas to render an animation, there are some extra things you’d have to do.

Doing more

Animation instances also have events that can be used to configure how the animation should act.

For example, in the Pen below, I added two event listeners to the animation and set some text to be displayed when the events are fired.

CodePen Embed Fallback
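The wiring is about what you’d expect (‘complete’ and ‘loopComplete’ are two of the documented event names; the status element is made up):

const statusEl = document.getElementById('status'); // hypothetical element

animation.addEventListener('loopComplete', () => {
  statusEl.textContent = 'One loop done';
});

animation.addEventListener('complete', () => {
  statusEl.textContent = 'Animation finished';
});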

All the events are available on the npm package’s docs. With that I say, go forth and render some amazing animations!

The post Animating with Lottie appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

CSS Snapshot 2020

Css Tricks - Mon, 01/11/2021 - 6:08am

I think it’s great that the CSS Working Group does these. It’s like planting a flag in the ground saying this is what CSS looks like at this specific point in time. They do specifically say it’s not for us CSS authors though…

This document collects together into one definition all the specs that together form the current state of Cascading Style Sheets (CSS) as of 2020. The primary audience is CSS implementers, not CSS authors, as this definition includes modules by specification stability, not Web browser adoption rate.

Remember “CSS3”? That was the closest thing we had to a “snapshot” that was designed for CSS authors (and learners). Because CSS3 was so wildly successful, we saw a short round of enthusiasm for CSS4, me included. There is zero marketing panache on that snapshot page, which is exactly what CSS4 would need to succeed. Remember, HTML5 and friends (including CSS3) even had fancy logos!

If someone were to say to me “Chris, when CSS3 came around, I boned up on all that, but I haven’t kept up with CSS since, what should I learn?” I’d say “That’s a damn fine question, developer that has a normal healthy relationship with technology.” But honestly, I might struggle to answer cohesively.

I’d say: Uhm, CSS grid for sure. Custom properties. Clipping and Offset paths I suppose. prefers-reduced-motion. I dunno. There are probably like 100 things, but there is no great single reference point to see them all together.

I’ll work on putting a list together. I don’t think I’ll have the gumption to call it CSS4, but at least I’ll be able to answer that question. Feel free to suggest ideas in the comments.

The post CSS Snapshot 2020 appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

Painters Tape and Fault Tolerance

Css Tricks - Fri, 01/08/2021 - 2:48pm

Snipping the top bit of Nicholas C. Zakas’s Top of the Month newsletter (go sign up!), with permission.

One of my favorite things in the world is painters tape (also called masking tape). It seems like something silly: some tape you put on a wall when you’re painting to avoid getting paint on the wall. The tape doesn’t have a strong adhesive, so it can be pulled back off the wall without damaging it. What I love about painters tape is the philosophy behind it: painting is messy, and rather than trying to avoid making a mess, painters tape allows you to make a mess initially and then clean it up easily. Even the best, most talented painter is going to splatter some paint here and there, get distracted, or otherwise end up with paint going where it shouldn’t. It’s a lot faster, easier, and less frustrating to use painters tape to cover up areas where paint is likely to go and then remove the tape to create a nice, clean, finished area. What does this have to do with software engineering?

Painters tape is all about a concept called fault tolerance. Instead of expecting everything to go well, you instead expect that there will be mistakes. When you expect there to be mistakes, you make decisions not to avoid all mistakes but rather to easily recover when a mistake occurs. Got paint where it shouldn’t be? It doesn’t matter if that spot was covered by painters tape. Forgot to put on the painters tape? Now that mistake is a bigger deal. As software engineers, we can think the same way with the code we write.

Making your code fault tolerant is about asking yourself the question: how will this fail? Not if it will fail, but assuming that it will fail, and in which ways will it fail?
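In JavaScript terms, that mindset might look like this little sketch (the endpoint and the fallback value are made up):

// Assume the request will sometimes fail, and decide the recovery path up front
async function loadPreferences() {
  try {
    const response = await fetch('/api/preferences');
    if (!response.ok) throw new Error(`HTTP ${response.status}`);
    return await response.json();
  } catch {
    // The "painters tape": a safe default instead of a broken UI
    return { theme: 'light' };
  }
}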

The post Painters Tape and Fault Tolerance appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.

`aspect-ratio` is going to deprecate FitVids

Css Tricks - Fri, 01/08/2021 - 11:23am

Jen was just tweetin’ about how the latest Safari Technical Preview has aspect-ratio. Looks like Chrome and Firefox both have it behind a flag, so with Safari joining the party, we’ll all have it soon.

I played with it a while back. It’s awesome and much needed. There are ways to make `aspect-ratio` boxes, but they largely revolve around “padding hacks.”

Dave is excited about being released from jail:

Yesssss! Soon I will be released from my Open Source prison!

Seeing it working in Edge Dev 89 (M1) as well. https://t.co/bSKrWEPQyE

— Dave Rupert (@davatron5000) January 7, 2021

Once we can rely on it, FitVids (which I use on literally every site I make in one form or another) can entirely go away in favor of a handful of CSS applied directly to the elements (usually videos-in-<iframe>s).

FitVids 2021:

CodePen Embed Fallback
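The gist, give or take the exact selectors:

/* A guess at "FitVids 2021": fluid width plus an intrinsic ratio,
   no JavaScript or padding hacks required */
iframe[src*="youtube.com"],
iframe[src*="vimeo.com"] {
  width: 100%;
  height: auto;
  aspect-ratio: 16 / 9;
}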

The post `aspect-ratio` is going to deprecate FitVids appeared first on CSS-Tricks.

You can support CSS-Tricks by being an MVP Supporter.
