Front End Web Development

Better Collaboration With Pull Requests

Css Tricks - Mon, 10/11/2021 - 4:29am

This article is part of our “Advanced Git” series. Be sure to follow us on Twitter or sign up for our newsletter to hear about the next articles!

In this third installment of our “Advanced Git” series, we’ll look at pull requests — a great feature that helps both small and large teams of developers. Pull requests not only improve the review and feedback process, but they also help track and discuss code changes. Last but not least, pull requests are the ideal way to contribute to repositories you don’t have write access to.

What are pull requests?

First of all, it’s important to understand that pull requests are not a core Git feature. Instead, they are provided by the Git hosting platform you’re using: GitHub, GitLab, Bitbucket, Azure DevOps, and others all have this functionality built into their platforms.

Why should I create a pull request?

Before we get into the details of how to create the perfect pull request, let’s talk about why you would want to use this feature at all.

Imagine you’ve just finished a new feature for your software. Maybe you’ve been working in a feature branch, so your next step would be merging it into the mainline branch (master or main). This is totally fine in some cases, for example, if you’re the only developer on the project or if you’re experienced enough and know for certain your team members will be happy about it.

By the way: If you want to know more about branches and typical branching workflows, have a look at our second article in our “Advanced Git” series: “Branching Strategies in Git.”

However, what if your changes are a bit more complex and you’d like someone else to look at your work? This is what pull requests were made for. With pull requests you can invite other people to review your work and give you feedback. 

Once a pull request is open, you can discuss your code with other developers. Most Git hosting platforms allow other users to add comments and suggest changes during that process. After your reviewers have approved your work, it might be merged into another branch.

Having a reviewing workflow is not the only reason for pull requests, though. They come in handy if you want to contribute to other repositories you don’t have write access to. Think of all the open source projects out there: if you have an idea for a new feature, or if you want to submit a patch, pull requests are a great way to present your ideas without having to join the project and become a main contributor.

This brings us to a topic that’s tightly connected to pull requests: forks.

Working with forks

A fork is your personal copy of an existing Git repository. Going back to our open source example: your first step is to create a fork of the original repository. After that, you can change code in your own, personal copy.

After you’re done, you open a pull request to ask the owners of the original repository to include your changes. The owner or one of the other main contributors can review your code and then decide to include it (or not).

Important Note: Pull requests are always based on branches and not on individual commits! When you create a pull request, you base it on a certain branch and request that it gets included.

Making a reviewer’s life easier: How to create a great pull request

As mentioned earlier, pull requests are not a core Git feature. Instead, every Git platform has its own design and its own idea about how a pull request should work. They look different on GitLab, GitHub, Bitbucket, etc. Every platform has a slightly different workflow for tracking, discussing, and reviewing changes.

Desktop GUIs like the Tower Git client, for example, can make this easier: they provide a consistent user interface, no matter what code hosting service you’re using.

Having said that, the general workflow is always the same and includes the following steps (sketched as Git commands just after the list):

  1. If you don’t have write access to the repository in question, the first step is to create a fork, i.e. your personal version of the repository.
  2. Create a new local branch in your forked repository. (Reminder: pull requests are based on branches, not on commits!)
  3. Make some changes in your local branch and commit them.
  4. Push the changes to your own remote repository.
  5. Create a pull request with your changes and start the discussion with others.
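On the command line, that workflow might look something like this (the repository URL and branch name here are placeholders, not a real project):

# 1. Fork the repository on your hosting platform, then clone your fork
git clone git@github.com:your-user/some-project.git
cd some-project

# 2. Create a new local branch (pull requests are based on branches!)
git checkout -b my-feature

# 3. Make changes and commit them
git add .
git commit -m "Describe your change"

# 4. Push the branch to your own remote repository (your fork)
git push -u origin my-feature

# 5. Open the pull request on the hosting platform and start the discussion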

Let’s look at the pull request itself and how to create one which makes another developer’s life easier. First of all, it should be short so it can be reviewed quickly. It’s harder to understand code when looking at 3,000 lines instead of 30 lines. 

Also, make sure to add a good and self-explanatory title and a meaningful description. Try to describe what you changed, why you opened the pull request, and how your changes affect the project. Most platforms allow you to add a screenshot which can help to demonstrate the changes.

Approve, merge, or decline?

Once your changes have been approved, you (or someone with write access) can merge the forked branch into the main branch. But what if the reviewer doesn’t want to merge the pull request in its current state? Well, you can always add new commits, and after pushing that branch, the existing pull request is updated.

Alternatively, the owner or someone else with write access can decline the pull request when they don’t want to merge the changes.

Safety net for developers

As you can see, pull requests are a great way to communicate and collaborate with your fellow developers. By asking others to review your work, you make sure that only high-quality code enters your codebase. 

If you want to dive deeper into advanced Git tools, feel free to check out my (free!) “Advanced Git Kit”: it’s a collection of short videos about topics like branching strategies, Interactive Rebase, Reflog, Submodules and much more.


The Case for ‘Developer Experience’

Css Tricks - Fri, 10/08/2021 - 10:23am

A good essay from Jean Yang.

What I mean by developer experience is the sum total of how developers interface with their tools, end-to-end, day-in and day-out. Sure, there’s more focus than ever on how developers use and adopt tools, and there are entire talks and panels devoted to the topic of so-called “DX” — yet large parts of developer experience are still largely ignored. With developers spending less than a third of their time actually writing code, developer experience includes all the other stuff: maintaining code, testing, security issues, addressing incidents, and more. And many of these aspects of developer experience continue getting ignored because they’re complex, they’re messy, and they don’t have “silver bullet” solutions.

She makes the case that DX has perhaps been generally oversimplified and there are categories of tools that have very different DX:

My major revelation was that there are actually two categories of tools — and therefore, two different categories of developer experience needs: abstraction tools (which assume we code in a vacuum) and complexity-exploring tools (which assume we work in complex environments). Most developer experience until now has been solely focused on the former category of abstraction, where there are more straightforward ways to understand good developer experience than the latter. 

Reminds me of how Shawn thinks:

It’s time we look beyond the easy questions in developer experience and start addressing the uncomfortable ones.


Jekyll doesn’t do components? Liar!

Css Tricks - Fri, 10/08/2021 - 7:14am

I like the pushback from Katie Kodes here. I’ve said in the past that I don’t think server-side languages have quite nailed “building in components” as well as JavaScript has, but hey, this is a good point:

1. Any basic fragment-of-HTML “component” you can define with JSX in a file and then cross-reference as <MyComponent key="value" />, you can just as easily define with Liquid in a file and cross-reference in Jekyll as {% include MyComponent.html key=value %}.

2. Any basic fragment-of-HTML “layout” you can define with JSX in a file and then cross-reference as <MyLayout>Hello, world</MyLayout>, you can just as easily define with Liquid in a file and then cross-reference in the front matter of a Jekyll template as layout: MyLayout.

Any HTML preprocessor that can do partials with local variables is pretty close to replicating the best of stateless JavaScript components. The line gets even blurrier with stuff like Eleventy Serverless that can build individual pages on the fly by hitting the URL of a cloud function.
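To make that include syntax concrete, here’s roughly what a Liquid partial with a local variable looks like in Jekyll (the file and variable names are my own example, not from Katie’s post):

<!-- _includes/MyComponent.html -->
<div class="card">
  <h2>{{ include.title }}</h2>
</div>

<!-- anywhere in a page or layout -->
{% include MyComponent.html title="Hello, world" %}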


Building a Tennis Trivia App With Next.js and Netlify

Css Tricks - Fri, 10/08/2021 - 4:30am

Today we will be learning how to build a tennis trivia app using Next.js and Netlify. This technology stack has become my go-to on many projects. It allows for rapid development and easy deployment.

Without further ado, let’s jump in!

What we’re using
  • Next.js
  • Netlify
  • TypeScript
  • Tailwind CSS

Why Next.js and Netlify

You may think that this is a simple app that might not require a React framework. The truth is that Next.js gives me a ton of features out of the box that allow me to just start coding the main part of my app. Things like webpack configuration, getServerSideProps, and Netlify’s automatic creation of serverless functions are a few examples.

Netlify also makes deploying a Next.js git repo super easy. More on the deployment a bit later on.

What we’re building

Basically, we are going to build a trivia game that randomly shows you the name of a tennis player and you have to guess what country they are from. It consists of five rounds and keeps a running score of how many you get correct.

The data we need for this application is a list of players along with their country. Initially, I was thinking of querying some live API, but on second thought, decided to just use a local JSON file. I took a snapshot from RapidAPI and have included it in the starter repo.

The final product looks something like this:

You can find the final deployed version on Netlify.

Starter repo tour

If you want to follow along you can clone this repository and then go to the start branch:

git clone git@github.com:brenelz/tennis-trivia.git
cd tennis-trivia
git checkout start

In this starter repo, I went ahead and wrote some boilerplate to get things going. I created a Next.js app using the command npx create-next-app tennis-trivia. I then proceeded to manually change a couple of JavaScript files to .ts and .tsx. Surprisingly, Next.js automatically picked up that I wanted to use TypeScript. It was too easy! I also went ahead and configured Tailwind CSS using this article as a guide.

Enough talk, let’s code!

Initial setup

The first step is setting up environment variables. For local development, we do this with a .env.local file. You can copy the .env.sample from the starter repo.

cp .env.sample .env.local

Notice it currently has one value, which is the path of our application. We will use this on the front end of our app, so we must prefix it with NEXT_PUBLIC_.
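For reference, the finished .env.local would likely look something like this (assuming the default local dev server port):

NEXT_PUBLIC_API_URL=http://localhost:3000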

Finally, let’s use the following commands to install the dependencies and start the dev server: 

npm install
npm run dev

Now we access our application at http://localhost:3000. We should see a fairly empty page with just a headline:

Creating the UI markup

In pages/index.tsx, let’s add the following markup to the existing Home() function:

export default function Home() {
  return (
    <div className="bg-blue-500">
      <div className="max-w-2xl mx-auto text-center py-16 px-4 sm:py-20 sm:px-6 lg:px-8">
        <h2 className="text-3xl font-extrabold text-white sm:text-4xl">
          <span className="block">Tennis Trivia - Next.js Netlify</span>
        </h2>
        <div>
          <p className="mt-4 text-lg leading-6 text-blue-200">
            What country is the following tennis player from?
          </p>
          <h2 className="text-lg font-extrabold text-white my-5">
            Roger Federer
          </h2>
          <form>
            <input
              list="countries"
              type="text"
              className="p-2 outline-none"
              placeholder="Choose Country"
            />
            <datalist id="countries">
              <option>Switzerland</option>
            </datalist>
            <p>
              <button
                className="mt-8 w-full inline-flex items-center justify-center px-5 py-3 border border-transparent text-base font-medium rounded-md text-blue-600 bg-white hover:bg-blue-50 sm:w-auto"
                type="submit"
              >
                Guess
              </button>
            </p>
          </form>
          <p className="mt-4 text-lg leading-6 text-white">
            <strong>Current score:</strong> 0
          </p>
        </div>
      </div>
    </div>
  );
}

This forms the scaffold for our UI. As you can see, we are using lots of utility classes from Tailwind CSS to make things look a little prettier. We also have a simple autocomplete input and a submit button. This is where you will select the country you think the player is from and then hit the button. Lastly, at the bottom, there is a score that changes based on correct or incorrect answers.

Setting up our data

If you take a look at the data folder, there should be a tennisPlayers.json file with all the data we will need for this application. Create a lib folder at the root and, inside of it, create a players.ts file. Remember, the .ts extension is required since it’s a TypeScript file. Let’s define a type that matches our JSON data.

export type Player = {
  id: number;
  first_name: string;
  last_name: string;
  full_name: string;
  country: string;
  ranking: number;
  movement: string;
  ranking_points: number;
};

This is how we create a type in TypeScript. We have the name of the property on the left, and the type it is on the right. They can be basic types, or even other types themselves.

From here, let’s create specific variables that represent our data:

export const playerData: Player[] = require("../data/tennisPlayers.json");
export const top100Players = playerData.slice(0, 100);

const allCountries = playerData.map((player) => player.country).sort();
export const uniqueCountries = [...Array.from(new Set(allCountries))];

A couple of things to note: we are saying that our playerData is an array of Player types, which is denoted by the colon followed by the type. In fact, if we hover over playerData, we can see its type:

In that last line we are getting a unique list of countries to use in our country dropdown. We pass our countries into a JavaScript Set, which gets rid of the duplicate values. We then create an array from it, and spread it into a new array. It may seem unnecessary but this was done to make TypeScript happy.

Believe it or not, that is really all the data we need for our application!

Let’s make our UI dynamic!

All our values are hardcoded currently, but let’s change that. The dynamic pieces are the tennis player’s name, the list of countries, and the score.

Back in pages/index.tsx, let’s modify our getServerSideProps function to create a list of five random players as well as pull in our uniqueCountries variable.

import { Player, uniqueCountries, top100Players } from "../lib/players";
...
export async function getServerSideProps() {
  const randomizedPlayers = top100Players.sort((a, b) => 0.5 - Math.random());
  const players = randomizedPlayers.slice(0, 5);

  return {
    props: {
      players,
      countries: uniqueCountries,
    },
  };
}

Whatever is in the props object we return will be passed to our React component. Let’s use them on our page:

type HomeProps = {
  players: Player[];
  countries: string[];
};

export default function Home({ players, countries }: HomeProps) {
  const player = players[0];
  ...
}

As you can see, we define another type for our page component. Then we add the HomeProps type to the Home() function. We have again specified that players is an array of the Player type.

Now we can use these props further down in our UI. Replace “Roger Federer” with {player.full_name} (he’s my favorite tennis player by the way). You should be getting nice autocompletion on the player variable as it lists all the property names we have access to because of the types that we defined.

Further down from this, let’s now update the list of countries to this:

<datalist id="countries"> {countries.map((country, i) => ( <option key={i}>{country}</option> ))} </datalist>

Now that we have two of the three dynamic pieces in place, we need to tackle the score. Specifically, we need to create a piece of state for the current score.

export default function Home({ players, countries }: HomeProps) {
  const [score, setScore] = useState(0);
  ...
}

Once this is done, replace the 0 with {score} in our UI.

You can now check our progress by going to http://localhost:3000. You can see that every time the page refreshes, we get a new name; and when typing in the input field, it lists all of the available unique countries.

Adding some interactivity

We’ve come a decent way but we need to add some interactivity.

Hooking up the guess button

For this we need to have some way of knowing what country was picked. We do this by adding some more state and attaching it to our input field.

export default function Home({ players, countries }: HomeProps) {
  const [score, setScore] = useState(0);
  const [pickedCountry, setPickedCountry] = useState("");
  ...
  return (
    ...
    <input
      list="countries"
      type="text"
      value={pickedCountry}
      onChange={(e) => setPickedCountry(e.target.value)}
      className="p-2 outline-none"
      placeholder="Choose Country"
    />
    ...
  );
}

Next, let’s add a guessCountry function and attach it to the form submission:

const guessCountry = () => {
  if (player.country.toLowerCase() === pickedCountry.toLowerCase()) {
    setScore(score + 1);
  } else {
    alert("incorrect");
  }
};
...
<form
  onSubmit={(e) => {
    e.preventDefault();
    guessCountry();
  }}
>

All we do is basically compare the current player’s country to the guessed country. Now, when we go back to the app and guess the country right, the score increases as expected.

Adding a status indicator

To make this a bit nicer, we can render some UI depending whether the guess is correct or not.

So, let’s create another piece of state for status, and update the guessCountry function:

// Typed explicitly so TypeScript lets us store either null or a status object
const [status, setStatus] = useState<{ status: string; country: string } | null>(null);
...
const guessCountry = () => {
  if (player.country.toLowerCase() === pickedCountry.toLowerCase()) {
    setStatus({ status: "correct", country: player.country });
    setScore(score + 1);
  } else {
    setStatus({ status: "incorrect", country: player.country });
  }
};

Then render this UI below the player name:

{status && (
  <div className="mt-4 text-lg leading-6 text-white">
    <p>
      You are {status.status}. It is {status.country}
    </p>
    <p>
      <button
        autoFocus
        className="outline-none mt-8 w-full inline-flex items-center justify-center px-5 py-3 border border-transparent text-base font-medium rounded-md text-blue-600 bg-white hover:bg-blue-50 sm:w-auto"
      >
        Next Player
      </button>
    </p>
  </div>
)}

Lastly, we want to make sure our input field doesn’t show when we are in a correct or incorrect status. We achieve this by wrapping the form with the following:

{!status && (
  <form>
    ...
  </form>
)}

Now, if we go back to the app and guess the player’s country, we get a nice message with the result of the guess.

Progressing through players

Now probably comes the most challenging part: How do we go from one player to the next?

The first thing we need to do is store the currentStep in state so that we can update it with a number from 0 to 4. Then, when it hits 5, we want to show a completed state since the trivia game is over.

Once again, let’s add the following state variables:

const [currentStep, setCurrentStep] = useState(0);
const [playersData, setPlayersData] = useState(players);

…then replace our previous player variable with:

const player = playersData[currentStep];

Next, we create a nextStep function and hook it up to the UI:

const nextStep = () => {
  setPickedCountry("");
  setCurrentStep(currentStep + 1);
  setStatus(null);
};
...
<button
  autoFocus
  onClick={nextStep}
  className="outline-none mt-8 w-full inline-flex items-center justify-center px-5 py-3 border border-transparent text-base font-medium rounded-md text-blue-600 bg-white hover:bg-blue-50 sm:w-auto"
>
  Next Player
</button>

Now, when we make a guess and hit the next step button, we’re taken to a new tennis player. Guess again and we see the next, and so on. 

What happens when we hit next on the last player? Right now, we get an error. Let’s fix that by adding a conditional that represents that the game has been completed. This happens when the player variable is undefined.

{player ? (
  <div>
    <p className="mt-4 text-lg leading-6 text-blue-200">
      What country is the following tennis player from?
    </p>
    ...
    <p className="mt-4 text-lg leading-6 text-white">
      <strong>Current score:</strong> {score}
    </p>
  </div>
) : (
  <div>
    <button
      autoFocus
      className="outline-none mt-8 w-full inline-flex items-center justify-center px-5 py-3 border border-transparent text-base font-medium rounded-md text-indigo-600 bg-white hover:bg-indigo-50 sm:w-auto"
    >
      Play Again
    </button>
  </div>
)}

Now we see a nice completed state at the end of the game.

Play again button

We are almost done! For our “Play Again” button, we want to reset all of the game state. We also want to get a new list of players from the server without needing a refresh. We do it like this:

const playAgain = async () => {
  setPickedCountry("");
  setPlayersData([]);
  const response = await fetch(
    process.env.NEXT_PUBLIC_API_URL + "/api/newGame"
  );
  const data = await response.json();
  setPlayersData(data.players);
  setCurrentStep(0);
  setScore(0);
};

<button
  autoFocus
  onClick={playAgain}
  className="outline-none mt-8 w-full inline-flex items-center justify-center px-5 py-3 border border-transparent text-base font-medium rounded-md text-indigo-600 bg-white hover:bg-indigo-50 sm:w-auto"
>
  Play Again
</button>

Notice we are using the environment variable we set up before via the process.env object. We are also updating our playersData by overriding our server state with our client state that we just retrieved.

We haven’t filled out our newGame route yet, but this is easy with Next.js and Netlify serverless functions. We only need to edit the file in pages/api/newGame.ts.

import { NextApiRequest, NextApiResponse } from "next";
import { top100Players } from "../../lib/players";

export default (req: NextApiRequest, res: NextApiResponse) => {
  const randomizedPlayers = top100Players.sort((a, b) => 0.5 - Math.random());
  const top5Players = randomizedPlayers.slice(0, 5);
  res.status(200).json({ players: top5Players });
};

This looks much the same as our getServerSideProps because we can reuse our nice helper variables.

If we go back to the app, notice the “Play Again” button works as expected.

Improving focus states

One last thing we can do to improve our user experience is set the focus on the country input field every time the step changes. That’s just a nice touch and convenient for the user. We do this using a ref and a useEffect:

// Typed as an HTMLInputElement so .focus() is available on .current
const inputRef = useRef<HTMLInputElement>(null);
...
useEffect(() => {
  inputRef?.current?.focus();
}, [currentStep]);

<input
  list="countries"
  type="text"
  value={pickedCountry}
  onChange={(e) => setPickedCountry(e.target.value)}
  ref={inputRef}
  className="p-2 outline-none"
  placeholder="Choose Country"
/>

Now we can navigate much easier just using the Enter key and typing a country.

Deploying to Netlify

You may be wondering how we deploy this thing. Well, using Netlify makes it so simple as it detects a Next.js application out of the box and automatically configures it.

All I did was set up a GitHub repo and connect my GitHub account to my Netlify account. From there, I simply pick a repo to deploy and use all the defaults.

The one thing to note is that you have to add the NEXT_PUBLIC_API_URL environment variable and redeploy for it to take effect.

You can find my final deployed version here.

Also note that you can just hit the “Deploy to Netlify” button on the GitHub repo.

Conclusion

Woohoo, you made it! That was a journey and I hope you learned something about React, Next.js, and Netlify along the way.

I have plans to expand this tennis trivia app to use Supabase in the near future so stay tuned!

If you have any questions/comments feel free to reach out to me on Twitter.


Comparing Google Analytics and Plausible Numbers

Css Tricks - Fri, 10/08/2021 - 4:30am

I saw this blog post the other day: 58% of Hacker News, Reddit and tech-savvy audiences block Google Analytics. That’s an enticing title to me. I’ve had Google Analytics on this site literally from the day I launched it. While we tend to see some small growth each year, I’m also aware that the ad-blocking usage (and general third-party script blocking) goes up over time. So maybe we have more growth than we can easily visualize in Google Analytics?

The level of Google Analytics blockage varies by industry, audience, the device used and the individual website. In a previous study, I’ve found that less than 10% of visitors block Google Analytics on foodie and lifestyle sites but more than 25% block it on tech-focused sites.

Marko Saric

Marko looked at three days of data on his own website when he had a post trending on both Hacker News and Reddit, and that’s where the 58% came from: Plausible reported 50.9k unique visitors while Google Analytics reported 21.1k, which works out to (50.9k − 21.1k) / 50.9k ≈ 58% blocked. (Google Analytics calls them “Users.”)

I had to try this myself. If we’re literally getting double the traffic Google Analytics says we are, I’d really like to know that. So for the last week, I’ve had Plausible installed on this site (they have a generous unlimited 30-day free trial).

Here’s the Plausible data:

And here’s the Google Analytics data for the same exact period:

It’ll be easier to digest in a table:

Metric             Plausible   Google Analytics
Unique Visitors    973k        841k
Pageviews          1.4m        1.5m
Bounce Rate        82%         82%
Visit Duration     1m 31s      1m 24s

So… the picture isn’t exactly clear. On one hand, Plausible reports 15% more unique visitors than Google Analytics. Definitely not double, but more! But then, as far as raw traffic is concerned (which actually matters more to me as a site that has ads), Plausible reports 5% less traffic. So it’s still fairly unclear to me what the real story is.

I do think Plausible is a pretty nice bit of software. Easy to install. Simple and nice charts. Very affordable.

Plausible is also third-party JavaScript

I’m sure fewer people block Plausible’s JavaScript since, basically, it’s less-known and not Google. But it’s still third-party JavaScript and just as easily blockable as anything else. It’s not a fundamentally different way to measure traffic.

What is fundamentally different is something like Netlify Analytics, where the data comes from server logs that are not blockable (we’ve mentioned this before). That has its own issues, like the fact that Netlify counts bots and Google Analytics doesn’t. Maybe the best bet is something like server-side Google Analytics? That’s just way more technical debt than I’m ready for. Certainly a lot harder than slapping a script tag on the page.

Multiple sources

It felt weird to have multiple analytics trackers on the site at the same time. If I was doing things like tracking custom events, that would get even weirder. If I was going to continue doing it all client-side, I’d probably reach for something like Analytics, which is designed to abstract that. But I prefer the idea of sending the data from the client once and then, if it has to go multiple places, doing that server-side. That’s basically the entire premise of Segment, which is expensive, but you can self-host or probably write your own without too much trouble. It just seems smart to me to keep less data going over the wire, and to have it be first-party JavaScript.


Writing Your Own Code Rules

Css Tricks - Thu, 10/07/2021 - 1:35pm

There comes a time on a project when it’s worth investing in tooling to protect the codebase. I’m not sure how to articulate when, but it’s somewhere after the project has proven to be something long-term and rough edges are starting to show, and before things feel like a complete mess. Avoid premature optimization but avoid, uh, postmature optimization.

Some of this tooling is so easy to implement, it is often done right up-front. I think of Prettier here, a code formatter that keeps your code in shape, usually right as you are coding. There are whole suites of tools you can put in that “as-you-are-coding” bucket, like accessibility linting, compatibility linting, security linting, etc. Webhint bundles a bunch of those together and is probably worth a look.

Then there is tooling that protects your code via more code that you have to write. Tests are the big player here, which can even be set up to run as you code. They are about making sure your code does what it is meant to do, and as such, deliver a hell of a lot of value.

Protecting your code with more code that you write is where I wanted to go with this, not with traditional tests, but with custom linting rules. I thought about it as two different posts about custom linting crossed my desk recently:

I was interested as a user of both ESLint and Stylelint in my main codebase. But fair warning: I found the process for writing custom rules in both of those pretty difficult. You gotta know your way around an Abstract Syntax Tree. It’s nothing like if (rules.find.selector.startsWith("old")) throw("Deprecated selector.") or something easy like that.
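To give a taste of what it does look like, here’s a minimal sketch of a custom ESLint rule that flags string literals containing a deprecated class name (the rule shape follows ESLint’s plugin API; the file and message names are my own):

// eslint-rules/no-deprecated-selector.js
module.exports = {
  meta: {
    type: "problem",
    messages: {
      deprecated: "'{{ value }}' contains a deprecated selector.",
    },
  },
  create(context) {
    return {
      // Visitor: called for every string literal node in the AST
      Literal(node) {
        if (
          typeof node.value === "string" &&
          node.value.includes("deprecated-selector")
        ) {
          context.report({
            node,
            messageId: "deprecated",
            data: { value: node.value },
          });
        }
      },
    };
  },
};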

I found this all related to an interesting question that came my way:

I work on a development team working on an old project, and we want to get rid of many of our oldest and buggiest CSS selectors. For example, one of us might open an HTML file and see an element with a class name of deprecated-selector. Our goal is to have our IDE literally mark it as a linting error and say something like “This is a deprecated selector, use .ui-fresh__selector instead”.

The first thing I thought of was a custom Stylelint rule that would look for selectors your team knows to be deprecated and warn you. But unfortunately, Stylelint is for linting CSS, and it sounds like the main issue here is HTML. I know html-inspector had a way to write your own rules, but it’s getting a bit long in the tooth, so I don’t know if there is success to be found there or not.
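For the CSS side of the idea, at least, a custom Stylelint plugin could look roughly like this (a sketch only; the rule name and the shape of the options object are my own invention):

// stylelint-no-deprecated-selectors.js
const stylelint = require("stylelint");

const ruleName = "myorg/no-deprecated-selectors";
const messages = stylelint.utils.ruleMessages(ruleName, {
  rejected: (oldSel, newSel) =>
    `"${oldSel}" is deprecated, use "${newSel}" instead.`,
});

// The option is a map of deprecated selectors to their replacements
module.exports = stylelint.createPlugin(ruleName, (replacements) => {
  return (root, result) => {
    // Walk every rule in the stylesheet and inspect its selector
    root.walkRules((rule) => {
      for (const [oldSel, newSel] of Object.entries(replacements)) {
        if (rule.selector.includes(oldSel)) {
          stylelint.utils.report({
            ruleName,
            result,
            node: rule,
            message: messages.rejected(oldSel, newSel),
          });
        }
      }
    });
  };
});

module.exports.ruleName = ruleName;
module.exports.messages = messages;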


Developer Decisions For Building Flexible Components

Css Tricks - Thu, 10/07/2021 - 11:33am

Blog posts that get into the whole “how to think like a front-end developer” vibe are my favorite. Michelle Barker nails that in this post, and does it without sharing a line of code!

We simply can no longer design and develop only for “optimal” content or browsing conditions. Instead, we must embrace the inherent flexibility and unpredictability of the web, and build resilient components. Static mockups cannot cater to every scenario, so many design decisions fall to developers at build time. Like it or not, if you’re a UI developer, you are a designer — even if you don’t consider yourself one!

There are a lot of unknowns in front-end development. Much longer than my little list. Content of unknown size and length is certainly one of them. Then square the possibilities with every component variation while ensuring good accessibility and you’ve got, well, a heck of a job to do.


CSS in TypeScript with vanilla-extract

Css Tricks - Thu, 10/07/2021 - 4:36am

vanilla-extract is a new framework-agnostic CSS-in-TypeScript library. It’s a lightweight, robust, and intuitive way to write your styles. vanilla-extract isn’t a prescriptive CSS framework, but a flexible piece of developer tooling. CSS tooling has been a relatively stable space over the last few years with PostCSS, Sass, CSS Modules, and styled-components all coming out before 2017 (some long before that) and they remain popular today. Tailwind is one of a few tools that has shaken things up in CSS tooling over the last few years.

vanilla-extract aims to shake things up again. It was released this year and has the benefit of being able to leverage some recent trends, including:

  • JavaScript developers switching to TypeScript
  • Browser support for CSS custom properties
  • Utility-first styling

There are a whole bunch of clever innovations in vanilla-extract that I think make it a big deal.

Zero runtime

CSS-in-JS libraries usually inject styles into the document at runtime. This has benefits, including critical CSS extraction and dynamic styling.

But as a general rule of thumb, a separate CSS file is going to be more performant. That’s because JavaScript code needs to go through more expensive parsing/compilation, whereas a separate CSS file can be cached while the HTTP2 protocol lowers the cost of the extra request. Also, custom properties can now provide a lot of dynamic styling for free.

So, instead of injecting styles at runtime, vanilla-extract takes after Linaria and astroturf. These libraries let you author styles using JavaScript functions that get ripped out at build time and used to construct a CSS file. Although you write vanilla-extract in TypeScript, it doesn’t affect the overall size of your production JavaScript bundle.

TypeScript

A big vanilla-extract value proposition is that you get typing. If it’s important enough to keep the rest of your codebase type-safe, then why not do the same with your styles?

TypeScript provides a number of benefits. First, there’s autocomplete. If you type “fo” in a TypeScript-friendly editor, you get a list of font options in a dropdown — fontFamily, fontKerning, fontWeight, or whatever else matches — to choose from. This makes CSS properties discoverable from the comfort of your editor. If you can’t remember the name of fontVariant but know it’s going to start with the word “font,” you can type it and scroll through the options. In VS Code, you don’t need to download any additional tooling to get this to happen.

This really speeds up the authoring of styles:

It also means your editor is watching over your shoulder to make sure you aren’t making any spelling mistakes that could cause frustrating bugs.

vanilla-extract types also provide an explanation of the syntax in their type definition and a link to the MDN documentation for the CSS property you’re editing. This removes a step of frantically Googling when styles are behaving unexpectedly.

Writing in TypeScript means you’re using camel-case names for CSS properties, like backgroundColor. This might be a bit of a change for developers who are used to regular CSS syntax, like background-color.

Integrations

vanilla-extract provides first-class integrations for all the newest bundlers. Here’s a full list of integrations it currently supports:

  • webpack
  • esbuild
  • Vite
  • Snowpack
  • NextJS
  • Gatsby

It’s also completely framework-agnostic. All you need to do is import class names from vanilla-extract, which get converted into strings at build time.

Usage

To use vanilla-extract, you write up a .css.ts file that your components can import. Calls to these functions get converted to hashed and scoped class name strings in the build step. This might sound similar to CSS Modules, and that’s no coincidence: one of the creators of vanilla-extract, Mark Dalgleish, is also a co-creator of CSS Modules.

style()

You can create an automatically scoped CSS class using the style() function. You pass in the element’s styles, then export the returned value. Import this value somewhere in your user code, and it’s converted into a scoped class name.

// title.css.ts
import { style } from "@vanilla-extract/css";

export const titleStyle = style({
  backgroundColor: "hsl(210deg,30%,90%)",
  fontFamily: "helvetica, Sans-Serif",
  color: "hsl(210deg,60%,25%)",
  padding: 30,
  borderRadius: 20,
});

// title.ts
import { titleStyle } from "./title.css";

document.getElementById("root").innerHTML = `<h1 class="${titleStyle}">Vanilla Extract</h1>`;

Media queries and pseudo selectors can be included inside style declarations, too:

// title.css.ts
backgroundColor: "hsl(210deg,30%,90%)",
fontFamily: "helvetica, Sans-Serif",
color: "hsl(210deg,60%,25%)",
padding: 30,
borderRadius: 20,
"@media": {
  "screen and (max-width: 700px)": {
    padding: 10,
  },
},
":hover": {
  backgroundColor: "hsl(210deg,70%,80%)",
},

These style function calls are a thin abstraction over CSS — all of the property names and values map to the CSS properties and values you’re familiar with. One change to get used to is that values can sometimes be declared as a number (e.g. padding: 30) which defaults to a pixel unit value, while some values need to be declared as a string (e.g. padding: "10px 20px 15px 15px").

The properties that go inside the style function can only affect a single HTML node. This means you can’t use nesting to declare styles for the children of an element — something you might be used to in Sass or PostCSS. Instead, you need to style children separately. If a child element needs different styles based on the parent, you can use the selectors property to add styles that are dependent on the parent:

// title.css.ts
export const innerSpan = style({
  selectors: {
    [`${titleStyle} &`]: {
      color: "hsl(190deg,90%,25%)",
      fontStyle: "italic",
      textDecoration: "underline",
    },
  },
});

// title.ts
import { titleStyle, innerSpan } from "./title.css";

document.getElementById("root").innerHTML = `<h1 class="${titleStyle}">Vanilla <span class="${innerSpan}">Extract</span></h1>
<span class="${innerSpan}">Unstyled</span>`;

Or you can also use the Theming API (which we’ll get to next) to create custom properties in the parent element that are consumed by the child nodes. This might sound restrictive, but it’s intentionally been left this way to increase maintainability in larger codebases. It means that you’ll know exactly where the styles have been declared for each element in your project.

Theming

You can use the createTheme function to build out variables in a TypeScript object:

// title.css.ts
import { style, createTheme } from "@vanilla-extract/css";

// Creating the theme
export const [mainTheme, vars] = createTheme({
  color: {
    text: "hsl(210deg,60%,25%)",
    background: "hsl(210deg,30%,90%)",
  },
  lengths: {
    mediumGap: "30px",
  },
});

// Using the theme
export const titleStyle = style({
  backgroundColor: vars.color.background,
  color: vars.color.text,
  fontFamily: "helvetica, Sans-Serif",
  padding: vars.lengths.mediumGap,
  borderRadius: 20,
});

Then vanilla-extract allows you to make a variant of your theme. TypeScript helps it ensure that your variant uses all the same property names, so you get a warning if you forget to add the background property to the theme.

This is how you might create a regular theme and a dark mode:

// title.css.ts
import { style, createTheme } from "@vanilla-extract/css";

export const [mainTheme, vars] = createTheme({
  color: {
    text: "hsl(210deg,60%,25%)",
    background: "hsl(210deg,30%,90%)",
  },
  lengths: {
    mediumGap: "30px",
  },
});

// Theme variant - note this part does not use the array syntax
export const darkMode = createTheme(vars, {
  color: {
    text: "hsl(210deg,60%,80%)",
    background: "hsl(210deg,30%,7%)",
  },
  lengths: {
    mediumGap: "30px",
  },
});

// Consuming the theme
export const titleStyle = style({
  backgroundColor: vars.color.background,
  color: vars.color.text,
  fontFamily: "helvetica, Sans-Serif",
  padding: vars.lengths.mediumGap,
  borderRadius: 20,
});

Then, using JavaScript, you can dynamically apply the class names returned by vanilla-extract to switch themes:

// title.ts
import { titleStyle, mainTheme, darkMode } from "./title.css";

document.getElementById("root").innerHTML = `<div class="${mainTheme}" id="wrapper">
  <h1 class="${titleStyle}">Vanilla Extract</h1>
  <button onClick="document.getElementById('wrapper').className='${darkMode}'">Dark mode</button>
</div>`;

How does this work under the hood? The objects you declare in the createTheme function are turned into CSS custom properties attached to the element’s class. These custom properties are hashed to prevent conflicts. The output CSS for our mainTheme example looks like this:

.src__ohrzop0 {
  --color-brand__ohrzop1: hsl(210deg,80%,25%);
  --color-text__ohrzop2: hsl(210deg,60%,25%);
  --color-background__ohrzop3: hsl(210deg,30%,90%);
  --lengths-mediumGap__ohrzop4: 30px;
}

And the CSS output of our darkMode theme looks like this:

.src__ohrzop5 {
  --color-brand__ohrzop1: hsl(210deg,80%,60%);
  --color-text__ohrzop2: hsl(210deg,60%,80%);
  --color-background__ohrzop3: hsl(210deg,30%,10%);
  --lengths-mediumGap__ohrzop4: 30px;
}

So, all we need to change in our user code is the class name. Apply the darkMode class name to the parent element, and the mainTheme custom properties get swapped out for the darkMode ones.

Recipes API

The style and createTheme functions provide enough power to style a website on their own, but vanilla-extract provides a few extra APIs to promote reusability. The Recipes API allows you to create a bunch of variants for an element, which you can choose from in your markup or user code.

First, it needs to be separately installed:

npm install @vanilla-extract/recipes

Here’s how it works. You import the recipe function and pass in an object with the properties base and variants:

// button.css.ts
import { recipe } from "@vanilla-extract/recipes";

export const buttonStyles = recipe({
  base: {
    // Styles that get applied to ALL buttons go in here
  },
  variants: {
    // Styles that we choose from go in here
  },
});

Inside base, you can declare the styles that will be applied to all variants. Inside variants, you can provide different ways to customize the element:

// button.css.ts
import { recipe } from "@vanilla-extract/recipes";

export const buttonStyles = recipe({
  base: {
    fontWeight: "bold",
  },
  variants: {
    color: {
      normal: {
        backgroundColor: "hsl(210deg,30%,90%)",
      },
      callToAction: {
        backgroundColor: "hsl(210deg,80%,65%)",
      },
    },
    size: {
      large: {
        padding: 30,
      },
      medium: {
        padding: 15,
      },
    },
  },
});

Then you can declare which variant you want to use in the markup:

// button.ts
import { buttonStyles } from "./button.css";

document.getElementById("root").innerHTML = `<button class="${buttonStyles({
  color: "normal",
  size: "medium",
})}">Click me</button>`;

And vanilla-extract leverages TypeScript to give you autocomplete for your own variant names!

You can name your variants whatever you like, and put whatever properties you want in them, like so:

// button.css.ts
export const buttonStyles = recipe({
  variants: {
    animal: {
      dog: {
        backgroundImage: 'url("./dog.png")',
      },
      cat: {
        backgroundImage: 'url("./cat.png")',
      },
      rabbit: {
        backgroundImage: 'url("./rabbit.png")',
      },
    },
  },
});

You can see how this would be incredibly useful for building a design system, as you can create reusable components and control the ways they vary. These variations become easily discoverable with TypeScript — all you need to type is CMD/CTRL + Space (on most editors) and you get a dropdown list of the different ways to customize your component.

Utility-first with Sprinkles

Sprinkles is a utility-first framework built on top of vanilla-extract. This is how the vanilla-extract docs describe it:

Basically, it’s like building your own zero-runtime, type-safe version of Tailwind, Styled System, etc.

So if you’re not a fan of naming things (we all have nightmares of creating an outer-wrapper div, then realising we need to wrap it with an… outer-outer-wrapper), Sprinkles might be your preferred way to use vanilla-extract.

The Sprinkles API also needs to be separately installed:

npm install @vanilla-extract/sprinkles

Now we can create some building blocks for our utility functions to use. Let’s create a list of colors and lengths by declaring a couple of objects. The JavaScript key names can be whatever we want. The values will need to be valid CSS values for the CSS properties we plan to use them for:

// sprinkles.css.ts
const colors = {
  blue100: "hsl(210deg,70%,15%)",
  blue200: "hsl(210deg,60%,25%)",
  blue300: "hsl(210deg,55%,35%)",
  blue400: "hsl(210deg,50%,45%)",
  blue500: "hsl(210deg,45%,55%)",
  blue600: "hsl(210deg,50%,65%)",
  blue700: "hsl(207deg,55%,75%)",
  blue800: "hsl(205deg,60%,80%)",
  blue900: "hsl(203deg,70%,85%)",
};

const lengths = {
  small: "4px",
  medium: "8px",
  large: "16px",
  humungous: "64px",
};

We can declare which CSS properties these values are going to apply to by using the defineProperties function:

  • Pass it an object with a properties property.
  • In properties, we declare an object where the keys are the CSS properties the user can set (these need to be valid CSS properties) and the values are the objects we created earlier (our lists of colors and lengths).
// sprinkles.css.ts
import { defineProperties } from "@vanilla-extract/sprinkles";

const colors = {
  blue100: "hsl(210deg,70%,15%)",
  // etc.
};

const lengths = {
  small: "4px",
  // etc.
};

const properties = defineProperties({
  properties: {
    // The keys of this object need to be valid CSS properties
    // The values are the options we provide the user
    color: colors,
    backgroundColor: colors,
    padding: lengths,
  },
});

Then the final step is to pass the return value of defineProperties to the createSprinkles function, and export the returned value:

// sprinkles.css.ts
import { defineProperties, createSprinkles } from "@vanilla-extract/sprinkles";

const colors = {
  blue100: "hsl(210deg,70%,15%)",
  // etc.
};

const lengths = {
  small: "4px",
  // etc.
};

const properties = defineProperties({
  properties: {
    color: colors,
    // etc.
  },
});

export const sprinkles = createSprinkles(properties);

Then we can start styling inside our components inline by calling the sprinkles function in the class attribute and choosing which options we want for each element.

// index.ts
import { sprinkles } from "./sprinkles.css";

document.getElementById("root").innerHTML = `<button class="${sprinkles({
  color: "blue200",
  backgroundColor: "blue800",
  padding: "large",
})}">Click me</button>`;

The JavaScript output holds a class name string for each style property. These class names match a single rule in the output CSS file.

<button class="src_color_blue200__ohrzop1 src_backgroundColor_blue800__ohrzopg src_padding_large__ohrzopk">Click me</button>

As you can see, this API allows you to style elements inside your markup using a set of pre-defined constraints. You also avoid the difficult task of coming up with names of classes for every element. The result is something that feels a lot like Tailwind, but also benefits from all the infrastructure that has been built around TypeScript.

The Sprinkles API also allows you to write conditions and shorthands to create responsive styles using utility classes.
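A rough sketch of what that looks like (the breakpoint and property choices here are mine, not from the docs):

// sprinkles.css.ts
const responsiveProperties = defineProperties({
  conditions: {
    mobile: {},
    desktop: { "@media": "screen and (min-width: 768px)" },
  },
  defaultCondition: "mobile",
  properties: {
    paddingLeft: lengths,
    paddingRight: lengths,
  },
  shorthands: {
    // paddingX sets both horizontal paddings at once
    paddingX: ["paddingLeft", "paddingRight"],
  },
});

export const sprinkles = createSprinkles(responsiveProperties);

// Usage: a different padding per breakpoint
// sprinkles({ paddingX: { mobile: "small", desktop: "humungous" } })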

Wrapping up

vanilla-extract feels like a big new step in CSS tooling. A lot of thought has been put into building it into an intuitive, robust solution for styling that utilizes all of the power that static typing provides.


A Themeable React Data Grid With Great UX-Focused Features

Css Tricks - Thu, 10/07/2021 - 4:35am

(This is a sponsored post.)

KendoReact can save you boatloads of time because it offers pre-built componentry you can use in your app right away. They look nice, but more importantly, they are easily themeable, so they look however you need them to look. And I’d say the looks aren’t even the important part. There are lots of component libraries out there that focus on the visuals. These components tackle the hardest interactivity problems in UI/UX, and do it with grace, speed, and accessibility in mind.

Let’s take a look at their React Data Grid component.

The ol’ <table> element is the right tool for the job for data grids, but a table doesn’t offer most of the features that make for a good data browsing experience. If we use the KendoReact <Grid /> component (and friends), we get an absolute ton of extra features, any one of which is non-trivial to pull off nicely, and all together make for an extremely compelling solution. Let’s go through a list of what you get.

Sortable Columns

You’ll surely pick a default ordering for your data, but if any given row of data has things like IDs, dates, or names, it’s perfectly likely that a user would want to sort the column by that data. Perhaps they want to view the oldest orders, or the orders of the highest total value. HTML does not help with ordering in tables, so this is table stakes (get it?!) for a JavaScript library for data grids, and it’s perfectly handled here.

Pagination and Limits

When you have any more than, say, a few dozen rows of data, it’s common that you want to paginate it. That way users don’t have to scroll as much, and equally importantly, it keeps the page fast by not making the DOM too enormous. One of the problems with pagination though is it makes things like sorting harder! You can’t just sort the 20 rows you can see, it is expected that the entire data set gets sorted. Of course that’s handled in KendoReact’s Data Grid component.
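As a rough sketch of how that wiring tends to look (the prop names follow the KendoReact Grid docs, but the order data and state handling here are hypothetical):

import { Grid, GridColumn } from "@progress/kendo-react-grid";

// The grid receives one page of data plus paging metadata,
// and reports page changes through an event
<Grid
  data={orders.slice(skip, skip + take)}
  skip={skip}
  take={take}
  total={orders.length}
  pageable={true}
  sortable={true}
  onPageChange={(e) => setSkip(e.page.skip)}
>
  <GridColumn field="orderID" title="ID" />
  <GridColumn field="customer" title="Customer" />
</Grid>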

Or, if pagination isn’t your thing, the data grid offers virtualized scrolling — in both the column and row directions. That’s a nice touch as the data loads quickly for smooth, natural scrolling.

Expandable Rows

A data grid might have a bunch of data visible across the row itself, but there might be even more data that a user might want to dig out of an entry once they find it. Perhaps it is data that doesn’t need to be cross-referenced in the same way column data is. This can be tricky to pull off, because of the way table cells are laid out. The data is still associated with a single row, but you often need more room than the width of one cell offers. With the KendoReact Data Grid component, you can pass in a detail prop with an arbitrary React component to show when a row is expanded. Super flexible!

Notice how the expanded details can have their own <Grid /> inside!

Responsive Design

Perhaps the most notoriously difficult thing to pull off with <table> designs is how to display them on small screens. Zooming out isn’t very good UX, nor is collapsing the table into something non-table-like. The thing about data grids is that they are all different, and you’ll know data is most important to your users best. The KendoReact Data Grid component helps with this by making your data grid scrollable/swipeable, and also being able to lock columns to make sure they continue to be easy to find and cross-reference.

Filtering Data

This is perhaps my favorite feature just because of how UX-focused it is. Imagine you’re looking at a big data grid of orders, and you’re like “Let me see all orders from White Clover Markets.” With a filtering feature, perhaps you quickly type “clover” into the filter input, and voilà, all those orders are right there. That’s extra tricky stuff when you’re also supporting ordering and pagination — so it’s great all these features work together.

Grouping Data

Now this feature actually blows my mind 🤯 a little bit. Filtering and sorting are both very useful, but in some cases, they leave a little bit to be desired. For example, it’s easy to filter too far too quickly, leaving the data you are looking at very limited. And with sorting, you might be trying to look at a subset of data as well, but it’s up to your brain to figure out where that data begins and ends. With grouping, you can tell the data grid to strongly group together things that are the most important to you, but then still leverage filtering and sorting on top of that. It instantly makes your data exploration easier and more useful.

Localization

This is where you can really tell KendoReact went the full monty. It would be highly unfortunate to pick some kind of component library, then realize you need localization, only to discover it wasn’t made to be a first-class citizen. You avoid all that with KendoReact, which you can see in this Data Grid component. In the demo, you can swap English for Spanish with a simple dropdown and see all the dates localized. You can pull off any sort of translation and localization with the <LocalizationProvider> and <IntlProvider>, both comfortable React concepts.

Exporting to PDF or Excel

Here’s a live demo of this:

C’mon now! That’s very cool.

That’s not all…

Go check out the docs for the React Data Grid. There are a bunch more features we didn’t even get to here (row pinning! cell editing!). And here’s something to ease your mind: this component, and all the KendoReact components, are keyboard friendly and meet Section 508 accessibility standards. That is no small feat. When components are this complex and involve this much interactivity, getting the accessibility right is tough. So not only are you getting good-looking components that work everywhere, you’re getting richly interactive components that deliver UX beyond what you might even think of, and it’s all done fast and accessibly. That’s pretty unreal, really.

Get started with KendoReact Data Grid


Websites We Like: MD Nichrome

Css Tricks - Wed, 10/06/2021 - 12:24pm

Here’s a beautiful website: it’s a type specimen for Mass-Driver’s ever-so-lovely type family MD Nichrome. There’s a ton of nifty animations and graphics explaining all the features inside…

If you’re wondering how those animations work, they’re actually styled <video> elements.

There’s lots of great graphic design touches as well, such as how the letters below trail off and fade away…

That little bit of CSS is neat. It makes sure that each <h1> stays on a single line with white-space: nowrap, then sets hidden overflow on them so the heading trails off. The fading is courtesy of a linear gradient that incorporates transparency. The gradient is actually a mask-image in this case. That’s a good reminder that CSS gradients are images generated by the browser.

h1 {
  white-space: nowrap;
  overflow: hidden;
  -webkit-mask-image: linear-gradient(to right, black 75%, transparent);
}

In the image above you can also see how Mass-Driver is advertising the OpenType features of the font. That’s stuff like fractions or alternate letters that gives your text superpowers. By default, these sections show what the feature is, but when you hover over them they do the following:

.element {
  font-feature-settings: unset;
}

I don’t think I’ve ever used unset before but this is a great place to use it—show what the feature looks like up front and then when you hover show what the default is. Smart stuff.

But the part that caught my eye—besides the kick ass typography—is the background. It’s made up of two parts: a shimmery animation that makes the page look like paper, and the gradient that’s stacked on top of it. And after digging around in DevTools I realized that shimmering effect is a PNG image where the background-position property moves the picture around slightly to animate it like a GIF. It’s hard to see in pictures, so here’s a copycat demo I made with the opacity turned off to make it easier to see:

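Here’s a rough sketch of the kind of CSS behind that copycat demo (the class name, image, and timing values are my guesses, not the site’s actual code):

.paper {
  background-image: url("noise.png"); /* a small, tiled noise texture */
  /* steps(1) snaps between positions instead of sliding, like a GIF */
  animation: shimmer 0.4s steps(1) infinite;
}

@keyframes shimmer {
  0%  { background-position: 0 0; }
  25% { background-position: -12px 6px; }
  50% { background-position: 7px -9px; }
  75% { background-position: -4px 12px; }
}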

See that lovely fuzziness? It gives the background a kind of… texture… that I haven’t seen for a long time, perhaps since around 2008 when everyone was trying to make buttons look like real, analog buttons on the web. Geoff covered the same sort of technique a while back where you can get a deep dive into how it works.

The other part of the design of this website is the gradient in the background. How are those so smooth? Well, Rutherford Craze, the designer behind this ingenious bit of web design, made a thread explaining how he got this effect to work in the browser. He created a gradients tool that lets you create a similar effect:

Rutherford writes:

Conventional CSS gradients plot a straight line through colour space, interpolating directly from the start to the end colour. This tool applies the principles of bézier curves, the basis of digital fonts, to this operation.

By introducing ‘control points’ along the gradient, you can more finely control the interpolation and produce a smoother end result. The tool then samples this ‘bézier gradient’ to produce a linear gradient you can work with in CSS.

What Rutherford’s tool works around is what’s known as the “gray dead zone” of gradients: the muddy gray color that often forms in the middle of a linear gradient between two saturated colors.
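The practical output of a tool like that is just a plain linear-gradient with extra, pre-computed color stops. Something along these lines, where the specific colors and stop positions are made up for illustration:

/* A straight two-stop gradient can gray out in the middle: */
.gradient-naive {
  background: linear-gradient(to right, #0047ff, #ff7a00);
}

/* The "bézier" version is sampled into intermediate stops: */
.gradient-smooth {
  background: linear-gradient(
    to right,
    #0047ff 0%,
    #4a44d6 25%,
    #8c54a0 50%,
    #c66a5e 75%,
    #ff7a00 100%
  );
}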

Another small detail that I almost didn’t catch was the sticky navigation: when you first load the website you just see the logo with nothing else, but then as you scroll you’ll see the nav and it locks into place:

Notice how sticky positioning is also used later on to demonstrate the font’s glyphs.

CSS makes this sort of thing so easy. Declare sticky positioning on the element, then offset the stickiness if the element should start sticking at a certain spot.

.sticky-thing {
  position: sticky;
  top: 75px;
}

Since they want you to focus on the letters first and not all the rest of the UI, it makes a ton of sense to tuck the navigation off to one side and only show it when you need it. This makes the overall design feel incredibly focused and straightforward, perhaps barely worth commenting on at all, but when most websites are so full of distractions, I think it’s worth celebrating quiet websites like these.

The post Websites We Like: MD Nichrome appeared first on CSS-Tricks.

The Single Page App Morality Play

Css Tricks - Wed, 10/06/2021 - 12:20pm

Baldur Bjarnason brings some baby bear porridge to the discussion of Single Page App (SPA) vs. Multi Page App (MPA).

Single-Page-Apps can be fantastic. Most teams will mess them up because most teams operate in dysfunctional organisations. Multi-Page-Apps can also be fantastic, both in highly functional organisations that can apply them when and where they are appropriate and in dysfunctional ones, as they enforce a limit to project scope.

Both approaches can be good and bad. Baldur makes the point that management plays the biggest role in ensuring either outcome.

Another truth: there are an awful lot of projects out there that aren’t all-in on either approach, but are a mixture. I feel that. I also feel like there is a strong desire to declare a winner or have a simple checklist to decide which approach to go for, and that just ain’t gonna happen because the landscape is too shifty and there are just too many factors. Technology: it’s complicated.

A recent episode of Web Rush went into all this as well.


The post The Single Page App Morality Play appeared first on CSS-Tricks.

Considerations for Using Markdown Writing Apps on Static Sites

Css Tricks - Wed, 10/06/2021 - 4:22am

If you run or have recently switched to a static site generator, you might find yourself writing a lot of Markdown. And the more you write it, the more you want the tooling experience to disappear so that the content takes focus.

I’m going to give you some options (including my favorite), but more importantly, I’ll walk through the features these apps offer that are especially relevant when choosing. Here are some key considerations for Markdown editing apps to help the words flow.

Consideration #1: Separate writing and reading modes

UX principles tell us that modes are problematic. But perhaps there is an exception for text editing software. From vi(m) to Google Docs, separate modes for writing and reading seem to appeal to writers. Similarly, many Markdown editors have separate modes or views for writing, editing and reading.

I happen to like Markdown editors that provide a side-by-side or paned design where I can see both at once. Writing Markdown is not the same as writing code. What it looks like matters, and having a preview can give you a feel for that. It’s kind of like static site generators that auto-refresh so that you can see your changes as you make them.

Having both edit and preview mode active at once can help you feel more connected to the finished product.

In contrast, I’m not a fan of the one-mode-to-rule-them-all design where Markdown formatting automatically converts to styled text, hiding the formatted code (implemented in some form by Dropbox Paper, Typora, Ulysses, and Bear). I can’t stand the work of futzing with the app to change a heading level, for example. Do I click it, double-click, triple-click? What if I’m just using the keyboard?

It’s not so much that these features aren’t useful, it’s that they break my flow.

I want to see all the Markdown that I’ve written, even if the end user won’t. That’s one thing that I do want a Markdown editor to borrow from code editors.

Consideration #2: Good themes

Some Markdown editors allow full customization of editor themes, while others ship with nice ones out of the box. Regardless, I think a good editor should have just the right amount of styling to differentiate plain text from formatted text, but not so much that it distracts you from being able to read it and focus on the content. Even with the preview pane open, I typically spend most of my time looking at the editing view.

Different colors for each style

Since most of the text in the editor isn’t going to be rendered as it would in the browser, it’s nice to quickly see which text you’ve formatted using Markdown. This helps you determine, for example, whether a URL is actually written out in the text or is used inside a hyperlink. So, I like to have a different color for each Markdown style (headings, links, bold, italic, quotes, images, code, bullets, etc.).

Colors tell you which text has Markdown formatting applied.

Apply bold and italics styles too

I prefer to use asterisks for Markdown formatting everywhere I can (e.g., bold, italics, and unordered lists), so I find it helpful to have extra styling beyond color to distinguish bold, italic, and bold+italic. When skimming, it can be hard to tell **this is important** and *this is important* apart by color alone; if the editor also renders the bold and italic styles, they’re much easier to separate. It also helps me spot accidentally mismatched asterisks (e.g., **is this important?*).

Different font sizes for each heading level

This might be a bit controversial and may split the audience. Code editors don’t show different font sizes within a file. Colors and styles, sure, but not sizes. But, for me, it helps.

When writing, hierarchy is the key to organization. With different font sizes for each heading, you can see the outline of whatever you’re writing just by skimming through it.

Seeing the headings in different font sizes is a subtle way to help you visualize the structure. This is especially helpful for long content.

Shortcuts and smart keyboard behaviors

I expect all the standard shortcuts that work in a text editor to work. CTRL/CMD + B for bold, I for italic, etc., as well as some that are nice-to-have when writing articles, in particular CTRL/CMD + (number) for headings. CTRL/CMD + 1 for H1, etc.

Making something into a heading should be a simple shortcut.

But there are also some keyboard behaviors I like that are borrowed from code editors. For example, if I select some text and press [ or (, it won’t overwrite that text but will instead enclose it with the opening and closing characters. The same goes for text formatting characters like *, `, and _.

A good Markdown editor won’t overwrite your text when you select it and press a valid Markdown formatting character.

I also rely on keyboard shortcuts to create links and images. Even after more than five years of writing Markdown on a regular basis, I still sometimes forget whether the brackets or the parentheses come first. So, I really like having a handy shortcut to insert them correctly.
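For reference, it’s brackets first for the link text, then parentheses for the URL, and a leading exclamation mark turns the same pattern into an image (the URLs here are just examples):

[link text](https://example.com)
![alt text](https://example.com/image.png)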

Even better, in some editors, if you have a URL in your clipboard and you select text then use a keyboard shortcut to make it into a link, it will insert the URL in the hyperlink field. This has really sped up my workflow.

When I have a URL in the clipboard and use the create link shortcut, it assumes that’s what I want to link to. Handy!

Bonus feature: Copy to HTML

The editor that I use most often has a one-click “Copy HTML” feature (with keyboard shortcut) that takes all of the Markdown I’ve written and copies the HTML to the clipboard. This can be very convenient when using an online editor (e.g., WordPress) that has a code/source option.

Easy peasy!

Consideration #3: Stand-alone editor vs. CMS/IDE plugin

I know that a lot of people who work with static site generators love their IDEs and may even jump back and forth between code and Markdown in a single day. I often do. So I can see why using a familiar IDE would be more attractive than having a separate app for Markdown.

But when I’m sitting down to write a page in Markdown or an article, where I’m focusing on the text itself, I prefer a separate app.

I’m not fanatical about using standalone Markdown editors over IDE editors or plugins; I use one occasionally for complex find-and-replace tasks and other edits. As long as it offers the benefits listed above, I wouldn’t try to talk anyone out of it.

Here are a few reasons why a standalone app might work better for writing:

  • Cleaner interface. I’m not someone who needs “Zen mode” in my writing app, but I do like to have as few panels open as possible when I’m writing, which typically requires turning a lot of things off in an IDE.
  • Performance. Most Markdown tools just feel lighter to me. They are certainly less complex and do less stuff, so they should be faster. I don’t ever want to feel like my writing app is exerting any effort. It should launch fast and respond instantly, always.
  • Availability. I just haven’t found a Markdown editor in an IDE that I really like. Perhaps there is one out there; I just don’t have time to try them all. But I like most standalone Markdown editors that I’ve used, and I can’t say the same for what I’ve tried in IDE-land.
  • Mental shift. When I open my IDE, I’m thinking about writing code, but when I open my Markdown editor, I’m thinking about writing words. I like that it gets me into the right mindset.
That’s a few too many choices.

My favorite Markdown editors for writing

While these are my top picks, it doesn’t mean that if an app isn’t on this list that it’s bad. There are several good apps that I didn’t mention because they had too many features or were too expensive given the number of decent free or cheap options. And similar to IDE packages, there are a ton of Markdown apps out there and I haven’t tried them all (but I have tried a lot of them!).

A note about features that help you get “into the zone,” such as “typewriter” or “focus” modes, or soothing background music: they’ve never really worked for me, and I eventually turn them off, so they aren’t features that I go looking for. (Although if you are into those, you can try Typora, which is free during beta and runs on Mac, Windows, and Linux.)

My top choice

MacDown

Free; Mac

Meets all the criteria listed above. It’s light and snappy, and open source.

A good, similar alternative for Windows and Linux is Ghostwriter (also free).

Honorable mentions

Lightpaper

$15; Mac

Good if you want just a bit more functionality. It adds a third pane so that you can easily switch between your files and folders.

Obsidian

Free for personal use; Mac, Windows, Linux

For a more full-featured app, the editor interface is pretty good, and meets most of the criteria mentioned above. Zettlr offers similar features, but just feels more complicated, IMO.

Byword

$11; Mac, iOS

Not my favorite app for writing and editing text, but it has the nice added ability to publish to various platforms (e.g., Medium, WordPress, Tumblr, Blogger, and Evernote).

Bear

Free or $1.49/mo. for Pro version; Mac, iOS

A good choice if you use Markdown for more than just site content (personal notes, task management, etc.). Scores high in appearance and usability, too.

Summary

With Markdown syntax being supported in more and more places — including Slack, GitHub, WordPress, etc. — it is quickly becoming a lingua franca for richer communication in our increasingly text-based lives. It’s here to stay because it’s not only easy to learn and use, it’s intuitive. And luckily we’re currently spoiled for choice when it comes to quality Markdown writing apps.

The post Considerations for Using Markdown Writing Apps on Static Sites appeared first on CSS-Tricks.

The Options for Password Revealing Inputs

Css Tricks - Wed, 10/06/2021 - 4:22am

In HTML, there is a very clear input type for dealing with passwords:

<input type="password">

If you use that, you get the obfuscated bullet-points when you type into it, like:

••••••••

That’s the web trying to help with security. If it didn’t do that, the classic problem is that someone could be peering over your shoulder, spying on what you’re entering. Easier than looking at what keys your fingers press, I suppose.

But UX has evolved a bit, and it’s much more common to have an option like:

☑️ Reveal Password?

I think the idea is that we should have a choice. Most of us can have a quick look about and see if there are prying eyes, and if not, we might choose to reveal our password so we can make sure we type I_luv_T@cos696969 correctly and aren’t made to suffer the agony of typing it wrong 8 times.

So! What to do?

Option 1: Use type="password", then swap it out for type="text" with JavaScript

This is what everybody is doing right now because it’s the one option that actually works across all browsers.

const input = document.querySelector(".password-input");

// When an input is checked, or whatever...
if (input.getAttribute("type") === "password") {
  input.setAttribute("type", "text");
} else {
  input.setAttribute("type", "password");
}

The problem here — aside from it just being kinda weird that you have to change input types just for this — is password manager tools. I wish I had exact details on this, but it doesn’t surprise me that messing with input types might confuse any tool specifically designed to look for and prefill passwords, maybe even the tools built right into browsers themselves.

Option 2: Use -webkit-text-security in CSS

This isn’t a real option because you can’t just have an important feature work in some browsers and not in others. But clearly there was a desire to move this functionality to CSS at some point, as it does work in some browsers.

input[type="password"] { -webkit-text-security: square; } form.show-passwords input[type="password"] { -webkit-text-security: none; } Option 3: input-security in CSS

There is an Editor’s Draft spec for input security. It’s basically a toggle value.

form.show-passwords input[type="password"] {
  input-security: none;
}

I like it. Simple. But I don’t think any browser supports it yet. So, kinda realistically, we’re stuck with Option #1 for now.

Demos, all together

[Embedded CodePen demo]

The post The Options for Password Revealing Inputs appeared first on CSS-Tricks.

Scroll Shadows With JavaScript

Css Tricks - Tue, 10/05/2021 - 12:40pm

Scroll shadows are when you can see a little inset shadow on elements if (and only if) you can scroll in that direction. It’s just good UX. You can actually pull it off in CSS, which I think is amazing and one of the great CSS tricks. Except… it just doesn’t work on iOS Safari. It used to work, but it broke in iOS 13, along with some other useful CSS things, with no explanation why, and it has never been fixed.

So, now, if you really want scroll shadows (I think they are extra useful on mobile browsers anyway), it’s probably best to reach for JavaScript.

Here’s a pure CSS example so you can see it work in all browsers except iOS Safari.

I’m bringing this up now because I see Jonnie Hallman is blogging about it again. He mentioned it as an awesome little detail back in May. There are certain interfaces where scroll shadows make extra sense.

Taking a step back, I thought about the solution that currently worked, using scroll events. If the scroll area has scrolled, show the top and left shadows. If the scroll area isn’t all the way scrolled, show the bottom and right shadows. With this in mind, I tried the simplest, most straight-forward, and least clever approach by putting empty divs at the top, right, bottom, and left of the scroll areas. I called these “edges”, and I observed them using the Intersection Observer API. If any of the edges were not intersecting with the scroll area, I could assume that the edge in question had been scrolled, and I could show the shadow for that edge. Then, once the edge is intersecting, I could assume that the scroll area has reached the edge of the scroll, so I could hide that shadow.

Clever clever. No live demo, unfortunately, but read the post for a few extra details on the implementation.
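Here’s a rough sketch of that edge-observing idea; it assumes a scrollable .scroll-area element containing empty .edge-top and .edge-bottom sentinel divs, with shadow styles keyed off .shadow-top/.shadow-bottom classes (all of these names are hypothetical, not from Jonnie’s actual code):

// Toggle shadow classes on the scroll area whenever a sentinel
// "edge" div enters or leaves view inside the scrollable container.
const scrollArea = document.querySelector('.scroll-area');

const observer = new IntersectionObserver(
  (entries) => {
    entries.forEach((entry) => {
      // e.g. the "edge-top" sentinel maps to the "shadow-top" class
      const shadowClass = entry.target.className.replace('edge', 'shadow');
      // If the edge is out of view, we've scrolled away from it: show the shadow
      scrollArea.classList.toggle(shadowClass, !entry.isIntersecting);
    });
  },
  { root: scrollArea }
);

observer.observe(scrollArea.querySelector('.edge-top'));
observer.observe(scrollArea.querySelector('.edge-bottom'));

The same pattern extends to left and right sentinels for horizontal scrolling.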

Other JavaScript-powered examples

[Three embedded CodePen demos]

I do think if you’re going to do this you should go the IntersectionObserver route though. Would love to see someone port the best of these ideas all together (wink wink).

The post Scroll Shadows With JavaScript appeared first on CSS-Tricks.

Conditional Border Radius In CSS

Css Tricks - Tue, 10/05/2021 - 9:18am

Ahmad Shadeed documents a bonafide CSS trick from the Facebook CSS codebase. The idea is that when an element is the full width of the viewport, it doesn’t have any border-radius. But otherwise, it has 8px of border-radius. Here’s the code:

.card { border-radius: max(0px, min(8px, calc((100vw - 4px - 100%) * 9999))); }

One line! Super neat. The guts of it is the comparison between 100vw and 100%. Essentially, the border-radius comes out to 8px most of the time. But if the component becomes the same width as the viewport (within 4px, but I’d say that part is optional), then the value of the border-radius becomes 0px, because the equation yields a negative (invalid) number.
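To make the math concrete, here’s how the expression evaluates at a 375px-wide viewport; the numbers are mine, purely for illustration:

/* Full-width card: 100vw = 375px, 100% = 375px */
calc((375px - 4px - 375px) * 9999)   /* = -39996px */
min(8px, -39996px)                   /* = -39996px */
max(0px, -39996px)                   /* = 0px, square corners */

/* Narrower card: 100vw = 375px, 100% = 343px */
calc((375px - 4px - 343px) * 9999)   /* = 279972px */
min(8px, 279972px)                   /* = 8px */
max(0px, 8px)                        /* = 8px, rounded corners */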

The 9999 multiplication means that you’ll never get low-positive numbers. It’s a toggle. You’ll either get 8px or 0px and nothing in between. Try removing that part, resizing the screen, and seeing it sorta morph as the viewport becomes close to the component size:

CodePen Embed Fallback

Why do it like this rather than with a @media query? Frank, the developer behind the Facebook choice, says:

It’s not a media query, which compares the viewport to a fixed width. It’s effectively a kind of container query that compares the viewport to its own variable width.

Frank Yan

And:

This is a more general purpose solution as it doesn’t need to know the size of the card. A media query would be dependent on the width of the card.

Naman Goel

The technique is apparently called the “Fab Four” technique, which is useful in emails.


The post Conditional Border Radius In CSS appeared first on CSS-Tricks.

Branching Strategies in Git

Css Tricks - Tue, 10/05/2021 - 4:28am

This article is part of our “Advanced Git” series. Be sure to follow us on Twitter or sign up for our newsletter to hear about the next articles!

Almost all version control systems (VCS) have some kind of support for branching. In a nutshell, branching means that you leave the main development line by creating a new, separate container for your work and continue to work there. This way you can experiment and try out new things without messing up the production code base. Git users know that Git’s branching model is special and incredibly powerful; it’s one of the coolest features of this VCS. It’s fast and lightweight, and switching back and forth between the branches is just as fast as creating or deleting them. You could say that Git encourages workflows that use a lot of branching and merging.

Git totally leaves it up to you how many branches you create and how often you merge them. Now, if you’re coding on your own, you can choose when to create a new branch and how many branches you want to keep. This changes when you’re working in a team, though. Git provides the tool, but you and your team are responsible for using it in the optimal way!

In this article, I’m going to talk about branching strategies and different types of Git branches. I’m also going to introduce you to two common branching workflows: Git Flow and GitHub Flow.

Advanced Git series:
  • Part 1: Creating the Perfect Commit in Git
  • Part 2: Branching Strategies in Git
    You are here!
  • Part 3: Better Collaboration With Pull Requests
    Coming soon!
  • Part 4: Merge Conflicts
  • Part 5: Rebase vs. Merge
  • Part 6: Interactive Rebase
  • Part 7: Cherry-Picking Commits in Git
  • Part 8: Using the Reflog to Restore Lost Commits
Teamwork: Write down a convention

Before we explore different ways of structuring releases and integrating changes, let’s talk about conventions. If you work in a team, you need to agree on a common workflow and a branching strategy for your projects. It’s a good idea to put this down in writing to make it accessible to all team members.

Admittedly, not everyone likes writing documentation or guidelines, but putting best practices on record not only avoids mistakes and collisions, it also helps when onboarding new team members. A document explaining your branching strategies will help them understand how you work and how your team handles software releases.

Here are a couple of examples from our own documentation:

  • master represents the current public release branch
  • next represents the next public release branch (this way we can commit hotfixes on master without pulling in unwanted changes)
  • feature branches are grouped under feature/
  • WIP branches are grouped under wip/ (these can be used to create “backups” of your personal WIP)

A different team might have a different opinion on these things (for example on “wip” or “feature” groups), which will certainly be reflected in their own documentation.

Integrating changes and structuring releases

When you think about how to work with branches in your Git repositories, you should probably start with thinking about how to integrate changes and how to structure releases. All those topics are tightly connected. To help you better understand your options, let’s look at two different strategies. The examples are meant to illustrate the extreme ends of the spectrum, which is why they should give you some ideas of how you can design your own branching workflow:

  • Mainline Development
  • State, Release, and Feature Branches

The first option could be described as “always be integrating” which basically comes down to: always integrate your own work with the work of the team. In the second strategy you gather your work and release a collection of it, i.e. multiple different types of branches enter the stage. Both approaches have their pros and cons, and both strategies can work for some teams, but not for others. Most development teams work somewhere in between those extremes.

Let’s start with the mainline development and explain how this strategy works.

Mainline Development

I said it earlier, but the motto of this approach is “always be integrating.” You have one single branch, and everyone contributes by committing to the mainline:

Remember that we’re simplifying for this example. I doubt that any team in the real world works with such a simple branching structure. However, it does help to understand the advantages and disadvantages of this model.

Firstly, you only have one branch which makes it easy to keep track of the changes in your project. Secondly, commits must be relatively small: you can’t risk big, bloated commits in an environment where things are constantly integrated into production code. As a result, your team’s testing and QA standards must be top notch! If you don’t have a high-quality testing environment, the mainline development approach won’t work for you. 

State, Release and Feature branches

Let’s look at the opposite now and how to work with multiple different types of branches. They all have a different job: new features and experimental code are kept in their own branches, releases can be planned and managed in their own branches, and even various states in your development flow can be represented by branches:

Remember that this all depends on your team’s needs and your project’s requirements. While this approach may look complicated at first, it’s all a matter of practice and getting used to it.

Now, let’s explore two main types of branches in more detail: long-running branches and short-lived branches.

Long-running branches

Every Git repository contains at least one long-running branch which is typically called master or main. Of course, your team may have decided to have other long-running branches in a project, for example something like develop, production or staging. All of those branches have one thing in common: they exist during the entire lifetime of a project. 

A mainline branch like master or main is one example of a long-running branch. Additionally, there are so-called integration branches, like develop or staging. These branches usually represent states in a project’s release or deployment process. If your code moves through different states in its development life cycle (e.g., from developing to staging to production), it makes sense to mirror this structure in your branches, too.

One last thing about long-running branches: most teams have a rule like “don’t commit directly to a long-running branch.” Instead, commits are usually integrated through a merge or rebase. There are two main reasons for such a convention:

  • Quality: No untested or unreviewed code should be added to a production environment.
  • Release bundling and scheduling: You might want to release new code in batches and even schedule the releases in advance.

Next up: short-lived branches, which are usually created for certain purposes and then deleted after the code has been integrated.

Short-lived branches

In contrast to long-running branches, short-lived branches are created for temporary purposes. Once they’ve fulfilled their duty and the code has been integrated into the mainline (or another long-lived branch), they are deleted. There are many different reasons for creating a short-lived branch, e.g. starting to work on a new and experimental feature, fixing a bug, refactoring your code, etc.

Typically, a short-lived branch is based on a long-running branch. Let’s say you start working on a new feature of your software. You might base the new feature on your long-running main branch. After several commits and some tests you decide the work is finished. The new feature can be integrated into the main branch, and after it has been merged or rebased, the feature branch can be deleted. 

Two popular branching strategies

In the last section of this article, let’s look at two popular branching strategies: Git Flow and GitHub Flow. While you and your team may decide on something completely different, you can take them as inspiration for your own branching strategy.

Git Flow

One well-known branching strategy is called Git Flow. The main branch always reflects the current production state. There is a second long-running branch, typically called develop. All feature branches start from here and will be merged into develop. Also, it’s the starting point for new releases: developers open a new release branch, work on that, test it, and commit their bug fixes on such a release branch. Once everything works and you’re confident that it’s ready for production, you merge it back into main. As the last step, you add a tag for the release commit on main and delete the release branch.
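Expressed in plain Git commands (without the optional git-flow helper tool), one release cycle might look roughly like this; the version number is just an example:

git checkout develop
git checkout -b release/1.2.0    # start a release branch off develop
# ...test, and commit bug fixes on release/1.2.0...
git checkout main
git merge release/1.2.0          # the release reaches production
git tag 1.2.0                    # tag the release commit on main
git checkout develop
git merge release/1.2.0          # carry the release fixes back into develop
git branch -d release/1.2.0      # delete the release branch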

Git Flow works pretty well for packaged software like (desktop) applications or libraries, but it seems like a bit of overkill for website projects. Here, the difference between the main branch and the release branch is often not big enough to benefit from the distinction.

If you’re using a Git desktop GUI like Tower, you’ll find the possible actions in the interface and won’t have to memorize any new commands:

GitHub Flow

If you and your team follow the continuous delivery approach with short production cycles and frequent releases, I would suggest looking at GitHub Flow.

It’s extremely lean and simple: there is one long-running branch, the default main branch. Anything you’re actively working on has its own separate branch. It doesn’t matter if that’s a feature, a bug fix, or a refactoring.
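A typical cycle in this model is as simple as the following; the branch name is hypothetical, and the final merge would usually happen through a pull request rather than locally:

git checkout main
git pull
git checkout -b bugfix/login-redirect   # all work happens on a short-lived branch
# ...commit, push, open a pull request, get a review...
git checkout main
git merge bugfix/login-redirect         # integrate once approved
git branch -d bugfix/login-redirect     # clean up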

What’s the “best” Git branching strategy?

If you ask 10 different teams how they’re using Git branches, you’ll probably get 10 different answers. There is no such thing as the “best” branching strategy and no perfect workflow that everyone should adopt. In order to find the best model for you and your team, you should sit down together, analyze your project, talk about your release strategy, and then decide on a branching workflow that supports you in the best possible way.

If you want to dive deeper into advanced Git tools, feel free to check out my (free!) “Advanced Git Kit”: it’s a collection of short videos about topics like branching strategies, Interactive Rebase, Reflog, Submodules and much more.

Advanced Git series:
  • Part 1: Creating the Perfect Commit in Git
  • Part 2: Branching Strategies in Git
    You are here!
  • Part 3: Better Collaboration With Pull Requests
    Coming soon!
  • Part 4: Merge Conflicts
  • Part 5: Rebase vs. Merge
  • Part 6: Interactive Rebase
  • Part 7: Cherry-Picking Commits in Git
  • Part 8: Using the Reflog to Restore Lost Commits

The post Branching Strategies in Git appeared first on CSS-Tricks.

ct.css — Performance Hints via Injected Stylesheet Alone

Css Tricks - Tue, 10/05/2021 - 4:28am

This is some bonafide CSS trickery from Harry that gives you some generic performance advice based on what it sees in your <head> element.

First, it’s possible to make a <style> block visible like any other element by changing the display away from the default of none. It’s a nice little trick. You can even do that for things in the <head>, for example…

head, head style, head script {
  display: block;
}

From there, Harry gets very clever with selectors, determining problematic situations from the usage and placement of certain tags. For example, say there is a <script> that comes after some styles…

<head>
  <link rel="stylesheet" href="...">
  <script src="..."></script>
  <title>Page Title</title>
  <!-- ... -->

Well, that’s bad, because the script is blocked by CSS likely unnecessarily. Perhaps some sophisticated performance tooling software could tell you that. But you know what else can? A CSS selector!

head [rel="stylesheet"]:not([media="print"]):not(.ct) ~ script,
head style:not(:empty) ~ script {

}

That’s kinda like saying head link ~ script, but a little fancier in that it only selects actual stylesheets or style blocks that are truly blocking (and not itself). Harry then applies styling and pseudo-content to the blocks so you can use the stylesheet as a visual performance debugging tool.
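Conceptually, flagging the problem looks something like this (a simplified sketch, not ct.css’s actual rules or wording):

/* Make the offending script visible and attach a warning message */
head [rel="stylesheet"]:not([media="print"]):not(.ct) ~ script,
head style:not(:empty) ~ script {
  display: block;
  outline: 2px solid red;
}

head [rel="stylesheet"]:not([media="print"]):not(.ct) ~ script::before,
head style:not(:empty) ~ script::before {
  content: "This script is blocked by earlier CSS. Consider reordering.";
}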

That’s just darn clever, that. The stylesheet has loads of little things you can test for, like attributes you don’t need, blocking resources, and elements that are out of order.


The post ct.css — Performance Hints via Injected Stylesheet Alone appeared first on CSS-Tricks.

Quickly Testing CSS Fallbacks

Css Tricks - Mon, 10/04/2021 - 9:15am

Dumb trick alert!

Not all browsers support all features. Say you want to write a fallback for browsers that don’t support CSS Grid. Not very common these days, but it’s just to illustrate a point.

You could write the supporting CSS in an @supports block:

@supports (display: grid) {
  .blocks {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(min(100px, 100%), 1fr));
    gap: 1rem;
  }
}

Then to test the fallback, you quickly change @supports (display: grid) to something nonsensical, like adding an “x” so it’s @supports (display: gridx). That’s a quick toggle:

The example above doesn’t have very good fallbacks, does it?! Maybe I’d attempt to write something similar in flexbox, as, hey, maybe there is some small group of browsers still out there that support flexbox and not grid. More likely, I’d just write a fallback that makes things look pretty OK as a column.
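For instance, a column fallback along those lines might look like this; a sketch, with the spacing values picked arbitrarily:

/* Fallback first: a simple spaced-out column for non-grid browsers */
.blocks > * {
  margin-bottom: 1rem;
}

@supports (display: grid) {
  .blocks {
    display: grid;
    grid-template-columns: repeat(auto-fill, minmax(min(100px, 100%), 1fr));
    gap: 1rem;
  }
  .blocks > * {
    margin-bottom: 0; /* gap handles the spacing in grid */
  }
}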

If I’m fairly confident the browser supports @supports queries (oh, the irony), I could write it like:

@supports (display: grid) {
  /* grid stuff */
}

@supports not (display: grid) {
  /* at least space them out a bit */
  .block {
    margin: 10px;
  }
}

That’s an assumption that will get safer and safer to make, and honestly, we’re probably already there (if you’ve dropped IE support).

This makes me want that @when syntax even more though, because then we could write:

@when supports(display: grid) {

} @else {

}

…which feels so fresh and so clean.

The post Quickly Testing CSS Fallbacks appeared first on CSS-Tricks.

Animation Techniques for Adding and Removing Items From a Stack

Css Tricks - Mon, 10/04/2021 - 4:05am

Animating elements with CSS can either be quite easy or quite difficult depending on what you are trying to do. Changing the background color of a button when you hover over it? Easy. Animating the position and size of an element in a performant way that also affects the position of other elements? Tricky! That’s exactly what we’ll get into here in this article.

A common example is removing an item from a stack of items. The items stacked on top need to fall downwards to account for the space of an item removed from the bottom of the stack. That is how things behave in real life, and users may expect this kind of life-like motion on a website. When it doesn’t happen, users may be confused or momentarily disoriented: they expect something to behave one way based on life experience, get something completely different, and need extra time to process the unrealistic movement.

Here is a demonstration of a UI for adding items (click the button) or removing items (click the item).

[Embedded CodePen demo]

You could paper over the poor UI slightly by adding a “fade out” animation or something, but the result won’t be that great, as the list will abruptly collapse and cause those same cognitive issues.

Applying CSS-only animations to a dynamic DOM event (adding brand new elements and fully removing elements) is extremely tricky work. We’re going to face this problem head-on and go over three very different types of animations that handle this, all accomplishing the same goal of helping users understand changes to a list of items. By the time we’re done, you’ll be armed to use these animations, or build your own based on the concepts.

We will also touch upon accessibility and how elaborate HTML layouts can still retain compatibility with assistive technology with the help of ARIA attributes.

The Slide-Down Opacity Animation

A very modern approach (and my personal favorite) is when newly-added elements fade-and-float into position vertically depending on where they are going to end up. This also means the list needs to “open up” a spot (also animated) to make room for the new element. If an element is leaving the list, the spot it took up needs to contract.

Because we have so many different things going on at the same time, we need to change our DOM structure to wrap each .list-item in a container class appropriately titled .list-container. This is absolutely essential in order to get our animation to work.

<ul class="list">
  <li class="list-container">
    <div class="list-item">List Item</div>
  </li>
  <li class="list-container">
    <div class="list-item">List Item</div>
  </li>
  <li class="list-container">
    <div class="list-item">List Item</div>
  </li>
  <li class="list-container">
    <div class="list-item">List Item</div>
  </li>
</ul>

<button class="add-btn">Add New Item</button>

Now, the styling for this is unorthodox because, in order to get our animation effect to work later on, we need to style our list in a very specific way that gets the job done at the expense of sacrificing some customary CSS practices.

.list {
  list-style: none;
}
.list-container {
  cursor: pointer;
  font-size: 3.5rem;
  height: 0;
  list-style: none;
  position: relative;
  text-align: center;
  width: 300px;
}
.list-container:not(:first-child) {
  margin-top: 10px;
}
.list-container .list-item {
  background-color: #D3D3D3;
  left: 0;
  padding: 2rem 0;
  position: absolute;
  top: 0;
  transition: all 0.6s ease-out;
  width: 100%;
}
.add-btn {
  background-color: transparent;
  border: 1px solid black;
  cursor: pointer;
  font-size: 2.5rem;
  margin-top: 10px;
  padding: 2rem 0;
  text-align: center;
  width: 300px;
}

How to handle spacing

First, we’re using margin-top to create vertical space between the elements in the stack, and deliberately no margin on the bottom. If each container had a bottom margin instead, a deleted item’s margin would keep pushing the items below it down even after we collapse the container’s height to zero, leaving extra space where the deleted item used to be. With margin-top only, each item brings its own gap with it: when a container above collapses, the items below simply move up, and the spacing between the remaining items stays consistent. So that’s why we use margin-top: to prevent leftover space from a removed item.

But we only do this if the item container in question isn’t the first one in the list. That’s why we used :not(:first-child): it targets all of the containers except the very first one. We do this because we don’t want the very first list item to be pushed down from the top edge of the list. We only want this to happen to every subsequent item, because each of those sits directly below another list item, whereas the first one doesn’t.

Now, this is unlikely to make complete sense because we are not setting any elements to zero height at the moment. But we will later on, and in order to get the vertical spacing between the list elements correct, we need to set the margin like we do.

A note about positioning

Something else that is worth pointing out is the fact that the .list-item elements nested inside the parent .list-container elements are set to position: absolute, meaning that they are taken out of the normal document flow and positioned in relation to their relatively-positioned .list-container elements. We do this so that we can get the .list-item element to float upwards when removed and, at the same time, get the other .list-item elements to move and fill the gap that removing it has left. When this happens, the .list-container element, which isn’t absolutely positioned and therefore still occupies space in the document flow, collapses its height, allowing the other .list-container elements to fill its place, while the absolutely-positioned .list-item floats upwards without affecting the structure of the list.

Handling height

Unfortunately, we haven’t yet done enough to get a proper list where the individual list items are stacked one on top of the other. Instead, all we can see at the moment is a single .list-item that represents all of the list items piled on top of each other in exactly the same place. That’s because, although the .list-item elements have some height via their padding property, their parent elements do not; they have a height of zero instead. This means nothing in the normal document flow actually separates these elements from each other. To do that, we need the .list-container elements to have some height because, unlike their absolutely-positioned children, they occupy space in the flow.

To get the height of our list containers to perfectly match the height of their child elements, we need to use JavaScript. So, we store all of our list items within a variable. Then, we create a function that is called as soon as the script is loaded.

This becomes the function that handles the height of the list container elements:

const listItems = document.querySelectorAll('.list-item');

function calculateHeightOfListContainer(){

};

calculateHeightOfListContainer();

The first thing that we do is extract the very first .list-item element from the list. We can do this because they are all the same size, so it doesn’t matter which one we use. Once we have access to it, we store its height, in pixels, via the element’s clientHeight property. After this, we create a new <style> element and prepend it to the document’s body so that we can directly create a CSS class that incorporates the height value we just extracted. And with this <style> element safely in the DOM, we write a new .list-container rule whose styles take priority over the equally-specific styles declared in the external stylesheet because they appear later in the source order. That gives the .list-container classes the same height as their .list-item children.

const listItems = document.querySelectorAll('.list-item');

function calculateHeightOfListContainer() {
  const firstListItem = listItems[0];
  let heightOfListItem = firstListItem.clientHeight;
  const styleTag = document.createElement('style');
  document.body.prepend(styleTag);
  styleTag.innerHTML = `.list-container{
    height: ${heightOfListItem}px;
  }`;
};

calculateHeightOfListContainer();

Showing and Hiding

Right now, our list looks a little drab: the same as what we saw in the first example, just without any of the addition or removal logic, and styled in a completely different way from the <ul> and <li> list used in that opening example.

We’re going to do something now that may seem inexplicable at the moment: modify our .list-container and .list-item classes. We’re also creating extra styling for both of these classes that is only applied when a new class, .show, is used in conjunction with each of them.

The purpose of doing this is to create two states for both the .list-container and the .list-item elements. One state, without the .show class, represents an element as it animates out of the list. The other state, with the .show class added, represents a .list-item that is firmly instantiated and visible in the list.

In just a bit, we will switch between these two states by adding/removing the .show class from a specific .list-item and its container. We’ll combine that with a CSS transition between these two states.

Notice that combining the .list-item class with the .show class introduces some extra styles. Specifically, we’re introducing the animation where the list item fades downwards and into visibility when it is added to the list; the opposite happens when it is removed. Since the most performant way to animate an element’s position is with the transform property, that is what we will use here, applying opacity along the way to handle the visibility part. Because we already applied a transition property to both the .list-item and the .list-container elements, a transition automatically takes place whenever we add or remove the .show class, since doing so changes the properties being transitioned.

.list-container {
  cursor: pointer;
  font-size: 3.5rem;
  height: 0;
  list-style: none;
  position: relative;
  text-align: center;
  width: 300px;
}
.list-container.show:not(:first-child) {
  margin-top: 10px;
}
.list-container .list-item {
  background-color: #D3D3D3;
  left: 0;
  opacity: 0;
  padding: 2rem 0;
  position: absolute;
  top: 0;
  transform: translateY(-300px);
  transition: all 0.6s ease-out;
  width: 100%;
}
.list-container .list-item.show {
  opacity: 1;
  transform: translateY(0);
}

In response to the .show class, we go back to our JavaScript file and change our only function so that .list-container elements are only given a height if the element in question also has the .show class on it. Plus, we now apply the transition property to our standard .list-container elements inside a setTimeout function. If we didn’t, our containers would animate on the initial page load, when the script is loaded and the heights are applied for the first time, which isn’t something we want to happen.

const listItems = document.querySelectorAll('.list-item');

function calculateHeightOfListContainer(){
  const firstListItem = listItems[0];
  let heightOfListItem = firstListItem.clientHeight;
  const styleTag = document.createElement('style');
  document.body.prepend(styleTag);
  styleTag.innerHTML = `.list-container.show {
    height: ${heightOfListItem}px;
  }`;
  setTimeout(function() {
    styleTag.innerHTML += `.list-container {
      transition: all 0.6s ease-out;
    }`;
  }, 0);
};

calculateHeightOfListContainer();

Now, if we go back and view the markup in DevTools, then we should be able to see that the list has disappeared and all that is left is the button. The list hasn’t disappeared because these elements have been removed from the DOM; it has disappeared because of the .show class which is now a required class that must be added to both the .list-item and the .list-container elements in order for us to be able to view them.

The way to get the list back is very simple. We add the .show class to all of our .list-container elements as well as the .list-item elements contained inside. And once this is done we should be able to see our pre-created list items back in their usual place.

<ul class="list">
  <li class="list-container show">
    <div class="list-item show">List Item</div>
  </li>
  <li class="list-container show">
    <div class="list-item show">List Item</div>
  </li>
  <li class="list-container show">
    <div class="list-item show">List Item</div>
  </li>
  <li class="list-container show">
    <div class="list-item show">List Item</div>
  </li>
</ul>

<button class="add-btn">Add New Item</button>

We won’t be able to interact with anything yet, though, because to do that we need to add more to our JavaScript file.

The first thing that we will do after our initial function is declare references to both the button that we click to add a new list item and the .list element itself, which is the element that wraps around every single .list-item and its container. Then we select every .list-container element nested inside the parent .list element and loop through them all with the forEach method. In that callback we assign a method, removeListItem, to the onclick event handler of each .list-container. By the end of the loop, every single .list-container instantiated in the DOM on a fresh page load calls this same method whenever it is clicked.

Once this is done, we assign a method to the onclick event handler for addBtn so that we can activate code when we click on it. But obviously, we won’t create that code just yet. For now, we are merely logging something to the console for testing.

const addBtn = document.querySelector('.add-btn');
const list = document.querySelector('.list');

function removeListItem(e){
  console.log('Deleted!');
}

// DOCUMENT LOAD
document.querySelectorAll('.list .list-container').forEach(function(container) {
  container.onclick = removeListItem;
});

addBtn.onclick = function(e){
  console.log('Add Btn');
}

Starting work on the onclick event handler for addBtn, the first thing that we want to do is create two new elements: container and listItem. Both elements represent the .list-container element and its respective .list-item element, which is why we assign those exact classes to them as soon as we create them.

Once these two elements are prepared, we use the append method on the container to insert the listItem inside of it as a child, matching the format of the elements already in the list. With the listItem successfully appended to the container, we can move the container element (along with its child listItem) into the DOM with the insertBefore method. We do this because we want new items to appear at the bottom of the list but before the addBtn, which needs to stay at the very bottom. So, by using the parentNode attribute of addBtn to target its parent, list, we are saying that we want to insert the container as a child of list, placed before the child we pass as the second argument of insertBefore: the addBtn.

Finally, with the .list-item and its container successfully added to the DOM, we can set the container’s onclick event handler to match the same method as every other .list-item already on the DOM by default.

addBtn.onclick = function(e){
  const container = document.createElement('li');
  container.classList.add('list-container');
  const listItem = document.createElement('div');
  listItem.classList.add('list-item');
  listItem.innerHTML = 'List Item';
  container.append(listItem);
  addBtn.parentNode.insertBefore(container, addBtn);
  container.onclick = removeListItem;
}

If we try this out, then we won’t be able to see any changes to our list no matter how many times we click the addBtn. This isn’t an error with the click event handler. Things are working exactly how they should be. The .list-item elements (and their containers) are added to the list in the correct place, it is just that they are getting added without the .show class. As a result, they don’t have any height to them, which is why we can’t see them and is why it looks like nothing is happening to the list.

To get each newly added .list-item to animate into the list whenever we click on the addBtn, we need to apply the .show class to both the .list-item and its container, just as we had to do to view the list items already hard-coded into the DOM.

The problem is that we cannot just add the .show class to these elements instantly. If we did, the new .list-item would statically pop into existence at the bottom of the list without any animation. For a transition to happen, the browser needs to register an element’s initial styles before the additional styles that override them are applied. If we apply the .show class in the same instant the element is created, the starting and ending styles are registered together, so there is nothing to transition between.

The solution is to apply the .show classes in a setTimeout callback, delaying the activation of the callback by 15 milliseconds, or 1.5/100th of a second. This imperceptible delay is long enough for the browser to register the initial state before transitioning to the new state created by adding the .show class, but short enough that we will never notice there was a delay in the first place.

addBtn.onclick = function(e){
  const container = document.createElement('li');
  container.classList.add('list-container');
  const listItem = document.createElement('div');
  listItem.classList.add('list-item');
  listItem.innerHTML = 'List Item';
  container.append(listItem);
  addBtn.parentNode.insertBefore(container, addBtn);
  container.onclick = removeListItem;
  setTimeout(function(){
    container.classList.add('show');
    listItem.classList.add('show');
  }, 15);
}

Success! It is now time to handle how we remove list items when they are clicked.

Removing list items shouldn’t be too hard now because we have already gone through the difficult task of adding them. First, we need to make sure that the element we are dealing with is the .list-container element instead of the .list-item element. Due to event propagation, it is likely that the target that triggered this click event was the .list-item element.

Since we want to deal with the associated .list-container element instead of the actual .list-item element that triggered the event, we’re using a while-loop to move one ancestor upwards at a time until the element held in container is the .list-container element. We know we’ve found it when container has the .list-container class, which we can check using the contains method on the element’s classList property.

Once we have access to the container, we promptly remove the .show class from both the container and its .list-item.

function removeListItem(e) {
  let container = e.target;
  while (!container.classList.contains('list-container')) {
    container = container.parentElement;
  }
  container.classList.remove('show');
  const listItem = container.querySelector('.list-item');
  listItem.classList.remove('show');
}

And here is the finished result:

[Embedded CodePen demo]

Accessibility & Performance

Now you may be tempted to just leave the project here because both list additions and removals should now be working. But it is important to keep in mind that this functionality is only surface level and there are definitely some touch ups that need to be made in order to make this a complete package.

First of all, just because the removed elements have faded upwards and out of existence, and the list has contracted to fill the gap left behind, does not mean the removed element has been removed from the DOM. In fact, it hasn’t. That’s a performance liability because it means we have elements in the DOM that serve no purpose other than to accumulate in the background and slow down our application.

To solve this, we use the ontransitionend event handler on the container element to remove it from the DOM, but only once the transition caused by removing the .show class has finished, so that the removal can’t interrupt our transition.

function removeListItem(e) {
  let container = e.target;
  while (!container.classList.contains('list-container')) {
    container = container.parentElement;
  }
  container.classList.remove('show');
  const listItem = container.querySelector('.list-item');
  listItem.classList.remove('show');
  container.ontransitionend = function(){
    container.remove();
  }
}

We shouldn’t be able to see any difference at this point because all we did was improve the performance; there are no styling updates.

The other difference is also unnoticeable, but super important: compatibility. Because we have used the correct <ul> and <li> tags, devices should have no problem with correctly interpreting what we have created as an unordered list.

Other considerations for this technique

A problem that we do have, however, is that assistive devices may have trouble with the dynamic nature of our list, like how the list can change its size and the number of items that it holds. A new list item may be completely ignored, while removed list items may be read as if they still exist.

So, in order to get assistive devices to re-interpret our list whenever its size changes, we need to use ARIA attributes. They help get our nonstandard HTML list recognized as such. That said, they are not a guaranteed solution here because they are never as good for compatibility as a native tag. Take the <ul> tag as an example: no need to worry about that one because we were able to use the native unordered list element.

We can add the aria-live attribute to the .list element. Everything nested inside a section of the DOM marked with aria-live becomes responsive: in other words, changes made to an element with aria-live are recognized, allowing assistive technology to issue an updated response. In our case, we want things highly reactive, and we do that by setting the aria-live attribute to assertive. That way, whenever a change is detected, the screen reader interrupts whatever task it is currently doing to immediately announce the change.

<ul class="list" role="list" aria-live="assertive">

The Collapse Animation

This is a more subtle animation where, instead of list items floating either up or down while changing opacity, elements just collapse or expand outwards as they gradually fade in or out; meanwhile, the rest of the list repositions itself in response to the transition taking place.

The cool thing about this version (and perhaps some redemption for the verbose DOM structure we created) is that we can change the animation very easily without interfering with the main effect.

So, to achieve this effect, we start off by hiding overflow on our .list-container. We do this so that when the .list-container collapses in on itself, it does so without the child .list-item flowing beyond the container’s boundaries as it shrinks. Apart from that, the only other thing we need to do is remove the transform property from the .list-item rules, since we don’t want the .list-item to float upwards anymore.

.list-container {
  cursor: pointer;
  font-size: 3.5rem;
  height: 0;
  overflow: hidden;
  list-style: none;
  position: relative;
  text-align: center;
  width: 300px;
}

.list-container.show:not(:first-child) {
  margin-top: 10px;
}

.list-container .list-item {
  background-color: #D3D3D3;
  left: 0;
  opacity: 0;
  padding: 2rem 0;
  position: absolute;
  top: 0;
  transition: all 0.6s ease-out;
  width: 100%;
}

.list-container .list-item.show {
  opacity: 1;
}

CodePen Embed Fallback

The Side-Slide Animation

This last animation technique is strikingly different from the others in that the container animation and the .list-item animation are actually out of sync. The .list-item slides to the right when it is removed from the list, and slides in from the right when it is added. There needs to be enough vertical room in the list to make way for a new .list-item before it even begins animating into the list, and vice versa for the removal.

As for the styling, it's very much like the Slide Down Opacity animation, except that the transition for the .list-item is now on the x-axis instead of the y-axis.

.list-container {
  cursor: pointer;
  font-size: 3.5rem;
  height: 0;
  list-style: none;
  position: relative;
  text-align: center;
  width: 300px;
}

.list-container.show:not(:first-child) {
  margin-top: 10px;
}

.list-container .list-item {
  background-color: #D3D3D3;
  left: 0;
  opacity: 0;
  padding: 2rem 0;
  position: absolute;
  top: 0;
  transform: translateX(300px);
  transition: all 0.6s ease-out;
  width: 100%;
}

.list-container .list-item.show {
  opacity: 1;
  transform: translateX(0);
}

As for the onclick event handler of the addBtn in our JavaScript, we use nested setTimeout calls to delay the beginning of the listItem animation until 350 milliseconds after its container element has started transitioning.

setTimeout(function() {
  container.classList.add('show');
  setTimeout(function() {
    listItem.classList.add('show');
  }, 350);
}, 10);
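
For context, here is a minimal sketch of how the full addBtn handler might be wired up. The element types, the .add-btn selector, and the item text are assumptions for illustration, based on the markup used throughout this article:

const list = document.querySelector('.list');
const addBtn = document.querySelector('.add-btn'); // hypothetical button selector

addBtn.onclick = function() {
  // Recreate the nested container/item structure the CSS expects.
  const container = document.createElement('li');
  container.classList.add('list-container');
  container.onclick = removeListItem;

  const listItem = document.createElement('div');
  listItem.classList.add('list-item');
  listItem.textContent = 'List Item'; // placeholder content

  container.appendChild(listItem);
  list.appendChild(container);

  // Wait a tick so the browser registers the initial (hidden) styles,
  // then stagger the item's transition 350ms behind the container's.
  setTimeout(function() {
    container.classList.add('show');
    setTimeout(function() {
      listItem.classList.add('show');
    }, 350);
  }, 10);
};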

In the removeListItem function, we remove the list item’s .show class first so it can begin transitioning immediately. The parent container element then loses its .show class, but only 350 milliseconds after the initial listItem transition has already started. Then, 600 milliseconds after the container element starts to transition (or 950 milliseconds after the listItem transition), we remove the container element from the DOM because, by this point, both the listItem and the container transitions should have come to an end.

function removeListItem(e) {
  // Walk up from the clicked element to its .list-container.
  let container = e.target;
  while (!container.classList.contains('list-container')) {
    container = container.parentElement;
  }
  const listItem = container.querySelector('.list-item');
  listItem.classList.remove('show');
  // Wait 350ms before collapsing the container, then remove it
  // from the DOM once its transition has finished.
  setTimeout(function() {
    container.classList.remove('show');
    container.ontransitionend = function() {
      container.remove();
    }
  }, 350);
}

Here is the end result:

CodePen Embed Fallback

That’s a wrap!

There you have it: three different methods for animating items that are added to and removed from a stack. I hope these examples leave you confident working in situations where the DOM structure settles into a new position in reaction to an element that has been added to or removed from the DOM.

As you can see, there are a lot of moving parts and things to consider. We started with what we expect from this type of movement in the real world and considered what happens to a group of elements when one of them is updated. It took a little balancing to transition between the showing and hiding states, and to decide which elements get them at specific times, but we got there. We even went so far as to make sure our list is both performant and accessible, things that we'd definitely need to handle on a real project.

Anyway, I wish you all the best in your future projects. And that’s all from me. Over and out.


ShopTalk Goes Video

Css Tricks - Mon, 10/04/2021 - 4:04am

Dave and I slapped up a little videos section of the ShopTalk website. Twelve so far! They are short-ish, between 10-20 minutes, each focused on one fairly specific thing. We’re kinda just dipping our toes here — we don’t even have a real proper name for them yet! The place to subscribe to them is YouTube, and you might actually be subscribed already as we’ve been publishing them right to the CSS-Tricks YouTube Channel. There is a direct RSS feed as well if you’re cool like that.

Dave wrote:

One nice by-product of exploring video is that we get a chance to re-explain mouth-coding examples or overly visual concepts that were probably confusing on the audio podcast. As much as I love podcasting, the lack of visual examples for a visual field like Web Design and Development is a limitation. I like that the YouTube series is like supplementary material to ideas covered in the podcast, rather than a replacement.

Quick hits:

  • If you have topic ideas you’d like to see, hit me in the comments or anywhere else.
  • If you have ideas on how to make the show work, also let me know (E.g. Is the quality OK? Is the length OK? Are the thumbnails OK? What else can we do to appease the YouTube Algorithm Gods?)
  • Should we actually stream it?
  • Smash that Like and Subscribe button.

