Tech News

Making an Audio Waveform Visualizer with Vanilla JavaScript

CSS-Tricks - 9 hours 35 min ago

As a UI designer, I’m constantly reminded of the value of knowing how to code. I pride myself on thinking of the developers on my team while designing user interfaces. But sometimes, I step on a technical landmine.

A few years ago, as the design director of wsj.com, I was helping to re-design the Wall Street Journal’s podcast directory. One of the designers on the project was working on the podcast player, and I came upon Megaphone’s embedded player.

I previously worked at SoundCloud and knew that these kinds of visualizations were useful for users who skip through audio. I wondered if we could achieve a similar look for the player on the Wall Street Journal’s site.

The answer from engineering: definitely not. Given timelines and restraints, it wasn’t a possibility for that project. We eventually shipped the redesigned pages with a much simpler podcast player.

But I was hooked on the problem. Over nights and weekends, I hacked away trying to achieve this effect. I learned a lot about how audio works on the web, and ultimately was able to achieve the look with less than 100 lines of JavaScript!

It turns out that this example is a perfect way to get acquainted with the Web Audio API, and how to visualize audio data using the Canvas API.

But first, a lesson in how digital audio works

In the real, analog world, sound is a wave. As sound travels from a source (like a speaker) to your ears, it compresses and decompresses air in a pattern that your ears and brain hear as music, or speech, or a dog’s bark, etc. etc.

An analog sound wave is a smooth, continuous function.

But in a computer’s world of electronic signals, sound isn’t a wave. To turn a smooth, continuous wave into data that it can store, computers do something called sampling. Sampling means measuring the sound waves hitting a microphone thousands of times every second, then storing those data points. When playing back audio, your computer reverses the process: it recreates the sound, one tiny split-second of audio at a time.

A digital sound file is made up of tiny slices of the original audio, roughly re-creating the smooth continuous wave.

The number of data points in a sound file depends on its sample rate. You might have seen this number before; the typical sample rate for mp3 files is 44.1 kHz. This means that, for every second of audio, there are 44,100 individual data points. For stereo files, there are 88,200 every second — 44,100 for the left channel, and 44,100 for the right. That means a 30-minute podcast has 158,760,000 individual data points describing the audio!
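
A quick back-of-the-envelope check of that math, using nothing but the numbers from the paragraph above:

// 44,100 samples per second, per channel, for a 30-minute stereo file
const sampleRate = 44100;
const channels = 2;
const seconds = 30 * 60;
console.log(sampleRate * channels * seconds); // 158760000 data points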

How can a web page read an mp3?

Over the past nine years, the W3C (the folks who help maintain web standards) have developed the Web Audio API to help web developers work with audio. The Web Audio API is a very deep topic; we’ll hardly crack the surface in this essay. But it all starts with something called the AudioContext.

Think of the AudioContext like a sandbox for working with audio. We can initialize it with a few lines of JavaScript:

// Set up audio context
window.AudioContext = window.AudioContext || window.webkitAudioContext;
const audioContext = new AudioContext();
let currentBuffer = null;

The first line after the comment is necessary because Safari has implemented AudioContext as webkitAudioContext.

Next, we need to give our new audioContext the mp3 file we’d like to visualize. Let’s fetch it using… fetch()!

const visualizeAudio = url => {
  fetch(url)
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer))
    .then(audioBuffer => visualize(audioBuffer));
};

This function takes a URL, fetches it, then transforms the Response object a few times.

  • First, it calls the arrayBuffer() method, which returns — you guessed it — an ArrayBuffer! An ArrayBuffer is just a container for binary data; it’s an efficient way to move lots of data around in JavaScript.
  • We then send the ArrayBuffer to our audioContext via the decodeAudioData() method. decodeAudioData() takes an ArrayBuffer and returns an AudioBuffer, which is a specialized ArrayBuffer for reading audio data. Did you know that browsers came with all these convenient objects? I definitely did not when I started this project.
  • Finally, we send our AudioBuffer off to be visualized.
Filtering the data

To visualize our AudioBuffer, we need to reduce the amount of data we’re working with. Like I mentioned before, we started off with millions of data points, but we’ll have far fewer in our final visualization.

First, let’s limit the channels we are working with. A channel represents the audio sent to an individual speaker. In stereo sound, there are two channels; in 5.1 surround sound, there are six. AudioBuffer has a built-in method to do this: getChannelData(). Call audioBuffer.getChannelData(0), and we’ll be left with one channel’s worth of data.
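
If you want to poke at that data yourself, here is a tiny sketch (assuming audioBuffer is the decoded AudioBuffer from the previous step):

const rawData = audioBuffer.getChannelData(0); // a Float32Array of samples, roughly in the range -1 to 1
console.log(rawData.length, audioBuffer.sampleRate, audioBuffer.duration);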

Next, the hard part: loop through the channel’s data, and select a smaller set of data points. There are a few ways we could go about this. Let’s say I want my final visualization to have 70 bars; I can divide up the audio data into 70 equal parts, and look at a data point from each one.

const filterData = audioBuffer => {
  const rawData = audioBuffer.getChannelData(0); // We only need to work with one channel of data
  const samples = 70; // Number of samples we want to have in our final data set
  const blockSize = Math.floor(rawData.length / samples); // Number of samples in each subdivision
  const filteredData = [];
  for (let i = 0; i < samples; i++) {
    filteredData.push(rawData[i * blockSize]);
  }
  return filteredData;
}

This was the first approach I took. To get an idea of what the filtered data looks like, I put the result into a spreadsheet and charted it.

The output caught me off guard! It doesn’t look like the visualization we’re emulating at all. There are lots of data points that are close to, or at zero. But that makes a lot of sense: in a podcast, there is a lot of silence between words and sentences. By only looking at the first sample in each of our blocks, it’s highly likely that we’ll catch a very quiet moment.

Let’s modify the algorithm to find the average of the samples. And while we’re at it, we should take the absolute value of our data, so that it’s all positive.

const filterData = audioBuffer => {
  const rawData = audioBuffer.getChannelData(0); // We only need to work with one channel of data
  const samples = 70; // Number of samples we want to have in our final data set
  const blockSize = Math.floor(rawData.length / samples); // the number of samples in each subdivision
  const filteredData = [];
  for (let i = 0; i < samples; i++) {
    let blockStart = blockSize * i; // the location of the first sample in the block
    let sum = 0;
    for (let j = 0; j < blockSize; j++) {
      sum = sum + Math.abs(rawData[blockStart + j]); // find the sum of all the samples in the block
    }
    filteredData.push(sum / blockSize); // divide the sum by the block size to get the average
  }
  return filteredData;
}

Let’s see what that data looks like.

This is great. There’s only one thing left to do: because we have so much silence in the audio file, the resulting averages of the data points are very small. To make sure this visualization works for all audio files, we need to normalize the data; that is, change the scale of the data so that the loudest samples measure as 1.

const normalizeData = filteredData => {
  const multiplier = Math.pow(Math.max(...filteredData), -1);
  return filteredData.map(n => n * multiplier);
}

This function finds the largest data point in the array with Math.max(), takes its inverse with Math.pow(n, -1), and multiplies each value in the array by that number. This guarantees that the largest data point will be set to 1, and the rest of the data will scale proportionally.
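
As a quick sanity check, here is what normalizeData() does to a small, made-up array:

normalizeData([0.1, 0.25, 0.5]); // → [0.2, 0.5, 1]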

Now that we have the right data, let’s write the function that will visualize it.

Visualizing the data

To create the visualization, we’ll be using the JavaScript Canvas API. This API draws graphics into an HTML <canvas> element. The first step to using the Canvas API is similar to the Web Audio API.

const draw = normalizedData => {
  // Set up the canvas
  const canvas = document.querySelector("canvas");
  const dpr = window.devicePixelRatio || 1;
  const padding = 20;
  canvas.width = canvas.offsetWidth * dpr;
  canvas.height = (canvas.offsetHeight + padding * 2) * dpr;
  const ctx = canvas.getContext("2d");
  ctx.scale(dpr, dpr);
  ctx.translate(0, canvas.offsetHeight / 2 + padding); // Set Y = 0 to be in the middle of the canvas
};

This code finds the <canvas> element on the page, and checks the browser’s pixel ratio (essentially the screen’s resolution) to make sure our graphic will be drawn at the right size. We then get the context of the canvas (its individual set of methods and values). We calculate the pixel dimensions of the canvas, factoring in the pixel ratio and adding in some padding. Lastly, we change the coordinate system of the <canvas>; by default, (0,0) is in the top-left of the box, but we can save ourselves a lot of math by setting (0, 0) to be in the middle of the left edge.

Now let’s draw some lines! First, we’ll create a function that will draw an individual segment.

const drawLineSegment = (ctx, x, y, width, isEven) => {
  ctx.lineWidth = 1; // how thick the line is
  ctx.strokeStyle = "#fff"; // what color our line is
  ctx.beginPath();
  y = isEven ? y : -y;
  ctx.moveTo(x, 0);
  ctx.lineTo(x, y);
  ctx.arc(x + width / 2, y, width / 2, Math.PI, 0, isEven);
  ctx.lineTo(x + width, 0);
  ctx.stroke();
};

The Canvas API uses a concept called “turtle graphics.” Imagine that the code is a set of instructions being given to a turtle with a marker. In basic terms, the drawLineSegment() function works as follows:

  1. Start at the center line, x = 0.
  2. Draw a vertical line. Make the height of the line relative to the data.
  3. Draw a half-circle the width of the segment.
  4. Draw a vertical line back to the center line.

Most of the commands are straightforward: ctx.moveTo() and ctx.lineTo() move the turtle to the specified coordinate, without drawing or while drawing, respectively.

Line 5, y = isEven ? y : -y, tells our turtle whether to draw down or up from the center line. The segments alternate between being above and below the center line so that they form a smooth wave. In the world of the Canvas API, negative y values are further up than positive ones. This is a bit counter-intuitive, so keep it in mind as a possible source of bugs.

On line 8, we draw a half-circle. ctx.arc() takes six parameters:

  • The x and y coordinates of the center of the circle
  • The radius of the circle
  • The place in the circle to start drawing (Math.PI, or π, is the location, in radians, of 9 o’clock)
  • The place in the circle to finish drawing (0 in radians represents 3 o’clock)
  • A boolean value telling our turtle to draw either counterclockwise (if true) or clockwise (if false). Using isEven in this last argument means that we’ll draw the top half of a circle — clockwise from 9 o’clock to 3 o’clock — for even-numbered segments, and the bottom half for odd-numbered segments.

OK, back to the draw() function.

const draw = normalizedData => {
  // Set up the canvas
  const canvas = document.querySelector("canvas");
  const dpr = window.devicePixelRatio || 1;
  const padding = 20;
  canvas.width = canvas.offsetWidth * dpr;
  canvas.height = (canvas.offsetHeight + padding * 2) * dpr;
  const ctx = canvas.getContext("2d");
  ctx.scale(dpr, dpr);
  ctx.translate(0, canvas.offsetHeight / 2 + padding); // Set Y = 0 to be in the middle of the canvas

  // draw the line segments
  const width = canvas.offsetWidth / normalizedData.length;
  for (let i = 0; i < normalizedData.length; i++) {
    const x = width * i;
    let height = normalizedData[i] * canvas.offsetHeight - padding;
    if (height < 0) {
      height = 0;
    } else if (height > canvas.offsetHeight / 2) {
      height = canvas.offsetHeight / 2; // cap the height so the segment stays inside the canvas
    }
    drawLineSegment(ctx, x, height, width, (i + 1) % 2);
  }
};

After our previous setup code, we need to calculate the pixel width of each line segment. This is the canvas’s on-screen width, divided by the number of segments we’d like to display.

Then, a for-loop goes through each entry in the array, and draws a line segment using the function we defined earlier. We set the x value to the current iteration’s index, times the segment width. height, the desired height of the segment, comes from multiplying our normalized data by the canvas’s height, minus the padding we set earlier. We check a few cases: subtracting the padding might have pushed height into the negative, so we re-set that to zero. If the height of the segment will result in a line being drawn off the top of the canvas, we re-set the height to a maximum value.

We pass in the segment width, and for the isEven value, we use a neat trick: (i + 1) % 2 means “find the remainder of i + 1 divided by 2.” We check i + 1 because our counter starts at 0. If i + 1 is even, its remainder will be zero (or false). If i + 1 is odd, its remainder will be 1 (or true).
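
The parity trick in isolation, if you want to convince yourself:

[0, 1, 2, 3].map(i => (i + 1) % 2); // → [1, 0, 1, 0] — truthy, falsy, truthy, falsy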

And that’s all she wrote. Let’s put it all together. Here’s the whole script, in all its glory.

See the Pen
Audio waveform visualizer
by Matthew Ström (@matthewstrom)
on CodePen.

In the drawAudio() function, we’ve added a few functions to the final call: draw(normalizeData(filterData(audioBuffer))). This chain filters, normalizes, and finally draws the audio we get back from the server.
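
Pieced together from the earlier fetch example, that top-level function probably looks something like this — a sketch, so check the Pen above for the author’s exact version:

const drawAudio = url => {
  fetch(url)
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer))
    .then(audioBuffer => draw(normalizeData(filterData(audioBuffer))));
};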

If everything has gone according to plan, your page should look like this:

Notes on performance

Even with optimizations, this script is still likely running hundreds of thousands of operations in the browser. Depending on the browser’s implementation, this can take many seconds to finish, and will have a negative impact on other computations happening on the page. It also downloads the whole audio file before drawing the visualization, which consumes a lot of data. There are a few ways that we could improve the script to resolve these issues:

  1. Analyze the audio on the server side. Since audio files don’t change often, we can take advantage of server-side computing resources to filter and normalize the data. Then, we only have to transmit the smaller data set; no need to download the mp3 to draw the visualization!
  2. Only draw the visualization when a user needs it. No matter how we analyze the audio, it makes sense to defer the process until well after page load. We could either wait until the element is in view using an intersection observer (see the sketch after this list), or delay even longer until a user interacts with the podcast player.
  3. Progressive enhancement. While exploring Megaphone’s podcast player, I discovered that their visualization is just a facade — it’s the same waveform for every podcast. This could serve as a great default to our (vastly superior) design. Using the principles of progressive enhancement, we could load a default image as a placeholder. Then, we can check to see if it makes sense to load the real waveform before initiating our script. If the user has JavaScript disabled, their browser doesn’t support the Web Audio API, or they have the save-data header set, nothing is broken.
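
Here is a minimal sketch of that intersection observer idea, assuming a drawAudio() function like the one above and a hypothetical episode URL:

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach(entry => {
    if (entry.isIntersecting) {
      drawAudio("/episode.mp3"); // hypothetical URL
      obs.disconnect(); // only do the expensive work once
    }
  });
});
observer.observe(document.querySelector("canvas"));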

I’d love to hear any thoughts y’all have on optimization, too.

Some closing thoughts

This is a very, very impractical way of visualizing audio. It runs on the client side, processing millions of data points into a fairly straightforward visualization.

But it’s cool! I learned a lot in writing this code, and even more in writing this article. I refactored a lot of the original project and trimmed the whole thing in half. Projects like this might not ever go on to see a production codebase, but they are unique opportunities to develop new skills and a deeper understanding of some of the neat APIs modern browsers support.

I hope this was a helpful tutorial. If you have ideas of how to improve it, or any cool variations on the theme, please reach out! I’m @ilikescience on Twitter.


When to Use SVG vs. When to Use Canvas

CSS-Tricks - 9 hours 35 min ago

SVG and canvas are both technologies that can draw stuff in web browsers, so they are worth comparing and understanding when one is more suitable than the other. Even a light understanding of them makes the choice between the two pretty clear.

  • A little flat-color icon? That's clearly SVG territory.
  • An interactive console-like game? That's clearly canvas territory.

I know we didn't cover why yet, but I hope that will become clear as we dig into it.

SVG is vector and declarative

If you know you need vector art, SVG is the choice. Vector art is visually crisp and tends to be a smaller file size than raster graphics like JPG.

That makes logos a very common SVG use case. SVG code can go right within HTML, and it reads like a set of declarative drawing instructions:

<svg viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="50" />
</svg>

If you care a lot about the flexibility and responsiveness of the graphic, SVG is the way.

Canvas is a JavaScript drawing API

You put a <canvas> element in HTML, then do the drawing in JavaScript. In other words, you issue commands to tell it how to draw (which is more imperative than declarative).

<canvas id="myCanvas" width="578" height="200"></canvas>
<script>
  var canvas = document.getElementById('myCanvas');
  var context = canvas.getContext('2d');
  var centerX = canvas.width / 2;
  var centerY = canvas.height / 2;
  var radius = 70;

  context.beginPath();
  context.arc(centerX, centerY, radius, 0, 2 * Math.PI, false);
  context.fillStyle = 'green';
  context.fill();
</script>

SVG is in the DOM

If you're familiar with DOM events like click and mouseDown and whatnot, those are available in SVG as well. A <circle> isn't terribly different than a <div> in that respect.

<svg viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="50" />
  <script>
    document.querySelector('circle').addEventListener('click', e => {
      e.target.style.fill = "red";
    });
  </script>
</svg>

SVG for accessibility

You can have a text alternative for canvas:

<canvas aria-label="Hello ARIA World" role="img"></canvas>

You can do that in SVG too, but since SVG and its guts can be right in the DOM, we generally think of SVG as being what you use if you're trying to build an accessible experience. Explained another way, you can build an SVG that assistive technology can access and find links and sub-elements with their own auditory explanations and such.

Text is also firmly in SVG territory. SVG literally has a <text> element, which is accessible and visually crisp — unlike canvas where text is typically blurry.

Canvas for pixels

As you'll see in Sarah Drasner's comparison below, canvas is a way of saying dance, pixels, dance! That's a fun way of explaining the concept that drives it home better than any dry technical sentiment could do.

Highly interactive work with lots and lots of complex detail and gradients is the territory of canvas. You'll see a lot more games built with canvas than SVG for this reason, although there are always exceptions (note the simple vector-y-ness of this game).

CSS can play with SVG

We saw above that SVG can be in the DOM and that JavaScript can get in there and do stuff. The story is similar with CSS.

<svg viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="50" />
  <style>
    circle { fill: blue; }
  </style>
</svg>

Note how I've put the <script> and <style> blocks within the <svg> for these examples, which is valid. But assuming you've put the SVG literally in the HTML, you could move those out, or have other external CSS and JavaScript do the same thing.

We have a massive guide of SVG Properties and CSS. But what is great to know is that the stuff that CSS is great at is still possible in SVG, like :hover states and animation!

See the Pen
ExxbpBE
by Chris Coyier (@chriscoyier)
on CodePen.

Combinations

Technically, they aren't entirely mutually exclusive. An <svg> can be painted to a <canvas>.

As Blake Bowen proves, you can even keep the SVG on the canvas very crisp!

See the Pen
SVG vs Canvas Scaling
by Blake Bowen (@osublake)
on CodePen.

Ruth John's comparison

See the Pen
SVG vs Canvas
by Rumyra (@Rumyra)
on CodePen.

Sarah Drasner's comparison

Tablized from this tweet.

DOM/Virtual DOM
  • Pros: Great for UI/UX animation; great for SVG that is resolution independent; easier to debug
  • Cons: Tanks with lots of objects; because you have to care about the way you animate

Canvas
  • Pros: Dance, pixels, dance!; great for really impressive 3D or immersive stuff; movement of tons of objects
  • Cons: Harder to make accessible; not resolution independent out of the box; breaks to nothing

Shirley Wu's comparison

Tablized from this tweet.

SVG
  • Pros: Easy to get started; easier to register user interactions; easy to animate
  • Cons: Potentially complex DOM; not performant for a large number of elements

Canvas
  • Pros: Very performant; easy to update
  • Cons: More work to get started; more work to handle interactions; have to write custom animations

Many folks consider scenarios with a lot of objects (1,000+, as Shirley says) to be the territory of canvas.

SVG is the default choice; canvas is the backup

A strong opinion, but it feels right to me:

One extremely basic way to answer it is "use canvas when you cannot use svg" (where "cannot" might mean animating thousands of objects, manipulating each pixel individually, etc.). To put it another way, SVG should be your default choice, canvas your backup plan.

— Benjamin De Cock (@bdc) October 2, 2019

Wrap up

So, if we revisit those first two bullet points...

  • A little flat-color icon? SVG goes in the DOM, so something like an icon inside a button makes a lot of sense for SVG — not to mention it can be controlled with CSS and have regular JavaScript event handlers and stuff
  • An interactive console-like game? That will have lots and lots of moving elements, complex animation and interaction, performance considerations. Those are things that canvas excels at.

And yet there is a bunch of middle ground here. As a day-to-day web designer/developer kinda guy, I find SVG far more useful on a practical level. I'm not sure I've done any production work in canvas ever. Part of that is because I don't know canvas nearly as well. I wrote a book on SVG, so I've done far more research on that side, but I know enough that SVG is doing the right stuff for my needs.


A Super Weird CSS Bug That Affects Text Selection

CSS-Tricks - Mon, 11/11/2019 - 3:15pm

You know how you can style (to some degree) selected text with ::selection? Well, Jeff Starr uncovered a heck of a weird CSS bug.

If you:

  1. Leave that selector empty
  2. Link it from an external stylesheet (rather than <style> block)

Selecting text will have no style at all. 😳😬😕

In other words, if you <link rel="stylesheet" ...> up some CSS that includes these empty selectors:

::-moz-selection { }
::selection { }

Then text appears to be un-selectable. You actually can still select the text, so it's a bit like you did ::selection { background: transparent; } rather than user-select: none;.

The fact that it behaves that way in most browsers (Safari being a lone exception) makes it seem like that's how it is specced, but I'm calling it a bug. A selector with zero properties in it should essentially be ignored, rather than doing something so heavy-handed.

Jeff made a demo. I did as well to confirm it.

See the Pen
Invisible Text Selection Bug
by Chris Coyier (@chriscoyier)
on CodePen.


Pac-Man… in CSS!

CSS-Tricks - Mon, 11/11/2019 - 5:45am

You all know the famous Pac-Man video game, right? The game is fun and building an animated Pac-Man character in HTML and CSS is just as fun! I’ll show you how to create one while leveraging the powers of the clip-path property.

See the Pen
Animated Pac-Man
by Maks Akymenko (@maximakymenko)
on CodePen.

Are you excited? Let’s get to it!

First, let’s bootstrap the project

We only need two files for our project: index.html and style.css. We could do this manually by creating an empty folder with the files inside. Or, we can use this as an excuse to work with the command line, if you’d like:

mkdir pacman
cd pacman
touch index.html && touch style.css

Set up the baseline styles

Go to style.css and add basic styling for your project. You could also use things like reset.css and normalize.css to reset the browser styling, but our project is simple and straightforward, so we won’t do much here. One thing you'll want to do for sure is use Autoprefixer to help with cross-browser compatibility.

We're basically setting the body to the full width and height of the viewport and centering things right smack dab in the middle of it. Things like the background color and fonts are purely aesthetic.

@import url('https://fonts.googleapis.com/css?family=Slabo+27px&display=swap');

*,
*:after,
*:before {
  box-sizing: border-box;
}

body {
  background: #000;
  color: #fff;
  padding: 0;
  margin: 0;
  font-family: 'Slabo 27px', serif;
  display: flex;
  height: 100vh;
  justify-content: center;
  align-items: center;
}

Behold, Pac-Man in HTML!

Do you remember how Pac-Man looks? He's essentially a yellow circle and an opening in the circle for a mouth. He’s a two-dimensional dot-eating machine!

So he has a body (or is he just a head?) and a mouth. He doesn’t even have eyes, but we’ll give him one anyway.

This’ll be our HTML markup:

<div class="pacman">
  <div class="pacman__eye"></div>
  <div class="pacman__mouth"></div>
</div>

Dressing up Pac-Man with CSS

The most interesting part starts! Go to style.css and create the styles for Pac-Man.

First, we’ll create his body/head/whatever-that-is. Again, that’s merely a circle with a yellow background:

.pacman {
  width: 100px;
  height: 100px;
  border-radius: 50%;
  background: #f2d648;
  position: relative;
  margin-top: 20px;
}

His (non-existent) eye is pretty much the same — a circle, but smaller and dark gray. We’ll give it absolute positioning so we can place it right where it needs to be:

.pacman__eye {
  position: absolute;
  width: 10px;
  height: 10px;
  border-radius: 50%;
  top: 20px;
  right: 40px;
  background: #333333;
}

Now we have the basic shape!

See the Pen
Mouthless Pac-Man
by CSS-Tricks (@css-tricks)
on CodePen.

Using clip-path to draw the mouth

Pretty straightforward so far, right? If our Pac-Man is going to eat some dots (and chase some ghosts), he’s going to need a mouth. We’re going to create yet another circle, but make it black this time and overlay it right on top of his yellow head. Then we’re going to use the clip-path property to cut out a slice of it — sort of like an empty pie container except for one last piece of pie.

The dotted border shows where the black circle has been cut out, leaving only a slice of it to be Pac-Man's mouth.

.pacman__mouth {
  background: #000;
  position: absolute;
  width: 100%;
  height: 100%;
  clip-path: polygon(100% 74%, 44% 48%, 100% 21%);
}

Why a polygon and all those percentages?!?! Notice that we’ve already established the width and height of the element. The polygon() function lets us draw a free-form shape inside the bounds of the element and that shape serves as a mask that only displays that portion of the element. So, we’re using clip-path and telling it we want a shape (polygon()) that contains a series of points at specific positions (100% 74%, 44% 48%, 100% 21%).

The clip-path property can be hard to grok. The CSS-Tricks Almanac helps explain it. There’s also the cool Clippy app that makes it easy to draw clip-path shapes and export the CSS, which is what I did for this tutorial.

So far, so good:

See the Pen
Pac-Man with Mouth
by CSS-Tricks (@css-tricks)
on CodePen.

Make Pac-Man eat

We’ve got a pretty good-looking Pac-Man, but it would be way cooler if he could chew his food. I mean, maybe he wants to swallow his food whole, but we’re not going to allow that. Let’s make his mouth open and close instead.

All we need to do is animate the clip-path property, and we’ll use @keyframes for that. I’m naming this animation eat:

@keyframes eat {
  0% {
    clip-path: polygon(100% 74%, 44% 48%, 100% 21%);
  }
  25% {
    clip-path: polygon(100% 60%, 44% 48%, 100% 40%);
  }
  50% {
    clip-path: polygon(100% 50%, 44% 48%, 100% 50%);
  }
  75% {
    clip-path: polygon(100% 59%, 44% 48%, 100% 35%);
  }
  100% {
    clip-path: polygon(100% 74%, 44% 48%, 100% 21%);
  }
}

Again, I used the Clippy app to get the values; however, feel free to pass in your own. Maybe you’ll be able to make the animation even smoother!

We’ve got our keyframes in place, so let’s add it to our .pacman class. We could use the shorthand animation property, but I’ve broken out the properties to make things more self-explanatory so you can see what’s going on:

animation-name: eat;
animation-duration: 0.7s;
animation-iteration-count: infinite;

There we go!

See the Pen
Chewing Pac-Man
by CSS-Tricks (@css-tricks)
on CodePen.

We’ve gotta feed Pac-Man

If Pac-Man can chomp, why not give him some food to eat? Let’s modify our HTML markup a bit to include some food:

<div class="pacman">
  <div class="pacman__eye"></div>
  <div class="pacman__mouth"></div>
  <div class="pacman__food"></div>
</div>

...and let’s style it up. After all, food needs to be appetizing to the eyes as well as the mouth! We’re going to make yet another circle because that’s what the game uses.

.pacman__food {
  position: absolute;
  width: 15px;
  height: 15px;
  background: #fff;
  border-radius: 50%;
  top: 40%;
  left: 120px;
}

See the Pen
Chewing Pac-Man with Teasing Food
by CSS-Tricks (@css-tricks)
on CodePen.

Aw, poor Pac-Man sees the food, but is unable to eat it. Let's make the food come to him using another sprinkle of CSS animation:

@keyframes food {
  0% {
    transform: translateX(0);
    opacity: 1;
  }
  100% {
    transform: translateX(-50px);
    opacity: 0;
  }
}

Now we only need to pass this animation to our .pacman__food class.

animation-name: food;
animation-duration: 0.7s;
animation-iteration-count: infinite;

We have a happy, eating Pac-Man!

See the Pen
Pac-Man Eating
by CSS-Tricks (@css-tricks)
on CodePen.

Like before, the animation took some tweaking on my end to get it just right. What’s happening is the food starts away from Pac-Man at full opacity, then slides closer to him using transform: translateX() to move from left to right and disappears with zero opacity. Then it’s set to run infinitely so he eats all day, every day!

That’s a wrap! It’s fun to take little things like this and see how HTML and CSS can be used to re-create them. I’d like to see your own Pac-Man (or Ms. Pac-Man!). Share in the comments!


Disabled buttons suck

CSS-Tricks - Mon, 11/11/2019 - 5:45am

In this oldie but goodie, Hampus Sethfors digs into why disabled buttons are troubling for usability reasons and he details one example where this was pretty annoying for him. The same has happened to me recently where I clicked a button that looked like a secondary button and... nothing happened.

Here’s another reason why disabled buttons are bad though:

Disabled buttons usually have call to action words on them, like “Send”, “Order” or “Add friend”. Because of that, they often match what users want to do. So people will try to click them.

What’s the alternative? Throw errors? Highlight the precise field that the user needs to update? All those are certainly very helpful to show what the problem is and how to fix it. To play devil’s advocate for a second though, Daniel Koster wrote about how disabled buttons don’t have to suck which is sort of interesting.

But whatever the case, let's just not get into the topic of disabled links.



Two-Value Display Syntax (and Sometimes Three)

CSS-Tricks - Fri, 11/08/2019 - 11:17am

You know the single-value syntax: .thing { display: block; }. The value "block" being a single value. There are lots of single values for display. For example, inline-flex, which is like flex in that it becomes a flex container, but behaves like an inline-level element rather than a block-level element. Somewhat intuitive, but much better served by a two-value system that can apply that same concept more broadly and just as intuitively.

For a deep look, you should read Rachel Andrew's blog post The two-value syntax of the CSS Display property. The spec is also a decent read, as is this video from Miriam:

This is how it maps in my brain: choose block or inline, then choose flow, flow-root, flex, grid, or table. If it's a list-item, that's a third thing.

You essentially pick one from each column to describe the layout you want. So the existing values we use all the time map out something like this:

Another way to think about those two columns I have there is "outside" and "inside" display values. Outside, as in, how it flows with other elements around it. Inside, as in, how layout happens inside those elements.

Can you actually use it?

Not really. Firefox 70 is first out of the gate with it, and there are no other signals for support from Chrome-land or Safari-land that I know about. It's a great evolution of CSS, but as far as day-to-day usage, it'll be years out. Something as vital as layout isn't something you wanna let fail just for this somewhat minor descriptive benefit. Nor is it probably worth the trouble to progressively enhance with @supports and such.

Tidbits
  • Check out the automatic transformations bit. Just because you set an element to a particular display, doesn't mean it might not be overruled by a certain situation. I'm assuming it's mostly talking about an element being forced to be a flex item or grid item.
  • There is implied shorthand. Like if you inline list-item, that's really inline flow list-item whereas list-item is block flow list-item. Looks all fairly intuitive.
  • You still use stuff like table-row and table-header-group. Those are single-value deals, as is contents and none.
  • Column one technically includes run-in too, but as far as I know, no browser has ever supported run-in display.
  • Column two technically includes ruby, but I have never understood what that even is.
How we talk about CSS

I like how Rachel ties this change to a more rational mental and teaching model:

... They properly explain the interaction of boxes with other boxes, in terms of whether they are block or inline, plus the behavior of the children. For understanding what display is and does, I think they make for a very useful clarification. As a result, I’ve started to teach display using these two values to help explain what is going on when you change formatting contexts.

It is always exciting to see new features being implemented, I hope that other browsers will also implement these two-value versions soon. And then, in the not too distant future we’ll be able to write CSS in the same way as we now explain it, clearly demonstrating the relationship between boxes and the behavior of their children.


Diana Smith’s Pure CSS Artwork “Lace”

CSS-Tricks - Fri, 11/08/2019 - 11:17am

Diana is at it again with her absolutely unbelievable CSS paintings. This latest one is called Lace. Past paintings are Francine, Vignes, and Zigario.

She wrote for us last year if you'd like a little insight into her thinking.

Andy Baio looked at the painting in a variety of older and incompatible browsers, and the results are hilarious and amazing.

Screenshots: the painting as rendered in IE 8 and in Safari 13.



Working with Fusebox and React

CSS-Tricks - Fri, 11/08/2019 - 5:35am

If you are searching for an alternative bundler to webpack, you might want to take a look at FuseBox. It builds on what webpack offers — code-splitting, hot module reloading, dynamic imports, etc. — but code-splitting in FuseBox requires zero configuration by default (although webpack will offer the same as of version 4.0).

Instead, FuseBox is built for simplicity (in the form of less complicated configuration) and performance (by including aggressive caching methods). Plus, it can be extended to use tons of plugins that can handle anything you need above and beyond the defaults.

Oh yeah, and if you are a fan of TypeScript, you might be interested in knowing that FuseBox makes it a first-class citizen. That means you can write an application in TypeScript — with no configuration! — and it will use the TypeScript transpiler to compile scripts by default. Don’t plan on using TypeScript? No worries, the transpiler will handle any JavaScript. Yet another bonus!

To illustrate just how fast it is to get up and running, let’s build the bones of a sample application that's usually scaffolded with create-react-app. Everything we’re doing will be on GitHub if you want to follow along.

FuseBox is not the only alternative to webpack, of course. There are plenty and, in fact, Maks Akymenko has a great write-up on Parcel which is another great alternative worth looking into.

The basic setup

Start by creating a new project directory and initializing it with npm:

## Create the directory
mkdir csstricks-fusebox-react && cd $_

## Initialize with npm default options
npm init -y

Now we can install some dependencies. We’re going to build the app in React, so we’ll need that as well as react-dom.

npm install --save react react-dom

Next, we’ll install FuseBox and Typescript as dependencies. We’ll toss Uglify in there as well for help minifying our scripts and add support for writing styles in Sass.

npm install --save-dev fuse-box typescript uglify-js node-sass

Alright, now let’s create a src folder in the root of the project directory (which can be done manually). Add the following files (App.js and index.js) in there, including the contents:

// App.js
import * as React from "react";
import * as logo from "./logo.svg";

const App = () => {
  return (
    <div className="App">
      <header className="App-header">
        <img src={logo} className="App-logo" alt="logo" />
        <h1 className="App-title">Welcome to React</h1>
      </header>
      <p className="App-intro">
        To get started, edit `src/App.js` and save to reload.
      </p>
    </div>
  )
};

export default App;

You may have noticed that we’re importing an SVG file. You can download it directly from the GitHub repo.

// index.js
import * as React from "react";
import * as ReactDOM from "react-dom";
import App from "./App"

ReactDOM.render(
  <App />,
  document.getElementById('root')
);

You can see that the way we handle importing files is a little different than a typical React app. That’s because FuseBox does not polyfill imports by default.

So, instead of doing this:

import React from "react";

...we’re doing this:

import * as React from "react";

We also need the HTML template that the WebIndexPlugin (configured below) will use — the $css and $bundles placeholders are where styles and bundled scripts get injected:

<!-- ./src/index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
  <title>CSSTricks Fusebox React</title>
  $css
</head>
<body>
  <noscript>
    You need to enable JavaScript to run this app.
  </noscript>
  <div id="root"></div>
  $bundles
</body>
</html>

Styling isn’t really the point of this post, but let’s drop some in there to dress things up a bit. We’ll have two stylesheets. The first is for the App component and saved as App.css.

/* App.css */
.App {
  text-align: center;
}

.App-logo {
  animation: App-logo-spin infinite 20s linear;
  height: 80px;
}

.App-header {
  background-color: #222;
  height: 150px;
  padding: 20px;
  color: white;
}

.App-intro {
  font-size: large;
}

@keyframes App-logo-spin {
  from { transform: rotate(0deg); }
  to { transform: rotate(360deg); }
}

The second stylesheet is for index.js and should be saved as index.css:

/* index.css */
body {
  margin: 0;
  padding: 0;
  font-family: sans-serif;
}

OK, we’re all done with the initial housekeeping. On to extending FuseBox with some goodies!

Plugins and configuration

We said earlier that configuring FuseBox is designed to be way less complex than the likes of webpack — and that’s true! Create a file called fuse.js in the root directory of the application.

We start by importing the plugins we’ll be making use of; all of them come from the FuseBox package we installed.

const { FuseBox, CSSPlugin, SVGPlugin, WebIndexPlugin } = require("fuse-box");

Next, we’ll initialize a FuseBox instance and tell it what we’re using as the home directory and where to put compiled assets:

const fuse = FuseBox.init({
  homeDir: "src",
  output: "dist/$name.js"
});

We’ll let FuseBox know that we intend to use the TypeScript compiler:

const fuse = FuseBox.init({
  homeDir: "src",
  output: "dist/$name.js",
  useTypescriptCompiler: true,
});

We identified plugins in the first line of the configuration file, but now we’ve got to call them. We’re using the plugins pretty much as-is, but definitely check out what the CSSPlugin, SVGPlugin and WebIndexPlugin have to offer if you want more fine-grained control over the options.

const fuse = FuseBox.init({
  homeDir: "src",
  output: "dist/$name.js",
  useTypescriptCompiler: true,
  plugins: [
    CSSPlugin(),
    SVGPlugin(),
    WebIndexPlugin({
      template: "src/index.html"
    })
  ]
});

FuseBox lets us configure a development server. We can define ports, SSL certificates, and even open the application in a browser on build.

We’ll simply use the default environment for this example:

fuse.dev();

It is important to define the development environment *before* the bundle instructions that come next:

fuse
  .bundle("app")
  .instructions(`>index.js`)
  .hmr()
  .watch()

What the heck is this? When we initialized the FuseBox instance, we specified an output using dist/$name.js. The value for $name is provided by the bundle() method. In our case, we set the value as app. That means that when the application is bundled, the output destination will be dist/app.js.

The instructions() method defines how FuseBox should deal with the code. In our case, we’re telling it to start with index.js and to execute it after it’s loaded.

The hmr() method is used for cases where we want to update the user when a file changes; this usually involves refreshing the browser. Meanwhile, watch() re-bundles the bundled code after every saved change.

With that, we’ll cap it off by launching the build process with fuse.run() at the end of the configuration file. Here’s everything we just covered put together:

const { FuseBox, CSSPlugin, SVGPlugin, WebIndexPlugin } = require("fuse-box");

const fuse = FuseBox.init({
  homeDir: "src",
  output: "dist/$name.js",
  useTypescriptCompiler: true,
  plugins: [
    CSSPlugin(),
    SVGPlugin(),
    WebIndexPlugin({
      template: "src/index.html"
    })
  ]
});

fuse.dev();

fuse
  .bundle("app")
  .instructions(`>index.js`)
  .hmr()
  .watch()

fuse.run();

Now we can run the application from the terminal by running node fuse. This will start the build process which creates the dist folder that contains the bundled code and the template we specified in the configuration. After the build process is done, we can point the browser to http://localhost:4444/ to see our app.

Running tasks with Sparky

FuseBox includes a task runner that can be used to automate a build process. It's called Sparky and you can think of it as sorta like Grunt and Gulp, the difference being that it is built on top of FuseBox with built-in access to FuseBox plugins and the FuseBox API.

We don’t have to use it, but task runners make development a lot easier by automating things we’d otherwise have to do manually and it makes sense to use what’s specifically designed for FuseBox.

To use it, we’ll update the configuration we have in fuse.js, starting with some imports that go at the top of the file:

const { src, task, context } = require("fuse-box/sparky");

Next, we’ll define a context, which will look similar to what we already have. We’re basically wrapping what we did in a context and setConfig(), then initializing FuseBox in the return:

context({
  setConfig() {
    return FuseBox.init({
      homeDir: "src",
      output: "dist/$name.js",
      useTypescriptCompiler: true,
      plugins: [
        CSSPlugin(),
        SVGPlugin(),
        WebIndexPlugin({
          template: "src/index.html"
        })
      ]
    });
  },
  createBundle(fuse) {
    return fuse
      .bundle("app")
      .instructions(`> index.js`)
      .hmr();
  }
});

It’s possible to pass a class, function or plain object to a context. In the above scenario, we’re passing functions, specifically setConfig() and createBundle(). setConfig() initializes FuseBox and sets up the plugins. createBundle() does what you might expect by the name, which is bundling the code. Again, the difference from what we did before is that we’re embedding both functionalities into different functions which are contained in the context object.

We want our task runner to run tasks, right? Here are a few examples we can define:

task("clean", () => src("dist").clean("dist").exec());

task("default", ["clean"], async (context) => {
  const fuse = context.setConfig();
  fuse.dev();
  context.createBundle(fuse);
  await fuse.run();
});

The first task will be responsible for cleaning the dist directory. The first argument is the name of the task, while the second is the function that gets called when the task runs.
To call the first task, we can do node fuse clean from the terminal.

When a task is named default (which is the first argument as in the second task), that task will be the one that gets called by default when running node fuse — in this case, that’s the second task in our configuration. Other tasks will need to be called explicitly in the terminal, like node fuse <task_name>.

So, our second task is the default and three arguments are passed into it. The first is the name of the task (`default`), the second (["clean"]) is an array of dependencies that should be called before the task itself is executed, and the third is the async function that grabs the configured FuseBox instance from the context, starts the dev server with fuse.dev(), and begins the bundling and build process.
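
To see how this might be extended, here is a hypothetical extra task (not part of the article’s setup) that bundles once without starting the dev server, reusing only the pieces already defined in the context; you would run it with node fuse build:

// Sketch: a one-off bundle without the dev server, built from the same context helpers.
task("build", ["clean"], async (context) => {
  const fuse = context.setConfig();   // same initialization as the default task
  context.createBundle(fuse);         // same bundle instructions
  await fuse.run();                   // bundle once, no dev server, no watching
});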

Now we can run things with node fuse in the terminal. You have the option to add these to your package.json file if that’s more comfortable and familiar to you. The script section would look like this:

"scripts": { "start": "node fuse", "clean": "node fuse clean" }, That’s a wrap!

All in all, FuseBox is an interesting alternative to webpack for all your application bundling needs. As we saw, it offers the same sort of power that we all tend to like about webpack, but with a way less complicated configuration process that makes it much easier to get up and running, thanks to built-in Typescript support, performance considerations, and a task runner that’s designed to take advantage of the FuseBox API.

What we looked at was a pretty simple example. In practice, you’re likely going to be working with more complex applications, but the concepts and principles are the same. It’s nice to know that FuseBox is capable of handling more than what’s baked into it, but that the initial setup is still super streamlined.

If you’re looking for more information about FuseBox, its site and documentation are obviously a great starting point. The following links are also super helpful for getting more perspective on how others are setting it up and using it on projects.


Weekly Platform News: Web Apps in Galaxy Store, Tappable Stories, CSS Subgrid

CSS-Tricks - Thu, 11/07/2019 - 2:42pm

In this week's roundup: Firefox gains locksmith-like powers, Samsung's Galaxy Store starts supporting Progressive Web Apps, CSS Subgrid is shipping in Firefox 70, and a new study confirms that users prefer to tap into content rather than scroll through it.

Let's get into the news.

Securely generated passwords in Firefox

Firefox now suggests a securely generated password when the user focuses an <input> element that has the autocomplete="new-password" attribute value. This option is also available via the context menu on any password field.


(via The Firefox Frontier)

Web apps in Samsung’s app store

Samsung has started adding Progressive Web Apps (PWA) to its app store, Samsung Galaxy Store, which is available on Samsung devices. The new “Web apps” category is visible initially only in the United States. If you own a PWA, you can send its URL to pwasupport@samsung.com, and Samsung will help you get onboarded into Galaxy Store.

(via Ada Rose Cannon)

Tappable stories on the mobile web

According to a study commissioned by Google, the majority of people prefer tappable stories over scrolling articles when consuming content on the mobile web. Google is using this study to promote AMP Stories, which is a format for tappable stories on the mobile web.

Both studies had participants interact with real-world examples of tappable stories on the mobile web as well as scrolling article equivalents. Forrester found that 64% of respondents preferred the tappable mobile web story format over its scrolling article equivalent.

(via Alex Durán)

The grid form use-case for CSS Subgrid

CSS Subgrid is shipping in Firefox next month. This new feature enables grid items of nested grids to be put onto the outer grid, which is useful in situations where the wanted grid items are not direct children of the grid container.

(via Šime Vidas)


Location, Privilege and Performant Websites

CSS-Tricks - Thu, 11/07/2019 - 9:37am

Here’s a wonderful reminder from Stephanie Stimac about web performance. She writes about a recent experience of moving to an area with an unreliable network and how this caused problems for her as she tried to figure out what was happening during a power blackout:

Assuming all of your customers are living the same life, with the same privilege, with the same access to fast internet and data is the quickest way to ensure you’re excluding some of them and not providing the same level of service the rest get. It’s most likely not even happening intentionally, bias is inherent in us all in some way or another. Bias based on location is something I hadn’t considered before my experience on a subpar network due to where I live.

But if you’re providing a service or utility that is essential to a large portion of your community, it’s important to take a step back and assess your user experience from a different perspective.

Stephanie also makes note of how NPR has a text-only version of their website so that it’s still possible to access information on the worst possible network connections. CNN does something quite similar.

And doesn't this go beyond web performance? At the core, this is an accessibility issue as well and yet another example of how hard it is to make accessible sites.



Query JSON documents in the Terminal with GROQ

CSS-Tricks - Thu, 11/07/2019 - 5:23am

JSON documents are everywhere today, but they are rarely structured the way you want them to be. They often include too much data, have weirdly named fields, or place the data in unnecessary nested objects. Graph-Relational Object Queries (GROQ) is a query language (like SQL, but different) which is designed to work directly on JSON documents. It basically lets you write queries you can quickly filter and then reformat JSON documents to get them into the most convenient shape.

GROQ was developed by Sanity.io (where it’s used as the primary query language). It’s open source and it gives us built-in ways to use it in JavaScript and the command line on any JSON source. Together, we’ll add GROQ to the terminal toolkit, which will save you time whenever you need to wrangle some JSON data on a project.

Let’s install GROQ

Like most things, we need to install the GROQ CLI tool and can do so with npm (or Yarn) in the terminal:

$ npm install -g groq-cli

In order to play with it, we need to have a JSON file available. We’ll use curl to download an example dataset of todo data:

$ curl -o todos.json https://jsonplaceholder.typicode.com/todos

Let's take a quick look at a sample item in the data:

{ "userId": 1, "id": 1, "title": "delectus aut autem", "completed": false },

Pretty straightforward. We have a user ID, a todo item ID, a todo title and a boolean specifying whether the todo item is completed or not.

Now let’s run a basic GROQ query: finding all completed todos, but only return the todo titles and user IDs. It’s OK to copy/paste this line because we’ll walk through what it means in a bit.

$ cat todos.json | groq '*[completed == true]{title, userId}' --pretty

The groq command line tool accepts a JSON document on standard input. This works very nicely with the Unix philosophy of “doing one thing and working on a text stream.” In order to read JSON from a file we’ll use the cat command. Also note that groq will output minimal JSON on a single line by default, but by passing --pretty, we get a nicely indented and highlighted syntax.

To store the result, we can pipe it to a new file using >:

$ cat todos.json | groq '*[completed == true]{title, userId}' > result.json

The query itself consists of three parts:

  • * refers to the dataset (i.e. the data in the JSON file).
  • [completed == true] is a filter that removes items that are marked as incomplete.
  • {title, userId} is a projection that causes the query to only return the "title" and "userId" properties.
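
For comparison, here is roughly the same filter and projection written with plain JavaScript array methods (assuming todos is the parsed contents of todos.json):

const result = todos
  .filter(todo => todo.completed === true)          // the [completed == true] filter
  .map(({ title, userId }) => ({ title, userId })); // the {title, userId} projection
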
Let’s warm up with some exercises

You probably didn’t think you’d need to exercise to get through this post! Well, the good news is that we’re only exercising the mind with a few things to try out with GROQ before we get into more details.

  1. What happens if you remove [completed == true] and/or {title, userId}?
  2. How can you change the query to find all todos by the user with an ID of 2?
  3. How can you change the query to find uncompleted todos by the user with an ID of 2?
  4. What happens if the filter in the original query example swaps places with the projection?
  5. How would you write a single command (with pipes) that downloads the JSON and processes it with GROQ?

We’ll put the answers at the end of the post for you to reference.

Querying the Nobel prize winners

The todo data is nice for a warmup, but let’s be honest: It’s not very motivating to look at a list that uses Latin as placeholder content. However, the Nobel Prize has a dataset of all past laureates available to use publicly.

Here’s a sample return:

{ "laureates": [ { "id": "1", "firstname": "Wilhelm Conrad", "surname": "Röntgen", "born": "1845-03-27", "died": "1923-02-10", "bornCountry": "Prussia (now Germany)", "bornCountryCode": "DE", "bornCity": "Lennep (now Remscheid)", "diedCountry": "Germany", "diedCountryCode": "DE", "diedCity": "Munich", "gender": "male", "prizes": [...], }, // ... ] }

Ah! This is much more interesting! Let’s download the dataset and find the first name of all Norwegian laureates. Here, we’re going to use --output flag for curl to save the data to a file.

$ curl --output laureate.json http://api.nobelprize.org/v1/laureate.json
$ cat laureate.json | groq '*.laureates[bornCountryCode == "NO"]{firstname}' --pretty

What do you get back? I received 12 Norwegian Nobel laureates. Not bad!

Note that this query is not quite like the first query we wrote. We have an extra .laureates in this one. When we used * in the todo dataset, it represents the whole JSON document which is contained in an array at the top-level of the todo dataset. On the other hand, the laureate file uses an object at the top-level where the list of laureates is stored in the "laureates" property.

To access a specific item, we can use the filter [0] and return just the first name. That ought to tell us who the first Norwegian was to win a Nobel prize.

$ cat laureate.json | groq '*.laureates[bornCountryCode == "NO"]{firstname}[0]' --pretty

// Returned object
{
  "firstname": "Ivar"
}

More exercises!

We’d be remiss not to play around with this new dataset a bit to see how the queries work.

  1. Write a query to find all Nobel laureates from your own country.
  2. Write a query to return the last Norwegian laureate. Hint: -1 refers to the last item.
  3. What happens if you try to filter directly on the root object? *[bornCountryCode == "NO"]?
  4. What’s the difference between *.laureates[bornCountryCode == "NO"][0] and *.laureates[0][bornCountryCode == "NO"]?

Like last time, answers will be at the end of this post.

Working with filters

Now that we know there have been 12 Norwegian Nobel laureates in total, how many of them were born after 1950? That’s no problem to figure out with GROQ:

$ cat laureate.json | groq '*.laureates[bornCountryCode == "NO" && born >= "1950-01-01"]{firstname}' --pretty

// Sample return
[
  {
    "firstname": "May-Britt"
  },
  {
    "firstname": "Edvard I."
  }
]

In fact, GROQ has a rich set of operators we can use inside a filter. We can compare numbers and strings with equal (==), not equal (!=), greater than (>), greater than or equal (>=), less than (<), and less than or equal (<=). Plus, comparisons can be combined with AND (&&), OR (||) and NOT (!). The in operator even makes it possible to check for many cases at once (e.g. bornCountryCode in ["NO", "SE", "DK"]). And the defined function allows us to see if a field exists (e.g. defined(diedCountry)).
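
As a loose illustration of how these can be combined (the fields are ones we’ve already seen in the laureate dataset), a filter might look something like this:

$ cat laureate.json | groq '*.laureates[gender == "female" && bornCountryCode in ["NO", "SE", "DK"] && defined(diedCountry)]{firstname, surname}' --pretty

That keeps only the female laureates born in Norway, Sweden, or Denmark who have a diedCountry field set.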

Even more exercises!

You know the drill: try playing with filters a bit to see how they work with the dataset. Answers are at the end, of course.

  1. Write a query that returns living laureates.
  2. Is there a difference between the filters [bornCountryCode == "NO"][born >= "1950-01-01"] and [bornCountryCode == "NO" && born >= "1950-01-01"]?
  3. Can you find all laureates that won a prize in 1973?
Working with projections

The Nobel Prize dataset separates the first name and the surname for each laureate, but what if we want to combine them together into one field? Projections in GROQ can do exactly that!

*.laureates[bornCountryCode == "NO" && born >= "1950-01-01"]{ "name": firstname + " " + surname, born, "prizeCount": count(prizes), }

Running this query tells us that May-Britt Moser and Edvard Moser received one prize (which was, in fact, the same prize):

[ { "name": "May-Britt Moser", "born": "1963-01-04", "prizeCount": 1 }, { "name": "Edvard I. Moser", "born": "1962-04-27", "prizeCount": 1 } ]

What happened here? Well, when we write a projection in GROQ what we’re really writing is a JSON object. Previously, we had simple projections (like {firstname}), but this is a shortcut way of writing {"firstname": firstname}. By using the expanded object syntax, we can both rename keys and transform the values.

GROQ has a rich set of operators and functions for transforming data, including string concatenation, arithmetic operators (+, -, *, /, %, **), counting arrays (count(prizes)), and rounding numbers (round(num, <number of decimals>)).

Exercises

Hopefully, you’re getting a good feel for things at this point, but here are some more ways to practice working with projections:

  1. Find all laureates who have won two or more prizes.
  2. Find how many prizes have been won by women.
  3. Format a fullname key that combines surname and firstname in the result.
Doing more at once

Watch this:

$ cat laureate.json | groq --pretty ' { "count": count(*.laureates), "norwegians": *.laureates[bornCountryCode == "NO"]{firstname}, } '

The result:

{ "count": 928, "norwegians": [ { "firstname": "Ivar" }, { "firstname": "Lars" }, … ] }

Catch that? A GROQ query doesn’t have to start with *. In this query, we’re creating a JSON object where the values are results from separate queries. This provides a lot of flexibility in what we can produce with GROQ. Maybe you want the total number of incomplete todos together with a list of the last five of them. Or maybe you want to split the todos into two separate lists: one for completed and one for incomplete. Or maybe you need to wrap everything inside an object because that is what another tool/library/framework expects. Whatever the case, GROQ has you covered.
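
For instance, a rough sketch against the todos.json file from earlier could bundle the count of incomplete todos together with their titles:

$ cat todos.json | groq --pretty '
{
  "incompleteCount": count(*[completed == false]),
  "incomplete": *[completed == false]{title}
}
'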

Let’s try one last exercise. Can you project an object where laureates contains an array of the laureates’ first names along with a rounded percentage of the total number of prizes each laureate has won? Then, try outputting the total number of prizes handed out as well.

Summary

There isn’t much you need to learn before getting some good use out of GROQ. If you have followed the exercises, you’re on a great path to becoming a GROQ guru. Naturally, this introduction doesn’t touch on all the different features and aspects of GROQ, so feel free to explore the specification and the project itself on GitHub. And feel free to reach out to the team at Sanity.io if you have questions about data wrangling with GROQ.

Exercise answers

Exercise 1

Question 1

If you remove [completed == true] you will get all todos, not only those that are completed. If you remove {title, userId} you will get all properties.

Question 2

*[userId == 2]

Question 3

*[userId == 2 && completed == false] or *[userId == 2 && !completed]

Question 4

If you change the order of the filter and the projection, the projection is applied first and then the filter. This means that you’re filtering a list of todos that only contain title and userId, so the filter completed == true never matches anything.

Question 5

curl https://jsonplaceholder.typicode.com/todos | groq '*[completed == true]{title, userId}' > result.json

Exercise 2

Question 1

*.laureates[bornCountryCode == "INSERT-YOUR-COUNTRY-HERE"]

Question 2

*.laureates[bornCountryCode == "NO"][-1]

Question 3

*[bornCountryCode == "NO"] will try to filter on an object. This doesn’t make any sense, so you’ll get null as the answer.

Question 4

*.laureates[0][bornCountryCode == "NO"] doesn’t work as you might think. This will first find the first laureate (which happens to be Wilhelm Conrad) and then it will try to “filter” the object. This makes no sense, so the answer is null.
Exercise 3

Question 1

*.laureates[died == "0000-00-00"]

Question 2

There’s no difference between the filters [bornCountryCode == "NO"][born >= "1950-01-01"] and [bornCountryCode == "NO" && born >= "1950-01-01"]. The first one does the filtering in two “passes” but the end result is the same.

Question 3

*.laureates["1973" in prizes[].year]

Exercise 4

Question 1

*.laureates[count(prizes) >= 2]

Question 2

count(*.laureates[gender == "female"])

Question 3

*.laureates{"fullname": surname + ", " + firstname}

Exercise 5

*.laureates{"laureates": {firstname, "percentage": round(count(prizes) / count(*.laureates[].prizes), 3) * 100}, "total": count(*.laureates[].prizes)}

The post Query JSON documents in the Terminal with GROQ appeared first on CSS-Tricks.

Optimizing Images for Users with Slow Network Speeds

Css Tricks - Thu, 11/07/2019 - 5:22am

For every website, page load time is a critical factor that can make or break the business. Thanks to the better user experience that comes with a fast-loading webpage, those who focus on page load optimization enjoy better conversion rates, better SEO, better retention, and lower bounce rates.

And this fact has been highlighted in several studies. For example, according to one such study, 47% of consumers expect a web page to load in 2 seconds or less. No wonder developers across the globe focus so much on improving webpage load time.

Logic dictates that keeping other factors the same, a lighter webpage should load faster than a heavier webpage, and that is the direction in which our webpages should head too. However, contrary to that logic, our websites have become heavier over the years. A look at data from HTTP Archive reveals that an average webpage in 2017 was almost three times heavier than what it used to be in 2011.

With more powerful user devices, faster networks, and the growing popularity of client-side frameworks and media-rich experiences, we have started pushing more and more data to the user’s device.

However, as developers, we miss a crucial point. We usually develop and test our websites in our offices over stable Wi-Fi or wired connections. But when a real user visits our website, the network speed and stability may not be that great. Especially with more and more users coming online via mobile devices, the problem of fluctuating network conditions is even more significant.

Don’t believe it? ImageKit.io conducted a study to determine the network speed reported by the Network Info API of Chrome browser for users of a website (with visitors mostly from India). It is not very surprising that almost 40% of the visitors tracked had reported speed lower than 4G, i.e., less than 700 Kbps as per the Network Info API Spec.

While this percentage of users experiencing poor network conditions might be lower if we get visitors from developed countries like the USA or those in Europe, we can still safely assume that varying network conditions impact a sizeable chunk of our users. And we have all experienced it as well. When we enter an elevator or move to the basement parking lot of a building or travel to a remote location, the mobile data download speeds drop significantly. 

Therefore, we need to keep in mind that our users, especially the ones on mobile, will invariably try to visit our website on a slow network, and our goal as a developer should be to provide them with at least a decent browsing experience.

Why optimize images for slow networks?

The ultimate goal of optimizing a website for slower networks is to be able to serve its lighter variant. This way, all the essential stuff gets downloaded and displayed quickly on the user’s device. 

Amongst all the resources that load on a webpage, images make up most of the payload. And even if we do take care of optimizing the images in general, optimizing them further for slower networks can reduce the overall page weight by almost 30%. 

Also, additional compression of images doesn’t break the critical functionality of any application. Yes, the image quality drops a bit to provide a better user experience. But unlike stripping away JavaScript, which would require a lot of thought, compressing images is relatively straightforward.

How to optimize images for a slow network?

Now that we have established that optimizing our webpage based on the user’s network speed is essential and that images are the lowest-hanging fruit to get started, let’s look at how we can achieve network-based image optimization.

There are two parts to the solution.

Determine the user’s network speed

We need to determine the network speed that the user is experiencing and divide users into buckets. For example, users experiencing speed above a certain threshold should be classified in a single group and served a particular quality level of an image. This classification is simple in modern web browsers with the Network Information API, which automatically classifies users into four buckets: 4G, 3G, 2G, and slow 2G, with 4G being the fastest and slow 2G being the slowest. 

// returns '4g', '3g', '2g' or 'slow-2g'
var effectiveType = navigator.connection.effectiveType;

Compress the images to an appropriate quality level

The second part of the solution is to be able to alter the compression level of an image in real-time, depending on the user’s network speed determined in step 1. It should be as simple as passing an additional parameter in the image URL when the browser triggers a load for it.
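
As a minimal hand-rolled sketch of that idea (the data-src attribute, the quality query parameter, and the quality values are all assumptions for illustration; a real image service defines its own URL format, as we’ll see below):

// Map each network bucket to an image quality level (example values only)
var qualityByNetwork = {
  "4g": 90,
  "3g": 70,
  "2g": 50,
  "slow-2g": 30
};

// Fall back to the highest quality when the API isn't supported
var effectiveType = (navigator.connection && navigator.connection.effectiveType) || "4g";
var quality = qualityByNetwork[effectiveType] || 90;

// Build the final image URLs with the chosen quality
document.querySelectorAll("img[data-src]").forEach(function (img) {
  img.src = img.dataset.src + "?quality=" + quality;
});

In practice, we’d rather let a service handle this wiring for us, which is where ImageKit comes in.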

While we rely on the browser to determine the user’s network speed, a tool like ImageKit.io makes the real-time compression bit simple. 

ImageKit.io is a real-time image optimization and transformation product that helps us deliver images in the right format, change compression levels, resize, crop, and transform images directly from the URL and deliver those images via a global image CDN. We can get the image at the desired compression level by passing an image quality parameter in the URL. Quality is directly proportional to image size, i.e., the higher the quality number, the larger the resulting image will be.

// ImageKit URL with quality 90
https://ik.imagekit.io/demo/default-image.jpg?tr=q-90

// ImageKit URL with quality 50
https://ik.imagekit.io/demo/default-image.jpg?tr=q-50

How else does ImageKit help with network-based image optimization?

While ImageKit has always supported real-time URL-based image quality modification, it has started supporting the network-based image optimization features recently. With these new features, it has become effortless to implement complete network-based optimization with minimum effort.

Of course, first, we need to sign up for ImageKit and start delivering the images on our website through it. Once this is done, in the ImageKit dashboard, we have to enable the setting for network-based image optimization. We get a code snippet right there that we can add to an existing service worker on our website or to a new one. 

// Adding the code snippet in a service worker importScripts("https://runtime.imagekit.io/<your_imagekit_id>/v1/js/network-based-adaption.js?v=" + new Date().getTime());

Within the dashboard itself, we also need to specify the desired quality level for different network speed buckets. For example, we have used a quality of 90 for images for users classified as 4G users and a quality of 50 for our slow 2G users. Remember that lower quality results in smaller image sizes.

This code snippet is like a plugin meant for use in service workers. It intercepts the image requests originating from the user’s browser, detects the user’s network speed, and adds the necessary parameters to the image URL. These parameters are understood by the ImageKit server, which compresses the image to the desired level and maintains efficient client-side caching. For network-based image optimization, all we need to do is include it on our website, and we’re done!
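
To give a feel for what this kind of interception looks like in general, here is a bare-bones sketch of a service worker fetch handler. This is not ImageKit’s actual plugin, just the underlying idea, and the q parameter name is made up:

// service-worker.js — an illustrative sketch only
self.addEventListener("fetch", event => {
  const url = new URL(event.request.url);
  const isImage = /\.(jpe?g|png|webp)$/i.test(url.pathname);

  if (isImage && self.navigator.connection) {
    // Tag the request with the current network bucket so the server can adapt
    url.searchParams.set("q", self.navigator.connection.effectiveType);
    event.respondWith(fetch(url.toString()));
  }
});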

Additionally, in the ImageKit dashboard, we can specify the image URLs (or patterns in URLs) that should not be optimized based on the network type. For example, we would want to present the same crisp logo of our brand to our users regardless of their network speed.

Verifying the setup

Once set up correctly, we can quickly check if the network-based optimization is working using Chrome Developer Tools. We can emulate a slow network on our browser using the developer tools. If set up correctly, the service worker should add some parameters to the original image request indicating the current network speed. These parameters are understood by ImageKit’s servers to compress the image as per the quality settings specified in our ImageKit dashboard corresponding to that network speed.

How does caching work with the service worker in place?

ImageKit’s service worker plugin bypasses the browser cache in favor of a network-aware image cache that it manages in the browser. Bypassing the browser cache means that the service worker can maintain different caches for different network types and choose the correct image from the cache, or request a new one, based on the user’s current network condition. 

The service worker plugin automatically uses the cache-first technique to load the images and also implements a waterfall method to pick the right one from the cache. With this waterfall method, images at higher quality get preference over images at a lower quality. What this means is that, if the user’s speed drops to 2G and they have a particular image cached from when they were experiencing good download speeds on a 4G network, the service worker will use that cached 4G image for delivery instead of downloading the image over the 2G network. But the reverse is not true. If the user is experiencing 4G network speeds, the service worker won’t pick up the 2G image from the cache, because it is possible to fetch a better quality image and the resources allow for it.

And there is more!

Apart from a simple, ready-to-use service worker plugin and dashboard settings, ImageKit has a few more things to offer that make it an attractive tool to get started with network-based optimization.

Analytics

ImageKit provides us with analytics on our users’ observed network types. It gives us insight into how often network-based optimization gets triggered and what the user distribution looks like across different network types. This distribution analysis can be helpful for optimizing other resources on our website as well.

Cost of optimization

With network-based image optimizations, the size of the images goes down, but at the same time, the number of transformation requests can potentially go up. Unlike a lot of other real-time image optimization products out there, ImageKit’s pricing is based only on the output image bandwidth and nothing else. So with the favorable pricing model, implementing network-based image optimization not only provides a lot more value to our users but also helps us cut down on image delivery costs.

Conclusion

Improving page load performance is essential. However, there is one bit that we have all been missing: optimizing for slow networks.

Images present an easy start when it comes to optimizing our entire website for different networks. With the support of network-based optimization features via service workers, it has become effortless to achieve it with ImageKit. 

It will be a great value add for our users and will help to improve the user experience even further, which will have a positive impact on the conversions on our website. 

Sign up for ImageKit and get started now!

The post Optimizing Images for Users with Slow Network Speeds appeared first on CSS-Tricks.

Some Things You Oughta Know When Working with Viewport Units

Css Tricks - Wed, 11/06/2019 - 11:42am

David Chanin has a quickie article summarizing a problem with setting an element's height to 100vh in mobile browsers and then also positioning something on the bottom of that.

Summarized in this graphic:

The trouble is that Chrome isn't taking the address bar (browser chrome) into account when it's revealed, which cuts the element short and forces the bottom of the element past the bottom of the actual viewport.

<div class="full-page-element">
  <button>Button</button>
</div>

.full-page-element {
  height: 100vh;
  position: relative;
}

.full-page-element button {
  position: absolute;
  bottom: 10px;
  left: 10px;
}

You'd expect that button in the graphic to be visible (assuming this element is at the top of the page and you haven't scrolled) since it's along the bottom edge of a 100vh element. But it's actually hidden behind the browser chrome in mobile browsers, including iOS Safari or Android Chrome.

I use this a lot:

body { height: 100vh; /* Nice to not have to think about the HTML element parent */ margin: 0; }

It's just a quick way to make sure the body is full height without involving any other elements. I'm usually doing that on pretty low-stakes demo type stuff, but I'm told even that is a little problematic because you might experience jumpiness as browser chrome appears and disappears, or things may not be quite as centered as you'd expect.

You'd think you could do body { height: 100% }, but there's a gotcha there as well. The body is a child of <html> which is only as tall as the content it contains, just like any other element.

If you need the body to be full height, you have to deal with the HTML element as well:

html, body { height: 100%; }

...which isn't that big of a deal and has reliable cross-browser consistency.

It's positioning things along the bottom edge that is tricky. It is problematic because of position: absolute; within the "full height" (often taller-than-visible) container.

If you are trying to place something like a fixed navigation bar at the bottom of the screen, you'd probably do that with position: fixed; bottom: 0; and that seems to work fine. The browser chrome pushes it up and down as you'd expect (video).

Horizontal viewport units are just as weird and problematic due to another bit of browser UI: scrollbars. If a browser window has a visible scrollbar, that scrollbar will usually eat into the visual space although a value of 100vw is calculated as if the scrollbar wasn't there. In other words, 100vw will cause horizontal scrolling in a way you probably wouldn't expect.
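
A common workaround, likely along the lines of what the Pen below demonstrates, is to measure the scrollbar with a little JavaScript and hand the value to CSS as a custom property (the --scrollbar-width name is just a convention):

// The difference between the window width and the document width is the scrollbar
const scrollbarWidth = window.innerWidth - document.documentElement.clientWidth;

// Now CSS can do something like: width: calc(100vw - var(--scrollbar-width));
document.documentElement.style.setProperty("--scrollbar-width", scrollbarWidth + "px");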

See the Pen
CSS Vars for viewport width minus scrollbar
by Shaw (@shshaw)
on CodePen.

Our last CSS wishlist roundup mentioned better viewport unit handling a number of times, so developers are clearly pretty interested in having better solutions for these things. I'm not sure what that would mean for web compatibility though, because changing the way they work might break all the workarounds we've used that are surely still out in the wild.

The post Some Things You Oughta Know When Working with Viewport Units appeared first on CSS-Tricks.

What is super() in JavaScript?

Css Tricks - Wed, 11/06/2019 - 5:12am

What's happening when you see some JavaScript that calls super()? In a child class, you use super() to call its parent’s constructor and super.<methodName> to access its parent’s methods. This article will assume at least a little familiarity with the concepts of constructors and child and parent classes. If that's totally new, you may want to start with Mozilla's article on Object-oriented JavaScript for beginners.

Super isn't unique to JavaScript — many programming languages, including Java and Python, have a super() keyword that provides a reference to a parent class. JavaScript, unlike Java and Python, is not built around a class inheritance model. Instead, it extends JavaScript’s prototypal inheritance model to provide behavior that’s consistent with class inheritance.

Let’s learn a bit more about it and look at some code samples.

First off, here’s a quote from Mozilla’s web docs for classes:

JavaScript classes, introduced in ECMAScript 2015, are primarily syntactical sugar over JavaScript's existing prototype-based inheritance. The class syntax does not introduce a new object-oriented inheritance model to JavaScript.

An example of a simple child and parent class will help illustrate what this quote really means:

See the Pen
ZEzggLK
by Bailey Jones (@bailey_jones)
on CodePen.

My example has two classes: fish and trout. All fish have information for habitat and length, so those properties belong to the fish class. Trout also has a property for variety, so it extends fish to build on top of the other two properties. Here are the constructors for fish and trout:

class fish {
  constructor(habitat, length) {
    this.habitat = habitat
    this.length = length
  }
}

class trout extends fish {
  constructor(habitat, length, variety) {
    super(habitat, length)
    this.variety = variety
  }
}

The fish class’s constructor defines habitat and length, and the trout’s constructor defines variety. I have to call super() in trout’s constructor, or I’ll get a reference error when I try to set this.variety. That's because on line one of the trout class, I told JavaScript that trout is a child of fish using the extends keyword. That means trout’s this context includes the properties and methods defined in the fish class, plus whatever properties and methods trout defines for itself. Calling super() essentially lets JavaScript know what a fish is so that it can create a this context for trout that includes everything from fish, plus everything we’re about to define for trout. The fish class doesn’t need super() because its “parent” is just the JavaScript Object. Fish is already at the top of the prototypal inheritance chain, so calling super() is not necessary — fish’s this context only needs to include Object, which JavaScript already knows about.

The prototypal inheritance model for fish and trout and the properties available on the this context for each of them. Starting from the top, the prototypal inheritance chain here goes Object → fish → trout.
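
To see that reference error for yourself, here's a quick sketch (reusing the fish class from above) of what happens when the call to super() is left out:

class trout extends fish {
  constructor(habitat, length, variety) {
    // No super() call here, so "this" is never initialized
    this.variety = variety // ReferenceError: must call super constructor before accessing 'this'
  }
}

new trout("river", 30, "rainbow") // throws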

I called super(habitat, length) in trout’s constructor (referencing the fish class), making all three properties immediately available on the this context for trout. There’s actually another way to get the same behavior out of trout’s constructor. I must call super() to avoid a reference error, but I don’t have to call it “correctly” with parameters that fish's constructor expects. That's because I don't have to use super() to assign values to the fields that fish creates — I just have to make sure that those fields exist on trout's this context. This is an important difference between JavaScript and a true class inheritance model, like Java, where the following code could be illegal depending on how I implemented the fish class:

class trout extends fish {
  constructor(habitat, length, variety) {
    super()
    this.habitat = habitat
    this.length = length
    this.variety = variety
  }
}

This alternate trout constructor makes it harder to tell which properties belong to fish and which belong to trout, but it gets the same result as the previous example. The only difference is that here, calling super() with no parameters creates the properties habitat and length on the current this context without assigning anything to them. If I called console.log(this) after line three, it would print {habitat: undefined, length: undefined}. Lines four and five assign values.

I can also use super() outside of trout’s constructor in order to reference methods on the parent class. Here I’ve defined a renderProperties method that will display all the class’s properties into the HTML element I pass to it. super() is useful here because I want my trout class to implement a similar method that does the same thing plus a little more — it assigns a class name to the element before updating its HTML. I can reuse the logic from the fish class by calling super.renderProperties() inside the relevant class function.

class fish {
  renderProperties(element) {
    element.innerHTML = JSON.stringify(this)
  }
}

class trout extends fish {
  renderPropertiesWithSuper(element) {
    element.className = "green"
    super.renderProperties(element)
  }
}

The name you choose is important. I’ve called my method in the trout class renderPropertiesWithSuper() because I still want to have the option of calling trout.renderProperties() as it’s defined on the fish class. If I’d just named the function in the trout class renderProperties, that would be perfectly valid; however, I’d no longer be able to access both functions directly from an instance of trout, since calling trout.renderProperties would call the function defined on trout. This isn't necessarily a useful implementation (arguably, a function that calls super like this should simply override its parent function's name), but it does illustrate how flexible JavaScript allows your classes to be.
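
As a quick usage sketch, assuming trout keeps the constructor from the earlier example alongside this new method and that #output is some element on the page, both functions remain callable on an instance:

const rainbow = new trout("river", 30, "rainbow")
const el = document.querySelector("#output") // any element on the page

rainbow.renderProperties(el)          // inherited as-is from fish
rainbow.renderPropertiesWithSuper(el) // sets the class name, then reuses fish's logic via super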

It is completely possible to implement this example without the use of the super() or extends keywords that were so helpful in the previous code sample, it's just much less convenient. That's what Mozilla meant by "syntactical sugar." In fact, if I plugged my previous code into a transpiler like Babel to make sure my classes worked with older versions of JavaScript, it would generate something closer to the following code. The code here is mostly the same, but you'll notice that without extends and super(), I have to define fish and trout as functions and access their prototypes directly. I also have to do some extra wiring on lines 15, 16, and 17 to establish trout as a child of fish and to make sure trout can be passed the correct this context in its constructor. If you're interested in a deep dive into what's going on here, Eric Green has an excellent post with lots of code samples on how to build classes with and without ES2015.

See the Pen
wvvdxdd
by Bailey Jones (@bailey_jones)
on CodePen.

Classes in JavaScript are a powerful way to share functionality. Class components in React, for example, rely on them. However, if you're used to Object Oriented programming in another language that uses a class inheritance model, JavaScript's behavior can occasionally be surprising. Learning the basics of prototypal inheritance can help clarify how to work with classes in JavaScript.

The post What is super() in JavaScript? appeared first on CSS-Tricks.

Netlify CMS Open Authoring

Css Tricks - Wed, 11/06/2019 - 5:10am

I like the term "Git-backed CMS." That term works for an emerging style of CMS that looks and behaves much like any other CMS, with a fascinating twist: it doesn't actually store any data for you. These CMSs are connected to a Git repo where the data lives in flat files (e.g. Markdown). You teach the CMS where those files are and how they are structured. Then, as you use the CMS to create, edit, and delete things, those changes happen as commits (or pull/merge requests) are made against that repo. So cool.

For example, CloudCannon can do it specifically for hosted Jekyll sites.

But more in the Indie Web / JAMstack spirit, there are players like Forestry and the one I have the most experience with: Netlify CMS.

Lemme do a series of screenshots with captions to make the point very clear.

The site in question is our Serverless site. It happens to be Gatsby, but the important part is that the content comes from Markdown files in a Git repo. Here's an example Markdown file (with Frontmatter) in the repo. I like Markdown fine, but I'd prefer to work with content in a GUI CMS, honestly. The reason I went this way is so the data is in a repo, meaning I can take content-based pull requests. I really do get content-based pull requests. That's the magic right there. That's exactly what I want. Netlify CMS is basically two files: an index.html that loads up a SPA interface that literally does everything, and a configuration file to teach it about your content. With Netlify CMS in place, I have my GUI CMS happy place. Any changes in here turn up as commits on the repo.

OK OK OK. What's this "Open Authoring" thing?

As I write, it's a beta feature.

Here's the main thing: I can use Netlify CMS for my site. My team can also use it, because I can invite them specifically to the repo. But you, random person on the internet, cannot. If you wrote to me and told me you wanted to be a volunteer content manager on the site, then maybe, OK, I'll invite you to the repo. (You being a member of the repo will allow you to auth into Netlify CMS, assuming you are using the GitHub back end, which is the only connection Open Authoring works with right now.)

But it's a bummer that random internet people can't submit pull requests on content via Netlify CMS. That would be way easier than the manual process of forking the repo and all that jazz — although to be fair, clicking the little pencil icon while looking at a Markdown file on GitHub makes the process pretty simple by opening a pull request automatically (but it doesn't help you add new content or upload images or anything).

This is where Open Authoring comes in. In my Netlify CMS config I can basically flip it on with one line of config. They explain it well:

you can use Netlify CMS to accept contributions from GitHub users without giving them access to your repository. When they make changes in the CMS, the CMS forks your repository for them behind the scenes, and all the changes are made to the fork. When the contributor is ready to submit their changes, they can set their draft as ready for review in the CMS. This triggers a pull request to your repository, which you can merge using the GitHub UI.

Emphasis mine.

Wanna see the real beauty of this? Now we can put "Edit this" buttons on all the content, and if you click it, you'll head straight into Netlify CMS to do the editing. It works if you are me, my team member, or you, random person from the internet.

That's what I've always wanted. It makes the site into a wiki! But there is enough public accountability (they have to use a real GitHub account) that I wouldn't worry about much spam or obnoxious behavior.

The post Netlify CMS Open Authoring appeared first on CSS-Tricks.

Show Search Button when Search Field is Non-Empty

Css Tricks - Tue, 11/05/2019 - 8:21am

I think the :placeholder-shown selector is tremendously cool. It allows you to select the placeholder of an input (<input placeholder="...">) when that placeholder is present. Meaning, the input does not yet have any value. You might think input[value] could do that, or help match on the actual value, but it can't.

This makes :placeholder-shown one of the few CSS selectors we have that can react to user-initiated state, joining the likes of :hover (and friends), :checked (checkbox hack!), and the also-awesome :focus-within.

One way we can use it is to check whether the user entered any text into a search field. If yes, then reveal the search button visually (never hide it for assistive tech). If no, then leave it hidden. It's just a fun little "space-saving" technique, not terribly unlike float labels.

So, perhaps we start with a semantic search form:

<form action="#" method="GET" class="search-form"> <!-- Placeholders aren't labels! So let's have a real label --> <label for="search" class="screen-reader-text">Search</label> <input id="search" type="search" class="search-input" placeholder="Enter search terms..."> <button class="search-button">Search</button> </form>

We hide the search label visually with one of those special screen-reader-only classes because it will always be hidden visually. But we'll hide the button by pushing it outside the form with hidden overflow.

.search-form { /* we'll re-use this number, so DRYing up */ --searchButtonWidth: 75px; overflow: hidden; position: relative; } .search-input { /* take full width of form */ width: 100%; } .search-button { position: absolute; /* push outside the form, hiding it */ left: 100%; width: var(--searchButtonWidth); }

Then the trick is to pull the search button back in when the placeholder goes away (user has entered a value):

/* ... */ .search-input:not(:placeholder-shown) ~ .search-button { /* pull back the negative value of the width */ transform: translateX(calc(-1 * var(--searchButtonWidth))); } .search-button { position: absolute; left: 100%; width: var(--searchButtonWidth); /* animate it */ transition: 0.2s; }

Which ends up like this:

See the Pen
:placeholder-shown revealing button
by Chris Coyier (@chriscoyier)
on CodePen.

I saw this idea in a Pen by Liam Johnston. Cool idea, Liam!

I know that using the placeholder attribute at all is questionable, so your mileage may vary. I'll admit that I'm mostly intrigued by the state-handling aspects of it directly in CSS and how it can be used — like the infamous checkbox hack.

Support is good. One of those where when Edge is firmly Chromium, it's in the clear.

This browser support data is from Caniuse, which has more detail. A number indicates that browser supports the feature at that version and up.

Desktop: Chrome 47, Opera 34, Firefox 51, IE 11*, Edge 76, Safari 9
Mobile / Tablet: iOS Safari 9.0-9.2, Opera Mobile No, Opera Mini No, Android 76, Android Chrome 78, Android Firefox 68

The post Show Search Button when Search Field is Non-Empty appeared first on CSS-Tricks.

Making a Chart? Try Using Mobx State Tree to Power the Data

Css Tricks - Tue, 11/05/2019 - 4:56am

Who loves charts? Everyone, right? There are lots of ways to create them, including a number of libraries. There’s D3.js, Chart.js, amCharts, Highcharts, and Chartist, to name only a few of many, many options.

But we don’t necessary need a chart library to create charts. Take Mobx-state-tree (MST), an intuitive alternative to Redux for managing state in React. We can build an interactive custom chart with simple SVG elements, using MST to manage and manipulate data for the chart. If you've attempted to build charts using something like D3.js in the past, I think you’ll find this approach more intuitive. Even if you're an experienced D3.js developer, I still think you'll be interested to see how powerful MST can be as a data architecture for visualizations.

Here’s an example of MST being used to power a chart:

This example uses D3's scale functions but the chart itself is rendered simply using SVG elements within JSX. I don’t know of any chart library that has an option for flashing hamster points so this is a great example of why it’s great to build your own charts — and it’s not as hard as you might think!

I’ve been building charts with D3 for over 10 years and, while I love how powerful it is, I’ve always found that my code can end up being unwieldy and hard to maintain, especially when working with complex visualizations. MST has changed all that completely by providing an elegant way to separate the data handling from the rendering. My hope for this article is that it will encourage you to give it a spin.

Getting familiar with MST model

First of all, let’s cover a quick overview of what a MST model looks like. This isn’t an in-depth tutorial on all things MST. I only want to show the basics because, really, that’s all you need about 90% of the time.

Below is a Sandbox with the code for a simple to-do list built in MST. Take a quick look and then I’ll explain what each section does.

First of all, the shape of the object is defined with typed definitions of the attribute of the model. In plain English, this means an instance of the to-do model must have a title, which must be a string and will default to having a “done” attribute of false.

.model("Todo", { title: types.string, done: false //this is equivalent to types.boolean that defaults to false })

Next, we have the view and action functions. View functions are ways to access calculated values based on data within the model without making any changes to the data held by the model. You can think of them as read-only functions.

.views(self => ({ outstandingTodoCount() { return self.todos.length - self.todos.filter(t => t.done).length; } }))

Action functions, on the other hand, allow us to safely update the data. This is always done in the background in a non-mutable way.

.actions(self => ({ addTodo(title) { self.todos.push({ id: Math.random(), title }); } }));

Finally, we create a new instance of the store:

const todoStore = TodoStore.create({ todos: [ { title: "foo", done: false } ] });

To show the store in action, I’ve added a couple of console logs to show the output of outStandingTodoCount() before and after triggering the toggle function of the first instance of a Todo.

console.log(todoStore.outstandingTodoCount()); // outputs: 1 todoStore.todos[0].toggle(); console.log(todoStore.outstandingTodoCount()); // outputs: 0

As you can see, MST gives us a data structure that allows us to easily access and manipulate data. More importantly, its structure is very intuitive and the code is easy to read at a glance — not a reducer in sight!

Let’s make a React chart component

OK, so now that we have a bit of background on what MST looks like, let’s use it to create a store that manages data for a chart. We’ll start with the chart JSX, though, because it’s much easier to build the store once you know what data is needed.

Let’s look at the JSX which renders the chart.

The first thing to note is that we are using styled-components to organize our CSS. If that’s new to you, Cliff Hall has a great post that shows it in use with a React app.

First of all, we are rendering the dropdown that will change the chart axes. This is a fairly simple HTML dropdown wrapped in a styled component. The thing to note is that this is a controlled input, with the state set using the selectedAxes value from our model (we’ll look at this later).

<select onChange={e => model.setSelectedAxes(parseInt(e.target.value, 10)) } defaultValue={model.selectedAxes} >

Next, we have the chart itself. I’ve split up the axes and points in to their own components, which live in a separate file. This really helps keep the code maintainable by keeping each file nice and small. Additionally, it means we can reuse the axes if we want to, say, have a line chart instead of points. This really pays off when working on large projects with multiple types of chart. It also makes it easy to test the components in isolation, both programmatically and manually within a living style guide.

{model.ready ? ( <div> <Axes yTicks={model.getYAxis()} xTicks={model.getXAxis()} xLabel={xAxisLabels[model.selectedAxes]} yLabel={yAxisLabels[model.selectedAxes]} ></Axes> <Points points={model.getPoints()}></Points> </div> ) : ( <Loading></Loading> )}

Try commenting out the axes and points components in the Sandbox above to see how they work independently of each other.

Lastly, we’ll wrap the component with an observer function. This means that any changes in the model will trigger a re-render.

export default observer(HeartrateChart);

Let’s take a look at the Axes component:

As you can see, we have an XAxis and a YAxis. Each has a label and a set of tick marks. We go into how the marks are created later, but here you should note that each axis is made up of a set of ticks, generated by mapping over an array of objects with a label and either an x or y value, depending on which axis we are rendering.
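
For reference, the mapping inside the YAxis looks roughly like this. This is a simplified sketch of what the Sandbox component does, and the exact coordinates and stroke color are assumptions:

// Inside the Axes component: one tick per { label, y } object
{yTicks.map(({ label, y }) => (
  <g key={label}>
    <line x1={30} x2="95%" y1={y} y2={y} stroke="#ccc" />
    <text x={0} y={y + 4}>{label}</text>
  </g>
))}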

Try changing some of the attribute values for the elements and see what happens… or breaks! For example, change the line element in the YAxis to the following:

<line x1={30} x2="95%" y1={0} y2={y} />

The best way to learn how to build visuals with SVG is simply to experiment and break things. 🙂

OK, that’s half of the chart. Now we’ll look at the Points component.

Each point on the chart is composed of two things: an SVG image and a circle element. The image is the animal icon and the circle provides the pulse animation that is visible when mousing over the icon.

Try commenting out the image element and then the circle element to see what happens.

This time the model has to provide an array of point objects which gives us four properties: x and y values used to position the point on the graph, a label for the point (the name of the animal) and pulse, which is the duration of the pulse animation for each animal icon. Hopefully this all seems intuitive and logical.
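
Again, as a rough sketch rather than the exact Sandbox code (the icon URL, sizes, and the pulse class are assumptions), the Points component boils down to mapping over those objects:

// Inside the Points component: an icon plus a pulsing circle per point
{points.map(({ x, y, label, pulse }) => (
  <g key={label} transform={`translate(${x}, ${y})`}>
    <circle r={10} className="pulse" style={{ animationDuration: `${pulse}ms` }} />
    <image href={`/icons/${label}.svg`} width={20} height={20} x={-10} y={-10} />
  </g>
))}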

Again, try fiddling with attribute values to see what changes and breaks. You can try setting the y attribute of the image to 0. Trust me, this is a much less intimidating way to learn than reading the W3C specification for an SVG image element!

Hopefully this gives you an understanding and feel for how we are rendering the chart in React. Now, it’s just a case of creating a model with the appropriate actions to generate the points and ticks data we need to loop over in JSX.

Creating our store

Here is the complete code for the store:

I’ll break down the code into the three parts mentioned earlier:

  1. Defining the attributes of the model
  2. Defining the actions
  3. Defining the views
Defining the attributes of the model

Everything we define here is accessible externally as a property of the instance of the model and — if using an observable wrapped component — any changes to these properties will trigger a re-render.

.model('ChartModel', { animals: types.array(AnimalModel), paddingAndMargins: types.frozen({ paddingX: 30, paddingRight: 0, marginX: 30, marginY: 30, marginTop: 30, chartHeight: 500 }), ready: false, // means a types.boolean that defaults to false selectedAxes: 0 // means a types.number that defaults to 0 })

Each animal has four data points: name (Creature), longevity (Longevity__Years_), weight (Mass__grams_), and resting heart rate (Resting_Heart_Rate__BPM_).

const AnimalModel = types.model('AnimalModel', {
  Creature: types.string,
  Longevity__Years_: types.number,
  Mass__grams_: types.number,
  Resting_Heart_Rate__BPM_: types.number
});

Defining the actions

We only have two actions. The first (setSelectedAxes ) is called when changing the dropdown menu, which updates the selectedAxes attribute which, in turn, dictates what data gets used to render the axes.

setSelectedAxes(val) { self.selectedAxes = val; },

The setUpScales action requires a bit more explanation. This function is called just after the chart component mounts, within a useEffect hook function, or after the window is resized. It accepts an object with the width of the DOM that contains the element. This allows us to set up the scale functions for each axis to fill the full available width. I will explain the scale functions shortly.
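
On the component side, the wiring might look something like this. It's only a sketch; the containerRef name and the resize handling details are assumptions rather than the exact Sandbox code:

// useRef and useEffect come from React
const containerRef = useRef(null);

useEffect(() => {
  const measure = () =>
    model.setUpScales({ width: containerRef.current.offsetWidth });

  measure(); // run once after mount
  window.addEventListener("resize", measure); // and again whenever the window resizes
  return () => window.removeEventListener("resize", measure);
}, [model]);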

In order to set up scale functions, we need to calculate the maximum value for each data type, so the first thing we do is loop over the animals to calculate these maximum and minimum values. We can use zero as the minimum value for any scale we want to start at zero.

// ... self.animals.forEach( ({ Creature, Longevity__Years_, Mass__grams_, Resting_Heart_Rate__BPM_, ...rest }) => { maxHeartrate = Math.max( maxHeartrate, parseInt(Resting_Heart_Rate__BPM_, 10) ); maxLongevity = Math.max( maxLongevity, parseInt(Longevity__Years_, 10) ); maxWeight = Math.max(maxWeight, parseInt(Mass__grams_, 10)); minWeight = minWeight === 0 ? parseInt(Mass__grams_, 10) : Math.min(minWeight, parseInt(Mass__grams_, 10)); } ); // ...

Now to set up the scale functions! Here, we’ll be using the scaleLinear and scaleLog functions from D3.js. When setting these up, we specify the domain, which is the minimum and maximum input the functions can expect, and the range, which is the maximum and minimum output.

For example, when I call self.heartScaleY with the maxHeartrate value, the output will be equal to marginTop. That makes sense because this will be at the very top of the chart. For the longevity attribute, we need to have two scale functions since this data will appear on either the x- or the y-axis, depending on which dropdown option is chosen.

self.heartScaleY = scaleLinear() .domain([maxHeartrate, minHeartrate]) .range([marginTop, chartHeight - marginY - marginTop]); self.longevityScaleX = scaleLinear() .domain([minLongevity, maxLongevity]) .range([paddingX + marginY, width - marginX - paddingX - paddingRight]); self.longevityScaleY = scaleLinear() .domain([maxLongevity, minLongevity]) .range([marginTop, chartHeight - marginY - marginTop]); self.weightScaleX = scaleLog() .base(2) .domain([minWeight, maxWeight]) .range([paddingX + marginY, width - marginX - paddingX - paddingRight]);

Finally, we set self.ready to be true since the chart is ready to render.

Defining the views

We have two sets of functions for the views. The first set outputs the data needed to render the axis ticks (I said we’d get there!) and the second set outputs the data needed to render the points. We’ll take a look at the tick functions first.

There are only two tick functions that are called from the React app: getXAxis and getYAxis. These simply return the output of other view functions depending on the value of self.selectedAxes.

getXAxis() { switch (self.selectedAxes) { case 0: return self.longevityXAxis; break; case 1: case 2: return self.weightXAxis; break; } }, getYAxis() { switch (self.selectedAxes) { case 0: case 1: return self.heartYAxis; break; case 2: return self.longevityYAxis; break; } },

If we take a look at the Axis functions themselves we can see they use a ticks method of the scale function. This returns an array of numbers suitable for an axis. We then map over the values to return the data we need for our axis component.

heartYAxis() { return self.heartScaleY.ticks(10).map(val => ({ label: val, y: self.heartScaleY(val) })); } // ...

Try changing the value of the parameter for the ticks function to 5 and see how it affects the chart: self.heartScaleY.ticks(5).

Now we have the view functions to return the data needed for the Points component.

If we take a look at longevityHeartratePoints (which returns the point data for the “Longevity vs. Heart” rate chart), we can see that we are looping over the array of animals and using the appropriate scale functions to get the x and y positions for the point. For the pulse attribute, we use some maths to convert the beats per minute value of the heart rate into a value representing the duration of a single heartbeat in milliseconds.

longevityHeartratePoints() { return self.animals.map( ({ Creature, Longevity__Years_, Resting_Heart_Rate__BPM_ }) => ({ y: self.heartScaleY(Resting_Heart_Rate__BPM_), x: self.longevityScaleX(Longevity__Years_), pulse: Math.round(1000 / (Resting_Heart_Rate__BPM_ / 60)), label: Creature }) ); },

At the end of the store.js file, we need to create a Store model and then instantiate it with the raw data for the animal objects. It is a common pattern to attach all models to a parent Store model which can then be accessed through a provider at top level if needed.

const Store = types.model('Store', { chartModel: ChartModel }); const store = Store.create({ chartModel: { animals: data } }); export default store;

And that is it! Here’s our demo once again:

This is by no means the only way to organize data to build charts in JSX, but I have found it to be incredibly effective. I’ve used this structure and stack in the wild to build a library of custom charts for a big corporate client and was blown away by how nicely MST worked for this purpose. I hope you have the same experience!

The post Making a Chart? Try Using Mobx State Tree to Power the Data appeared first on CSS-Tricks.

Float Element in the Middle of a Paragraph

Css Tricks - Mon, 11/04/2019 - 10:15am

Say you want to have an image (or any other element) visually float left into a paragraph of text. But like... in the middle of the paragraph, not right at the top. It's doable, but it's certainly in the realm of CSS trickery!

One thing you can do is slap the image right in the middle of the paragraph:

<p> Lorem ipsum dolor sit amet consectetur, adipisicing <img src="tree.jpg" alt="An oak tree." /> elit. Similique quibusdam aliquam provident suscipit corporis minima? Voluptatem temporibus nulla </p>

But that's mega awkward. Note the alt text. We can't have random alt text in the middle of a sentence. It's semantically blech and literally confusing for people using assistive technology.

So what we have to do is put the image before the paragraph.

<img src="tree.jpg" alt="An oak tree." /> <p> Lorem ipsum dolor sit amet consectetur, adipisicing elit. Similique quibusdam aliquam provident suscipit corporis minima? Voluptatem temporibus nulla </p>

But when we do that, we aren't exactly floating the image in the middle of the paragraph anymore. It's right at the top. No margin-top or vertical translate or anything is going to save us here. margin will just extend the height of the floated area and translate will push the image into the text.

The trick, at least one I've found, is to leverage shape-outside and a polygon() to re-shape the floated area around where you want it. You can skip the top-left part. Using Clippy is a great way to get a start to the polygon:

But instead of the clip-path Clippy gives you by default, you apply that value to shape-outside.

That should be enough if you are just placing a box in that place. But if it's literally an image or needs a background color, you might also need to apply clip-path and perhaps transform things into place. This is where I ended up with some fiddling.
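
As a loose sketch of that combination (the percentages and sizes are arbitrary and would need fiddling to match your own image and text):

.cutout {
  float: left;
  width: 200px;
  height: 300px;
  object-fit: cover; /* keep the image from distorting at this forced size */
  /* text is allowed to flow over the top 40% of the floated box... */
  shape-outside: polygon(0 40%, 100% 40%, 100% 100%, 0 100%);
  /* ...and the same region is clipped so the element doesn't paint over that text */
  clip-path: polygon(0 40%, 100% 40%, 100% 100%, 0 100%);
}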

See the Pen
Float cutout in middle of paragraph.
by Chris Coyier (@chriscoyier)
on CodePen.

The post Float Element in the Middle of a Paragraph appeared first on CSS-Tricks.

The Trick to Animating the Dot on the Letter “i”

Css Tricks - Mon, 11/04/2019 - 5:02am

Here’s the trick: by combining the Turkish letter "ı" and the period "." we can create something that looks like the letter "i," but is made from two separate elements. This opens us up to some fun options to style or animate the dot of the letter independently from the stalk. Worried about accessibility? Don’t worry, we’ll cover that the best way we know how.

Let’s look at how to create and style these separate "letters," and find out when they can be used, and when to avoid them.

Check out some examples

Here are some different styles and animations we can do with this idea:

See the Pen
Styles and animations
by Ali C (@alichur)
on CodePen.

Because both parts of the letter are regular Unicode characters, they will respect font changes and page zoom the same as any other text. Here’s some examples of different fonts, styles, and zoom levels:

See the Pen
Different fonts and zoom
by Ali C (@alichur)
on CodePen.

Step-by-step through the technique

Let’s break down how this technique works.

Choose the Unicode characters to combine

We are using the dotless "i" character (ı) and a full stop. And, yes, we could use other characters as well, such as the dotless "j" character (ȷ) or even the accents on characters such as "ñ" (~) or "è" (`).

Stack the characters on top of each other by wrapping them in a span and setting the display property to block.

<span class="character">.</span>
<span class="character">ı</span>

.character {
  display: block;
}

Align the characters

They need to be close to each other. We can do that by adjusting the line heights and removing the margins from them.

.character {
  display: block;
  line-height: 0.5;
  margin-top: 0;
  margin-bottom: 0;
}

Add a CSS animation to the dot element

Something like this bouncing animation:

@keyframes bounce { from { transform: translate3d(0, 0, 0); } to { transform: translate3d(0, -10px, 0); } } .bounce { animation: bounce 0.4s infinite alternate; }

There’s more on CSS animations in the CSS-Tricks Almanac.

Checking in, here’s where we are so far:

See the Pen
Creating the letter
by Ali C (@alichur)
on CodePen.

Add any remaining letters of the word

It’s fine to animate the "i" on its own, but perhaps it’s merely one letter in a word, like "Ping." We’ll wrap the animated characters in a span to make sure everything stays on a single line.

<p>
  P
  <span>
    <span class="character">.</span>
    <span class="character">ı</span>
  </span>
  ng
</p>

There’s an automatic gap between inline-block elements, so be sure to remove that if the spacing looks off.

The final stages:

See the Pen
Adding the letter inside a word
by Ali C (@alichur)
on CodePen.

What about SVG?

The same effect can be achieved by creating a letter from two or more SVG elements. Here's an example where the circle element is animated independently from the rectangle element.

See the Pen
SVG animated i
by Ali C (@alichur)
on CodePen.

Although an SVG letter won’t respond to font changes, it opens up more possibilities for animating sections of letters that aren’t represented by Unicode characters and letter styles that don't exist in any font.

Where would you use this?

Where would you want to use something like this? I mean, it’s not a great use case for body content or any sort of long-form content. Not only would that affect legibility (can you imagine if every "i" in this post was animated?) but it would have a negative impact on assistive technology, like screen readers, which we will touch on next.

Instead, it’s probably best to use this technique where the content is intended for decoration. A logo is a good example of that. Or perhaps in an icon that’s intended to be described, but not interpreted as text by assistive technology.

Let’s talk accessibility

Going back to our "Ping" example, a screen reader will read that as P . ı ng. Not exactly the pronunciation we’re looking for and definitely confusing to anyone listening to it.

Depending on the usage, different ARIA attributes can be added so that text is read differently. For example, we can describe the entire element as an image and add the text as its label:

<div role="img" aria-label="Ping">
  <p>P<span>.</span><span>ı</span>ng</p>
</div>

This way, the outer div element describes the meaning of the text which gets read by screen readers. However, we also want assistive technology to skip the inner elements. We can add aria-hidden="true" or role="presentation" to them so that they are also not interpreted as text:

<div role="img" aria-label="Ping">
  <p role="presentation">P <span>.</span> <span>ı</span> ng</p>
</div>

This was only tested on a Mac with VoiceOver in Safari. If there are issues in other assistive technology, please let us know in the comments.

More Unicode!

There’s many more "letters" we can create by combining Unicode characters. Here's a full outline of common glyphs, or you can have some fun with the ones below and share your creations in the comments. But remember: no cool effect is worth compromising accessibility on a live site!

First Glyph | Second Glyph | Combined
ı           | .            | i
ȷ           | .            | j
n           | ~            | ñ
a           | e            | æ
a           | `            | à

The post The Trick to Animating the Dot on the Letter “i” appeared first on CSS-Tricks.

Become a Front-End Master in 2020 With These 10 Project Ideas

Css Tricks - Fri, 11/01/2019 - 9:27am

This is a little updated cross-post from a quickie article I wrote on DEV. I'm publishing here 'cuz I'm all IndieWeb like that.

I love this post by Simon Holdorf. He's got some ideas for how to level up your skills as a front-end developer next year. Here they are:

  • Build a movie search app using React
  • Build a chat app with Vue
  • Build a weather app with Angular
  • Build a to-do app with Svelte

... and 5 more like that.

All good ideas. All extremely focused on JavaScript frameworks.

I like thinking of being a front-end developer as being someone who is a browser person. You deal with people who use some kind of client to use the web on some kind of device. That's the job.

I love JavaScript frameworks, but knowing them isn't what makes you a good front-end developer. Being performance-focused and accessibility-focused, and thus user-focused is what makes you a front-end master, beyond executing the skills required to get the website built.

In that vein, here's some more ideas.

  • Go find a Dribbble shot that appeals to you. Re-build it in HTML and CSS in the cleanest and most accessible way you can.
  • Find a component you can abstract in your codebase, and abstract it so you can re-use it efficiently. Consider accessibility as you do it. Could you make it more accessible in a way that benefits the entire site? How about your SVG icon component — how's that looking these days?
  • Try out a static site generator (perhaps one that isn't particularly JavaScript focused, just to experience it). What could the data source be? What could you make if you ran the build process on a timed schedule?
  • Install the Axe accessibility plugin for DevTools and run it on an important site you control. Make changes to improve the accessibility as it suggests.
  • Spin up a copy of Fractal. Check out how it can help you think about building front-ends as components, even at the HTML and CSS level.
  • Build a beautiful form in HTML/CSS that does something useful for you, like receive leads for freelance work. Learn all about form validation and see how much you can do in just HTML, then HTML plus some CSS, then with some vanilla JavaScript. Make the form work by using a small dedicated service.
  • Read a bit about Serverless and how it can extend your front-end developer skillset.
  • Figure out how to implement an SVG icon system. So many sites these days need an icon set. Inlining SVG is a great simple solution, but how you can abstract that to easily implement it with your workflow? How can it work with the framework you use?
  • Try to implement a service worker. Read a book about them. Do something very small. Check out a framework centered around them.
  • Let's say you needed to put up a website where the entire thing was the name and address of the company, and a list of hours it is open. What's the absolute minimum amount of work and technical debt you could incur to do it?

The post Become a Front-End Master in 2020 With These 10 Project Ideas appeared first on CSS-Tricks.
