Developer News

Body Toggle

CSS-Tricks - Tue, 07/06/2021 - 10:55am

I appreciate the clarity of this trick that Mikael Ainalem posted over on Reddit:

It’s a one-liner that toggles the class on the <body> so you can mock up different states and toggle between them on click.

<body onclick="this.classList.toggle('active');">

Could be on any element as well!


This can be a big thing. See “The Power of Changing Classes” as a case in point. Even if you aren’t much of a JavaScript person, classList is perhaps the one API you should know.
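Even if you don’t write much JavaScript, the semantics of `classList.toggle` are easy to hold in your head. As a rough sketch, here’s that behavior modeled as a plain function (the helper name is made up for illustration; in the browser you’d just call `element.classList.toggle('active')`):

```javascript
// A tiny model of how classList.toggle behaves on a class attribute string.
// Hypothetical helper for illustration only.
function toggleClass(classString, name) {
  const classes = classString.split(/\s+/).filter(Boolean);
  const index = classes.indexOf(name);
  if (index === -1) {
    classes.push(name);       // class absent: add it
  } else {
    classes.splice(index, 1); // class present: remove it
  }
  return classes.join(' ');
}

// Two clicks round-trip back to the original state:
// toggleClass('page', 'active')        → 'page active'
// toggleClass('page active', 'active') → 'page'
```

That round-tripping is exactly why a single inline `onclick` is enough to flip between two mocked-up states.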

The post Body Toggle appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

I’ve got one question about Jetpack for you.

CSS-Tricks - Tue, 07/06/2021 - 10:00am

And maybe an optional follow-up if you’re up for it.

Automattic, the makers of Jetpack and many other WordPress-y things, have sponsored my site (me = Chris Coyier; site = CSS-Tricks) for quite a while. I use Jetpack myself, and I’m always trying to tell people about its features and benefits.

Yet I get the sense that there is a decent amount of hesitancy (or even general negative feelings) toward Jetpack. I want to home in on that and understand it better. This will be useful for me in my attempt to be a good sponsoree, and useful for Automattic to improve Jetpack.

Fill out my online form.


Trigonometry in CSS and JavaScript: Beyond Triangles

CSS-Tricks - Mon, 07/05/2021 - 4:15am

Web design is such a rectangle-based design medium that literally any deviation from it feels fresh. Michelle Barker gets into using math in various ways to programmatically draw lines, shapes, and animations that end up looking both beautiful and have that “I could use this” feel.
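The flavor of math involved is mostly classic trigonometry. For example, a sketch of how `sin` and `cos` can place points evenly around a circle (the function name here is just for illustration, not from the article):

```javascript
// Evenly space n points around a circle of radius r centered at (cx, cy).
// Each point is a fraction of a full turn (2π radians) around the circle.
function pointsOnCircle(n, r, cx = 0, cy = 0) {
  const points = [];
  for (let i = 0; i < n; i++) {
    const angle = (i / n) * 2 * Math.PI; // angle in radians
    points.push({
      x: cx + r * Math.cos(angle),
      y: cy + r * Math.sin(angle),
    });
  }
  return points;
}
```

Feed those coordinates to SVG, canvas, or even CSS custom properties and you’re drawing circles, spirals, and orbiting animations instead of rectangles.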


Direct Link to Article


The Fourteenth Fourth

CSS-Tricks - Sun, 07/04/2021 - 3:29am

It’s CSS-Tricks’ birthday! Somehow that keeps coming around every year. It’s that time where I reflect upon the past year. It’s like the annual vibe check.

I’m writing this just days after my current home state of Oregon has lifted most of the COVID restrictions. Certainly a very weird feeling. We’re just hitting the state-wide 70% vaccinated level which is the big milestone covered in the news. I thought our little local organic-heavy progressive grocery store would be the last place to go mask-less, but even in there, the vast majority of people are raw-facin’ it, employees included. So it’s not just America’s birthday this year, but a real sign of changing times. Controversy in tow, as there is plenty of evidence the danger is far from over. Definitely gonna hit up some fireworks though. The kid loves ’em.

Well-Oiled Machine

I’d say that’s ^ the main vibe around here from my perspective. The site is in good shape all around. The tech behind it is stable and mostly satisfying. The editorial flow is flowing. The content idea bucket overfloweth. The newsletter goes out on time. The advertising and sponsorship demand is sound. Ain’t any squeaky wheels on this train.

And did you know we have zero meetings? Just light Slack chatter, that’s it. This is a part-time gig for everyone, and we aren’t doing any life-saving work here, so no need to take up anyone’s time with meetings.

Technologically, we’re leaning more and more into the WordPress block editor all the time and it feels like that is a good thing to do here in WordPress land. Every time we have a chance to get more into any current WordPress tech and take advantage of things WordPress does well, it tends to work out.

This is all great because as far as hours-in-the-day go, most of my time is on and needs to be on CodePen. An incredible amount of work lies ahead there as we evolve it.

Things to Get Done

That’s not to say there isn’t work to be done. I’ve got some WordPress scrubbing to do, for one thing. There are a few too many places functionality code is being stored on the site. I’ve completed an audit, but now I need to do the coding work to get it clean again. Things change over the years, WordPress evolves, needs evolve, performance and accessibility considerations evolve, my own taste evolves. Code from 8 years ago needs to evolve too.

One thing I’d really love to get done is to move all the content on the site that really should have been a Custom Post Type to actually be Custom Post Types. Namely screencasts and almanac entries. Right now they are Pages instead, which was fine at the time, as Pages lend themselves nicely to a hierarchical structure. But the only reason they aren’t Custom Post Types is that those didn’t exist when I started them. In today’s WordPress, they really should be, and I think it would open doors to managing them better. I’m not sure I have the chops to pull off a conversion like that, so I might have to hire out for it.

I’d also like to evolve our eCommerce a bit. I think it’s been going great as we dipped our toes into selling things like posters and MVP membership, and now it’s time to make all that stuff better and more valuable since it’s a proven win. For example, I’m working on making sure the book is downloadable in proper eBook formats, that’ll be a value-add for members. I’ve started thinking about what more we can do with the newsletter as well since those are so hot these days, and I’m a fan.

Social Media Cards

While social media isn’t a major focus of ours, we do tend to make sure Twitter is in good shape, as we have that sweet handle @css. I’m pretty hot on the idea that sites (content sites especially) should have nice social media images. Fortunately, thanks to Social Image Generator and some custom code, ours are in good shape. I still smile looking at them as they are so damn distinct now. WP Tavern did a nice writeup on the plugin.

There are now five different possibilities for social cards we can use:

  • This is the default. It defaults to the post title, but we can override that.
  • If the post has a featured image, it will be incorporated into the social media image like this.
  • If we add a quote to a meta field, we’ll get this special quote card design.
  • We can turn off the generated social media card and have it just use the featured image as the card.
  • If we turn off the generated social card and don’t have a featured image, it falls back to this generic card.
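That fallback order could be sketched as a little decision function. To be clear, the property names here (`generatorEnabled`, `quote`, `featuredImage`) are hypothetical, not the plugin’s real API; this is just the logic of the list above:

```javascript
// A sketch of the card-selection fallback order described above.
// Property names are made up for illustration.
function socialCardType(post) {
  if (!post.generatorEnabled) {
    // Generated card turned off: use the featured image, or the generic card.
    return post.featuredImage ? 'featured-image-only' : 'generic';
  }
  if (post.quote) return 'quote';                          // quote meta field set
  if (post.featuredImage) return 'title-plus-featured-image'; // image incorporated
  return 'default-title';                                  // the default card
}
```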

Sponsors

I’m incredibly blessed that we have the same four major sponsors as we’ve had for the last few years:

  • Automattic: WordPress is at the heart of this whole site. I’m so pleased to get to have Automattic as a sponsor, who not only create all kinds of important software for WordPress that we use here, like Jetpack and WooCommerce, but are big contributors to WordPress itself. I like that the site can be a living testament to what you can do with WordPress.
  • Frontend Masters: There is no A to Z learning path here on CSS-Tricks. If you want true curriculum to level up your skills, that’s what Frontend Masters is for. I couldn’t recommend any learning platform more, which is why I’m so happy to have them as our official learning partner and enthusiastically point people there.
  • Netlify: The Jamstack is a good movement for the web and literally nobody does it better than Netlify. They have pioneered so many good ideas it’s incredible. It’s easy to look at the industry and see even huge companies scramble to do what they’ve been doing for years.
  • Flywheel: I’m a believer in happy path web hosting. Use hosts that specialize in what you’re doing. This site is WordPress, and I don’t think there is a better hosting option for WordPress than Flywheel. And that’s without considering that they also make Local, and there is no better local development story for WordPress.

We’re about a year and a half into v18, and it has certainly evolved quite a bit since its launch. While it’s feeling solid now, I’ve started to get the redesign itch and have been saving design inspiration for v19. I imagine it’ll happen over the slower holiday season as it tends to. I have a feeling it’ll be a stripping-down sort of design, heading back to fewer colors and a more typography-driven approach that can support themes in a way I never have. But we’ll see!


It’s largely the same story as the last 3-4 years. Always hovering just a smidge north of 8m page views a month. A perfectly healthy number for such a niche site. But also a constant reminder of how difficult the content game is. You’d think a constant stream of content creation would grow traffic up and up over time, particularly since our technical content usually has a decent shelf-life. But at some point, you have to keep creating content and keep working on a site just to maintain what you have. Meaning older content slowly drives less traffic and new content needs to step up and fill the gap. At least that’s one interpretation of what’s going on—I’m sure the complete story is much more intricate (SEO, competition, saturation, content blockers affecting numbers, etc.).

The name?

I ain’t gonna up and change anything, but the name “CSS-Tricks” has been so hokey for so long. Every time I see some other brand pull off a daring name change, I’m a little jealous. Would it be worth it for CSS-Tricks? The potential benefits being: a new name could usher in fresh interest in the site, be a catalyst for other change, and be less of a jarring mismatch between what we actually publish and what people might expect us to publish based on the name. I’d have to do a lot more thinking and research to be able to pull it off. If the domain changes, even with perfect redirects, are there still serious SEO implications? How could I minimize the confusion? Is there a chance in hell a change has more upsides than downsides?


Good Meetings

CSS-Tricks - Fri, 07/02/2021 - 4:41am

Like it or not, meetings are essential to a good working environment and communication. Therefore, it’s crucial that we work on making them as productive as possible. Today we’ll explore myriad ways to keep meetings coordinated, well documented, and talk about how to recognize and steer away from anti-patterns.

I’m timid to write this because I have not always hosted good meetings. I have, however, hosted thousands of them, so I’ve learned both from some mistakes and successes. In all likelihood, if you do any kind of management or lead work for a while, you’ll also see your own spectrum of meetings: meetings with different types of agendas and purposes, meetings with varying levels of awkwardness, meetings that didn’t have a formal outcome. We’ll dive into all of these in this article, as well as some tips for each.

The truth is, a meeting by its nature can almost never be perfect because it is by definition a group of people. That group of people will consist of different people: with different tastes, different opinions, different priorities, and different values. There’s a high chance that not everyone will agree on what a great meeting is. So half of the journey is aligning on that.

The Good, The Bad

One thing’s for sure: we can agree on what a bad meeting is. So let’s start by using that as a ballast:

  • There’s no clear purpose or direction
  • It feels chaotic
  • The wrong people are there
  • People are generally disrespectful of one another
  • Everyone feels it’s a waste of time

From those assertions, we can then derive what a good meeting is:

  • The purpose of the meeting is clear
  • There’s an agenda (we’ll dive into the complexity of this in a moment)
  • The right people are in the room: not so many that communication is overly complicated, not so few that the people you need to move forward aren’t there.
  • There’s some order. People aren’t dropping in and out, talking over each other, or being generally inconsiderate
  • There’s a clear decision, outcome, and next steps at the end

Purpose of the Meeting and Direction

The first point and the last are connected: to have a good meeting, there has to be a core. You’ve all come for a unique purpose, and the end of the meeting should encapsulate what you’ve learned about that purpose and what the next steps are. Thus, the beginning and end of the meeting might sound a little similar:

We’re all here today to discuss how we’re going to support the next version of framework X. I have some new data to show you that frames the direction, Hassan and Jenna are here to talk about some of the details of the implementation, and Angela, we’d love to coordinate with you on a rollout process because it affects your team.

And at the end:

OK, so we decided we’re going in Y direction. Angela, your team seems comfortable doing Z, is that correct? And the rollout timeline we’ve agreed on is 5 weeks. The next steps are to explore the impacts of A, B, and C and reconvene in a week on our findings and process.

This is just an example—it’s not important to model this precisely. But you should be aligning at the beginning and end of the meeting to make sure that nothing major is missing and everyone is on the same page. If you haven’t come to a decision by the end of the meeting, then your next steps may be either to figure out who will make the decision and inform everyone or roll over to a new meeting.

Ideally, these sentences encapsulate the information everyone needs:

  • The shared purpose
  • What you’re doing about getting to that outcome
  • Who is owning what
  • How
  • When, what are the timelines

If there are people there who do not need to know this information, they probably shouldn’t have been at the meeting in the first place.

The Agenda

Beyond deciding that a meeting should have an agenda, there are so many ways and means an agenda can be used. Strangely enough, an agenda can also be a way to not have a good meeting, so let’s explore that, too.

An agenda should ideally always state the purpose of the meeting. I personally love to then include some bullets as talking points, as well as space to take notes right in the document during the meeting.

Sometimes people use an agenda to write all their thoughts down before the meeting, and I would strongly suggest you steer clear of this. There’s nothing wrong with keeping notes for yourself, but if you come to a meeting where the agenda is loaded top to bottom with material, it can shut down the collaborative aspect of the meeting—which means it shouldn’t be a meeting at all; it should just be a shared doc, to be consumed async. Part of the purpose of the meeting is the discussion itself. Again, louder:

Part of the purpose of the meeting is the discussion itself.

Not all meetings are the same

There are also different kinds of meetings. Let’s go over what type of agenda you might use for each:

Brainstorming session: perhaps you don’t want a full agenda, just the purpose and a notes section, or even a Miro board or other whiteboarding tool to use for capturing people’s thoughts, with small areas stubbed out.

Weekly discussion or daily standup: I typically have folks add whatever they like to ours, prefacing their contribution with their name and a small category, for instance, RD for rapid decision, D for discussion, and P for process. Here’s an example:

- [Sarah, RD] should we block off 4 hours to triage our iceboxed issues?

Our team uses a kanban board during the standup and people take turns talking about what they’re doing for that time period. It’s nice how it helps solidify the tasks and priorities for the week, and allows for some course correction if there’s accidental misalignment before the work is done.

We also talk about what was done or shipped in the previous week so we can celebrate a little. Especially on tasks we know took the person a long time or took a lot of effort.

We found through trial and error that twice-a-week check-ins suited us: once on Monday to kick the week off, and again on Wednesday to keep us aligned and the momentum going.

Cross-Functional meetings: This is one where a more formal agenda with some preparation can be really helpful, so that all parties have enough information about the purpose and what’s being discussed. If you have a lot of information, though, I would suggest creating a one-sheeter and sharing that ahead of time instead of adding everything to an agenda. Sometimes if I know everyone is too busy to read everything async, I will give the first 5 minutes to the group to read through the one-sheeter on the call so we’re all on the same page. People usually appreciate this. YMMV.

All this said, agendas are very useful, but I’ve seen strange culture arise from making strict rules around them. The point of the agenda and meeting is to collaborate on something. That point is nullified if folks are putting process ahead of that impetus.

The best cultures I’ve worked at use both meetings and agendas as tools for working together effectively: tools that everyone equally feels responsible for making useful.

All Kinds of Awkward

OK, you led a meeting! You gave people purpose, you set direction and timelines. But why was it so awkward?!

Not all forms of awkward are bad, really. There are different kinds of awkward, and some are quite natural, some are more harmful. Let’s analyze this for a moment, starting from most innocuous to something more insidious.

You all didn’t know each other well

The team I got to work with at Netlify was one of the silliest, most collaborative, and most trustworthy groups I’ve ever had the pleasure of working with. We actively cultivated this culture and it was great fun. Every meeting started with goofing off and chatter. Then we got down to business.

The meeting would flow effortlessly because we were all comfortable together. One time a friend in the People department asked, “What do you do to break the ice with your team?” and I jokingly responded, “Ice? Our team? No… we don’t need that… maybe we should be frozen?”

Not all conversations are going to be like this. We knew each other fairly well and actively worked to have vulnerability together. If your meetings with other groups you don’t know well have awkward moments, that’s actually pretty natural, and nothing to be too concerned with. You can try to make conversation and that can help, but trying to force it too much can also feel a bit stilted, so just ease up on the guilt for this one. There’s nothing wrong with you, I promise.

There were too many people

During the pandemic, my husband and I would sometimes try to replace in-person dinner parties with zoom versions of the same. What we learned was they didn’t quite work at scale. When you have an in-person party with 12 or more people, everyone doesn’t really stay in one huddle together, they break off to smaller conversations. When we started hosting the zoom parties with smaller groups, the calls became more fluid, relaxed and comfortable.

There’s a certain scale at which conversation begins to feel performative because there are so many eyes on a person when they’re speaking. Meetings are very much the same. Try not to invite too many people to a meeting. If you are worried folks might not feel included unless you invite them, you can either mark them as optional or let them know you’ll be sure to tell them the outcome.

If you’re inviting too many people because there’s a company culture that everyone should be involved in every decision, that might be a sign of a wider issue that needs some solving. Companies at a certain scale start to have issues functioning if there is no clear understanding of ownership. If you’re inviting everyone out of fear of hurt feelings often, it’s likely not a problem with your meetings, and more a sign that you need some clarity. See the DRI section at the end of this chapter for more information on how to mitigate this.

There’s something people aren’t saying

This kind of awkward is probably the most harmful: the meeting is awkward because people don’t feel comfortable telling the truth, or there’s an elephant in the room, or there’s a smell that needs to be dealt with. Elephant smells? Ok, moving on.

We should watch out for this and try to do something about it. Personally, I’m a “walk towards the fire to put it out” kind of person, and will actually just acknowledge that it’s awkward because it doesn’t feel like we’re being transparent with one another. I’ll state what I know from my perspective and then ask if other folks are feeling the same. 

If you do this, you’ll usually have to wait a beat or two. People will likely be a bit shocked that you came right out and said it. It will take them a couple of seconds to adjust and consider what will happen if they tell the truth, too. It’s crucial that you not speak to fill the silence in these moments. It will feel very uncomfortable, but I promise, you have to let the silence hang for a bit before someone speaks up. Typically from there, people will all start speaking, and you can actually dig into the problems.


Dealing With Conflicts

There’s an entire chapter devoted to conflicts because the topic is big and nuanced enough to warrant its own time and space, but let’s apply some of the principles here, because there is an intersection of good meetings and dealing with conflicts directly.

The most important piece here is that conflicts are not something to be avoided. It’s not bad that people feel passionate about their work; it’s great. Not all conflicts are negative: the point of the meeting may be to bring to light where folks aren’t aligned. There probably is some base premise or problem they are all trying to solve, but they see the solution differently. It can help to find the alignment there so the ideas can be fleshed out without being attached to a particular person’s identity.

The identity thing can be a pitfall, because if you have two people discussing their idea instead of an idea, it can feel to them like someone is rejecting them rather than a concept.

We want to try to guide towards an approach where it doesn’t feel like anyone is attacking one another, and also manage actively against people being disrespectful to one another. It’s the job of a manager to disambiguate healthy conflict from attack so that respectful discourse is encouraged. If folks are putting out ad hominem attacks, it’s on you to reel that in and move the conversation towards the work instead. Otherwise, it really is hard for the conversation to stay productive.

Typically I’d say it’s good to hear people out, and then rein things back in by discussing what you think you’re hearing and tying it back to a shared purpose. Then we find where we have common ground. Here’s an example:

“What I’m hearing is that Rashida feels that team X is migrating a system that affects her team while they are trying to release a big feature. Is that correct? And that Jerome feels that it’s crucial that team X be able to migrate the system soon for stability purposes. Is that correct?

“OK, well, it sounds like we have a shared goal of making sure the company can ship features with some stability. Perhaps we can talk through what timelines are immovable and which are not so that we can stay coordinated?

“I’m sure we all want to be able to ship said feature without any hiccups and also get the new system up and going.”

Here, we stated what we thought we were hearing, which allows for the person to either feel heard or correct us if we’re mistaken and there’s a miscommunication. (Sometimes there is!)

Then we stated the shared goal from both parties, as well as risks and constraints that may play a part in some of the conflicts that need to be ironed out.

You’ll note in the last sentence, we try to tie a knot for a vision of stability that addresses both of their understandable needs.

A couple of things to note: I’m giving an example here and you absolutely don’t have to do it like me. The most important thing is that folks feel heard and that you all agree on what the conflict is. And that you remain open to that discussion, while finding the base premise of why you’re even talking about it.

It’s also way easier said than done. If you have a conversation that goes off the rails, I’d suggest spending a bit of time after you’re off the call to write down what you think happened.

I tend to give myself a section to just talk through the facts of what happened, and then another to talk through my feelings of what went poorly and what could have been better. It helps to check in with the facts separately because our human brains can sometimes try to protect us and see a particular version of an event. Hard to do, but checking in with just the facts helps ground that a bit.

There can be times where a strong conflict happens during a meeting and you’re at an impasse, and you need to give folks time to regroup. I’d suggest calling another meeting in a week as a follow up, and try to hear people out individually in the meantime. Sometimes people need a little distance from a matter, or they’re having a hard day, and that’s totally ok.


The DRI

DRI stands for “Directly Responsible Individual” and is one of the most important pieces that we haven’t covered yet. A good meeting must have a DRI, and it is not necessarily the person who called the meeting. It might not be you. But you must designate who owns the project and ultimately makes decisions when there’s one to be made.

Why do you need a DRI? Well, as much as you do want to hear input from everyone, eventually you have to make a decision, and there are plenty of things in software development that don’t necessarily have one true answer.

Note that the phrasing is not PWMD (Person Who Makes Decisions) though that acronym looks pretty hardcore. Instead, we use Directly Responsible Individual because that’s also core to deciding who this person is. They are the person who is going to own the outcome.

That’s part of why not everyone can get equal say: if it’s your project and you are on the line for the outcome of whatever decisions are made, you can see how you would also need to own decision making. And likewise, if people who have no skin in the game decide things, they might not understand all the moving parts or invest as much in the gravity of the matter.

The appointment of the DRI not only unlocks the group to make final decisions and move forward, but also places the responsibility on the party that will carry the weight.

There are several systems of ownership you can explore, such as DACI, which separates out Driver, Approver, Contributors, and Informed so that everyone knows their roles, and several others such as RACI and RAPID. Use whichever system makes the most sense for your organization.

I find it best to identify this person early on in a project and make sure it is restated at the start of a meeting (it can be included on the agenda as well), as it helps greatly if you find yourself at a crossroads. This person can unblock you and help the group move forward.

Moving Forward

It may at times feel like meetings are a drag on a software engineering process, but it doesn’t always have to feel this way. There’s something special about collaborating with a group of people who are respectful and working towards a common purpose. Good meetings can provide clarity and save people hours and days of work when they’re headed in the wrong direction. Having clear ownership, documentation, and only the right people in the room can keep many teams in lockstep, even when problems are complex.

Buy the Book

This is just a sample of the kind of content from my latest book coming out soon…

Join the list!


The Trick to Enable Printify Shipping Notifications for Orders in WooCommerce? Customer Notes.

CSS-Tricks - Thu, 07/01/2021 - 2:02pm

This is a super niche blog post. But it’s been on my list forever to write down because this caused me grief for far too long.

The setup is that you can use WooCommerce to sell things on a WordPress site, of course. If what you’re selling is a physical product, one thing you can do is set that up as print-and-ship on-demand. That’s what I do, for example, with our printed posters and sweatshirts. One company that does that, and the one we use right now, is Printify. It’s not even a plugin, it’s just APIs talking to each other.

That all works fine. The problem I was having? Customers weren’t getting any shipping notifications.

For a long time, I thought this was just something Printify punted on. For example, Printify doesn’t provide customer service to your customers, only to you. So if your customer has a problem, they contact you, and if it seems like it’s a Printify problem, you need to then contact them to figure it out. That’s not my favorite, but it’s understandable, as you are acting as the storefront here and things can go wrong with orders that the store needs to deal with, not Printify.

But no shipping notifications seems bananas. That’s like table stakes for eCommerce. Not to mention you can see shipping information in the Printify dashboard. So it was a lot of…

  1. Customer wonders where order is
  2. Customer is annoyed they didn’t get any shipping notification
  3. Customer emails me
  4. I look up shipping/tracking information
  5. I send to them manually

That’s just not tenable.

The thing is though, it’s supposed to work, and it does through a sneaky little feature of core WooCommerce itself.

So an order comes in, and I can see it:

Once the payment is solid, it’ll kick over to Printify, and I can see the order there too.

Once Printify has tracking information, it becomes available in the Printify dashboard:

Most orders do. Some orders just randomly don’t — although that’s mostly international orders (e.g. from the U.S. to another country).

The trick is that this tracking information doesn’t just stay in Printify. They API it over to the WordPress site as well in the form of a “note” on the order. So you can see it there:

Notes are, in a sense, kind of arbitrary metadata on orders. You can just type whatever you want as a note and either keep it private or make it visible to the customer.

That was all happening normally on my site.

Here was my problem:

My “Customer note” email was turned off.

I was confused, I guess, because I didn’t really understand the “Notes” idea in WordPress, and it wasn’t documented anywhere that this is how Printify communicates this information. It just dawned on me looking at it for the 100th time. Why was that off? I don’t know. Does it default to off? Did I turn it off because I didn’t understand it, and turning off customer-facing emails I didn’t understand felt right at some point? Again, I don’t know. I also maybe just assumed that Printify would email the customer the tracking information, because they have that information as well as the customer’s email. Who knows.

With it on, though, it works!

Point is: by turning this email on, it went from a ton of very manual customer service work to almost none. So I wanted to get it blogged in case anyone is in this frustrating situation like I was.

The post The Trick to Enable Printify Shipping Notifications for Orders in WooCommerce? Customer Notes. appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

CSS for Web Vitals

Css Tricks - Thu, 07/01/2021 - 8:54am

The marketing for Core Web Vitals (CWV) has been a massive success. I guess that’s what happens when the world’s dominant search engine tells people that something’s going to be an SEO factor. Ya know what language can play a huge role in those CWV scores? I’ll wait five minutes for you to think of it. Just kidding, it’s CSS.

Katie Hempenius and Una Kravets:

The way you write your styles and build layouts can have a major impact on Core Web Vitals. This is particularly true for Cumulative Layout Shift (CLS) and Largest Contentful Paint (LCP).

For example…

  • Absolutely positioning things takes them out of flow and prevents layout shifting (but please don’t read that as we should absolute position everything).
  • Don’t load images you don’t have to (e.g. use a CSS gradient instead), which lends a bit of credibility to this.
  • Perfect font fallbacks definitely help reduce layout shifting.

There are a bunch more practical ideas in the article and they’re all good ideas (because good performance is good for lots of reasons), even if the SEO angle isn’t compelling to you.

Direct Link to ArticlePermalink

The post CSS for Web Vitals appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

App Platform on Digital Ocean

Css Tricks - Thu, 07/01/2021 - 8:54am

This is new stuff from DO.

App Platform is a hosting product, no surprise there, but it has some features that are Jamstack-inspired in the best possible way, and an additional set of unique and powerful features. Let’s start with some basics:

  • Static sites can be hosted on the free tier
  • Automatic HTTPS
  • Global CDN (Cloudflare is in front, so you’re DDoS safe)
  • Deploy from Git

That’s the stuff that developers like me are loving these days. Take some of the hardest, toil-laden, no-fun aspects of web development and entirely do them for me.

And now the drumroll:

  • This isn’t just for static sites: it’s for PHP, Node, Python, Ruby, Go, Docker Containers, etc.
  • You don’t have to configure and update things; these are ready-to-go boxes for those technologies.
  • You can scale to whatever you need.
  • You don’t pay by the team seat. Unlimited team members. You pay by usage like bandwidth and build time.
Try App Platform

Use that link to get $100 in credit over 60 days.

It’s extremely easy to deploy a static site

You snag it right from GitHub (or GitLab, or Docker Hub), which is great right away, and off you go.

Then we get our first little hint of something compelling:

But let’s say we don’t need that immediately, we can go with a free plan and get this out.

The site will build and you can see logs:

And lookie that my static site is LIVE!

Say my site needs to run an actual build process? That, and lots more configuration come in the form of an “App Spec”. This is where I would include those build commands, change Git information, deployment zones, and loads more.
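As a rough sketch of the shape of one (values here are placeholders, and the exact fields are worth checking against DigitalOcean’s App Spec reference), a static site spec might look like:

```yaml
# Illustrative App Spec for a static site; repo, branch, build
# command, and output directory are placeholders.
name: my-static-site
region: nyc
static_sites:
  - name: web
    github:
      repo: your-username/your-repo
      branch: main
      deploy_on_push: true
    build_command: npm run build
    output_dir: /dist
```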

About that database…

Wasn’t that interesting to see the setup steps for this static site suggest adding a database? So many sites need some kind of data store, and it’s often left up to developers to go find some kind of cloud-accessible data storage that will work well with their app. With Digital Ocean App Platform, it can live right alongside your static app.

It’s called a component.

As you can see, it can be a database, but it doesn’t have to be. It could be another type of server! Here I could pop a PostgreSQL DB on there for just $7/month.

If what you need to add is an internal or external service, it will let you add that via another Git repo that you hook up. Oh my what a modern system you now have. A front end and a back end each individually deployable directly via Git itself.

This is for server-side apps as well.

This feels big to me! I get that same kinda easy DX feeling I get with static sites, but with, say, a Python or Ruby on Rails app. Free deployment! Server boxes I don’t have to configure and manage myself!

Seems like a pretty happy-path hosting environment for lots of stuff.

Try App Platform

Use that link to get $100 in credit over 60 days.

The post App Platform on Digital Ocean appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Hack the “Deploy to Netlify” Button Using Environment Variables to Make a Customizable Site Generator

Css Tricks - Thu, 07/01/2021 - 4:40am

If you’re anything like me, you like being lazy and taking shortcuts. The “Deploy to Netlify” button allows me to take this lovely feature of my personality and be productive with it.

Clicking the button above lets me (or you!) instantly clone my Next.js starter project and automatically deploy it to Netlify. Wow! So easy! I’m so happy!

Now, as I was perusing the docs for the button the other night, as one does, I noticed that you can pre-fill environment variables to the sites you deploy with the button. Which got me thinking… what kind of sites could I customize with that?

Idea: “Link in Bio” website

Ah, the famed “link in bio” you see all over social media when folks want you to see all of their relevant links in life. You can sign up for the various services that’ll make one of these sites for you, but what if you could make one yourself without having to sign up for yet another service?

But, we also are lazy and like shortcuts. Sounds like we can solve all of these problems with the “Deploy to Netlify” (DTN) button, and environment variables.

How would we build something like this?

In order to make our DTN button work, we need to make two projects that work together:

  • A template project (This is the repo that will be cloned and customized based on the environment variables passed in.)
  • A generator project (This is the project that will create the environment variables that should be passed to the button.)

I decided to be a little spicy with my examples, and so I made both projects with Vite, but the template project uses React and the generator project uses Vue.

I’ll do a high-level overview of how I built these two projects, and if you’d like to just see all the code, you can skip to the end of this post to see the final repositories!

The Template project

To start my template project, I’ll pull in Vite and React.

npm init @vitejs/app

After running this command, you can follow the prompts with whatever frameworks you’d like!

Now after doing the whole npm install thing, you’ll want to add a .env.local file with the environment variables you want to include. I want to have a name for the person who owns the site, their profile picture, and then all of their relevant links.
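Here’s a rough sketch of what that env file might contain (the values are placeholders, following the VITE_NAME / VITE_PROFILE_PIC / VITE_*_LINK naming the parsing code expects):

```
# Illustrative values only; swap in your own
VITE_NAME=Your_Name
VITE_PROFILE_PIC=
```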


You can set this up however you’d like, because this is just test data we’ll build off of! As you build out your own application, you can pull in your environment variables at any time for parsing with import.meta.env. Vite only exposes variables prefixed with VITE_ to the client code, so as you play around with variables, make sure you prepend that to each name.

Ultimately, I made a rather large parsing function that I passed to my components to render into the template:

function getPageContent() {
  // Pull in all variables that start with VITE_ and turn it into an array
  let envVars = Object.entries(import.meta.env).filter((key) => key[0].startsWith('VITE_'))

  // Get the name and profile picture, since those are structured differently from the links
  const name = envVars.find((val) => val[0] === 'VITE_NAME')[1].replace(/_/g, ' ')
  const profilePic = envVars.find((val) => val[0] === 'VITE_PROFILE_PIC')[1]

  // ...

  // Pull all of the links, and properly format the names to be all lowercase and normalized
  let links = envVars.filter((val) => val[0].includes('_LINK')).map((k) => {
    return [deEnvify(k[0]), k[1]]
  })

  // This object is what is ultimately sent to React to be rendered
  return { name, profilePic, links }
}

function deEnvify(str) {
  return str.replace('VITE_', '').replace('_LINK', '').toLowerCase().split('_').join(' ')
}

I can now pull in these variables into a React function that renders the components I need:

// ...
return (
  <div>
    <img alt={} src={vars.profilePic} />
    <p>{}</p>
    {, index) => {
      return <Link key={`link${index}`} name={l[0]} href={l[1]} />
    })}
  </div>
)
// ...

And voilà! With a little CSS, we have a “link in bio” site!

Now let’s turn this into something that doesn’t rely on hard-coded variables. Generator time!

The Generator project

I’m going to start a new Vite site, just like I did before, but I’ll be using Vue for this one, for funzies.

Now in this project, I need to generate the environment variables we talked about above. So we’ll need an input for the name, an input for the profile picture, and then a set of inputs for each link that a person might want to make.

In my App.vue template, I’ll have these separated out like so:

<template>
  <div>
    <p>
      <span>Your name:</span>
      <input type="text" v-model="name" />
    </p>
    <p>
      <span>Your profile picture:</span>
      <input type="text" v-model="propic" />
    </p>
  </div>

  <List v-model:list="list" />

  <GenerateButton :name="name" :propic="propic" :list="list" />
</template>

In that List component, we’ll have dual inputs that gather all of the links our users might want to add:

<template>
  <div class="list">
    Add a link: <br />
    <input type="text" v-model="" />
    <input type="text" v-model="newItem.url" @keyup.enter="addItem" />
    <button @click="addItem">+</button>
    <ListItem
      v-for="(item, index) in list"
      :key="index"
      :item="item"
      @delete="removeItem(index)"
    />
  </div>
</template>

So in this component, there’s the two inputs that are adding to an object called newItem, and then the ListItem component lists out all of the links that have been created already, and each one can delete itself.

Now, we can take all of these values we’ve gotten from our users, and populate the GenerateButton component with them to make our DTN button work!

The template in GenerateButton is just an <a> tag with the link. The power in this one comes from the methods in the <script>.

// ...
methods: {
  convertLink(str) {
    // Convert each string passed in to use the VITE_WHATEVER_LINK syntax that our template expects
    return `VITE_${str.replace(/ /g, '_').toUpperCase()}_LINK`
  },
  convertListOfLinks() {
    let linkString = ''

    // Pass each link given by the user to our helper function
    this.list.forEach((l) => {
      linkString += `${this.convertLink(}=${l.url}&`
    })

    return linkString
  },
  // This function pushes all of our strings together into one giant link that will be put into our button that will deploy everything!
  siteLink() {
    return (
      // This is the base URL we need of our template repo, and the Netlify deploy trigger
      '' +
      'VITE_NAME=' +
      // Replacing spaces with underscores in the name so that the URL doesn't turn that into %20 /g, '_') + '&' +
      'VITE_PROFILE_PIC=' + this.propic + '&' +
      // Pulls all the links from our helper function above
      this.convertListOfLinks()
    )
  },
},

Believe it or not, that’s it. You can add whatever styles you like or change up what variables are passed (like themes, toggles, etc.) to make this truly customizable!

Put it all together

Once these projects are deployed, they can work together in beautiful harmony!

This is the kind of project that can really illustrate the power of customization when you have access to user-generated environment variables. It may be a small one, but when you think about generating, say, resume websites, e-commerce themes, “/uses” websites, marketing sites… the possibilities are endless for turning this into a really cool boilerplate method.

The post Hack the “Deploy to Netlify” Button Using Environment Variables to Make a Customizable Site Generator appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

How do you make a layout with pictures down one side of a page matched up with paragraphs on the other side?

Css Tricks - Wed, 06/30/2021 - 10:36am

I got this exact question in an email the other day, and I thought it would make a nice blog post because of how wonderfully satisfying this is to do in CSS these days. Plus we can sprinkle in polish to it as we go.

HTML-wise, I’m thinking image, text, image, text, etc.

<img src="..." alt="..." height="" width="" />
<p>Text text text...</p>
<img src="..." alt="..." height="" width="" />
<p>Text text text...</p>
<img src="..." alt="..." height="" width="" />
<p>Text text text...</p>

If that was our entire body in an HTML document, the answer to the question in the blog post title is literally two lines of CSS:

body {
  display: grid;
  grid-template-columns: min-content 1fr;
}

It’s going to look something like this…

Not pretty but we got the job done very quickly.

So cool. Thanks, CSS. But let’s clean it up. Let’s make sure there is a gap, set the default type, and rein in the layout.

body {
  display: grid;
  padding: 2rem;
  grid-template-columns: 300px 1fr;
  gap: 1rem;
  align-items: center;
  max-width: 800px;
  margin: 0 auto;
  font: 500 100%/1.5 system-ui;
}
img {
  max-width: 100%;
  height: auto;
}

I mean… ship it, right? Close, but maybe we can just add a quick mobile style.

@media (max-width: 650px) {
  body {
    display: block;
    font-size: 80%;
  }
  p {
    position: relative;
    margin: -3rem 0 2rem 1rem;
    padding: 1rem;
    background: rgba(255, 255, 255, 0.8);
  }
}

OK, NOW ship it!

CodePen Embed Fallback

The post How do you make a layout with pictures down one side of a page matched up with paragraphs on the other side? appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

When a Click is Not Just a Click

Css Tricks - Wed, 06/30/2021 - 4:45am

The click event is quite simple and easy to use; you listen for the event and run code when the event is fired. It works on just about every HTML element there is, a core feature of the DOM API.

As is often the case with the DOM and JavaScript, there are nuances to consider. Some nuances with the click event are typically not much of a concern. They are minor, and most people would probably never notice them in the majority of use cases.

Take, for example, the click event listening to the grandfather of interactive elements, the <button> element. There are nuances associated with button clicks, like the difference between a “click” from a mouse pointer and a “click” from the keyboard. Seen this way, a click is not always a “click” the way it’s typically defined. I have actually run into situations (though not many) where distinguishing between those two types of clicks comes in handy.

How do we distinguish between different types of clicks? That’s what we’re diving into!

First things first

The <button> element, as described by MDN, is simply:

The HTML <button> element represents a clickable button, used to submit forms or anywhere in a document for accessible, standard button functionality. By default, HTML buttons are presented in a style resembling the platform the user agent runs on, but you can change buttons’ appearance with CSS.

The part we’ll cover is obviously the “anywhere in a document for accessible, standard button functionality” part of that description. As you may know, a button element can have native functionality within a form; for example, it can submit a form in some situations. We are only really concerning ourselves with the basic clicking function of the element. So consider just a simple button placed on the page for specific functionality when someone interacts with it.

CodePen Embed Fallback

Consider that I said “interacts with it” instead of just clicking it. For historical and usability reasons, one can “click” the button by putting focus on it with tabbing and then using the Space or Enter key on the keyboard. This is a bit of overlap with keyboard navigation and accessibility; this native feature existed way before accessibility was a concern. Yet the legacy feature does help a great deal with accessibility for obvious reasons.

In the example above, you can click the button and its text label will change. After a moment the original text will reset. You can also click somewhere else within the pen, tab to put focus on the button, and then use Space or Enter to “click” it. The same text appears and resets as well. There is no JavaScript to handle the keyboard functionality; it’s a native feature of the browser. Fundamentally, in this example the button is only aware of the click event, but not how it happened.

One interesting difference to consider is the behavior of a button across different browsers, especially the way it is styled. The buttons in these examples are set to shift colors on its active state; so you click it and it turns purple. Consider this image that shows the states when interacting with the keyboard.

Keyboard Interaction States

The first is the static state, the second is when the button has focus from a keyboard tabbing onto it, the third is the keyboard interaction, and the fourth is the result of the interaction. With Firefox you will only see the first two and last states; when interacting with either Enter or Space keys to “click” it you do not see the third state. It stays with the second, or “focused”, state during the interaction and then shifts to the last one. The text changes as expected but the colors do not. Chrome gives us a bit more as you’ll see the first two states the same as Firefox. If you use the Space key to “click” the button you’ll see the third state with the color change and then the last. Interestingly enough, with Chrome if you use Enter to interact with the button you won’t see the third state with the color change, much like Firefox. In case you are curious, Safari behaves the same as Chrome.

The code for the event listener is quite simple:

const button = document.querySelector('#button');

button.addEventListener('click', () => {
  button.innerText = 'Button Clicked!';

  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
});

Now, let’s consider something here with this code. What if you found yourself in a situation where you wanted to know what caused the “click” to happen? The click event is usually tied to a pointer device, typically the mouse, and yet here the Space or Enter key are triggering the same event. Other form elements have similar functionality depending on context, but any elements that are not interactive by default would require an additional keyboard event to work. The button element doesn’t require this additional event listener.

I won’t go too far into reasons for wanting to know what triggered the click event. I can say that I have occasionally run into situations where it was helpful to know. Sometimes it’s for styling reasons, sometimes accessibility, and sometimes specific functionality. Different contexts and situations call for different reasons.

Consider the following not as The Way™ but more of an exploration of these nuances we’re talking about. We’ll explore handling the various ways to interact with a button element, the events generated, and leveraging specific features of these events. Hopefully the following examples can provide some helpful information from the events; or possibly spread out to other HTML elements, as needed.

Which is which?

One simple way to know a keyboard versus mouse click event is leveraging the keyup and mouseup events, taking the click event out of the equation.

CodePen Embed Fallback

Now, when you use the mouse or the keyboard, the changed text reflects which event is which. The keyboard version will even inform you of a Space versus Enter key being used.

Here’s the new code:

const button = document.querySelector('#button');

function reset () {
  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
}

button.addEventListener('mouseup', (e) => {
  if (e.button === 0) {
    button.innerText = 'MouseUp Event!';
    reset();
  }
});

button.addEventListener('keyup', (e) => {
  if (e.code === 'Space' || e.code === 'Enter') {
    button.innerText = `KeyUp Event: ${e.code}`;
    reset();
  }
});

A bit verbose, true, but we’ll get to a slight refactor in a bit. This example gets the point across about a nuance that needs to be handled. The mouseup and keyup events have their own features to account for in this situation.

With the mouseup event, just about every button on the mouse could trigger this event. We usually wouldn’t want the right mouse button triggering a “click” event on the button, for instance. So we look for an e.button with the value of 0 to identify the primary mouse button. That way it works the same as with the click event, yet we know for a fact it was the mouse.

With the keyup event, the same thing happens: just about every key on the keyboard will trigger this event. So we look at the event’s code property and wait for the Space or Enter key to be pressed. So now it works the same as the click event, but we know the keyboard was used. We even know which of the two keys we’re expecting to work on the button.

Another take to determine which is which

While the previous example works, it seems like a bit too much code for such a simple concept. We really just want to know if the “click” came from a mouse or a keyboard. In most cases we probably wouldn’t care if the source of the click was either the Space or Enter keys. But, if we do care, we can take advantage of the keyup event properties to note which is which.

Buried in the various specifications about the click event (which leads us to the UI Events specification) there are certain properties assigned to the event concerning the mouse location, including properties such as screenX/screenY and clientX/clientY. Some browsers have more, but I want to focus on the screenX/screenY properties for the moment. These two properties essentially give you the X and Y coordinates of the mouse click in relation to the upper-left of the screen. The clientX/clientY properties do the same, but the origin is the upper-left of the browser’s viewport.

This trick relies on the fact that the click event provides these coordinates even though the event was triggered by the keyboard. When a button with a click event is “clicked” by the Space or Enter key, it still needs to assign a value to those properties. Since there’s no mouse location to report, they fall back to zero as the default.

CodePen Embed Fallback

Here’s our new code:

const button = document.querySelector('#button');

button.addEventListener('click', (e) => {
  button.innerText = e.screenX + e.screenY === 0 || e.offsetX + e.offsetY === 0
    ? 'Keyboard Click Event!'
    : 'Mouse Click Event!';

  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
});

Back to just the click event, but this time we look for those properties to determine whether this is a keyboard or mouse “click.” We take both the screenX and screenY properties, add them together, and see if they equal zero, which makes for an easy test. The possibility of the button being clicked while sitting at the immediate upper-left of the screen has to be quite low. It could happen if one attempted a pixel-perfect click in such an odd location, but I would think it’s a safe assumption that it won’t happen under normal circumstances.

Now, one might notice the added e.offsetX + e.offsetY === 0 part. I have to explain that bit…

Enter the dreaded browser inconsistencies

While creating and testing this code, the all-too-often problem of cross-browser support reared its ugly head. It turns out that even though most browsers set the screenX and screenY values on a keyboard-caused click event to zero, Safari decides to be different. It applies a proper value to screenX and screenY as if the button was clicked by a mouse. This throws a wrench into my code which is one of the fun aspects of dealing with different browsers — they’re made by different groups of different people creating different outcomes to the same use cases.

But, alas, I needed a solution because I didn’t necessarily want to rely only on the keyup event for this version of the code. I mean, we could if we wanted to, so that’s still an option. It’s just that I liked the idea of treating this as a potential learning exercise to determine what’s happening and how to make adjustments for differences in browsers like we’re seeing here.

Testing what Safari is doing in this case, it appears to be using the offsetX and offsetY properties in the event to determine the location of the “click” and then applying math to determine the screenX and screenY values. That’s a huge over-simplification, but it sort of checks out. The offset properties will be the location of the click based on the upper-left of the button. In this context, Safari applies zero to offsetX and offsetY, which would obviously be seen as the upper-left of the button. From there it treats that location of the button as the determination for the screen properties based on the distance from the upper-left of the button to the upper-left of the screen.

The other usual browsers technically also apply zero to offsetX and offsetY, which could be used in place of screenX and screenY. I chose not to go that route. While it’s rather difficult to click a button that happens to sit at the absolute top-left of the screen, clicking the top-left of a button is much easier to do. Yet, Safari is different, so the code tests against both the screen and offset properties. As written, it hopes for zeroes on the screen properties and, if they are there, it moves forward assuming a keyboard-caused click. If the screen properties together are larger than zero, it checks the offset properties just in case. We can consider this the Safari check.
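To keep that branching logic in one place, the whole check can be distilled into a pure helper function. This is a sketch of the heuristic described above rather than code from the demos, so it can be exercised with plain objects standing in for real click events:

```javascript
// Keyboard-caused click events report (0, 0) for screenX/screenY in
// most browsers; Safari reports real screen coordinates but zeroes
// out offsetX/offsetY. Either zeroed pair suggests a keyboard "click".
function isKeyboardClick(e) {
  return e.screenX + e.screenY === 0 || e.offsetX + e.offsetY === 0;
}
```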

This is not ideal, but it wouldn’t be the first time I had to create branching logic due to browser inconsistencies.

In the hope that the behavior of these properties will not change in the future, we have a decent way to determine if a button’s click event happened by mouse or keyboard. Yet technology marches on, providing us new features, new requirements, and new challenges to consider. The various devices available to us have introduced the concept of the “pointer” as a means to interact with elements on the screen. Currently, such a pointer could be a mouse, a pen, or a touch. This creates yet another nuance that we might want to consider: determining the kind of pointer involved in the click.

Which one out of many?

Now is a good time to talk about Pointer Events. As described by MDN:

Much of today‘s web content assumes the user’s pointing device will be a mouse. However, since many devices support other types of pointing input devices, such as pen/stylus and touch surfaces, extensions to the existing pointing device event models are needed. Pointer events address that need.

So now let’s consider having a need for knowing what type of pointer was involved in clicking that button. Relying on just the click event doesn’t really provide this information. Chrome does have an interesting property in the click event, sourceCapabilities. This property in turn has a boolean property named firesTouchEvents. This information isn’t always available, since Firefox and Safari do not support it yet. Yet the pointer event is available just about everywhere, even in IE11 of all browsers.
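Since that property is Chrome-only for now, any use of it needs a guard. Here’s a hedged sketch of a tiny wrapper that reports the flag when it exists and null otherwise (the event objects used below are synthetic stand-ins, not real DOM events):

```javascript
// Returns true/false when the browser exposes sourceCapabilities on
// the event, or null when it doesn't (e.g. Firefox and Safari), so
// callers know to fall back to pointer events instead.
function clickFiresTouchEvents(e) {
  return e.sourceCapabilities ? e.sourceCapabilities.firesTouchEvents : null;
}
```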

This event can provide interesting data about touch or pen events. Things like pressure, contact size, tilt, and more. For our example here we’re just going to focus on pointerType, which tells us the device type that caused the event.

CodePen Embed Fallback

Clicking on the button will now tell you the pointer that was used. The code for this is quite simple:

const button = document.querySelector('#button');

button.addEventListener('pointerup', (e) => {
  button.innerText = `Pointer Event: ${e.pointerType}`;

  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
});

Really, not that much different than the previous examples. We listen for the pointerup event on the button and output the event’s pointerType. The difference now is that there is no event listener for a click event. So tabbing onto the button and using the Space or Enter key does nothing. The click event still fires, but we’re not listening for it. At this point we only have code tied to the button that responds to the pointer event.

That obviously leaves a gap in functionality, the keyboard interactivity, so we still need to include a click event. Since we’re already using the pointer event for the more traditional mouse click (and other pointer events) we have to lock down the click event. We need to only allow the keyboard itself to trigger the click event.

CodePen Embed Fallback

The code for this is similar to the “Which Is Which” example up above. The difference being we use pointerup instead of mouseup:

const button = document.querySelector('#button');

function reset () {
  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
}

button.addEventListener('pointerup', (e) => {
  button.innerText = `Pointer Event: ${e.pointerType}`;
  reset();
});

button.addEventListener('click', (e) => {
  if (e.screenX + e.screenY === 0 || e.offsetX + e.offsetY === 0) {
    button.innerText = 'Keyboard Click Event!';
    reset();
  }
});

Here we’re using the screenX + screenY (with the additional offset check) method to determine if the click was caused by the keyboard. This way a mouse click would be handled by the pointer event. If one wanted to know whether the key used was Space or Enter, then the keyup example above could be used. Even then, the keyup event could be used instead of the click event, depending on how you want to approach it.

Another take to determine which one out of many

In the ever-present need to refactor for cleaner code, we can try a different way to code this.

CodePen Embed Fallback

Yep, works the same as before. Now the code is:

const button = document.querySelector('#button');

function btn_handler (e) {
  if (e.type === 'click' && e.screenX + e.screenY > 0 && e.offsetX + e.offsetY > 0) {
    return false;
  } else if (e.pointerType) {
    button.innerText = `Pointer Event: ${e.pointerType}`;
  } else if (e.screenX + e.screenY === 0) {
    button.innerText = 'Keyboard Click Event!';
  } else {
    button.innerText = 'Something clicked this?';
  }

  window.setTimeout(() => {
    button.innerText = '"click" me';
  }, 2000);
}

button.addEventListener('pointerup', btn_handler);
button.addEventListener('click', btn_handler);

Another scaled-down version to consider: this time we’ve reduced our code down to a single handler method that both the pointerup and click events call. First we detect whether a mouse “click” caused the event; if it did, we ignore it in favor of the pointer event. This is checked with a test opposite of the keyboard test: is the sum of screenX and screenY larger than zero? This time the offset check is altered to do the same as the screen test: is the sum of those properties larger than zero as well?

Then the method checks for the pointer event, and upon finding that, it reports which pointer type occurred. Otherwise, the method checks for keyboard interactions and reports accordingly. If neither of those are the culprit, it just reports that something caused this code to run.

So here we have a decent number of examples on how to handle button interactions while reporting the source of those interactions. Yet, this is just one of the handful of form elements that we are so accustomed to using in projects. How does similar code work with other elements?

Checking checkboxes

Indeed, similar code does work very much the same way with checkboxes.

There are a few more nuances, as you might expect by now. The normal usage of <input type="checkbox"> is a related label element that is tied to the input via the for attribute. One major feature of this combination is that clicking on the label element will check the related checkbox.

Now, if we were to attach event listeners for the click event on both elements, we get back what should be obvious results, even if they are a bit strange. For example, we get one click event fired when clicking the checkbox. If we click the label, we get two click events fired instead. If we were to console.log the target of those events, we’ll see on the double event that one is for the label (which makes sense as we clicked it), but there’s a second event from the checkbox. Even though I know these should be the expected results, it is a bit strange because we’re expecting results from user interactions. Yet the results include interactions caused by the browser.

So, the next step is to look at what happens if we were to listen for pointerup, just like some of the previous examples, in the same scenarios. In that case, we don’t get two events when clicking on the label element. This also makes sense as we’re no longer listening for the click event that is being fired from the checkbox when the label is clicked.

There’s yet another scenario to consider. Remember that we have the option to put the checkbox inside the label element, which is common with custom-built checkboxes for styling purposes.

<label for="newsletter">
  <input type="checkbox" id="newsletter" />
  Subscribe to my newsletter
</label>

In this case, we really only need to put an event listener on the label and not the checkbox itself. This reduces the number of event listeners involved, and yet we get the same results. Click events are fired as a single event for clicking on the label and two events if you click on the checkbox. The pointerup events do the same as before as well: single events when clicking on either element.

These are all things to consider when trying to mimic the behavior of the previous examples with the button element. Thankfully, there’s not too much to it. Here’s an example of seeing what type of interaction was done with a checkbox form element:

CodePen Embed Fallback

This example includes both types of checkbox scenarios mentioned above; the top line is a checkbox/label combination with the for attribute, and the bottom one is a checkbox inside the label. Clicking either one will output a message below them stating which type of interaction happened. So click on one with a mouse or use the keyboard to navigate to them and then interact with Space or Enter; just like the button examples, it should tell you which interaction type causes it.

To make things easier in terms of how many event listeners I needed, I wrapped the checkboxes with a container div that actually responds to the checkbox interactions. You wouldn’t necessarily have to do it this way, but it was a convenient way to do this for my needs. To me, the fun part is that the code from the last button example above just copied over to this example.

const checkbox_container = document.querySelector('#checkbox_container');
const checkbox_msg = document.querySelector('#checkbox_msg');

function chk_handler (e) {
  if (e.type === 'click' && e.screenX + e.screenY > 0 && e.offsetX + e.offsetY > 0) {
    return false;
  } else if (e.pointerType) {
    checkbox_msg.innerText = `Pointer Event: ${e.pointerType}`;
  } else if (e.screenX + e.screenY === 0) {
    checkbox_msg.innerText = 'Keyboard Click Event!';
  } else {
    checkbox_msg.innerText = 'Something clicked this?';
  }

  window.setTimeout(() => {
    checkbox_msg.innerText = 'waiting...';
  }, 2000);
}

checkbox_container.addEventListener('pointerup', chk_handler);
checkbox_container.addEventListener('click', chk_handler);

That means we could have the same method being called from the various elements that need the same pointer-type detection functionality. Technically, we could put a button inside the checkbox container and it should still work the same. In the end, it’s up to you how to implement such things based on the needs of the project.

Radioing your radio buttons

Thankfully, for radio button inputs, we can still use the same code with similar HTML structures. This mostly works the same because checkboxes and radio buttons are essentially created the same way—it’s just that radio buttons tend to come in groups tied together while checkboxes are individuals even in a group. As you’ll see in the following example, it works the same:

CodePen Embed Fallback

Again, same code attached to a similar container div to prevent having to do a number of event listeners for every related element.

When a nuance can be an opportunity

I felt that “nuance” was a good word choice because the things we covered here are not really “issues” with the typical negative connotation that word tends to have in programming circles. I always try to see such things as learning experiences or opportunities: how can I leverage what I know today to push a little further ahead, or is it time to explore new things to solve the problems I face? Hopefully, the examples above provide a somewhat different way to look at things depending on the needs of the project at hand.

We even found an opportunity to explore a browser inconsistency and find a workaround to that situation. Thankfully we don’t run into such things that much with today’s browsers, but I could tell you stories about what we went through when I first started web development.

Despite this article focusing more on form elements because of the click nuance they tend to have with keyboard interactions, some or all of this can be expanded into other elements. It all depends on the context of the situation. For example, I recall having to do multiple events on the same elements depending on the context many times; often for accessibility and keyboard navigation reasons. Have you built a custom <select> element to have a nicer design than the standard one, that also responds to keyboard navigation? You’ll see what I mean when you get there.

Just remember: a “click” today doesn’t always have to be what we think a click has always been.

The post When a Click is Not Just a Click appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Fixing a Bug in Low-Resolution Mode

Css Tricks - Wed, 06/30/2021 - 4:44am

I was working on a bug ticket the other day where it was reported that an icon was sitting low in a button. Just not aligned like it should be. I had to go on a little journey to figure out how to replicate it before I could fix it. Lemme set the scene.

Here’s the screenshot:

See how the icon is just… riding low?

But I go to look at the button on my machine, and it looks perfectly fine:

What the heck, right? Same platform (macOS), same browser (Firefox), same version, everything. Other people on the team looked too, and it was fine for them.

Then a discovery! (Thanks, Klare.)

It only showed up that way on her low-resolution external monitor. I don’t know if “low” is fair, but it’s not the “retina” of a MacBook Pro, whatever that is.

My problem is I don’t even have a monitor anymore that isn’t high resolution. So how can I test this? Maybe I just… can’t? Nope! I can! Check it out. I can “Get Info” on the Firefox app on my machine, and check this box:

Checked box for “Open in Low Resolution”

Now I can literally see the bug. It is unique to Firefox as far as I can tell. Perhaps something to do with pixel… rounding? I have no idea. Here’s a reduced test case of the HTML/CSS at play though.

The solution? Rather than using an inline-block display type for buttons, we moved to inline-flex, which feels like the correct display type for buttons because of how good flexbox is at centering.

.button {
  /* a million things so that all buttons are perfect and... */
  display: inline-flex;
  align-items: center;
}

The post Fixing a Bug in Low-Resolution Mode appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Chromium spelling and grammar features

Css Tricks - Tue, 06/29/2021 - 10:25am

Delan Azabani digs into the (hopefully) coming soon ::spelling-error and ::grammar-error pseudo selectors in CSS. Design control is always nice. Hey, if we can style scrollbars and style selected text, why not this?

The squiggly lines that indicate possible spelling or grammar errors have been a staple of word processing on computers for decades. But on the web, these indicators are powered by the browser, which doesn’t always have the information needed to place and render them most appropriately. For example, authors might want to provide their own grammar checker (placement), or tweak colors to improve contrast (rendering).

To address this, the CSS pseudo and text decoration specs have defined new pseudo-elements ::spelling-error and ::grammar-error, allowing authors to style those indicators, and new text-decoration-line values spelling-error and grammar-error, allowing authors to mark up their text with the same kind of decorations as native indicators.

This is a unique post too, as Delan is literally the person implementing the feature in the browser. So there is all sorts of deep-in-the-weeds stuff about how complex all this is and what all the considerations are. Kinda like, ya know, web development. Love to see this. I’ve long felt that it’s weird there is seemingly such little communication between browser engineers and website authors, despite the latter being a literal consumer of the former’s work.

Direct Link to ArticlePermalink

The post Chromium spelling and grammar features appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Working around the viewport-based fluid typography bug in Safari

Css Tricks - Mon, 06/28/2021 - 11:17am

Sara digs into a bug I happened to have mentioned back in 2012 where fluid type didn’t resize when the browser window resized. Back then, it affected Chrome 20 and Safari 6, but the bug still persists today in Safari when a calc() involves viewport units.

Sara credits Martin Auswöger for a super weird and clever trick using -webkit-marquee-increment: 0vw; (here’s the documentation) to force Safari into the correct behavior. I’ll make a screencast just to document it:

I randomly happened to have Safari Technology Preview open, which at the moment is Safari 15, and I see the bug is fixed. So I wouldn’t rush out the door to implement this.

Direct Link to ArticlePermalink

The post Working around the viewport-based fluid typography bug in Safari appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Positioning Overlay Content with CSS Grid

Css Tricks - Mon, 06/28/2021 - 4:25am

Not news to any web developer in 2021: CSS Grid is an incredibly powerful tool for creating complex, distinct two-dimensional modern web layouts.

Recently, I have been experimenting with CSS Grid and alignment properties to create component layouts that contain multiple overlapping elements. These layouts could be styled using absolute positioning and a mix of offset values (top, right, bottom, left), negative margins, and transforms. But, with CSS Grid, positioning overlay elements can be built using more logical, readable properties and values. The following are a few examples of where these grid properties come in handy.

It will help to read up on grid-template-areas and grid-area properties if you’re not yet familiar with them.

Expanding images inside limited dimensions

CodePen Embed Fallback

In the demo, there is a checkbox that toggles the overflow visibility so that we can see where the image dimensions expand beyond the container on larger viewport widths.

Here’s a common hero section with a headline overlapping an image. Although the image is capped with a max-width, it scales up to be quite tall on desktop. Because of this, the content strategy team has requested that some of the pertinent page content below the hero remain visible in the viewport as much as possible. Combining this layout technique and a fluid container max-height using the CSS clamp() function, we can develop something that adjusts based on the available viewport space while anchoring the hero image to the center of the container.

CSS clamp(), along with the min() and max() comparison functions, is well-supported in all modern browsers. Haven’t used them? Ahmad Shadeed conducts a fantastic deep dive in this article.

Open this Pen and resize the viewport width. Based on the image dimensions, the container height expands until it hits a maximum height. Notice that the image continues to grow while remaining centered in the container. Resize the viewport height and the container will flex between its max-height’s lower and upper bound values defined in the clamp() function.

Prior to using grid for the layout styles, I might have tried absolute positioning on the image and title, used an aspect ratio padding trick to create a responsive height, and object-fit to retain the ratio of the image. Something like this could get it there:

.container {
  position: relative;
  max-height: clamp(400px, 50vh, 600px);
}

.container::before {
  content: '';
  display: block;
  padding-top: 52.25%;
}

.container > * {
  max-width: 1000px;
}

.container .image {
  position: absolute;
  top: 0;
  left: 50%;
  transform: translateX(-50%);
  width: 100%;
  height: 100%;
  object-fit: cover;
}

.container .title {
  position: absolute;
  top: 50%;
  left: 50%;
  transform: translate(-50%, -50%);
  width: 100%;
  text-align: center;
}

Maybe it’s possible to whittle the code down some more, but there’s still a good chunk of styling needed. Managing the same responsive layout with CSS Grid will simplify these layout style rules while making the code more readable. Check it out in the following iteration:

.container {
  display: grid;
  grid-template: "container";
  place-items: center;
  place-content: center;
  overflow: hidden;
  max-height: clamp(450px, 50vh, 600px);
}

.container > * {
  grid-area: container;
  max-width: 1000px;
}

place-content: center instructs the image to continue growing out from the middle of the container. Remove this line and see that, while the image is still vertically centered via place-items, once the max-height is reached, the image will stick to the top of the container block and go on scaling beyond its bottom. Set place-content: end center and you’ll see the image spill over the top of the container.

This behavior may seem conceptually similar to applying object-fit: cover on an image as a styling method for preserving its intrinsic ratio while resizing to fill its content-box dimensions (it was utilized in the absolute position iteration). However, in this grid context, the image element governs the height of its parent and, once the parent’s max-height is reached, the image continues to expand, maintaining its ratio, and remains completely visible if the parent overflow is shown. object-fit could even be used with the aspect-ratio property here to create a consistent aspect ratio pattern for the hero image:

.container .image {
  width: 100%;
  height: auto;
  object-fit: cover;
  aspect-ratio: 16 / 9;
}

The overlay grid-area

Moving on to the container’s direct children, grid-area arranges each of them so that they overlap the same space. In this example, grid-template-areas with the named grid area makes the code a little more readable and works well as a pattern for other overlay-style layouts within a component library. That being said, it is possible to get this same result by removing the template rule and, instead of grid-area: container, using integers:

.container > * {
  grid-area: 1 / 1;
}

This is shorthand for grid-row-start, grid-column-start, grid-row-end, and grid-column-end. Since the siblings in this demo all share the same single row/column area, only the start lines need to be set for the desired result.

Setting place-self to place itself

Another common overlay pattern can be seen on image carousels. Interactive elements are often placed on top of the carousel viewport. I’ve extended the first demo and replaced the static hero image with a carousel.

CodePen Embed Fallback

Same story as before: This layout could fall back on absolute positioning and use integer values in a handful of properties to push and pull elements around their parent container. Instead, we’ll reuse the grid layout rulesets from the previous demo. Once applied, it appears as you might expect: all of the child elements are centered inside the container, overlapping one another.

With place-items: center declared on the container, all of its direct children will overlap one another.

The next step is to set alignment values on individual elements. The place-self property—shorthand for align-self and justify-self—provides granular control over the position of a single item inside the container. Here are the layout styles altogether:

.container {
  display: grid;
  grid-template: "container";
  place-items: center;
  place-content: center;
  overflow: hidden;
  max-height: clamp(450px, 50vh, 600px);
}

.container > * {
  grid-area: container;
  max-width: 1000px;
}

.title {
  place-self: start center;
}

.carousel-control.prev {
  place-self: center left;
}

.carousel-control.next {
  place-self: center right;
}

.carousel-dots {
  place-self: end center;
}

There’s just one small problem: The title and carousel dot indicators get pulled out into the overflow when the image exceeds the container dimensions.

To properly contain these elements within the parent, a grid-template-rows value needs to be 100% of the container, set here as one fractional unit.

.container {
  grid-template-areas: "container";
  grid-template-rows: 1fr;
}

For this demo, I leaned into the grid-template shorthand (which we will see again later in this article).

.container {
  grid-template: "container" 1fr;
}

After providing that little update, the overlay elements stay within the parent container, even when the carousel images spread beyond the carousel’s borders.

Alignment and named grid-template-areas

Let’s use the previous overlay layout methods for one more example. In this demo, each box contains elements positioned in different areas on top of an image.

CodePen Embed Fallback

For the first iteration, a named template area is declared to overlay the children on the parent element space, similar to the previous demos:

.box {
  display: grid;
  grid-template-areas: "box";
}

.box > *,
.box::before {
  grid-area: box;
}

The image and semi-transparent overlay now cover the box area, but these style rules also stretch the other items over the entire space. This seems like the right time for place-self to pepper these elements with some alignment magic!

.tag { place-self: start; }
.title { place-self: center; }
.tagline { place-self: end start; }
.actions { place-self: end; }

That’s looking great! Every element is positioned in its defined place over the image as intended. Well, almost. There’s a bit of nuance to the bottom area where the tagline and action buttons reside. Hover over an image to reveal the tagline. This might look fine with a short string of text on a desktop screen, but if the tagline becomes longer (or the boxes in the viewport smaller), it will eventually extend behind the action buttons.

Note how the tagline in the first box on the second row overlaps the action buttons.

To clean this up, the grid-template-areas use named areas for the tagline and actions. The grid-template-columns rule is introduced so that the actions container only scales to accommodate the size of its buttons while the tagline fills in the rest of the inline area using the 1fr value.

.box {
  display: grid;
  grid-template-areas: "tagline actions";
  grid-template-columns: 1fr auto;
}

This can also be combined with the grid-template shorthand. The column values are defined after a slash, like so:

.box {
  grid-template: "tagline actions" / 1fr auto;
}

The grid-area is then converted to integers now that the “box” keyword has been removed.

.box > *,
.box::before {
  grid-area: 1 / 1 / -1 / -1;
}

Everything should look the way it did before. Now for the finishing touch. The tagline and actions keywords are set as their respective element grid-area values:

.tagline {
  grid-area: tagline;
  place-self: end start;
}

.actions {
  grid-area: actions;
  place-self: end;
}

Now, when hovering over the cards in the demo, the tagline wraps to multiple lines when the text becomes too long, rather than pushing past the action buttons like it did before.

Named grid lines

Looking back at the first iteration of this code, I really liked having the default grid-area set to the box keyword. There’s a way to get that back.

I’m going to add some named grid lines to the template. In the grid-template rule below, the first line defines the named template areas, which also represents the row. After the slash are the explicit column sizes (moved to a new line for readability). The [box-start] and [box-end] custom identifiers represent the box area.

.box {
  display: grid;
  grid-template:
    [box-start] "tagline actions" [box-end] /
    [box-start] 1fr auto [box-end];
}

.box > *,
.box::before {
  grid-area: box;
}

Passing a name with the -start and -end syntax into brackets defines an area for that name. This name, known as a custom ident, can be anything, but words from the CSS spec should be avoided.

Logical placement values

One of the really interesting parts to observe in this last example is the use of logical values, like start and end, for placing elements. If the direction or writing-mode were to change, then the elements would reposition accordingly.

When the “right to left” direction is selected from the dropdown, the inline start and end positions are reversed. This layout is ready to accommodate languages, such as Arabic or Hebrew, that read from right to left without having to override any of the existing CSS.

Wrapping up

I hope you enjoyed these demos and that they provide some new ideas for your own project layouts—I’ve compiled a collection of examples you can check out over at CodePen. The amount of power packed into the CSS Grid spec is incredible. Take a minute to reflect on the days of using floats and a clearfix for primitive grid row design, then return to the present day and behold the glorious layout and display properties of today’s CSS. To make these things work well is no easy task, so let’s applaud the members of the CSS working group. The web space continues to evolve and they continue to make it a fun place to build.

Now let’s release container queries and really get this party started.

The post Positioning Overlay Content with CSS Grid appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Scaling Organizations Should Consider Building a Website Backed by a CRM Platform

Css Tricks - Mon, 06/28/2021 - 3:29am

To make some terminology clear here:

  • CMS = Content Management System
  • CRM = Customer Relationship Management

Both are essentially database-backed systems for managing data. HubSpot is both, and much more. Where a CMS might be very focused on content and the metadata around making content useful, a CRM is focused on leads and making communicating with current and potential customers easier.

They can be brothers-in-arms. We’ll get to that.

Say a CRM is set up for people. You run a Lexus dealership. There is a quote form on the website. People fill it out and enter the CRM. That lead can go to your sales team for taking care of that customer.

But a CRM could be based on other things. Say instead of people it’s based on real estate listings. Each main entry is a property, with essentially metadata like photos, address, square footage, # of bedrooms/baths, etc. Leads can be associated with properties.

That would be a nice CRM setup for a real estate agency, but the data that is in that CRM might be awfully nice for literally building a website around those property listings. Why not tap into that CRM data as literal data to build website pages from?

That’s what I mean by a CRM and CMS being brothers-in-arms. Use them both! That’s why HubSpot can be an ideal home for websites like this.

To keep that tornado of synergy going, HubSpot can also help with marketing, customer service, and integrations. So there is a lot of power packed into one platform.

And with that power, also a lot of comfort and flexibility.

  • You’re still developing locally.
  • You’re still using Git.
  • You can use whatever framework or site-building tools you want.
  • You’ve got a CLI to control things.
  • There is a VS Code Extension for super useful auto-complete of your data.
  • There is a staging environment.

And the features just keep coming. HubSpot really has a robust set of tools to make sure you can do what you need to do.

As developer-rich as this all is, it doesn’t mean that it’s developer-only. There are loads of tools for working with the website you build that require no coding at all: dashboards for content management, data wrangling, style control, and even literal drag-and-drop page builders.

It’s all part of a very learnable system.

Themes, templates, modules, and fields are the objects you’ll work with most in HubSpot CMS as a developer. Using these different objects effectively lets you give content creators the freedom to work and iterate on websites independently while staying inside style and layout guardrails you set.

Get Started with HubSpot CMS

The post Scaling Organizations Should Consider Building a Website Backed by a CRM Platform appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Custom Property Brain Twisters

Css Tricks - Fri, 06/25/2021 - 8:39am

I am part of that 82% that got it wrong in Lea’s quiz (tweet version).

Here’s the code:

:root {
  --accent-color: skyblue;
}

div {
  --accent-color: revert;
  background: var(--accent-color, orange);
}

So what background do I expect <div> to have?

My brain goes like this:

  1. Well, --accent-color is declared, so it’s definitely not orange (the fallback).
  2. The value for the background is revert, so it’s essentially background: revert;
  3. The background property doesn’t inherit though, and even if you force it to, it would inherit from the <body>, not the root.
  4. So… transparent.



[Because the value is revert it] cancels out any author styles, and resets back to whatever value the property would have from the user stylesheet and UA stylesheet. Assuming there is no --accent-color declaration in the user stylesheet, and of course UA stylesheets don’t set custom properties, then that means the property doesn’t have a value.

Since custom properties are inherited properties (unless they are registered with inherits: false, but this one is not), this means the inherited value trickles in, which is — you guessed it — skyblue.

Stephen posted a similar quiz the other day:

CSS variable riddle: What color will the <p> element be?

— Shaw (@shshaw) June 4, 2021

Again, my brain does it totally wrong. It goes:

  1. OK, well, --color is declared, so it’s not blue (the fallback).
  2. It’s not red because the second declaration will override that one.
  3. So, it’s essentially like p { color: inherit; }.
  4. The <p> will inherit yellow from the <body>, which it would have done naturally anyway, but whatever, it’s still yellow.


Apparently inherit there is actually inheriting from the next place up the tree that sets it, which html does, so green. That actually is how normal inheriting works. It’s just a brain twister because it’s easy to conflate color the property with --color the custom property.

It also might be useful to know that when you actually declare a custom property with @property you can say whether you want it to inherit or not. So that would change the game with these brain twisters!

@property --property-name {
  syntax: '<color>';
  inherits: false;
  initial-value: #c0ffee;
}

The post Custom Property Brain Twisters appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

How to Cancel Pending API Requests to Show Correct Data

Css Tricks - Fri, 06/25/2021 - 4:33am

I recently had to create a widget in React that fetches data from multiple API endpoints. As the user clicks around, new data is fetched and marshalled into the UI. But it caused some problems.

One problem quickly became evident: if the user clicked around fast enough, as previous network requests got resolved, the UI was updated with incorrect, outdated data for a brief period of time.

We can debounce our UI interactions, but that fundamentally does not solve our problem. Outdated network fetches will resolve and update our UI with wrong data up until the final network request finishes and updates our UI with the final correct state. The problem becomes more evident on slower connections. Furthermore, we’re left with useless network requests that waste the user’s data.
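For reference, a debounce helper like the one alluded to here takes only a few lines (this is a generic sketch, not the demo’s exact implementation). It collapses a burst of calls into one, which reduces how many requests we start, but it cannot cancel a request that is already in flight:

```javascript
// Generic debounce sketch: waits `delay` ms after the last call in a
// burst before invoking `fn`, dropping the intermediate invocations.
// Note it does nothing about fetches that have already started.
function debounce(fn, delay) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delay);
  };
}
```

Wrapping the fetch-triggering handler in `debounce(handler, 250)` would cut down on request volume, but the race between in-flight responses remains.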

Here is an example I built to illustrate the problem. It grabs game deals from Steam via the cool Cheap Shark API using the modern fetch() method. Try rapidly updating the price limit and you will see how the UI flashes with wrong data until it finally settles.

CodePen Embed Fallback

The solution

Turns out there is a way to abort pending DOM asynchronous requests using an AbortController. You can use it to cancel not only HTTP requests, but event listeners as well.

The AbortController interface represents a controller object that allows you to abort one or more Web requests as and when desired.

Mozilla Developer Network

The AbortController API is simple: it exposes an AbortSignal that we insert into our fetch() calls, like so:

const abortController = new AbortController()
const signal = abortController.signal

fetch(url, { signal })

From here on, we can call abortController.abort() to make sure our pending fetch is aborted.
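The controller/signal handshake can be seen without any network involved at all. The signal carries an aborted flag and fires an abort event, which is what fetch() listens for internally (a minimal sketch):

```javascript
// Minimal AbortController demo with no network: calling abort() flips
// signal.aborted to true and synchronously fires the "abort" event
// for any listener to react to.
const controller = new AbortController();
const { signal } = controller;

const log = [];
signal.addEventListener('abort', () => log.push('abort fired'));

log.push(`aborted before: ${signal.aborted}`);
controller.abort();
log.push(`aborted after: ${signal.aborted}`);
```

When a fetch() carrying this signal is aborted, its promise rejects with a DOMException named "AbortError", which you can catch and ignore.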

Let’s rewrite our example to make sure we are canceling any pending fetches and marshalling only the latest data received from the API into our app:

CodePen Embed Fallback

The code is mostly the same with few key distinctions:

  1. It creates a new cached variable, abortController, in a useRef in the <App /> component.
  2. For each new fetch, it initializes that fetch with a new AbortController and obtains its corresponding AbortSignal.
  3. It passes the obtained AbortSignal to the fetch() call.
  4. It aborts itself on the next fetch.
const App = () => {
  // Same as before, local variable and state declaration
  // ...

  // Create a new cached variable abortController in a useRef() hook
  const abortController = React.useRef()

  React.useEffect(() => {
    // If there is a pending fetch request with associated AbortController, abort
    if (abortController.current) {
      abortController.current.abort()
    }
    // Assign a new AbortController for the latest fetch to our useRef variable
    abortController.current = new AbortController()
    const { signal } = abortController.current

    // Same as before
    fetch(url, { signal }).then(res => {
      // Rest of our fetching logic, same as before
    })
  }, [
    abortController,
    sortByString,
    upperPrice,
    lowerPrice,
  ])
}

Conclusion

That’s it! We now have the best of both worlds: we debounce our UI interactions and we manually cancel outdated pending network fetches. This way, we are sure that our UI is updated once and only with the latest data from our API.

The post How to Cancel Pending API Requests to Show Correct Data appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

Chapter 9: Community

Css Tricks - Thu, 06/24/2021 - 4:30am

In April of 2009, Yahoo! shut down GeoCities. Practically overnight, the once beloved service had its signup page replaced with a vague message announcing its closure.

We have decided to discontinue the process of allowing new customers to sign up for GeoCities accounts as we focus on helping our customers explore and build new relationships online in other ways. We will be closing GeoCities later this year.

Existing GeoCities accounts have not changed. You can continue to enjoy your web site and GeoCities services until later this year. You don’t need to change a thing right now — we just wanted you to let you know about the closure as soon as possible. We’ll provide more details about closing GeoCities and how to save your site data this summer, and we will update the help center with more details at that time.

In the coming months, the company would offer little more detail than that. Within a year, user homepages built with GeoCities would blink out of existence, one by one, until they were all gone.

Reactions to the news ranged from outrage to contemptful good riddance. In general, however, the web lamented about a great loss. Former GeoCities users recalled the sites that they built using the service, often hidden from public view, and often while they were very young.

For programmer and archivist Jason Scott, nostalgic remembrances did not go far enough. He had only recently created the Archive Team, a rogue group of Internet archivists willing to lend their compute cycles to the rescue of soon-to-depart websites. The Archive Team monitors sites on the web marked for closure. If they find one, they run scripts on their computers to download as much of the site as they can before it disappears.

Scott did not think the question of whether or not GeoCities deserved to exist was relevant. “Please recall, if you will, that for hundreds of thousands of people, this was their first website,” he posted to his website not long after Yahoo!’s announcement. “[Y]ou could walk up to any internet-connected user, hand them the URL, and know they would be able to see your stuff. In full color.” GeoCities wasn’t simply a service. It wasn’t just some website. It was a burst of creative energy that surged from the web.

In the weeks and months that followed, the Archive Team set to work downloading as many GeoCities sites as they could. They would end up with millions in their archive before Yahoo! pulled the plug.

Chris Wilson recalled the promise of the early web in a talk looking back on his storied career with Mosaic, then Internet Explorer, and later Google Chrome. The first web browser, developed by Sir Tim Berners-Lee, included the ability for users to create their own websites. As Wilson remembers it, that was the de facto assumption about the web—that it would be a participatory medium.

“Everyone can be an author. Everyone would generate content,” Wilson said, “We had the idea that web server software should be free and everyone would run a server on their machine.” His work on Mosaic included features well ahead of their time, like built-in annotations so that users could collaborate and share thoughts on web documents together. They built server software in the hopes that groups of friends would cluster around common servers. By the time Netscape skyrocketed to popularity, however, all of those features had faded away.

GeoCities represented the last remaining bastion of this original promise of the web. Closing the service down, abruptly and without cause, was a betrayal of that promise. For some, it was the writing on the wall: the web of tomorrow was to look nothing like the web of yesterday.

In a story he recalls frequently, David Bohnett learned about the web on an airplane. Tens of thousands of feet up, untethered from any Internet network, he first saw mention of the web in a magazine. Soon thereafter, he fell in love.

Bohnett is a naturally empathetic individual. The long arc of his career so far has centered on bringing people together, both as a technologist and as a committed activist. As a graduate student, he worked as a counselor answering calls on a crisis hotline and became involved in the gay rights movement at his school. In more recent years, Bohnett has devoted his life to philanthropy.

Finding connection through compassion has been a driving force for Bohnett for a long time. At a young age, he recognized the potential of technology to help him reach others. “I was a ham radio operator in high school. It was exciting to collect postcards from people you talked to around the world,” he would later say in an interview. “[T]hat is a lot of what the Web is about.”

Some of the earliest websites brought together radical subcultures and common interests. People felt around in the dark of cyberspace until they found something they liked.

Riding a wave of riot grrrl ephemera in the early 1990s, ChickClick was an early example. With its mix of articles and message boards, ChickClick gave women and young girls a place to gather and swap stories from their own experience.

Much of the site centered on its strident creators, sisters Heather and Heidi Swanson. Though they each had their own areas of responsibility—Heidi provided the text and the editorial, Heather acted as the community liaison—both were integral parts of the community they created. ChickClick would not exist without the Swanson sisters. They anchored the site to their own personalities and let it expand through like-minded individuals.

Eventually, ChickClick grew into a network of linked sites, each focused on a narrower demographic: an interconnected universe of women on the web. The cost of expanding was virtually zero, just a few more bytes zipping around the Internet. ChickClick’s greatest innovation came when it offered users their own homepages. Using a rudimentary website builder, visitors could create their own space on the web, for free and hosted by ChickClick. Readers were suddenly transformed into direct participants in the universe they had grown to love.

Bohnett would arrive at a similar idea not long after. After a brief detour running a more conventional web services agency called Beverly Hills Internet, Bohnett and his business partner John Rezner tried something new. In 1994, Bohnett sent around an email to some friends inviting them to create a free homepage (up to 15MB) on their experimental service. The project was called GeoCities.

What made GeoCities instantly iconic was that it reached for a familiar metaphor in its interface. When users created an account for the first time they had to pick an actual physical location on a virtual map—the digital “address” of their website. “This is the next wave of the net—not just information but habitation,” Bohnett would say in a press release announcing the project. Carving out a real space in cyberspace would become a trademark of the GeoCities experience. For many new users, it made the confusing world of the web feel lived-in and real.

The GeoCities map was broken up into a handful of neighborhoods users could join. Each neighborhood had a theme, though there wasn’t much rhyme or reason to what they were called. Some were based on real-world locations, like Beverly Hills for fashion aficionados or Broadway for theater nerds. Others simply played to a theme, like Area51 for the sci-fi crowd or Heartland for parents and families. Themes weren’t enforced, and most were later dropped in everything but name.

Credit: One Terabyte of Kilobyte Age

Neighborhoods were limited to 10,000 people. When that number was reached, the neighborhood expanded into suburbs. Everywhere you went on GeoCities there was a tether to real, physical spaces.

Like any real-world community, no two neighborhoods were the same. And while some people weeded their digital gardens and tended to their homepages, others left their spaces abandoned and bare, gone almost as soon as they arrived. But a core group of people often gathered in their neighborhoods around common interests and established a set of ground rules.

Historian Ian Milligan has done extensive research on the mechanics and history of GeoCities. In his digital excavation, he discovered a rich network of GeoCities users who worked hard to keep their neighborhoods orderly and constructive. Some neighborhoods assigned users as community liaisons, something akin to a dorm room RA, or neighborhood watch. Neighbors were asked to (voluntarily) follow a set of rules. Select members acted as resources, reaching out to others to teach them how to build better homepages. “These methods, grounded in the rhetoric of both place and community,” Milligan argues, “helped make the web accessible to tens of millions of users.”

For a large majority of users, however, GeoCities was simply a place to experiment, not a formal community. GeoCities would eventually become one of the web’s most popular destinations. As more amateurs poured in, it would become known for a certain garish aesthetic: pixelated GIFs of construction workers, or bright text on bright backgrounds. People used their homepages to host their photo albums, or make celebrity fan sites, or to write about what they had for lunch. The content of GeoCities was as varied as the entirety of human experience. And it became the grounding for a lot of what came next.

“So was it community?” BlackPlanet founder Omar Wasow would later ask. “[I]t was community in the sense that it was user-generated content; it was self-expression.” Self-expression is a powerful ideal, and one that GeoCities proved could bring people together.

Many early communities, GeoCities in particular, offered a charming familiarity in real world connection. Other sites flipped the script entirely to create bizarre and imaginative worlds.

Neopets began as an experiment by students Donna Williams and Adam Powell in 1999. Its first version—a prototype that mixed Williams’ art and Powell’s tech—had many of the characteristics that would one day make it wildly popular. Users could collect and raise virtual pets inside the fictional universe of Neopia. It operated like the popular handheld toy Tamagotchi, but multiplied and remixed for cyberspace.

Beyond a loose set of guidelines, there were no concrete objectives. No way to “win” the game. There were only the pets, and pet owners. Owners could create their own profiles, which let them display an ever-expanding roster of new pets. Williams and Powell infused the site with their own personalities, pulling from their imaginations. They created “unique characters,” as Williams would later describe it, “something fantasy-based that could live in this weird, wonderful world.”

As the site grew, the universe inside it did as well. Neopoints could be earned through online games, not so much a formal objective as an in-world currency. They could be spent on accessories or trinkets to exhibit on profiles, traded on the Neopian stock market (a fully operational simulation of the real one), or used to buy pets at auction. The tens of thousands of users that soon flocked to the site created an entirely new world, mapped on top of a digital one.

Like many community creators, Williams and Powell were fiercely protective of what they had built, and of the people who used it. They worked hard to create an online environment that was safe and free from cheaters, scammers, and malevolent influence. Those who were found breaking the rules were kicked out. As a result, a younger audience, one made up mostly of young girls, was able to find its place inside of Neopia.

Neopians—as Neopets owners would often call themselves—rewarded the effort of Powell and Williams by enriching the world however they could. Together, and without any real plan, the users of Neopets crafted a vast community teeming with activity and with its own set of legal and normative standards. The trade market flourished. Users traded tips on customizing profiles, or worked together to find Easter eggs hidden throughout the site. One of the more dramatic examples of users taking ownership of the site was The Neopian Times, an entirely user-run, in-universe newspaper documenting the fictional goings-on of Neopia. Its editorial run has spanned decades, and continues to this day.

Though an outside observer might find the goings-on of Neopets frivolous, they were a serious endeavor undertaken by the site’s most devoted fans. It became a place for early web adventurers, mostly young girls and boys, to experience a version of the web that was fun, and predicated on an idea of user participation. Using a bit of code, Neopians could customize their profiles to add graphics, colors, and personality. “Neopets made coding applicable and personal to people (like me),” said one former user, “who otherwise thought coding was a very impersonal activity.” Many Neopets coders went on to make that their careers.

Neopets was fun and interesting and limited only by the creativity of its users. It was what many imagined the web could look like.

The site eventually languished under its own ambition. After it was purchased and run by Doug Dohring and, later, Viacom, it set its sights on becoming a multimedia franchise. “I never thought we could be bigger than Disney,” Dohring once said in a profile in Wired, revealing just how far that ambition went, “but if we could create something like Disney – that would be phenomenal.” As the site began to lean harder into somewhat deceptive advertising practices and emphasize expansion into different mediums (TV, games, etc.), Neopets began to overreach. Unable to keep pace with the rapid developments of the web, it has been sold to a number of different owners. The site is still intact, and thanks to its users, thriving to this day.

Candice Carpenter thought a village was a handy metaphor for an online community. Her business partner and co-founder, Nancy Evans, suggested adding an “i” to it, for interactive. Within a few years, iVillage would rise to the highest peak of Internet fortunes and hype. Carpenter would cultivate a reputation for being charismatic, fearless, and often divisive, a central figure in the pantheon of dot-com mythology. Her meteoric rise, however, began with a simple idea.

By the mid-90s, community was a bundled, repeatable, commoditized product (or to some, a “totally overused buzzword,” as Omar Wasow would later put it). Search portals like Yahoo! and Excite were popular, but their utility came from bouncing visitors off to other destinations. Online communities had a certain stickiness, as one profile in The New Yorker put it, “the intangible quality that brings individuals to a Web site and holds them for long sessions.”

That unique quality attracted advertisers hoping to monetize the attention of a growing base of users. Waves of investment in community, whatever that meant at any given moment, followed. “The lesson was that users in an online community were perfectly capable of producing value all by themselves,” Internet historian Brian McCullough describes. The New Yorker piece framed it differently. “Audience was real estate, and whoever secured the most real estate first was bound to win.” was set against the backdrop of this grand drama. Its rapid and spectacular rise to prominence and fall from grace are well documented. The site itself was a series of chat rooms organized by topic, created by recent Cornell alumni Stephan Paternot and Todd Krizelman. It offered a fresh take on standard chat rooms, enabling personalization and fun in-site tools.

Backed by the notoriously aggressive Wall Street investment bank Bear Stearns, and run by green, youngish recent college grads, theGlobe rose to a heavily inflated valuation in full public view. “We launched nationwide—on cable channels, MTV, networks, the whole nine yards,” Paternot recalls in his book about his experience. “We were the first online community to do any type of advertising and the fourth or fifth site to launch a TV ad campaign.” Its collapse would be just as precipitous, and just as public. The site’s founders would be on the covers of magazines and the talk of late night television shows as examples of dot-com glut, with just a hint of schadenfreude.

So too does iVillage get tucked into the annals of dot-com history. The site’s often controversial founders were frequent features in magazine profiles and television interviews. Carpenter attracted media attention as deftly as she maneuvered her business through rounds of investment and a colossally successful IPO. The company’s culture was well known in the press for being chaotic, resulting in a high rate of turnover that saw it go through five Chief Financial Officers in four years.

And yet this ignores the community that iVillage managed to build. It began as a collection of different sites, each with a mix of message boards and editorial content centered around a certain topic. The first, a community for parents known as Parent Soup, which began at AOL, was their flagship property. Before long, it spanned sixteen interconnected websites. “iVillage was built on a community model,” writer Claire Evans describes in her book Broad Band, “its marquee product was forums, where women shared everything from postpartum anxiety and breast cancer stories to advice for managing work stress and unruly teenage children.”

Candice Carpenter (left) and Nancy Evans (right).
Image credit: The New Yorker

Carpenter had a bold and clear vision when she began, a product that had been brewing for years. After growing tired of the slow pace of growth in positions at American Express and QVC, Carpenter was given more free rein consulting for AOL. It was her first experience with an online world. There wasn’t a lot that impressed her about AOL, but she liked the way people gathered together in groups. “Things about people’s lives that were just vibrant,” she’d later remark in an interview, “that’s what I felt the Internet would be.”

Parent Soup began as a single channel on AOL, but it soon moved to the web along with similar sites for different topics and interests—careers, dating, health and more. What drew people to iVillage sites was their authenticity, their ability to center conversations around topics and bring together people who were passionate about sharing advice. The site was co-founded by Nancy Evans, who had years of experience as an editor in the media industry. Together, they resisted the urge to control every aspect of their community. “The emphasis is more on what visitors to the site can contribute on the particulars of parenthood, relationships and workplace issues,” one writer noted, “rather than on top-tier columnists spouting advice and other more traditional editorial offerings used by established media companies.”

There was, however, something that bound all of the sites together: a focus that made iVillage startlingly consistent and popular. Carpenter would later put it concisely: “the vision is to help women in their lives with the stuff big and small that they need to get through.” Even as the site expanded to millions of users, positioned itself as a network specifically for women, and went through one of the largest IPOs in the tech industry, that simple fact would remain true.

What’s forgotten in the history of dot-com community is the community. There were, of course, lavish stories of instant millionaires and unbounded ambition. But much of the content was generated by people, people who found each other across vast distances through a shared understanding. The lasting connections that became possible through these communities would outlast the boom and bust cycle of Internet business. Sites like iVillage became benchmarks for later social experiments to aspire to.

In February of 2002, Edgar Enyedy, an active contributor to the still-new Spanish version of Wikipedia, posted to the Wikipedia mailing list, and to Wikipedia’s founder, Jimmy Wales. “I’ve left the project,” he announced. “Good luck with your wikiPAIDia [sic].”

As Wikipedia grew in the years after it officially launched in 2001, it began to expand to other countries. As it did, each community took on its own tenor and tone, adapting the online encyclopedia to the needs of each locale. “The organisation of topics, for example,” Enyedy would later explain, “is not the same across languages, cultures and education systems. Historiography is also obviously not the same.”

Enyedy’s abrupt exit from the project, and his callous message, were prompted by a post from Wikipedia’s first editor-in-chief, Larry Sanger. Sanger had been instrumental in the creation of Wikipedia, but he had recently been asked to step back as a paid employee due to a lack of funds. Sanger suggested that sometime in the near future, Wikipedia might turn to ads.

It was more wishful thinking than actual fact—Sanger hoped that ads might bring back his job. But it was enough to spur Enyedy into action. In The Wikipedia Revolution, author Andrew Lih explains why: “Advertising is the third-rail topic in the community—touch it only if you’re not afraid to get a massive shock.”

By the end of the month, Enyedy had created an independent fork of the Spanish Wikipedia site, along with a list of demands that would have to be met for him to rejoin the project. The list included moving the site from a .com to a .org domain, moving the servers to infrastructure owned by the community, and, of course, a guarantee that ads would not be used. Most of these demands would eventually be met, though it’s hard to tell what influence Enyedy had.

The fork of Wikipedia was both a legally and ideologically acceptable project. Wikipedia’s content is licensed under the Creative Commons license; it is freely open and distributable. The code that runs it is open source. It was never a question of whether a fork of Wikipedia was possible. It was a question of why it felt necessary. And the answer speaks to the heart of the Wikipedia community.

Wikipedia did not begin with a community, but rather as something far more conventional. The first iteration was known as Nupedia, created by Jimmy Wales in early 2000. Wales imagined a traditional encyclopedia ported into the digital space. An encyclopedia that lived online, he reasoned, could be more adaptable than the multi-volume tomes found buried in library stacks or gathering dust on bookshelves.

Wales was joined by then graduate student Larry Sanger, and together they recruited a team of expert writers and editors to contribute to Nupedia. To guarantee that articles were accurate, they set up a meticulous set of guidelines for entries. Each article contributed to Nupedia went through rounds of feedback and was subject to strict editorial oversight. After a year of work, Nupedia had less than a dozen finished articles and Wales was ready to shut the project down.

However, he had recently been introduced to the concept of a wiki, a website that anybody can contribute to. As software goes, the wiki is not overly complex. Every page has a publicly accessible “Edit” button. Anyone can go in and make edits, and those edits are tracked and logged in real time.

In order to solicit feedback on Nupedia, Wales had set up a public mailing list anyone could join. In the year since it was created, around 2,000 people had signed up. In January of 2001, he sent a message to that mailing list with a link to a wiki.

His hope was that he could crowdsource early drafts of articles from his project’s fans. Instead, users contributed a thousand articles in the first month. Within six months, there were ten thousand. Wales renamed the project to Wikipedia, changed the license for the content so that it was freely distributable, and threw open the doors to anybody that wanted to contribute.

The rules and operations of Wikipedia can be difficult to define. It has evolved almost in spite of itself. Most articles begin with a single, random contribution and evolve from there. “Wikipedia continues to grow, and articles continue to improve,” media theorist Clay Shirky wrote of the site in his seminal work Here Comes Everybody, “the process is more like creating a coral reef, the sum of millions of individual actions, than creating a car. And the key to creating those individual actions is to hand as much freedom as possible to the average user.”

From these seemingly random connections and contributions, a tight-knit group of frequent editors and writers has formed at the center of Wikipedia. Programmer and famed hacktivist Aaron Swartz described how it all came together. “When you put it all together, the story becomes clear: an outsider makes one edit to add a chunk of information, then insiders make several edits tweaking and reformatting it,” described Swartz, adding, “as a result, insiders account for the vast majority of the edits. But it’s the outsiders who provide nearly all of the content.” And these insiders, as Swartz refers to them, created a community.

“One of the things I like to point out is that Wikipedia is a social innovation, not a technical innovation,” Wales once said. In the discussion pages of articles and across mailing lists and blogs, Wikipedians have found ways to collaborate and communicate. The work is distributed and uneven—a small community is responsible for a large number of edits and refinements to articles—but it is impressively collated. Using the ethos of open source as a guide, the Wikipedia community created a shared set of expectations and norms, using the largest repository of human knowledge in existence as their anchor.

Loosely formed and fractured into factions, the Wikipedia community nevertheless follows a set of principles that it has defined over time. Their conventions are defined and redefined on a regular basis, as the community at the core of Wikipedia grows. When it finds a violation of these principles—such as the suggestion that ads will be plastered on the articles they helped create—they sometimes react strongly.

Wikipedia learned from the fork of Spanish Wikipedia, and set up a continuous feedback loop that has allowed its community to remain at the center of decision-making. This was a primary focus of Katherine Maher, who became executive director of Wikimedia, the organization behind Wikipedia, in 2016, and then CEO three years later. Wikimedia’s involvement in the community, in Maher’s words, “allows us to be honest with ourselves, and honest with our users, and accountable to our users in the spirit of continuous improvement. And I think that that is a different sort of incentive structure that is much more freeing.”

The result is a hive mind sorting collective knowledge that thrives independently twenty years after it was created. Both Maher and Wales have referred to Wikipedia as a “part of the commons,” a piece of informational infrastructure as important as the cables that pipe bandwidth around the world, built through the work of community.

Fanfiction can be hard to define. It has been the seed of subcultures and an ideological outlet; the subject of intense academic and philosophical inquiry. Fanfiction has often been noted for its unity through anti-hegemony—it is by its very nature illegal or, at the very least, extralegal. Professor Bronwen Thomas has defined the practice plainly: “Stories produced by fans based on plot lines and characters from either a single source text or else a ‘canon’ of works; these fan-created narratives often take the pre-existing storyworld in a new, sometimes bizarre, direction.” Fanfiction predates the Internet, but the web acted as its catalyst.

Message boards, or forums, began as a technological experiment on the web, a way of replicating the Usenet groups and bulletin boards of the pre-web Internet. Once the technology had matured, people began to use them to gather around common interests. These often began with a niche—fans of a TV show, or a unique hobby—which then served as the starting point for much wider conversation. Through threaded discussions, forum-goers would discuss a whole range of things in, around, and outside of the message board theme. “If urban history can be applied to virtual space and the evolution of the Web,” one writer recalls, “the unruly and twisted message boards are Jane Jacobs. They were built for people, and without much regard to profit.”

Some stayed small (and some even remain so). Others grew. Fans of the TV show Buffy the Vampire Slayer had used the official message board of the show for years. It famously took on a life of its own when the boards were shut down, and the users funded and maintained an identical version to keep the community alive. Sites like Newgrounds and DeviantART began as places to discuss games and art, respectively. Before long they were the launching pad for the careers of an entire generation of digital creators.

Fandom found something similar on the web. On message boards and on personal websites, writers swapped fanfiction stories, and readers flocked to boards to find them. They hid in plain sight, developing rules and conventions for how to share among one another without being noticed.

In the fall of 1998, developer Xing Li began posting to a number of Usenet fanfiction groups. In what would come to be known as his trademark sincerity, his message read: “I’m very happy to announce that is now officially open!!!!!! And we have done it 3 weekss ahead of projected finish date. While everyone trick-or-treated we were hard at working debugging the site.”

Li wasn’t a fanfiction creator himself, but he thought he had stumbled upon a formula for its success. What made unique was that its community tools—built-in tagging, easy subscriptions to stories, freeform message boards for discussions—were built with fandom in mind. As one writer would later describe this winning combination, “its secret to success is its limited moderation and fully-automated system, meaning posting is very quick and easy and can be done by anyone.”

Fanfiction creators found a home at, or FF.N, as it was often shortened to. Throughout its early years, Li had a nerdy and steadfast devotion to the development of the site. He’d post sometimes daily to an open changelog on the site, a mix of site-related updates and deeply personal anecdotes. “Full-text searching allows you to search for keywords/phrases within every fanfiction entry in our huge archive,” one update read. “I can’t get the song out of my head and I need to find the song or I will go bonkers. Thanks a bunch. =)” read another (the song was The Cure’s “Boys Don’t Cry”).

Li’s cult of personality and the unique position of the site made it immensely popular. For years, the fanfiction community had stuck to the shadows. gave them a home. Members took it upon themselves to create a welcoming environment, establishing norms and procedures for tagging and discoverability, as well as feedback for writers.

The result was a unique community on the web, one whose members attempted to lift one another up. “Sorry. It’s just really gratifying to post your first fic and get three hits within about six seconds. It’s pretty wild, I haven’t gotten one bad review on FF.N…” one fanfic writer posted in the site’s early days. “That makes me pretty darn happy :)”

The reader and writer relationship on was fluid. The stories generated by users acted as a reference for conversation among fellow writers and fanfiction readers. One idea often flows into the next, and it is only through sharing content that it takes on meaning. “Yes, they want recognition and adulation for their work, but there’s also the very strong sense that they want to share, to be part of something bigger than themselves. There’s a simple, human urge to belong.”

As the dot-com era waned, community was repackaged and resold as the social web. The goals of early social communities were looser than the tight niches and imaginative worlds of early community sites. Most functioned to bring one’s real life into digital space., launched in 1995, is one of the earliest examples of this type of site. Its founder, Randy Conrads, believed that the web was best suited for reconnecting people with their former schoolmates.

Not long after, AsianAve launched from the chaotic New York apartment where the site’s six co-founders lived and worked. Though it had a specific demographic—Asian Americans—AsianAve was modeled after a few other early social web experiences, like SixDegrees. The goal was to simulate real-life friend groups, and to make the web a fun place to hang out. “Most of Asian Avenue’s content is produced by members themselves,” an early article in The New York Times describes. “[T]he site offers tool kits to create personal home pages, chat rooms and interactive soap operas.” Eventually, one of the site’s founders, Benjamin Sun, began to explore how he could expand the idea beyond a single demographic. That’s when he met Omar Wasow.

Wasow was fascinated with technology from a young age. When he was a child, he fell in love first with early video games like Pong and Donkey Kong. By high school, he had made the leap to programmer. “I begged my way out of wood shop into computer science class. And it really changed my life. I went from being somebody who consumed video games to creating video games.”

In 1993, Wasow founded New York Online, a Bulletin Board System that targeted a “broad social and ethnic ‘mix’,” instead of pulling from the same limited pool of upper-middle class tech nerds most networked projects focused on. To earn an actual living, Wasow developed websites for popular magazine brands like Vibe and Essence. It was through this work that he crossed paths with Benjamin Sun.

By the mid-1990s, Wasow had already gathered a loyal following and a public profile, having been featured in magazines like Newsweek and Wired. Wasow's reputation centered on his ability to build communities thoughtfully, to explore the social ramifications of his tech before and while he built it. When Sun approached him about expanding AsianAve to an African American audience, a site that would eventually be known as BlackPlanet, he applied the same thinking.

Wasow didn't want to build a community from scratch. Any site that they built would need to be a continuation of the strong networks Black Americans had been building for decades. "A friend of mine once shared with me that you don't build an online community; you join a community," Wasow once put it. "BlackPlanet allowed us to become part of a network that already had centuries of black churches and colleges and barbecues. It meant that we, very organically, could build on this very powerful, existing set of relationships and networks and communities."

BlackPlanet offered its users a number of ways to connect. A central profile—the same kind that MySpace and Facebook would later adopt—anchored a member’s digital presence. Chat rooms and message boards offered opportunities for friendly conversation or political discourse (or sometimes, fierce debate). News and email were built right into the app to make it a centralized place for living out your digital life.

By the mid-2000s, BlackPlanet was a sensation. It captured a large share of the African Americans who were coming online for the first time. Barack Obama, then a Senator running for President, joined the site in 2007. Its growth exploded into the millions; it was a seminal experience for black youth in the United States.

After he taught Oprah how to use the Internet in a segment on The Oprah Winfrey Show, Wasow's profile reached soaring heights. The New York Times dubbed him the "philosopher-prince of the digital age" for his considered community building. "The best the Web has to offer is community-driven," Wasow would later say. He never stopped building his community thoughtfully, and its members, in turn, became an integral part of the country's culture.

Before long, a group of developers would look at BlackPlanet and wonder how to adapt it to a wider audience. The result would be the web's first true social networks.

The post Chapter 9: Community appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

TablesNG — Improvements to table rendering in Chromium

Css Tricks - Wed, 06/23/2021 - 11:37am

When I blogged “Making Tables With Sticky Header and Footers Got a Bit Easier” recently, I mentioned that the “stickiness” improvement was just one of the features that got better for <table>s in Chrome as part of the TablesNG upgrade. I ain’t the only one who’s stoked about it.
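If you haven't tried it, the sticky improvement boils down to `position: sticky` finally working on table header cells in Chromium. A minimal sketch, assuming a plain `<table>` with a `<thead>` inside a scrollable container:

```css
/* Sticky table headers, post-TablesNG: sticky now applies
   directly to <th> cells instead of being silently ignored. */
thead th {
  position: sticky;
  top: 0;
  /* Give the header an opaque background so scrolling rows
     don't show through it. */
  background: white;
}
```

No wrapper-div hacks or JavaScript scroll listeners required, which is the whole appeal.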

But Bramus took it the rest of the nine yards and looked at all of the table enhancements. Every one of these is great. The kind of thing that makes CSS ever-so-slightly less frustrating.

Just the writing-mode stuff is fantastic.
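For instance, vertical header text used to be a minefield of broken layout in tables; now something like this (a sketch, assuming you want headers reading bottom-to-top) just works:

```css
/* Vertical table headers via writing-mode, which TablesNG
   makes behave consistently on table parts. */
thead th {
  writing-mode: vertical-rl;
  /* Flip the glyphs so the text reads upward instead of downward. */
  transform: rotate(180deg);
}
```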



The post TablesNG — Improvements to table rendering in Chromium appeared first on CSS-Tricks. You can support CSS-Tricks by being an MVP Supporter.

©2003 - Present Akamai Design & Development.