
Negative Keywords Google Ads: Strategy and Master Lists 2026

How to build, maintain, and organize negative keyword lists in Google Ads. Strategy, common mistakes, and the monthly cycle I run on real accounts.

Lionel Fenestraz · 29 April 2026 · 19 min read · Updated: April 2026
Figure: structure of shared negative keyword lists in Google Ads, organized by category and campaign.

Negative keywords in Google Ads don’t “tell Google to avoid” a term. They block it. It’s a hard instruction: if the user’s query contains that term (following the match type rules), the ad never enters the auction. The official Google Ads documentation describes it exactly that way, no ambiguity. That distinction matters, because it changes how you build the lists. You’re not suggesting, you’re closing doors.

This guide doesn’t repeat the search term report audit I published last week. Here we go straight to the real deliverable: how to build, maintain, and organize a negative list system that survives six months without degrading. If you already know negatives exist and you add them one by one when you remember, this is what you’re missing. I work on ecommerce accounts spending $1,500/month and up, but the method scales the same way as spend grows.

In 30 seconds:

  • Negative keywords block the ad impression, they don’t “suggest” Google avoid it, per the Google Ads documentation.
  • Four list types cover 90% of cases: universal, B2B/B2C, by product, and by intent.
  • Broad negatives do match plurals, typos, and close variants since 2019, but exact and phrase stay strict (Google Ads Help).
  • The Search Engine Journal guide with Moz data documents 5% to 40% PPC spend savings with systematic negative management.
  • A cycle that works: 90-minute monthly review plus quarterly audit of list x campaign intersections.

What are negative keywords and what changes in 2026?

A negative keyword is a rule that prevents your ad from competing for a given query. It doesn’t penalize, doesn’t reduce bid, doesn’t “prefer not to show it”. It blocks it. According to the official Google Ads documentation, each negative is evaluated against the actual user query using the specific match type of the negative, which can be broad, phrase, or exact.

Blocking isn’t the same as “suggesting”

This seems obvious, and it isn’t. When you add free as a broad negative, Google doesn’t “try” to avoid queries with free. It discards them before they enter the auction. The customer never sees you. It’s a binary filter layer that happens before Smart Bidding, before ad quality, before anything. That’s why an error in a negative can cut good traffic without warning.

Close variants: the asymmetry between positives and negatives

Since 2019, Google Ads expanded positive broad keywords to cover close variants: plurals, misspellings, synonyms, and reformulations with the same intent. Broad negatives were also updated to match those variants, per the current Google Ads documentation. But phrase and exact negatives stay strict. They block only the exact form or the closed phrase as you write it. What’s the practical result? If you put free course as an exact negative, queries like free courses may still come through. If you put it as a broad negative, both get blocked.

This asymmetry is the source of 70% of the mistakes I see. People add exact negatives for “precision”, forget plurals and variants, and months later discover they’re still paying for the same queries with one letter different.
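The asymmetry can be made concrete with a simplified model of the three negative match types: broad requires every term to appear in the query in any order, phrase requires the terms contiguous and in order, exact requires an identical query. This sketch deliberately does not model close-variant expansion (plurals, typos), which is exactly the gap described above for phrase and exact negatives; the function name and shape are my own, not a Google Ads API.

```python
def blocks(negative: str, match_type: str, query: str) -> bool:
    """Return True if the negative keyword would block this query
    under a simplified model of the three negative match types."""
    neg_terms = negative.lower().split()
    q_terms = query.lower().split()
    if match_type == "exact":
        # Exact negative: blocks only the identical query.
        return q_terms == neg_terms
    if match_type == "phrase":
        # Phrase negative: blocks when the terms appear contiguously, in order.
        n = len(neg_terms)
        return any(q_terms[i:i + n] == neg_terms
                   for i in range(len(q_terms) - n + 1))
    # Broad negative: blocks when every term appears, order irrelevant.
    return all(t in q_terms for t in neg_terms)
```

Under this model, `blocks("free course", "exact", "free courses")` is False (the plural slips through), while `blocks("free course", "broad", "best free online course")` is True: the same gap the article describes.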

What changes with AI Max and broad dominating in 2026

With AI Max for Search active, the volume of distinct queries a campaign matches grows quite a bit. The “Search terms matched by AI Max” column in the report lives alongside traditional keyword match queries. Negatives apply equally to both, but the review frequency has to go up in the first weeks after you turn it on.

What I’ve learned about AI Max and negatives: the new report column exposes queries that classic matching would never have shown you. That means your universal negative list, the one you’ve been refining for years, probably has gaps you didn’t know existed. It’s the one case where I recommend dedicating a full session (2-3 hours) to auditing the universal list after you turn AI Max on.

The official Google Ads documentation confirms that broad match negatives match plurals, misspellings, and close variants since 2019, but phrase and exact negatives still block only the exact declared form. This asymmetry between positives and negatives is a regular source of unexpected spend in poorly audited accounts.

The 4 list types I use in every account

Organizing negatives in a single giant list is the most common antipattern I find. When an account has 400 negatives in one shared list, nobody reviews them, nobody documents the reason, and in six months they end up blocking converting queries by accident. The Search Engine Journal guide citing Moz data documents savings of 5% to 40% with systematic negative management. The key word is “systematic”, and that starts with categorization.

The universal list: what you sell to no one

It’s the base list I apply to every search campaign in the account. It contains terms irrelevant to the entire business, without nuance. Words like free, jobs, employment, pdf, second hand, used, diy, do it yourself, reviews, forums, complaints. Ask yourself: is there any campaign in this account that wants this term? If the answer is a flat no, it goes to the universal list. If there’s a campaign where you do want it, it goes to a different list.

Separate B2B and B2C lists

If your account mixes business-facing and consumer-facing campaigns, keep two distinct shared lists. The B2B list blocks consumer terms (for home, family, gift) in campaigns aimed at businesses. The B2C list blocks corporate jargon (enterprise license, master contract, procurement) in consumer campaigns. Without this separation, a mixed account cannibalizes its own traffic constantly.

Lists by product or category

In ecommerce this becomes critical. If you sell sneakers and clothing, the sneaker campaign needs clothing, t-shirts, pants as negatives. The clothing one needs sneakers, boots, footwear. It looks redundant, and it is, but it prevents Smart Bidding from sending “sneaker” traffic to a t-shirt landing page when the match opens up. I keep one shared list per main category.

Lists by intent (informational vs transactional)

The fourth layer is the one fewest people use, and the one with the biggest impact. You separate informational terms (how, what is, difference between, comparison, review, analysis) from transactional ones (buy, price, deal, cheap). Then you apply the informational list to bottom-of-funnel campaigns and the transactional one to awareness campaigns if you don’t want to cannibalize. The classic pattern: brand campaigns block everything informational, research campaigns block buy/price.

In a cosmetics ecommerce account I worked on for 18 months, moving from a single list of 220 negatives to this four-list scheme cut the monthly maintenance time in half. Not because there were fewer terms, but because each list had a clear owner and a reason to exist. When you reviewed the B2C list, you knew what you were looking for.

Category distribution in a typical master list (pattern observed across ecommerce account audits, $1,500-15,000/month, 2025): informational ~35%, product not sold ~25%, wrong geo ~15%, competitor ~10%, wrong intent ~8%, B2B/B2C crossed ~7%.
Indicative distribution across master lists in the ecommerce accounts I audit. Informational queries always lead; product-not-sold is usually the second-largest block. Observational data, not an academic study.

How to build your master list from scratch

If you don’t have lists yet, order matters. I’ve tried several times to start by exporting the full search term report and filtering negatives in one go, and it always turns out worse than starting with the obvious ones. The reason is simple: at the beginning, common sense covers 60% of the value in 20 minutes. After that, the search term report adds refinement.

Obvious terms go first

Sit down with no data, just business knowledge. Write out in a doc the categories of the scheme (universal, B2B/B2C, product, intent). Under each, list the terms you know with 100% certainty you don’t want. Do you sell online courses? In universal, add in person, new york, chicago (if they’re not geographically relevant), jobs, university, civil service exam. Add your own obvious ones too. Don’t overthink it. If a term fits in two categories, leave it in the more general one.

That takes you 30-40 minutes and usually produces 60-120 initial negatives. They’re the floor, not the ceiling.

Then the search term report, but filtered

With the base set up, export the search terms report from the last 90 days sorted by cost. Read through the queries one by one with the doc open on the side. For each query with cost above $15-20 and zero conversions, ask yourself: what part of this query turns it into a negative? Sometimes it’s the whole query (exact negative). Sometimes it’s one word (broad negative that captures variants). Sometimes it’s a two-word phrase (phrase).
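That cost/zero-conversion filter is mechanical enough to script against the CSV export before you read query by query. A minimal sketch, assuming column names "Search term", "Cost", and "Conversions" (exports vary by account settings, so match them to yours):

```python
import csv
import io

def negative_candidates(report_csv: str, min_cost: float = 15.0):
    """Return (term, cost) pairs for search terms with cost above the
    threshold and zero conversions, sorted by cost descending."""
    rows = csv.DictReader(io.StringIO(report_csv))
    candidates = [
        (r["Search term"], float(r["Cost"]))
        for r in rows
        if float(r["Cost"]) > min_cost and float(r["Conversions"]) == 0
    ]
    # Highest-cost waste first: that is where the review time pays off.
    return sorted(candidates, key=lambda c: c[1], reverse=True)
```

The output is a candidate list, not a negative list: you still decide per query whether the negative should be the whole query, one word, or a phrase.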

The mistake many people make when starting out is adding whole queries as exact negatives. You pile up 800 exact negatives that each block only their literal form, and plurals keep costing you money. I prefer fewer negatives and more broad: one broad negative second hand blocks buy second hand, phone second hand, cheap second hand, all in one line.

Test before applying in bulk

Before pushing 200 negatives at once to a shared list, apply them to a single campaign for a week. Watch what happens to impressions, clicks, and cost. If any negative is cutting good traffic, it’ll show up that week as an unexpected drop in volume on keywords that were converting. When the test comes back clean, you push to the shared list. This step is tedious and nobody does it. That’s why half of the inherited accounts I audit have badly placed negatives blocking real conversions.

Document the why

The most important piece: a doc that runs alongside the list with one row per negative explaining when and why it was added. Looks like bureaucracy. It isn’t. When eight months from now someone (you or a colleague) sees the negative family in the B2B list and wonders whether to remove it, the doc will say: “Added in March 2026 because enterprise campaigns were getting family plan queries, which is consumer”. Without that doc, the negative gets deleted, and three weeks later you’re back to the same problem.
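The reason doc doesn’t need tooling; any spreadsheet works. If you want it scriptable, a minimal sketch of the one-row-per-negative format (the four columns are my convention, not a Google Ads feature):

```python
import csv
import io

LOG_FIELDS = ["negative", "list", "date_added", "reason"]

def log_negative(log: list, negative: str, shared_list: str,
                 date: str, reason: str) -> None:
    """Append one documented negative to the in-memory log."""
    log.append({"negative": negative, "list": shared_list,
                "date_added": date, "reason": reason})

def dump_log(log: list) -> str:
    """Serialize the log as CSV for storage next to the lists."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=LOG_FIELDS)
    writer.writeheader()
    writer.writerows(log)
    return out.getvalue()
```

The value is in the reason column: a negative whose row says who it protects and from what survives team changes; a bare term does not.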

In my onboarding process for new accounts, every account older than 12 months I’ve inherited has had no documentation on the reason for its negatives. Not one. It’s the piece that produces the most return and the one fewest people execute.

Common mistakes when adding negatives (and how to avoid them)

Three out of every four audits I run have at least one serious configuration error on negatives. I’m not talking about suboptimal negatives, I’m talking about negatives that cut converting traffic. The same error categories repeat, and all of them can be avoided with a 5-minute review process before you apply.

Blocking converting terms by accident

The classic case: the account added cheap as a broad negative years ago. Sounds reasonable, “we want premium traffic”. Problem: in ecommerce, cheap is a signal of high purchase intent. Blocking it cuts conversions. Before adding a broad negative on a potentially ambiguous word, filter the search term report by that word and look: how many conversions did it generate in the last 90 days? If it’s more than 3-5, don’t block it. Refine it.
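The conversion pre-check on an ambiguous word is also easy to script against the same search terms export. A sketch assuming "Search term" and "Conversions" columns (adapt to your export):

```python
import csv
import io

def conversions_containing(report_csv: str, word: str) -> float:
    """Total conversions across queries containing the word,
    matched as a whole word, case-insensitively."""
    rows = csv.DictReader(io.StringIO(report_csv))
    return sum(
        float(r["Conversions"])
        for r in rows
        if word.lower() in r["Search term"].lower().split()
    )
```

The decision rule from above, expressed against this helper: if the total for the last 90 days is above 3-5, don’t add the broad negative; refine it instead.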

Negatives that are too broad

free looks like an obvious negative. It is, except when you sell courses with a free tier and a paid tier. If your funnel depends on free trial or free demo, a broad negative free kills the top of the funnel. The fix is more specific phrase negatives like complete free course or free pdf download: they block the noise without blocking the useful layer.

Forgetting close variants in phrase/exact

If you put kids mattress as a phrase negative because you don’t sell children’s mattresses, you’ll still get queries like kids mattresses, mattress for kids small, children mattress. Phrase doesn’t match close variants. Options: (a) add the variants too, (b) move to a broad negative on kids if you can afford it. I usually go with (a) in sensitive categories and (b) when the term is unambiguous.

Not documenting the reason

I mentioned it in the build section, but it deserves a repeat because it’s the mistake that creates the most regressions. A negative without a documented reason eventually gets deleted. Three months later someone sees spend on the query you were blocking, adds it back, and the cycle restarts. If you’re tracking the pattern of common Google Ads mistakes, this is the one that burns the most money over the medium term.

How to keep lists alive: the monthly cycle

Negative lists aren’t a sprint, they’re infrastructure. The Optmyzr auditing workflow recommends light weekly checks combined with deep monthly audits, and the same cadence applies to negative management. What changes is the scope: instead of reviewing individual terms every week, you review the entire system every month.

The monthly session: 90 minutes

One Monday a month, on a fixed calendar, 90 minutes. Order I follow: (i) review the month’s log of which negatives got added at campaign level, (ii) spot repeated patterns (if the same negative appeared in 3+ campaigns, promote to shared list), (iii) remove duplicates across lists, (iv) update the reason doc. I don’t audit the list x campaign intersection monthly. That’s quarterly work.
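Step (ii), spotting negatives repeated across campaigns, is the one worth automating first. A sketch, assuming you can export campaign-level negatives into a dict of campaign name to negative terms (the input shape is my assumption, not an Ads export format):

```python
from collections import Counter

def promotion_candidates(campaign_negatives: dict, threshold: int = 3):
    """Negatives that appear in `threshold` or more campaigns:
    candidates for promotion to a shared list."""
    counts = Counter(
        neg
        # set() so a duplicate inside one campaign counts once
        for negatives in campaign_negatives.values()
        for neg in set(negatives)
    )
    return [neg for neg, n in counts.most_common() if n >= threshold]
```

Anything this returns goes through the same pre-check as any shared-list addition before you promote it; repetition across campaigns is a signal, not a verdict.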

The quarterly audit: intersections

Every 3 months, 3 hours. Central question: do the shared lists still apply to the right campaigns? When new campaigns get created during the quarter, people often forget to assign them the relevant shared lists. It also happens the other way: a B2B list stayed attached to a campaign that migrated to B2C and nobody updated it. This quarterly audit corrects those drifts, which are invisible day to day and add up to silent spend.
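The drift check itself is a set difference: for each campaign, compare the shared lists actually attached against the lists it should carry. A sketch where the expected-lists mapping is something you maintain by hand (both dict shapes are my assumption):

```python
def audit_intersections(attached: dict, expected: dict) -> dict:
    """Map each campaign to the shared lists it should have attached
    but doesn't. Empty dict means no drift."""
    return {
        campaign: sorted(set(lists) - set(attached.get(campaign, [])))
        for campaign, lists in expected.items()
        if set(lists) - set(attached.get(campaign, []))
    }
```

A new campaign created mid-quarter with only the universal list attached shows up here immediately, which is exactly the silent-spend case the quarterly audit exists to catch.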

Why monthly and not weekly

The weekly review is for the search term report, which was last week’s deliverable. Adding one-off negatives at the campaign or group level is part of the weekly cycle. But touching the shared lists every week is overfitting. Shared lists live at the month scale, not the week scale. If you change them every week, you lose the pattern. If you change them every three months, you lose currency.

The workflow recommended by Optmyzr combines light weekly checks with deep monthly audits. For negative keywords, the right frequency is: weekly at the campaign/group level, monthly for consolidation into shared lists, quarterly for auditing intersections between lists and campaigns.

Real examples of negative categories by industry

There’s no universal list that works everywhere. Each sector has distinct patterns. These three examples come from real accounts I run, simplified to protect confidential data but with the structure intact.

Fashion ecommerce

Universal: free, jobs, employment, second hand, pdf, pattern, tutorial, how to make. Typical trap: blocking kids/girls in adult categories but leaving them open in kids’ categories. Another classic: outlet blocked when the brand doesn’t want that positioning, even though the query has high purchase intent. That’s a strategic call, not a technical one. By product: each category blocks the names of the others (if the campaign is on t-shirts, pants and dresses go out).

B2B SaaS

Universal: free, open source, free alternative, download crack, tutorial, course, university. The intent list becomes critical: bottom-of-funnel campaigns block what is, how it works, definition, while awareness campaigns do the opposite and block price, buy, sign up. The geo layer matters if you license only to certain countries: country names outside target go to universal. By product: competitor names get blocked selectively only on brand campaigns, but left open on comparison campaigns if you have comparison content.

Professional services

Universal: free, template, pdf, example, free model, tutorial. Here free is high risk because a lot of informational services traffic uses that word. In legal consulting, for example, free consultation has high purchase intent. Blocking it cuts leads. By intent: queries with myself, self-employed, do it yourself are usually clear negatives if your service is for companies or firms, not individuals.

Workflow: how much time to spend and when to stop

I’ll be honest about something few consultants say: there’s a point of diminishing returns in negative management. After the first 90 days of serious work, each additional hour produces less savings than the previous one. Knowing when to stop is part of the job.

The first 90 days

This is where you gain the most. You start with nonexistent or disorganized lists, and you end with a documented four-list structure. Budget around 3-4 hours in week 1 (initial build), 1 hour weekly in weeks 2-4, and 90 minutes monthly during months 2-3. It’s the period with the highest return per hour invested.

Month 4 through month 12

Pure maintenance. 90 minutes monthly of formal session plus one-off adjustments during the weekly search term report reviews. You’re not building anymore, you’re tuning. The changes are small, incremental, and they barely show up individually, but they hold the CPA together over the medium term.

When it’s time to rebuild

There are three clear triggers that take you back to “build” mode even after months of routine: (1) structural business change (new channel, new product line, geographic expansion), (2) activation of AI Max or a major change in Smart Bidding, (3) a new team or consultant arrives and the lists have drifted without documentation. Outside those three cases, resist the temptation to reshuffle everything. Stable lists produce more value than lists “optimized” constantly.

In a B2B services account I’ve been running since 2022, the negative lists changed structurally three times in 4 years: once when I built them, once when the client added a consumer product line, and once when we turned Performance Max on in 2024. Outside those moments, pure maintenance. Stability is a feature, not a bug.

Want us to review your negative lists?

If you run a Google Ads account with $1,500+/month in spend and you suspect your negative lists are disorganized or incomplete, I'll walk you through the build method live on your account.

Schedule session

30-min session · No commitment

FAQ

How many negative keywords should a healthy account have?

It depends on size and how long it’s been active, there’s no right number. Accounts 6-12 months old usually have 150-400 negatives split across 3-4 shared lists plus campaign-specific negatives. More than 1,000 uncategorized negatives is a sign of messy accumulation, not good management. The quality of the structure weighs more than the total volume.

Do broad negatives also block plurals and variants?

Yes, since 2019. The official Google Ads documentation confirms that broad match negatives match plurals, misspellings, and close variants. But phrase and exact negatives stay strict: they block only the literal form. That’s why I recommend broad by default for most cases, and phrase/exact only when you need surgical precision.

Should I use shared lists or campaign-level negatives?

Both, with different criteria. Shared lists for terms the whole account doesn’t sell (universal, B2B/B2C, by main category). Campaign-level negatives for specific contexts: separating rent/buy, filtering intents inside a group. The mix lets you scale without breaking individual campaigns. Per the Optmyzr audit guide, well-organized shared lists considerably reduce monthly maintenance compared to managing negatives campaign by campaign.

How do I know if a negative is blocking good traffic?

Filter the search term report from the last 90 days by the word you blocked, before adding the negative. If there are conversions in queries containing that term, it’s a bad negative. If all those queries are a different intent from yours, it’s a good negative. This pre-verification step should apply to every broad negative before you push it to a shared list.

Do match types on negatives work the same as on positives?

Yes in form (broad, phrase, exact), no in behavior. Broad negatives block only when every term, or a close variant of it, appears in the query; broad positives go further and match reformulations and synonyms. Phrase negatives block the literal phrase in that order; exact ones block only the identical query. The asymmetry matters and is a common source of misconfiguration.

Conclusion

Negative keyword management is one of those disciplines that looks boring and isn’t, because every month you run it well you save money that nobody is going to give you an award for. Well-run accounts don’t stand out; badly run ones do. If you take one thing from this guide, let it be this: the lists aren’t an archive, they’re a system with four categorized layers, a monthly maintenance cycle, and a doc that explains why each block exists. Without that trio, negatives end up being noise.

The rest is consistency. An hour and a half per month, three hours per quarter, and the account holds together. When an internal audit catches CPA climbing with no clear cause, most of the time the answer sits in the negative lists that haven’t been touched in six months. Boring? Yes. But that’s where the dollars are.

If you’d rather we look at it on your real account, the link above books 30 minutes. You’ll leave with a prioritized list of concrete changes, ordered by impact, ready to apply that same week.


Lionel Fenestraz has been running Google Ads campaigns for ecommerce and B2B since 2018. This article is part of the Google Ads optimization series on lionelz.com.

Lionel Fenestraz — Freelance Google Ads & Meta Ads Consultant
Freelance PPC & CRO Consultant · Google Partner · CXL Certified · Google Ads Search Certified
7+ years managing Google Ads and Meta Ads for vacation rental, B2B, and ecommerce. Trilingual ES/EN/FR.
