
Google Ads Quality Score in 2026: What Actually Moves the Needle (And What Doesn't)

Stop overpaying for clicks. Learn what moves the Google Ads Quality Score needle in 2026 and how to cut your CPC by 50%.

Lionel Fenestraz · 17 March 2026

Most advertisers fall into one of two camps with Google Ads Quality Score. Either they obsess over it, chasing a 10/10 on every keyword like it is a badge of honour. Or they dismiss it entirely, assuming Smart Bidding has made it a relic. Both positions cost money.

Here is the reality. A keyword sitting at Quality Score 1–3 can cost you up to 400% more per click than a keyword at the QS 5 baseline. That is not a marginal difference. That is the difference between a profitable account and one that burns budget while your competitors pay a fraction of the price for the same traffic. On the flip side, a Quality Score of 10 can unlock up to a 50% CPC discount. I have seen this play out in real accounts repeatedly. An advertiser struggling with a £40 CPC drops to £22 after fixing landing page experience and tightening ad group themes. Same keyword. Same bids. The low score was the bottleneck the whole time.

In this post I will explain what Quality Score actually is and what it is not, break down the three components in order of practical impact, address the Smart Bidding and Performance Max questions directly, give you real benchmarks to orientate yourself, and walk you through the diagnostic I run when I audit accounts.

What Quality Score Actually Is (And What It Isn’t)

Quality Score is a 1–10 diagnostic score assigned at keyword level in Search campaigns. Google calculates it based on three components: Expected Click-Through Rate, Ad Relevance, and Landing Page Experience. A 5 is average. A 7 is good. A 10 is rare and, in most cases, not worth pursuing at the expense of everything else.

The 1–10 score is not the signal Google uses at auction. Google has confirmed this explicitly: Quality Score is a diagnostic tool, not an input into the ad auction. At every auction, Google calculates a real-time quality estimate based on the actual query, device, location, time of day, and dozens of other signals. The 1–10 score you see in the interface is a lagging snapshot based on historical impressions for exact searches of your keyword. It is diagnostic, not predictive.

This distinction changes how you respond to it. If your Quality Score drops from 6 to 5 overnight, do not panic and start rewriting every ad. The 1–10 score lags. What you are seeing is the reflection of something that already happened, often days ago.

Quality Score is not a KPI to optimise in isolation, a ranking factor in Performance Max, or a reliable proxy for overall account health. I have audited accounts with average Quality Scores of 7 that were structurally broken, and accounts with an average of 5 generating excellent returns because their targeting and bidding were dialled in.

What it is: a direct input into Ad Rank, which determines both your auction position and your actual CPC. Better QS means lower CPC for the same position. That is the only reason it matters, and it is reason enough.

How Quality Score Moves Your CPC

The penalties and discounts are not linear and not symmetrical. The surcharge at the bottom of the scale is far steeper than the discount at the top: lifting a keyword from 3 to 5 does more for your costs than nudging one from 7 to 8. The figures below are based on Adalysis's practitioner research. They are estimates derived from reverse-engineering historical auction data, not precise figures published by Google, and the underlying analysis dates to around 2013. Treat them as directional rather than exact.

Quality Score   CPC Impact vs. Baseline (QS 5)
1               +400%
2               +150%
3               +67%
4               +25%
5               Baseline
6               -17%
7               -29%
8               -37%
9               -44%
10              -50%

QS 5 is the neutral point. Below it, you pay a progressively steeper surcharge for the same position. Above it, you receive a progressively larger discount. The jump from 4 to 5 removes a 25% surcharge but only brings you back to baseline. This is why fixing genuinely poor Quality Scores is urgent, and chasing perfection above 8 is rarely worth the effort.
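If you want to put rough numbers on this for your own keywords, here is a minimal sketch that turns the directional percentages above into multipliers and rescales an observed CPC from one score to another. The multipliers and the example keyword are illustrative, not Google figures, so treat the output the same way you treat the table: directional.

```python
# Directional CPC multipliers vs. the QS 5 baseline (Adalysis-derived estimates, not Google figures).
QS_CPC_MULTIPLIER = {
    1: 5.00, 2: 2.50, 3: 1.67, 4: 1.25, 5: 1.00,
    6: 0.83, 7: 0.71, 8: 0.63, 9: 0.56, 10: 0.50,
}

def estimate_cpc(current_cpc: float, current_qs: int, target_qs: int) -> float:
    """Rescale an observed CPC from the current QS multiplier to a target QS multiplier."""
    return current_cpc / QS_CPC_MULTIPLIER[current_qs] * QS_CPC_MULTIPLIER[target_qs]

# Example: a keyword paying £4.00 per click at QS 4, modelled at QS 7.
print(round(estimate_cpc(4.00, 4, 7), 2))  # ~2.27 -- directional, not a guarantee
```

Run it against your highest-spend low-QS keywords to size the opportunity before deciding where to spend optimisation time.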

The Three Components and What Actually Moves Them

Quality Score has three components. Adalysis’s practitioner research estimates their relative weights at approximately 39% for Expected Click-Through Rate, 39% for Landing Page Experience, and 22% for Ad Relevance. These are not Google-official figures and should be treated as directional rather than definitive. That said, they align with what I observe in real accounts when I look at where fixing one component moves the needle more than another.

Expected Click-Through Rate (~39%)

Expected CTR is Google’s prediction of how likely your ad is to get clicked, given the keyword and position, relative to other ads competing on the same keyword. It is not a measure of your own historical CTR. It is a comparison against every other advertiser showing up for that keyword. If your ad copy is generic and your competitors are writing tightly targeted ads, you lose this comparison regardless of your bid.

What moves it: ad copy that matches search intent precisely. A searcher typing “emergency boiler repair London” and seeing an ad that says “Heating Services. Get a Quote Today.” will not click at the same rate as one that says “Emergency Boiler Repair in London. Same-Day Engineers Available.” The second ad speaks to what the person actually wants. Testing multiple headline combinations in Responsive Search Ads matters here, but only if you are genuinely rotating distinct value propositions, not minor variations of the same generic copy. There is a full breakdown of how to write ad copy that performs in this guide on PPC ad copy.

What does not move it: changing bids, adjusting match types, or restructuring campaigns. I have seen advertisers change keyword match types from broad to exact assuming it would improve CTR signals. It does not work that way. The creative is the lever here.

Ad Relevance (~22%)

Ad Relevance measures how closely your ad copy matches the intent behind the search query. This is where ad group structure has a direct impact on Quality Score.

The common mistake is building broad ad groups where a single Responsive Search Ad covers 20, 30, sometimes 50 different keyword intents. I have seen this in most of the accounts I audit. The RSA tries to be relevant to everything and ends up specific to nothing. Ad Relevance suffers as a result. The fix is tighter theming: fewer keywords per ad group, ad copy written for the specific intent of those keywords, and the primary keyword appearing in at least one headline. This is one of the structural mistakes covered in more detail here.

The intent match matters more than keyword presence. A searcher looking for “best CRM for small business” wants reassurance that the product suits their size and budget. An ad headline that says “CRM for Small Businesses. No Setup Costs.” addresses that intent. “Best CRM Software. Try Free.” does not, even if it contains words from the query. Dynamic Keyword Insertion does not solve this either — automatically inserting the search term into the headline is not the same as writing an ad that speaks to what the searcher actually wants. Google’s relevance assessment reads intent, not just vocabulary.

Landing Page Experience (~39%)

Landing Page Experience is Google’s assessment of how relevant, transparent, and useful your landing page is for someone who clicked your ad. It carries the same estimated weight as Expected CTR and it is the component most advertisers underinvest in. Partly because fixing it requires more effort than rewriting a headline, and partly because it feels like a CRO problem rather than a PPC problem. In 2026, that distinction no longer holds.

What moves it: page content that matches the ad copy and keyword intent, fast load times, mobile optimisation, and no deceptive content or disruptive interstitials. If your ad promises “same-day delivery” and the landing page mentions “ships in 3–5 business days,” Google’s crawlers will pick up that mismatch. So will the user, which is why this component correlates with bounce rate even if Google does not use bounce rate directly.

Here is what content mismatch looks like in practice. A solicitors' firm runs a Search campaign targeting “employment tribunal solicitor London.” The ad copy mentions no-win-no-fee representation and a free initial consultation. The landing page links to the firm's general legal services page: a list of practice areas, a short blurb about the partners, and a contact form buried near the footer. The words “employment tribunal” appear once, in a sidebar. There is no mention of no-win-no-fee. Google's crawler reads this and cannot find the content the ad promised. Landing Page Experience scores “Below Average.” The user bounces within seconds for the same reason. The fix is a dedicated page for employment tribunal enquiries that opens with the no-win-no-fee offer, explains the process clearly, and places the contact form on the first screen. The ad and the page need to tell the same story, in the same language, in the same order.

Landing Page Experience is growing in importance because of AI Max. Google’s AI-driven campaign features evaluate landing page content more deeply to determine ad placement and relevance. The page is no longer just the destination. It is part of the targeting signal. If you are not running regular landing page audits, you are missing something that compounds over time. The same principle applies when doing a broader Google Analytics audit: landing page quality shows up in engagement metrics well before it appears in your Quality Score.

Quality Score in Automated Campaigns: Smart Bidding and Performance Max

Smart Bidding

The argument goes like this: Smart Bidding uses machine learning to optimise bids in real time, so Google’s algorithms already account for quality signals dynamically. The 1–10 Quality Score becomes a lag indicator of something the system is already handling. Stop worrying about it.

This argument is half right and entirely misleading in practice.

Smart Bidding does use a richer, real-time version of the same quality signals that feed the 1–10 score. But the underlying signals (ad copy relevance, landing page quality, expected engagement) matter more in a Smart Bidding account, not less. Smart Bidding amplifies differences. If your landing page converts at 4% and a competitor’s converts at 9%, Smart Bidding learns this faster than a manual bidding account would and adjusts accordingly. Poor quality does not get smoothed out by automation. It gets penalised more efficiently.

The practical implication holds across any Smart Bidding strategy, whether Maximise Conversions, Target CPA, or Target ROAS: a weak landing page does not just hurt your Quality Score. It starves the algorithm of conversions, slows down the learning phase, and pushes the system into conservative bidding and underdelivery.

What makes this particularly damaging is the conversion volume problem. Google's guidance recommends roughly 30 conversions per month for Target CPA campaigns and 50 for Target ROAS before the algorithm has enough data to bid confidently. These thresholds vary by campaign type and are guidelines rather than hard cutoffs, but the principle holds across bid strategies. A landing page with poor experience systematically suppresses the conversion rate for every click the campaign generates during that window. If your campaign produces 200 clicks per week and your landing page converts at 1.5% rather than a reasonable 5%, you collect three conversions per week rather than ten. At three per week, you need roughly ten weeks just to reach the Target CPA guideline. During those weeks, the system bids conservatively and underdelivers. By the time you investigate, the account looks like a bidding problem or a budget problem. It is a landing page problem.
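The arithmetic is simple enough to sanity-check yourself. A minimal sketch, using the same illustrative numbers as the example above and Google's rough 30-conversion guideline for Target CPA:

```python
def weeks_to_threshold(clicks_per_week: float, conversion_rate: float, threshold: int = 30) -> float:
    """Weeks of traffic needed to accumulate a conversion-volume guideline at a given conversion rate."""
    conversions_per_week = clicks_per_week * conversion_rate
    return threshold / conversions_per_week

# 200 clicks per week at a healthy 5% landing page conversion rate vs. a weak 1.5% one.
print(weeks_to_threshold(200, 0.05))   # 3.0 weeks to reach 30 conversions
print(weeks_to_threshold(200, 0.015))  # 10.0 weeks -- the learning phase drags on three times longer
```

Same budget, same bids, same keyword list: the only variable is the landing page, and it triples the time the algorithm spends guessing.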

Performance Max

Performance Max has no keyword-level Quality Score. PMax does not use keywords in the traditional sense, so there is no keyword to attach a QS to.

The equivalent quality signals in PMax sit at asset level rather than keyword level. Individual assets earn performance labels of Low, Good, or Best, and each asset group carries an Ad Strength rating based on the quality and diversity of your headlines, descriptions, images, and videos, and the relevance of those assets to the audience signals and the final URL. A “Low” label is the functional equivalent of a poor Quality Score in Search. It limits reach and increases costs. A “Best” label does not guarantee results, but it signals that the creative inputs are strong enough for the algorithm to work with.

The two campaign types often compete for the same users. If your Search quality is poor, you lose the inventory where intent signals are strongest. Do not use PMax as a reason to deprioritise quality work on your Search campaigns.

Industry Benchmarks: What a Good Quality Score Actually Looks Like

According to WordStream’s analysis of 15,666 accounts in 2025, the average Quality Score across Google Ads accounts sits at 5–6 out of 10. Anything at 7 or above puts you ahead of the majority of advertisers in almost every industry.

The spread by industry is wider than most people expect:

Industry                          Average Quality Score
Apparel, Fashion & Jewelry        7.36
Shopping, Collectibles & Gifts    6.90
Finance & Insurance               5.72
Education & Instruction           5.35
Home & Home Improvement           5.33
Attorneys & Legal Services        5.02
Physicians & Surgeons             4.95
Dentists & Dental Services        4.84

Source: WordStream Google Ads Account Study, 15,666 accounts, 2025.

Dentists, lawyers, and physicians sit at the bottom for the same reasons: competitive queries, compliance-constrained ad copy, and landing pages that frequently underperform on mobile and load speed. Apparel and shopping sit at the top because product-level keyword targeting creates natural tightness between keyword, ad, and landing page.

A Quality Score of 7 is genuinely good. A 10 is achievable on branded or highly specific keywords, but chasing it on competitive head terms is rarely worth the investment. The LeadGen Economy analysis models that improving from QS 5 to QS 8 on lead generation keywords reduces cost per lead by approximately 27%, based on the CPC discount at each score level with conversion rate held constant. That is a modelled estimate rather than an empirical measurement across real accounts, but it is grounded in the same CPC table above: a QS 8 pays roughly 37% less per click than a QS 5. The direction is right even if the exact figure varies by account.

A Practical Quality Score Diagnostic: Run This on Your Account Today

This is the workflow I use when auditing accounts.

Start by filtering your keywords to show only those with a Quality Score below 5. Then apply a secondary filter for impression volume: anything under a few hundred impressions in the last 30 days can be deprioritised. Low-impression keywords with poor QS rarely have enough data to act on. Focus on keywords that are actually spending or driving traffic.

For each keyword in that filtered list, look at which component is marked “Below Average.” Google shows this in the keyword columns panel (you may need to add the QS component columns). The below-average flag tells you where the problem is.
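If you would rather work outside the interface, the same filter and component check runs happily on a keyword report exported to CSV. This is a minimal sketch using pandas; the column names (“Keyword”, “Quality Score”, “Impr.”, “Cost” and the three component columns) are assumptions about how your export is labelled, so rename them to match your file.

```python
import pandas as pd

# Assumed export column names from a Google Ads keyword report -- adjust to your CSV.
df = pd.read_csv("keyword_report_last_30_days.csv")

component_cols = ["Exp. CTR", "Ad relevance", "Landing page exp."]

worklist = df[
    (df["Quality Score"] < 5)   # genuinely poor scores only
    & (df["Impr."] >= 300)      # skip keywords without enough recent data to act on
].copy()

# Flag which component Google marks as the problem for each keyword.
for col in component_cols:
    worklist[col + " issue"] = worklist[col].eq("Below average")

# Start with the keywords spending the most.
columns_to_show = ["Keyword", "Quality Score", "Cost", "Impr."] + [c + " issue" for c in component_cols]
print(worklist.sort_values("Cost", ascending=False)[columns_to_show].head(20))
```

The interface filters get you to the same list; the export just makes it easier to sort by spend and keep a record of what you changed and when.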

If Expected CTR is below average, the issue is your ad copy relative to competitors. Review what else is showing for that keyword using Google’s ad preview tool. Are your headlines specific to the intent? Are you leading with a generic CTA or something that actually differentiates you? Test new headline combinations and give each test enough impressions before drawing conclusions.

If Ad Relevance is below average, the issue is usually structural. Is this keyword in an ad group with too many others pulling in different directions? Is the RSA trying to serve too many intents at once? The fix is to tighten the ad group by removing mismatched keywords, or write copy more specifically aligned to this keyword’s intent, including the keyword itself in at least one headline.

If Landing Page Experience is below average, run the page through Google’s PageSpeed Insights and check the mobile score. Then check whether the page content actually delivers on what the ad promises. If someone searches for a specific product or service and the landing page opens on a generic homepage, that is your problem. The fix is usually a dedicated landing page, not a homepage redirect.
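For the speed part of that check, you do not have to paste URLs into PageSpeed Insights one at a time. Here is a minimal sketch against the public PageSpeed Insights v5 API; the landing page URL is hypothetical, and for more than occasional use you will want an API key to avoid quota limits.

```python
import requests

PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def mobile_performance_score(url: str, api_key: str | None = None) -> float:
    """Return the Lighthouse mobile performance score (0-100) for a landing page."""
    params = {"url": url, "strategy": "mobile"}
    if api_key:
        params["key"] = api_key
    response = requests.get(PSI_ENDPOINT, params=params, timeout=120)  # PSI runs can be slow
    response.raise_for_status()
    score = response.json()["lighthouseResult"]["categories"]["performance"]["score"]
    return score * 100  # Lighthouse reports 0-1; scale to the familiar 0-100

# Hypothetical landing page from the earlier boiler repair example.
print(mobile_performance_score("https://www.example.com/emergency-boiler-repair-london"))
```

Speed is only part of Landing Page Experience, so treat a good score here as clearing one hurdle, not the whole component.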

After making changes, give it two to four weeks before drawing conclusions. Quality Score updates as new impression data accumulates, and meaningful movement on a keyword with moderate traffic typically takes that long to stabilise.

Prioritise ruthlessly. A keyword with a Quality Score of 3 and five impressions a month can wait. One driving 20% of your spend with a QS of 4 is where you start.

The Score Follows the Work

Quality Score reflects the quality of three things: your ad copy, your ad group structure, and your landing page. Fix those and the score follows. Optimise the score directly without fixing the underlying issues and you are measuring the wrong thing.

In a Smart Bidding world, the signals that feed Quality Score are the same signals that determine how well your algorithm learns. Getting ad relevance right, landing page experience right, click-through rates up: these are not QS optimisation tasks. They are the work of building a Google Ads account that functions. If you want to understand how all of this fits into a broader campaign structure, the Google Ads strategy guide covers how budget allocation and campaign architecture work together. Quality Score is just the most legible diagnostic you have for knowing where you are falling short.

If you want to know where your account specifically is falling short, I offer a free audit. I will look at your Quality Score components, your campaign structure, your landing pages, and your bidding setup, and give you a clear list of what to fix first. Book a free audit here.


Sources

  1. About Quality Score for Search Campaigns — Google Ads Help
  2. Google Ads Account Study (15,666 accounts) — WordStream
  3. Google Ads Benchmarks 2025 — WordStream
  4. Google Ads Quality Score: The Ultimate Guide — Adalysis
  5. Quality Score Impact on Lead CPL — LeadGen Economy
  6. Smart Bidding — Google Ads Help