USPTO Letter of Protest Success Rates Shift
GleanMark analyzed USPTO Letter of Protest outcomes and found a 2025 shift from LOPE toward LOPR, suggesting a broader market trend rather than a few-examiner anomaly.
Are USPTO Letter of Protest success rates falling?
That is the question more trademark practitioners seem to be asking in 2026. A growing number of firms and brand owners have the sense that Letters of Protest are not producing the same results they did even a year ago. The issue is not whether the USPTO still reviews these submissions. The issue is whether reviewed protests are still translating into direct examiner action at the same rate.
GleanMark analyzed USPTO prosecution-event data to test that question. The result is a clear directional shift: over the course of 2025, resolved Letters of Protest became materially less likely to produce a LOPE outcome and more likely to produce a LOPR outcome. That decline remains visible after controlling for law-office mix, and persists — with caveats about panel coverage — after controlling for examiner mix.
This article explains what we found, what public guidance says, and how law firms should benchmark client results against the broader market.
What LOPT, LOPE, and LOPR mean
In the USPTO prosecution-event stream, Letter of Protest activity shows up through a small number of event families:
- LOPT means the protest evidence was forwarded or otherwise entered into the review path
- LOPE means the protest evidence was reviewed and action was taken
- LOPR means the protest evidence was reviewed and no further action was taken
For practitioners, that distinction matters. A LOPE event is not just an administrative status. It is generally associated with an office action, a suspension, or another prosecution consequence tied to the protest evidence. A LOPR event means the evidence was reviewed but did not produce direct action.
That makes LOPE versus LOPR a useful shorthand for whether a protest actually changed the course of prosecution.
The 2025 trend: fewer LOPE outcomes, more LOPR
To measure recent change, we looked at applications filed after January 1, 2024 — focusing on applications moving through the current examination pipeline — and grouped them by the quarter of their first LOPT event. We then limited the analysis to resolved matters, meaning matters that had progressed to either LOPE or LOPR. We excluded unresolved LOPT-only matters because newer protests often have not yet matured into a reviewed downstream outcome.
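The grouping and filtering logic described above can be sketched in a few lines. This is a minimal illustration using made-up records, not the actual GleanMark pipeline or schema; the serial numbers, dates, and field layout are all hypothetical.

```python
from collections import defaultdict
from datetime import date

# Hypothetical records: (serial, first LOPT date, resolved outcome).
# Outcome is "LOPE", "LOPR", or None when the protest has not yet
# matured to a reviewed downstream event.
events = [
    ("97000001", date(2025, 2, 10), "LOPE"),
    ("97000002", date(2025, 2, 28), "LOPR"),
    ("97000003", date(2025, 8, 3), "LOPR"),
    ("97000004", date(2025, 9, 19), None),  # unresolved -> excluded
]

def quarter_of(d: date) -> str:
    """Map a date to its calendar quarter label, e.g. 2025-Q1."""
    return f"{d.year}-Q{(d.month - 1) // 3 + 1}"

# Count resolved matters by the quarter of their first LOPT event,
# dropping unresolved LOPT-only matters entirely.
counts = defaultdict(lambda: {"LOPE": 0, "LOPR": 0})
for serial, first_lopt, outcome in events:
    if outcome in ("LOPE", "LOPR"):
        counts[quarter_of(first_lopt)][outcome] += 1

for q in sorted(counts):
    c = counts[q]
    resolved = c["LOPE"] + c["LOPR"]
    print(q, f"LOPE share of resolved: {c['LOPE'] / resolved:.2%}")
```

The key design choice, as in the article's analysis, is that the denominator is resolved matters only, so the LOPE share is a share of reviewed conclusions rather than of all protests filed.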
Here is the resolved 2025 picture:
| First LOPT Quarter | LOPE | LOPR | LOPE % of Resolved |
|---|---|---|---|
| 2025-Q1 | 305 | 346 | 46.85% |
| 2025-Q2 | 109 | 157 | 40.98% |
| 2025-Q3 | 216 | 373 | 36.67% |
| 2025-Q4 | 188 | 347 | 35.14% |
The direction is hard to ignore. In the first quarter of 2025, nearly 47% of resolved LOPs in this slice led to direct action. By the fourth quarter, that had fallen to roughly 35%.
That is the core market signal: among matters that were actually reviewed to a downstream conclusion, the balance moved away from LOPE and toward LOPR.
One caveat applies to the later quarters. The analysis restricts to matters that have already resolved to either LOPE or LOPR, and more recent protests have had less time to reach a downstream outcome. If LOPE outcomes take longer to materialize than LOPR outcomes, the resolved pool for Q3 and Q4 would over-represent LOPR, slightly inflating the apparent decline. The Q1-to-Q2 drop, where both quarters are well-matured by the analysis date, is less susceptible to that effect.
Why this matters in practice
If all you care about is a single protest’s immediate coding outcome, LOPE versus LOPR is already meaningful. But if you care about actual prosecution consequences, it is even more important to look at what happens after the first LOP event.
That is because LOPE-type matters and LOPR matters behave very differently over time. A protest that results in direct action should be much more likely to disrupt the target application. A protest that is reviewed but does not lead to direct action should be more likely to leave the application on a path toward publication or registration.
So we built a cohort model anchored on each serial’s first LOPT date and checked the application at three, six, and twelve months after that date.
Each checkpoint classified the application into one of three mutually exclusive states:
- abandon_or_cancel
- published
- pending_not_published
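The checkpoint classification can be sketched as a small function. The field names and the priority order (abandonment checked before publication) are assumptions for illustration, not the actual USPTO schema, and the 30-day month approximation is a simplification.

```python
from datetime import date, timedelta
from typing import Optional

def checkpoint_state(first_lopt: date,
                     months: int,
                     abandoned_on: Optional[date],
                     published_on: Optional[date]) -> str:
    """Classify an application into one of three mutually exclusive
    states as of `months` after its first LOPT event.

    Assumes abandonment/cancellation takes precedence over publication
    when both occurred before the checkpoint.
    """
    cutoff = first_lopt + timedelta(days=30 * months)  # approximate months
    if abandoned_on is not None and abandoned_on <= cutoff:
        return "abandon_or_cancel"
    if published_on is not None and published_on <= cutoff:
        return "published"
    return "pending_not_published"

# Example: published six weeks after the first LOPT event counts as
# "published" at the 3-month checkpoint.
print(checkpoint_state(date(2025, 1, 1), 3, None, date(2025, 2, 15)))
```

Because the three states are mutually exclusive and exhaustive, each checkpoint row in the table below sums to 100% of that window's denominator.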
For the pre-LOPE period, we also tested whether legacy LOPT-only matters were a reasonable proxy for what modern LOPE matters look like. Based on downstream outcomes, they were. Those legacy LOPT-only matters behaved much more like modern LOPT+LOPE matters than modern LOPT+LOPR matters, especially by the twelve-month mark.
Using that proxy, we compared two combined groups:
- lope_equivalent = legacy LOPT-only plus modern LOPT+LOPE
- lopr = legacy and modern LOPT+LOPR
The cohort split is much wider than the event-code split
Across all first-LOPT dates from 2020 through 2025, the checkpoint analysis looked like this:
| Group | Window | Denominator | Abandon/Cancel | Published | Pending / Not Published |
|---|---|---|---|---|---|
| lope_equivalent | 3 months | 6,698 | 3.43% | 6.15% | 90.42% |
| lope_equivalent | 6 months | 6,483 | 14.59% | 8.88% | 76.52% |
| lope_equivalent | 12 months | 6,163 | 52.05% | 15.69% | 32.26% |
| lopr | 3 months | 7,409 | 3.87% | 26.99% | 69.13% |
| lopr | 6 months | 7,011 | 13.25% | 38.23% | 48.52% |
| lopr | 12 months | 6,498 | 33.23% | 46.32% | 20.45% |
The twelve-month gap is the key result.
Applications in the lope_equivalent group were far more likely to be abandoned or cancelled by the one-year mark. Applications in the lopr group were far more likely to reach publication. That is exactly what you would expect if LOPE reflects a protest that materially affected prosecution and LOPR reflects a protest that was reviewed but did not produce direct action.
In other words, the decline in LOPE share is not just a coding curiosity. It maps to a materially different downstream prosecution profile.
Is this just a few examiners?
That is the next question any serious analysis has to answer.
If the 2025 decline were driven by just a handful of individual examiners, then the market-level trend would be less interesting. It would suggest a mix effect rather than a broad policy or practice shift.
We tested that in two ways:
- examiner normalization using uspto_records.employee_name
- law-office normalization using law_office_assigned_location_code
The raw half-year result for the same 2025 resolved cohort was:
- 2025-H1 LOPE rate: 45.15%
- 2025-H2 LOPE rate: 35.94%
After holding examiner mix constant for the overlapping examiner panel, the result was:
- 2025-H1 examiner-standardized LOPE rate: 46.39%
- 2025-H2 examiner-standardized LOPE rate: 34.06%
Examiner-level overlap is not perfect. The same-examiner panel covered 798 of 2,041 resolved cases, or about 39.1%. That means the examiner-standardized result is useful, but not the cleanest possible control.
The law-office normalization is stronger because the overlap covers 99.9% of resolved cases:
- 2025-H1 law-office-standardized LOPE rate: 45.66%
- 2025-H2 law-office-standardized LOPE rate: 35.59%
That is almost identical to the raw decline.
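The standardization step behind these numbers can be illustrated with a direct-standardization sketch: compute each stratum's period-specific LOPE rate, then average those rates under a fixed common weighting so that shifts in stratum volume cannot move the headline number. The strata and counts below are invented for illustration and do not reflect real law offices or examiners.

```python
# Hypothetical strata (law offices or examiners) with
# (LOPE count, resolved count) per half-year period.
strata = {
    "office_101": {"h1": (45, 100), "h2": (30, 90)},
    "office_102": {"h1": (50, 110), "h2": (40, 105)},
}

def standardized_rate(period: str) -> float:
    """LOPE rate for `period`, weighting each stratum by its total
    resolved volume across both periods (a fixed weighting, so mix
    changes between periods cannot drive the result)."""
    total = sum(s["h1"][1] + s["h2"][1] for s in strata.values())
    rate = 0.0
    for s in strata.values():
        weight = (s["h1"][1] + s["h2"][1]) / total
        lope, resolved = s[period]
        rate += weight * (lope / resolved)
    return rate

print(f"H1 standardized LOPE rate: {standardized_rate('h1'):.2%}")
print(f"H2 standardized LOPE rate: {standardized_rate('h2'):.2%}")
```

If the standardized H1-to-H2 drop tracks the raw drop, as it does in the article's data, the decline is not explained by a changing mix of strata between periods.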
The practical takeaway is that the shift does not appear to be driven only by a few isolated examiners. Once you control for examiner mix, the decline remains. Once you control for law-office mix, it remains again.
What public sources are saying
Outside research is useful here because it shows what the market already understands and what it does not.
Public discussion of Letters of Protest is still focused primarily on procedure. The USPTO’s Letter of Protest practice tip emphasizes several points:
- the protest should identify a specific and relevant ex parte ground
- the evidence should be relevant and non-duplicative
- timing matters
- pre-publication and post-publication protests are evaluated differently
- evidence is limited by item count and page count
The USPTO’s Trademark Modernization Act guidance and practitioner commentary from INTA, JD Supra, and Alt Legal also focus on the procedural tightening that followed the TMA: statutory codification, a two-month decision window, and a more formalized review structure.
There is also a cost signal. The USPTO’s 2025 trademark fee changes increased the fee for filing a Letter of Protest from $50 to $150 effective January 18, 2025. That does not by itself explain the LOPE to LOPR shift, but it could affect who files, how often they file, and what kinds of matters are worth protesting.
What we did not find in public sources is just as important. We did not find a public market-wide quantitative study tracking LOPE versus LOPR outcomes over time. Most published discussion explains how to file a stronger LOP, not whether LOPs are now less likely to result in direct examiner action.
That is the gap this analysis begins to fill.
What may be driving the change
The data shows a shift, but it does not prove a single cause. Several possibilities remain plausible.
First, evidence quality may be weakening relative to the Office’s expectations. The public guidance is explicit that the evidence must be relevant, concise, and tied to a specific ground. Poorly targeted screenshots, duplicative materials, and oversized protest packages may be making it to review without persuading the Office to act.
Second, timing may be slipping. The USPTO’s process guidance strongly favors filing as early as possible. If more protests are being filed later in examination, especially after publication, the system should naturally produce more reviewed-but-no-action outcomes.
Third, the Office may be applying a narrower action filter even when the evidence is admitted and reviewed. Our normalization work suggests the trend is not just a staffing artifact. That leaves open the possibility of a broader institutional shift in how often protested evidence is translated into refusals, suspensions, or other direct action.
Fourth, the economics of filing may be changing behavior. A fee increase can filter who files and what they are willing to file on. It can also change the ratio between highly strategic LOPs and more speculative ones.
These are all testable hypotheses. But they require client-level serial lists, deeper document review, or both.
What firms should do next
For law firms, the next step is not to speculate about whether a client’s LOP success rate has fallen. The next step is to benchmark it properly.
That means comparing a client’s protested serials against:
- the overall market baseline
- the same time-based cohort baseline
- examiner-matched or law-office-matched baselines
- downstream prosecution outcomes such as publication, abandonment, and cancellation
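A minimal version of that benchmarking logic compares a client's raw LOPE share against the share the market baseline would predict for the same mix of first-LOPT quarters. The client matters below are hypothetical; the baseline values are the quarterly resolved LOPE shares reported earlier in this article.

```python
# Market baseline: LOPE share of resolved matters by first-LOPT quarter
# (from the 2025 quarterly table above).
market_baseline = {
    "2025-Q1": 0.4685, "2025-Q2": 0.4098,
    "2025-Q3": 0.3667, "2025-Q4": 0.3514,
}

# Hypothetical client matters: (first-LOPT quarter, resolved outcome).
client_matters = [
    ("2025-Q1", "LOPE"), ("2025-Q1", "LOPR"),
    ("2025-Q3", "LOPR"), ("2025-Q4", "LOPR"),
]

def client_vs_market(matters, baseline):
    """Return (client LOPE rate, the rate expected if the client had
    performed exactly at market in each of its quarters)."""
    lope = sum(1 for _, outcome in matters if outcome == "LOPE")
    client_rate = lope / len(matters)
    expected = sum(baseline[q] for q, _ in matters) / len(matters)
    return client_rate, expected

rate, expected = client_vs_market(client_matters, market_baseline)
print(f"client: {rate:.1%} vs market-expected: {expected:.1%}")
```

The point of the quarter-matched expectation is to avoid penalizing a client whose protests happen to be concentrated in the weaker late-2025 quarters.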
It also means looking beyond the event code. If a client’s results are materially below the market, the next questions should be:
- Are their protests concentrated in certain classes?
- Are they filing later than the market?
- Are their weaker outcomes concentrated with certain examiners or law offices?
- Are their protested applications less likely to receive an office action or suspension?
- Are their protested applications more likely to move to publication anyway?
That is where related prosecution context becomes useful. Articles such as our guide to the trademark examination process after filing, our TMEP office action strategy guide, and our TTAB proceedings overview all help frame why the LOP decision point matters so much. A failed protest does not just produce a disappointing event code. It can force the brand owner to wait for publication and then decide whether to oppose, coexist, or stand down.
That is also why the fee increase matters strategically. Clients paying more to file LOPs are justified in asking whether the probability of direct action is deteriorating. Our analysis suggests that, at least in recent quarters, that concern is grounded in real data rather than isolated anecdote. For broader prosecution cost context, see our guide on USPTO fee changes and how to avoid surcharges.
The bottom line
The public conversation around Letters of Protest is still mostly about process. The data suggests practitioners should also be paying attention to outcomes.
Across resolved 2025 matters in our corpus, the share of LOPs resulting in direct action fell from 46.85% in the first quarter to 35.14% in the fourth quarter. That decline survives examiner normalization and law-office normalization. And the cohort model shows why the distinction matters: LOPE-type matters and LOPR matters lead to very different prosecution outcomes by the one-year mark.
That does not mean every client experiencing a lower LOP success rate is suffering from the same cause. Some clients may still be underperforming the market because of filing timing, evidence quality, or concentration in a particular prosecution segment. But it does mean firms should start from the possibility that part of the problem is systemic.
If a client says, “Our Letters of Protest do not seem to hit the way they used to,” the current data says that concern may be valid.
FAQ
What is a Letter of Protest?
A Letter of Protest is a USPTO procedure that allows a third party to submit evidence relevant to the examination of a pending trademark application. It is typically used to put conflicting registrations, descriptiveness evidence, genericness evidence, or other ex parte evidence in front of the Office before the application matures.
What is the difference between LOPE and LOPR?
LOPE indicates the protest evidence was reviewed and the Office took action based on it. LOPR indicates the protest evidence was reviewed but no further action was taken. In practice, that distinction matters because the downstream prosecution paths of LOPE matters and LOPR matters are very different.
Does this prove every firm's LOP success rate is falling?
No. It shows a market-level shift in the recent data, not a universal decline for every filer. A particular client could still be doing better or worse than the market because of timing, class mix, examiner exposure, or the kinds of evidence being submitted.
Why use publication instead of registration in cohort analysis?
Publication is often the better intermediate measure because it captures whether the application cleared USPTO examination, and it happens earlier than registration. That makes it more useful for fixed-window cohort checks such as three, six, and twelve months after the first LOPT event.
What should a law firm compare if a client thinks its LOP results are weakening?
The best comparison is not just raw acceptance rate. Firms should compare the client's protested serials against market baselines for the same periods, the same cohort windows, and ideally matched baselines by examiner or law office. That is the cleanest way to distinguish client-specific underperformance from a broader USPTO trend.