
Why Does the PTAB Still Have a Backlog?


Here is a simple question:

Why does the PTAB still have a backlog?

This is not a question about why the PTAB incurred a backlog. The source of the backlog is well-understood: In the mid-2000s, the USPTO administration urged the examining corps to push down the allowance rate in the interest of “patent quality.” Because applications were rejected based on metrics rather than proper legal determinations, applicants responded by filing appeals, overwhelming an unprepared appeal board with a surge of incoming appeals.

However, the USPTO has acted to address some of the problems that prompted the backlog. It has stopped focusing on raw allowance metrics in favor of per-case legal determinations, and it has expanded the PTAB to work through the backlog. Nevertheless, the PTAB still has a massive backlog. Ex parte appeals still require multiple years to resolve. Why does this backlog persist?

(Spoiler alert: This article is not really about the PTAB, but about the examining corps and USPTO administration.)

A. PTAB Production Metrics

The USPTO Data Visualization Center (the “Dashboard”) includes some data on the PTAB, but not enough to extrapolate trends. (The data only goes back three years – and includes appeal inventory, but not appeal rates.) Instead, valuable data can be mined out of the PTAB/BPAI Annual Process Production Reports, reported every September for the preceding year.

This data includes the PTAB’s “inventory” of appeals that remain pending at the end of each year:

This data demonstrates what we already know: a spike in appeals in the latter half of the 2000’s, and the continuing persistence of the backlog.

To its credit, the USPTO has scaled up the PTAB – from 80 judges in 2009 to 180 judges in 2014 – and has increased judges’ productivity. As a result, the PTAB has quadrupled its output of ex parte appeal decisions:

The obvious discrepancy is the total number of ex-parte appeals received every year:

Again, as was known, incoming ex-parte appeals began rising in 2005. (The surge in 2009 is certainly interesting, but difficult to explain.) This chart also suggests that appeals have fallen significantly since 2011 – another reason why the backlog has been reduced of late. However, the filing of new ex-parte appeals remains elevated, at about twice the rate of the period from 1997 to 2006.

This data partly answers the initial question: the PTAB still has an appeal backlog because incoming appeal volume remains high. However, the question is now reformulated:

Why does ex-parte appeal volume remain elevated?

B. Ex-Parte Appeal Rate Metrics

Analyzing the causes of the appeal volume requires adjustment for some confounding factors:

  1. Application volume.

    The easiest explanation for higher ex-parte appeal rates is higher prosecution volume: the USPTO received 50% more applications in 2014 than in 2005. However, we can account for application volume by calculating the ratio of ex-parte appeals filed to the number of pending applications – both total applications (including applications that are not yet in examination) and those in active examination:

    This data roughly reflects the actual ex-parte appeal rate – the rate with which applicants choose to file an ex-parte appeal from an office action – and demonstrates a sustained, elevated rate of ex-parte appeals per application.

  2. Cost.

    In recent years, the cost of filing an ex-parte appeal has risen dramatically – even when adjusted for inflation. Factoring cost into the appeal rate – adjusting for inflation, and normalizing to the 1997 ex-parte appeal cost – reveals a more troubling scenario:

    The uptick in the appeal rate starting in 2005 now looks even worse: applicants chose to file appeals at a higher rate despite concurrent cost increases. Also, the “progress” suggested by the charts above now looks to be driven more by cost than by gains in applicant satisfaction. That is, the demand for appeals by applicants to the PTAB remains high – indeed, continues to increase!

  3. Delay.

    The choice to appeal must be weighed against the prodigious delay that this decision is likely to incur. When an appeal takes three years to resolve, the issuance of the patent is delayed by three years – and unless the PTAB rules entirely in the applicant’s favor (such that the appeal duration is recouped through patent term adjustment), the appeal also consumes a large chunk of the patent term.

    The PTAB data about ex-parte appeals does not include a breakdown by pendency, but we can estimate it by comparing the PTAB’s output and the appeal volume. In 2004, the PTAB disposed of 3,452 cases, leaving a pending ex-parte appeal volume of only 985 – i.e., the odds of having an ex-parte appeal resolved within a year were quite high. But in the past year, the PTAB resolved 11,238 ex-parte appeals, leaving a pending ex-parte appeal volume of 23,084.

    These statistics enable a rough estimate of the odds that a particular appeal will be resolved each year: (Output / (Output + Backlog)). These metrics reveal a sustained delay since 2008, which, when factored into the ex-parte appeal rate, present an even more dire scenario:
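The rough estimate described above can be reproduced directly from the figures cited in the text. The following sketch simply applies the stated formula – (Output / (Output + Backlog)) – to the 2004 and most-recent-year numbers; the function name is mine, not the USPTO's:

```python
def annual_resolution_odds(output: int, backlog: int) -> float:
    """Rough odds that a pending ex-parte appeal is resolved in a
    given year, estimated as Output / (Output + Backlog)."""
    return output / (output + backlog)

# Figures cited above: 2004 vs. the most recent year reported.
odds_2004 = annual_resolution_odds(3452, 985)       # ~0.78
odds_recent = annual_resolution_odds(11238, 23084)  # ~0.33
print(f"2004: {odds_2004:.2f}, recent: {odds_recent:.2f}")
```

In other words, an appeal filed in 2004 had roughly a 78% chance of being resolved within the year, versus roughly 33% today – consistent with the sustained delay described above.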

It is now apparent that the recent reduction in ex-parte appeals is not attributable to improved applicant satisfaction – but to an exacerbation of the toll of the appeal process on applicants. Despite the fact that PTAB ex-parte appeals are now more costly and protracted than ever, applicants are compelled to file ex-parte appeals at a historically unprecedented rate.

The question is again reformulated:

Why is the demand for ex-parte appeals elevated?

C. Evaluating Applicant Demand for Ex-Parte Appeals

Every ex-parte appeal is filed in the same circumstance. An application has been twice rejected, and the applicant is faced with several choices:

  • If any claims have been allowed, accept only the allowed claims, and cancel the rest. Optionally, file a continuation to pursue other claims.

  • Abandon the application (because the rejection cannot be adequately traversed, or because the examination budget is exhausted).

  • Traverse the rejection, with or without claim amendments, and a Request for Continued Examination if the rejection is final.

  • File an ex-parte appeal.

Among these options, the demand for ex-parte appeals is higher than ever before. This is notable since the first three options have not significantly changed over the time periods under consideration.

Moreover, applicants only choose this option if:

  1. The applicant does not merely disagree with the examiner, but believes that the examiner’s position is unreasonable, such that arguments to the SPE and PTAB are likely to prevail. Applicants are disinclined to appeal matters within the examiner’s discretion – e.g., restriction requirements and 112(b) rejections – where it is more difficult to prove that the examiner’s exercise of discretion is in error. Rather, appeals are limited to circumstances where a third-party reviewer will find the examiner’s decision to be objectively unsustainable, such as an unreasonable claim construction or a misinterpretation of the references; and

  2. The applicant believes that the examiner cannot be persuaded away from this unreasonable position through further prosecution.

In other words: Demand for ex-parte appeals is a direct indicator of the persuasiveness of office actions, and examiners’ motivation to pursue this objective. Applicants choose to appeal rejections that are perceived as objectively unpersuasive, and where the examiner’s mind appears closed. The data reflects a sustained incidence – indeed, an upward trend – in unpersuasive examination over the past six years. If the USPTO honestly wants to assess examination quality, these are the types of metrics that it could use.

The trends revealed above raise a still further question:

Why are examiners, on a systemic level, more inclined to stand firm on unpersuasive rejections today?

D. Causes of Unpersuasive Office Actions

Several reasons exist for the prevalence of unpersuasive office actions:

  • The shift of patent examination from objective to subjective determinations. The trend of unpersuasive office actions correlates with the steady shift of patent examination from an objective process (a comparison of the contents of references with the claimed subject matter) to a subjective process (esoteric conclusions that any number of unrelated references can be “combined” under KSR; assertions that claimed subject matter is “abstract” or “preemptive”; and major ambiguity over whether claimed subject matter invokes a means-plus-function construction, or includes enough “hardware” to avoid a “non-statutory” 101 rejection).
  • The USPTO’s failure to identify and reward higher-quality, more persuasive examination. The 2015 Office of the Inspector General (OIG) Report about the Office of Patent Quality Assurance revealed that, during annual review, 96% of GS-13 examiners with partial signatory authority are rated as “commendable” or “outstanding.” Raises and promotions are based almost exclusively on output timeliness and volume, with no regard whatsoever for quality or persuasiveness of their work. Examiners have absolutely no incentive to improve quality: anything above the absolute worst 4% of office actions is treated as equivalent.

    This represents an abject failure of USPTO management to encourage – or even recognize! – examiners’ efforts to improve examination quality. And yet, the USPTO has not offered one word about changing such practices – indeed, it has yet to acknowledge that this management style is a problem.

  • The USPTO’s failure to penalize examiners for unreasonable examination errors. The OIG Report is even more critical about the failure of the USPTO to address examination errors:

    USPTO supervisors we interviewed indicated that there is an incentive to not charge errors in order to avoid the potential time-intensive error rebuttal process.

    From FY 2011 to FY 2013, examiners with an error rate identified by an OPQA independent reviewer still received an “outstanding” or “commendable” quality rating over 95 percent of the time.

    The Commissioner of Patents verbally announced that errors found by the Office of Patent Quality Assurance could not be used to calculate an examiner’s error rate.

    The disconnect is even more stark with respect to appeal outcomes. During the August 2015 webinar about patent quality, the Office of Patent Quality Assurance offered this response to a question about using appeals to inform “patent quality” assessments:

    We just did a recent study with the Office of Chief Economist, where he looked at some of the final written decisions by the board. We looked at a sample of the cases, to try to find what was the root cause of where maybe that case went off the rails and ended up to the board and what was the decision.

    One of the flaws in the certain type of review looking at that, by the time the case gets to the Board and the decision is made, we are looking at cases that were published and issued 10 years ago. Eight years ago. Seven years ago. It is a very lagging indicator of quality for us.

    This is the OPQA admitting that examination errors resulting in reversal on appeal are regarded as… well, “water under the bridge.” Evidence of past errors is viewed as immaterial to today’s examining corps! It is therefore unsurprising that examiners are undeterred by the risk of being overturned on appeal – as evidenced by the 75% applicant success rate on appeal.

These factors demonstrate an underlying choice of priorities by the USPTO: a focus on satisfying the public at large, in terms of issuing only high-quality patents, at the expense of satisfying applicants, in terms of providing an efficient, cost-effective process that produces technically accurate, well-articulated, persuasive office actions. To be sure, both objectives are important – but the USPTO’s focus and efforts are extremely lopsided.

This choice is driven by the impact of each issue on the USPTO. The Office suffers significantly from poor public perception of the quality of issued patents, while the degradation of the applicant experience – delayed, costly, and unfair outcomes – is largely invisible to the USPTO. The most obvious and measurable symptoms, such as pendency and examination backlog, are being addressed; but less directly measurable factors, such as the quality of examination, receive little attention.

And yet, such choices are not free of consequences – leading to an observation that brings us full circle: The cost of the USPTO’s tolerance for unpersuasive examination is borne by the PTAB.

E. Conclusion

As shown above, PTAB appeals are more in demand now than ever before. Indeed, demand remains elevated despite the USPTO’s efforts to suppress demand by raising prices, and the extensive backlog that renders the PTAB an (ahem) unappealing option in many cases. And demand is elevated because examiners have much more freedom (and virtually no penalties) for standing firm on objectively unreasonable rejections.

The USPTO must act to address the root causes of the increased demand for ex-parte appeals – but first, it must acknowledge this causal chain of events. Regrettably, to date, it has chosen to hide the problem, by increasing costs to suppress ex-parte appeals. This data bodes poorly for the near-term prognosis of the USPTO.


Data and charts available here:

First, this data is not offered as a quantitative measurement. Statisticians may well take issue with the particular calculations that I chose, such as the odds of the PTAB deciding on a particular case per year. Rather, I offer this data as evidence of qualitative trends in ex-parte appeal demand.

Second, various factors may have resulted in an overestimate of ex-parte appeal demand:

  • The cost of filing an ex-parte appeal is only partly determined by the USPTO’s fees – practitioners’ fees are also relevant, and may in fact be considerably larger than the USPTO fees. While aggregate data about attorneys’ fees are unavailable, we can presume that applicants base their decisions to appeal on the total cost – e.g., a 50% hike in USPTO fees might translate to only a 25% increase in the applicant’s total appeal cost. So the impact of cost on demand may be overestimated. On the other hand, many appeal briefs are filed pro se or by in-house counsel, in which case only the USPTO fees affect the cost.

  • When faced with a final rejection, an applicant must weigh the projected delay incurred by a PTAB appeal with the projected delay incurred by filing an RCE – i.e., the “RCE Hole.” The unappetizing nature of this choice for the applicant is exacerbated by the fact that the “RCE Hole” is an intentional loophole created by the USPTO as a punitive measure that examiners can exercise against “uncooperative” applicants.
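The fee-sensitivity point in the first bullet above can be made concrete. The sketch below uses purely hypothetical dollar figures (attorney’s fees assumed equal to USPTO fees) to show how a 50% hike in USPTO fees dilutes to a 25% increase in the applicant’s total cost:

```python
def total_cost_increase(uspto_fee: float, attorney_fee: float,
                        fee_hike: float) -> float:
    """Fractional increase in total appeal cost when only the USPTO
    fee rises by `fee_hike` (e.g., 0.50 for a 50% hike)."""
    old_total = uspto_fee + attorney_fee
    new_total = uspto_fee * (1 + fee_hike) + attorney_fee
    return (new_total - old_total) / old_total

# Hypothetical: attorney's fees equal to the USPTO fee, 50% fee hike.
print(total_cost_increase(1000.0, 1000.0, 0.50))  # 0.25
```

The larger the attorney-fee share of the total, the smaller the effective impact of a USPTO fee increase – which is why the cost-adjusted demand curve may overstate the deterrent effect of fee hikes.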

Third, various factors may have resulted in an underestimate of ex-parte appeal demand:

  • The overall appeal rates reported by the PTAB only account for appeals that are “forwarded” to the PTAB following the filing of briefs. Often, examiners will respond to a notice of appeal by reopening prosecution, either during pre-appeal review or in lieu of an examiner’s answer. Applicants, too, sometimes choose to file an RCE rather than maintain an appeal. So the reported metrics significantly understate the number of notices of appeal actually filed every year – and thus the actual appeal rate.

  • As noted above, this data does not capture cases where the examiner’s rejection is unfair, but on a matter that is mostly discretionary, such as restriction requirements, or “non-transitory” Nuijten-type rejections. Such issues have escalated at least as much as appealable issues, but are not reflected in appeal metrics.


These stats are over two years old but they do show wide variation between art units and tech centers. I’ve been trying to find time to update this info but am swamped.

Commentary on ex parte Appeals
One issue this morning’s column misses is the effect of employee morale on the quality of the work product – be it rejection/appeal or allowance. There is a mistaken assumption among some that higher quality is demonstrated by more rejections, but there is also a mistaken assumption among others that better quality translates to more allowances. Obviously there is a need for quality assessment.

Quality Assessment(s) Commentary
• One qualitative facet of that quality assessment – the give-and-take between the examiner and the inventor’s representative – is vitally important. Both examiner and practitioner need to consider the facts, i.e., the claim language and phrases as well as the references cited. It is incumbent on the examiner to present a fact-based, logically reasoned office action, be it rejection or allowance. Conversely, applicant’s representative stands in lieu of the inventor and is charged with presenting applicant’s fact-based response(s) regarding the claims and the cited reference(s).
• There needs to be training on cogent, fact-based communication – in office actions and applicant responses – for all who are involved in patent examination.

I don’t think this is accurate, at least not for 1600s. In my experience, only 1/2 of the appeals I see are reasonable. Bad attorneys make bad examination, too. And our win rate is closer to 50-60%, depending on how you score new grounds and affirmed-in-part.

I’m certain that there’s variance among art units, just as there is among practitioners. (For example – 100% of the ex-parte appeals that I file are reasonable!)

My observations are based on the aggregate statistics of how the ex-parte appeal process plays out – which, as it happens, put the actual figure exactly between yours and mine: applicants win the appeal process 75% of the time. That statistic was calculated over two million applications, so it really can’t be dismissed due to anecdotal observation.

And in contrast with your art unit’s appeal success rate, the article that I cited lists specific appeal success rates for several other art units – many of which were well under 25%. One art unit had an ex-parte success rate of 9%. That’s a whole lot of examiner errors being corrected by a very expensive process (for the USPTO as well as the applicant).

“That statistic was calculated over two million applications, so it really can’t be dismissed due to anecdotal observation.”

It appears that you’re reading that into the article. The article states: “Chris Holt and I derived the overall 75% success rate number using the groundbreaking PatentCore™ database (Chris is CEO of PatentCore). This database includes about 2 million file histories.” Using a database is not the same thing as using every entry in database. Mr. Werking goes on to explain the analysis of a particular art unit and then reports the result of the analysis for “the art units [he] works with.” There does not appear to be a statement suggesting that the analysis was performed across the entire PTO. Further Mr. Werking states “I expect the average for all art units to be very close to that number.”

I am absolutely not saying that the statistic should be “dismissed”. This is an extremely interesting metric that I’d like to see developed further. However, I think that the fact that the analysis is all but restricted to TC2100 and TC2400 should make any critical thinker suspicious. But that’s nothing that a PTO-wide analysis couldn’t fix!

Okay, that’s fair – there’s no indication of how many appeals those two million cases included.

Still… it’s a lot, right? Enough to evaluate between 50 and 200 appeals per art unit. Seems like a reasonably substantial sample size.

I’ll ask Kip for some more information about this study. If it’s only applicable across a specific domain, I’d like to include that kind of disclaimer here.

I think we’re still on different pages. You’re acknowledging that the 2e6 cases are not all appeals, and I agree.

What I meant to point out was that for the n appeals in the PatentCore database, there is no reason to conclude that all n were included in the analysis which produced the 75% result. The wording even suggests that only an art-unit-dependent fraction was included.

If all n were included, I would agree that would be a very meaningful sample size and analysis.

Hmm – now I think that you’re reaching a bit.

If a study states that it was evaluated over a sample set of (x) records, it’s fair to think that some of those records didn’t apply to the purpose of the study. But you’re arguing something more: that of the slice of (x) records that do apply, only some of the records were considered, and the rest were discarded. Absent any statement to support that conclusion, I don’t think that’s a reasonable inference.

I fully agree. So which statement are you interpreting as equivalent to “it was evaluated over a sample set of (x) records”?

The closest thing I see is “derived the 75% success rate number using the groundbreaking PatentCore database”. “Using” a database does not signal to me that the analysis was performed for the entire database. Additionally, my interpretation seems to be supported by how he indicates that the analysis was performed on an incomplete art unit basis (“I checked the statistics for the art units that I work with”) and that he does not actually say he calculated the average value for all art units (“I expect the average for all art units to be very close to that number”).

I suspect we are at an impasse and Mr. Werking must be appealed to. Hopefully his backlog isn’t too bad.

I agree that it would be helpful to have more background info about the scope of the study. I’ll check with Kip. (He is also a member of the National Association of Patent Practitioners – I first met him at the Annual Meeting this past July – so I can reach him that way.)
