
Hey, USPTO: To Improve Patent Quality, Improve Office Action Quality


Patent quality is a hot topic at the USPTO – not only taking center stage on USPTO Director Lee’s radar, but also prompting a two-day Patent Quality Summit. However, this effort by the USPTO seems peculiar, because the issuing patent – as a work product or deliverable – is entirely drafted by the applicant. After the USPTO verifies that the application meets all of the requirements of 35 USC and 37 CFR, its actual contributions to the issuing patent are the Notice of Allowance – a one-page boilerplate form, occasionally with a cursory statement by the examiner that the claims are novel – and the provision of a serial number and a shiny ribbon.

Rather, the USPTO’s primary work product is the office action – a statement of the examiner’s reasons for allowing or rejecting particular claims. While the USPTO scrupulously monitors examiners’ output in terms of quantity and timeliness, remarkably little attention is paid to evaluating the quality of the content of office actions. And a detailed (or even cursory) evaluation of office actions will reveal abundant opportunities for improvement that the USPTO does not seem to appreciate.

This post is a call to action for Director Lee and USPTO officials: The USPTO can best contribute to the “patent quality” issue by closely evaluating and striving to improve the contents of office actions.


The quality of patents issued by the USPTO is high on the agenda of incoming USPTO Director Michelle Lee. [1] Director Lee has expressed many opinions about the kinds of applications that should be filed, the kinds of patents that should issue, and the resources to be given to examiners for examination (better training, public searching, etc.). And of course, the examiner’s analysis and decisions are critically important to the patent process: whether the answer is yes or no, the examiner’s decision needs to be accurate and well-reasoned.

However, the centrality of these objectives in Ms. Lee’s agenda is puzzling for one key reason: The USPTO’s primary work product is not an issued patent.

The contents of issuing patents – the specification, figures, and claims – are conceived, written, and amended by the applicant. If the application satisfies all of the requirements of 35 USC and 37 CFR, the examiner completes a one-page Notice of Allowance, occasionally with a cursory statement of why the claims are novel. The patent issues in due course without further substantive changes beyond the applicant’s last set of amendments.

Rather, the USPTO’s primary work product is the office action: a specific statement of the examiner’s decision for the application. Office actions include the examiner’s interpretation of the claims; an enumeration of the statutes and case law principles that the examiner considers relevant to the application; and a comparison of the claimed subject matter with the closest references.

Of course, the office action is merely the expression of the examiner’s decision, and it is the decision itself that is more centrally the focus of “patent quality.” However, the quality of such decisions cannot be evaluated without a clear expression of the examiner’s rationale: not just a list of references that might be relevant, but why the references are relevant. All of the USPTO’s inward-facing “patent quality initiatives” – such as more extensive examiner training, and automated searches to identify prior art – are pointless if they do not improve how the examiner’s decision is expressed in the office action.

The USPTO’s skewed priorities are apparent in its Data Visualization Center, which provides extensive self-assessment metrics. These metrics reflect precise, detailed scrutiny of issues such as timeliness, pendency, and productivity:

  • Central Reexamination Unit Processing Time for Ex Parte Reexamination
  • First Office Action Pendency (months)
  • Traditional Total Pendency (months)
  • Office Time and Applicant Time – Traditional Total Pendency (months)
  • Office Time and Applicant Time – Traditional Total Pendency – Requests for Continued Examination (months)
  • Traditional Total Pendency Including Requests for Continued Examination (months)
  • Forward Looking First Action Pendency (months)
  • Pendency from Application Filing to Board Decision (months)
  • Pendency of Applications Which Include At Least One Request for Continued Examination (months)
  • Pendency from RCE Filing to Next Office Action (months)
  • Pendency of Continuation Applications (months)
  • Pendency of Divisional Applications (months)
  • Track One Pendency to First Office Action (months)
  • Track One Pendency to Final Disposition (months)
  • Track One Pendency from Petition Grant to Allowance (months)
  • Track One Pendency from Filing to Petition Grant (months)
  • Track One Office Time and Applicant Time – Traditional Total Pendency (months)
  • First Action Interview Pilot (FAIP) Allowance Rates (percent)
  • Patent Applications Allowed (number)
  • Unexamined Patent Application Backlog (number of applications)
  • Request for Continued Examination (RCE) Backlog (number of applications)
  • Patent Application Production (number of office actions)
  • Average Actions Per Disposal (number of office actions)

However, regarding examination quality, the Data Visualization Center has a very different set of metrics:

  • Quality Composite Score
  • Quality: Final Disposition Compliance Rate
  • Quality: In-Process Compliance Rate
  • Quality: First Action on the Merits Search Review
  • Quality: Complete First Action on the Merits Review
  • Quality Index Reporting
  • External Quality Survey
  • Internal Quality Survey
And… that’s it.

These metrics reflect an odd skew in the USPTO’s self-assessment: the USPTO precisely measures and scrutinizes examination speed and quantity, but evaluates quality through only a handful of subjective, poorly defined measures. Indeed, even these few metrics are doubtful: the USPTO lists a “Final Disposition Compliance Rate” of 97%, yet Patent Trial and Appeal Board metrics indicate that 44% of appeals result in the reversal of at least one basis of rejection.

In the field of management, two popular statements provide insight into the use of metrics:

  • Your metrics reflect your priorities.
  • You get what you measure.
Applying those observations to the particular metrics gathered by the USPTO – is it any surprise that the USPTO exhibits strong timeliness and output, but systemic problems in quality?

The magnitude and breadth of the USPTO’s failure to assess examination quality are documented in a recent report by the Office of the Inspector General. The findings of the OIG Report are revealing and troubling.

  • First, the OIG Report begins with a summary of the USPTO’s examiner review metrics:

    USPTO’s supervisors rate patent examiners on four performance elements, which are graded on a five-point scale, outlined in the examiner’s performance appraisal plans. The four performance elements for each examiner are:

    • Production: Examiners issue determinations on patentability within the assigned time frames
    • Quality: Examiners correctly determine whether a patent application should be approved or rejected
    • Docket management: Examiners manage respective caseloads and properly select cases for review per USPTO policies
    • Stakeholder interaction: Examiners provide appropriate services to stakeholders
    (Footnote: USPTO has awards for the production and docket management elements, but there are no awards specific to the quality element.)

    Notably, none of these performance metrics reflects the quality of office actions. If an examiner reaches the right decision, on the right case, in the prescribed time frame – and yet fails to express the rationale for that decision clearly and correctly (or even coherently!) in the office action – the examiner has fulfilled all of the review metrics.

  • Second, the OIG Report noted extensive problems in supervisors’ assessment of examiner quality:

    During the course of the annual performance period, supervisors are required to conduct an in-depth review of a minimum of four patent determinations completed by the examiner, regardless of the total number of determinations completed.

    USPTO management claims that supervisors review more than one case per quarter; however, there is no way to verify this because supervisors currently do not document which cases they review. In addition, USPTO supervisors we interviewed indicated that there is an incentive to not charge errors in order to avoid the potential time-intensive error rebuttal process.

    Furthermore, the current standards often make it difficult to justify giving an examiner a rating other than “outstanding.” Errors can be found in 75 – and even 100 – percent of the cases reviewed, yet an examiner could still obtain a rating of “fully successful” or higher on the quality performance element.

    Although USPTO implemented changes in FY 2011 to examiner performance appraisals to “align the patent examiner performance appraisal plans to organization goals,” some of the changes have made it more difficult to tie examiner performance to the issuance of high-quality patents. For example, USPTO relaxed the error rate of some examining activities by eliminating or combining multiple metrics into one quality error rate. Additionally, the new plan required some types of errors to have occurred multiple times before a supervisor could charge them to an examiner’s error rate.

    Another impact on measuring examiner quality occurred prior to the introduction of changes to the performance appraisal system of FY 2011. We were informed that the Commissioner of Patents verbally announced that errors found by OPQA could not be used to calculate an examiner’s error rate. We confirmed that from FY 2011 to FY 2013, examiners with an error rate identified by an OPQA independent reviewer still received an “outstanding” or “commendable” quality rating over 95 percent of the time.

    Underperforming examiners receive a series of escalating warnings before receiving a written warning. During the period of FY 2011 through FY 2013, of the approximately 6,000 to 8,000 patent examiners employed by USPTO during this time, 264 examiners received at least one written warning for production problems, and 233 received warnings for docket management problems. However, only 7 examiners received written warnings for low-quality decisions. Of note, an individual who received a written warning under the quality element still received an overall rating of “commendable” in the end-of-year rating.

  • Third, the OIG Report notes several problems with the accuracy and effectiveness of the Office of Patent Quality Assurance:

    OPQA is the official quality assurance program within the USPTO. It is important to note that, on average, OPQA reviews less than 1 percent of all office actions. The results of OPQA’s analysis feed into several components of USPTO’s official quality metrics, but these results are not used to assess the quality of particular offices within USPTO, nor are they used to assess the performance of individual examiners. Rather, the results are used to generate USPTO’s official quality metrics and provide corps-wide accuracy rates that affect the bonuses awarded to the supervisors of patent examiners.

    We were informed that OPQA reviewers may identify, but not record, some errors. This practice is not based on written policy direction. This practice reduces our confidence in the accuracy of USPTO’s official quality metric.

    The USPTO’s Composite Quality Metric is based on OPQA’s review of examiner decisions, which in turn is dependent on the number of errors identified by reviewers. For those patent actions examined by OPQA, USPTO was unable to provide an estimate on the number of errors that were recorded as “Needs Attention” instead of as an error.

  • The OIG Report concludes with this (under)statement:

    The weaknesses we identified with the current performance plan make it difficult to distinguish between patent examiners who are issuing high-quality patents, and those who are not. We are concerned with USPTO’s inability to distinguish and reward examiners performing at a truly outstanding level of performance versus those who are not.

    However, the critique above does not reflect a mere “inability” to assess office action quality: it reflects a process that has been crafted to obscure office action quality. The process encourages supervisors to overlook and hide quality problems, generates inflated metrics that do not match reality, and heaps rewards on examiners even in the rare instances where quality issues are documented.

The OIG Report is valuable for evidencing the severity of the examination quality problem – i.e., a tolerance for examination errors, and a systematic effort to overlook and hide the true incidence and magnitude of this problem. Yet, the OIG Report fails to distinguish between accurate patent determinations – whether the examiner correctly allowed or rejected the claims – and sufficient, clear, and accurate office actions – whether the office action sets forth a rationale that fully and persuasively conveys the basis for the decision. The OIG Report relates the symptoms of poor examination quality, and recommends improvements to the review process, but regrettably fails to recommend anything specific regarding the contents of office actions.

All of these factors point to the same conclusion: by failing to measure (and hence to value, and to strive to improve) the quality of office actions, the USPTO’s processes reflect a systematic, cultural tolerance for poor-quality office actions.

The next post here at USPTO Talk will discuss a number of specific flaws that frequently arise in office actions – flaws that reflect poor-quality examination, waste the resources of the USPTO (and the applicant), and generally degrade the quality of the USPTO’s work. That post will also provide recommendations for addressing these issues.


Notes:

  1. The following comments are from Ms. Lee’s opening remarks during the 2014 AIPLA Plenary Session:

    I’ve asked that teams of employees from across the agency – from examiners to IT staff to policy experts – be put together to take a hard look at patent quality from every angle. We’re considering all options – big and small – before examination, during examination, and after examination. This includes upgrading IT tools for our examiners, such as fully implementing our Patents End-to-End system and expanding international work-sharing IT capabilities. It includes increasing resources to improve patent examination quality, for example, by expanding focused reviews of examiner work products to measure the impact of training; improving the effectiveness of interviews between examiners and applicants; and providing training to all of our employees that interact with customers. It also includes comparing best practices and collaborating to improve quality with our foreign counterpart offices; more on that in a bit. And it includes using big data techniques to measure and improve every stage of the examination process. What do I mean by big data? Well, we collect a lot of data during the examination process, but we haven’t had the resources to fully capitalize on its potential. Now we do.

    While helpful, the advances of “more training,” “more oversight,” and “faster computers” are overly general – and do not directly address any of the problems described above. An examiner with more training, more oversight, and a faster computer can generate the same low-quality office action as they do today… only faster.

    Meanwhile, the specific suggestions from Ms. Lee’s keynote address include:

    • Increased technical training for our examiners;
    • More legal training, including on functional claiming under Section 112(f);
    • A glossary pilot program;
    • Easier ways for third parties to submit prior art; and
    • Enhanced use of crowdsourcing techniques.

    Again, none of these initiatives addresses any of the problems with office actions specified above. Rather, these initiatives primarily raise the requirements that applicants must satisfy in the patent specification, and/or make third-party invalidation easier.

    Of course, the patent community is familiar with this notion of “improving patent quality” by (1) ratcheting down the allowance rate, and (2) forcing applicants to file different applications. This was the agenda of Director Jon Dudas, whose term was characterized by blaming applicants for poor patent quality, and by punitive administrative rules that not only provoked uproar, but exceeded the USPTO’s administrative authority. The consequences of this attitude were devastating: a crushing patent examination backlog, a precipitous increase in the rate of appeals, protracted patent pendency, and crippling problems of employee morale and retention. We can only hope that Director Lee does not intend to follow the same disastrous agenda of “improving patent quality” by arbitrarily punishing the patent community.

8 Comments

[…] Having qualified outsiders participate in quality reviews would bring a different yet important perspective to the review process. The patent system is designed to find a balance between the rights of patent owners and the rest of humankind. The practitioners represent the patent owners. For example, I would expect that practitioner special examiners would tend to give more weight than does the Office to the quality of communications from the Office, to factors such as the clarity of the record or the quality of Office actions. […]

Let’s not forget the fact that when an Examiner is overturned by the PTAB, they suffer absolutely no consequences, and there is no effort made to review the Office Actions they are currently sending out for the same/similar deficiencies overturned by the PTAB.

Entirely true, and a very serious problem.

I am primary examiner, and while I agree to an extent with all of the above; I have complete lack of faith when comes to PTAB; I have been affirmed on cases where one examiner (reviewer) agreed with me, but other agreed with applicant attorney, so I had to get third opinion and proceed to board. Yet, there were cases when both reviewers agreed 100%, but board affirmed the applicant. In my experience, I would not even look at what PTAB had to say, it really feels like they decide based on which side of bed they woke up rather than facts being presented. Also, please note that when examiner proceeds to appeal board; the applicant’s arguments and examiner’s response is reviewed by two reviewers (usually primary and SPE). Can they all be wrong, especially when they are looking at arguments presented by the applicant to prove examiner wrong? Sure, but if that is the case then all three should be penalized, SPE more then anyone else since the examiner is examining within his SPE’s acceptable interpretation, and his rejection is valid enough that he can convince other two to agree with him.

Regarding quality, a primary examiner has about 13 hours (more or less) to work on office action (this includes, reading application, reviewing, understanding, searching, and writing the office action (really that does not leave much time for reviewing it, what you guys think?)); On serious note, if you want to increase the examination quality, got to increase the time, and I am not talking about 4-6 hours increase. Be realistic, we are humans too; mind you we have to get creative when searching because it is not straight forward search where we put in key terms and get a reference; usually any good law firm already does this before filing an application (I don’t have experience, but I assume they do perform some sort of search). Reading on average 25 pages, forming strategy for search, reviewing for legal compliance, reviewing 100s references to find relevant references, map each limit to relevant references, on average type about 20 pages of office action; all this within 13 hours; now do this for your entire career on daily bases.

After this you guys want the examiner to be penalized, get error charged, more quality reviews etc…. Just imagine, on average an examiner has 25 claims in application, each claim has 4 limitations, and he does this for living. You guys really think that you can’t find anything stretched (not wrong), if you really want to? Give us a week for office action, then let’s talk quality. The patent application is not first grade story book, it is highly complex technical document with ever changing technology.

Just to give you guys an idea, what these reports do to examiners…; within last month, I have done 11 actions (barely making my production, 11 only because I also do search mentoring and reviews for junior examiners, so I have about 64 examining hours a bi-wk); 9 of those actions has been reviewed by the SPE (don’t why, he must be trying to impress the upper management, or just being A ____), out of 9; I got great jobs on 6; on 3 I got an error just because SPE is ______; I took those errors to quality people, and other SPE just to prove my SPE wrong, he withdrew the errors (I am still working on one); I have spent over 16 hours rebutting errors, that should never been issued, and those hours still count toward my examining.
Mind you, I am an examiner who in his 8 years of examining never got a single error, not from SPE, not for OPQA; even on my Sig program, I didn’t event get a single question. You guys want to know, how I do this? I work 12-14 hours a day, including weekends, so some A ______ SPE can tell me, my quality is not good. You guys have brain to write all this _____, surely you guys have brain to tell that no one can write office action in even 24 hours that merits high quality. (Please note, I am only expressing my opinion with regard to my Technology Center, I am not sure if it is true for every TC)
*Conclusion— Monitor the SPE (some SPE are anal about each term, some sign it without even reading; rotate the reporting SPE, so each examiner can get fair treatment; review each SPEs reviews)
*Error that effects the examiner’s rating should be given only for art related issues with exception of allowance (allowance is final product, so everything needs to be addressed properly); the other issues usually can easily be resolved with simple phone call, and are pretty clear cut, if examiner missed it, it is simply due to human mistake (they should still be monitored, but for coaching and mentoring purposes) and does not really affect anything unless it is allowance.
*Ultimately, increase the examining time, let human’s do human job and robot’s do robot’s job.
*The applicant should only claim their main invention in straight forward manor, rather than claiming rubbish, just to make search difficult (this is where the most of application fall); really just put out what is your invention, so examiner can exactly search that feature and proceed; I understand that applicant wants to claim broad scope, and play the game; but then don’t cry the quality, because broad claim also leads to broad interpretation. (May be PTO can have pilot program for people who want to really just claim what their main invention is)

Please excuse any _____ words, it is not professional but these sort of articles and comments don’t consider the underlying issue, they just want blood. If you guys really feel that their is quality issue, force PTO to change the examining hours to 1 application a week; then go ahead apply these quality criteria’s; 13 hours and crying for this criteria is just P_____ on examiners who take their job seriously (for examiner who don’t give S——-, this reports, error performance does not matter; by the time you fire them, it be good year for them, and they will just go to next job and do same thing, and when they fail their, then my tax will pay for them to sit at home and laugh at my A— for producing quality work in 13 hours).

The standard of English of Hakeem’s comment is a damning indictment of the quality of USPTO examination. Just some of my favourite examples:
“You guys have brain to write all this”
“let human’s do human job and robot’s do robot’s job”
“Error that effects the examiner’s rating should be given”
“The applicant should only claim their main invention in straight forward manor”
“…and when they fail their, then my tax will pay for them to sit at home..”

So what of it? They don’t get rewarded for winning either. If an examiner gets reversed 50% of the time, that’s good IMO. That means they’re pushing what might not be allowable to be decided by a panel. You don’t want a examiner who only pushes cases to the board she knows she’ll win, because that means she’s allowing applications that aren’t allowable. You don’t want an examiner who looses all the time too.

> If an examiner gets reversed 50% of the time, that’s good IMO. That means they’re pushing what might not be allowable to be decided by a panel.

It is not the examiner’s job to “push what might not be allowable to the panel.” That is a waste of the PTAB’s resources – and one of the reasons why the PTAB is swamped and currently running a four-year backlog: too much “kicking the can down the road.”

For close cases, the examiner should call in the SPE, director, QAS, OPQA, and/or OPLA. These people are within the examiner’s supervisory chain, and are collectively responsible for making these types of decisions. Examination should never end with an answer of: “we just don’t know, it’s too close to call,” and the PTAB should not be called in for those decisions.

Rather, the PTAB’s central purpose is resolving legitimate disputes, where both the examiner and applicant believe that they have a strong argument. The PTAB is intended to break stalemates by putting the case before administrative law judges for review.

> You don’t want a examiner who only pushes cases to the board she knows she’ll win, because that means she’s allowing applications that aren’t allowable.

Allowance decisions are reviewed by the QAS and the SPE, and sometimes by the technology center director and/or the OPLA. If they all agree that an application should be allowed, then that’s a pretty good sign that it’s reasonable.

