Patent quality is a hot topic at the USPTO – not only taking center stage on USPTO Director Lee’s radar, but also prompting a two-day Patent Quality Summit. However, this effort by the USPTO seems peculiar, because the issuing patent – as a work product or deliverable – is entirely drafted by the applicant. Once the USPTO has verified that the application meets all of the requirements of 35 USC and 37 CFR, its actual contributions to the issuing patent are the Notice of Allowance – a one-page boilerplate form, occasionally with a cursory statement by the examiner that the claims are novel – and the provision of a serial number and a shiny ribbon.
Rather, the USPTO’s primary work product is the office action – a statement of the examiner’s reasons for allowing or rejecting particular claims. While the USPTO scrupulously monitors examiners’ output in terms of quantity and timeliness, remarkably little attention is paid to evaluating the quality of the content of office actions. And a detailed (or even cursory) evaluation of office actions reveals abundant opportunities for improvement that the USPTO does not seem to appreciate.
This post is a call to action for Director Lee and USPTO officials: The USPTO can best contribute to the “patent quality” issue by closely evaluating and striving to improve the contents of office actions.
The quality of patents issued by the USPTO is high on the agenda of incoming USPTO Director Michelle Lee.1 Director Lee has expressed many opinions about the kinds of applications that should be filed, the kinds of patents that should issue, and the resources to be given to examiners for examination (better training, public searching, etc.). And of course, the examiner’s analysis and decisions are critically important to the patent process: whether yes or no, the examiner’s decision needs to be accurate and well-reasoned.
However, the centrality of these objectives in Ms. Lee’s agenda is puzzling for one key reason: The USPTO’s primary work product is not an issued patent.
The contents of issuing patents – the specification, figures, and claims – are conceived, written, and amended by the applicant. If the application satisfies all of the requirements of 35 USC and 37 CFR, the examiner completes a one-page Notice of Allowance, occasionally with a cursory statement of why the claims are novel. The patent issues in due course without further substantive changes beyond the applicant’s last set of amendments.
Rather, the USPTO’s primary work product is the office action: a specific statement of the examiner’s decision for the application. Office actions include the examiner’s interpretation of the claims; an enumeration of the statutes and case law principles that the examiner considers relevant to the application; and a comparison of the claimed subject matter with the closest references.
Of course, the office action is an expression of the examiner’s decision, which is the more central focus of “patent quality.” However, the quality of such decisions cannot be evaluated without a clear expression of the examiner’s rationale: not just a list of references that might be relevant, but why the references are relevant. All of the USPTO’s inward-facing “patent quality initiatives” – such as more extensive examiner training, and automated searches to identify prior art – are pointless if they do not improve the quality of the expression of the examiner’s decision in the office action.
Evidence of the USPTO’s skewed priorities is apparent through its Data Visualization Center, which provides extensive metrics of the USPTO’s self-assessment. These metrics reflect precise, detailed scrutiny of issues such as timeliness, pendency, and productivity:
- Central Reexamination Unit Processing Time for Ex Parte Reexamination
- First Office Action Pendency (months)
- Traditional Total Pendency (months)
- Office Time and Applicant Time – Traditional Total Pendency (months)
- Office Time and Applicant Time – Traditional Total Pendency – Requests for Continued Examination (months)
- Traditional Total Pendency Including Requests for Continued Examination (months)
- Forward Looking First Action Pendency (months)
- Pendency from Application Filing to Board Decision (months)
- Pendency of Applications Which Include At Least One Request for Continued Examination (months)
- Pendency from RCE Filing to Next Office Action (months)
- Pendency of Continuation Applications (months)
- Pendency of Divisional Applications (months)
- Track One Pendency to First Office Action (months)
- Track One Pendency to Final Disposition (months)
- Track One pendency from Petition Grant to Allowance (months)
- Track One Pendency from Filing to Petition Grant (months)
- Track One Office Time and Applicant Time – Traditional Total Pendency (months)
- First Action Interview Pilot (FAIP) Allowance Rates (percent)
- Patent Applications Allowed (number)
- Unexamined Patent Application Backlog (number of applications)
- Request for Continued Examination (RCE) Backlog (number of applications)
- Patent Application Production (number of office actions)
- Average Actions Per Disposal (number of office actions)
However, regarding examination quality, the Data Visualization Center has a very different set of metrics:
- Quality Composite Score
- Quality: Final Disposition Compliance Rate
- Quality: In-Process Compliance Rate
- Quality: First Action on the Merits Search Review
- Quality: Complete First Action on the Merits Review
- Quality Index Reporting
- External Quality Survey
- Internal Quality Survey
These metrics reflect an odd skew in the USPTO’s self-assessment: the USPTO precisely measures and scrutinizes examination speed and quantity, but evaluates quality through only a few subjective and poorly defined metrics. Indeed, even these few metrics are doubtful: the USPTO lists a “Final Disposition Compliance Rate” of 97%, yet Patent Trial and Appeal Board metrics indicate that 44% of appeals result in the reversal of at least one basis of rejection.
In the field of management, two popular statements provide insight about the use of metrics:
- Your metrics reflect your priorities.
- You get what you measure.
The magnitude and breadth of the USPTO’s failure to assess examination quality are documented in a recent report by the Office of the Inspector General. The findings of the OIG Report are revealing and troubling.
- First, the OIG Report begins with a summary of the USPTO’s examiner review metrics:
USPTO’s supervisors rate patent examiners on four performance elements, which are graded on a five-point scale, outlined in the examiner’s performance appraisal plans. The four performance elements for each examiner are:
- Production: Examiners issue determinations on patentability within the assigned time frames
- Quality: Examiners correctly determine whether a patent application should be approved or rejected
- Docket management: Examiners manage respective caseloads and properly select cases for review per USPTO policies
- Stakeholder interaction: Examiners provide appropriate services to stakeholders
Notably, none of these performance metrics reflects the quality of office actions. If an examiner reaches the right decision, on the right case, in the prescribed time frame – and yet fails to express the rationale for that decision clearly and correctly (or even coherently!) in the office action – the examiner has fulfilled all of the review metrics.
- Second, the OIG Report noted extensive problems in supervisors’ assessment of examiner quality:
During the course of the annual performance period, supervisors are required to conduct an in-depth review of a minimum of four patent determinations completed by the examiner, regardless of the total number of determinations completed.
USPTO management claims that supervisors review more than one case per quarter; however, there is no way to verify this because supervisors currently do not document which cases they review. In addition, USPTO supervisors we interviewed indicated that there is an incentive to not charge errors in order to avoid the potential time-intensive error rebuttal process.
Furthermore, the current standards often make it difficult to justify giving an examiner a rating other than “outstanding.” Errors can be found in 75 – and even 100 – percent of the cases reviewed, yet an examiner could still obtain a rating of “fully successful” or higher on the quality performance element.
Although USPTO implemented changes in FY 2011 to examiner performance appraisals to “align the patent examiner performance appraisal plans to organization goals,” some of the changes have made it more difficult to tie examiner performance to the issuance of high-quality patents. For example, USPTO relaxed the error rate of some examining activities by eliminating or combining multiple metrics into one quality error rate. Additionally, the new plan required some types of errors to have occurred multiple times before a supervisor could charge them to an examiner’s error rate.
Another impact on measuring examiner quality occurred prior to the introduction of changes to the performance appraisal system of FY 2011. We were informed that the Commissioner of Patents verbally announced that errors found by OPQA could not be used to calculate an examiner’s error rate. We confirmed that from FY 2011 to FY 2013, examiners with an error rate identified by an OPQA independent reviewer still received an “outstanding” or “commendable” quality rating over 95 percent of the time.
Underperforming examiners receive a series of escalating warnings before receiving a written warning. During the period of FY 2011 through FY 2013, of the approximately 6,000 to 8,000 patent examiners employed by USPTO during this time, 264 examiners received at least one written warning for production problems, and 233 received warnings for docket management problems. However, only 7 examiners received written warnings for low-quality decisions. Of note, an individual who received a written warning under the quality element still received an overall rating of “commendable” in the end-of-year rating.
- Third, the OIG Report notes several problems with the accuracy and effectiveness of the Office of Patent Quality Assurance:
OPQA is the official quality assurance program within the USPTO. It is important to note that, on average, OPQA reviews less than 1 percent of all office actions. The results of OPQA’s analysis feed into several components of USPTO’s official quality metrics, but these results are not used to assess the quality of particular offices within USPTO, nor are they used to assess the performance of individual examiners. Rather, the results are used to generate USPTO’s official quality metrics and provide corps-wide accuracy rates that affect the bonuses awarded to the supervisors of patent examiners.
We were informed that OPQA reviewers may identify, but not record, some errors. This practice is not based on written policy direction. This practice reduces our confidence in the accuracy of USPTO’s official quality metric.
The USPTO’s Composite Quality Metric is based on OPQA’s review of examiner decisions, which in turn is dependent on the number of errors identified by reviewers. For those patent actions examined by OPQA, USPTO was unable to provide an estimate on the number of errors that were recorded as “Needs Attention” instead of as an error.
- The OIG Report concludes with this (under)statement:
The weaknesses we identified with the current performance plan make it difficult to distinguish between patent examiners who are issuing high-quality patents, and those who are not. We are concerned with USPTO’s inability to distinguish and reward examiners performing at a truly outstanding level of performance versus those who are not.
However, the critique above does not reflect a mere “inability” to assess office action quality: it reflects a process that has been crafted to obscure office action quality. The process encourages supervisors to overlook and hide quality problems, generates inflated metrics that do not match reality, and heaps rewards on examiners even in the rare instances of documented quality issues.
The OIG Report is valuable for evidencing the severity of the examination quality problem – i.e., tolerance for examination errors, and a systematic effort to overlook and hide the true incidence and magnitude of this problem. Yet, the OIG Report fails to distinguish between accurate patent determinations – whether the examiner allowed claims that should have been rejected, or vice versa – and sufficient, clear, and accurate office actions – whether the office action sets forth reasoning that fully and persuasively conveys the rationale of the decision. The OIG Report relates the symptoms of poor examination quality, and recommends improvements to the review process, but regrettably fails to recommend anything specific regarding the contents of office actions.
All of these factors point to a cultural tolerance for poor-quality office actions. By failing to measure – and hence to value and strive to improve – the quality of office actions, the USPTO’s processes institutionalize that tolerance.
The next post here at USPTO Talk will discuss a number of specific flaws that frequently arise in office actions – flaws that reflect poor-quality examination, waste the resources of the USPTO (and the applicant), and generally degrade the quality of the USPTO’s work. This post will provide recommendations for addressing these issues.
- The following comments are from Ms. Lee’s opening remarks during the 2014 AIPLA Plenary Session:
I’ve asked that teams of employees from across the agency-from examiners to IT staff to policy experts-be put together to take a hard look at patent quality from every angle. We’re considering all options-big and small-before examination, during examination, and after examination. This includes upgrading IT tools for our examiners, such as fully implementing our Patents End-to-End system and expanding international work-sharing IT capabilities. It includes increasing resources to improve patent examination quality, for example, by expanding focused reviews of examiner work products to measure the impact of training; improving the effectiveness of interviews between examiners and applicants; and providing training to all of our employees that interact with customers. It also includes comparing best practices and collaborating to improve quality with our foreign counterpart offices; more on that in a bit. And it includes using big data techniques to measure and improve every stage of the examination process. What do I mean by big data? Well, we collect a lot of data during the examination process, but we haven’t had the resources to fully capitalize on its potential. Now we do.
While helpful, the proposed advances of “more training,” “more oversight,” and “faster computers” are overly general, and do not directly address any of the problems described above. An examiner with more training, more oversight, and a faster computer can generate the same low-quality office action as today… only faster.
Meanwhile, the specific suggestions from Ms. Lee’s keynote address include:
- Increased technical training for our examiners;
- More legal training, including on Section 112(f) on functional claiming;
- A glossary pilot program;
- Easier ways for third parties to submit prior art; and
- Enhanced use of crowdsourcing techniques.
Again, none of these initiatives addresses any of the problems with office actions specified above. Rather, these initiatives primarily raise applicants’ requirements for the patent specification, and/or make third-party invalidation easier.
- Of course, the patent community is familiar with this notion of “improving patent quality” by (1) ratcheting down the allowance rate, and (2) forcing applicants to file different applications. This was the agenda of Director Jon Dudas, whose term was characterized by blaming applicants for poor patent quality, and punitive administrative rules that not only provoked uproar, but exceeded the USPTO’s administrative authority. The consequences of this attitude were devastating: a crushing patent examination backlog, a precipitous increase in the rate of appeals, protracted patent pendency, and crippling problems of employee morale and retention. We can only hope that Director Lee does not intend to follow the same disastrous agenda of “improving patent quality” by arbitrarily punishing the patent community.