
Sharpening the Patent Process


Here’s a basic principle of legal systems: Before writing more laws, enforce the laws you’ve got. When a city discovers that its residents are driving recklessly and ignoring speed limits, the solution is more cops, better radar guns, and more tickets – not overhauling the local driving laws to be even more strict.

The same principle applies to the patent system. Many of the current problems with the patent system don’t require the haphazard legislative changes sought by “patent reform” advocates – they can be addressed by sharpening the patent process to utilize the laws that we already have more effectively. Like a dull blade, our patent system does not need to be melted down and recast: it just needs to be sharpened.

Critics raise many different types of complaints about the patent system, such as:

  • Patent language that is difficult to understand.

  • The issuance of patents with overly broad claims.

  • The unreliability of patents: the perception that patents are issuing with invalid claims, and the extensive, cumbersome legal apparatus in place to reevaluate issued patents (the PTAB, district courts, the CAFC, and the Supreme Court).

  • The exorbitant costs and protracted delays of the patent process.

  • The effectiveness of forum-shopping among district courts to find one that routinely issues decisions favoring a particular type of litigant.

  • The existence of unresolved, systemic problems, such as “patent trolls.”

These and other complaints form three general classes of problems. Notably, none of these problems reflect a deficiency in the written law, i.e., Title 35 of the U.S. Code and Title 37 of the Code of Federal Regulations. Rather, these problems reflect deficiencies and inefficiencies in the administrative and legal processes by which these laws are applied to patents and patent applications.

The three broad classes of problems with the patent system, and some workable solutions, are as follows.

  • Problem #1: Inconsistency, unreliability, and unpredictability.

    Symptoms of this problem:

    • Symptom #1: If the patent examination process were highly accurate, most patent examiners’ decisions would accurately reflect the law, and would nearly always be upheld upon review by the PTAB, CAFC, and Supreme Court. Instead, examiners’ decisions are routinely overturned: the PTAB reversal rate is over 40%; reversal by both district courts and the CAFC is common; and the Supreme Court’s reversal rate of CAFC decisions exceeds 80%, both this year and historically.

    • Symptom #2: If the same patent application is examined by two different patent examiners, both examiners should find the same references, and should reach the same conclusion with nearly identical rationale. Instead, allowance rates and decisions are extraordinarily variable from one examiner or art unit to the next.

    • Symptom #3: For all but the most borderline / “close call” patents and patent applications, a patent practitioner should be able to compare the claims with a particular set of references, and provide a reliable prediction as to whether each claim would be allowed or rejected by the patent office. In reality, such predictions are hopelessly unreliable, because the outcome depends heavily on the judgment of the particular examiner or judge.

    Each of these symptoms reflects a central, endemic problem in patent law: subjectivity. Our patent system tolerates far too much variance in examination and review: too much of each decision is driven by the reviewer’s personal philosophy and subjective judgment of the application, rather than by a cohesive framework grounded in objective facts.

    This subjectivity resides in two distinct sources:

    • Sub-Problem #1: The failure of the courts, and specifically the Supreme Court, to elucidate and apply objective tests of patent law. Courts have replaced a large number of rigorous, objective, and evidence-based tests in patent law with vague principles, postulation, and “totality of the circumstances” discretion – i.e., “smell tests.”


      • In KSR v. Teleflex, the Supreme Court altered the test of whether two or more references could be combined to support an “obviousness” rejection from: “do the references provide an explicit ‘teaching, suggestion, or motivation’ to combine them?” to… “common sense.” Literally, “common sense” is the test of whether references can be combined to form an obviousness rejection. That notion should alarm every single person who’s ever studied any field of law.

      • In Nautilus v. Biosig, the Supreme Court altered the standard for whether a patent claim was indefinite from: “is the claim insolubly ambiguous?” to: “could readers reasonably interpret the scope differently?”

      • In a continuing series of cases – Bilski v. Kappos, Mayo v. Prometheus, Alice Corp. v. CLS Bank – the Supreme Court has altered the standard of patent-eligible subject matter from the basic test established in State Street Bank v. Signature Financial Group (“does the invention have any specific utility?”) to a violently fluctuating set of decisions about patentable subject matter, while explicitly refusing to state any specific criteria that can be used for this determination.

      • In Octane Fitness v. ICON Health & Fitness, the Supreme Court completely replaced the CAFC’s specific framework for determining whether attorney’s fees could be awarded with a “totality of the circumstances” test.

      In these and many other instances, the courts have replaced an objective or rigid legal test with a “flexible” or “nuanced” approach, by which an examiner or judge can make a decision based on their feelings about the matter. This “smell test” approach is the antithesis of a coherent legal system.

      Sub-Solution #1: The affinity of the Supreme Court and CAFC for reviewer discretion and “smell tests” must be reined in. Substantive determinations in the patent process must be reformulated as specific, mechanical, objective, fact-driven analyses that can be consistently, reliably applied to any application by any individual to reach the same result.

    • Sub-Problem #2: Systemic tolerance for factual errors.

      Even in areas of patent law where factual tests are still utilized – e.g., what a particular technical reference actually teaches, or the interpretation of a claim – factual errors are abundant, and are simply tolerated. At each stage of review, the reviewer is not held accountable for making determinations that are factually accurate and substantively sound.

      This lack of accountability is most readily apparent within the USPTO:

      • The penalty for an examiner failing to read the specification, or making a basic mistake about science or technology, or asserting that a reference teaches something that it doesn’t, is… nothing.
      • The penalty for an examiner refusing to consider an applicant’s argument about the claims or a reference, even repeatedly, is… nothing.
      • The penalty for an examiner having a rejection reversed via PTAB appeal is… nothing.
      • The penalty for an examiner having a factual determination in an issued patent found invalid by the PTAB or a court, or for having such decisions reversed by a higher-level court, is… nothing.

      Simply put, the entire patent system has a systemic tolerance for factual errors. These errors are simply accepted by the USPTO as a regular and necessary element of patent examination, and it’s the responsibility of the applicant, patentee, or accused infringer to correct them – often at tremendous expense and delay.

      Regrettably, the USPTO is in denial about this systemic problem. Its Data Visualization Center reports a “Final Disposition Compliance Rate” (“the correctness of the examiners’ overall determination of the patentability of the claims in the decision to finally reject or allow an application,” based on a “random sample review of allowances and first office actions”) of 96.6%. This metric does not comport with the over-40% reversal rate at the PTAB – and that figure doesn’t even include appeals where the SPE opts to re-open prosecution and vacate the examiner’s position, rather than risk a PTAB reversal. Indeed, it’s unclear how the USPTO manages to assess the “correctness” of such determinations in an ex parte manner, i.e., without consulting the applicant. It seems that the USPTO listens to the examiner’s side of an argument and says, “I guess that sounds plausible,” without considering the applicant’s position – in contrast with the inter partes PTAB appeal process.

      Sub-Solution #2: Raise the accountability and expectations of factual accuracy for examiners (and more generally for courts). Give examiners more time and resources to make accurate determinations, and then hold them rigidly accountable for accuracy. Also, force the federal court system, from the circuits up to the Supreme Court, to agree upon an interpretation of patent law that doesn’t result in frequent reversals upon review.

  • Problem #2: Inefficiency.

    Symptoms of this problem:

    • Symptom #1: The average delay between the filing of a patent application and a first office action – i.e., the amount of time the application sits idle before an examiner first acts on it – is 19.3 months.[1]

    • Symptom #2: The average duration of patent examination, from filing to disposal (issuance or termination) – i.e., the duration of the trip through the patent office – is 38 months.

    • Symptom #3: The average delay between submitting briefs and arguments to the Patent Trial and Appeal Board and the receipt of its decision – i.e., the amount of time the application sits idle in the appeal process – is 2.2 years. The total combined pendency for appealed cases, from filing through the appeal decision, is 88 months, i.e., over seven years.

    • Symptom #4: The “patent term adjustment” of the average issuing patent – i.e., the free extension of the patent term beyond the typical “20 years from priority filing date” calculation, due to “unreasonable delay” by the USPTO in completing the examination process – is 18 months.

    The obvious question is: Why is the patent system so backlogged? Several reasons exist:

    • Increases in patent filings. While it’s certainly true that the USPTO is dealing with increased volume – annual patent application filings have risen 50% in the past 10 years – the USPTO has not dealt with this problem well: it is currently experiencing a backlog of 616,000 unexamined applications. Although the USPTO has made progress against its historic high (mid-2010) of 740,000 unexamined applications, the backlog has steadily grown over the last year, from 593,000 in February 2013. As an institution, the USPTO has simply failed to solve this problem.

    • High production requirements. In trying to address the backlog, the USPTO has become focused on “productivity,” as measured by how many office actions an examiner can push out the door in a given period.

      Timeliness and quality are both important, but timeliness can be automatically measured, while quality is more difficult to evaluate; and for any quantum of effort, quantity and quality are inversely related. Thanks to this relationship, the USPTO’s employment system encourages and rewards a series of fast but weakly reasoned office actions – and punishes solid office actions that express an accurate and convincing position, but that require more time to formulate.

      Of course, in the aggregate, issuing eight office actions that incrementally reach a correct result is much less efficient than preparing a single well-reasoned one, so the USPTO’s examination metrics reward timeliness at the huge expense of efficiency. In the language of process engineering, the USPTO’s policies are overly focused on the “shallow cost” – i.e., the cost of taking a single action – and inadequately focused on the “deep cost” – i.e., the total cost of all of the steps needed to complete a particular task.

    • Patent examination policies that equated “high patent quality” with “low allowance rates.”

      For most of the 2000’s, in an attempt to raise the quality of patent examination, the USPTO, under Director Jon Dudas, exerted tremendous pressure on examiners to reduce patent allowance rates in furtherance of patent “quality.” Instead of allowing the (let’s say) 60% of the applications that typically would have been allowed, examiners were instructed to allow only the best 20% – the “cream of the crop.”

      However, even the subjective patent system that we have isn’t that subjective: if an examiner can’t articulate a legal reason to reject an application, the patent office is legally obligated to allow it. Examiners were caught in the unappealing position of having to reject many applications, despite the unavailability of a plausible legal basis for the rejection. The result was an epic amount of churn of applications that examiners were not permitted to allow, and could not properly reject.

    • A miserable employment environment for most of the 2000’s, as documented by a Government Accountability Office (GAO) report citing “low morale and an atmosphere of distrust,” as well as out-of-touch management, as the causes of high examiner attrition, at the rate of one examiner quitting the patent system for every two examiners hired.

    • “Fee diversion”: The historic practice of Congress routinely siphoning revenue from the USPTO to other projects, thus depleting the USPTO of vital resources to improve its examination process.

    These and other problems drained efficiency out of the USPTO, causing applications to take a much longer time to issue.[2]

    Solution #2: The USPTO needs a radical shift in policy focus, from valuing “productivity” (timeliness at the expense of accuracy) and “quality” (as measured by allowance rates) to efficiency: the process of reaching a factually accurate answer with the lowest deep cost, i.e., the quickest path to an absolute final disposition, without regard to the shallow cost, i.e., the resources required for each individual step in the process.

  • Problem #3: Failure of Leadership.

    Since 1974, the position of Chair of the Federal Reserve has been held by six people. Their average term is 7.3 years – and excluding the newest chair, Janet Yellen, who just started in February, the average term rises to 8.8 years. Similarly, the average tenure of a Fortune 500 company CEO is 9.7 years.

    In this same 40-year time frame, the USPTO has had 14 leaders, with an average tenure of 2.8 years. The tenure of the last eight USPTO directors is as follows:

    • Michelle K. Lee: 0 years, 9 months
    • Teresa Stanek Rea: 0 years, 10 months
    • David Kappos: 3 years
    • John Doll: 0 years, 7 months
    • Jon Dudas: 5 years
    • James Rogan: 2 years
    • Q. Todd Dickinson: 3 years
    • Bruce Lehman: 5 years

    The current state of the USPTO directorship is even worse. Since David Kappos announced his plans to vacate the office in November 2012, a new director has not been appointed – indeed, no one has even been nominated for the position. (Teresa Stanek Rea stepped into the role as “acting” director, and Michelle Lee is serving as “deputy director” in the absence of an actual appointee.) The office of director has been vacant for 19 months and counting – not because of political gridlock, but because of a simple lack of attention from the federal government.

    Leading any significant enterprise takes time: time to understand the system, to identify the sources and causes of problems, and to devise and implement effective countermeasures. Unfortunately, the revolving door of the USPTO head office prevents any director from completing that process. It is bizarre and distressing that the federal government is incapable of hiring a director of the USPTO who is qualified and dedicated to run the office for an extended period. It is apparent that the federal government regards this position as a political appointment rather than a functionally critical office – and is unable to draw a connection between its disrespect for the office and the sustained systemic problems of the patent system.

    Solution #3: The federal government must appoint a USPTO director who has the expertise and the responsibility to identify the most important policy issues facing the entire U.S. patent system – and must leave that person in charge long enough to demonstrate results.

These changes, together, could address a very wide variety of problems with the USPTO. These changes would bring to the patent system many of the improvements that “patent reform” advocates seek: clearer rules of patentability; more accurate examination and more reliable patents; and a comprehensive review and solution to problems such as patent trolling. Best of all, none of these changes require new patent legislation: they simply require a redesign of the administrative processes by which our current body of patent law is implemented.

So why are these issues not the core of the “patent reform” movement? Because they would benefit everyone, not any particular special interest group; because these ideas are meaningful process improvements, not ideology-driven policy changes; and because it’s easier and more fun to criticize the patent system and to propose detached, academic ideas, than to develop an opinion that’s informed by the actual mechanics and history of the patent system.


  1. Metrics about the USPTO’s examination status are courtesy of the USPTO’s Data Visualization Center.
  2. The delays of the USPTO examination process exacerbate the public perception of the USPTO as incompetent. When members of the public encounter newly issued patents, they see not cutting-edge technologies, but technologies that have been in use for years and are now conventional… without realizing that those applications were filed over four years ago!
