
AI-Based Quality Assurance: Don't Fall For The Illusion Of Correct Dispatch QA

March 14, 2026
Jeff Clawson, M.D.


Ask Doc

It would be an understatement to say that we at the IAED™ and PDC™ have received a few questions about the emergence of AI and its ability to perform 911 Quality Assurance processes—specifically Key Questioning and PAI/PDI review and QA scoring. This has certainly been not only the question of the day, but of the month, the year, and the decade.

We may have been somewhat remiss in what we have outwardly said about it to our Priority Dispatch®/ProQA® centers, so let's be clear about what we are doing internally. As many others in 911/999/000/112 circles have stated, AI may be here to stay, but in just what way, and with what true abilities, remains to be seen—at least judging from what AI has evolved into, and from what has been believed in and instituted so far.

The Holy Grail of AI-QA is not here yet. Let me explain in more detail to shed some inside light on this subject, which, I may say, the Academy has been working on in a few different applications for several years—and we are now nearing the release of an AI-QA product that does better than just claim to do "100% of your calls."

To be totally honest, and not to too rudely debunk the vendors that have made this claim, it is currently impossible to do even reasonable QA on 100% of your calls, for several reasons, which I will address after some better disclosure of what is currently going on at "The Bell Labs of Dispatch"—the ARC Department of the IAED. Many venture capital-funded companies have dived dollar-heavy into this marketplace promising the world, but they lack the capability (though not necessarily the expertise) to accomplish it, for a few reasons that follow.

While a player-piano "review" of all calls is physically possible, the real issue is the ability to "accurately" score all 12 sections of any call, in a way that meets current QA necessities and standards—and no one out there is even close to this.

The ability for them to fully "review" and, in any remote (no pun intended) way, accurately review these calls—that is, acceptably and believably score them, at least per current IAED Quality Assurance Standards—is virtually impossible. These entities literally don't understand those standards, and, more to the point, even if they did, they don't have the IP copyrights and patent approvals to use them in any way—no matter whether IAED protocol questions and/or instructions are "typed in," spoken, or imparted. They also don't have the rights to embody (reproduce, copy, translate, or even transcribe) any of the IAED protocols (questions, instructions, etc.), training curricula, or QA standards in any way—period. We have made this very clear to them, as shown in the IAED/PDC Joint Statement on "Respecting IP Rights."

However, this would be a disingenuous position if, and only if, the Academy (in association with the Dev Team at PDC) were not fully engaged in providing a solution to this need. If we couldn't do it ourselves, we would join them, authorize them, or simply let them do it—and I can guarantee that! This is where the rubber moves from meeting the sky (the pie) to meeting the road (the asphalt).

The Academy is currently 100% involved in capturing the vast abilities of AI to address the very complex issues of actually doing complete and accurate QA of all parts of a call at the IAED standards level. Every case has at least 12 different (and I mean clearly different) types of formats, logic, and structures that must be clearly understood to address this objective effectively (and within these, several more subcategories). This is not easy, or we would have thrown something out there already simply to compete with (but only poorly address) the flurried activity of the various venture capital-funded entities. Historically, these entities are not in it for the long run, but for just enough time (3 to 5 years) to capitalize, cash out with big profits, and fly into the Western skies with the moneybags—leaving many of us, who might even have developed "enduring" relationships with the current operators, to deal with a new, and often clueless, management bunch steering that somewhat abandoned and ethically rudderless ship. (Been there, done that, and am obviously more jaded for the experience.)

Back to the real issue. Accurate, quality AI can do the 911 dispatch case review job right only if it has specific standards to work from. There is a basic, upfront QA failure built into "guidelines" (i.e., you should if you want to, but not always), so what is the actual rock-hard standard that applies to the performance of any medical, fire, or police case? The process (protocol) and procedure (QA standards) have to be clearly written down, understandable, and applicable—the solid Rules of the Road, if you will—not what-ifs, shouldas, or maybes.

As found in Principles, 6th edition, Chapter 13.10, Carolyn Smith-Marker, RN, a nursing quality assurance expert, stated in 1988: "Nurses can function according to defined purposeful expectations or by intuition. A nursing system can operate in a designated manner or haphazardly. Patient care can be given by design or by impulse and habit. Standards either exist or they do not. If they exist, they must be detailed, consistent and comprehensive or they will be shallow, irrelevant and worthless."

Here's a down-and-dirty example of what we hear out there. Let's say we are evaluating the application of Protocol 19: Heart Problems regarding the determination of the priority symptom of Chest Pain. Obviously, the operative Key Question is "Does s/he have any chest pain?" Betcha a buck that your QA thing would query the AI, "Did the calltaker ask about chest pain?" and rightfully receive a simple "yes" or "no" answer. However, this is not a true QA evaluation of this question, as there are at least seven variations we have known this "question" to be constructed as—some close to the mark, a few similar, and a couple completely wrong—even one considered dispatch malpractice.

"Does s/he have chest pain?" is the most common variation. You might say, well, close enough. But not actually, as "any" signals that even the least pain, or any related pains in that area, should be construed by the caller/patient as counting. This one is followed by "Has s/he had any chest pain?" "Has had" means when, exactly—now, yesterday, or last week? Even worse is another less common, but not rare, application of the question: "S/he doesn't have chest pain, does s/he?" This is often asked when the chest pain appears to be minimal or fleeting (and therefore "not so important"). As the lawyers would say (and they will), "Objection, leading the witness." Judge: "Sustained."

We don't consider the format and construct of any question to be a "suggestion," as often even a word or two can be way off the mark—and even lethal. Correct QA requires this level of application and review—not just that the calltaker said, in any way, shape, or form, the two words "chest pain." Key words are not questions (they are called "paraphrases"), and they are not determiners of clear objectives; the full question/sentence structure is often exact, and unforgivingly so. But only if we are to do it right and not just fast! In medical, fire, and police, fast is "cool," but right is enduring and defensible!
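To make the keyword-versus-construct distinction concrete, here is a purely hypothetical sketch—not IAED's actual scoring logic, and not any vendor's—of why "did they say the words chest pain?" is not question evaluation. The variant categories and pattern rules below are illustrative assumptions only; a naive keyword check passes every phrasing, while even a toy construct-level check separates the compliant question from drifted and leading forms.

```python
import re

# Hypothetical Key Question and variants, for illustration only.
COMPLIANT = "does s/he have any chest pain?"

def keyword_pass(utterance: str) -> bool:
    """Naive 'QA': did the calltaker utter the words 'chest pain'?"""
    return "chest pain" in utterance.lower()

def construct_score(utterance: str) -> str:
    """Toy construct-level classification (illustrative categories)."""
    u = utterance.lower().strip()
    if u == COMPLIANT:
        return "compliant"
    # Negative presumption: "...doesn't have..., does s/he?" leads the caller.
    if re.search(r"\bdoesn't have\b.*\bdoes s?/?he\b", u):
        return "leading (negative presumption)"
    # Past-tense drift: "has s/he had" leaves the timeframe ambiguous.
    if re.search(r"\bhas s?/?he had\b", u):
        return "tense drift (timeframe ambiguous)"
    return "non-standard construct"

variants = [
    "Does s/he have any chest pain?",
    "Does s/he have chest pain?",
    "Has s/he had any chest pain?",
    "S/he doesn't have chest pain, does s/he?",
]

for v in variants:
    print(f"{v!r}: keyword={keyword_pass(v)}, construct={construct_score(v)}")
```

Every variant sails through the keyword check; only the construct check distinguishes them. Real question evaluation is far harder than this sketch—speech recognition noise, paraphrase detection, and the full set of protocol constructs all compound the problem—which is exactly the author's point.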

I will say it and back it up: the currently touted "we review all questions" claim is simply the "Illusion of Correct Quality Assurance," which is not the basis for actionable and defensible Quality Improvement. What the IAED is doing must, in the long term, be accurate, reliable, and defensible—not simply a "feel good" reliance on a fad to check a state-required QA box on a yearly form.

A thought question on the future legal side of things: if an AI bot gets it wrong for you, especially in, or regarding, a real call, does any state or legal immunity apply to it, or only to a good-faith human operator? Let me know when you have the answer to that one!

While I would like to end with, "Be afraid, be very afraid," I won't. But I will say this is what the Academy (and The Bell Labs of Dispatch) was originally built for—and is now delivering on! Hope I'm right ... Onward through the AI fog ... Doc
