Episode 151

September 28, 2025

00:16:21

151: EQA of ctDNA Mutation Testing Across the COIN Consortium

Hosted by

Gustavo B Barra

Show Notes

Episode 151: EQA of ctDNA Mutation Testing Across the COIN Consortium

In this episode of PaperCast Base by Base, we explore how 16 Dutch laboratories evaluated their real‑world workflows for circulating tumor DNA (ctDNA) mutation testing across BRAF, EGFR, and KRAS using a coordinated external quality assessment within the COIN consortium.

Study Highlights:
The team distributed six plasma samples—three commercial references with predefined variants and three patient‑derived diagnostic leukapheresis samples—to participating labs, asking them to run their routine preanalytical and analytical pipelines, including ddPCR, small PCR panels, and next‑generation sequencing. The exercise revealed broad variability in plasma input, extraction methods, and elution volumes, and performance scores based on protocol adherence, overall detection, and precise genotyping showed that only 38% of labs achieved a score above 0.90. Although 81% reached a 100% overall detection rate for the variants they assayed, clinically actionable mutations such as EGFR p.(S752_I759del), EGFR p.(N771_H773dup), and KRAS p.(G12C) were frequently mis‑genotyped, largely reflecting assay design limits. NGS approaches generally enabled more accurate variant‑level calls but carried a higher risk of false positives, while single‑target assays demonstrated sensitivity yet lacked the breadth to cover all guideline‑relevant loci.
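
Why do plasma input, extraction method, and elution volume matter so much? As a rough illustration (a minimal sketch, not the study's protocol), the Python snippet below estimates how many mutant ctDNA copies actually end up in a single reaction. Only the 4 mL aliquot size and the 0.17% minimum VAF come from the episode; the cfDNA yield, extraction efficiency, elution volume, and per-reaction input are hypothetical values chosen purely to show how quickly the numbers shrink.

```python
# Back-of-the-envelope sketch with hypothetical values (only the 4 mL aliquot
# and the 0.17% VAF come from the episode): how plasma input, extraction
# efficiency, elution volume, and per-reaction input determine the number of
# mutant ctDNA copies a single assay reaction can actually see.

PG_PER_HAPLOID_GENOME = 3.3  # ~3.3 pg of DNA per haploid human genome copy


def mutant_copies_per_reaction(plasma_ml, cfdna_ng_per_ml, extraction_eff,
                               elution_ul, reaction_input_ul, vaf):
    """Estimate mutant genome copies loaded into one reaction."""
    recovered_ng = plasma_ml * cfdna_ng_per_ml * extraction_eff
    total_copies = recovered_ng * 1000.0 / PG_PER_HAPLOID_GENOME  # ng -> pg -> genome copies
    copies_in_reaction = total_copies * (reaction_input_ul / elution_ul)
    return copies_in_reaction * vaf


# Two hypothetical workflows testing the same 0.17% VAF variant:
a = mutant_copies_per_reaction(4.0, 10.0, 0.8, elution_ul=50.0, reaction_input_ul=20.0, vaf=0.0017)
b = mutant_copies_per_reaction(2.0, 10.0, 0.6, elution_ul=100.0, reaction_input_ul=10.0, vaf=0.0017)
print(f"Workflow A: ~{a:.1f} mutant copies per reaction")  # roughly 6-7 copies
print(f"Workflow B: ~{b:.1f} mutant copies per reaction")  # well under 1 copy: stochastic dropout
```

With fewer than one expected mutant copy per reaction, even a perfectly specific assay will sometimes report nothing, which is one way preanalytical differences alone can turn identical samples into discordant results.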

Conclusion:
Harmonizing preanalytical handling and selecting assays with adequate breadth and specificity are essential to deliver reproducible, clinically actionable liquid biopsy results in routine practice.

Reference:
van der Leest P, Rozendal P, Hinrichs J, van Noesel CJM, Zwaenepoel K, et al. External Quality Assessment on Molecular Tumor Profiling with Circulating Tumor DNA‑Based Methodologies Routinely Used in Clinical Pathology within the COIN Consortium. Clinical Chemistry. 2024;70(5):759–767. https://doi.org/10.1093/clinchem/hvae014

License:
This episode is based on an open-access article published under the Creative Commons Attribution 4.0 International License (CC BY 4.0) – https://creativecommons.org/licenses/by/4.0/

Support:
If you'd like to support Base by Base, you can make a one-time or monthly donation here: https://basebybase.castos.com/

Chapters

  • (00:00:00) - Chopping Through the DNA of Cancer
  • (00:03:18) - Commemorating the COIN Consortium
  • (00:03:56) - The challenge of standardization in cancer DNA testing
  • (00:07:56) - The EGFR Genotyping Study
  • (00:10:59) - Liquid DNA standardization

Episode Transcript

[00:00:00] Speaker A: Foreign. [00:00:14] Speaker B: Welcome to Base by Base, the PaperCast that brings genomics to you wherever you are. Today we're diving into, well, a really exciting area of personalized medicine, liquid biopsy. But it also comes with some potentially serious questions. We're talking about using circulating tumor DNA, ctDNA. You know, the stuff found in just a simple blood draw to diagnose and manage cancer. [00:00:38] Speaker A: It sounds revolutionary, and it is. [00:00:40] Speaker B: Right? But here's the big question, the really high-stakes one: if a patient's life-saving treatment depends completely on finding just one specific DNA variant floating around in their blood, how reliable is that test? Especially when different hospitals might be using. [00:00:54] Speaker A: Well, different machines, different procedures. That reliability is absolutely everything. I mean, the whole point of ctDNA testing, its real utility, it all hinges on two things. First, sensitivity. Can it actually find that tiny signal? [00:01:06] Speaker B: Like the needle in the haystack? [00:01:07] Speaker A: Exactly. But second, and this is crucial, can it accurately genotype the specific mutation? It's not enough to just say, yep, there's a mutation here; we need to know precisely which one. [00:01:17] Speaker B: Because that dictates the drug. [00:01:19] Speaker A: Precisely. That specific genetic difference tells you which targeted drug the patient should get. So what we're digging into today is a real-world look at how the, let's call it the current variation, maybe even chaos, in lab protocols is leading to dangerously inconsistent results in practice. [00:01:37] Speaker B: Okay, so let's really pin down the core problem this research looked at. We're talking about non-small cell lung cancer, NSCLC. Finding that exact genetic variant, it's like. [00:01:46] Speaker A: The map to the right medicine, right? [00:01:48] Speaker B: It is the map. [00:01:48] Speaker A: So picture this. Two expert labs, they get the exact same plasma sample from the same patient. Lab A figures it out, gives the precise actionable answer: okay, this patient has KRAS p.(G12C), they need drug X, the targeted therapy. Right? But lab B, maybe because of their testing method, they only report something vague like "KRAS G12 mutation found." Now that sounds similar, but clinically it's. [00:02:12] Speaker B: Unusable for that specific targeted therapy. You can't act on it in the same way. The patient's outcome could be completely different just based on how the test was run and reported. And this study really highlights how much that variation matters. I mean, the key finding, and why we're talking about this paper, is that even among these highly specialized expert Dutch labs, part of a consortium, no less, right? [00:02:36] Speaker A: Experienced labs, yeah. [00:02:37] Speaker B: Testing identical samples, nearly half of them failed to accurately identify some of the most critical actionable mutations, the ones needed for making those really precise treatment decisions. [00:02:49] Speaker A: And it's staggering, really, because what it shows is that these different protocols, they aren't just, you know, technical details for lab folks to worry about. They actually create different clinical realities for patients. [00:02:59] Speaker B: So standardization isn't just about making things neat and tidy. [00:03:02] Speaker A: Not at all. It's fundamentally a patient safety issue.
If we can't get the methods aligned, harmonize how we read these liquid biopsies, we genuinely risk denying patients the right treatment. The very treatment that could, you know, save or significantly extend their life. [00:03:18] Speaker B: Okay, before we go deeper, let's give credit where it's due. [00:03:21] Speaker A: Absolutely. [00:03:21] Speaker B: Today we celebrate the work of the COIN Consortium, which stands for ctDNA on the road to implementation in the Netherlands. They undertook this really vital external quality assessment, this EQA, to help us understand how to harmonize liquid biopsy testing and, well, how it's actually performing right now. [00:03:40] Speaker A: It's critical work. [00:03:41] Speaker B: Yeah. And we should specifically mention people like Paul van der Leest, Ed Schuuring and the University Medical Center Groningen. They really spearheaded this effort to look at the real-world picture. [00:03:50] Speaker A: And the COIN Consortium was really focused on, you know, getting this technology ready for routine clinical use. To appreciate the challenge they faced, let's just quickly set the stage technically for our listeners. When we say ctDNA, you mean cell-free DNA, right? [00:04:05] Speaker B: ccfDNA. Stuff shed by the tumor into the bloodstream. [00:04:08] Speaker A: Exactly. It's floating around in the blood plasma. And the big advantage is you collect it with just a simple blood draw. Minimally invasive. [00:04:14] Speaker B: And the potential uses are huge. Right. Finding molecular targets for drugs. [00:04:19] Speaker A: Yes. Predicting treatment response and also for monitoring. [00:04:22] Speaker B: Like minimal residual disease, MRD, looking for tiny traces of cancer after treatment. [00:04:28] Speaker A: That's a key application, yes. The clinical promise is immense. But despite all that potential, getting it widely used has been held back by exactly what this study looked at: the lack of harmonized methods, both in the analytical part in the lab and, even before that, the preanalytical steps. [00:04:45] Speaker B: What do you mean by preanalytical? Like how the blood is drawn? [00:04:48] Speaker A: Exactly. We already knew from other studies that there's huge variation there. Things like which blood collection tubes are used, how long the sample sits before processing, the specific kits used for extracting the DNA, even things like the final volume the DNA is dissolved in, the elution volume. [00:05:04] Speaker B: Okay, hang on. Why do details like extraction kits and elution volumes matter so much? Seems pretty technical. [00:05:09] Speaker A: Ah, but they have a direct impact on sensitivity. Think about it. You're starting with a tiny amount of ctDNA in the blood. If you use less plasma to start with, or if you extract the DNA and then dissolve it in a large volume of liquid, that's the elution, you're essentially diluting that already scarce ctDNA. [00:05:26] Speaker B: Okay, I see. So the concentration goes down. [00:05:28] Speaker A: Precisely. You end up with far fewer mutant copies at that variant allele frequency, the VAF, in the sample you actually test. And if your testing method isn't sensitive enough to pick up a very low VAF from so few copies, you just miss the mutation completely, or you might get an unreliable result. [00:05:42] Speaker B: Got it. So the COIN study wasn't just about knowing there's variation.
[00:05:46] Speaker A: No, they wanted to see if this known variation, all these different preanalytical and analytical methods combined, actually leads to a real, measurable difference in detecting mutations and, crucially, in accurately genotyping them. Getting the exact type right for those clinically important variants. [00:06:03] Speaker B: So how did they test this across different labs? [00:06:06] Speaker A: They ran a big external quality assessment, an EQA. 16 Dutch labs participated, covering pathology, oncology, clinical chemistry. So a good mix representing routine practice. [00:06:16] Speaker B: Okay, and what did they send these labs? [00:06:18] Speaker A: Right, the samples. Each lab got six identical sets of plasma aliquots, 4 milliliters each. Three of these were sort of manufactured reference samples. Artificial plasma spiked with known mutations at specific known VAFs. Some really low, down to 0.17%. [00:06:34] Speaker B: Okay, so a controlled test. [00:06:35] Speaker A: Yes. And the other three samples were the real challenge. They were actual patient-derived samples. Diagnostic leukapheresis, or DLA, plasma. These came from patients with metastatic NSCLC who were known to have specific actionable mutations. So more complex, real-world stuff. [00:06:52] Speaker B: And what were the labs asked to do? Just find any mutation? [00:06:55] Speaker A: No, they had specific targets. They had to test all six samples for mutations in key genes: BRAF exon 15, EGFR exons 18 to 21, and KRAS exons 2 and 3. And here's the key. They had to use whatever their standard routine ccfDNA workflow was. [00:07:13] Speaker B: Ah, so they weren't told how to test, just what to test for. [00:07:16] Speaker A: Exactly. And that's where the lack of standardization really came into sharp focus. It was kind of chaotic, methodologically speaking. [00:07:23] Speaker B: How chaotic? [00:07:23] Speaker A: Well, get this. Eight different DNA extraction methods were used across the 16 labs. Input plasma volumes varied, elution volumes varied, leading straight back to that dilution and sensitivity issue we talked about. And the testing methods themselves, all over the map. Some used highly specific single-target tests like droplet digital PCR, ddPCR. Others used small commercial panels like the cobas assays. And then several used different types of next-generation sequencing, NGS, approaches, often with their own custom setups. [00:07:52] Speaker B: Okay, so a real mix of technologies and protocols. Here's where it gets really interesting. Then what did they find? You'd expect maybe some variation, but. [00:08:00] Speaker A: Well, initially the results looked surprisingly good. Actually, almost misleadingly positive. 13 of the labs, that's 81%, achieved a 100% overall detection rate for the mutations they were set up to find. [00:08:13] Speaker B: Wow. 100% detection. That sounds great. High sensitivity. [00:08:15] Speaker A: It sounds great. It suggests, yes, high sensitivity for finding that a mutation is present. But, and this is the crucial but, that masked a massive problem with accuracy. [00:08:23] Speaker B: Accuracy in terms of which mutation exactly? [00:08:26] Speaker A: The specific genotyping. That ability to tell the difference between, say, one type of EGFR mutation and another. Or one KRAS G12 variant versus another. [00:08:35] Speaker B: Yeah. [00:08:35] Speaker A: When the study looked at clinical accuracy, getting the exact mutation right, the performance just plummeted. Many labs did badly.
Look at some of these figures. There's a specific EGFR deletion, p.(S752_I759del). Therapeutically relevant. 69% of the labs did not identify it accurately. Almost 70%. Wow. Another one, an EGFR insertion, p.(N771_H773dup), missed by half the labs. 50%. [00:09:02] Speaker B: Okay, that's concerning. What about KRAS? You mentioned G12C earlier being important. [00:09:06] Speaker A: Extremely important. That KRAS p.(G12C) mutation, yeah, it's the key that unlocks eligibility for new targeted drugs like sotorasib or adagrasib. Huge breakthrough drugs for that specific subset of patients. [00:09:17] Speaker B: And how did the labs do on that one? [00:09:18] Speaker A: Only accurately reported in 48% of the possible cases. [00:09:21] Speaker B: Less than half. [00:09:22] Speaker A: Less than half. Just think about the clinical impact of that. More than half the time, a patient's eligibility for a potentially life-extending targeted drug might have been missed or maybe even misreported, simply because of the lab's specific testing methods. [00:09:37] Speaker B: That's quite sobering. [00:09:38] Speaker A: It really is. So when they calculated the final performance scores, which considered accuracy and specificity, not just "did you find something?", only six labs out of the 16, just 38%, got a high score, over 0.90. [00:09:52] Speaker B: Only six labs performed really well. [00:09:54] Speaker A: And here's a telling detail. Five of those six high-performing labs used an NGS approach. It strongly suggested that the choice of method was directly linked to the accuracy problems. [00:10:04] Speaker B: Okay, so NGS seemed better for accuracy, but why did the other methods, like the ddPCR or the cobas tests, struggle so much with getting the specific genotype right? [00:10:14] Speaker A: It really comes down to how they're designed, often balanced against cost and speed. Those focused screening assays, the ddPCR or cobas types, often have lower performance scores in this context because they fundamentally couldn't distinguish the specific variant from other possible changes at the same spot, the same codon. [00:10:31] Speaker B: So they might say G12 but not G12C. [00:10:34] Speaker A: Exactly. They might detect something is altered at KRAS codon 12. But the test isn't designed to tell you if it's the actionable G12C or maybe G12D or G12V, which might require different approaches or mean the targeted drug won't work. They're often faster, maybe cheaper per target, but you sacrifice that specificity. [00:10:53] Speaker B: Right. So there's a trade-off. But you said NGS was better for specificity, for getting the exact genotype. The study mentioned a potential downside there too, though. Something about false positives. [00:11:04] Speaker A: Yes, that's the other side of the coin. While NGS methods were generally better at pinpointing the specific variant, the study did note they could be more prone to false-positive calls. [00:11:14] Speaker B: Why would that be? [00:11:14] Speaker A: Well, it's partly complexity. NGS looks at a much broader stretch of the genome, or at least many more targets simultaneously, compared to ddPCR. Sometimes the depth of sequencing coverage might be lower in certain regions. And crucially, the bioinformatics, the computational analysis used to filter out noise and identify true variants, is complex and can vary a lot between labs. [00:11:36] Speaker B: So it's like casting a wider net. [00:11:38] Speaker A: Kind of.
You cast a wider net hoping to get the precise mutation, but you also risk catching more noise, random sequencing errors, or other artifacts that the analysis pipeline might mistakenly call as a real low-level tumor mutation when it's not. [00:11:52] Speaker B: And a false positive has real consequences too, right? It's not just a technical error. [00:11:57] Speaker A: Absolutely not. It could lead to unnecessary anxiety for the patient, maybe more invasive follow-up tests, or even potentially starting or switching to an incorrect therapy based on a mutation that isn't actually there. [00:12:09] Speaker B: Okay, so the clinical relevance here is massive. Can you give another example? Maybe with EGFR? [00:12:13] Speaker A: Sure. In NSCLC, getting the EGFR details right is critical. For instance, certain insertions in exon 20 of EGFR need a very different treatment, like the antibody amivantamab. That's completely different from the more common EGFR mutations like exon 19 deletions or L858R, which are often treated first with pills like osimertinib. [00:12:32] Speaker B: So if the report is wrong, if. [00:12:34] Speaker A: The report is inaccurate, either because it misses the specific type of mutation or because it calls a false positive, the patient could end up on a therapy that won't work for them. Or miss out on one that would. It directly impacts treatment effectiveness. [00:12:50] Speaker B: You know, it strikes me that this study was done in the Netherlands, a country with a really advanced healthcare system. Lots of genomic expertise. [00:12:57] Speaker A: That's right. [00:12:58] Speaker B: If they found this much variation and lack of standardization among their expert labs, what does that suggest about, say, global implementation? Is it likely even more variable elsewhere? [00:13:10] Speaker A: That's a very pertinent question. It certainly underscores the scale of the challenge. The technology itself is powerful, but the processes around it, the standardization, clearly need to mature significantly. [00:13:21] Speaker B: Are there any other caveats or limitations to this study? [00:13:25] Speaker A: We should mention one important one. The authors themselves pointed out that the patient samples they used, the DLA samples, actually had relatively high VAFs. The mutation levels were generally above 1%. Well, the really tough challenge in ctDNA testing is, especially when you think about using it for very early cancer detection or for monitoring that minimal residual disease after. [00:13:46] Speaker B: Treatment, where you're looking for tiny, tiny traces. [00:13:48] Speaker A: Exactly. In those scenarios, you're often hunting for mutations at VAFs well below 0.1%. This study didn't really push into that ultra-low territory. So the standardization hurdle for those most sensitive applications is probably even higher than what we saw here. [00:14:03] Speaker B: Right. So connecting all this back to the big picture, the main message seems pretty clear. [00:14:08] Speaker A: I think so. Standardization is key, both for the preanalytical steps, how you handle the blood, and the analytical workflows, how you actually test the DNA. It seems mandatory if we want to get to a point where liquid biopsy testing is truly reproducible and trustworthy in everyday clinical practice everywhere. [00:14:25] Speaker B: Without it, the results are just too inconsistent. [00:14:28] Speaker A: Yes.
Without that standardization, the incredible promise of liquid biopsy risks being undermined by this procedural variability. It could create real inequities in care, where the treatment a patient gets depends partly on which lab happened to run their test. [00:14:44] Speaker B: So let's try to boil down the central insight here. What's the main take-home message? [00:14:49] Speaker A: I'd say it's this. We've got widespread variation in how clinical labs are doing ctDNA testing right now. And that variation is a serious problem because it directly impacts the accuracy needed for making precise treatment decisions. Even if labs are generally good at detecting that a mutation is there, they aren't. [00:15:05] Speaker B: Always good at saying exactly which mutation it is. [00:15:07] Speaker A: Precisely. And that accuracy gap needs to be closed. [00:15:10] Speaker B: Yep. [00:15:11] Speaker A: So prioritizing standardization of workflows, and how we validate these tests, is absolutely crucial if liquid biopsy is going to live up to its potential for all patients. [00:15:19] Speaker B: Which leads to a final thought, perhaps: if this level of inconsistency exists even within a specialized Dutch consortium, what does that really mean for the idea of a global standard of care for liquid biopsy right now? [00:15:31] Speaker A: That's the provocative question, isn't it? It suggests we still have some significant work ahead to ensure that identical samples truly yield results that lead down the same optimal treatment path, no matter where the test is performed. [00:15:45] Speaker B: This episode was based on an open-access article under the CC BY 4.0 license. You can find a direct link to the paper and the license in our episode description. If you enjoyed this, follow or subscribe in your podcast app and leave a five-star rating. If you'd like to support our work, use the donation link in the description. Thanks for listening, and join us next time as we explore more science, base by base.
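
As a companion to the genotyping discussion in the transcript, here is a minimal, purely illustrative Python sketch (not the consortium's scoring or any lab's reporting logic) of why a codon-level screening call and a variant-level genotype are not clinically interchangeable; the example variants and wording are assumptions for illustration only.

```python
# Illustrative only: example KRAS codon 12 reports and the level of action they support.
# The mapping below is a teaching example, not clinical decision software.

KRAS_CODON_12_EXAMPLES = {
    "KRAS p.(G12C)": "matches the target of G12C-specific inhibitors such as sotorasib or adagrasib",
    "KRAS p.(G12D)": "does not match G12C-specific inhibitors",
    "KRAS p.(G12V)": "does not match G12C-specific inhibitors",
}


def interpret(reported_result: str) -> str:
    """Map a report string to the kind of conclusion it can support."""
    if reported_result in KRAS_CODON_12_EXAMPLES:
        # Variant-level genotype: specific enough to check drug eligibility.
        return KRAS_CODON_12_EXAMPLES[reported_result]
    if reported_result == "KRAS codon 12 mutation detected":
        # Codon-level screening call: a mutation is present, but which one is unknown.
        return "not actionable as reported; variant-level confirmation is needed"
    return "no KRAS codon 12 variant reported"


print(interpret("KRAS p.(G12C)"))
print(interpret("KRAS codon 12 mutation detected"))
```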
