
Artificial Intelligence in Healthcare: Can AI Solve the Covid Testing Crisis?

Artificial Intelligence in Healthcare: Summary

Recently, test pooling has been suggested by Frankfurt Goethe University. This is a great example of the use of artificial intelligence in healthcare. Pooling involves testing a group of nasal swabs, all in a single test. If the test is negative, this means all the samples in the group (pool) were negative, sparing a lot of unnecessary individual tests. Unfortunately, the Frankfurt group didn’t publish any recommended pool sizes, so anyone using this approach would have to guess at the pool size, which obviously isn’t optimal. To solve this problem, we ran some simulations to determine the ideal strategies for using a pooled testing approach. We found that the optimal strategy varies by context, as shown below:

What this table essentially says is that for Italy, for example, where the expected infection rate among the full population is 0.33%, the optimal strategy for testing 100 people (the cohort size), with a maximum of 16 samples allowed per pool, is to test:

  • First in groups of 15
  • The remaining people in the positive groups, regrouped in groups of 4
  • The remaining people in the positive groups, regrouped in groups of 2
  • The remaining people, individually

So, a 4-step strategy of (15, 4, 2, 1), with which you should expect, on average, to find all the infections using only about 10 tests. That means roughly 10x fewer tests used, or a 10x increase in testing capacity!

Background: Quarantine Works, But...

As we have seen in country after country, one of the most effective ways to slow down or halt the pandemic is to quarantine sick people.

The problem with quarantines, though, is that you need to know which people are infected. And to know that, you need to do tests — lots of tests! However, as we all know, tests are a limited resource, and most countries have fewer tests than they need.

Romania, for example, had performed only 113,336 tests for a population of 19.6 million, a testing rate of roughly 0.585%, with a total capacity of about 8,000 tests per day. We see similar problems in the United States, the United Kingdom, and Italy.

This is clearly insufficient for a comprehensive testing scenario that would ensure the success of a quarantine effort.

Increased Testing through Pooling

In the test pooling method suggested by Frankfurt Goethe University, a group of nasal samples is tested in one single test. If the test is negative, this means all the samples in the group (pool) were negative, saving a lot of unnecessary individual tests. By testing this way, you can reduce the number of tests needed by at least an order of magnitude, or, to put it another way, increase the testing capacity by an order of magnitude. This works because the following assumptions hold:

  • There are a lot more healthy people (who would have negative samples) than infected people (with positive samples)
  • A single positive sample is sufficient to make the whole pooled test positive

To give you a concrete example of how this might work, look at the image below:

We start with 20 suspect patients and pool them in groups of 4. For the positive (infected) pools, we test each sample individually. So instead of 20 tests (the standard approach of testing everyone separately), we’ve only used 13 tests, a 35% reduction in the number of tests needed! Also, note that this example has a 10% infection ratio, which is a bit unrealistic at the moment. If we only had one infected patient (equivalent to a 5% infection ratio), we would only need 9 tests, a 55% reduction.
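To make the arithmetic concrete, here is a minimal Python sketch of this two-step counting. It is our own illustration, not code from the Frankfurt group; the helper name count_tests is made up.

```python
# A minimal sketch of the two-step pooling example above: 20 patients, pools of 4,
# then every member of a positive pool is retested individually.

def count_tests(samples, pool_size):
    """Count tests for one pooling pass plus individual retests of positive pools."""
    pools = [samples[i:i + pool_size] for i in range(0, len(samples), pool_size)]
    pooled_tests = len(pools)                       # one test per pool
    retests = sum(len(p) for p in pools if any(p))  # every member of a positive pool
    return pooled_tests + retests

# True = infected sample. Two infections in different pools -> 5 + 2*4 = 13 tests.
cohort = [False] * 20
cohort[3], cohort[11] = True, True
print(count_tests(cohort, pool_size=4))   # 13, versus 20 individual tests
```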

Optimizing the Pooling Strategy

As the above example shows, the pool size has a big impact on the total number of tests conducted:

  • If the pool is too large (say 10 in our example), most (all, in our case) of the pooled tests come back positive, essentially wasting those tests because only a few pools are eliminated.
  • If the pool size is too small (say 2), we do far too many tests (in our example we would do 14 tests in total, compared to 13 for a pool size of 4).

Generally speaking, the optimal pool size depends on both the cohort size (the number of people tested) and the expected infection ratio. The question is thus: what is the optimal pool size, given both of these parameters?
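For a single pooling pass followed by individual retests of positive pools (the classic Dorfman scheme), the expected number of tests per person for pool size k and infection probability p is 1/k + 1 − (1 − p)^k. The quick sketch below, again our own illustration rather than anything from the study, shows how the best single-pass pool size shifts with the infection ratio:

```python
# Expected tests per person for one pooling pass of size k followed by
# individual retests of positive pools (Dorfman pooling), given infection probability p.
def expected_tests_per_person(p, k):
    return 1 / k + 1 - (1 - p) ** k   # the pooled test, plus the chance the pool needs retesting

for p in (0.10, 0.05, 0.0033):        # infection ratios mentioned in the article
    best_k = min(range(2, 17), key=lambda k: expected_tests_per_person(p, k))
    print(p, best_k, round(100 * expected_tests_per_person(p, best_k), 1))
```

For an infection ratio as low as 0.33%, the best single-pass pool size already pushes against the 16-sample limit, which hints that a single shallow pass may not be the end of the story.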

To make it even more generic, it may not be optimal to test using a shallow strategy. If you look back at the example, we had a strategy with two steps:

  • Test in groups of 4
  • Test in groups of 1

For a specific infection ratio and cohort size, the optimal strategy might require 3 or more steps; for example, testing in pools of 10, then in pools of 3, then individually (in pools of 1). So we can rephrase our goal as: given a certain context (with a known expected infection ratio and cohort size), what is the overall best pooled testing strategy that finds all the infected people with the minimum number of tests?

Results

I’ll first present the results, to spare some readers from having to search for them at the end, and then continue with the methodology and some further discussion.

What this table essentially says is that for the case of Italy, for example, where the expected infection rate among the full population is 0.33%, the optimal strategy for testing 100 people (the cohort size), with a maximum of 16 samples allowed per pool, is to test:

  • First in groups of 15
  • The remaining people in the positive groups, regrouped in groups of 4
  • The remaining people in the positive groups, regrouped in groups of 2
  • The remaining people, individually

So, a 4-step strategy of (15, 4, 2, 1), with which you should expect, on average, to find all the infections using only 9.47 tests. That means an approximately 10x increase in testing capacity!

Methodology

Monte Carlo simulations: Evaluating a strategy

We’ve used Monte Carlo simulations to estimate the expected average number of tests for a specific strategy. Suppose we have a fixed strategy “S”. For each setup we randomly generated 1,000 arrangements of test samples, drawn from the known infection ratio for that setup. So for the Spain setup, where we have an expected infection ratio of 0.33%, we generated 1,000 random boolean arrays in which the “True” values (encoding infections) appear with a probability of 0.33%, while the “False” values (encoding the healthy) appear with a probability of 1 – 0.33%. On each of the 1,000 cases we applied the strategy “S” and recorded the number of tests needed to find all the infections.

Note: To make sure every infection is correctly identified, each strategy needs to end in a pool size of 1.
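To illustrate the idea, here is a minimal Python sketch of such an evaluation. It is our own reconstruction; the names apply_strategy and evaluate_strategy are assumptions, not the original code.

```python
import random

def apply_strategy(samples, strategy):
    """Count the tests needed to find every infection when pooling with the
    given sequence of pool sizes (the last size must be 1)."""
    tests, remaining = 0, list(samples)
    for pool_size in strategy:
        pools = [remaining[i:i + pool_size] for i in range(0, len(remaining), pool_size)]
        tests += len(pools)                                # one test per pool
        # only members of positive pools move on to the next, smaller pool size
        remaining = [s for pool in pools if any(pool) for s in pool]
    return tests

def evaluate_strategy(strategy, cohort_size=100, infection_ratio=0.0033, runs=1000):
    """Monte Carlo estimate: average number of tests over `runs` random cohorts."""
    total = 0
    for _ in range(runs):
        cohort = [random.random() < infection_ratio for _ in range(cohort_size)]
        total += apply_strategy(cohort, strategy)
    return total / runs

print(evaluate_strategy((15, 4, 2, 1)))   # roughly 9-10 tests for the 0.33% setup
```

Because the estimate is an average over random cohorts, it fluctuates slightly from run to run, which is why the results table reports values such as 9.47 rather than whole numbers.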

Genetic algorithm: Finding good strategies

We’ve set the maximum number of possible steps in a strategy (the number of pooling steps) to be 5, where, as noted above, the last one is always a size-1 pool. As such, the search space of all possible solutions is of size 2^25. Doing a brute-force search over it, in conjunction with the Monte Carlo simulations and for multiple configurations and setups, would take a very long time to produce results. Instead, we optimized the problem using a genetic algorithm:

  • It uses a proportional mix of elitism and tournament selection for the crossover step
  • A 5% mutation chance in the mutation step
  • The fitness value is the expected average number of tests resulting from the Monte Carlo simulation
  • The population had 100 individuals
  • The search was stopped after 20 generations in which no new best solution could be found (early stopping with a patience of 20 generations)

Each setup required on average 12 generations to find the best solution, so along with the early stopping condition, that meant 32 generations on average until it finished. This was a dramatic improvement over the 2^25 evaluations that a brute-force search would have required.
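For completeness, here is a compressed sketch of how such a genetic search could be wired up. This is our own illustration with assumed names and operators (the exact crossover and mutation operators used in the study were not published); it plugs in a fitness function like the evaluate_strategy estimate from the previous sketch.

```python
import random

MAX_POOL, MAX_STEPS = 16, 5   # assumed limits: pools of at most 16 samples, at most 5 steps

def random_strategy():
    # strictly decreasing pool sizes, always ending with individual tests (pool size 1)
    n_steps = random.randint(1, MAX_STEPS - 1)
    sizes = sorted(random.sample(range(2, MAX_POOL + 1), n_steps), reverse=True)
    return tuple(sizes) + (1,)

def crossover(a, b):
    # merge the parents' pool sizes into one valid, decreasing strategy
    merged = list(set(a[:-1]) | set(b[:-1]))
    keep = random.sample(merged, min(len(merged), MAX_STEPS - 1))
    return tuple(sorted(keep, reverse=True)) + (1,)

def mutate(strategy, rate=0.05):
    # with a 5% chance, replace the strategy with a fresh random one
    return random_strategy() if random.random() < rate else strategy

def genetic_search(fitness, pop_size=100, n_elite=10, patience=20):
    population = [random_strategy() for _ in range(pop_size)]
    best, best_score, stale = None, float("inf"), 0
    while stale < patience:
        ranked = sorted(population, key=fitness)
        if fitness(ranked[0]) < best_score:
            best, best_score, stale = ranked[0], fitness(ranked[0]), 0
        else:
            stale += 1
        next_gen = ranked[:n_elite]                      # elitism
        while len(next_gen) < pop_size:                  # tournament selection + crossover
            a = min(random.sample(ranked, 3), key=fitness)
            b = min(random.sample(ranked, 3), key=fitness)
            next_gen.append(mutate(crossover(a, b)))
        population = next_gen
    return best, best_score

# The fitness is the Monte Carlo estimate from the previous sketch, ideally wrapped in
# functools.lru_cache so the same strategy is not re-simulated over and over:
# best, score = genetic_search(evaluate_strategy)
```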

Discussions

There are a number of limitations and possible improvements to the way that we’ve modeled the testing problem:

First of all, we do not account for how pooled tests behave when one of the individuals, tested separately, would be an inconclusive case (the worst case being that on a retest they turn out to be positive). We suspect (hope) that a single inconclusive sample in a pool would make the whole pool inconclusive as well; otherwise this pooling method might not work, regardless of the strategy.

Second, the infection ratios above might not be accurate. We’ve only done some limited computations using the data provided by the Johns Hopkins COVID-19 Dashboard. If some numbers are inaccurate, or you’d like to include other cases as well, we’d be more than happy to rerun the simulations and update the table above. Just get in touch and let us know your thoughts!

This article originally appeared on www.clungu.com.
