As we have seen, in order to study the spread of infectious diseases, epidemiologists use models that make various simplifying assumptions. In particular, they typically assume a homogeneous mixing population, which means that contacts between people are completely random, so anyone who is infected is equally likely to infect anyone else. However, this assumption is deeply unrealistic, because in the real world transmission occurs in a highly structured population and contacts are not random. If you are infected, the probability that you will infect most people in the population is effectively zero, because you will never have any contact with them. Of course, you could still infect some of them indirectly, by starting a chain of infections that eventually reaches them, but for most people the probability that this will happen is infinitesimal. It is much higher for the people you interact with often, such as your colleagues, friends and family members, and for the people who frequently interact with them. In reality, the virus doesn’t spread in a homogeneous mixing population, but in a highly structured one where each individual has a different pattern of interactions with different people. How the virus spreads depends on who interacts with whom, how often they interact and what kind of interaction they have, since those facts determine which chains of infection can exist and how likely each of them is, depending on where the virus starts in the population.
Thus, while standard epidemiological models represent the population as a collection of particles that interact randomly with each other, it’s better seen as a complex network whose nodes are individuals and whose edges represent potential interactions between them that could result in transmission. Each edge has a weight indicating how easily transmission can occur along it if one of the individuals it connects happens to be infectious, which is determined by the frequency and nature of the contacts between them. Infectious disease epidemiologists have produced a voluminous literature on models in which a virus spreads on networks of this kind, so it’s not as if they didn’t know that real epidemics don’t spread in a homogeneous mixing population, or hadn’t studied how population structure can affect transmission. But this literature had essentially no effect on applied work during the pandemic, perhaps because the kind of data that would be necessary to model real epidemics in that way is almost never available. Yet I think that population structure could hold the key to the mystery I identified above, namely that the effective reproduction number often undergoes large fluctuations that, as far as we can tell, can’t be explained by changes in people’s behavior. What I’m proposing is that we can solve this mystery by postulating that the network on which the epidemic spreads has what network scientists call “community structure”, meaning that it can be divided into subnetworks whose nodes are densely connected internally while the subnetworks themselves are only loosely connected to each other.
Philippe Lemoine
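To make the quoted idea concrete, here is a minimal sketch of an SIR-type epidemic spreading on a network with community structure. This is my own illustration, not Lemoine’s actual model: it uses networkx’s relaxed caveman graph to build densely knit communities joined by a few rewired edges, and all parameter values are made up for the example.

```python
# A minimal sketch (an illustration, not Lemoine's actual model) of an
# epidemic spreading on a network with community structure.
import random
import networkx as nx

random.seed(42)

# 50 communities of 20 nodes each; each edge is rewired with probability
# 0.05, so communities are dense inside and only loosely tied together.
G = nx.relaxed_caveman_graph(50, 20, 0.05, seed=42)

BETA = 0.08   # per-contact, per-step transmission probability (assumed)
GAMMA = 0.2   # per-step recovery probability (assumed)
# For simplicity every edge gets the same weight (BETA); the quoted
# passage would instead attach a per-edge transmission probability.

state = {n: "S" for n in G}  # S = susceptible, I = infected, R = recovered
state[random.choice(list(G))] = "I"

for step in range(100):
    infected = [n for n, s in state.items() if s == "I"]
    if not infected:
        break
    for n in infected:
        # Transmission can only travel along existing edges, so the
        # epidemic tends to burn through one community before a rare
        # inter-community edge seeds the next one.
        for nb in G.neighbors(n):
            if state[nb] == "S" and random.random() < BETA:
                state[nb] = "I"
        if random.random() < GAMMA:
            state[n] = "R"
    print(step, sum(s == "I" for s in state.values()))
```

Running something like this, the infection typically saturates one community, stalls, then flares up again when an inter-community edge seeds a fresh one, producing exactly the kind of fluctuations in the effective reproduction number the quoted passage is about, without any change in individual behavior.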
Interesting study about a conundrum that has puzzled many since the start of the pandemic, though few have offered plausible solutions: why does transmission fluctuate so much even in the absence of clear measures or changes in human behavior? There are multiple examples of this, and the author mentions some of the better-known ones, such as Florida and the Indian Delta wave, which subsided almost as abruptly as it started. The recent surge in Romania also ended relatively fast, despite scarce measures from the authorities – as before, I mostly attribute these swings to school openings, and the current reduction in cases comes after schools were closed again all over the country.
At first glance, this idea is reminiscent of the superspreading talk from the start of the pandemic, but backed up by complex models and numerical simulations. Some of its conclusions look intuitive and plausible – I for one was wary of the alarmist news about the increased transmissibility of each new viral mutation, and this model offers a neat explanation of how such an effect can arise artificially even if the new variant doesn’t have a significant transmissibility advantage.
On the other hand, as I explained in my post about Delta’s transmissibility advantage, there is nothing particularly surprising about this variability if the population is structured in networks that are internally well-connected but only loosely connected to each other. The idea is that, if different variants don’t spread in the same networks and some of them spread in networks where the prevalence of immunity is low because those networks had been relatively spared so far, a variant’s transmission advantage over the other strains of the virus can significantly overestimate its transmissibility advantage. In that case, it will spread much faster relative to other variants than it would have if other things were equal, because it’s spreading in networks where the prevalence of immunity is lower than in the networks where the other variants are spreading, so other things are precisely not equal.
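To see how this works numerically, here is a toy calculation (my own numbers, not from the post) in which two variants have exactly the same intrinsic transmissibility but circulate in sub-networks with different levels of immunity:

```python
# Toy calculation: two variants with the SAME intrinsic transmissibility,
# spreading in different sub-networks. All numbers are made up.
R0 = 3.0            # intrinsic reproduction number of both variants
immunity_old = 0.5  # immunity where the resident variant circulates
immunity_new = 0.1  # immunity where the new variant happens to spread

R_eff_old = R0 * (1 - immunity_old)  # 1.5
R_eff_new = R0 * (1 - immunity_new)  # 2.7

# Naively comparing effective reproduction numbers would suggest an
# ~80% "transmissibility advantage" that doesn't actually exist.
print(R_eff_new / R_eff_old)  # 1.8
```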
Unfortunately, as the author rightly notes, even though his model can reproduce various patterns observed so far, real-world data about population structure is not available in most cases. Without validating its assumptions against actual data, the model’s applicability remains limited, and it certainly cannot be used to forecast future trends any better than traditional models. Perhaps some of this data could have been gathered through the proximity apps used for contact tracing, but Apple and Google decided at the worst possible time to uphold strict privacy principles that neither of them observes elsewhere (not surprisingly, considering they were never going to make money out of this proximity-tracing protocol).
Unfortunately, the state of pandemic modeling seems very poor. In the current crisis, and for future pandemics, it would be helpful to have accurate models in order to better evaluate different public health measures and enact the most efficient mix – in effect, something similar to weather forecasts for virus transmission. Except that weather modeling benefits from large historical and live data sets of temperature, atmospheric pressure, humidity and all the other factors that shape weather systems. That data is almost entirely missing for pandemic forecasting, and without it no model, no matter how clever or complex, can achieve accuracy.
3/ Along with the temperature drop, absolute humidity of the air also dropped. It dropped almost by half within a couple of days, it dropped to the level where it had not been since the spring. (data: https://t.co/VfJEbnVIRZ, https://t.co/WWOWxHcUI0)
— Kaur Parve (@kparve) November 21, 2021
Another factor that is rarely discussed is the relative humidity of the air we breathe, with dry air impeding the protective mechanisms of the mucus in the upper respiratory tract. A recent Twitter thread (quoted above) linked this to the fast increase in cases in Central Europe in the latter half of October, but it still doesn’t explain how the pandemic can evolve in a different direction in Romania and Bulgaria, which are relatively close to each other and moving into the same cold season.