In this session, we will discuss the definition of disease severity and some difficulties associated with severity estimation. The severity of an infectious disease refers to the risk of severe complications among infected cases. For example, the case fatality risk, or CFR, is the proportion of cases that die from the disease, where cases may be defined as infections that are hospitalized, medically attended, or detectable by laboratory testing such as serology.

The so-called clinical iceberg model is a useful concept for understanding the role of transmissibility and severity in the spread and control of epidemics. The entire iceberg corresponds to all infected cases that are capable of spreading the disease. The portion submerged under the water represents infections that are not directly observable by the healthcare system. Infections are arranged within the iceberg from top to bottom in descending order of clinical seriousness. The disease burden observed in the healthcare system, which captures only the more severe infections, may therefore be only the tip of the iceberg for diseases with a relatively low risk of severe complications per infection. This is the case for polio, because 90 percent of polio infections are subclinical. H5N1 avian influenza, for which the CFR has been estimated to be above 50 percent and the proportion of subclinical infections is believed to be very small, is at the opposite extreme.

The severity of an infectious disease may depend on host factors, including age, comorbidities, socioeconomic status, and genetics. For example, among people infected with the pandemic H1N1 influenza virus in 2009, the risk of ICU admission and death increased with age. Severity may also depend on characteristics of the pathogen. For example, HIV-1 causes more severe disease than HIV-2. Finally, severity can also depend on environmental factors, such as quality of healthcare, climatic factors, and water sanitation. For example, while the CFR of measles is only around 0.1 percent in developed countries, it can reach ten percent in populations with high levels of malnutrition and a lack of adequate healthcare.

During an epidemic, especially of an emerging infectious disease, a robust estimate of severity is crucial for public health planning and effective risk communication with the public. For example, before 2009, in anticipation of the next influenza pandemic, the US Department of Health and Human Services devised the Pandemic Severity Index to categorize influenza pandemics based on the CFR. Identifying risk factors for a higher CFR, for example age and comorbidities, can guide prioritization of high-risk groups for limited supplies of vaccines and treatments. A robust estimate of the CFR can also help public health policymakers communicate the health benefits and risks of vaccination to the public more effectively. At the 2011 World Health Assembly, the International Health Regulations Review Committee presented an assessment of the global response to the 2009 influenza pandemic and highlighted "the absence of a consistent, measurable and understandable depiction of severity of the pandemic" as one of the major shortcomings. This high-profile report reminded the world once again of the pivotal role of severity estimates in epidemic response. Although the definition of severity is easy to understand, real-time estimation of severity requires careful thinking and analysis.
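To make the definition concrete, here is a minimal sketch in Python of the CFR as a simple proportion of deaths among observed cases. The function name and the counts are illustrative only, not taken from the lecture or from any real surveillance data.

```python
def case_fatality_risk(deaths: int, cases: int) -> float:
    """Naive CFR: proportion of observed cases that have died from the disease."""
    if cases == 0:
        raise ValueError("Need at least one observed case to estimate the CFR.")
    return deaths / cases

# Hypothetical example: 50 deaths among 1,250 laboratory-confirmed cases.
print(f"CFR = {case_fatality_risk(50, 1250):.1%}")  # -> CFR = 4.0%
```

As the next part of the session shows, this simple ratio can be misleading when it is computed while an epidemic is still ongoing.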
For example, on March 25th, 2003, when the SARS epidemic was still in its early stage, the WHO estimated the CFR of SARS to be around four percent, obtained by dividing the total number of people who had died from SARS up to that date by the total number of SARS cases observed up to that date. Around two weeks later, on April 11th, 2003, the WHO used the same method and estimated that the CFR was still around four percent. However, after the SARS epidemic was officially over, the observed final CFR was around ten percent, roughly two and a half times the early estimates.

At first glance, there were two possible explanations for the discrepancy. The first possibility was that the CFR was increasing as the epidemic progressed. If this were true, it would be alarming, because it would suggest that the SARS virus had evolved to become more lethal within a short period of time. The second possibility was that the method used for estimating the CFR was not robust. Upon closer examination, scientists quickly recognized that the method did not take into account the fact that, during the early epidemic stages, a substantial proportion of SARS cases were still hospitalized, and it was not yet known whether they would eventually recover or die from the disease. These patients were therefore included in the denominator but not in the numerator of the CFR estimate, which was the reason the early CFR estimates were substantially smaller than the final observed CFR.

A simple way to correct for this so-called censoring effect is to disregard cases that had neither recovered nor died from the disease at the time the CFR was estimated, that is, to divide the cumulative number of deaths by the cumulative number of deaths plus recoveries. Researchers from the UK and Hong Kong showed that the CFR estimate obtained with this method remained fairly constant, and close to the observed final CFR of ten percent, over the course of the SARS epidemic in Hong Kong (a numerical sketch of the two estimators is given at the end of this section). However, this method did not make full use of all the available data. For example, the duration for which a patient had been hospitalized might have some predictive power for the risk of death. More sophisticated statistical methods have since been developed to make full use of this information when estimating the CFR in real time.

To summarize, in this session we have discussed the definition of disease severity and some difficulties associated with severity estimation.
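As a closing numerical illustration of the two estimators discussed above, the Python sketch below contrasts the naive deaths-divided-by-cases ratio with the simple censoring correction that restricts attention to cases whose outcomes are already known. The cumulative counts are invented purely to mimic the pattern described in the lecture; they are not actual SARS surveillance data.

```python
# Hypothetical cumulative counts at successive time points during an epidemic
# (invented numbers for illustration; not actual SARS surveillance data).
snapshots = [
    # (cumulative cases, cumulative deaths, cumulative recoveries)
    (400, 15, 135),
    (900, 40, 360),
    (1600, 100, 900),
    (2000, 200, 1800),  # epidemic essentially over: all outcomes known
]

for cases, deaths, recoveries in snapshots:
    naive = deaths / cases                      # unresolved cases inflate the denominator
    corrected = deaths / (deaths + recoveries)  # restrict to cases with known outcomes
    unresolved = cases - deaths - recoveries
    print(f"unresolved={unresolved:4d}  naive CFR={naive:5.1%}  corrected CFR={corrected:5.1%}")
```

With these made-up counts, the naive estimate starts near four percent and only converges to ten percent once all outcomes are known, whereas the censoring-corrected estimate stays near ten percent throughout, echoing the SARS experience described above.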