Missing Data (Methods Discussion Group 11/26/2016)

Handling missing data could be an entire course in itself, but Gabrielle Simoneau distilled the key tenets into one hour on Friday.  In the context of mice DNA data, she first reminded us of missing data assumptions. We then discussed single and multiple imputation, inverse probability of censoring weighting, and finally touched on a complex case study that made all the methods seem inadequate (and still finished on time!).

The form of missing data dictates the methods available to address it.  The best-case scenario is ‘missing completely at random’: the missingness is unrelated to the exposure or outcome, and cases with missing data can simply be dropped.  Larger quantities may still call for imputation, however, because discarding cases reduces your power.  The next-best scenario is ‘missing at random’: the missing values are needed to identify the effect, but they can be predicted from other observed variables.  The doomsday scenario is ‘missing not at random’: the missing values are associated with the exposure and outcome, yet cannot be predicted from the observed dataset. Resorting to population or literature-based values could be an option, but the methods below cannot be used as described.

Other than ignoring the problem, single imputation is the easiest way to handle missing (completely) at random data.  We choose or predict a value, substitute it in, then estimate our effect of interest. For example, we can impute the mean value of variable X for all the missing X values. However, like all things in life, there is no free lunch.  There are two major problems here: the mean might be a poor guess at each case’s true X value, and we don’t account for the added uncertainty created by our ‘invented’ values. Hence, multiple imputation.
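
As a toy illustration, mean imputation in R can be as simple as the sketch below (the data frame and all values are made up); note that it carries exactly the two problems described above.

#a minimal single (mean) imputation sketch in R -- the data are hypothetical
dat <- data.frame(x = c(2.1, NA, 3.4, NA, 5.0, 4.2),
                  y = c(0, 1, 0, 1, 1, 0))

#substitute the observed mean of x for every missing x
dat$x_imp <- ifelse(is.na(dat$x), mean(dat$x, na.rm = TRUE), dat$x)

#estimate the effect of interest on the 'completed' data
summary(glm(y ~ x_imp, data = dat, family = binomial))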

Multiple imputation is a more complex, but more valid, way of handling missing (completely) at random data.  Instead of imputing a single value for each missing value, we generate a series of datasets, each of which imputes slightly different values of X based on the observed data in the original dataset.  We then estimate the effect of interest in each of these datasets.  The final estimate is an average of the effects estimated across the imputed datasets, and its variance accounts for the uncertainty of our imputed values.   Making the method even more useful, we can impute several variables with missing values at the same time. Multiple Imputation by Chained Equations (MICE) packages in statistical software implement this.
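
A minimal sketch of the workflow with the mice package in R might look like the following (the data frame ‘dat’ and the variables y, x and conf are hypothetical):

#multiple imputation with the mice package -- data frame and variables are hypothetical
library(mice)

#generate 5 imputed datasets, each filling in missing values from the observed data
imp <- mice(dat, m = 5, seed = 2016)

#fit the analysis model in each imputed dataset
fit <- with(imp, glm(y ~ x + conf, family = binomial))

#pool the estimates; the pooled variance reflects the extra uncertainty from imputation
summary(pool(fit))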

Despite the promise of multiple imputation, it is a little sketchy to start imputing things like your actual outcome or exposure variable, since these define the effect of interest.  Enter inverse probability of censoring weighting, which avoids imputation altogether and re-constructs a complete dataset using weights. The observed variables are used to predict each case’s probability of having its value observed (for example, of having a recorded exposure or outcome), and the inverses of these probabilities are then used as weights in the final analysis. Unlike multiple imputation, this method does not work well when several variables have missing values, because we might not have enough information to generate sensible predicted probabilities. So it is best used when the missing values are concentrated in one variable and imputation is undesirable.
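
A minimal sketch of this idea in R, assuming a hypothetical data frame ‘dat’ with a partly missing outcome y and fully observed x and conf, might look like:

#inverse probability of censoring weighting sketch -- data frame and variables are hypothetical
dat$observed <- as.numeric(!is.na(dat$y))

#model the probability of being observed using the fully observed covariates
pmod <- glm(observed ~ x + conf, data = dat, family = binomial)
dat$p_obs <- predict(pmod, type = "response")

#each complete case is weighted by the inverse of its probability of being observed
dat$w <- 1 / dat$p_obs

#weighted analysis of complete cases, re-weighted to represent the full sample
#(quasibinomial avoids warnings about non-integer weights)
summary(glm(y ~ x, data = dat[dat$observed == 1, ],
            family = quasibinomial, weights = w))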

Ready for the complex case study where all the methods above are inadequate?  In a poorly designed trial, patients were randomized to one of 3+ starting treatments, then re-randomized up to three times to a choice of 3+ drugs depending on the success of each treatment.  The final dataset had missing follow-up values in each randomization cycle and treatment trajectory.  This is complex because the missing values depend not only on other variables in the same randomization round, but also on individual patients’ previous values. There are also very few patients in each treatment trajectory, since there were so many possible courses, limiting the information available to predict anything.  In the end, some combination of all the above methods was used. But the lesson: all the methods in the world cannot save you from data that are just bad to begin with.

Resources from Gabrielle:

MICE in R
MICE in STATA
Tutorial on MICE
Tutorial on IPCW

Resource for more complex situations:

Application of multiple imputation methods to sequential multiple assignment randomized trials

Inverse probability of censoring weighting for missing data

Methods Group Oct 28: Power Calculations

By Daniala Weir and Deepa Jahagirdar

We all learn about basic power calculations in stats 101.  But when it comes time to actually do one, it’s as if we know nothing at all.  In our methods group discussion on October 28th, we talked about this challenging aspect of every study. In the simplest case, software (STATA, SAS, R) and online power calculators are your best friend, for example when the effect measure is an unadjusted difference in proportions. Gillian Ainsworth provided us with a few great examples, including an overview by Dr. Hanley & Dr. Moodie.
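
For that simplest case, base R already does the work; a minimal sketch (the proportions and power below are made-up inputs) might look like:

#power for an unadjusted difference in proportions, using base R
#solve for the sample size per group needed to detect 20% vs 30% with 80% power
power.prop.test(p1 = 0.20, p2 = 0.30, power = 0.80, sig.level = 0.05)

#or, for a fixed sample size per group, solve for the power instead
power.prop.test(n = 200, p1 = 0.20, p2 = 0.30, sig.level = 0.05)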

But let’s face it: the simplest case rarely applies.  Simultaneously accounting for confounding, correlation, and ranges of prevalence for outcomes and exposures requires methods beyond those online calculators. In this case, it is best to simulate data.   While the word ‘simulation’ can scare a lot of people off, Brooke Levis provided some very useful examples from her own research, as well as R code written with Andrea Benedetti to conduct a simulation for her power calculations.  The code is pasted below.

Finally, Dr. Hanley gave some important advice: “Every study is valuable for answering an overarching research question… Just because you don’t have enough power to conduct the study you want doesn’t mean it shouldn’t be conducted at all. Think about it as contributing to an overall meta-analysis on a particular research question.”

R Code for Power Calculations Simulations (credit to Brooke Levis and Dr Andrea Benedetti):
#this function generates a dataset of size n with a binary outcome, binary exposure, and binary confounder
#the exposure prevalence is given by prevx
#the confounder prevalence is given by prevconf
#the outcome prevalence is given by prevy
#the OR between the confounder and x is ORconfx
#the OR between the confounder and y is ORconfy
#the OR between the exposure and y is ORxy
#nreps is the number of times the data is generated and analyzed
#for each data set, a crude model and an adjusted model are fit and the significance of the exposure beta is assessed
getpower<-function(n, prevx, prevconf, prevy, ORconfx, ORconfy, ORxy, nreps){
#make a matrix to hold the results
res<-matrix(NA, ncol=11, nrow=nreps)
res<-as.data.frame(res)
colnames(res)<-c("n","prevx","prevconf","prevy","ORconfx","ORconfy","ORxy","pvaladjmodel","sigadjmodel","pvalcrudemodel","sigcrudemodel")

for(i in 1:nreps){
#generate the binary exposure – input prevalence of exposure
x<-rbinom(n, 1, prevx)

#generate the binary confounder – prevalence of confounder and OR between exposure and confounder
b0confx<-log(prevconf/(1-prevconf))
b1confx<-log(ORconfx)
regeqxconf<-b0confx+b1confx*x
conf<-rbinom(n,1, exp(regeqxconf)/(1+exp(regeqxconf)) )

#generate the binary outcome – prevalence of outcome, OR between exposure and outcome and OR between confounder and outcome
b0<-log(prevy/(1-prevy))
b1confy<-log(ORconfy)
b1xy<-log(ORxy)
regeq<-b0+b1confy*conf+b1xy*x
y<-rbinom(n, 1, exp(regeq)/(1+exp(regeq)))

#adjusted model
m1<-glm(y~x+conf, family=binomial)
#get p value for exposure beta
res[i,]$pvaladjmodel<-summary(m1)$coef[2,4]
#is it significant?
res[i,]$sigadjmodel<-ifelse(summary(m1)$coef[2,4]<0.05,1,0)

#crude model
m0<-glm(y~x, family=binomial)
#get p value for exposure beta
res[i,]$pvalcrudemodel<-summary(m0)$coef[2,4]
#is it significant?
res[i,]$sigcrudemodel<-ifelse(summary(m0)$coef[2,4]<0.05,1,0)
#hold onto data generation params
res[i,]$n<-n
res[i,]$prevx<-prevx
res[i,]$prevconf<-prevconf
res[i,]$prevy<-prevy
res[i,]$ORconfx<-ORconfx
res[i,]$ORconfy<-ORconfy
res[i,]$ORxy<-ORxy
}
#return the results
res
}

#call the function
p1<-getpower(n=400, prevx=.5, prevconf=.1, prevy=.2, ORconfx=2, ORconfy=2, ORxy=2, nreps=500)
#the column means of the 0/1 significance indicators (sigadjmodel, sigcrudemodel) are the estimated power
colMeans(p1)

The biggest deterrent to equality: Humanity

Regardless of topic, all epidemiologists will spend time trying to understand equality. Trump’s (and others’) elections have demonstrated that the biggest battle to realizing a more equal world is not about resources, outreach or policy effort.  It’s about what humans (do not) want to do.

Our studies further the notion of equal social status because they give inherent value to the fight. Whether it’s deciding to adjust for race/sex/ethnicity because we know group X is inherently worse off than others, studying disease outcomes in ‘vulnerable’ groups, or examining the effect of health insurance, the examples are infinite.  But time and time again, we show the relative health disadvantage of marginalized groups. The implicit message: more resources, more outreach and more policy effort targeting those who need it the most. While admitting it means crossing the much-dreaded line into advocacy, it is clear we are secretly imagining a world where disadvantage due to characteristics inherited at birth is gone.  If everyone did well, half of epidemiology would disappear.

Unfortunately, such a world will be stalled by those who have something to lose. Trump’s win, Viktor Orbán, Brexit, Pegida, Marine Le Pen… all have demonstrated that the desire for social power trumps (no pun intended) any broad desire to give everyone a chance.  Yes, poverty, disaffection, and loss of political voice (the problems of the type of voter who effectively delivered Trump’s and Brexit’s victories) are sad.  No doubt, large groups of people have seriously lost out in our ‘new economy’.   But these factors alone are not what caused the recent voting and ideological trends.  The missing piece?  It’s poverty, disaffection, and loss of political voice among those who were once all but guaranteed them.

Maybe we over-estimate humanity.  The elections of Trump and others are as much referendums on whether historically excluded groups deserve a better chance as they are expressions of a belief in the urgency of restoring a social identity that was once the pride of certain demographics.  People like Mr. Trump here or Mr. Farage in the UK did not create this belief: they merely gave it permission to fly.  We hope those who hold it are a small minority, but they are not. Which leaves us with a dilemma: how can we fight for the best programs, policies and interventions to improve equality when humanity’s desire for inequality is so strong?

Methods Discussion Group 1: Manuscript Writing

The Applied Research Methods Discussion Group met last Friday to discuss this month’s topic of choice – Manuscript Writing. The discussion carried on beyond the time limit with topics including organizing literature into a Background section, journal targeting, the importance of titles and cover letters, and finally, abstracts.

The first part of the paper, the Background section, is the product of hours spent reading dozens of papers.  The purpose of understanding the literature is to fairly summarize its ‘weight’ – generally, are articles saying x or y? But keeping track of 30+ papers, with new ones constantly coming in, is a challenge. The group shared their best tips for organizing literature. For instance, create an ongoing Evernote, Excel or Word document to make notes about papers as you read them.  At the end, the little blurbs about each paper can jog your memory and provide little write-ups to include in the paper. Regardless of the number of papers reviewed, it is natural to feel like you might have missed papers on the topic.  Subscribing to RSS feeds or journal alerts can help you keep up to date on developments in your field.  Ideally, you have not missed the most seminal paper ever on the topic, but remember we all have to stop reading at a certain point.

We also discussed challenges related to working with interdisciplinary teams and the necessity of tailoring writing to specific journals.  Ultimately, not all disciplines’ journals are like ours.  Within typical epidemiology/health sciences journals, it may be better to write generically rather than targeting specific journals.  Adjusting the length, a few sentences in the Background/Discussion, and the formatting should be enough to submit to multiple journals.  However, there are differences to bear in mind if targeting a journal outside of epidemiology (or working with colleagues in fields such as Economics). For example, the background is often more than twice the length, the theoretical foundations for the research are described in more detail, and the paper is structured differently overall. In these cases, minor readjustments will not be enough, and targeting the journal while writing is more helpful.

After the paper is carefully completed and the journal is finally chosen, some editors will have made up their minds by the end of your title or cover letter. The title should be succinct yet detailed enough to keep their interest. A general template is ‘General: Specific.’ For example: ‘Cat food: the role of tuna in a nutritious diet’ or ‘Obesity prevalence: differences across socio-economic status.’  Humorous titles may or may not be okay; our group was split on this issue. It may take a certain status (or a certain talent) to get away with it. If the editor has not stopped by the end of the title, s/he will at least read your cover letter. This letter’s importance is often under-appreciated.  In addition to summarizing the main findings, personalize the letter to indicate why you have chosen that specific journal.  For example, citing previously published articles from the same journal that suggest the need for your work can help your case.

At last, you have succeeded in drawing the editor to your abstract.  The abstract is likely the last thing the editor will read before deciding to send the paper for review. We had a debate about writing the abstract before or after the rest of the paper.  Beginner writers often write the abstract last, but people with more experience in the group suggested writing it first.  Articulating research in ~250 words forces the purpose, findings and importance to be clear.  From there, fill in the rest of the paper.  However, abstract writing may also be more iterative.  I am personally convinced that the clarity of the research increases right up until your paper is complete (‘NOW I understand what my research was about’). This clarity is essential for abstract writing.

While we covered practical aspects of writing papers and real-time challenges that go beyond the typical structure of Introduction-Methods-Results-Discussion, more resources are available here:

Stanford Online Writing Course

Clinical Epidemiology Writing Tips

BMJ Writing E-Book

We hope to see you next time when the discussion will centre on power calculations! October 28, 12:30pm, Purvis Hall Room 25.

We, too, can be health economists

Yesterday Dr Jason Guertin presented on the overlap between pharmacoepidemiology and pharmacoeconomics, challenges to translating research into decision making and the potential transition between epidemiology and health economics.

The speaker introduced the incremental cost-effectiveness ratio (ICER), and went on to describe the confounding challenges in determining it.  This ratio is the increase/decrease in cost per unit change in effectiveness (e.g., per quality-adjusted life year or year of life gained) for a new drug/technology compared to its predecessor. The ICER is the key outcome in pharmacoeconomics and in cost-effectiveness research for health technologies in general. It is analogous to the usual health outcomes we study in epidemiology. As in epidemiology, confounding is a problem in cost-effectiveness research based on observational studies. However, the ICER is actually composed of two things rather than just one health outcome – the cost component and the effectiveness component. Confounding takes on new life because of these two outcomes and the positive or negative correlation between them.
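
To make the ratio concrete, here is a minimal sketch of the calculation in R (all numbers are made up for illustration):

#incremental cost-effectiveness ratio: extra cost per extra QALY gained -- numbers are hypothetical
cost_new <- 52000   #mean cost per patient, new technology
cost_old <- 40000   #mean cost per patient, comparator
qaly_new <- 1.8     #mean QALYs per patient, new technology
qaly_old <- 1.5     #mean QALYs per patient, comparator

icer <- (cost_new - cost_old) / (qaly_new - qaly_old)
icer   #40000, i.e. $40,000 per QALY gained in this made-up example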

In epidemiology, our effect estimates can swing above and below the null when confounders are excluded or included.  In cost-effectiveness research, the cost per quality of life years gained can swing above and below the acceptable threshold to approve new drugs/technologies for reimbursement. In an extreme example, Dr Guertin found a difference of up to $80,000 per quality-adjusted life year gained between unadjusted and adjusted models.  Evidently such a price tag has practical implications for decision-making – in this case whether to approve a new technology to treat aortic aneurysm.

Beyond the actual study, translating findings into policy faces further complications. Public reaction has a bigger influence on which technologies and drugs are approved than even the best-quality cost-effectiveness studies.  For example, a very expensive drug to treat rare genetic disorders in infants may be approved because of the value society places on young lives.  At the same time, treatments for hair loss are not approved for reimbursement despite their extreme cost-effectiveness.  In epidemiology, we face similar challenges. For example, maternity leave allowances of six weeks may lead to better breastfeeding outcomes.  Say the research on this issue was perfect.  Would the policy be implemented everywhere? No.

In sum, Dr Guertin effectively translated his health economics research into a language epidemiologists could understand.  The overlap in confounding and study design-related challenges demonstrated that the skills also overlap.  So, pharmacoeconomics may be a new field to pursue for you!


Am I passionate enough about my PhD?

We are often told passion is one of the most important aspects of a PhD.  That if you don’t like your topic or field of study, you are doomed from the start.   It is idyllic, actually: being so passionate about your topic that you will never procrastinate, you will put in 110% every day, and, most of all, have a lifelong devotion.

Realistically, choosing a topic is one of the biggest challenges for graduate students even if you are floating in a cloud of passion. Regardless of whether the topic is from a blank slate or a continuation from previous work, for many students, passion goes something like this:

  1. An initial idea driven by passion and excitement (and practicality)
  2. Excitement builds and you feel confident
  3. Excitement dwindles and you question everything
  4. Repeat 2 and 3 until you end up in a static state of one or the other

The scary part is ending up permanently at step 3. What does this mean? Should you stick to your plan of becoming a tenured expert in fruit fly migration? Regardless of your PhD stage, divorcing yourself from a career path you had perfectly planned and a topic that used to be your passion is not impossible. Practically, one can always apply to non-traditional jobs post-PhD, and build contacts to transition into preferable topic areas and career paths.

At the same time, pursuing alternate plans is more difficult than it seems. Think about the achievements that are rewarded in our department, where reward = verbal praise, postings on the news websites, congrats from professors, wow factors at thesis/protocol defense.  These ‘wow factor’ achievements include awards at conferences, speaking invitations, novel methods, publications in NEJM, CIHR funding… Someone who has all these things is a ‘very good’, ‘very bright’ student. We all like praise, so adhering to the above model is highly tempting despite dwindling interest in the topic and career path that is receiving the praise.

Unfortunately, similar external validation is not available for alternate plans, making two things necessary to move on from your set-in-stone path: admitting the mismatch between your previous thinking and your current state of mind, and learning to rely on internal validation.  So what if no one notices that you just published a very creative idea in a very mediocre journal? You should be proud; picture yourself explaining this idea to someone who will notice, at a time when it actually matters for you. Learning to define your own achievements is a prerequisite to defining your own path beyond the PhD, and ultimately to finding a career that is truly driven by passion.

Further reading for those interested:

Dr. Levine could no longer focus on astronomy with developing political events 

Dr Borniger started a PhD in a different field despite success in anthropology

Top 10-alternative careers for STEM PhDs & the importance of understanding your options

I hate my PhD


Maybe it’s not you, it’s the water

The best soccer teams never let the ball get to the goalie, and the healthiest goldfish still need their water changed. These were the central themes of Dr Sandro Galea‘s speech Thursday at the Canadian Society for Epidemiology and Biostatistics’s national conference in Winnipeg.

The talk highlighted the over-focus on tiny, proximal parts of an overall chain of events leading to disease. By now we have learned that everything from eating nuts to taking vitamins to swimming prevents cancer. We have also made massive investments in personalized medicine, taking tailored health to a new level.  The message is clearer than ever: health is experienced by the individual, so it must be in the individual’s hands.

At the same time, most of us are grounded in an overall interest to improve population health. By focusing on the determinants of health at an ever-increasing individual level, we ignore the systems and environment within which people make health decisions. For instance, there has been a major decline in automobile accidents over the past century. What is responsible? Safer roads and safer cars.  Not focusing on the individual driver’s abilities. Similarly, having a population impact requires a perspective on the conditions under which we can learn to eat more nuts and vitamins, and buy more swimming pool memberships.

[Photo: Our own Sofia asking Dr Galea a question]

According to Dr. Galea, the solution centers on a re-calibration of time and monetary investment. This does not involve spending more time and money, but pulling existing research energy and health system funding towards evidence on population-level options and public health infrastructure. The ‘no net increase in spending’ argument should win over politicians who may be completely unaware of this perspective; lately, funding has gone the opposite way (and while the argument is interesting in theory, imagine the public reaction to the headline “Funding cuts to hospitals,” regardless of the overall benefit).

Ultimately a population approach stops people from getting sick in the first place: it is all the offense and defense in place before the ball ever gets to the goalie.  It also acknowledges the limits of individual responsibility: a goldfish eating the healthiest and exercising the most can only go so far in murky water.   While we still need a goalie when we get sick, and we still need to look after ourselves, stressing only individual health determinants hinders fulfilling what most of us would like to see: better population health.


Thinking about observational studies like RCTs

You too can conduct a high-quality, valid observational study!  Today, Dr. Miguel Hernan reminded us of some overlooked aspects of doing exactly this.  When done correctly, we might get close to the estimate from a randomized controlled trial. When done incorrectly, we probably have not shaped our thinking in the right way. Ultimately, observational studies can sometimes mimic RCTs (though not replace them). To even begin to try to achieve this, we need to approach our observational studies more like RCTs.

The thinking begins by asking an answerable question.  Questions for RCTs involve interventions that are ‘randomly assignable’.  ‘What is the effect of taking aspirin compared to a placebo on Y?’ ‘What is the effect of giving people a basic living wage compared to targeted benefits on Y?’  Inherent to these is a well-defined intervention that is the same across participants.  In contrast, consider ‘What is the effect of weight loss on heart disease?’  This is not a well-defined causal question, because multiple interventions lead to weight loss (did you start exercising or start smoking? The effect will vary depending on which).

Next, consider eligibility and Time 0. Here is where even high-quality observational studies can go bad. Common mistakes are accidentally (?) including prevalent users of an intervention rather than incident users, or inducing immortal time bias.  A great example was the controversy between the RCT and observational findings on hormone replacement therapy, and its partial resolution once the Time 0 and prevalent-user inclusion issues were addressed.  To avoid such blunders, get to know your data. Very well.  In contrast to an RCT, where we might build the data from scratch, we observational researchers often acquire pre-existing data and are thus at the mercy of its documentation.  Thinking like a trialist and digging into the data reveals whether it is possible to answer your causal question, or some worthwhile version of it, after all.

Third, minimize confounding biases and consider estimating a per-protocol effect while you’re at it.  Eliminating these biases means that the only important difference between treatment and control groups is the treatment itself. There are methods such as inverse probability of treatment weighting, propensity score matching, etc, to help mimic random assignment. Implementing these methods using baseline covariate measures allows estimation akin to an RCT’s ‘intent-to-treat’ (ITT) effect (the effect of being assigned versus not assigned to treatment).
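
As a rough illustration of the weighting idea (a generic sketch, not Dr Hernan’s own code; the data frame ‘dat’ with treatment tx, outcome y, and baseline covariates age and sex is hypothetical), inverse probability of treatment weighting in R might look like:

#estimate each person's probability of treatment from baseline covariates
ps_mod <- glm(tx ~ age + sex, data = dat, family = binomial)
dat$ps <- predict(ps_mod, type = "response")

#stabilized weights: treated weighted by P(tx)/ps, untreated by (1-P(tx))/(1-ps)
p_tx <- mean(dat$tx)
dat$w <- ifelse(dat$tx == 1, p_tx / dat$ps, (1 - p_tx) / (1 - dat$ps))

#weighted outcome model: baseline covariates are balanced by the weights
#(quasibinomial avoids warnings about non-integer weights)
summary(glm(y ~ tx, data = dat, family = quasibinomial, weights = w))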

But remember, confounding is an issue in both RCTs and observational studies when studying the ‘per-protocol’ effect (the effect of actually taking versus not taking a treatment).  In RCTs this problem is generally dealt with by ignoring it. Estimating a per-protocol effect requires more complex analyses using time-varying exposure and covariate measures.  But given that per-protocol estimates are usually the actual effect of interest, and ITT estimates can be problematic, more effort should be made to present both ITT and per-protocol results.  Dr Hernan predicts these efforts will become commonplace in the near future.

5 Software Tools to Make Grad School in Epi Better

As epidemiology and public health graduate students, a good number of us spend almost as much time crunching data on our computers as watching YouTube. We all have our favorite data analysis tools installed: R, Stata, SPSS, SAS, JMP, WinBUGS, Matlab… We use Dropbox to sync and back up files, Google Docs to collaborate, Endnote or Papers to manage our PDFs and citations, and Evernote to manage our notes.

[GIF: Friday afternoons in Purvis Hall]

But aside from the famous tools we all know and love, there are a lot of awesome software tools and plugins out there that can make our lives just a little easier. You want to search for 50 different keywords in 50 different windows at the same time? There’s a plugin for that (Chrome). You want to download citations on the go? This button is mandatory (Chrome, Firefox).  You want to force a window to stay on top so you don’t have to flip back and forth? Download a little utility (Win, Mac).

Here are 5 software tools that have made my life just a little bit better:


Exciting news: Unique data on social policies is now available

The social and economic conditions that surround us can affect our health.  This is not a new idea.  If the notion was not broadly appreciated by the time it was formalized in the Lalonde report, the point was hammered home more thoroughly in the Marmot review.  However, with little evidence quantifying the health impact of policies meant to address these conditions, the idea has largely stayed just that – an idea. Because of their complexity, we have rarely been able to answer questions like, ‘What exactly would happen to people’s health if we passed policy x, y, or z?’ or ‘How many fewer people would get sick?’

Part of the challenge is the lack of analyzable policy information. The other part is the inherent difficulty in using conventional epidemiologic methods to answer such questions.  Taking on these challenges, the McGill-based MachEquity project (2010–) has been building databases and applying robust methods to build a body of causal evidence on the effects of social policies on health in low- and middle-income countries. Below is a summary of their work and their recent launch of data for public use.

How has MachEquity built ‘robust’ evidence?

First, there is the stringently coded policy data. Focusing on national social policies, staff reviewed the full legislation documents in each country, along with amendments and appeals, or secondary sources if original documentation was not available.  Two researchers then quantified the legislation into something meaningful (for example, the months of paid maternity leave legislated), ultimately producing longitudinal datasets with policy information from 1995 to 2013.

Second, the group is making use of quasi-experimental methods that attempt to mimic random assignment.  The latter is the ‘gold standard’ for evaluating the impact of anything (policy, intervention, drug…) because it ensures that we eliminate all other potential explanations for any differences we see in people’s outcomes (e.g. their health status post-experiment).  Evidently, perfectly controlled randomized experiments are most often impossible when we are dealing with social determinants of health (can we randomize people to have more education?).  Enter: quasi-experimental methods. There are entire books and courses on this (literally – look here), but the basic idea is that we can mimic randomization by eliminating other sources of variation, e.g. over time, across different types of people, or across countries, by controlling for them in specific ways.
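
For a flavour of what such an analysis can look like (a generic sketch, not MachEquity’s actual code; the panel data frame and variable names are hypothetical), a simple difference-in-differences-style model in R with country and year fixed effects might be:

#country-year panel 'panel' with an outcome 'vacc_rate' and a binary 'policy'
#indicator for whether the law was in force in that country-year (all hypothetical)
fit <- lm(vacc_rate ~ policy + factor(country) + factor(year), data = panel)

#country fixed effects remove stable between-country differences,
#year fixed effects remove shocks common to all countries in a given year,
#so the 'policy' coefficient reflects within-country change when the law is adopted
summary(fit)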

So what have they done so far?

Built policy databases. Published. Presented. A lot. See here.  But not only that: researchers and staff on this project work in close collaboration with partners at the Department of Global Affairs and non-governmental organizations such as CARE, and ‘package’ their work for policy-maker audiences. After all, the actual policy-making is in their hands, which are often far out of reach of academic research. Specific research topics have included the effects of removing tuition fees and health-service user fees, maternity leave legislation, and minimum age-of-marriage laws on outcomes like vaccination uptake, child mortality, adolescent birth rates and nutrition.

Where is the policy data and how can I use it?

Policy datasets on maternity leave, breastfeeding breaks at work, child marriage and minimum wage are now available for download here!  For each determinant, longitudinal data on low- and middle-income countries’ policies are available. You can therefore make use of quantified social policy information and changes in legislation over time, and the endless analyses that such data lend themselves to.