Methods Discussion Group 1: Manuscript Writing

The Applied Research Methods Discussion Group met last Friday to discuss this month’s topic of choice – Manuscript Writing. The discussion carried on beyond the time limit with topics including organizing literature into a Background section, journal targeting, the importance of titles and cover letters, and finally, abstracts.

The first part of the paper, the Background section, is the product of hours spent reading dozens of papers. The purpose of understanding the literature is to fairly summarize its ‘weight’ – generally, are articles saying x or y? But keeping track of 30+ papers with new ones constantly coming in is a challenge. The group shared their best tips for organizing literature. For instance, create an ongoing Evernote, Excel or Word document to make notes about papers as you read them. At the end, the little blurbs about each paper can jog your memory and provide little write-ups to include in the paper. Regardless of the number of papers reviewed, it is natural to feel like you might have missed papers on the topic. Subscribing to RSS feeds or journal alerts can help you keep up to date on developments in your field. Ideally, you have not missed the most seminal paper ever on the topic, but remember we all have to stop reading at some point.

We also discussed challenges related to working with interdisciplinary teams and the necessity of tailoring writing to specific journals. Ultimately, not all disciplines’ journals are like ours. Within typical epidemiology/health sciences journals, it may be better to write generically rather than targeting specific journals. Adjusting the length, a few sentences in the Background/Discussion, and the formatting should be enough to submit to multiple journals. However, there are differences to bear in mind if targeting a journal outside of epidemiology (or working with colleagues in fields such as economics). For example, the background is often more than twice the length, the theoretical foundations for the research are described in more detail, and the paper is structured differently overall. In these cases, minor readjustments will not be enough, and targeting the journal while writing is more helpful.

After the paper is carefully completed and the journal is finally chosen, some editors will have made up their minds by the end of your title or cover letter. The title should be succinct yet detailed enough to keep their interest. A general template is ‘General: Specific.’ For example: ‘Cat food: the role of tuna in a nutritious diet’ or ‘Obesity prevalence: differences across socio-economic status.’ Humorous titles may or may not be okay; our group was split on this issue. It may take a certain status (or a certain talent) to get away with it. If the editor has not stopped by the end of the title, s/he will at least read your cover letter. This letter’s importance is often under-appreciated. In addition to summarizing the main findings, personalize the letter to indicate why you have chosen that specific journal. For example, citing previously published articles from the same journal that suggest the need for your work can help your case.

At last, you have succeeded in drawing the editor to your abstract. The abstract is likely the last thing the editor will read before deciding whether to send the paper for review. We had a debate about writing the abstract before or after the rest of the paper. Beginner writers often write the abstract last, but people with more experience in the group suggested writing it first. Articulating the research in ~250 words means the purpose, findings and importance are clear. From there, fill in the rest of the paper. However, abstract writing may also be more iterative. I am personally convinced that the clarity of the research increases right up until the paper is complete (‘NOW I understand what my research was about’). This clarity is essential for abstract writing.

While we covered practical aspects of writing papers and real-time challenges that go beyond the typical structure of Introduction-Methods-Results-Discussion, more resources are available here:

Stanford Online Writing Course

Clinical Epidemiology Writing Tips

BMJ Writing E-Book

We hope to see you next time when the discussion will centre on power calculations! October 28, 12:30pm, Purvis Hall Room 25.

We, too, can be health economists

Yesterday Dr Jason Guertin presented on the overlap between pharmacoepidemiology and pharmacoeconomics, challenges to translating research into decision making and the potential transition between epidemiology and health economics.

The speaker introduced the incremental cost-effectiveness ratio (ICER), and went on to describe the confounding challenges in determining it. This ratio is the increase (or decrease) in cost per unit change in effectiveness (e.g. per quality-adjusted life year gained) for a new drug or technology compared to its predecessor. The ICER is the key outcome in pharmacoeconomics and in cost-effectiveness research for health technologies in general. It is analogous to the usual health outcomes we study in epidemiology. As in epidemiology, confounding is a problem in cost-effectiveness research based on observational studies. However, the ICER is actually composed of two things rather than just one health outcome – the cost component and the effectiveness component. Confounding takes on new life because of these two outcomes and the positive or negative correlation between them.
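To make the arithmetic concrete, here is a minimal sketch of the calculation in Python; the costs, QALY values and threshold are entirely made up for illustration, not figures from the talk.

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effectiveness (e.g. per quality-adjusted life year, QALY) of a new
    drug/technology compared to its predecessor."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical numbers: the new technology costs $45,000 and yields 6.2 QALYs on
# average; the predecessor costs $30,000 and yields 5.9 QALYs.
ratio = icer(45_000, 30_000, 6.2, 5.9)
print(ratio)                     # 50,000 dollars per QALY gained
print(ratio <= 60_000)           # compared against a made-up willingness-to-pay threshold
```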

In epidemiology, our effect estimates can swing above and below the null when confounders are excluded or included.  In cost-effectiveness research, the cost per quality of life years gained can swing above and below the acceptable threshold to approve new drugs/technologies for reimbursement. In an extreme example, Dr Guertin found a difference of up to $80,000 per quality-adjusted life year gained between unadjusted and adjusted models.  Evidently such a price tag has practical implications for decision-making – in this case whether to approve a new technology to treat aortic aneurysm.

Beyond the actual study, translating findings into policy faces further complications. Public reaction has a bigger influence on which technologies and drugs are approved than even the best-quality cost-effectiveness studies. For example, a very expensive drug to treat rare genetic disorders in infants may be approved because of the value society places on young lives. At the same time, treatments for hair loss are not approved for reimbursement despite their extreme cost-effectiveness. In epidemiology, we face similar challenges. For example, maternity leave allowances of six weeks may lead to better breastfeeding outcomes. Say the research on this issue was perfect. Would the policy be implemented everywhere? No.

In sum, Dr Guertin effectively translated his health economics research into a language epidemiologists could understand. The overlap in confounding and study design-related challenges demonstrated that the skills also overlap. So, pharmacoeconomics may be a new field for you to pursue!





Am I passionate enough about my PhD?

We are often told passion is one of the most important aspects of a PhD.  That if you don’t like your topic or field of study, you are doomed from the start.   It is idyllic, actually: being so passionate about your topic that you will never procrastinate, you will put in 110% every day, and, most of all, have a lifelong devotion.

Realistically, choosing a topic is one of the biggest challenges for graduate students even if you are floating in a cloud of passion. Regardless of whether the topic is from a blank slate or a continuation from previous work, for many students, passion goes something like this:

  1. An initial idea driven by passion and excitement (and practicality)
  2. Excitement builds and you feel confident
  3. Excitement dwindles and you question everything
  4. Repeat 2 and 3 until you end up in a static state of one or the other

The scary part is ending up permanently at step 3. What does this mean? Should you stick to your plan of becoming a tenured expert in fruit fly migration? Regardless of your PhD stage, divorcing yourself from a career path you had perfectly planned and a topic that used to be your passion is not impossible. Practically, one can always apply to non-traditional jobs post-PhD, and build contacts to transition into preferable topic areas and career paths.

At the same time, pursuing alternate plans is more difficult than it seems. Think about the achievements that are rewarded in our department, where reward = verbal praise, postings on the news websites, congrats from professors, wow factors at thesis/protocol defense. These ‘wow factor’ achievements include awards at conferences, speaking invitations, novel methods, publications in NEJM, CIHR funding… Someone who has all of these things is a ‘very good’, ‘very bright’ student. We all like praise, so adhering to the above model is highly tempting, despite dwindling interest in the topic and career path that are receiving the praise.

Unfortunately, similar external validation is not available for alternate plans, which makes two things necessary to move on from your set-in-stone path: admitting the mismatch between your previous thinking and your current state of mind, and learning to rely on internal validation. Both are mind games that must be overcome. So what if no one notices that you just published a very creative idea in a very mediocre journal? You should be proud; picture yourself explaining that idea to someone who will notice, at a time when it actually matters to you. Learning to define your own achievements is a prerequisite to defining your own path beyond the PhD, and ultimately finding a career that is truly driven by passion.

Further reading for those interested:

Dr. Levine could no longer focus on astronomy with developing political events 

Dr Borniger started a PhD in a different field despite success in anthropology

Top 10 alternative careers for STEM PhDs & the importance of understanding your options

I hate my PhD




Maybe it’s not you, it’s the water

The best soccer teams never let the ball get to the goalie, and the healthiest goldfish still need their water changed. These were the central themes of Dr Sandro Galea‘s speech Thursday at the Canadian Society for Epidemiology and Biostatistics’ national conference in Winnipeg.

The talk highlighted the over-focus on tiny, proximal parts of an overall chain of events leading to disease. By now we have learned that everything from eating nuts to taking vitamins to swimming prevents cancer. We have also made massive investments in personalized medicine, taking tailored health to a new level. The message is clearer than ever: health is experienced by the individual, so it must be in the individual’s hands.

At the same time, most of us are grounded in an overall interest in improving population health. By focusing on the determinants of health at an ever more individual level, we ignore the systems and environments within which people make health decisions. For instance, there has been a major decline in automobile accidents over the past century. What is responsible? Safer roads and safer cars, not a focus on individual drivers’ abilities. Similarly, having a population impact requires a perspective on the conditions under which we can learn to eat more nuts and vitamins, and buy more swimming pool memberships.


Our own Sofia asking Dr Galea a question

According to  Dr. Galea, the solution centers on a re-calibration of time and monetary investment. This does not involve spending more time and money, but pulling existing research energy and health system funding towards evidence on population options and public health infrastructure. The ‘no net increase in spending’ argument should win over politicians who may be completely unaware of this perspective; lately, funding has gone the opposite way (while interesting in theory,  imagine the public reaction to the headline, “Funding cuts to hospitals,” regardless of the overall benefit).

Ultimately a population approach stops people from getting sick in the first place: it is all the offense and defense in place before the ball ever gets to the goalie.  It also acknowledges the limits of individual responsibility: a goldfish eating the healthiest and exercising the most can only go so far in murky water.   While we still need a goalie when we get sick, and we still need to look after ourselves, stressing only individual health determinants hinders fulfilling what most of us would like to see: better population health.


Thinking about observational studies like RCTs

You too can conduct a high-quality, valid observational study! Today, Dr. Miguel Hernan reminded us of some overlooked aspects of doing exactly this. When done correctly, we might get close to an estimate from a randomized controlled trial. When done incorrectly, we probably have not shaped our thinking in the right way. Ultimately, observational studies can sometimes mimic RCTs (without replacing them). To even begin to achieve this, we need to approach our observational studies more like RCTs.

The thinking begins by asking an answerable question. Questions for RCTs involve interventions that are ‘randomly assignable’. ‘What is the effect of taking aspirin compared to a placebo on Y?’ ‘What is the effect of giving people a basic living wage compared to targeted benefits on Y?’ Inherent to these is a well-defined intervention that is the same across participants. In contrast, consider ‘What is the effect of weight loss on heart disease?’ This is not a causal question because multiple interventions lead to weight loss (did you start exercising or start smoking? The effect will vary depending on which).

Next, consider eligibility and time 0. Here is where even high-quality observational studies can go bad. Common mistakes are accidentally (?) including prevalent users of an intervention rather than incident users, or inducing immortal time bias. A great example was the RCT versus observational design controversy regarding hormone replacement therapy, and its partial resolution once the time 0 and prevalent-user inclusion issues were addressed. To avoid such blunders, get to know your data. Very well. In contrast to an RCT, where we might build the data from scratch, we observational study-ists purchase pre-existing data and are thus at the mercy of its documentation. Thinking like a trialist and digging into the data reveals whether it is possible to answer your causal question, or some worthwhile version of it, after all.
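As a rough illustration of the new-user and time 0 ideas (my own sketch, not Dr Hernan’s code, and assuming a hypothetical prescription-claims table with `id` and `fill_date` columns), one might keep only each person’s first observed fill and require a treatment-free look-back window before it:

```python
import pandas as pd

# Hypothetical prescription-claims data: one row per fill.
claims = pd.DataFrame({
    "id": [1, 1, 2, 3, 3, 3],
    "fill_date": pd.to_datetime(["2010-02-01", "2010-05-01", "2011-07-15",
                                 "2009-01-10", "2009-03-10", "2009-06-10"]),
})

lookback_start = pd.Timestamp("2009-01-01")  # start of observable history in the data
washout = pd.Timedelta(days=365)             # required treatment-free period before time 0

# Each person's first observed fill.
first_fill = claims.sort_values("fill_date").groupby("id", as_index=False).first()

# Keep incident (new) users only: the first observed fill must come after a full
# washout window; otherwise the person may be a prevalent user whose earlier fills
# we simply cannot see in the data.
incident_users = first_fill[first_fill["fill_date"] >= lookback_start + washout]

# Time 0 (start of follow-up) is the date treatment starts, not some later visit;
# counting earlier person-time as 'exposed' is what creates immortal time bias.
incident_users = incident_users.rename(columns={"fill_date": "time_zero"})
print(incident_users)
```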

Third, minimize confounding biases, and consider estimating a per-protocol effect while you’re at it. Eliminating these biases means that the only important difference between the treatment and control groups is the treatment itself. Methods such as inverse probability of treatment weighting and propensity score matching help mimic random assignment. Implementing these methods using baseline covariate measures allows estimation akin to an RCT’s ‘intent-to-treat’ (ITT) effect (the effect of being assigned versus not assigned to treatment).
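As a sketch of how one of these methods works, here is inverse probability of treatment weighting on simulated data; the variable names and numbers are invented for illustration and are not from the talk.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Simulated baseline covariates and a treatment whose uptake depends on them
# (i.e. confounding by indication).
age = rng.normal(60, 10, n)
comorbidity = rng.binomial(1, 0.3, n)
p_treat = 1 / (1 + np.exp(-(-3 + 0.04 * age + 0.8 * comorbidity)))
treated = rng.binomial(1, p_treat)

X = pd.DataFrame({"age": age, "comorbidity": comorbidity})

# Propensity score: probability of treatment given baseline covariates only.
ps = LogisticRegression(max_iter=1_000).fit(X, treated).predict_proba(X)[:, 1]

# Inverse probability of treatment weights: treated people get 1/ps, untreated 1/(1-ps).
# In the weighted pseudo-population the baseline covariates no longer predict treatment,
# which is what mimics the balance that randomization would have produced.
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))
print(weights.mean())  # roughly 2 when the weighting is behaving sensibly
```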

But remember, confounding is an issue in both RCTs and observational studies if studying the ‘per-protocol’ effect (the effect of actually taking versus not taking a treatment). In RCTs this problem is generally dealt with by ignoring it. Estimating a per-protocol effect requires more complex analyses using time-varying exposure and covariate measures. But given that per-protocol estimates are usually the actual effect of interest, and that ITT estimates can be problematic, more effort should be made to present both ITT and per-protocol results. Dr Hernan predicts these efforts will become commonplace in the near future.

5 Software Tools to Make Grad School in Epi Better

As epidemiology and public health graduate students, a good number of us spend almost more time on computers crunching data than watching YouTube. We all have our favorite data analysis tools installed: R, Stata, SPSS, SAS, JMP, WinBUGS, Matlab… we use Dropbox to sync and back up files, Google Docs to collaborate, Endnote or Papers to manage our PDFs and citations, and Evernote to manage our notes.


Friday afternoons in Purvis Hall

But aside from the famous tools we all know and love, there are a lot of awesome software tools and plugins out there that can make our lives just a little easier. You want to search for 50 different keywords in 50 different windows at the same time? There’s a plugin for that (Chrome). You want to download citations on the go? This button is mandatory (Chrome, Firefox). You want to force a window to stay on top so you don’t have to flip back and forth? Download a little utility (Win, Mac).

Here are 5 software tools that have made my life just a little bit better:

Continue reading

Exciting news: Unique data on social policies is now available

The social and economic conditions that surround us can affect our health. This is not a new idea. If the notion was not broadly appreciated by the time it was formalized in the Lalonde report, the point was hammered home more thoroughly in the Marmot review. However, with little evidence quantifying the health impact of policies meant to address these conditions, the idea has largely stayed as just that – an idea. Due to their complexity, we have rarely been able to answer questions like, ‘What exactly would happen to people’s health if we passed policy x, y, or z?’ or ‘How many fewer people would get sick?’

Part of the challenge is the lack of analyzable policy information. The other part is the inherent difficulty of using conventional epidemiologic methods to answer such questions. Taking on these challenges, the McGill-based MachEquity project (2010-) has been building databases and applying robust methods to build a body of causal evidence on social policy effects on health in low- and middle-income countries. Below is a summary of their work and the recent launch of data for public use.

How has MachEquity built ‘robust’ evidence?

First, there is the stringently coded policy data. Focusing on national social policies, staff looked at full legislation documents in each country, plus amendments and repeals, or secondary sources if original documentation was not available. Two researchers then quantified the legislation into something meaningful (for example, the months of paid maternity leave legislated), ultimately resulting in longitudinal datasets with policy information from 1995-2013.

Second, the group is making use of quasi-experimental methods that attempt to mimic random assignment. The latter is the ‘gold standard’ for evaluating the impact of anything (a policy, intervention, drug…) because it ensures that we eliminate all other potential explanations for any differences we see in people’s outcomes (e.g. their health status post-experiment). Evidently, perfectly controlled randomized experiments are most often impossible when we are dealing with social determinants of health (can we randomize people to have more education?). Enter: quasi-experimental methods. There are entire books and courses on this (literally – look here), but the basic idea is that we can mimic randomization by controlling, in specific ways, for other sources of variation, e.g. over time, across different types of people, or across countries.
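One widely used flavour of this idea is a regression with country and year fixed effects, where the policy effect is identified from within-country changes around adoption (a difference-in-differences style design). Below is a minimal sketch on an invented country-by-year panel shaped like the coded policy data described above; the countries, adoption years and effect size are all made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Invented country-by-year panel: a quantified policy indicator and a health outcome.
adopt_year = {"A": 2000, "B": 2005, "C": None, "D": 2008}  # country C never adopts
rows = []
for country, adopted in adopt_year.items():
    for year in range(1995, 2014):
        policy = int(adopted is not None and year >= adopted)
        # The policy is assumed to improve the outcome by 5 points; countries and years
        # also differ for reasons unrelated to the policy.
        outcome = 50 + 5 * policy + 0.2 * (year - 1995) + rng.normal(0, 1)
        rows.append({"country": country, "year": year, "policy": policy, "outcome": outcome})
panel = pd.DataFrame(rows)

# Country fixed effects absorb stable differences between countries; year fixed effects
# absorb shocks common to all countries, so the policy effect is identified from
# within-country changes at adoption.
model = smf.ols("outcome ~ policy + C(country) + C(year)", data=panel).fit()
print(model.params["policy"])  # recovers roughly the built-in effect of 5
```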

So what have they done so far?

Built policy databases. Published. Presented. A lot. See here. But not only that: researchers and staff on this project work in close collaboration with partners at the Department of Global Affairs and non-governmental organizations such as CARE, and ‘package’ their work for policy-maker audiences. After all, the actual policy-making is in their hands, which are often far out of reach of academic research. Specific research topics have included the effects of removing tuition fees, removing health-service user fees, maternity leave legislation, and minimum age-of-marriage laws on outcomes like vaccination uptake, child mortality, adolescent birth rates and nutrition.

Where is the policy data and how can I use it?

Policy datasets on maternity leave, breastfeeding breaks at work, child marriage and minimum wage are now available for download here!  For each determinant, longitudinal data on low and middle income countries’ policies is available. You can therefore make use of quantified social policy information and changes in legislation over time, and the infinite possible analyses that such data lends itself to.

What do we do with all this data?

According to some, ‘big data’ will transform everything, infiltrating every aspect of our work, play and comings and goings.  But what are the implications for epidemiologists? What exactly is ‘big’? What exactly is ‘transform’?  What’s next for us?

Daniel Westreich and Maya Petersen addressed these questions in the Society for Epidemiologic Research’s digital conference today. For epidemiologists, the consensus (from those keen to type responses in the chat box) was that big data may not be as revolutionary as popular imagination suggests. However, to take full advantage, we may require new methods, more training, more collaboration with programmers and, ultimately, better PR. Below is a full summary of the talks.

So what is ‘big’?  It depends.

Daniel Westreich quoted others in saying ‘big’ is a moving target: what is big today was not big many years ago (think of your first CD compared to your current iPod).  The summary I liked best: ‘big’ is anything that cannot fit on conventional devices.  For example, I only discovered my dataset was ‘big’ when I tried to read it into R, the program froze, and my computer crashed.  That’s big data (or a bad computer, but anyway, that’s the idea).

And could ‘big data’ transform epidemiology? Sort of.

First, unfortunately, simply having more data does not guarantee that causal assumptions are met. For example, Dr Westreich explained how scraping big data from Twitter would result in huge amounts of highly biased data because the site is only used by a non-random 16% of Americans. At the opposite extreme, we may end up over-confident in highly precise yet biased results. Big data could instead contribute more to prediction models. But Maya Petersen cautioned that even in these models, our implicit interest is often still causal – how often are we interested in knowing the probability of an event without even taking guesses as to why it occurs?

At the same time, we would need to move beyond classic model selection procedures to use it. Imagine thousands of possible covariates, interactions and functional forms. According to Dr. Petersen, the way to arrive at a logical estimator must be to move away from using our own logic: take humans out of it. She gave examples using UC Berkeley’s signature SuperLearner in combination with Targeted Maximum Likelihood Estimation. Essentially, the first amounts to entering the covariates into a type of black box that attempts to find the best combination. Obviously the ‘best combination’ depends on the question at hand, hence the combined use with Targeted Maximum Likelihood Estimation. Though this is just one example, we can expect the use of such computer-intensive methods to increase alongside the use of big data in epidemiology.
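The SuperLearner/TMLE software itself is not reproduced here, but the core ‘take humans out of model selection’ idea is stacking: let cross-validation decide how to combine a library of candidate learners instead of hand-picking one specification. A rough sketch of that idea with scikit-learn’s generic stacking tools (my own illustration on simulated data, not the Berkeley implementation):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Simulated data standing in for a dataset with many candidate covariates.
X, y = make_classification(n_samples=2_000, n_features=50, n_informative=10,
                           random_state=0)

# A small library of candidate learners; the stacker uses cross-validated predictions
# to learn how to combine them, rather than a human picking one specification by hand.
stack = StackingClassifier(
    estimators=[
        ("logit", LogisticRegression(max_iter=1_000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1_000),
    cv=5,
)

print(cross_val_score(stack, X, y, cv=5).mean())  # out-of-sample accuracy of the ensemble
```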

Finally, what’s next for us? Training, collaboration, PR.

1) Revised training: Using these more computer-intensive methods requires developing more advanced programming skills. But both speakers commented on the existing intensity of epidemiology PhD training. In fact, we are perhaps the only discipline where students come into the PhD with exactly zero previous epidemiology courses. There is a lot to learn. At the same time, we cannot place the onus entirely on students to self-teach. A better solution may be more optional courses.

2) Better collaboration: Rather than all of us returning to complete a Bachelor’s in Computer Science, we could just become friends with programmers. In fact, there are lots of them. Dr Petersen discussed how teaching collaboration with computer scientists is a more feasible approach than teaching computer science itself. Part of that involves knowing the kinds of questions we need to ask programmers.

3) More PR: Epidemiology’s public relations are nearly non-existent relative to some other fields (e.g. economics). If we think we can benefit from big data to answer population health-relevant questions, we need to get ourselves invited to the bigger discussions on the topic. For example, epidemiologists should be involved in discussions about what data need to be collected. But the status quo generally excludes us.

More information: Daniel Westreich / Maya Petersen / Big Data in Epi Commentary

Operational research: bridging theory and practice

One of the things we are told as students in public health and epidemiology is that our work has real-life implications and will help in making better decisions in practice. In the first week of May, a group of us from very diverse backgrounds, both academics and field workers, participated in a week-long course on operational research methods offered through McGill’s Global Health Programs and partners. This course gave us a chance to see how exactly that gap between academia and practice can be bridged. Operational research is a term with broad scope, used by the military, industry, and the public sector. The objective of this course was to give us insight into how analytic methods can be used to guide planning and decision-making in global health operations, particularly in low- and middle-income countries. The workshops were guided by Dr. Rony Zachariah and Dr. Tony Reid of MSF, Dr. Ajay Kumar of The Union, and Dr. Srinath Satyanarayana of McGill University. Below are some ideas worth sharing that participants in the course from our department picked up:

Ebola treatment unit (ETU) run by Médecins Sans Frontières (MSF). Photo: UNMEER/Simon Ruf released under creative commons.


The simplicity of operational research: simple solutions for important issues – Vincent Lavallée (Public Health)

Like many others, I was new to the field of operational research when beginning the course. My greatest takeaway was the potential for simple solutions when tackling difficult questions. A common trend among academics is a stubbornness that demands the perfect study design, often described as the holy grail of epidemiological research: the randomized controlled trial. While it is very important to identify potential biases and errors in reporting when conducting a study, gold-standard RCTs are unfortunately rarely feasible in the field.

What I enjoyed most about this course was how it highlighted the use of natural experiments and creative solutions to answer questions about health care implementation and utilization in low-resource settings. While I was finishing my public health degree, one class required us to write proposals for theoretical research projects. Many groups got caught up in trying to answer all the questions, resulting in increasingly complex study designs. It was refreshing to see how operational research teams from MSF take on one or two pointed questions and develop simple yet elegant solutions to answer them. In doing so, they manage to change policy and current practice in these settings.

The importance and challenges of publication in operational research – Marzieh Ghiasi (Epidemiology)

One of the interesting topics covered in the course was the important role that publication can play in operational research. In academia, for better or worse, the mantra ‘publish or perish’ exists in part because publications are a measure of productivity. In implementation settings, the objectives and pressures are different and publication is not a priority. In fact, projects are often implemented by governments and agencies without a strong empirical framework or post-hoc analysis, and the people doing the implementation may or may not be trained in writing scientific publications. The course instructors highlighted how conducting and publishing operational research can provide an evidence-based road map and a dissemination tool. Consequently, the capacity to conduct operational research is built not only by training people to develop protocols and collect data, but also by training them to publish, and to do it well. The presenters gave the example of a course by The Union/MSF focused on developing these skills.

We had a hands-on overview of how to use EpiData, a free, open-source program for systematic data entry that is ideal for use in constrained settings. We also had an overview of how the publication process works: for example, the often overlooked but important task of actually reading and adhering to author guidelines before submitting a manuscript to a journal! One of the most interesting things I took away from the workshops was the idea of ‘inclusive authorship’ in operational research, which is critical in projects that involve dozens of people in design, implementation, data collection and analysis. The instructors recalled their own experiences of chasing authors and contributors down by email versus bringing dozens of people into a room over a couple of days to write a paper together (the latter works better!). Bringing thirty-some people together to write a paper is, of course, in itself an operational challenge. But, as this paper showcases, it is possible and should be done to ensure fairness and engagement.

The untapped potential of operational research – Marc Messier-Peet (Public Health)

When I first glanced at the course outline for this operational research course, I felt a wave of relief come over me. Yes, people are researching implementation science, and yes, people acknowledge the potential gains it can bring to the field of public health. Delivered by an exceptional team of operational research experts, it was an excellent crash course that would appeal to anyone interested in strengthening health systems. Among the many things I took away was how to improve routine data collection by streamlining it and making it as user-friendly as possible, to ensure benefits for researchers and decision makers alike. We were shown that data collection is not inherently justified in itself. In operational settings, there is an ethical imperative for publicly funded researchers to make sure any data collected answers a relevant question and that the final work is disseminated to those best suited to use it.

Focusing on collaborations and partnerships between stakeholders, the course underlined how important it is to build relationships all along the operational research trajectory. With the international development community placing greater emphasis on impact evaluation and donor accountability, operational research can help find the tweaks and adjustments needed to improve under-performing health systems. Perhaps we in Canada could benefit from turning the operational research lens inwards and developing our capacity to see how our own institutions could perform better? The questions raised by an operational research approach are ones that need to be asked, and they provide the opportunity for engaged researchers to bridge the ‘know-do gap’ and see their work make a real difference in people’s lives.

Why is society the way it is? The problem of infinite DAGs

Consider two questions:
1. Does racial bias in law enforcement in the U.S. occur? (assuming the answer is yes)…
2. Why does racial bias in law enforcement in the U.S. occur?

Ezra Klein wrote a piece in December on the danger of controlling for large numbers of variables in analysis because we could end up ‘controlling out’ key parts of our effect of interest (no, he doesn’t appear to be an epidemiologist or even any type of researcher, but nevertheless seems to have a better understanding of confounding than many with those titles).

In DAG language:

As he aptly recognizes, researchers know this. As we’ve been taught, there are also ways of dealing with it: base your variables on substantive knowledge, do not adjust for mediators, and if you do adjust for mediators, know what you’re doing [see: mediation analysis].

Klein’s problem with over-controlling is philosophically grounded in question 2 above. He suggests that controlling for effects of the exposure prevents us from knowing why phenomena occur. Once you control for the location of drug use, black people end up far less likely to be arrested for drug crimes than white people. This is because they are more likely to use drugs in urban settings, and police are more likely to make arrests there. So in controlling for location, we lose the ‘why.’
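A tiny simulation shows the mechanics of ‘controlling out’ (all numbers are invented, and a continuous outcome stands in for arrest just to keep the example simple): when the adjustment variable lies on the pathway from exposure to outcome, conditioning on it removes the part of the effect that flows through it.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100_000

exposure = rng.binomial(1, 0.5, n)                    # hypothetical exposure of interest
mediator = rng.binomial(1, 0.2 + 0.4 * exposure)      # 'location': partly caused by the exposure
outcome = 0.5 * exposure + 1.0 * mediator + rng.normal(0, 1, n)  # continuous stand-in for arrest risk

# Total effect of the exposure: direct (0.5) plus the part flowing through the mediator
# (0.4 * 1.0), so roughly 0.9.
total = sm.OLS(outcome, sm.add_constant(exposure)).fit()

# 'Over-controlled' model: adjusting for the mediator leaves only the direct effect (~0.5),
# i.e. part of the 'why' has been controlled away.
direct = sm.OLS(outcome, sm.add_constant(np.column_stack([exposure, mediator]))).fit()

print(total.params[1], direct.params[1])
```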

But there is a distinction between questions 1 and 2. The first is complicated enough: teasing out whether an association exists, and its strength, is undoubtedly ‘epidemiology.’ It’s also quantitative ‘sociology’ with some ‘economics,’ and probably any other science. It involves describing the world as it is.

Consider the (highly simplified) reality of why people who are black are potentially more likely to be arrested (Y= e.g. Arrest):


There are a lot of cumulative, intertwined reasons why racial bias might exist in U.S. law enforcement. The particular letter (i.e. variable) we choose to study is somewhat arbitrary (A on Y? B on Y? C on Y?…). Say we look at the effect of C on Y. There are ancestors (A and B…) and mediating effects (D and E…). Such is the case no matter where our study sits on the causal path. In other words, there are infinitely many letters behind and ahead of our letter of choice.

Figuring out why society is the way it is is entirely relative. When viewed as ‘yes’ or ‘no’ (‘Have you ever been a target of racism?’), we can measure these things. But the cut-point is arbitrary. When viewed as a cumulative sum of experiences, the DAG possibilities approach infinity; the ‘why’ becomes less and less measurable (‘So, what’s it like to be Black in America?’).

As Klein suggests, we shouldn’t over-control or adjust for mediators. But perhaps the problem has more to do with biasing our analysis away from some true effect (i.e. the effect of the letter we arbitrarily chose to study) than with Klein’s suggestion that it prevents us from knowing why. Are epidemiological studies of social phenomena meant to answer ‘why’? Can they? The rigour of their methods comes from their ability to figure out what is. We know black people are more frequently arrested than white people for similar crimes. Why? We can only adjust so much, calculate the effect along so many possible pathways, and collect so much data. And we probably still wouldn’t know.