Research Fellow, Injury Division; Research Fellow, Policy Impact, The George Institute India
How did you start out in medicine and how did you get to where you are now?
I grew up in West Bengal, just across the street from a rural hospital where my parents worked. As a result, I saw firsthand the difference health care made to people’s lives. Dinner conversations at home revolved around health system issues: the gaps, what more could be done, and what was preventing changes that would improve the system.
I studied medicine at Bankura Sammilani Medical College, which serves the two most underdeveloped and insurgency-affected districts in West Bengal. Although a medical college, it was resource-scarce at the time. For example, some basic facilities or services, such as blood biochemistry or CT scans, were not available 24/7. While I was studying, I did some clinical research, which sowed the seeds of my interest in medical research, but I never thought I would move towards a full-time career in research. Clinical medicine was what I loved.
After medical school, I started working in a private tertiary care hospital in Calcutta. I could clearly see the differences between my previous experiences in a public system and the private system, and the various pros and cons of each system and level of care provided.
It was around this time that I started realising that I didn’t want to be someone who just complained about system gaps and problems. I decided I needed to do something outside my clinical role to try to understand the challenges better and find solutions. I decided to move into research and was lucky enough to get a great opportunity to work at the South Asian Cochrane Centre in India. My role there was to carry out systematic reviews and help train others to conduct them.
What is a systematic review?
Decision makers - clinicians, patients, health system managers and policy makers - always have choices to make: which drug to use, whether to conduct surgery or not, which services to reimburse, which strategy to implement, and so on. We can look at research studies to inform these decisions, but different studies often report conflicting results about what to do.
Systematic reviews are important instruments that assist decision makers; they collate all the available evidence on a particular research topic and critically evaluate its merits and shortcomings to help inform choices. Having a single pooled estimate (a meta-analysis) makes the evidence easier to interpret and helps one make informed decisions.
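To make the idea of a pooled estimate concrete, here is a minimal illustrative sketch of fixed-effect inverse-variance pooling, one of the simplest meta-analysis methods. The study numbers are invented for illustration only and do not come from any review mentioned in this interview.

```python
# A minimal sketch of fixed-effect inverse-variance meta-analysis:
# each study's effect estimate is weighted by the inverse of its variance,
# so more precise studies contribute more to the pooled result.

def pooled_estimate(effects, std_errors):
    """Combine study effect estimates into one pooled estimate and its SE."""
    weights = [1 / se ** 2 for se in std_errors]          # inverse-variance weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5                 # SE of the pooled estimate
    return pooled, pooled_se

# Three hypothetical trials reporting a treatment effect (e.g. a log risk ratio)
effects = [-0.20, -0.35, -0.10]
std_errors = [0.10, 0.15, 0.08]

estimate, se = pooled_estimate(effects, std_errors)
print(f"pooled effect = {estimate:.3f}, SE = {se:.3f}")
```

Real reviews would also assess heterogeneity between studies and often use random-effects models instead; this sketch only shows the basic pooling arithmetic.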
Can you tell us about some of your different roles at The George Institute for Global Health?
I wear two hats at The George Institute: I am a Research Fellow in the Injury Division, which has also been designated a WHO Collaborating Centre for Prevention of Injury and Trauma Care, and I am a Research Fellow in Policy Impact.
Most of my time in the Injury Division is spent working on health systems and policy research on snakebite in India. This is an area of research that I am very interested in personally. Snakebite does not receive a great deal of attention and WHO only recently recognised it as a neglected tropical disease.
I also provide technical support to a large multicentre cohort study with a targeted recruitment of about 10,000 patients in India and a global target of 40,000.
I am also involved in participatory research to understand the Char community’s perspectives on disaster risk resilience. In the Sundarbans, West Bengal, our team is working to understand the scale of drowning, especially with regards to child deaths.
In the policy impact space, I am the methods lead for the rapid evidence synthesis unit. We inform health policy and support health systems decision making through tools such as evidence gap maps, which help inform future research priorities.
Can you tell us about your roles outside of The George Institute?
Outside of The George Institute, I am an associate editor of BMJ Global Health. I review journal submissions, including systematic reviews and scoping reviews. Last year, I was privileged to be the handling editor for a large number of papers for the special supplement on methods for evidence synthesis for complex health interventions in WHO guidelines. I am also the co-convenor of the Cochrane Priority Setting Methods Group.
Why are you particularly interested in the theme of harnessing evidence?
As researchers, we typically talk about evidence as something that is objective and free from judgement; however, this is not true. I think it is important for us to understand that all evidence comes with values attached (consciously or otherwise), which makes it political in nature. From the moment we decide on a research question, values get attached. Our values influence everything - from the aims and objectives of the research, to which methods and analytical frameworks are chosen, to how narratives are built into the discussion section and conclusions are reached.
Using evidence synthesis in decision making is a way to bring some rationality into the process. It is important for those involved in research to be reflective of the process and transparent about it. Primary studies are extremely important as they help build the evidence base. But when it comes to decision making, it’s crucial to have an interim point where the evidence is harnessed and then synthesised into meaningful results and different evidence synthesis products, such as systematic reviews.
At The George Institute, we recognise evidence can be value-laden, and we have mechanisms to try and balance this out, including peer-review and involving consumers and other stakeholders in the design process.
What are the key areas and gaps in synthesising evidence?
This is a broad area of research that has grown considerably over the last 10-20 years and is still evolving. The need for evidence has to be balanced against the time it takes to produce and the needs of decision makers, and that process currently faces several challenges.
A lot of evidence can be too academic in nature and/or take a long time to develop. At The George Institute India, we have been working to address these challenges and developed several products that harness evidence in different ways, for different needs. One is called ‘rapid evidence synthesis’.
Rapid evidence synthesis balances rigour against the needs and time-frame of a decision maker. A question comes from decision makers, and the synthesis is done rapidly within the required time period. As timeliness is very important, a rapid evidence synthesis is completed in just four to 10 weeks, unlike systematic reviews, which take one to one and a half years to complete. In this area, we have been supported by the WHO Alliance for Health Policy & Systems Research and we collaborate with the government’s National Health Systems Resource Centre.
Another area of work for us is ‘evidence gap maps’ (EGMs). An EGM considers the evidence available around the globe on a particular subject and identifies gaps. This is important, as without investigating what the gaps are, there is a chance the research will not contribute any new information. A few years ago, the Lancet Research Waste Series showed that a large proportion of research is wasted, in the sense that it does not lead to an incremental increase in knowledge.
By using an EGM, researchers can ensure their work will add to a greater understanding of a topic. With support from the Indian Council of Medical Research, we are working on low-cost indigenous approaches to developing EGMs so they can be used to inform national research priorities.
Another key gap that exists in the Indian system in terms of evidence synthesis is in clinical practice guidelines. A common challenge is translating the evidence to consider implementation issues. This is a challenge not only within India but also in other low- and middle-income countries, and affects how successful the guidelines will be in improving health outcomes.
Other teams at The George Institute are working on gaps in Indian cardiovascular disease guidelines, such as how guidelines should be developed once a systematic review has been conducted.