Mooney-Somers, J., Erick, W., Brockman, D., Scott, R., & Maher, L. (2008). Indigenous Resiliency Project Participatory Action Research Component: A report on the Research Training and Development Workshop, Townsville, February 2008. National Centre in HIV Epidemiology and Clinical Research, The University of New South Wales, Sydney, NSW. ISBN: 978 0 7334 2647 6.
See that bit at the end, that’s my first ISBN. I can’t recall where I got the notion from, and I wonder now at my presumptuousness. I don’t think it was standard practice in my research centre to get ISBNs for research reports. But I had just come out of a horrid job that I’d stayed put in to get publications (it didn’t really work). I was in a new job and determined to get as much on my CV as I could. The first output was a report on a training workshop. I was thoroughly engrossed in the methodology we were using (participatory action research) and genuinely interested in how it worked in practice. So writing about our process was something I was into, but it was also a publication. The ISBN though, that was kind of surprising.
I’m now the proud owner of 7 ISBNs.
For those who don’t know, ISBN stands for International Standard Book Number. It is a unique code assigned to your ‘book’. It is super easy to get one if you know how, and a complete mystery if you don’t (typical university*). You don’t need one; I suspect most reports published by academics don’t have one. Let me tell you why you should use them.
An ISBN “makes your book more discoverable”, says the Australian provider of ISBNs, Thorpe-Bowker. Unsurprisingly, a unique code attached to your book means no confusion about which title is yours. Well. I’m not entirely convinced this is a big deal for academics (honestly, just Google your intended title to make sure it is unique-ish).
The much more compelling reason?
An ISBN means your book exists, it gets listed in registries. In the case of the report above, I got a call out of the blue from a library network asking if they could buy (buy!) several copies. Seriously, how did they even know it existed? It had an ISBN.
And then there’s this…
You see, an ISBN means your work is published, and that makes it subject to legal deposit rules (a quick look at Wikipedia suggests this is an international standard).
Legal Deposit is a requirement under the Copyright Act 1968 for publishers and self-publishing authors to deposit a copy of works published in Australia with the National Library and, when applicable, the deposit libraries in your home state. Legal Deposit ensures that Australian publications are preserved for use now and in the future. – National Library of Australia (for more, read this: http://www.nsla.org.au/legal-deposit-australasia)
In NSW a publisher (e.g. your university if they secure the ISBN) is required to send copies of published material to The National Library of Australia, The State Library of New South Wales and The NSW Parliamentary Library. And because the publisher of my work is The University of Sydney, I have to send a copy to them as well.
That’s a very easy way to get my work into the Parliamentary library.
A major struggle in one of my research areas (lesbian, bisexual and queer women’s health) is the persistent charge that there is no evidence base. The charge is wrong; there is considerable evidence of disparities in health outcomes out there, but it is a hard perception to shake. So getting our biennial reports of the longest running survey of lesbian, bisexual and queer women’s health in the world onto the shelves of policy makers… That’s a win. You never know who might stumble across them.**
*Your institution’s library should be able to help or look for the “Legal Deposit Officer”.
I’ve just attended the 9th National LGBTI Health Conference in Canberra. The conference organisers had a very progressive approach to communicating with delegates – for the few months leading up to the conference they sent out short announcements (blog posts) about the papers to be presented, along with the more usual updates with delegate information. We also received a post called “A safe and inclusive conference”. Not something I’ve ever received from a conference, but very much appreciated. It has lots of useful, thought-provoking and anxiety-relieving advice: from questions about how we re-tell the personal stories we hear at the conference, through inclusive language, to tips on how to avoid misgendering. Honestly, this is useful life advice.
One point they raised resonated with me. In a section on ‘LGBTI’ and inclusion they said:
“Deliberateness: How can we make sure that we move from habitually using all five letters to earning each of them? Is it appropriate to use all five letters or does the topic we are discussing apply more specifically only to some of these populations and need rethinking for some populations?”
Recently, for a project on LGBTI smoking, my colleague Israel Berger and I reviewed published evaluations of smoking cessation interventions for LGBTI individuals (19 studies). We found that:
* All studies included gay men.
* About two thirds of studies used general terms like ‘LGBT’ but didn’t necessarily include every group.
* About two thirds of studies mentioned bisexual people as targets/participants, but reporting of bisexual status was insufficient. Indeed, several studies used the term LGBT or LGB but then only referred to lesbians and gay men, effectively erasing bisexual people.
* About two thirds of studies were nominally open to women, but only a quarter of those studies had women participants (of those that reported gender at all).
* A quarter of studies mentioned trans people, but trans people represented only 3% of participants (of those that reported trans status).
* None of the reviewed studies targeted or reported intersex participants.
So despite two thirds using general terms suggesting their intervention was developed and evaluated for LGBT people, few had earned this terminology. The problem here should be obvious – it looks like we know quite a bit about how to develop and deliver smoking cessation programs to L+G+B+T people, when in fact we’re on shaky ground for most of these letters.
At the conference I heard several examples of researchers claiming LGB/T/I when their sample was nowhere near that. I wonder why we do this. And I say “we” deliberately, as I know I have done/do this. The conference organisers’ interpretation of this practice is habit. And so they frame their advice in terms of being deliberate, mindful of the language you use (I appreciate the list of dos in their guidance where others would have a list of do nots). But I wonder if we also over-claim inclusiveness because we feel our research should be applicable to all the letters, even if in practice we can make no such claim. Or do we think the gesture to inclusiveness is sufficient? Or (worse) do we think that whatever we find for some letters will apply to them all?
We had an interesting discussion at the conference about how to earn the letters. For example, should we design our surveys so all the members of our LGBTI communities feel recognised and able to participate in ways that capture their experiences? A good idea, but is it enough? The two surveys I’m involved with seek to do this: we ask a question about trans status and a separate question about intersex status (the letters that I think are most commonly claimed but not earned).
What if I don’t do any targeted recruitment? To go back to the review of smoking cessation studies – most were nominally open to women but had low numbers that suggest to me a failure to engage and/or failure to provide culturally safe programs for women. So is saying trans and intersex people are welcome to do my survey – look! I wrote questions – earning the T or the I? I’d find this position hard to stand behind.
What if I have the separate questions but my question responses don’t adequately reflect the diversity of trans or intersex people’s lived experiences? Have I earned it? Hard to argue yes. I’ve had some feedback that one of my surveys does this so my colleagues and I will think carefully about the claims we make about the people our research findings reflect.
What if during analysis I collapse the beautifully crafted and community-consulted question responses because the cell sizes are too small to be statistically meaningful? Does this make the original attempt tokenism? I am worried that it might. Yet reporting the percentage of trans- and intersex-identifying people but doing no further analysis is what I do in my survey research. I feel uncomfortable, but I’m not sure what else we can do.
At the conference wrap-up, rapporteur Terence Humphreys (from Twenty10) said: “We are entering a new and nuanced era of deliberately engaging with people’s bodies, genders, sexualities, +identities”. This echoes some work colleagues and I did in relation to same-sex attracted women. We argued that there are important and meaningful differences under the ‘same-sex attracted’ umbrella and this demands a nuanced approach to health promotion. I think this is the better response to the conference organisers’ call for deliberateness. Claim the letters that do reflect the population your research is about, be transparent about the boundaries, and own who is missing.
It’d be great to hear about how you earn the LGBTI in your research…
Over this past semester I’ve introduced about 250 postgraduate students to qualitative research. Last year it was about 200. In fact, through running a postgraduate coursework program in qualitative health research for five years and teaching several qualitative methods courses for community researchers, I’ve taught quite a few students over my relatively short teaching career. I’ve taught people who are seeking to use qualitative methods in a research project and those who are only learning it because it is mandated by their degree (e.g. I teach the core unit in qualitative methods for a Master of Public Health).
In the first few years of teaching qual research I noticed something interesting. In the early part of the course I’d look into the auditorium and see some pained expressions, students shifting uncomfortably, and at its most extreme, open hostility towards qual research. Questions would burst out of students about bias or generalisability. They’d come up to me after a lecture and admit they had some worries. Most were genuinely struggling to understand the logic of qualitative research, but felt like they were failing.
About a third of the way through one multiple-day course something would click and they’d get it. In the early days I only taught face-to-face, so there’d be a palpable buzz in the air. I’d know we were going to be ok when they’d start making jokes about being a realist or a social constructionist. Or when they’d make their epistemological assumptions explicit when it wasn’t necessary. Or when I’d overhear a conversation in the tea break about sampling and saturation. Phew! A few wouldn’t get it and the bafflement would continue. They’d ask fundamental questions I thought we’d addressed (‘but isn’t it biased?’). I’d see their peers glance at them with embarrassment or irritation.
Most of my students are encountering a qualitative paradigm for the first time. They usually have a health or medical background. Many are health practitioners. Few were exposed to social science in their undergraduate degrees. It is unsurprising then that they are baffled, because qualitative research:
“call[s] into question students’ taken-for-granted assumptions about so many things: the purpose of research, the uses of method, the nature of knowledge, and what it means to be human” (Webb & Glesne, Teaching qualitative research, 1992)
Indeed I received the following written comment from one student after our first lecture: “qualitative research may lead you to question the very nature of reality :)” I hoped the smiley face meant that they were ok with this.
Eventually I connected what the students were experiencing to my own experience of moving countries. When my partner and I moved to Australia in 2000 we bought a book called “Culture Shock! Australia”. (Ours was the original 1992 edition; that cover is so much more evocative than the most recent edition’s.)
I’ve started using culture shock as a framing device and address it explicitly at the start of my introduction to qual research course. I tell them my own story:
When I moved from London to Sydney I experienced culture shock. I was surprised, as everything looked pretty similar (language, driving on the left), but some pretty small differences made me feel out of place – I didn’t understand the rules, and I felt like everyone could tell I wasn’t from here. I felt homesick (I understood the rules there). Not understanding and not fitting in sometimes made me frustrated – or angry. The Australian ways seemed stupid, wrong, old-fashioned! This may be familiar to some of you who have travelled or spent time with overseas visitors.
And I link this to what some of them may experience during the course:
Well, I’m going to take you on a bit of a journey on this course. For many of you it will be a new land – and there will be ideas and ways of doing things that are different to how you’re used to doing them. New ways that might challenge things that you take for granted. Sometimes when I take people on this journey I see the same kind of culture shock I experienced. By the end of the course they’ve usually started to feel more at home.
And then I give them permission to be challenged – culture shocked – and some tips from a fellow traveller:
So I’m giving you a heads-up and some advice: Bring your curiosity to this new land. Be interested in how they do things here. Notice how they are different. Pay attention to when you start thinking – that’s stupid/not the right way to do it. These feelings are useful – they signal when you are moving across paradigms or belief systems. Most of you will be used to thinking in a particular research paradigm – let’s call it quantitative – where the community shares particular beliefs, assumptions, values and ways of doing things. I’m not asking you to abandon your paradigm (your community, your sense of home), I’m asking you to recognise that you are seeing the world from this perspective.
Two students reflected on these ideas in written comments after the first lecture: “being a person used to RCT, this is very interesting but worried if I can cope” and “Coming from a science background it was helpful to know that the shift in paradigm to qualitative is a challenge”. I’m trying to encourage the students to be fearless adventurers. I came across the following quote today and it helped me understand another aspect of the culture shock experience.
It is only when you meet someone of a culture different to yourself that you begin to realise what your own beliefs really are. (George Orwell, The Road to Wigan Pier, 1937)
I like the idea that being culture shocked creates a distance that allows you to see yourself. I plan to add this next year.
[check out some more thoughts at the end of the post]
I’ve just taught a session on ethics in qualitative research, part of an intensive course designed to give attendees an appreciation of the philosophical and ethical issues underlying research involving human participants. There was good representation from those who called themselves qual researchers, those who had done some qual research, those who felt comfortable that they knew a bit about it and finally, those who had only been exposed through sitting on a Human Research Ethics Committee (HREC).
Many of the concerns that qualitative research raised for HREC members were driven by the sense that the particulars of qualitative research are unspecified and/or unspecifiable. HRECs can’t be sure who exactly researchers will talk to and what precisely they will talk to them about. It sounded like HREC members felt they couldn’t exert the control they think is necessary to protect participants. I think they are right. Much qualitative research involves a flexible, iterative process where the design emerges, the research questions are refined, and the interview questions are specified, revised and often abandoned, all post-ethical review. Indeed, the precise focus may not emerge until the research is well on its way. One HREC member who sees a lot of research proposals about children with chronic illness felt very protective towards potential participants. Already burdened with illness, the thought of just anyone being ‘let loose’ on them, with a vague set of research areas rather than a set of approved questions, was pretty discomforting. In the absence of specifying the ‘who, how and what’, the participants in my training felt they had to simply trust that the researcher knew what they were doing.
HRECs officially do have a responsibility to determine if the researchers they are ‘letting loose’ know what they are doing. The National Statement on Ethical Conduct in Human Research (2007) says research that has merit “is: (e) conducted or supervised by persons or teams with experience, qualifications and competence that are appropriate for the research.” So how do HRECs make a judgement about appropriate qualitative experience, qualifications and competence?
In judging appropriate experience, qualifications and competence, I think HRECs should start with: who is the qualified qualitative researcher on the team who can undertake this work? Is evidence of formal training in qualitative research too high a bar? Absolutely not! A Master of Applied Epidemiology or Biostatistics is official recognition of competence; that is how it is understood when it appears on an ethics application. Why not expect the same of researchers planning to undertake a qualitative project? It is not like it is that hard to get some training. [Gratuitous plug coming] I run a really very good postgraduate course and offer a range of short courses. There are short course offerings in Australia through ACSPRI, or researchers can do an online course. We’ve come a long way since qualitative methods had to be self-taught and the attitude of ‘how hard can it be to do a few interviews’ was acceptable.
In the absence of a formal qualification, how else can a HREC judge competence? I don’t have a qualification; I covered qual methods briefly in my undergraduate degree, used them extensively for my PhD, have years of practical experience, and have received supervision from experienced qualitative researchers. I might convince a HREC of my competence by saying something like:
“Dr Mooney-Somers has over 20 years of experience in the development and use of qualitative research in health and psychology, including in her PhD research. She has employed several qualitative methodologies, and conducted research with a range of populations including young people and Aboriginal and Torres Strait Islander people and on sensitive topics including youth cancer and sexually transmitted infections. Dr Mooney-Somers has been the principal lecturer on the Sydney Qualitative Health Research postgraduate program for five years, taught qualitative research to community researchers and supervised students undertaking qualitative methods from Honours to PhD level.”
There are other clues to the presence or absence of appropriate experience, qualifications and competence. HRECs might look for the following:
- Do the researchers seem to understand qualitative research? Red flags for me include: research aims that are not broadly about meaning, understanding, experience, or process; surveys as the only method (unless there are a lot of free-text questions); references to measurement; claims about representative sampling or generalising findings to the general population.
- Are they drawing on their experience to inform the proposed practice? “In the past I have used ranking exercises in focus groups to successfully engage young people in conversations about X”
- Do they present a methodology that justifies their proposed actions? Are they just gesturing towards a branded methodology or drawing on a specific version/methodologist? Are the methods and language consistent with the claimed methodology? They need not use a branded methodology; I’m looking for a coherent justification that ties the research aims/questions to the methods and the outcomes. “In line with our ethnographic methodology (ref) adopted for this project, we propose to conduct observations in three sites”; “Following Charmaz (2014), this constructivist grounded theory study will…”
- Who is actually generating the data and are research assistants receiving training in interviewing/facilitating focus groups?
- How is the data analysis process described? Anything that looks like “data from interviews will be transcribed and analysed thematically” is a massive red flag. It suggests they have no idea how they will analyse the data, or that the analysis strategy is not part of a methodological framework.
Additional thoughts (18 June 2015)
I sit on a research ethics committee for a non-governmental organisation. I read two applications yesterday that concerned qualitative research. Both did pretty poorly at demonstrating they were prepared by teams who had appropriate qualitative experience, qualifications and competence (although one was prepared by very experienced researchers). I’d like to add to my original list of clues to the absence or presence of qualitative competence:
- Is there alignment between the research questions and the data generation strategy? Between the research questions and the sampling strategy? Between the research questions and the analysis plan? That is, are they generating data and analysis that will answer the questions?
- Training is in my original list, but I was really struck again by its importance in an application from a student. Is it clear who is conducting the data generation? Do they appear to have appropriate experience, especially if dealing with sensitive or complex issues? If inexperienced (e.g. a student), is the supervisor experienced? What plans are there to provide training and ongoing guidance around data generation? You can support a novice interviewer through short courses, practice interviews (consider video and review), an experienced interviewer reviewing early interview transcripts, and regular debriefing.
- Do the researchers demonstrate an awareness of, and a plan to handle, the specific ethical issues that qualitative research produces? What are those issues, I hear you say… that calls for another blog post!
I attended a workshop by Nick Hopwood on presenting qualitative research. It was full of tips and strategies – check out the storify – and useful frameworks: Hammersley’s framework for critical review of ethnography (reminding me again that I need to read Hammersley) and Kamler and Thomson’s framework for writing abstracts from their ‘Helping doctoral students write’ book (which I promptly ordered).
I’m a keen reader of Nick’s blog and have used his tips for conference presentations. For my last conference however, I failed to implement one: ‘turn it upside down’; that is, state my argument at the beginning rather than pull it all together at the end. This was because I love a good mystery novel and my co-presenter wasn’t keen.
At the workshop, Nick asked us to review a recent presentation in light of what we’d just learnt. I used this last conference presentation. And funnily enough, I could see how much better it would have been if I had turned it upside down. Here’s why.
The logic for turning the presentation upside down is that it helps you achieve your key motivation for presenting at all – give the audience a clear sense of your key take-home message. If someone pulls the plug on you 10 minutes in (maybe because the previous presenter rambled on), at least they know your argument. And Nick insists the audience is less likely to fall asleep. Luckily, I got the whole 12 minutes allotted at my conference presentation, and as far as I could tell, everyone stayed awake.
After the workshop three compelling reasons to ‘turn it upside down’ occurred to me.
First, if I’d made my big statement upfront then the audience might have been more engaged, curious as to how I’m going to convince them (i.e. the mechanism for ‘the audience is less likely to fall asleep’).
Second, it would have meant the rest of the content would be more likely to be relevant to that argument and not just self-justifying waffle about methods or demonstrations of how clever and well-read I am (no of course I didn’t do that).
The best reason I could think of? Making my argument up front would have given the audience time to digest it. Usually – and indeed in my case – the key argument is in the last slide or two. That gives them about a minute to catch it and process it before the chair calls for questions. This might be the reason for the measly post-presentation discussion at so many conferences. If I’d put it up front, they would have had a whole 10 or 11 minutes to think about my argument, in context, in relation to my data, and more importantly for engagement, in relation to their experience and knowledge of the phenomena. So I’m convinced by Nick’s advice.
But there is a problem. Putting your argument up front means you have to have one. I’m not being flippant here. How many qual presentations have you been to where the main game seemed to be to describe what participants said? You get to the end and think, well gee people really thought some stuff / felt some stuff / needed some stuff. But it can be a bit meh, you’re not sure what it all means, why it matters. I find this kind of qual research depressing; I am sure I have been guilty of it.
Making an argument is scary (people might disagree with you!). Arguments involve taking a stand, saying: this is how we should think about this phenomenon. They require I work to persuade you, generally through presenting evidence, like my data analysis. If you think about it, data interpretation is basically an argument. I am claiming that mine is the best (or at least, most productive) way to understand what this participant means. Moving from description to interpretation can be a difficult thing for students to accomplish. It requires that they develop confidence in their ability to interpret (not easy at all). Some tips I give my students:
Lyn Richards uses a great metaphor in her book ‘How to handle qualitative data’ for understanding the difference between data description and interpretation:
‘Somebody’s dead, they were shot and there’s a gun on the ground’ is the beginning of the detective’s questions. We hardly expect the enquiry to end with the facts of a dead body and discarded gun.
In her book ‘How to write a journal article in 12 weeks‘, Wendy Laura Belcher draws on a similar metaphor when talking about making arguments in papers:
Present evidence that supports your case, cross-examine evidence that doesn’t support your case, ignore evidence that is irrelevant to your case, and make sure the jury always knows whom you are accusing of what and why.
So, write the lawyer’s brief, not the detective’s report. Can you imagine a prosecuting barrister standing up in front of the judge and jury and not telling them who they think did it? Or, to return to my topic, holding the punchline for the end? I would speculate that if you give the argument up front, the audience starts doing some of the work for you – they know where you’re going, so they are looking for the links. Hey, that’s four reasons to put your argument first.
Last piece of advice from Wendy Laura Belcher to relieve you of some argument making anxiety:
Arguments don’t need to be unassailable or bullet proof, just interesting.
The Indigenous Resiliency Project was part of the International Collaborative Indigenous Health Research Partnership (ID: 361621), a trilateral partnership between the National Health and Medical Research Council of Australia, the Canadian Institutes of Health Research, and the Health Research Council of New Zealand. There were parallel projects in Canada and New Zealand; together we aimed to examine the role of resilience in protecting Indigenous populations against sexually transmitted and blood-borne infections.
In the qualitative arm we conducted community-based participatory projects with two communities. Check out the findings below:
- Mooney-Somers, J, Olsen, A, Erick, W, Scott, R, Akee, A, & Maher, L (on behalf of the Indigenous Resiliency Project). (2011) Young Indigenous Australians’ sexually transmitted infection prevention practices: A Community-based Participatory Research project. Journal of Community and Applied Social Psychology, 12(6): 519-532.
- Mooney-Somers, J, Olsen, A, Erick, W, Scott, R, Akee, A, Kaldor, J, & Maher, L (on behalf of the Indigenous Resiliency Project). (2011) Learning from the past: young Indigenous people’s accounts of blood-borne viral and sexually transmitted infections as resilience narratives. Culture, Health & Sexuality, 13(2): 173-186.
- Mooney-Somers, J, Erick, W, Scott, R, Akee, A, Kaldor, J, & Maher, L (on behalf of the Indigenous Resiliency Project). (2009) Enhancing Aboriginal and Torres Strait Islander young people’s resilience to blood borne and sexually transmitted infections: Findings from a community-based participatory research project. Health Promotion Journal of Australia, 20(3):195-201.
- Mooney-Somers, J & Maher, L (2009) The Indigenous Resiliency Project: A worked example of community-based participatory research. NSW Public Health Bulletin, 20(7 & 8), 112–118.
In 2010 we used the qualitative work as the basis for community surveys. Again, researchers worked closely with community, in this case the Townsville Aboriginal and Torres Strait Islander Health Service. The cross-sectional survey covered location of usual residence, recent and past sexual activity, alcohol and other drug use, history of selected health outcomes and health service utilisation. We trained five young local Aboriginal and Torres Strait Islander people in research ethics and survey methodology. These peer researchers collected surveys from Aboriginal and/or Torres Strait Islander people aged 16 to 24 years at the Townsville Show, sporting events, shopping centres, a health service open day and a NAIDOC parade and community event.
Check out what we found: Scott, R, Foster, R, Oliver, L, Olsen, A, Mooney-Somers, J, Mathers, B, Micallef, J, Kaldor, J and Maher, L (accepted 22/10/2014). Sexual risk and health care seeking behaviour in young Aboriginal and Torres Strait Islander people in north Queensland. Sexual Health
Note: this is a pre-copyedited, author-produced PDF of the accepted article; I’ll post the link to the definitive publisher-authenticated version as soon as it is released.
I find something useful in every blog post Nick Hopwood writes – and a week later I usually realise it was two useful things. Lots of useful ideas here.
First up this is not just about PhD supervision, but supervision of research degrees, whether Masters, PhD, Professional Doctorates etc. PhD in the title is just a convenient shorthand.
One of the interesting things that has been going on where I work is ‘Learning2014’. This is UTS’ approach to changing teaching and learning across all our campuses (including the online ones) and disciplines. One of the features of this concerns ‘New Approaches’ to pedagogy, and within this, a key idea is ‘flipped learning’.
Flipped learning is gaining currency as a way to describe certain ideas about what might happen before a key pedagogical interaction, such as a lecture or tutorial. While the term feels relatively new, it builds on key ideas that have informed teaching and learning for a long time.
Admittedly, I was initially a little cynical (as I tend to be about most things)…