The cost of action:
Large-scale, longitudinal quantitative research with AIDS-affected children in
South Africa
Cluver, L, with Boyes, M; Bustamam, A; Casale, M; Henderson, K; Kuo, K; Lane, T;
Latief, S; Latief, N; Mauchline, K; Meinck, F; Moshabela, M; Orkin, M; Sello, L.
Note: This is the authors’ version of a chapter that was published in Posel, D. & Ross,
F. (2015). Ethical quandaries in social research. Cape Town: HSRC Press. Changes
resulting from the publishing process, such as editing, corrections, structural
formatting, and other quality control mechanisms may not be reflected in this
document. Changes may have been made to this work since it was submitted for
publication.
Citation:
Cluver, L. D., Boyes, M. E., Bustamam, A., Casale, M., Henderson, K., ... Sello, L.
(2015). The cost of action: Large scale, longitudinal quantitative research with AIDS-
affected children in South Africa. In D. Posel & F. Ross (Eds.), Ethical quandaries in
social research. Cape Town: HSRC Press.
This chapter reflects some of the key ethical quandaries that emerged from two
longitudinal South African studies of the impacts of parental HIV/AIDS on children.
The first was a four-year study of 1025 AIDS-orphaned, other-orphaned and non-
orphaned children to identify long-term impacts of orphanhood. In 2005 all children
lived in the Cape Flats or on the streets of Cape Town, but by 2009 we followed them
up into three provinces, and interviewed them in prisons and hospitals. The second
study was a one-year national longitudinal study of 6000 children and 2600 of their
primary caregivers – comparing children orphaned, living with sick parents and in
healthy families to understand the impacts not just of death but also of parental
illness on children’s health, psychological, sexual and educational outcomes. In
2009-10 this involved random systematic sampling of census enumeration areas in the
Western Cape, KwaZulu-Natal and Mpumalanga, and in 2011-12 children were
followed up in seven provinces (Eastern Cape, Western Cape, Gauteng, Mpumalanga,
Limpopo, Free State and North West), and also into Mozambique and Swaziland. The
studies are explicitly aimed at helping to inform evidence-based policy-making, and
are conducted in collaboration with the South African National Departments of Social
Development, Health and Basic Education, with UNICEF, USAID, Save the Children
and the National Action Committee for Children Affected by AIDS, and with the
Universities of Oxford, KwaZulu-Natal, Cape Town and Witwatersrand. As the
Principal Investigator of both studies, I was responsible for the studies overall,
including the overarching ethical planning and conduct, and the day-to-day ethical
decisions made by the project’s fifty fieldworkers, fifteen data capturers, ten
managers and seven volunteers.
An SMS at 5am means that something has gone really wrong. And I don’t mean the
general logistical crises that come with a large-scale South African survey. With 80
staff in three provinces and 12,000 interviews with children and their families, these
logistical problems are plentiful: urban riots, floods, cars breaking down, threats from
gangs, an escaped family of lions in one of our field sites. These are the kinds of
challenges that you approach with strategy, innovation, and occasionally just sheer
panic. I remember walking into a rather stuffy scientific meeting in Geneva whilst on
the phone to a project manager in the Western Cape. ‘Real bullets or rubber bullets?’ I
was asking as I opened the door. The room went silent. ‘Screw the sampling strategy –
get yourselves the hell out of there!’ I concluded. Someone dropped their croissant.
But the 5am problems are qualitatively different. These are the ones that keep our
team up at night. They rarely have a clear or practical solution, and can’t be addressed
by shifting money between budget lines. This one read: ‘CHILD RESPONDENT
DYING OF AIDS. MOTHER IN DENIAL REFUSES TO TAKE HER TO
HOSPITAL. CAN WE TAKE CHILD WITHOUT MOTHER’S CONSENT?’ I
cancel all my meetings for that day and book a flight to a rural township.
The science of evidence-based social intervention requires that we understand certain
things about a problem before we can determine the most effective way to intervene.
We need to know whether a social problem exists (in this case, whether AIDS-
affected children have worse mental health, sexual health and exposure to infectious
diseases than other children); the extent of that problem (for example the proportion
of children who have contracted tuberculosis whilst looking after their sick parents),
the mechanisms or mediators by which negative outcomes occur (for example, that
parental death leads to children’s sexual risk via increased depression and poverty)
and whether a problem has differential impacts on population sub-groups (for
example, whether extreme poverty is a greater risk for transactional sex amongst girls
than boys). In order to design interventions that are acceptable and culturally relevant,
we also need to understand how the affected community perceive the problem and its
causes (Kleinman & Good, 1985). Whilst the last requirement is usually approached
through qualitative work, the first four are often addressed through quantitative
surveys.
Two overarching ethical principles of evidence-based practice are, first, to do no harm
and, second, to identify the most effective interventions to improve wellbeing
(McCord, 2003). Intervening without understanding and considering the dynamics of
a problem can violate both of these principles, risking the enactment of interventions
that are useless, or even harmful. For example, child labor deterrence legislation in
the 1990s aimed to close down child sweatshops and reduce child labor in the
developing world. However, a 1997 UNICEF report showed that this approach had
misunderstood the complex dynamics of child labor within contexts of severe family
poverty. Children were still going out to work, but instead of doing this within
sweatshops, they were forced into illegal and highly dangerous work such as stone-
crushing and sex work (UNICEF, 1997).
Robust and reliable epidemiological science is thus part of this ethical imperative. The
understanding of social problems requires that research is designed in such a way that
it can reliably determine causality. Bad science can and does lead to bad interventions
(Goldacre, 2008). Some causality can be identified through cross-sectional research,
for example, child depression is very unlikely to lead to parental AIDS-death and so
we may assume – if we have checked that it is not due to other related factors – that
the parental death is leading to the child’s depression (Cluver, Gardner, & Operario,
2007). However, many processes need longitudinal research for reliable determination
of causality. For example, a child’s depression and his or her exposure to AIDS-
related stigma could be bi-directional, but panel data can determine that stigma
increases depression at a later time, regardless of earlier depression levels (Boyes &
Cluver, 2013). Of course, longitudinal research thus requires a non-invasive approach
if it is to truly track the natural progression and changes in peoples’ lives. By
understanding the dynamics of the impacts of parental AIDS on children, we can
identify the modifiable risk and protective factors that can be targeted by
interventions (Cluver et al., 2013).
But in our studies, the ethical imperative of the provision of unbiased research data
frequently collided head-on with more immediate realities. We intentionally designed
our research to combine qualitative and quantitative elements, with questionnaires
designed like teen magazines and staff chosen for their empathy and love of children.
These choices had unintended consequences. I can recall with clarity the feeling of
the straw mat that we sat on, one day outside a rural house in a clearing. We had been
told that a child was unable to be interviewed for the study as she was mute. But she
seemed to listen and to be interested and indicated that she wanted us to come back –
we thought perhaps she could write some answers or we could use sign language. On
our second visit she spoke aloud for the first time in five years. She had been raped by
her stepfather and wanted us to know, and to help her.
For many of the children we have interviewed, the opportunity to discuss their lives
with a concerned, motherly and interested adult is a rare experience. Participants
revealed situations that they had never told anyone else: abuse, murder, suicidality,
abandonment, beatings, drugs and crime: children growing up exposed to the worst of
our society. We had agonized in our ethics applications about what we would do
when a child disclosed that they were in danger but demanded secrecy or refused to
allow us to take action, but in more than 700 referral cases that situation never arose.
Instead, children and their caregivers repeatedly took the research as an opportunity to
ask for help in desperate situations.
In theory this presented a clear and stark ethical dilemma. Should we, as a research
study, intervene in the lives of children at severe risk of harm? By intervening, we
risked damaging the fundamental ethical aims of evidence-based intervention by
providing longitudinal evidence that had been interfered with, or biased, by the
research study itself. By not intervening, we contravened legal and ethical
requirements, as well as professional ones. As a social worker I am required to report
any disclosure or suspicion of abuse of a child and the Children’s Act is clear that any
child at risk of significant harm should be reported to the relevant supportive services
(Government of South Africa, 1993, 2005). But perhaps more fundamentally, the
prospect of not intervening was impossible on an individual level. Humanity
precluded watching children suffer and recording it over repeated visits. The
question that came up over and over again as we held our team meetings in schools, NGO
offices and once – memorably – in the back garden of a shebeen, was not whether to
intervene, but how to intervene.
**
Early on in fieldwork we had abandoned the idea of our research project functioning
as an objective, non-invasive observer. But at this point our ethical protocols – passed
by the University of Cape Town, University of KwaZulu-Natal, Oxford University
and six provincial Departments of Health and Education – still served us well. We
followed guidelines set out by UNICEF (Dawes, Bray, Kvalsvig, & Richter, 2007),
the Children’s Institute (ACESS, 2002) and the Medical Research Council (Seedat,
Nyamai, Njenga, Vythilingum, & Stein, 2004; Seedat, van Nood, Vythilingum, Stein,
& Kaminer, 2000). These promised confidentiality except when a child is shown
through the research to be at risk of harm. In cases of risk, and with the consent of the
child, they would be referred to organisations that can provide assistance
1
.
We made hundreds of detailed written referrals to social services, NGOs and clinics.
We followed up with phone calls. We met with provincial ministries of social
development, chieftains, NGOs, police, social workers, and schools. We developed
lists of contacts at NGOs and state services. We juggled the research budgets to allow
for extra staff, extra time, extra fuel. We were intervening to the extent that we were
alerting existing services to the needs of children who might otherwise have not been
identified.
However, these services rarely arrived. Our teams visited clinics and social services to
find out why referrals had never been taken up. They discovered overwhelmed,
exhausted staff who lacked the transport, resources and support to be able to reach
even a fraction of their caseloads. What were the implications of making referrals to
services that were never going to be able to follow up? We began demanding action,
taking the social workers to the children, making our own chains of referrals –
identifying functioning services, driving children long distances to make
appointments at clinics or to domestic violence shelters.
On a baking hot afternoon in 2011, I was in one field site when our team from another
provincial site called. Our interview had identified that a child who was caring for his
tuberculosis-unwell mother had all the symptoms of severe pulmonary TB, had lost
more than half of his bodyweight and was extremely weak. Our project managers had
taken the child to the clinic for testing, and returned to take him again for the results.
The clinic told them that the child had tested negative for tuberculosis and should go
home. They had a dilemma: the health services were clear that they had no role to
play, but the child was coughing blood, and they had a strong suspicion that the clinic
had indeed lost the sputum and blood samples. My team wanted to drive him to the
closest hospital, which was four hours away. We barely hesitated: the child sat in the
back of the bakkie with the windows open, the hospital immediately admitted him as
an inpatient and said that a few more days would have been too late. We lost a day of
fieldwork, risked the health of our own staff, and earned the hatred of the staff of that
particular clinic by circumventing their authority and their role in healthcare. We also
– probably – saved a child’s life. But it raised further questions about our ethical
obligations. It was clear to us – and in legislation – that we had a duty to refer
children to the appropriate services. But was it our right or obligation to circumvent
those services where we believed that they were unable to reduce harm for a child?
Could this have been ethically essential on one level, but damaging on another?

¹ We had – intentionally – left the definition of ‘risk of harm’ unclear. Child protection
legislation usually uses the concept of ‘risk of significant harm’ to a child’s health,
education or development, and relies on the judgement of professionals to define this,
with court arbitration where families or children disagree with professional decisions.
In the course of our research, I don’t think we ever questioned the boundaries and
definitions of what harm, or risk, really meant. Perhaps because the harm was so
apparent, or perhaps because we couldn’t address anything but the worst of it.
Our strategy of hiring staff who cared deeply about children had another, unintended
consequence: we all got personally involved. One project manager – the sender of the
first SMS in this chapter – repeatedly saved a child’s life when her mother refused to
take her for HIV-testing, then the clinic refused to test her, then there was no family
member able to give her treatment. She took the child to hospital, bought food parcels
from her student stipend so that she could take her anti-retroviral medication, called
her twice a day to keep her adherent, and arranged a foster carer for her when the
project ended. I would come on field visits and find our field staff visiting children in
prison, teaching them about where to get condoms and how to use them, taking them
to church when their parents had died and there was nobody to go with them, and –
every day – pulling out their boxes of tissues, putting their arms around children, and
crying with them. But in the writing of this chapter – and only now do I reflect on this
– I wonder whether this really was an ‘unintended’ consequence. I had consciously
designed a selection process at management and fieldwork levels that prioritized
empathy, experience working with AIDS-affected children and kindness, rather than
research experience or qualifications. I had – if I am honest – hired potential social
workers rather than researchers. By doing this, had I precluded the chance that we
would ever have made a team decision to prioritise scientific over humanitarian
principles? They say that the mind plays tricks on us. I think the heart plays its
own tricks.
As a social worker, I now acted as a supervisor whilst our project managers did
casework that they had never anticipated. We advertised for volunteers to work with
us on the project and to focus just on support services for children. We bargained with
services – offering to provide counseling and support in exchange for them taking on
another of our cases. I found myself arguing with clinic staff, taking HIV tests with
children to show them that I wasn’t scared (I was), counseling rape victims in fields,
and waking up at 4am whether my cellphone had beeped or not. For these children,
we had irrevocably crossed the line from research to intervention.
**
Evidence-based practice is not against intervention – far from it. But social
intervention research requires that interventions be rigorously evaluated, either in
randomized controlled trials or in quasi-experimental designs. Implementation
fidelity, planned cultural adaptation and the comparison of intervention groups with
wait-list controls allow the least possible bias into evaluation designs.
What we were doing could not in any way claim to be an evaluation of these
interventions, or of the local services provided. We were not doing intervention
research. But were we, by intervening at all, destroying the inherent value of our
epidemiological research? Of course, we will never know the answer to that question,
but to my mixed sadness, I suspect not. We kept careful records of all children that we
had referred to services or to whom we had given support, so that we could control for
this in data analysis. But these records revealed that the extent of the effects of our
interventions was constrained by two major factors. The first was the overwhelmingly
low levels of follow-up of referrals by state services, and the second was our limited
capacity. In one – truly awful – case, a teenaged girl had been repeatedly raped by her
neighbor throughout her childhood. She was failing at school and suicidal. She asked
for our help in pressing charges, and after a long process, and when evidence emerged
that he had raped multiple other children, he was convicted and imprisoned. The girl
passed her school grade with flying colours and planned to apply to university to read
medicine. Two months later, the rapist was released: the prisons were too full to hold
all those incarcerated there. He returned to live opposite the child, who became
suicidal again. I think of that case with frustration, sorrow and sheer rage. Sometimes
you can do everything, and it isn’t enough.
We learnt fast that only the most serious of cases were likely to receive a response
from overburdened services, and so our referrals reflected only those most in need.
And despite everything, only a tiny fraction of the children that we referred ever got
help, and an even smaller number of those received sustained help. We pored over
lists of children, trying to work out which were the most desperately in need so that
we could focus our resources. We hired our research team for extra months to
transport children to hospitals and services. But in the end we were a tiny group of
researchers facing an unprecedented level of need. And all the time we knew that we
were only there for a short time, but that these kids were there for life. Did we have
significant impacts on a small number of children? I’d like to think so. But was this
enough to skew a national sample? Probably not.
Perhaps the real question, though, is not what actually happened, but the decisions that
lay behind it. What if I could have chosen to remove the problems of every child in
that study who needed our help – all the thousands of them who asked for our
assistance? In the process of removing every problem I would have completely
destroyed the validity of our epidemiological research for informing children’s
services – our research findings would no longer reflect the reality of children’s lives
and the real-world impacts of family HIV/AIDS on their health and development. I
would have broken chains of commitment to our funders to provide reliable research,
and to the wider group of children that this research could benefit. But – knowing all
this – would I still have done so? I honestly don’t know.
**
One afternoon in 2010, a fieldworker in our Mpumalanga site was mugged whilst
interviewing a child. Her bag was stolen from her, with all the day’s completed
questionnaires. Three female research fieldworkers chased the mugger to the end of
the village, tackled him to the ground and made a citizen’s arrest. The message sent to
my phone concluded: ‘QUESTIONNAIRES FINE. NOT SURE ABOUT MUGGER’.
This was probably the only time that the question of staff safety made me laugh.
The ethics of a major fieldwork study didn’t end at the participants – the safety of our
staff at work was an ongoing fear. Random sampling methods meant that our research
sites included areas rife with violent crime – over 90% of the children in our study had
seen someone being shot or stabbed. We spoke to children living in gang-run homes
and drug dens. We had interviews that had to be stopped for the participants to lie flat
on the floor whilst a rival gang pointed guns through the windows, and one
particularly memorable interview where, halfway through, the house was raided by
armed police officers and everyone arrested. Our research assistant reported that the
police apologized for the interruption and called her ma’am.
And it presented new problems: when I was running my own field team, I felt
somehow that it was my decision to be there, and that I could weigh up the potential
risks. But now I was sending my students and teams of local interviewers to work in
areas that we knew were potentially dangerous. Of course, we followed University
policies, wrote endless health and safety forms and sent people on training courses,
but I don’t think I had properly thought through the power and economic relations
between a university teacher, their students and local staff. Could people really say no
to fieldwork when their career or their livelihood might depend on it? To what extent
were we responsible for their safety? And was this different for those who lived in the
areas in which they worked, and those who were from other areas?
We tried to buffer the risks – providing panic alarms, travelling in pairs, writing
collaborative safety protocols – but the potential dangers were greater than these
measures. The studies expanded as political violence erupted: anti-foreigner riots,
protests against lack of services, election tensions. We cancelled field sites when they
became too dangerous for staff to enter, and had to move to the next census area in
the randomized lists. We started to use less orthodox methods of keeping staff safe:
‘community guides’ who were well-connected or former freedom fighters; enlisting
the protection of police, chieftains and pastors. I sat with every project manager and
taught them how to spot typical patterns of hijackers, which had been taught to me by
an ex-SAS officer – a truly honorable man who had spent days teaching a young
social worker how to keep safe. We developed ‘panic codes’ – a telephone call that
would alert project managers to send help immediately, and I spent an instructive
afternoon with a former gang member to find out what were the most desirable
vehicles for carjacking (helpfully, he had photos on his phone). And – probably least
usefully of all – we all worried. All the time.
**
It wasn’t only physical safety that was a concern for our teams. Staff started having
trouble sleeping, unable to stop thinking about children they had met. One said to me
‘I asked myself the questions on our form – the one for children with post-traumatic
stress – and I answered yes to all of them’. But telling me was rare, and this was part
of the problem: I realize now that our staff felt that they needed to stay strong and not
complain. But the strain of seeing death, illness and abuse on a daily basis manifested
itself in sadness, irritability and fear. Our project managers, who were often studying
for degrees, would work days, nights and weekends to do the extra referrals, to follow
up just one more child, and we had to introduce a strict system of enforced holidays to
limit burnout. But for our field staff, the experiences of children in the study were
often much closer to home. For almost all of them – as for almost all South Africans –
HIV was an active presence in their families. But their daily work was forcing them to
see the most severe consequences of HIV and AIDS: children sobbing that they
missed their parents, girls pregnant by their sugar-daddies, siblings thrown out of their
family homes by relatives. The very things that families and societies often need to
keep as secrets to maintain social acceptability were being revealed. And for staff, it
was impossible not to translate this to their own lives. Perhaps this was hardest of all
for the several staff who, as the research progressed, tested HIV-positive
themselves. One said to me as we drove to the clinic: ‘Every child I interview, I see
my children. If this thing kills me, will they suffer like this?’
To what extent were we responsible for our staff? It was always
difficult to know how to balance the need to provide support to them, whilst
maintaining our roles as managers, within the context of a research study with a
shoestring budget and massive targets. We were a group of PhD students and a tiny
support staff: myself and one heroic postdoctoral researcher, who quickly became an
indispensable co-PI. The local teams became close-knit groups who would share their
illness, depression, and their challenges at home. Team managers would schedule
weekly telephone calls with our postdoctoral officer ‘just to get things off my chest’.
We skated an uneasy line between management, friendship and paternalism. We tried
to think of ways to help staff deal with the ongoing emotional burden of their work.
We offered the services of a psychologist and reflexologist – these were rejected in
one province and warmly welcomed in another. More popular was giving the teams
opportunities to celebrate successes and to support each other: field trips, parties,
cakes for team meetings, and the occasional team Kentucky Fried Chicken. It was
clear, however, that we were dealing at best with the symptoms of the very real
emotional challenges that this research entailed.
On reflection, I wonder if much of the blame for this rested with me. Throughout, I
tried to be the problem-solver, always calm, never upset. I wanted the teams to feel
that if everything went wrong – and it certainly did at times – I would take the
responsibility for sorting it out. But did this create an atmosphere where, at each level
of management and local field staff, we all pretended to each other that we could
cope? I remember with absolute clarity the public services strike of 2011. One of our
hospital offices was barricaded by protestors, and we asked the team if they wanted
to wait until everything had calmed down. Instead our project managers rescued forty
boxes of data at midnight, and the study continued.
But over and over again, as we visited rural homes, we would find people lying in
beds in the back of the room, smelling of death. We would offer to take them to
hospital, and their relatives would tell them that they had been inpatients, but that the
hospitals had closed down and sent everyone home. The team kept on going, the
people kept on dying, and I left to go to a government meeting in Pretoria. Whilst I
was there I attended a religious service for the Jewish High Holy Days. As I looked out
over the rows of hats and prayers, a woman touched my arm, and I realized that I had
been crying for the past hour. I never told my colleagues – I thought my weakness
would be unhelpful, and besides, I had left and they were still there, visiting the
dying. But perhaps I should have.
***
This winter, the last child was interviewed, the last questionnaire checked, the
datasets created. To an extent, the individual life stories of these children have
become rows of thousands upon thousands of numbers: an attempt to quantify the
experiences of a generation. And the numbers reveal their own stories: children in
AIDS-affected families have double the risks of depression, anxiety, and suicidality
(Cluver, 2011). They have a threefold risk of child abuse and a twofold risk of being
stigmatized. Parental AIDS can trigger a set of social consequences that have long-
term and major impacts on children’s life chances: a girl exposed to parental AIDS,
abuse and hunger has a fifty-seven times higher risk of transactional sex than a girl
who does not (Cluver, Orkin, Boyes, Gardner, & Meinck, 2011). The numbers also
unravel into stories of hope: if those same girls get a child support grant or foster care
grant, that risk of transactional sex can halve.
This led us to another ethical need that we had under-anticipated: the responsibility to
ensure that these findings have impact. Our ethical protocols, written before the study
began, are full of detail about participant confidentiality, anonymity and privacy. But
– beyond the need to avoid stigma in their immediate community – children and their
families were explicit that they expected us to tell the government what they were
experiencing. They wanted the people in charge to know. And over ten thousand
participants trusted us enough that they told us some of the most intimate
details of their lives: their lovers, the deaths of their family members, their feelings of
despair. I remember watching a new interviewer in the rural Western Cape talk a child
through the information and consent forms. The child asked ‘what will you do with
my answers?’ and the interviewer pointed at me – she was a little nervous and didn’t
yet know that I spoke some Xhosa. ‘See that lady there?’ she said, ‘She will speak to
lots of other children and then she will go and tell the President what you said’. Just
as I started frantically scribbling in my notebook to make sure they stuck to our
carefully worded description of general government engagement and avoided any
suggestion that their personal responses would be reported, the child replied. ‘Can she
ask the President to make my mother able to get out of bed?’
And so we peddle our findings. The NGOs and government departments ask us
questions, and we ask the datasets for answers. I’ve yet to speak to the President, but
his senior civil servants are actively seeking the findings of our research to help with
their policy-making. We present at government policy forums; in small, closed
meetings of politicians; to UNICEF and WHO and Save the Children and USAID-
PEPFAR. We go back to community meetings, to chieftains, to local hospitals and
schools, and explain what these numbers mean for their province. I give the same talk
so often that it becomes a blur. I see my students give the same talk, but better than
me. Sometimes it feels a little like being a very minor band on an economy-class
world tour: Addis, Dar, Washington, New York, Lagos, Basel, Kampala. We give
newspaper interviews and radio interviews and podcasts; we spend three months on a
statistical analysis and then simplify it all to a single slide.
In many ways it is incredible that the research is being used – in national policies, in
international programmes, in determining funding decisions. But each time I know
that this engagement with policy is another trade-off in the zero-sum game that has
become my timetable. I remind myself that I am hired as an academic – a lecturer and
a scientist. But every set of three-day government meetings is a peer-reviewed paper
not written. Every evening spent commenting on draft legislation is a student’s thesis
that I mark on the bus instead. I am sometimes amazed at the tolerance of my
University colleagues who arrange my lectures and examination meetings around my
travel, and my students who have their tutorials over Skype as I sit at airport boarding
gates. But it raises the question of to whom we owe our loyalty and our time most: to
our research participants, to policy-makers, or to our students?
When I read qualitative research, I always dwell on two key aspects – the discussion
of ethical issues and the researcher’s reflexivity about how their own characteristics,
views or relationships may have affected the process. It strikes me over and again that
quantitative and qualitative work face so many of the same ethical problems and
dilemmas when conducting research with vulnerable populations. And in many ways
the structural approach of qualitative research reporting has something of great value
to teach the quantitative world: that a thoughtful and critical examination of our own
influence on the research process and findings is an important part of understanding
the results of that research. Of course, researchers themselves may not be the best
people to provide that critique – there is now compelling evidence of our human
capacity to justify almost any of our actions – but it is a good start.
And this raises another question. When I sent this draft chapter to the book’s editors,
they asked a question: what were the responses of other scientists, in the
quantitatively-driven world of HIV research, to the way we had responded to the
ethical dilemmas of the study? And I realized that I’m not sure if I can answer that
question. My close colleagues in the field of children and HIV research all know what
we are doing – and some of them have been asked for urgent help with particular
cases and have unfailingly been supportive – but I rarely mention it in scientific
meetings, and never in publications beyond a single, standard section in our published
papers reporting ethical processes and referrals to services where needed. No reviewer
has ever questioned, criticized, or even commented on it. But
I do recall a meeting I had many years ago, when I was planning my Master's
research. I went to see Professor Lorraine Sherr – one of the most senior researchers
in this field of HIV research. She looked at my questionnaire, and suggested that
when we asked the children about the death of their parents, we give them an
opportunity to draw a picture or write a message – essentially to introduce a
therapeutic element into the research. The sentiments behind that advice still guide us
now. My colleagues may be an unusual or unrepresentative group within the wider
field of HIV: research with children is less glamorous, less well-funded and less
prestigious than other aspects of the disease, and many have come to the field from
backgrounds of clinical or community work. But I wonder why, at international AIDS
conferences or meetings, I don't mention it. There is a reason for every inaction as
well as every action – and perhaps I prefer a conspiracy of acceptance to the potential
for an active debate.
When I look back on those two studies, sometimes I’m amazed we made it through.
At times it felt like a practical, financial and ethical minefield, a stumbling attempt to
balance the needs of individuals with the needs of science that will feed back into the
needs of the community. We knew that a step in the wrong direction could be
disastrous, but the right and wrong directions were not always clear. However, our
experiences can be of value in identifying the next set of research needs. The
fourteen-year-old girl who couldn't manage to take her medication every day has led to our
next study: a qualitative and quantitative linked project on how to help teenagers
adhere to their antiretrovirals and access sexual and reproductive healthcare. The grim
process of trying to respond to hundreds of disclosures of child abuse has led to
another project: testing a child abuse prevention programme in the Eastern Cape. We
know that both of these projects will bring further major ethical challenges: HIV,
children, abuse and sex are a volatile combination. But equally, can we know that
these things are happening and not try to do anything about them? And so the risk
also carries great potential: to develop and evaluate effective interventions to improve
children’s lives. As Karl Popper argued: ‘We should [have] a more modest and
realistic principle, that the fight against avoidable misery should be a recognized aim
of public policy’. And there is one thing that I know: whilst each action has had a
cost, it is the inactions that I truly regret. At least I think I know.
Acknowledgements:
These studies were funded by the Nuffield Foundation, the UK Economic and Social
Research Council, the South African National Research Foundation, the Health
Economics AIDS Division at the University of KwaZulu-Natal, the South African
National Department of Social Development, the Claude Leon Foundation and the
John Fell Fund. We would like to thank the Regional Interagency Task Team for
Children and AIDS Eastern and Southern Africa and USAID-PEPFAR for supporting
data analyses. We wish to thank our fieldwork teams in the Western Cape, KwaZulu-
Natal and Mpumalanga for their unfailing hard work in often very difficult
circumstances. Most importantly, we thank more than ten thousand brave participants
and their families.