Māori experiences with online harmful or inappropriate content

The Classification Office released a report called “What we’re watching: New Zealanders’ views about what we see on screen and online”. This is an analysis of the Māori findings, with a conclusion identifying some possible solutions.

The report shows that 83% of New Zealanders are concerned about harmful or inappropriate content on social media, video-sharing sites or other websites. 53% of New Zealanders had seen online content that promotes or encourages harmful attitudes or behaviours, such as discrimination, terrorism or suicide.

Compared with other groups, it was more common for Māori participants to report seeing online content that promotes or encourages violence towards others due to characteristics like race, sexuality or gender, or violent extremism or terrorism. It was also more common for Māori and Pacific participants to see content promoting hatred or discrimination based on race, culture and religion.

It was also somewhat more common for Māori participants to have seen content promoting suicide or self-harm, and more common for younger Māori and NZ European participants to have seen content promoting suicide or eating disorders.

Again, Māori are over-represented in relation to online harm and negative effects, yet there are still no solutions to combat the abuse against Māori. Previous research has suggested that Māori women with a disability are more likely to suffer harm. The report would have done more justice to these findings if the demographics had been identified. As with other research into online harm, the generations of children immersed in te Ao Māori and te reo Māori have also not been considered.


Pacific participants (39%) were also more likely than Māori (32%), Asian (27%) or NZ Europeans (26%) to say they would be likely to report harmful or illegal online content to an official agency.


There is a substantial amount of research showing that Māori mistrust authorities and are less likely to report issues to them. This could have been broken down further by age, gender, economic status and location, such as rural/urban. It would also be beneficial to know the definition of “official agency” and which agencies Māori would be most likely to report harmful or illegal online content to.

Asked if they ‘feel I know enough to help my family/whānau stay safe online’, Māori participants (24%) were somewhat more likely than non-Māori to ‘strongly agree’, but overall agreement with this statement was similar for different groups.

Again, a more in-depth analysis would be beneficial to the report, to understand which groups of Māori this applies to.


Participants who had recently helped choose a movie, show or video game for a child or young person were asked about the importance of age ratings. Responses tended to be similar across different ethnic groups; however, Māori participants were more likely to think age ratings are very important (giving a score of 9 or 10 on the scale provided). When asked about choosing movies, shows or games for themselves, Asian (53%) and Pacific (48%) participants were more likely than Māori (36%) and NZ Europeans (30%) to think the age rating is important.

Asked about legal restrictions on what children and young people can watch, Māori participants were more likely to think age ratings on streaming services should be for guidance only (33%), and to think underage people should be able to watch a restricted movie in a cinema if accompanied by a parent or guardian (49%).

This reflects the common tikanga of tuakana/teina in Māori whānau and society, and that, as tuakana, we often have lived experiences that make us more resilient than non-Māori.


Most participants thought negative comments, behaviour or stereotypes about groups of people can be harmful. This includes content involving racism (91%), sexism (88%), and negative comments about gay, lesbian, bisexual or transgender people (85%).

It was common for participants to have seen online content that encourages some form of discrimination, with 40% selecting at least one of these options. This includes content that encourages ‘misogyny or sexist attitudes about women and girls’ (24%), and hate or discrimination based on things like ‘race, culture, or religion’ (31%) or ‘sexuality or gender’ (24%).

It is becoming common knowledge that misogynists, homophobes and racists find comfort in the anonymity of the Internet to spread their misguided views. Again, as with the lack of Māori representation in online protection frameworks, LGBTQIA+ communities are severely under-represented, if considered at all, in terms of online harm.


Participants tended to be significantly more concerned about content on social media, video-sharing sites or other websites, compared with commercial entertainment such as movies, shows and games.

Commercial entertainment such as movies, shows and games in New Zealand is covered by legislation to ensure viewers are protected and offered guidance when viewing that content. In contrast, social media and the Internet have very few protections and little legislation. Even the most heinous material can be freely published, shared and traded online with impunity until ad hoc, and often international, investigations are carried out.

This also reflects previous research that shows younger Māori would support some form of Internet censorship in New Zealand.


The need for more and better regulation or government action: The most common response was in support of government action and more effective regulation. Relatively few participants talked about age ratings or restrictions on content like movies or shows; rather, the pressing issue for most was social media and other online content. Some talked about tougher measures to hold tech companies to account, and others about legal requirements for online age restrictions.

Many lack confidence in reporting harmful content. Most New Zealanders (74%) would consider reporting online content that was harmful, dangerous or illegal to an official agency in New Zealand. However, results showed a high level of uncertainty about how to go about reporting such content, or what the response would be.

This appears to contradict the earlier statement about reporting online content to an official agency, or at least requires more explanation. If only 26% of New Zealanders have confidence in reporting online content, then the previous statement that “Pacific participants (39%) were also more likely than Māori (32%), Asian (27%) or NZ Europeans (26%) to say they would be likely to report harmful or illegal online content to an official agency” would suggest that these statistics come from a very small group of participants.

It also highlights the need to state how many Māori, Pasifika, Asian, Pākehā and others were interviewed to allow a deeper and better analysis of the issues.


In conclusion, there is a dire need to review the Harmful Digital Communications Act 2015, to create new authorities that reflect the marginalised communities (Māori) that are constantly affected by online harm and online harm convictions, and for an overhaul of all of the “official agencies”.

A new resource needs to be made publicly available for all New Zealanders who are impacted by harmful digital content. It should include a contact list of all of the “official agencies”, along with step-by-step questions about the harmful content so victims know the seriousness of the offending content and which agency to contact. A rough sketch of how such a resource could work follows.
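As an illustration only, the short Python sketch below maps a handful of step-by-step questions to a severity level and a suggested point of contact. The questions, severity labels and agency names here are illustrative assumptions, not an official referral pathway, and any real resource would need to be designed with the agencies and with Māori communities themselves.

# A minimal sketch of the kind of step-by-step triage resource proposed above.
# The questions, severity levels and agency names are illustrative assumptions
# only, not an official referral pathway.

QUESTIONS = [
    # (question, severity if answered "yes", suggested contact)
    ("Does the content threaten someone's physical safety or life?",
     "urgent", "NZ Police (dial 111 if there is immediate danger)"),
    ("Does the content involve child sexual exploitation or other objectionable material?",
     "urgent", "Department of Internal Affairs"),
    ("Is the content a harmful digital communication aimed at you or your whānau "
     "(harassment, threats, intimate images shared without consent)?",
     "serious", "Netsafe"),
    ("Is the content a movie, show or game you think is wrongly rated or unrestricted?",
     "moderate", "Classification Office"),
]


def triage(answers):
    """Given yes/no answers to each question in order, return the first match."""
    for (question, severity, contact), answer in zip(QUESTIONS, answers):
        if answer:
            return severity, contact
    return "low", "Netsafe (general advice on online harm)"


if __name__ == "__main__":
    # Example: the third question is answered "yes".
    severity, contact = triage([False, False, True, False])
    print(f"Severity: {severity}; contact: {contact}")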

DISCLAIMER: This post is the personal opinion of Dr Karaitiana Taiuru and is not reflective of the opinions of any organisation that Dr Karaitiana Taiuru is a member of or associates with, unless explicitly stated otherwise.
