Discord is a major platform that is home to countless memecultures and communities.
But it is notoriously difficult to research, because chat servers are often private, ephemeral, and designed in ways that resist observation. There are also research ethics challenges in collecting and using data from chat servers, which are not typically considered public spaces in the way that (say) Instagram pages are.
How can we research memes, memecultures and meme communities on Discord? What are some tools and existing research around Discord research?
The Discord Academic Research Community (affectionately called the D/ARC) is here to equip and connect researchers studying Discord’s platform and cultures. We started this network because we believe Discord is a critically understudied platform and we really hope to change that! But we can’t do that without established methods, usable tools, and the support of community. Our goal is to kickstart critical investigations into Discord by circulating new tools, monitoring platform changes, and networking scholars.
D/ARC is a new research network and looks like it'll become a goldmine for materials related to Discord research. Their Discord server is naturally well organised and maintained too.
In my opinion there's a threshold above which a public Discord server raises no ethical considerations for meme collection different from those of a forum or a Facebook group. And the smallest servers are basically group chats. But even for the middle ground I think collection is fine if you're a participant in the community yourself.
I did my PhD research on vaporwave, a community in which I am actively involved, and I am very interested in the issue of research ethics and methodology for researching one's own community. It raises a lot of interesting questions about navigating the dual role as a researcher (who must be critical) and a community member (who is emotionally invested and has personal loyalties).
To resolve these matters in my own work I turned to co-design as a way of bringing community members into the inquiry process, so that the work was accountable to the participants' needs. I developed a collaborative project of mutual interest to the academy and the community, so that knowledge creation became a partnership in which community members critically reflected on their own experiences and learned about themselves. I wanted to do something that was not just about extracting data from the community to be analysed outside of its own context.
This is definitely not the only way to approach such methodological/ethical complexities, though, and I am interested in other people's approaches to these messy questions.
Getting consent from the server admins/mods to collect data seems like the obvious strategy to avoid trouble (con: it increases manual work, since it requires individual communication with the owners of every server). The next question is then the ethics of collecting data from regular users who may not know that their data is being collected. An obvious way to address this would be to have all existing and new users of the server be informed that their data is being collected on the server, and what data it is, and consent to this. This method presents a few challenges:
Admins might not want to implement this, either because of the effort required, or because fewer users will join the server if they know their data is being collected.
GDPR / data protection compliance (Can the data collected be used to identify anyone?):
This is relevant everywhere, but seems to apply differently to Discord than to conventional social media. Discord can offer more anonymity than public social media, but can also be home to more intimate groups, where real names may be used to refer to people, et cetera.
It's not obvious that the size of the server determines how private people are with each other: personal relationships, in which people refer to each other on a first-name basis, develop even on the biggest servers.
An option here could be to have a simple automatic opt-in/opt-out if the method is scraping.
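To make the opt-in/opt-out idea concrete, here is a minimal sketch of how a scraper's collection layer could enforce it, assuming the collector receives (user ID, message) pairs from whatever method is used. The class name and interface are hypothetical, not an existing tool; the salted hashing shown is one common pseudonymisation step for GDPR-style compliance, though note that salted hashes of stable IDs are pseudonymous, not anonymous.

```python
import hashlib


class OptInCollector:
    """Collect messages only from users who have explicitly opted in,
    storing a salted hash of the user ID rather than the ID itself."""

    def __init__(self, salt: str):
        self.salt = salt
        self.opted_in: set[str] = set()
        self.records: list[tuple[str, str]] = []

    def opt_in(self, user_id: str) -> None:
        self.opted_in.add(user_id)

    def opt_out(self, user_id: str) -> None:
        self.opted_in.discard(user_id)
        # Honour opt-out retroactively: drop anything already collected.
        pseudonym = self._pseudonymise(user_id)
        self.records = [(p, m) for p, m in self.records if p != pseudonym]

    def _pseudonymise(self, user_id: str) -> str:
        # Salted hash: the same user maps to the same pseudonym within this
        # study, but the raw ID is never stored in the dataset. With access
        # to the salt, IDs remain re-identifiable (pseudonymisation only).
        return hashlib.sha256((self.salt + user_id).encode()).hexdigest()[:12]

    def observe(self, user_id: str, message: str) -> None:
        # Messages from users who have not opted in are silently dropped.
        if user_id in self.opted_in:
            self.records.append((self._pseudonymise(user_id), message))
```

In a live bot this logic would hang off the message events of a library such as discord.py, with opt_in/opt_out wired to commands users can issue themselves, so consent is recorded per user rather than assumed server-wide.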
My initial thoughts; obviously more to dig into here. I went a bit off track and wrote about the legal side, which obvs never matches ethics 100%, but in the case of GDPR: if you are actually properly following GDPR-level rules while collecting data, it seems pretty hard to do it unethically.
A lot of the time, getting consent from the community is a non-starter (especially if it means getting consent from everyone on the server or the site).
GDPR was something of a crisis moment for web archivists. Consider also that much of web archival is not only done without consent (insofar as site owners, if they had their say, would arbitrarily restrict access to certain audiences), but weaponised: it's standard practice in political communities to share a copy uploaded to the Internet Archive instead of linking directly to the source, both to preserve "dirt" on opponents and to avoid giving clicks to platforms owned by companies they don't support.
As for complying with GDPR to the letter, big website owners seem to have quickly figured out that people just click through annoying popups, and that they don't really need to worry about it most of the time. I wonder if one way researchers are cutting through the tangle of ethical issues around privacy (and the "right to be forgotten") is to focus on "enemy combatants" as research subjects, so that they don't have to bother with getting consent and so on (e.g. aligning research priorities with the viability of researching public, anonymous, "toxic" platforms where "user consent" is deemed less coherent as a concept).