Memes are vessels for all varieties of speech. As subjects for research, there's a natural affinity between a meme's impact and its research value: a meme that moves hearts and mobilizes people is an interesting phenomenon. This points to the necessity of studying memes tied to extremist movements such as ISIS, the white identity movements in settler-colonial states, and so on.
However, we are often participants in the same societies that meme spaces operate in. The act of analyzing a meme also disseminates it. If meme studies is to develop into a serious field, we have to start building an ethical framework for isolating and neutralizing memes that advocate positions antithetical to human liberation. With that in mind, here are a few questions I've been considering.
Should memes for fascist or extremist groups be analyzed?
Should the original text (image or otherwise) be included in the analysis?
What duty does the analyst have to address the thesis of the meme, in addition to analyzing its effectiveness or technique?
What heuristics should we use to identify fascist or extremist ideas that warrant extra care?
I have some thoughts but want to read others' before I share. Hopefully this is a contribution to this community's standards as well.
I think we absolutely need ethical theories of memes. Much of what I ended up focusing on for the past 7 years as I worked on meme studies from a memeculturalist perspective ended up being ethical and political theories of memes, since the mere act of analysing memes was already political.
There was a period in which a large portion of memeculturalists believed that academics and journalists talking about memes would kill the memes (i.e. destroy their meaning and functionality, as low-literacy users pollute the noospheric territory or the namespace of the memes). This paranoia about an ignorant out-group has given way to paranoia about fluent infiltrators that manipulate and sabotage the memecultures from within.
It feels like the discourse among academics has paralleled this shift, but I haven't verified that with research yet, and I am not an experienced academic with deep insider knowledge either (it's merely a hunch). Earlier Internet meme research sometimes focused on harmful users like trolls, but the focus has since shifted to the memes themselves (often produced and spread within the same platform or even the same social contexts).
At any rate, much of the contemporary meme research is about "hateful memes", namely bigoted or extremist memetic content with overt political messaging. The ethical positioning is clear: hateful memes are bad, and we should learn how influential they are and how that influence can be minimised. Discussion primarily about the ethics and metaethics of memes per se is not as prevalent.
I think this question is fairly straightforward to answer, but only because it also pertains to research on any kind of dangerous material, whether digital or not. There doesn't seem to be a satisfactory consensus around handling extremist materials and other toxic content in general. The central problems probably have very little to do with memes specifically, and more to do with how we understand and treat meaning and (technologically mediated) communication.
It's important and valuable to understand entities that have a negative impact on society such that they can be counteracted, unless there are additional harms produced by the research that outweigh the benefits of such research. Prima facie, this also applies to memes. The question thus seems to be whether memes are especially (and perhaps inherently) more susceptible to having the harms of the research itself outweighing the benefits:
The point being made by this question is primarily that the benefits of research into harmful memes may be outweighed by the fact that such research involves collecting, preserving, hosting, and disseminating the harmful memes being researched. I think that, generally, if the research is done well and doesn't misrepresent the memes in a way that undermines its credibility, presenting harmful memes within this context can act to neutralise them. That is by no means to say there isn't a tradeoff (e.g. a free image archive for extremists, and so on). But this kind of calculus applies to all manner of things, not memes specifically.
I would think unironic extremist ideas (like political philosophical writings) are much more straightforward, and generally safer, for academics to handle.
It's the variously ironic content, frequently part of metagames about spreading memes, that might be more challenging to handle. But that's becoming less of a concern as everybody learns more about irony and memes, and as the novelty and aesthetic cachet of spreading memes into unusual contexts has waned.
This is an evocative question. I think it would be good if researchers were to address the theses of harmful, extremist memes; people make countermemes, response videos, etc. all the time. But it's probably quite difficult for one researcher to do, say, the network analysis of a meme while also providing a philosophical critique of it, and so on, on their own. As discussed above, though, contextualising harmful memes within a paper that presents counteracting information is important for neutralising them.
That suggests to me that one implication of the question about harmful memes is that we ought to develop a new methodology that allows researchers to pool their efforts around the analysis of the same meme in a way that lets them supplement one another. An integrated database of memetic content and meme research could link works by separate researchers who commented on the same meme, using the memes themselves as the shared nodes.