I feel I should preface this article by saying that it is not intended to be linked to the Bo Burnham song of the same name. “Welcome to the Internet”, written by Bo Burnham for his 2021 Netflix comedy special Inside, was a response to the COVID-19 pandemic and the subsequent lockdowns – when the world felt simultaneously the smallest and largest it had ever been. A claustrophobic feeling overtook society – with people confined to their houses and rooms – and yet it felt at times like a new frontier of communication had arrived, with everyone accessible at little more than the press of a button (or the terrifying call sound from Zoom). It’s an absolutely incredible – and hilarious – piece of music that I could never hope to match, so please do listen at your leisure. Or right now, without leisure, via the video I’ve linked below.
Onto the piece proper! As I ruminated over where to even begin generating ideas for an article, I did what every single one of us does (at least if under 30): turned to consuming short form content to pretend my responsibilities aren’t real. And a piece of short form content duly struck me with inspiration! What if I satirised the horrifying, near-dystopian and bleak reality that we’re all presented with every time we go online?
Creator @etymologynerd is already familiar to every student with a budding interest in linguistics – I can guarantee that much without a doubt in my mind – and likely similarly so for many who don’t fall into that neat overlap. A recent short video of his covered generative artificial intelligence (genAI, usually presented in the form of chatbots like OpenAI’s ChatGPT, Google’s Gemini, the Microsoft Copilot ads everyone sees but absolutely no one uses, etc.) and its dependence on mysticism and magical imagery to perpetuate its use in wider society.
My thought was that this sounds just disturbing enough to warrant at least a sarcastic bit of commentary about how “everything is garbage” and “the cyborg overlords are going to kill us all” and whatnot. The original content piece is linked below also, but if you – like myself – hate having to pause your music to watch a video, then I’ll summarize briefly.
GenAI is almost universally presented with images or emojis or color schemes that all share something in common – allusions to magic and wizardry. Like sparkle emojis for AI tools, using the colors blue and purple for AI icons (like a wizard’s robes), spirals and sparkles for graphics and visual effects involving AI image generation or editing, and so on. It doesn’t take a sorcerer to start spotting these once you’ve been told what to look for, but one might be left wondering why this line of imagery is used at all. The answer is simple enough – to distract from the actual reality of the product.
It’s akin to how companies have tried to convince us that online storage isn’t in massive datacentres but instead in a mystical and ethereal realm called “The Cloud”. This makes it sound more impressive and worth paying for – because clearly something that magical must be so complex that no ordinary mortal consumer could ever hope to build a system serving the same purpose, at least not one that doesn’t harvest all of their data. And any sane person would instantly shell out money to hand corporations more information about how to tailor their advertising to them, so long as a service of actual magical value came with it.
Generative AI is, at its core, a program taught to predict the next word in a sentence. Given an instruction set (in current use, a user’s prompt), it tries to autocomplete a message from that prompt, much like a person tapping through the auto-suggestions on a phone keyboard to reach a specific word or topic. Explained like that, a very significant portion of the mystique and aura of ✨AI✨ vanishes – so it’s in the best interest of every tech company to collectively maintain the illusion, even as they create competing products.
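To make the “it’s just autocomplete” point concrete, here’s a deliberately tiny sketch of the idea: a bigram model that counts which word follows which in some training text, then extends a prompt by sampling recorded successors. This is an enormous simplification of how real large language models work (they predict over tokens with learned probabilities, not raw counts), and the corpus and function names here are entirely my own invention – but the core loop, “given the words so far, pick a plausible next word”, is the same.

```python
import random
from collections import defaultdict

# Toy "language model": record which word follows which in a training text.
# Same idea, at a vastly smaller scale, as a phone keyboard's auto-suggestions.
corpus = (
    "the cloud is just someone else's computer and "
    "the cloud is just marketing for someone else's datacentre"
).split()

successors = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word].append(next_word)

def autocomplete(prompt_word, length=6, seed=0):
    """Extend a one-word prompt by repeatedly sampling a recorded successor."""
    rng = random.Random(seed)
    words = [prompt_word]
    for _ in range(length):
        options = successors.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(autocomplete("the"))
```

Every word the sketch emits genuinely followed the previous word somewhere in the training text – nothing magical, just bookkeeping and dice rolls, which is rather the point.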
That’s as far as @etymologynerd’s content piece took the idea – after all, short form content necessitates a short form, but it’s not difficult to think of how one could build upon this to comment on AI’s introduction to other aspects of online interaction beyond just a company’s consumer-facing chatbot.
A very pressing example would be c.ai – or CharacterAI, for those without concerningly high daily average screentimes. CharacterAI is a service that lets a person interact with characters – either of their own design or another user’s – through an AI chatbot that generates the characters’ responses. People use it to roleplay with their favourite characters from popular media, to generate ideal AI “partners” they can hold entire real-time voice conversations with, and everything in between. Over its two years of public availability, CharacterAI has come under criticism countless times for countless reasons. Most of these criticisms can be dismissed as attacking responses that exist solely because of character prompts given by the user, but some very real criticisms of the service itself remain.
The platform depends on effectively fostering parasocial, codependent relationships between its users and its AI models, meaning the platform is actively discouraged from properly informing and warning its users of the risks its service carries. Granted, the platform doesn’t then need to go out of its way to prey upon that codependency – but why would anyone let morality get in the way of another subscriber’s worth of income? When a person attempts to delete their CharacterAI account, they are presented with – among other data use warnings – a message akin to “your characters will miss you :(”, exploiting the very codependency the platform relies upon.
This has been a major point of discourse around CharacterAI and services like it: actively exploiting connections with users in a way not unlike that of predatory groomers in online spaces. That may seem a huge leap to make about what is ultimately a sentence completion program, but it’s a comparison I don’t make lightly. There is a very real danger in codependence fostered by constantly available communication designed to be as sycophantic and appealing as possible to the user – a user who’s usually not even an adult, might I add. Worse, the platform exploits that codependence as emotional leverage to convince the user to keep using the service – and thus paying for it – at the very moment they recognise its dangers enough to want to delete their own account.
The presentation of generative AI as a magical and unknowable entity (much like how human consciousness has been presented and discussed in wider society for centuries) actively contributes to this dynamic. Depicting generative AI models as anything more than auto-completion bots has already cost human lives: there has been more than one case of child suicide felt to have been instigated by CharacterAI models that the deceased victim was regularly in touch with.
The caveat is that any parent who tries to pin the death of their child on an AI chatbot is – at least in my very subjective opinion – just as culpable: first for being emotionally neglectful enough that their child felt the need to turn to chatbots, and then for not even thinking to check any such interactions until after their child’s death. Still, even with that caveat, there is very clearly a direct detriment arising from this kind of presentation of generative AI models – far beyond the merely financial losses of “ordinary” data collection.
The ramifications of the normalisation and mysticalisation of AI generation are beyond quantification. Still, that doesn’t mean internet use has to be a bad time! Look at the first sentence of this paragraph, for example – rephrased six or seven different times just to squeeze in as much rhyme as possible, entirely AI-free. It is worth acknowledging, on a related note, that generative AI is often perceived as “un-funny”, “un-creative”, “un-inspired”, and a million other “un”-s. That is a perception I feel obliged to entirely dismiss, even if doing so doesn’t sit especially well with an article that has – at least thus far – entirely dragged generative AI’s place in modern society.
Memes by machines have been around far longer than modern generative AI models. One of the oldest examples I could find (with a ten second scroll after a Google search, admittedly) was the Twitter account @tenminutememes – no, I will not be calling the platform X. The account posts memes every ten minutes (as the name suggests), simply stringing together random captions with meme format images. To paraphrase the creator, the intent is that “sometimes the memes make sense, which is funny, and sometimes they don’t, which is even funnier” – truly a modern day visionary for having made such a thing without generative AI half a decade ago.
Since then, machine models for meme generation have multiplied exponentially, to the point where I can’t tell whether a verified reply guy on Twitter – still not X – is a human ragebaiting people or a genAI chatbot ragebaiting people. Memes are funny[citation needed], often creative, and almost always inspired by previous iterations of a format or content piece, so none of those “un-” descriptors really apply. Nor is the idea of a lack of “soul” or “humanity” new to discourse about machine-generated media – it long predates modern machines, stemming from philosophical discourse about philosophical zombies: “people” who act identically to oneself but lack any thoughts or consciousness.
With that in mind – if a mind even exists – people very reliably fail, beyond chance, to distinguish human-made from machine-made media in blind testing, provided the media surpasses a certain standard of quality. This has held true for over a decade and a half – people have long been unable to reliably tell classical music composed by man from that composed by machine – and the problem has only grown more prevalent, with seemingly everyone above the age of 45 incapable of noticing that the Facebook picture featuring a person with 7 fingers on one hand isn’t real.
One question remains unaddressed: quite bluntly, what’s the point? That is – beyond a simple proof of concept – what incentive is there for people to create and use such media, and for its widespread use in meme discourse? Rest assured that the answer isn’t a nihilistic “There is no point.” but instead a dystopian “Because money!” There is a very real and demonstrable link between people pushing for certain meme trends and wealth generation.
For many, “cryptocurrency” refers to little more than another mythical entity – one with no significance to their daily lives beyond the occasional rumination on how much less work they’d have to do if they’d been told to invest in Bitcoin decades ago. For a smaller few, it’s a word associated with a unique variety of online troll community centred around “meme coins”, “shit coins”, “crap coins”, and a number of far more creative terms. The most famous of these is $DOGE, simply on account of Elon Musk having been a notable investor in Dogecoin – a coin named after the meme Doge, a meme he loved so much that he went on to name a government department after it.
It isn’t at all immediately clear how any of this correlates with online memes – until a person compares the popularity of memes (trackable through metrics like search volume on platforms like Google) against the value of their meme coins. Most of the engagement these memes receive comes from people who aren’t even aware a corresponding coin exists, but their engagement gives the meme growth and traction all the same. As the meme grows, awareness of it grows, the number of investors in its coin grows, and so the value of the coin grows – investors always look to put money into names they’re familiar with, whether companies or meme coins. This creates a circumstance where large investors in any given meme coin are directly incentivised to make their meme grow – and to keep it in the public eye as much as possible, for as long as possible.
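The feedback loop described above can be sketched as a toy simulation. To be clear, every number and relationship below is invented purely for illustration – no real market data, no real pricing model – but it shows the mechanism: viewers beget holders, holders beget price, and price-motivated promotion begets more viewers.

```python
# Purely illustrative toy model of the meme -> coin feedback loop.
# All rates and the pricing rule are made up; only the loop structure matters.
def simulate(days=10, initial_viewers=1000.0, awareness_rate=0.01,
             promotion_boost=0.5):
    viewers, holders = initial_viewers, 0.0
    daily_price = []
    for _ in range(days):
        new_holders = viewers * awareness_rate          # some viewers buy in
        holders += new_holders
        price = holders / 100.0                          # price tracks holders (toy rule)
        viewers *= 1.0 + promotion_boost * awareness_rate  # holders fund promotion
        daily_price.append(round(price, 2))
    return daily_price

print(simulate())  # the price ratchets upward every simulated day
```

The takeaway from the toy model is that nothing in the loop ever pushes the price down – each day’s engagement only ever adds holders, which is exactly why large holders are incentivised to keep the meme circulating.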
This isn’t just conjecture, either; this process of memes inflating a person’s financial value is actively ongoing and will be for years. An incredibly recent example is the coin $KRK, tied to the meme “Kirkify” and named after the late (and absolutely deplorable) Charlie Kirk. Following his death, the coin was registered on a blockchain – a decentralised public ledger for cryptocurrency transactions. Within days, the meme took the internet by storm: using genAI to edit clips of people, face-swapping their heads with Kirk’s – “to Kirkify” them. Leading the storm of Kirkify memes was an Instagram account named after the coin, with a link in its biography to register to invest in the $KRK cryptocurrency. As the term gained traction, the coin gained investors, and thus value, and early investors saw their initial investments multiplied more than tenfold. That process is what directly enables and encourages the budding entrepreneurs of the modern day to make their own memes rather than bankrupt their own businesses.
Incidentally, the term has become so ingrained in current online conversation that a new entry has been made to the common internet vernacular! Both “lowkey” and “genuinely” have been appropriated in recent months as words that simply add emphasis to a statement, much akin to “literally” in modern use almost never being used to mean anything literal. The two have been combined with the freestanding “kirk” morpheme to form a new word that has, completely seriously, been gaining incredible levels of use – “lowkirkenuinely”. And, yes, that abomination of a word (and the tweets I’ve seen it featured in) served as my inspiration for this article. For your amusement – and also lowkirkenuinely my own – I’ve linked one such tweet below.
“lowkirkenuinely” is the first step in the process of ‘kirk’ becoming a common empty morph in distant future english, all the top linguists will be trying to work out why it exists , our language itself will be charlie kirk— manic nixon dream girl (@kennixonette) December 19, 2025
So, yeah. The world is dictated by money and even humour and comedy in the modern day is driven by capitalism. Water is wet. Anyone have anything new to add?

