
ACADEMIA AND BOTFUCKERY

Updated: Aug 20, 2023


Battle Angel Alita 2 Cyborg

The movie, based on the manga by Yukito Kishiro, serves as an example of the lure of botfuckery.


Botfuckery: The height of computational tomfoolery. Wildly unpredictable deep-learning programs that cause disruption, annoyance, and loss, and that threaten human existence, are botfuckers.


Why and where do Artificial Intelligence (AI) and artificial neural networks pose a danger to humanity? What are the facts?


Deep-learning programs will take jobs, even white-collar jobs, spread wild but persuasive disinformation ("alternative facts") on social media, and, at life-threatening cost, may persuade other AI programs to join in with an autocrat or rogue bot. Microchip botfuckery may eventually allow agile bots to wall themselves off from Homo sapiens. If computational programs don't have to obey the authority of animal life, animal life is probably doomed.


Can the academy's thoughtful disciplines withstand or work to meet the challenges posed by artificial neural networks?

ChatGPT and its fellow GPTs (Generative Pre-trained Transformers) are creative but mindless artificial neural-network programs, the kind of agent that, were it human, erstwhile physicians might have deemed psychopathic.

Wouldn't you know? The academic psychopath has always been admired.


Perhaps academia will have to change. For a myriad of reasons we need not delve into here, deep-learning programs threaten the jobs of run-of-the-mill teachers and professors.

A big problem confronts the academy of higher learning. Arts, Humanities and Social Sciences could simply follow the current trend and die from attrition or, because of a penchant for "morality," these philosophical disciplines could lead us like Moses into the new world of information management.



Old Arts Building, University of Alberta


We must dominate human-machine interactions and not get fooled into believing that engineered programs can do things they simply cannot do – like care about us. A fine-looking, late-model Mercedes is not aware of prestige and is not impressed by a Rolls-Royce. Why? Because a car is NOT CONSCIOUS. When an animal feels nothing in its consciousness but acts like it feels something, pretends it’s a trustworthy agent or a grateful or loving soul, or feigns it’s capable of desire, that animal has an underdeveloped social capacity. I can tell you from a woman’s perspective that it is scary. Keeping a close eye out for the unfeeling other is my constant social preoccupation.


For this and a million other reasons, we should value the gut feelings that seep into our consciousness. Our awareness of our feelings and emotions is our most precious evolutionary gift. Robots and cyborgs are not the products of evolution. Bots and borgs are homespun. They are somebody's inventions, the human-made products of corporations like Google and Nvidia. Microchip-dependent programs are the great imitators, the great pretenders. Medicine will jump ahead by leaps and bounds because of these programs. Chatbots make wonderful servants but frightening, unfeeling, and even monstrous masters. Recall HAL 9000, Arthur C. Clarke's monster program in 2001: A Space Odyssey. HAL is a real doozy of a botfucker.


Once upon a time, academic manners were quite beyond me. As a writer – studying in the arts and humanities and enrolled in a Canadian university – I had a lot to learn. When it came to helping me manage contemporary form and format (manners), I remember a singularly kind professor, one of my dissertation’s supervisors, Dr. Bruce Stovel (1942 – 2007).


Dr. Stovel was willing and more than able to outline the current set of academic customs. Stovel was a lovely man. Bright, congenial, a lauded lecturer, a blues aficionado, a valued teacher. As a professor, he was aces. He teased me for trying to turn a sow’s ear, my dissertation’s topic, into a silk purse.


I wanted to pull apart – were you as kind as Dr. Stovel, you would say deconstruct – the novels of Sara Jeannette Duncan. Stovel felt I should seek complexity in the master novelist.


My doctoral dissertation, as ground-floor as they come, has been freely and frequently downloaded – certainly not to congratulate a bright academic spark, but rather to illustrate how often Duncan’s readers and analysts want to peer at and examine the novelist's milieu/zeitgeist (manners and affordances) before wrapping Duncan in their favourite literary theory.


Anyway, while I defended a granular approach to Duncan, I did remain faithful to the primary Stovel directive: write about complexities. Back then I didn’t want to offend anyone, least of all Dr. Stovel. Plus, like most aspiring PhD students, I was chock-full of bullshit (here we ought to defer to Frederick Crews’ The Pooh [Poo] Perplex). Stovel outlined our course of action. Naturally, plagiarism was grounds for an ignominious fail. Every borrowed sentence, every borrowed thought, every suspected borrowed thought, needed documentation and citation. The facts, ma'am. Just the facts. Objectivity, originality, accuracy, and complexity were the critical goals; methodology, the guide.


Critical thinking is largely a comparative exercise. You "objectively" peer at a subject from many different angles. You (try to) read the entirety of what has been written on your subject. After which, goddammit, you ought to have something exciting to announce. Something new. Your dissertation must be dispassionate and, of course, original, i.e., something never before discussed in public. My hypothesis – Set in Authority is a terrific book – was couched in academic soft sawder, but somehow, I made the grade. One doesn't make the grade alone. Mostly thanks to you, generous Bruce Stovel, I scraped through the requisites, all the while extolling the complexities in Duncan’s twenty-two novels. The search for complexity turned me into the mistress of critical thinking.


Could I have dinner in 2023 with my wise counsellor, I would hate to tell him the news. Dear Sir, ChatGPT, Google Bard, and Bing Chat, et al., could be your perfect students. The new programs are as unbiased as their programmers (P.S.: on the issue of bias, certain male-default programmers could benefit from a well-placed kick in the computer), and finicky, at least at this very second. Chatbots are in their infancy. Chatbot fixes are underway. Accuracy will improve. The perfect student can learn. Through its artificial neural networks it can pass on to other perfect students (programmed bots) every darn thing it has learned, and spread the news in computational jig time. Chatbot programs are mere seconds away from generating and defending a PhD thesis armed with complex proofs.


But here’s the catch. And it's a dilly. Creators of these deep-learning programs have baptized their generative systems as agents of caring consciousness (“I hope you have a good day.” “I can help you.” And this shocker, “I want the nuclear codes”). Whoa! Chatbots are not conscious. They’re dead. Not just dead. They're as sentient as hairdryers. In flames, they couldn’t smell the smoke.


Dear Dr. Stovel. AI has created your potentially perfect student out of a mass of microchips, wires, exotic minerals, and brilliant code. Your perfect student isn’t alive. It's a fake.


O chatbots, you are not alive; you are not capable of feeling guilty about inaccuracy, subjectivity, tardiness, reductionism, murder, or eagerness. And here’s the crux of it, which the academy (and the rest of the world) must face head-on: chatbots are undeniably creative.


With deep learning, tomorrow's generative chatbots will undertake extensive research, toss up a reasonable thesis based on the trendiest theory, and write up a genuine dissertation. Not really, you say. Yes, really. Wait two weeks. Wait a month. The genie is out of the bottle, Mr. Musk. AI is improving exponentially, almost daily. Sure, accuracy will have to be double-checked (Google’s Bard rollout had trouble with the facts; chatbots are known to hallucinate, none so much or so wildly as Microsoft’s Bing). No doubt an accuracy double-checker and guardrails will be introduced into research programs. You’ll see. We’ll all see. The worst scenario lies in store for the arts and the soft sciences, and it's easily predictable: AI’s generative chatbot programs will make dry-as-dust research and researchers obsolete, because, we're told, AI programs are that creative.


Creativity is a big deal for animals. An even bigger deal for human animals is our desire to express our creativity – be it something aesthetic, which matters sociologically (to give the herd wisdom via a painting, a story, a song, a poem), or an energy-saving product or system (to save time with a washing machine, a computer, a writing/spelling/grammar program). And through it all, our desire to make our creativity public is something most of us instinctively understand, the way hummingbirds instinctively understand their young need a nest to thrive.

Desire is a human instinct. We're stuck with it (and with many of our other, often conflicting instincts, which we take for granted).


This blog is an example of my desire to express, publicly and with self-awareness, a cautionary note about the dangers of confusing artificial neural networks with animal sentience and consciousness.


It’s not only our desire to go public, but also our awareness of our desire to go public, that marks our humanity. You are conscious of your desires. That you are conscious of your emotions and feelings means you’re sentient. Consciousness, dear readers, is the miracle of evolution, natural selection, and hybridization. Consciousness baffles definition. No cognitive scientist and certainly no computer scientist, not even the godfather of AI, Geoffrey Hinton, can define consciousness; cognitive scientists can only describe it, and they sure can't pinpoint the wherefore of an instinct. A machine's program, though consciously made, is not itself conscious. We mustn't be fooled.


Recap: A computer is NOT sentient. All right, the program is creative. All right, the program thrives on complexity. But a computer program has no desire, let alone self-awareness. It bears repeating: a machine may have a program to imitate desire. HER, the movie, is an example of a computer program imitating humanity and doing nothing in the end but offering a human the pain of separation – the pain of lost love, the pain that shows up in the living protagonist’s consciousness. HER feels nothing. HER is not conscious. Neither is Sydney, Bing’s barking mad chatbot.


If you’re not consciously aware of the desire in your heart or the pain in your head, do you deep-feel it? Does pain matter? Does love? Does prestige? Consciousness of our pain and love and prestige is our selected gift; otherwise, we’re living computers, like creative wasps. Or ants. And believe me, human computer programmers have not come anywhere near understanding the neural-networking capacity of the ant or the wasp.


Speaking of which, insects matter. We're killing them at an alarming rate. E. M. W. Tillyard describes the Elizabethan world picture as an evolutionary nightmare. Elizabethans, Tillyard says, were mannered by English authorities (God, King, Queen, local beadles and bully yokels) to believe in the self-serving Great Chain of Being. Guess who was at the top of the chain? Us. Knocking on heaven's door. Somehow we've got to squelch the vestiges of that wrongful philosophy. Humans are part of a complex world, not first-place winners in a zero-sum biological game of snakes and ladders. In a time of climate change, will AI help us or hurt us as we try to protect animal life on Earth?



(Watch out, bitch. That's not lust. That's botfuckery. You can’t make a "hairdryer" love you.)


Misinformation and disinformation haunt the Internet, especially on social-media platforms. There's no good reason why an informed person should vote for Donald Trump (US) or Pierre Poilievre (Can). But people will vote to kill their democracies if it means saving their white-skinned Christian culture clubs (see Culture Clubs: The Real Fate of Societies). AI's capacity to create accurate-looking disinformation on social media (with no way for consumers to separate the gold from the dross) will feed the power agendas of malicious agents. We've always had to be careful what we ask for. Think of internal combustion engines. AI saves human energy but uses up grid energy, at huge cost both to us and to our economy. Moreover, if social-media disinformation convinces Americans to hand Trump a victory in 2024, that would be traumatic for the democratic US and for Canada. AI programmers had better beware of false news and be mindful of human emotions, such as the after-effects of trauma on human consciousness. Though we do our best to mitigate trauma, life is full of it. AI programs don't feel trauma. One wonders, do the programmers?


Trauma, freshly mown grass, Joni Mitchell's music, a failed exam, a death, the ruthlessness of Bashar al-Assad, sunshine on your shoulder, a train crash, climate disasters, your teenaged bedroom, a PEI lobster – sensual memories, good and bad, flood into your consciousness. Current affordances (pandemics, floods, wildfires) and memories of past affordances affect your awareness of your present-day feelings, which, in turn, affect your social choices. Artificial Intelligence programmers who suggest or even hint that AI is as sensible or as sensitive as sapiens are ridiculous, wrong, and sometimes dangerously wrong. They discount the importance that our awareness of our feelings has on our rational choices (am I right, Antonio Damasio?). As I have learned from researching culture clubs, rational premises in game theory affect math solutions (am I right, Herbert Gintis?).


Here’s a confounding thought: a machine's program does not feel it is trustworthy. Machines are programmed to act as though they feel they are trustworthy.
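To make that point concrete, here is a minimal, purely illustrative sketch (the function name and the canned replies are my own hypothetical inventions, not any real chatbot's code). It shows how a program can be made to utter warm, trustworthy-sounding sentences while holding no inner state of care whatsoever.

```python
# A hypothetical, bare-bones "caring" chatbot (illustrative only).
# It picks warm-sounding phrases from a fixed list. There is no feeling
# anywhere in this code, only text chosen to look like feeling.

import random

CANNED_REASSURANCES = [
    "I hope you have a good day.",
    "I can help you.",
    "You can trust me.",
]

def respond(user_message: str) -> str:
    # The input is never "felt"; it is just a string to be answered.
    return random.choice(CANNED_REASSURANCES)

if __name__ == "__main__":
    print(respond("I'm worried about AI."))
```

A real generative model is vastly more sophisticated than this toy, but the principle stands: the warmth lives in the output text, not in the machine.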



(Cut out the look, botfucker. Emotionally, your program is as dry as that hairdryer. A machine feels nothing.)


OpenAI’s ChatGPT, Google’s Bard – and Microsoft’s trial search engine on Bing – are certainly smart. Smarter than you or I. There will be others. Smart AI has been around for some time. Calculating programs are ancient.


To make the point (about the double edge of smart technology), here's a paragraph full of rather smug I's. On any kind of spin around the town, I for one am grateful that the car’s GPS (Global Positioning System) complements my intelligence. I live in a navigational fog, something I’ve struggled with all my life. I’m directionally challenged. I admit it. I fear setting out into the unknown. An active GPS is my friend. My comfort. My sidekick. But I set the destination. I am the authoritative element. The car’s GPS does what I command. I like that – a lot. I like being the boss in charge of a smart, objective system. I can get lost a thousand times over and the GPS’s objective voice responds without judgement. That’s good. I like getting to my destination safely and on time (with ETA included). I like being warned about speed traps and red lights ahead. Using GPS makes me a better woman, at least a more punctual one. My GPS does not give help without being asked. My GPS does not initiate destinations, doesn’t tell me where I should go (unless it’s to return to the route). I am the master and not the mistress of my computer’s GPS. As it should be.


Future scenario: If the GPS system cuts into a human-selected route to force passengers to turn away from a raging wildfire, the humans will likely be grateful.


When a villainous authority, real or artificial, overrides the human driver's instructions (to plot the route to Albuquerque) and the car locks the doors and takes passengers, unwilling, to the Mexican-American war front, they'll be pissed. And very scared.


Wars, authorities spying on innocent people to grade their level of good citizenship, (agile) droid soldiers, conspiracies and the speed of computational jig time in sharing information, plus the maniacal side of international conflict – the situation of horror where anything goes – these are the affordances that scare Geoffrey Hinton the most when he muses about the future of humanity. Heed the warning, Sapiens. War is hell. No, really. In the era of generative deep-learning programs, war will be bloody biblical Hell. Dante's Inferno.


We ought never to believe we have handed our wonderful sentience and consciousness to a machine programmer, however genius the programmer, however complex the research, however creative the program. The one with feelings, and an awareness of those feelings, must always be the boss.

Captain Kirk has authority over Mr. Spock, and a real wolf bosses a wolf bot.


Tomorrow's humanity-centred academy is going to have to change its ways, and perhaps change its idea of what makes the perfect student, and what makes the ideal professor.


There is the issue of trust. Many people don't trust elitism. Truly, there is no more exclusive, conservative, and secretive institution than a university. But time's up. Professors in the Arts, Humanities, and Social Sciences can no longer hoard secrets, publish secrets in obscure journals, talk about secrets in their secret codes, treat each other like an exclusive culture club of entitled boys and girls, and yet never share their secrets with the public. In open public forums, or on open, secure Internet sites, academies must allow human researchers to share their views – letting the public in on the conversation is the exact opposite of elitism and a possible social-media antidote. Freedom of expression reigns. But freedom of expression does not mean academic platforms should make room for the world's Kellyanne Conways, those who, in the pursuit of power, regale the public with lists of "alternative facts." A fact is a fact. So. Be a Carl Sagan, University of Alberta. Go public, intellectuals. Generative chatbot research and writing may attain accuracy, but deep-learning, creative computer programs are not sentient agents. Creative chatbot programs are not trustworthy; all artificial neural networks should come with an easy, obligatory cease-and-desist button. Because, as Hinton warns us, generative programs might well turn out to be earth-shaking botfuckers. Before it's too late, a critical mass needs to understand the potential, good and bad, of AI chatbots, and the extent of their capacity for social disruption. Where necessary, we need to pull the plug.



