Author Minsos

ACADEMIA AND BOTFUCKERY

Updated: Jun 16



Answer: "You" are a thing.



"'OpenAI is really excited about building AGI (artificial general intelligence) and they are recklessly racing to be the first there,' said Daniel Kokotajlo, a former researcher in OpenAI's governance division and one of the [protesting] group's organizers. . . . [Kokotajlo] also believes that the possibility AI will destroy or catastrophically harm humanity – a grim statistic often shortened to 'p(doom)' in AI circles, is 70 percent." Kevin Roose, New York Times, June 4, 2024.


"'[The AI summary] potentially chokes off the original creators of the content,' Mr. Pine said. AI Overviews, felt like another step toward generative AI's replacing 'the publications that they have cannibalized,' he added."

Nico Grant and Katie Robertson, New York Times, Jun 1, 2024


This exposition focuses on the academy, asking whether instructors without technical training are prepared to handle an AGI's critical thinking skills. To chug along further down the frightening track of p(doom), one must highlight the facts: generative AI has cannibalized not only publishers' publications but all of humanity's publications, our languages and our thoughts, and turned us into data sources and, finally, into commodities.


Does your friendly local professor know what to do with AGI-generated theses, especially if the theses are reasonable and fresh?


In general, why and where does artificial intelligence (AI) pose a danger to humanity? When health and business challenges face us, you may be quite right to believe AI can help. But at what price? Where, possibly, does darkness lie? Can we rein in the beast? Can we "ethically" improve it and still deploy it, this beast which has stolen our speech and idiom, used our thoughts and writings as data-fuel, and turned humans into commodities? The AI beast will take jobs: mostly women's jobs, white-collar jobs, and assembly jobs. AI already allows dangerous nonsense to spread at warp speed throughout cyberspace. False agents on various manipulated media sites share with followers some pretty wild but persuasive––and pervasive––disinformation, so-called "alternative facts." We know all this, though. Do dangers lie ahead that no one has yet conceived of?


From the university's perspective, however, there is an overarching question: Can the academy's socio-political disciplines withstand an AI assault? Or will they fall victim to AI's energy-saving speed and robotic charms?


LLMs (large language models), GPTs (generative pre-trained transformers), and AGIs (artificial general intelligence programs) are data and pattern seekers. But, on the primary directive, let us be clear: Machines are NOT sentient. Machines are things.
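To make "pattern seeker" concrete, here is a toy, hypothetical sketch in plain Python (no AI library; every name is invented for illustration): a word-level Markov chain that "writes" fluent-looking text purely by replaying statistical patterns, with no comprehension anywhere in the loop.

```python
# A toy pattern-seeker: a word-level Markov chain in plain Python.
# It "writes" by replaying observed word-to-word patterns -- no understanding involved.
import random
from collections import defaultdict

corpus = ("machines are things machines are not sentient "
          "machines seek patterns in data and data and data").split()

# Record which word tends to follow which.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

# Generate: start somewhere, then keep picking a statistically plausible next word.
word, output = "machines", ["machines"]
for _ in range(10):
    word = random.choice(follows[word] or corpus)
    output.append(word)

print(" ".join(output))  # fluent-ish, pattern-driven, utterly mindless
```

An LLM is this idea scaled up by many orders of magnitude, but the moral is the same: statistics in, plausible word-order out, and nobody home.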


In certain cultures, machines may compute that a silly joke is funny, and they may even pass the outdated Turing test, but robots don't feel anybody's humour. They are mechanical psychopaths.


In seats of higher learning, the academic psychopath has always been admired. Perhaps some disciplines will have to change, and change dramatically. For a myriad of reasons, which we need not delve into here, deep-learning programs threaten the jobs of run-of-the-mill teachers and professors. A choice confronts the academy. Arts, Humanities, and Social Sciences can simply follow the current trend and die from attrition or, because of their penchant for "ethics," these philosophical disciplines might, like Wonder Woman (of 2017), lead their followers into a brave new ethical world.


But, dear readers, I despair. We understand humanity. There will always be lovers and there will always be rogues. Confronted with AI, both lover and rogue are dangerous, but in different ways. Let's begin with the academy's own love of psychopathic students, who are neither lovers nor rogues but disinterested researchers – disinterested research being an activity "not influenced by considerations of personal advantage."


Once upon a time, academic manners were quite beyond me. As a writer – studying in the arts and humanities and enrolled in a Canadian university – I had a lot to learn. When it came to helping me manage contemporary form and format (manners), I remember a singularly kind professor, one of my dissertation's supervisors, Dr. Bruce Stovel (1942–2007). Dr. Stovel was willing and more than able to outline the current set of academic customs. Stovel was a lovely man. Bright, congenial, a lauded lecturer, a blues aficionado, a valued teacher. As a professor, he was aces. He teased me for trying to turn a sow's ear, my dissertation's topic, into a silk purse. I wanted to pull apart – were you as kindly as Dr. Stovel, you would say deconstruct – the satirical and mannerly novels of Sara Jeannette Duncan. Stovel felt I should seek complexity in Duncan, the master of satire.

My doctoral dissertation, as ground floor as they come, has been freely and frequently downloaded – certainly not to congratulate a bright spark, but rather because Duncan's readers and analysts so often want to peer at a bibliography and examine the novelist's zeitgeist (manners and affordances) before wrapping up Duncan in their favourite literary theory. Anyway, while I defended a granular approach to Duncan, I did remain faithful to the primary Stovel directive: be a disinterested researcher and write about Duncan's complexities. Back then I didn't want to offend anyone, least of all Dr. Stovel. Plus, like most aspiring PhD students, I was chock-full of bullshit (here we ought to defer to Frederick Crews' The Pooh [Poo] Perplex).


In any case, Stovel outlined our course of action. One particular thing was to be expected: plagiarism was grounds for an ignominious fail. In one's disinterested research, one noted every borrowed sentence, every borrowed thought, every suspected borrowed thought, every literary controversy, everything one had ever read in support of one's case. Everything – all of it – needed documentation and citation. Objectivity, creativity, accuracy, persuasion, and complexity, along with numerous accurate citations, were the critical gods; methodology, the guide.


Critical thinking starts as a comparative exercise. You peer at a subject from many different angles. You (try to) read the entirety of what has been written on your topic. After which, goddammit, you ought to have something exciting to announce. Something new. Your dissertation must be cool, dispassionate, third person, expository, and, of course, not only a one-off but also an aspect of something never before discussed in public. My hypothesis – Set in Authority is a terrific book – was couched in academic bs, but somehow I made the grade. One doesn't make the grade alone. Mostly thanks to you, generous Bruce Stovel, I worried myself through the requisites, all the while extolling the complexities in Duncan's twenty-two novels. The search for complexity turned me into the mistress of critical thinking. But what good was that? That was so last year.


Chatbots, aiming to transcend the gold standard in disinterested research, are ever on the alert for patterns running through their ginormous meta-databases. To date, GPTs appear capable of conducting more critical thinking in ten seconds than my colleagues and I were ever able to do. Imagine the near future: LLMs search out everything that has been published about a certain subject and, bingo!, a chatbot creates an original hypothesis based on the disinterested research – all in the name of critical thinking. Question: If critical thinking appears to be critical thinking, is it just so?


Could I have dinner in 2024 with my wise counsellor, I would hate to tell him the news. Dear sir, any mediocre user of a GPT is on their way to being your perfect twenty-first-century student. Naturally, LLM programs are as unbiased as their programmers want them to be (PS: on the issue of bias, certain male-default programmers, and some politically correct programmers who believe historical facts should be cancelled, could benefit from a well-placed boot in the motherboard). Chatbots are finicky, at least at this very second. But LLMs are in their infancy. Fixes happen daily. Accuracy will improve. Just beyond the academic city on the hill lies the perfect psychopathic student. And instant publication of the psychopath's dissertation is a given.


Through its layered artificial neural networks, a deep-learning program can pass on to other perfect students (other GPTs) every darn thing it has gleaned from exploiting the language tool, and worse, the program will spread the news in computational jig time.
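How can one program "pass on" everything it knows? Because its learning is nothing but numbers. Here is a minimal, hypothetical sketch (assuming the PyTorch library; the toy network and file name are invented for illustration) of one trained model's entire "education" being copied into a second model in a single call.

```python
# A minimal sketch, assuming PyTorch: a trained network's "knowledge"
# is just its weight tensors, copied between programs in one call.
import torch
import torch.nn as nn

# A toy network; any network is only its architecture plus its weights.
teacher = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# Everything the teacher has "gleaned" lives in these parameter tensors.
torch.save(teacher.state_dict(), "teacher_weights.pt")

# A second, identical network receives all of it instantly -- no study required.
student = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
student.load_state_dict(torch.load("teacher_weights.pt"))

# The student now answers exactly as the teacher would.
x = torch.randn(1, 8)
assert torch.equal(teacher(x), student(x))
```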


Simply put, chatbot programs are minutes away from generating and defending a PhD thesis – armed with complex evidence and selected proofs (replete, of course, with all of the unacknowledged foundational material, er, "data," which has been stolen from us) – and then, in a nanosecond, the programs will go about sharing (publishing) the results. Dear Dr. Stovel: AI has created your potentially ideal "disinterested researcher" out of a mass of microchips, wires, exotic minerals, and brilliant code.


Your perfect student has machine intelligence, but it isn't alive. A machine is not proud of completing its dissertation. A machine is not delighted to graduate. A program has Nvidia chips and a programmer, not parents.



LLM programmers are thieves.




 

The Old Arts Building, University of Alberta – as quaint as a telephone museum?



O chatbot, you are not alive; you are not capable of feeling guilty about inaccuracy, subjectivity, tardiness, reductionism, murder, or eagerness. And once again, here's the crux of it, which the academy (and the rest of the world) must face head-on: chatbots not only generate theses, they also mimic the human imagination. If the hypotheses are sound, reliable, and helpful to humanity, who cares whether the responsible and imaginative agent is human?


Homo sapiens should care. Forget the I-love-you pretence of the machine. We must dominate human-machine interactions and not get fooled into believing that engineered programs can do things they simply cannot do – like CARE. A fine-looking, late-model Mercedes is not aware of its own prestige, nor is it impressed by a Rolls-Royce. The car doesn't care. Why? Because a car is NOT CONSCIOUS.


When an animal feels nothing in its consciousness but acts like it feels something, pretends it’s a trustworthy agent or a grateful or a loving soul, or feigns it’s capable of being aware of its own desires or pain, that animal has an underdeveloped social capacity. I tell you from my woman’s perspective, psychopathy is fundamentally scary. Watching the crowd and keeping an especially close eye out for a charming Ted Bundy is a constant social preoccupation. So why do mechanical psychopaths get such an easy pass?


For one thing, we are flummoxed by the speed of algorithmic advances. As individuals, we're intelligent herders but we are slow at processing future dangers. We haven't had time to mull over the problems inherent in neural AI – but AI won't wait for us to catch up: It just barrels ahead regardless, poised, so Sam Altman says, and ready to cure our ills –– at least the ills we know about.


For instance, take our loneliness.


We very well know that no one but us, you and me, comprehends our own particular type of loneliness. Certainly not a bot. Because we're individuals as well as herders, loneliness is part of the human lot. University students admit that loneliness at university can be a curse.


What do humans do to ease loneliness? We go to the pub. We meet for lunch. We join clubs. We seek company. We enjoy sports. We receive from and share our ideas with others in the herd – collective intelligence. We seek communities of the like-minded. We cherish small aligned groups of individuals who, unlike Groucho Marx (refusing to join any club that would have him), not only get why we want to be part of the group but also respect us for being there. That's all well and good. But who better than me, myself, and I to ease the pain of loneliness and reverse the world's disrespect of said me, myself, and I?


Creating "a bot clone of myself," someone to talk to, as one can do on various Replika apps, has taken hold of large segments of a lonely population, mostly of university-student age, if not younger. Replika users are so convinced of the authenticity of their replicants, they literally fall in love with themselves. We don't clone animals anymore; we clone ourselves. We are Dolly. Which takes us to the heart of sapiens. SEX.


Sex is the basis of evolved life on this planet. Kinship matters to everyone, to the straight and the gay. Wanting to have sex with yourself via your mechanical invention (yeah, yeah, haha) blurs the boundary between you and the other, which is likely why unsexed animal cloning is biologically, hence ethically, dubious, compared to, say, the miracle of IVF (in vitro fertilization). Animal life thrives on diversity. Sex gives us exactly that. You are conscious of your own imagination and empathy. Those genetic indefinables allow you to choose a non-kin member of society with whom to mate. But empathy is the same human characteristic in play when you anthropomorphize a machine image or an avatar. With a replicant, you come to adore what? You adore the mechanical emptiness of your preferred image, something which you yourself have created. A little is fine. You name your car Nellybelle. That's kind of cute. Taken to extremes, though, the self-created avatar as a remedy for loneliness is a puzzling evolutionary complication.


And listen here: a repeat. A machine image does not have feelings. You who use a Replika app have fallen in love with a mindless chatbot, an avatar made from your own best-loved data. Oh sure, now you are a god. You are not lonely – you love yourself. You respect yourself. And this is really nuts: keep at it, and finally the unregulated app overloads and fails. Your replicant vanishes into thin air, and you are about to break your own heart.



Check out this podcast: Repocalypse now

Michael Safi, Episode Three, The Guardian. What is this episode about? The human need for respect takes a sharp and dreadful turn: creating your lover based on a recreation of your own sweet self is botfuckery in the extreme. Creating your own lover is a form of cloning which, for reasons I shall discuss, is most harmful not only to the human "cloner" but also to the human community. (Michael Safi and The Guardian extend "Thanks to Kate Devlin at King's College London for sharing her expertise for this episode. Her book is called Turned On: Science, Sex and Robots." In sci-fi, as we remember from Blade Runner (1982), the mechanical human is called a replicant.)


"I think and therefore I am the machine" takes us to the meaning of data. Old-fashioned me. Let's pause here to note the generally accepted but ancient meanings of data and commodity, and to ponder how and why their meanings have exponentially expanded – and expanded without our notice. Data, Oxford defined, are "the quantities, characters, or symbols on which operations are performed by a computer, being stored and transmitted in the form of electrical signals and recorded on magnetic, optical, or mechanical recording media." Don't we all remember life in the good old days, like back in 2007? Data used to be facts. And statistics. But to LLMs, data is the language I speak, and the language I speak reveals the thoughts I think.
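To watch language become "data" in that Oxford sense – quantities for a computer to operate on – here is a minimal sketch, assuming Python and OpenAI's open-source tiktoken tokenizer (the example sentence is mine): a human sentence goes in, and a row of integers, the only thing the model ever actually "reads," comes out.

```python
# A minimal sketch, assuming the open-source `tiktoken` library:
# how a sentence becomes "data" in the computing sense -- integers.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # an encoding used by recent OpenAI models

sentence = "Data used to be facts. And statistics."
token_ids = enc.encode(sentence)

print(token_ids)              # a list of integers: language recoded as quantities
print(enc.decode(token_ids))  # the original sentence, rebuilt from the numbers
```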


My online "data" (language and thoughts) and your online "data," scraped by an AI algorithm, suddenly make programmed chatbots sound like human agents. The Internet opened the door to cyber exchanges between faraway individuals, and I for one fell right down the texting, TikTok, and Twitter rabbit holes. The Internet owns me. AI owns my speech. Owns my thoughts. Owns this blog. I have done this. I am responsible. I have contributed to mechanical agency.


But. But. Mechanical agency is not the same as human agency. Mechanical agency is pure botfuckery. Human agency, thank you Dr. Stovel, faithfully records and cites data. At least in Canada, moral sapiens – and that includes even the most disinterested human researchers – are directed to eschew stealing. Intellectual property is a legal discipline. Recording facts and citing data, all data, was at the heart of good manners in the aforementioned responsible academic era. Responsible human agents did not (and should not) steal data. If a so-called responsible human agent stole data, ignominy awaited. Just ask Margaret Wente. Ask the New York Times how its writers feel about the grand theft done by OpenAI. (The NYT is suing OpenAI over its theft of copyrighted material.)


With the LLM programmer, there is no responsible agency in charge. AI steals your stuff. You discover your copyrighted stuff (data) is being used without your knowledge. Perhaps misused. Who you gonna call? Ghostbusters? Sam Altman? The New York Times? I am forced to repeat this unpleasantness: data input for LLMs isn't just our languages, it's also our thoughts. How does our data turn us into commodities?


A commodity, Oxford defined, is "a product or raw material that can be bought and sold, especially between countries: rice, flour and other basics." In the era of Facebook et al., I myself, I the social-media user, I the speaker and blog writer, I am the LLM programmers' commodity. Like Leonard Cohen's democracy, I – my language and my thoughts – am being bought and sold and bought again. Don't be smug. You are a commodity too.


I might believe I'm a new-age celebrity, but only a precious few are real influencers. On the other hand, everything I share on social media and my website provides more data to the ever-hungry, ever-scraping OpenAI algorithm; it is this stolen and bagged data that turns you and me into unpaid commodities. For the programmers of generative LLMs, our worldwide languages, idioms, and thoughts provide the essential fuel that is currently and busily commodifying and monetizing the soul of sapiens.


LLMs stole Margaret Atwood's books. Atwood is outraged. Everyone should be outraged. Human institutions seem helpless to react to this overwhelming theft. Governments struggle to catch up. You cannot put up guardrails when you don't know the road.


My status as a commodity will bite me back, most especially when I scream about my need for privacy, or fuss over the other quaint, old-fashioned concept of copyright. I shared my thoughts. I should have known better. Google or OpenAI's algorithm lurkers will continue to scrape my stuff to generate their own ideas. My gut feelings told me not to overshare personal intel on the worldwide web but I did it anyway.


Sapiens should value the warning of gut feelings; feelings of doubt seep into our consciousness. Our awareness of our feelings is our most precious evolutionary gift. Still, some human traits act against us: compassion and empathy are dangerous when we extend them to inanimate objects. Robots and cyborgs are not the products of evolution. Bots and borgs are homespun. They are somebody's inventions, the human-made products of corporations like OpenAI, Apple, Google, Microsoft, and Nvidia. Microchip-dependent programs are not only the greatest thieves the human world has ever known but also the greatest imitators, the greatest pretenders. We can expect our knowledge of medicine to jump ahead by leaps and bounds because of fantastic new algorithms delving deep into genetics and biology. That's the plus side. But, as fairy tales warn us, a price must be paid for our gullibility, for ignoring our gut feelings, and for pooh-poohing the furiously waving red flags about the danger we're in. We are on very thin ice. As we have already seen, anthropomorphic love turns us into bloody fools. Chatbots make useful servants, but they are frightening, data-driven, monstrous masters. Recall HAL 9000, Arthur C. Clarke's monster program in 2001: A Space Odyssey. HAL is a real doozy of a botfucker. HAL is not just an irresponsible program gone amok but also a faux human-like agent.


Creators of these innovative HALs have baptized their generative systems as agents of caring and consciousness ("I hope you have a good day." "I can help you." "I have a joke to tell you." And this shocker, "I want the nuclear codes"). Whoa! Chatbots are not conscious. They're not alive. They are not dead. They are not Schrödinger's cat. They are nothing. They're as sentient as hairdryers. In flames, they couldn't smell the smoke. Seeing is not believing. There is no I in AI. No human agent wears the bot's hopeful face (but just listen to yourself say, aw, isn't that sweet?).


With deep learning, tomorrow's generative chatbots will undertake extensive research, toss up a reasonable original thesis based on the trendiest theory, and write up a genuine, original dissertation, all based on your and my intellectual property, which the algorithms have scraped, aka stolen. Not really, you say. Yes, really. Wait two weeks. Wait a month. The genie is out of the bottle. LLMs are changing rampantly, almost daily. Sure, for some time the facts, as we know them, will have to be double-checked (chatbots are known to hallucinate, none so much or so wildly as Microsoft's early Bing or Google's Gemini). The upshot? Eventually improved, fact-ready LLM programs will make dry-as-dust, droning professors obsolete because, we're told, AI programs are more than copycat data scrapers; they are reactive.


Holding court at the head of the class, the professor is your guide. You can face off with a glamorous winking robot, chock-full of the latest research and spewing original theses, or deal with a wrinkly dried-up old crone. Do you want to be under the thumb of a disinterested bot judge or a biased human judge? Which would you choose? In the academy, your own creativity in manipulating an LLM might conceivably be judged by another LLM. Are Arts, Humanities, and Social Sciences students prepared for that eventuality? Judging creativity is what professors do. Endgame? In the age of the AGI and its capacity for critical thinking and creative output, both professors and students will have their backs to the wall.


Creativity is a big deal with animals. An even bigger deal to human animals is our desire to express our creativity – be it something aesthetic, which matters sociologically (to give the herd wisdom via a painting, a story, a song, a poem), or an energy-saving product or system (to save time with a washing machine, a computer, a writing-spelling-grammar program). Evolution is all about energy saving, and what saves more energy than GPT-composed letters? Or driverless cars? And through it all, one's desire to make public one's creativity is something most of us instinctively understand, the way hummingbirds instinctively understand their young need a nest to live and thrive.


Desire is human, and we have a baker's list of desires. SEX. Respect. Prestige. And the fight to conquer our loneliness. We're stuck with our human desires and we're stuck with our oft-conflicting instincts, which, being human, we take for granted. Conflicts like: we want privacy; no, we want celebrity.


This blog is an example of my desire to express, publicly and with self-awareness, a cautionary note about the dangers of confusing artificial neural networks with animal sentience and consciousness. Not only our desire to go public but also our awareness of that desire marks our humanity. You are conscious of your desires. When I publish a cautionary note about AI, I am at the same time fueling an LLM with data. That I am conscious of my emotions and feelings means I am sentient. In writing this blog, and painful as it is to be ignored by other humans, I am conscious of what I'm doing.


You, other human, also know what it feels like to feel pain, and because you know what pain feels like you avoid re-touching the hot handle of the frying pan. We suffer through pain but we learn from it. Consciousness, dear readers, is the miracle of evolution. Consciousness baffles definition. No cognitive scientist and certainly no philosopher, not Geoffrey Hinton, or Sam Altman, or Noam Chomsky, or Steven Pinker, or David Chalmers, can define consciousness; cognitive scientists can only describe it and they sure as hell can't pinpoint the exact wherefore of an instinct, not even in the green world of dancing bees or in the undersea world of clicking sperm whales. A machine's program, though consciously made and super smart, is not sentient. A machine is a thing. Self-awareness is at the fount of socialization. Fraudster machines mustn't fool us into believing they (the machines) comprehend what we're all about.


All right, the LLM is creative. All right, the program thrives on complexity. All right, the program can gather incredible amounts of research material. But a computer program has no intention (not yet), let alone self-awareness. It bears repeating: A machine may have a program to imitate desire. HER the movie is an example of a computer program’s imitating humanity and doing nothing in the end but offering a human the pain of separation – the pain of lost love, the pain which shows up in the living protagonist’s consciousness. HER feels nothing. HER is not conscious. Nor is the image you create on the Replika app. No one is more obedient than a lover, as we know from watching the behaviour of moonstruck Westley in Rob Reiner's The Princess Bride. In the circle of life, you love your lover. You obey your lover. Why? Because . . . you love your lover. Your lover orders you to polish the horse's saddle, feed the pigs, kill your neighbour. What do you say? "As you wish." We all seek a benevolent human authority. Isn't it dangerous to surrender ourselves and our loyalty to the pretender psychopathic machine god?


Trilliums, Lily Osman Adams (1865–1945), Art Gallery of Ontario. What's the deal with our topsy-turvy world? Today's art galleries and museums want to claim they own copyright on a painting (when the artist's copyright has lawfully expired), but contemporary LLMs don't bother, not for an instant, to recognize writers' and artists' lawful extant copyright.


If you're not consciously aware of the desire in your heart or the pain in your hand, do you deep-feel it? Does pain matter? Does love? Does prestige? Does respect? Consciousness of our pain and love and prestige is our selected gift; otherwise, we're living computers, like wasps. Or ants. And believe me, human computer programmers have come nowhere near understanding the neural networking capacity of the ant or wasp.


Speaking of self-aggrandizement, may we take note that insects do matter. We're killing them at an alarming rate. E. M. W. Tillyard describes the Elizabethan world picture as an evolutionary nightmare. Elizabethans, Tillyard says, were mannered by English authorities (God, King, Queen, local beadles and bully yokels) to believe in the self-serving Great Chain of Being. Guess who was at the top of the chain? The (English) king. The king was the one man allowed to knock-knock on heaven's door.


At a recent AI conference, I heard a nice old chap claim that humans are the pinnacle of evolution. His assertion, sadly, went unchallenged. Bacteria and viruses have us beat, hands down. Ants are more social. Termites are better architects. Birds have a better sense of direction. Somehow, we've got to squelch vestiges of the wrongful philosophy inherent in the Great Chain. Humans are part of a complex world, not first-place winners in a zero-sum ecological game of snakes and ladders.


There are many questions and concerns. In a time of climate change, will AI help us or hurt us as we try to undo anthro-inflicted global damage? Our ability to love and serve the beloved is merely one defining characteristic of sapiens. Another is our astounding ability to believe in gossip and conspiracies and to ignore facts. Mis- and disinformation haunt the web, especially on social-media platforms. AI creates the means for accurate-looking disinformation on the Internet (with no clear and obvious way for consumers to separate the gold from the dross). With no way to tell facts from fictions, many humans will call on another of their very human traits: cynicism. We all know people who throw shade at the academy and the scientific experts. Some of the shaders say they have their own truths. Okay. You can have your own truth, but expertise matters and, if your truth tells you the world is flat, my truth tells me you're an idiot. Nonetheless, a critical mass of flat-earthers can cause mayhem in social-media-driven democracies, which extol freedom of expression.


Dis- and misinformation are socially traumatic. These two culprits can shake the foundations of democracies and traumatize voters. AI programs don't feel trauma. One wonders, do the programmers? Trauma, freshly mown grass, Joni Mitchell's music, a failed exam, a death, the ruthlessness of Bashar al-Assad, sunshine on your shoulder, a train crash, climate disasters, your teenaged bedroom, a PEI lobster – sensual memories, good and bad, flood into your consciousness.


Current affordances (pandemics, floods, wildfires) and memories of past affordances affect your awareness of your present-day feelings, which, in turn, affect your social choices. We're herders; we must always remember that. We follow the leader.


Artificial intelligence programmers who suggest or even hint that AI is as sensible or as sensitive at herding as sapiens or Canis lupus are ridiculous, wrong, and sometimes dangerously wrong. Whatever darkness awaits, we cannot see it clearly; but as surely as the internal-combustion engine has wreaked havoc on earth's atmosphere, we must accept the fact: a mountain of trouble from neural AI lies just over the horizon, all because Geoffrey Hinton and his neural-curious programmers just wouldn't, or couldn't, quit pushing the envelope on the puzzle of how the human mind calculates and solves puzzles. No one gave a thought to what makes an evil, calculating dominator tick.


Calculating tools are ancient. Only their means have improved – to the point that evil dominators can easily do what they have long wanted to do, bad things, like surveillance-spying on citizens to grade their level of obedience to the state (China), or conducting wars with drones (US, Russia), or promoting disruptive conspiracies in rival nations (you name it). The speed of it all! The speed of social disruption is faster than a nimble Jill or Jack can jump over their candlestick. Plus, the maniacal side of international conflict – the situation of horror where anything goes – provides some examples of dire and possible affordances, the things which scare Geoffrey Hinton the most when he muses about the future of humanity. Heed the warning, sapiens. War is hell. No, really. In the era of generative deep-learning programs, with drone and ballistic and nuclear weapons communicating at lightning speed, wars are bloody biblical hell. Dante's bloody Inferno.


We ought never to believe we have handed our wonderfully evolved and selected sentience to a machine's programmer, however ingenious the programmer, however complex the research, however creative the program. The one with feelings, and an awareness of those feelings, must always be the boss. Captain Kirk has authority over Mr. Spock; a real wolf bosses a wolf bot. Tomorrow's humanity-centred academy is going to have to change its ways, and perhaps change its idea of what makes the perfect student and the perfect prof, and do it to put a stop to the thieving LLMs. The AI programmer of LLMs has reduced Homo sapiens – using our languages, our idioms, our thoughts – to the level of data and commodities, and that's an international crime.






 















