No greater example epitomizes the heartlessness of wokeism and the intellectual laziness of its proponents than the Vanderbilt University Diversity, Equity and Inclusion deans who used ChatGPT to write an open letter to their students in the aftermath of a deadly mass shooting.
ChatGPT, for those who haven’t fully abandoned their thoughts to our AI overlords, is a software application that churns out prose as a literate human being would — minus the heart and soul, of course.
And it is soulless.
‘In the aftermath of the Michigan shootings,’ the letter reads in part, ‘let us come together as a community to reaffirm our commitment to caring for one another . . .’ Blah, blah, blah.
It was a long letter, as boilerplate in sentiment as the clichéd ‘thoughts and prayers’ response whenever there’s a mass shooting. In that sense, I guess, it’s on-brand.
No surprise that colleges are leaning into ChatGPT. On-campus wokeism has led to trigger warnings, safe spaces, and Orwellian codes of thought and expression. One wrong utterance can get you cancelled.
These hall monitors of campus political correctness must have assumed that it was safer, and no doubt faster, to have a bot do the work. And so this program trawled the internet and 200,000 years of human knowledge in nanoseconds and came up with the purest, most highly sanitized, meaningless babble imaginable.
Truly, this open letter uses a lot of words to say nothing.
Is it really such a stretch that these would-be educators let an algorithm think and speak for them?
We’ve already trained a generation to automate thought rather than wrestle with it, to generate and calibrate their expressions with robotic reflexiveness rather than human messiness.
And these DEI deans, who copied and pasted this computer-generated text into an email, nearly got away with it.
Want proof that artificial intelligence is smarter than your run-of-the-mill woke campus bureaucrat? ChatGPT actually signed itself as the primary author of this open letter! Apparently proofreading was too heavy a lift for these deans.
One of the Vanderbilt signatories, dean Nicole Joseph, was already suspect for this tweet back in December: ‘I’m trying to hype myself up to face all the writing I have to do today.’ Hasina Mohyuddin is Assistant Dean of Peabody’s Office of Equity, Diversity, and Inclusion.
What more evidence do we need of wokeism’s uselessness? That wokeism is the enemy of nuanced thought, real emotion and creativity? That instead of enriching college students and taking them seriously by challenging them, even outraging them, campuses prefer mono-thought, round edges, bubble-wrapped ideologies? That the basis of woke philosophy maintains that the worst offense you could ever commit is to offend?
Your vehicle’s GPS might tell you to turn left and into a lake, but the difference here is: You can see the lake and save yourself. AI, as in this instance, has no idea what Vanderbilt students might be thinking or feeling. AI has never experienced a mass shooting, or PTSD, or the fear of an unhinged incel. It doesn’t do the real hard work of being human.
And if, as they’re inadvertently telling us, their very important work can be done by machines — well, what are they getting paid for? Why not just replace them with machines?
It would all be funny if it weren’t so infuriating.
Just ask Vanderbilt students, who were rightly appalled and disgusted.
‘Do more,’ Vanderbilt senior Laith Kayat told the student newspaper. ‘Do anything. And lead us into a better future with genuine human empathy, not a robot.’
Indeed. When deans at institutions of higher learning can’t be bothered to sit down and think through a genuine response to such a grave and preventable tragedy, you have to wonder: What are these kids and their parents even paying for? What’s the point of college if you can log on to ChatGPT and have it do your homework or write your resume?
By the way, one of the Vanderbilt signatories, dean Nicole Joseph, was already suspect for this tweet back in December: ‘I’m trying to hype myself up to face all the writing I have to do today. I would not have it any other way. [Smiley-face emoji] It matters when you LOVE what you do.’
As most writers will tell you: Hate having to write, love having written. Why? Because writing is hard! It requires thought and the ability to articulate what one actually wants and means to say. This woman already proved herself dubious at best.
Creativity is what makes us human. It’s why the most successful civilizations have, throughout history, prioritized and venerated the arts. Originality is rare. It’s a virtue and it’s valuable. Do we really want AI writing books or op-eds or reporting the news? Architecting or painting or choreographing ballet? Making movies? Writing songs or composing symphonies?
It’s already happening. Meet Brett Schickler, a salesman from Rochester, New York. He had always aspired to be a published author. Never happened — till ChatGPT, which did the heavy lifting of writing a children’s book for him, now on sale at Amazon. ‘The idea of writing a book finally seemed possible,’ he told Reuters. ‘I thought, “I can do this.”’
Brett, I don’t know you, but allow me to say: You can’t, and you didn’t. It’s offensive to anyone who has done the hard work of conceiving a book, writing a proposal, selling that book, sitting down with oneself and a blank screen and grinding it out, day after day, producing and refining and rewriting and editing, to say that you, enabled by ChatGPT, have now written a book.
It’s terrifying that Amazon is endorsing this. Over 200 e-books written using ChatGPT are sold through Kindle. So are books about the app produced by the app.
‘A.I. is here, and it’s making movies,’ said the L.A. Times in December. ‘Is Hollywood ready?’ The story explained how a director used AI to change his actors’ expressions, manipulating the way their mouths moved to fit looped-in dialogue. ‘You can’t tell what’s real and what’s not,’ director Scott Mann said. ‘Which is the whole thing.’
But don’t we want to be able to tell what’s real and what’s not? This goes beyond special effects — we’re talking about acting, and as we well know, few take their profession — excuse me, craft — as seriously. But we’re beginning to enter the Uncanny Valley-ization of acting for film and television, faces manipulated digitally to change expressions, mouths rearranged to suit post-production dialogue changes or dubbing for foreign markets.
Does this not obviate the entire movie-making enterprise? Don’t we want to know if we’re watching a human emote and speak rather than a facsimile? How are such deep fakes not corrosive to art? To literature? How will this impact our ability to think critically or originally?
Reuters reports that lawyers are now using ChatGPT to co-author legal briefs. Will we outsource human judges and juries? Surgeons? Four-star generals in wartime? Where does it end — when AI takes over humanity itself? When ‘I, Robot’ becomes a reality?
We long ago melded with the machine. Our cell phones have become literal extensions of ourselves, appendages as vital as arms and legs. Amazon’s Alexa, we know, spies on our most intimate conversations and reports back to the Bezos Death Star, but hey — it cuts down on the maximum effort of flipping on a light switch or Shazaming a song or tapping the weather app to see how much longer it’ll rain.
Big Tech has made us soft and lazy. Its ultimate annexation is our hearts and minds, our individuality of thought and expression, our very personhood.
Will resistance be futile? In the end, that’ll be up to us. For now, anyway.