Tech And Us
Our relationship with technology marches on in a straight line ... I believe
“The effects of technology do not occur at the level of opinions or concepts, but alter sense ratios or patterns of perception steadily and without any resistance.” - Marshall McLuhan, Understanding Media
It has been a year now since generative AI hit the mainstream and was opened up for the technophiles of the world to dream again of new profits, conquered domains and new silver bullets. Of course, AI had been in the background for a number of years prior, under a number of monikers: mechanical turks, machine learning, intelligent retrieval, artificial design, natural language processing and many more. I’ve written a lot about it.
But here - I’d like to address something I feel is fundamental, especially in education - our relationship to these bits and bytes, to the millions of over-heated servers, serving up knowledge and generative intelligence.
I’ve found it curious how so many teachers, and those who deem themselves “technology” advocates or leaders, champion generative AI. I find it strange given how inaccurate it is - and not just inaccurate, but deceitful. It goes against all that we profess to believe regarding “sources” and epistemological rigor.
Most of the AI tools on offer, including ChatGPT, now come with an in-your-face warning that they aren’t accurate and shouldn’t be relied upon (solely). Note this message on ChatGPT Plus. I paid the $20 a month for a few months but just threw my hands in the air at how even basic math was beyond its abilities.
This is a new tool for generating timelines of historical events. Note, too, the warnings about it. I tried to generate a timeline for the relatively clear topic of the Cuban Revolution, but it mixed up Raul and Fidel in the timeline itself.
I think the warnings are good, necessary - don’t get me wrong. But I’m sure they are there just for legal reasons. Still, I have a huge problem. Let me explain …
I’ve written loads of published articles, texts and materials. But should I make one inaccuracy, I’m called on the carpet for it. I could list so many emails, comments etc … often over mere typos. And for good reason: those angry about finding inaccuracies in print are correct. No excuses! However, generative AI gets some kind of pass, as if it were ok not just to have the (rare) typo but to make up stuff and be wrong about basic facts. It gets a shrug, and teachers still use it in the belief that hey, we’ll catch it, nothing wrong there.
I feel the divide here underscores a fundamental belief system in operation regarding technology and how our culture views its use, and our relationship to it.
There is an old but true story that sheds light on this. I’ve told it many times in my presentations about technology. It goes like this.
In the late 1990s at Yale, there was a final engineering exam accounting for 50% of the final grade. Late the night before the exam, everyone received an email (a relatively new technology then) saying that the next morning’s exam was cancelled and would be rescheduled. The next day, exam day, only about a third of the students showed up. They wrote the exam, but ultimately it was rescheduled. The email was a fake, a hack. But most students believed it and didn’t attend because “Hey! It was in an email. It must be true.” If someone had stuck a note under the students’ dorm doors, nobody would have believed it. But because it was there blinking on a computer screen, delivered technologically, the message had to be authentic, true.
This illustrates our continued relationship with technology. It is always true, it is always superior in intelligence, output, design, knowledge. We must defer to it.
Sure, you might suggest that a warning label shows we’ve come to recognize the fallibility of technology. But based on my own observations of user behavior, I would profess the opposite - a big NO. The warnings are only a legal shrug. We still believe that what the machines tell us is true. We still don’t default to our own critical acumen. And that is terribly dangerous in education and for society.
I’ll give another example. So many of these same “leaders” and people in technology constantly refer to and ask “ChatGPT” or whatever AI source. They talk to it as if it were a person. They post about their conversations with it, as if theirs were a real relationship - and the very way they describe it professes a belief in the superiority and infallibility of ChatGPT, that it has something to say to us that we can’t say among ourselves. We are worshipping golden calves. False idols.
Sugata Mitra, a man I admire but don’t always agree with, wrote about a conversation he had with AI. I offer this up as just one of the “bait and switch” exercises going on regarding our relationship with generative AI. We defer to it.
My god, I even read that the IDF over-relied on AI for its border control, and that this, more than anything, was the reason for the security breach on Oct. 7th.
I feel like a mother whose child is dating a really stupid, ill-suited, naive young man or woman. I just want to say “Break it off.” Yet, they’ll have to learn the hard way.
Generative AI is far from being useful in most domains. Sure, for quite a number of rote tasks it is ok - give me an Excel sheet containing X. But if you are asking it about solving our problems with plastics, or what you should do next in your life … you are probably going to end up with a very broken heart. And I am not even mentioning the many other problems besides this one of inauthenticity.
Let’s back up, especially with our students. The ed-tech companies have sold the hype, and it is up to us to push back. Generative AI is a bad relationship for both teachers and students. It creates a default “get out of jail free” card for all kinds of things we should, for the benefit of education, be doing ourselves. We don’t need AI between us and knowing … let’s keep the agency where it belongs: in our hearts and minds and our own learning and teaching.
Sorry AI, I won’t be catfished by you any more. You don’t really exist. Be gone …