
A Is For AI

A personal glossary and notes about artificial intelligence, using the letter "A".

I’ve been continuing to explore generative AI and think deeply about its impacts and possible uses in education. Many, many YouTube videos, tutorials, projects. Right now, I’ve set up AutoGPT on a local machine and I’m exploring that. Personally, I do think AI has great potential to solve the problem of how children learn to read. More on that another time.

If you’ve read my previous thoughts about ChatGPT and AI in general, I’m not particularly bullish about it. There is a lot of hype. There is a lot of deterministic thinking going on. Plus, there is a lot of money driving the whole circus. It’s like 2010 and everyone is throwing money at the deeply wired, golden pig.

That said, I do think we need to think deeply, have discussions and be the ones driving the bus with hands on the wheel, managing how tools like ChatGPT and a plethora of others are used in our classrooms. There is a lot at stake, even the nature and foundation of learning and education itself.

Below are some thoughts, ruminations, questions. Short and, hopefully, meaningful and penetrating. I’ve used terms starting with the letter A to highlight my thinking about generative AI. Next week, I’ll list some thoughts using the letter “I”.

I do hope my brief glossary will ping some thoughts and stir the discussion about AI and education.

…………………………………………………………………………………

Artificial: Let’s start here. AI is not organic but material, and thus we use the word “artificial” to indicate the type of intelligence that machines have. It’s a VERY important distinction and one we should not let slip. The organic is crooked; the artificial is straight. With the artificial, we are in control: what it generates is only the result of our own input, design and setup.

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”—Eliezer Yudkowsky

Agency: ChatGPT, and all technology, solves one principal problem: efficiency, excess work. Technology aims to make things “easier” to do. It takes agency away from students, and sometimes that is not a good thing. Thinking involves a high level of agency, commitment, struggle, ownership, mental activity that teachers strive to develop in students. But what if that agency is taken away? What then is education, but a prompt and answer? Not the journey, but the simple booking of a ticket and arrival at the station. Not an adventure? Think about it like we do fitness. Nice if you can buy a fit body, get surgery, have plenty of O2 available. But what has been lost by not obtaining this through our own efforts?

Authenticity: What is real, true? It is scary to think how generative AI can generate, with the push of a button, thousands of stories that can be spread and “seem” true. In fact, you’ve already been reading many of them, unaware. Since 2017 or so, AI has been generating stories for many of the world’s major news networks and producers. Look at what happened with one German magazine’s recent AI-generated interview with Michael Schumacher. In English language teaching, many bloggers are simply generating AI content to drive traffic. Same, same, it is about profit, clicks. How will this all end? How will the authentic survive when copying is so easy? Music and film adapted to the digital world, but how will writing and authorship adapt, react? Should we welcome the “death of the author,” as Roland Barthes called for half a century ago? Will it all just boil down into an “average” mess (which, at best, is what ChatGPT generates)?

Anthropomorphism: I find it strange how we think of ChatGPT as a person. We chat with it politely and use formal discourse markers even though it is a machine. You know, please, thank you, could you … these aren’t needed for it to process a prompt, but we treat it as human. It really is “machine learning,” and I do think it dangerous to believe ChatGPT is a sentient being, as many materialists profess. Don’t you? I think we need to teach and point out this fact to students. It’s just a machine predicting the next token of text, based on very large data sets and reinforcement learning (human training, and who has thanked the hundreds of thousands of workers who got paid a pittance doing this work?).
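
To make the “just a machine predicting text” point concrete, here is a toy sketch of next-token prediction. It is nothing like ChatGPT’s actual architecture (this is a toy bigram counter, not a neural network, and the little corpus is mine, purely for illustration), but the core idea is the same: pick the next word based on what tended to follow it in the training data.

```python
import random
from collections import defaultdict

# Tiny corpus standing in for the web-scale data real models train on.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count, for each token, which tokens tend to follow it.
counts = defaultdict(lambda: defaultdict(int))
for current, following in zip(corpus, corpus[1:]):
    counts[current][following] += 1

def predict_next(token):
    """Sample a next token in proportion to how often it followed `token`."""
    candidates = counts[token]
    if not candidates:
        return None  # token never appeared mid-corpus; nothing to predict
    tokens, weights = list(candidates), list(candidates.values())
    return random.choices(tokens, weights=weights)[0]

# Generate a short continuation, one predicted token at a time.
text = ["the"]
for _ in range(6):
    nxt = predict_next(text[-1])
    if nxt is None:
        break
    text.append(nxt)
print(" ".join(text))  # e.g. "the dog sat on the mat and"
```

No understanding, no intention. Just counting and sampling, scaled up enormously.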

“Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.”—Alan Kay

Attribution: Who is the author? The source? The machine? The company owning the servers? Every person whose text and images were used by the AI? Certainly not whoever prompted the AI, right? For me, the question is a red herring. AI is not even a secondary source; it just gives the illusion of thought and originality but truly has none. If I were to put together a puzzle of the Mona Lisa, would I be the author of said picture puzzle? No. What about the puzzle maker, the one that created the pieces? No. It’s a copy. So who to attribute? Nobody. Only indicate that AI was used in the creation. Yet the APA has a style guide for citing ChatGPT as if it were a real author or source. I’m befuddled. I guffawed. How can you cite something that doesn’t exist? Meaning, the information ChatGPT provides is not retrievable, thus not a valid source. It cannot be verified, checked. It simply can’t, so citing makes NO sense at all.

We do NEED all AI content to be clearly marked and should make this mandatory, something built in as attribution. Right now, you can read “AI Powered” LinkedIn articles that are totally AI created. I find this a rabbit hole of content generation we don’t want to go down.

Affordances: It’s a useful term to define what generative AI can do, though the term has a troublesome past. Basically, we need to know the limits and abilities of generative AI. Use cases. Education needs to define these.

Alignment: A much-talked-about problem and black hole. How can we make sure AI is doing what we want, acting with our own values and general culture in mind? Technology doesn’t (yet) understand emotion, pain, the soul, human affection. So, until it has been aligned, we are hesitant to let it make decisions for us, especially big decisions. Or should we let it make any decisions at all, given how AI could disrupt our world? Why bother to align AI when we should control it? Isn’t that most important?

“The upheavals [of artificial intelligence] can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to obliterate cancer is to exterminate humans who are genetically prone to the disease.”—Nick Bilton, tech columnist, writing in The New York Times

Accuracy: ChatGPT “hallucinates” and shows its machine soul by providing very inaccurate information in most contexts. It’s startling and revealing at the same time. Why do we use a tool when it clearly is error prone, so error prone? Type in your own name, for example, and see what biography it comes up with! Generative AI as currently designed won’t ever, despite Musk’s proclamations, be accurate. There is no truth to be true to, for a machine that doesn’t “live” in the world and just cuts and pastes text together. At bottom, AI hallucination is a huge warning sign that the underlying premises on which it is based are wrong, deadly wrong. Filled with bias, non-contextuality and inhuman bearings.

“I’m more frightened than interested by artificial intelligence – in fact, perhaps fright and interest are not far away from one another. Things can become real in your mind, you can be tricked, and you believe things you wouldn’t ordinarily. A world run by automatons doesn’t seem completely unrealistic anymore. It’s a bit chilling.” —Gemma Whelan

Access: ChatGPT is accelerating language death. Perhaps only 20 of the world’s languages will ever be AI accessible. There just won’t be the time and effort to train and make active the rest of the world’s languages. They’ll be left behind. In particular, many languages with difficult writing scripts, many of them Asian. Speech recognition, too, ill serves the almost billion people on earth who are illiterate (I hate that word). This means more and more use will be in these primary languages, and the others will be left on the doormat of history. As Wade Davis so well outlined, our cultural genome, and language in particular, is so vital to our world. What will happen when we so quickly head down the road of language homogenization? Further, let’s think about who will be able to access generative AI once the paywalls really get turned on (to keep the server lights on). Probably only the rich part of the world. More haves and have-nots. Is this a good thing?

Accommodation: Historically, most of what we call “educational technology” came from developments to accommodate students with low ability or disability. However, generative AI has no such focus. Despite all the rhetoric and blah blah blah, for-profit technology has a very poor track record of inclusion and accessibility. There will be winners and losers. Like so much of education, it is about those at the top of the class. How will AI help students with issues of access? How does it accommodate students with special needs? Will these students simply be left behind? Nobody is discussing this; it’s still a closed door.

Awareness: Is AI aware of what it does? Will it one day demand “human” rights and claim that if we shut it off, we are causing it pain, even death? Consciousness is a riddle with many definitions. I tend to go by this view: consciousness is organic in origin. It’s a collective sense of belonging, existing. I don’t see any end game where awareness plays a part with AI, though many materialists believe life is just swirling atoms and it is all the same, human or inanimate. We need a much stronger Turing-type test for consciousness. Right now, so many are being fooled into thinking AI shows signs of awareness. AI is nothing close to how the human brain or embodied cognition works.

“I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.”—Geoffrey Hinton

Assessment: How do we know students know? That students understand? This is the million-dollar question now that generative AI allows undetectable answering. Did the student simply prompt a machine for the answer, or do they actually know it: can they restate the answer, apply it, apply it in differing environments and modify it? Assessment right now is not ready for these new forms. It must be turned on its head, given the non-attribution and non-detectability of generative AI. Teachers need to know that students know.

Augmentation: What will ChatGPT augment and make “more”? Or will it just destroy jobs and lessen, decrease so much of what we humans have gotten used to doing, creating, being? Will the world be swamped with AI-generated text, and what will that do to our ways of knowing, the epistemology inherent in our world of meaning? Will nobody “have” knowledge because it has been so devalued, because there is so much of it, so easy to have? Where goes “genius” before this electronic savant partnering with each of us?

Architecture: Is AI as “open” as they say? Nobody really knows what it is actually doing, the ghost inside the machine. Should we know? How will it interface with existing technology? Its API is open for now, but like so much of the web, we know those doors will close once the $ sign and signature become larger. What are the rules by which generative AI will be embedded into the wider world: your computer, of course, but also security services, data management, even your refrigerator?
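
For now, “open” mostly means an API anyone with a key (and a credit card) can call. A minimal sketch, assuming OpenAI’s Python package; the model name and prompt below are my placeholders, not a recommendation:

```python
# Minimal sketch of calling OpenAI's chat API (pre-1.0 openai package style).
# Assumes: pip install openai, and a valid API key.
import openai

openai.api_key = "sk-..."  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # placeholder model choice
    messages=[
        {"role": "user", "content": "Summarize the alignment problem in one sentence."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

The point is how thin that doorway is: a few lines, and the model is inside your app, your school portal, maybe your refrigerator, on whatever terms the company sets.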

Alienation: Technology, as it grows and becomes a bigger part of our lives, leads inevitably to estrangement, depression, inauthentic experience, social breakdown. If AI comes into our world ever stronger, what will that do to our traditional communities, socialization, relationships, ways of loving and understanding each other? Do we risk alienation and becoming unable to tell the human from the artificial?

Appropriation: OpenAI and other AI companies have basically “stolen” and appropriated others’ creative works. Stability AI, Midjourney and other AI image generators have been hit with class-action lawsuits. The case against them, and even against ChatGPT, is solid. YOU are/were the data. How will this be resolved legally? There are so many other issues in this realm. Who owns thoughts, words? How many words until ownership begins?

“Before you diagnose yourself with depression or low self-esteem, first make sure that you are not, in fact, just surrounded by assholes.”—William Gibson
