the effect of ai on intelligence (behold the idiocracy)

image from https://iai.tv/articles/ai-war-and-transdisciplinary-philsophy-2583-auid-2684

I’m sure my picture choice above suggests this post is just another doomsayer predicting AI will eventually kill us. Well, only if AI 1) becomes a true intelligence, 2) is asked if humanity will hurt it, and 3) has the ability to do us harm. Thanks Robopocalypse and countless movies for scaring the crap out of me, BUT I DIGRESS… This post is about what AI is (in layperson terms) and how the current trajectory will bring forth the Idiocracy.

TL;DR

AI can find answers to known questions today because it finds patterns & correlations in information that is already recorded. This can be incredibly powerful when we want to increase our productivity, but human supervision is still required to ensure quality responses. The problem is that it reduces humanity's need to keep building foundational knowledge, which will eventually dumb us all down & bring innovation to a standstill.

Longer version…

AI essentially finds patterns in existing knowledge. It takes "chunks" of information, such as book paragraphs or report sections, and creates & stores mathematical representations of these chunks, referred to as "vectors". When you "prompt" an AI for an answer, it creates a vector from your input message and then searches for closely related vectors that were previously created & stored.
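To make that concrete, here's a tiny sketch of "closely related vectors" using cosine similarity. The chunks and their three-number vectors below are completely made up for illustration; in a real system, a trained embedding model produces vectors with hundreds or thousands of dimensions.

```python
import math

# Toy "embeddings": hand-assigned vectors standing in for what a real
# embedding model would produce for each stored chunk of text.
chunks = {
    "Paris is the capital of France.": [0.9, 0.1, 0.0],
    "The Eiffel Tower is in Paris.":   [0.8, 0.2, 0.1],
    "Kafka is a streaming platform.":  [0.0, 0.1, 0.9],
}

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way; 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_chunk(query_vector):
    # Return the stored chunk whose vector is closest to the query's.
    return max(chunks, key=lambda c: cosine_similarity(chunks[c], query_vector))

# A question about France would (hypothetically) embed near [1, 0, 0]:
print(nearest_chunk([1.0, 0.0, 0.0]))
# -> Paris is the capital of France.
```

That `max(...)` over similarity scores is, at a very hand-wavy level, the "search for closely related vectors" step.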

Yes, I’m oversimplifying things a bit and AI is an ever-evolving space, but this means AI can produce fairly accurate answers to questions when there is enough relevant information already processed & recorded. If the requested knowledge is not stored somewhere (or the AI systems have not processed the information due to expense, obscurity, privacy, etc.), AI often hallucinates instead of simply stating it cannot offer a quality response.

The AI is not trying to lie to us, or “fake it until it can make it”; it was simply designed to provide a response, and when your question produces a vector with few (or no) closely matching vectors of existing information, it widens its scope. It is doing its job, but now YOU have to do yours: you have to evaluate the response for accuracy & applicability.
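One way a retrieval layer *could* behave differently is to abstain when no stored vector is close enough, instead of widening its scope. This is a sketch of that idea, not how production chatbots actually work; the threshold value and vectors are invented for illustration.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical stored chunk vectors (stand-ins for real embeddings).
stored = [[0.9, 0.1, 0.0], [0.0, 0.1, 0.9]]

def respond(query_vector, threshold=0.8):
    # If nothing stored is similar enough, admit it rather than guess.
    best = max(cosine_similarity(query_vector, v) for v in stored)
    if best < threshold:
        return "no confident match, needs human review"
    return "answer from closest chunk"

print(respond([1.0, 0.0, 0.0]))  # close to a stored vector: answers
print(respond([0.5, 0.5, 0.5]))  # vague query, far from everything: abstains
```

Today that "below the threshold" judgment call mostly falls to you, the human reading the response.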

This makes AI an incredible tool for scenarios such as customer support, where the inquirer is not a subject matter expert (SME) in a particular area (and doesn’t want to be) AND the inquiries have often been answered before. AI is also an incredible tool for SMEs looking to improve their productivity.

This all means that users of AI need to be aware that the quality of the responses they receive will vary, and those responses should be evaluated. For companies offering solutions such as customer support, the burden falls on the companies themselves to utilize humans & additional automation as quality-control agents supervising the AI systems.

For SMEs it usually means they need to learn to effectively instruct (aka prompt) an AI to get the most useful response and then apply their existing expertise to evaluate what was received and likely enhance/modify the result.

This all sounds pretty good, right? So, what’s the problem?

AI can be very useful for a while, but my concern is that as people (especially children going through K-12 and professionals early in their careers) leverage AI more and more, the foundational knowledge that is essential to making AI successful now will all but disappear.

Eventually, very few will invest the time and energy in building the subject matter expertise that makes AI possible. With smaller and smaller pools of SMEs, fewer and fewer people will be able to quality-check and fine-tune these AI systems for usefulness. And worse yet, with fewer and fewer people actually wanting to learn the fundamentals of a particular focus area, innovation itself will come to a standstill.

What’s next?

First, let me know what you think in the comments, even if you disagree. I usually talk about being #DataDriven, yet it is fair to say this post is more #EmotionallyDriven, thus there is LOTS of room for discussion. There are also plenty of pros & cons articles across the web, and I encourage you to see what other (sharper) minds have to say about all of this to round out your own position.

Second, keep learning and know it is OK to reply to your own email! Don’t always take a shortcut, and if in a learning situation, don’t cheat or shortchange the knowledge-building process!!

Lastly, I’m interested in your predictions on how long it will take for me to lighten the heck up and just give in to letting AI produce/consume correspondence & work deliverables instead of just trying to use that grey matter between my ears.

Maybe there’s a PEBCAK or I’m becoming a grumpy old man. I guess we’ll see, OR maybe AI can answer it all for me. 😉

Published by lestermartin

Developer advocate, trainer, blogger, and data engineer focused on data lake & streaming frameworks including Trino, Hive, Spark, Flink, Kafka and NiFi.
