Developing human intelligence in the time of artificial intelligence
This last week I finally took the (non-cursory) plunge into understanding the latest developments in AI — machine learning, large language models, and of course, ChatGPT.
In the media I’ve consumed so far, some have compared ChatGPT to the (graphing) calculator, the computer, the internet, search, the iPhone, etc., in the seismic shifts it will lead to in how we live and work, while others downplay it as a fad that ultimately won’t live up to its hype or find its utility. On one end of the spectrum, we seem to be realizing science fiction — Ex Machina, Westworld, M3GAN — moment by moment. On the other end, is this just the next gold rush? Because ChatGPT is a linguistic technology (and thus inherently social), I’m far more interested in and far more likely to believe in its societal and political impact (Language: The Original Interface) than in, say, the last big thing, i.e. cryptocurrency. The hot takes that would bait me include …
- How might AI solve or exacerbate climate change?
- Anything involving Universal Basic Income (UBI)
- But above all, how fucked are my kids?
I hate playing into stereotypes but since I’m usually trying to multi-task as both parent and tech worker, e.g. listening to a tech podcast while cleaning up the playroom, my thoughts have naturally wandered to how these developments will affect my offspring, their less well-off peers, and future generations. While I am not free from existential angst and concern for my own career, I feel more certain about how to improve my prospects in the now than how to prepare them for the economy of the future. Maybe I have a false sense (one might say hallucination) that the changes coming won’t upend civilization in my lifetime. Did the cave person understand the discovery of fire? Okay okay no more hyperbole. (Will we understand hyperbole in 10 years?)
“What can or should I do about AI?” is a question every parent should be asking themself. And yet, I’d be surprised if any #parenting influencers or journalists have covered it. The one caveat: I have come across a fair amount of media about how it affects schools today. My main critique is that it’s often focused on ChatGPT as a tool to be fought over or shared between students and teachers, instead of grappling with ChatGPT as the precursor to a radical revolution in how humans think and exist in reality. (Hyperbole or die)
These are my questions as a language enthusiast, a software designer, and a parent:
- If large language models are to words what a calculator is to numbers, how should people learn to communicate? How should you learn to communicate well? Will there be more emphasis on linguistic concepts and principles but not actual writing, just as there is now on understanding math “concepts” but not performing manual calculations?
- Will there be any value in being a good writer? Or will there be less emphasis on developing written/”asynchronous” communication skills, which can be automated and are thus easier to produce, and more emphasis on developing oral/”synchronous”/face-to-face communication skills, which cannot be automated and are thus harder and more valuable?
- How biased are large language models toward business writing? Will they further “flatten” and shorten our common language down to machinespeak? Can you be a good writer outside the guidelines enforced by the models? What incentives do you have to learn more complex syntax or more specific vocabulary?
- What will be the highest-paying jobs, and what will offer the most jobs, if any — developing, training, or applying large language models? Maybe the answer is obvious, but maybe it isn’t.
- The other day, my mother told my 2-year-old to start learning math (so far they’re counting to 5), otherwise they won’t be able to keep up with AI. This kind of nonsense raises the question: in a world where the machine will always outpace the human brain, what is the right pace and focus for the human?
- How biased are large language models towards English, English code (is there any other kind), and Western worldviews? I assume there are Chinese companies working furiously away at their own versions. I’ve always wondered how different the world would be if the internet had been invented in China. Example: to my knowledge, Chinese words don’t quite have synonyms. Would that make it easier or harder to name variables and keys?
- If art can be generated in infinite variations at the press of a key, does that really replace the need for creative people and creative skills? Is this the death of creativity, the death of art and poetry? Or will creativity come to be less about generation and more about critique — choosing the best out of infinite possibilities? Can statistical probability replace human imagination and evoke human emotion? Will humans be driven to imagine and enact less and less statistically likely variations? Will it be more or less important to teach kids to “think different?”
- How old was the textual material fed into ChatGPT? How old were the books? Would studying materials outside its data set be an advantage or a waste of time? What can human history teach the model, and what can it teach our kids?
- If you have access to a learning partner that never gets tired or forgets stuff, why would you need to learn how to memorize? How will you be motivated to train your memory?
- How will children learn the difference between true and false information? How will they be able to tell the difference? How will the information they get from AI shape their thoughts?
I welcome any resources on these questions!