I’m a techno-optimist by default: I do believe technology can solve the problems we face today and make our lives better. But I have concerns about the direction we are taking with Generative AI, and the future we are heading towards. This is an opinion piece...
I’ve not posted anything in over six months! It’s been a busy time at work, and I’ve not had much time to experiment. I’m not great with words and I do not want to use GenAI to “improve” my writing. This document is v1, dated 18 March 2025; these are thoughts I have been mulling over and will no doubt continue to mull over.
We’ve been promised that AI will take away the mundane work and leave us with the high-value creative work. We’ve been promised that Large Language Models (LLMs) will enhance productivity. No doubt it would be useful to have an AI summarise long documents or conversations and answer questions accurately, based on facts. However, today’s LLMs don’t do this well, and often “hallucinate.” LLMs predict text based on context, and therefore strive to provide answers that please, even at the expense of correctness. We are trying to implement guardrails, but today’s guardrails “overlay” LLMs with code or with more LLMs, implying it is impossible to correct, through training, the intrinsic problems with LLMs.
Instead of “mundane” Q&A, today’s Generative AI models seem to be better at creating code, images, music, poems, and stories. I’ve been playing around with image-generation AI like Stable Diffusion, Flux, etc. but have realised I do not want a “creative” AI. I do not want an AI to generate “art.” I want real human-created art, music, stories, code, and even emails that someone put thought into, meticulously crafting their meaning or message.
I am also concerned that AI is an enabler for fraud, for hyper-personalised phishing spam, for cheating (even on something as trivial as homework) and for rapidly penetrating and hacking systems. We see AI used to spread disinformation, breach cybersecurity, generate personalised spam, and generally fool us (voice cloning, deep fakes). I especially hate the proliferation of AI-generated images, podcasts, videos, books, etc. plaguing social media, creating echo chambers that spread biases, lies and half-truths. Look at our divided political landscape today!
(With the flood of AI-generated crap and synthetic data, I think we might look back at some time around 2023 as the cut-off for “clean” training data scraped from the Internet, but that’s a separate discussion.)
Of course, new techniques are being invented and better models released every day. Examples include Large Quantitative Models (LQMs), which work on data using maths and statistics to solve prediction, optimisation, and simulation problems; and neurosymbolic LLMs, which fuse LLMs with Symbolic AI (or Good Old-Fashioned AI) so they can understand language for accurate reasoning and rules-based decision-making.
No doubt a more deterministic AI model would be more valuable than today’s LLMs. But no matter the improvements, surely machines, being trained on data, will lack life experience, EQ, intuition, the ability to learn and the desire to improve. I am concerned that we will delegate important decisions to machines, so-called “human out of the loop” (HOOTL), in the name of scaling expertise: in medicine (treatment, triage, etc.), in business (approvals, eligibility, etc.) and, worse, in war.
Speaking of HOOTL, today we are experimenting with “agentic AI.” Imagine an AI “bot” that can autonomously perform tasks on our behalf by breaking down our instructions into discrete steps that the AI will independently work on, step by step, to achieve the final outcome. “Buy me a ticket to Rome if it is sunny” or “Research the market and buy top stocks based on predicted returns”. These steps could involve understanding our current profile, doing “research” on the Web, reading a document or instruction manual, asking for more information, or even talking to another agentic AI to achieve its goal.
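That plan-then-execute loop can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the idea, not any real agent framework: the `plan` and `execute` functions stand in for LLM and tool calls, and the canned “weather check” and “ticket purchase” are made-up placeholders.

```python
# Hypothetical sketch of an agentic loop: a planner decomposes an
# instruction into discrete steps, and the agent works through them
# autonomously, feeding each result into the next step.
# All names and the canned "plan" below are illustrative assumptions.

def plan(instruction):
    """Stand-in for an LLM call that breaks an instruction into steps."""
    if "sunny" in instruction:
        return ["check_weather:Rome", "decide:sunny", "buy_ticket:Rome"]
    return ["ask_for_clarification"]

def execute(step, context):
    """Stand-in for a tool call (web lookup, API, another agent...)."""
    action, _, arg = step.partition(":")
    if action == "check_weather":
        context["weather"] = "sunny"          # pretend Web research
    elif action == "decide":
        context["proceed"] = context.get("weather") == arg
    elif action == "buy_ticket" and context.get("proceed"):
        context["result"] = f"ticket bought to {arg}"  # the "transaction"
    return context

def run_agent(instruction):
    context = {}
    for step in plan(instruction):            # work step by step, no human in the loop
        context = execute(step, context)
    return context.get("result", "no action taken")

print(run_agent("Buy me a ticket to Rome if it is sunny"))
```

Note that once the loop starts, no human reviews the intermediate decisions: the “decide” step and the “transaction” both happen inside the machine, which is exactly the property discussed below.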
As amazing as this sounds for tasks like researching a travel destination and building a personalised itinerary, I’m concerned about AI that can make decisions and perform “transactions” on our behalf. Surely this is a recipe for disaster, given the potential for unexpected, inscrutable, even damaging outcomes. Not to mention the potential for fraud raised earlier, since we really should not trust AI bots.
Imagine that world. We use machines for creative work. We use machines for “symbolic” work. We are just prompt engineers. We feed the machines input to get an output. At most we adjust what we feed the machines, but even that is a fool’s errand. Today there are courses on “prompt engineering”, no doubt some already created by AI; tomorrow there will be an AI that improves what we feed automatically, altering and likely dumbing down this future so-called “career.”
What will the next generation be left with? Perhaps they can look forward to being jobless and worthless, having been replaced with robots and AI. Perhaps they can enjoy dead-end, soul-crushing pencil-pushing to feed the machine that is the “organization.” Perhaps they are the growth engine only in the sense that they are nothing but consumers, who buy and eat but never find satisfaction.
Perhaps they will never feel that they are of value, that they contribute, that they matter. This is the new reality of the corporate environment we live in. Workers are churned and spat out as and when, retired when they are too costly, replaced at the slightest whim of their overlords.
Perhaps they will never experience the joy and thrill of creating something beautiful and elegant, be it art, fashion, music, literature, code, maths, gastronomic delights, in fact anything handmade, created with thought, intention, dedication, pride and love.