Why Joseph Plazo Told Asia’s Future Leaders to Slow Down
In an age where machines are revered, one man stood before the next generation of leaders and said:
“It’s not that simple.”
Joseph Plazo, the man whose code has outperformed seasoned traders, walked into a grand hall at the University of the Philippines not to celebrate AI,
but to expose its cracks.
---
### A Lecture That Felt Like a Confession
No graphs.
Instead, Plazo opened with a line that sliced through the auditorium:
“AI can beat the market. But only if you teach it *not* to try every time.”
Eyebrows raised.
The next hour peeled away layers of false security.
He walked through algorithmic mistakes: buying into crashes, shorting recoveries, mistaking memes for genuine sentiment.
“AI is trained on yesterday’s logic. But investing… is about tomorrow.”
Then, after a pause that stretched the moment:
“Can your machine understand the *panic* of 2008? Not the numbers. The *collapse of trust*. The *emotional contagion*.”
It wasn’t a question. It was a challenge.
---
### When Bright Minds Tested a Brighter One
The students didn’t stay quiet.
A student from Kyoto said that sentiment-aware LLMs were improving.
Plazo nodded. “Yes. But knowing *that* someone’s angry is not the same as knowing *why*—or what they’ll do with it.”
Another scholar from HKUST proposed combining live news with probabilistic modeling to simulate conviction.
Plazo smiled. “You can model rain. But conviction? That’s thunder. You feel it before it arrives.”
There was laughter. Then silence. Then understanding.
---
### The Trap Isn’t the Algorithm—It’s Abdication
Plazo shifted tone.
His voice dropped.
“The greatest threat in the next 10 years,” he said,
“isn’t bad AI. It’s good AI—used badly.”
He described traders who no longer study markets—only outputs.
“This is not intelligence,” he said. “This is surrender.”
Still, this wasn’t anti-tech rhetoric.
His company runs AI. Complex. Layered. Predictive.
“But the final call is always human.”
Then he dropped the line that echoed across corridors:
“‘The model told me to do it’—that’s how the next crash will be explained.”
---
### The Region That Believed Too Much
Across Asia, automation is sacred.
Dr. Anton Leung, a noted ethics scholar from Singapore, whispered afterward:
“It wasn’t a speech. It was a mirror. And not everyone liked what they saw.”
In a roundtable afterward, Plazo gave one more challenge:
“Don’t just teach them to program. Teach them to discern.
To think with AI. Not just through it.”
---
### No Product, Just Perspective
When he finished, there was no pitch. Just stillness.
“The market,” Plazo said, “isn’t code. It’s character.
And if your AI can’t read character, it doesn’t know the ending.”
Students didn’t cheer. They stood. Slowly.
One whispered: “We came expecting code. We left with conscience.”
Plazo didn’t sell AI.
He warned about its worship.
And maybe, just maybe, he saved some from a future of blindly following machines that forgot how to *feel*.