I spend a lot of time loving and hating AI.
My company makes a ton of money off of the AI rush, even though we aren't an AI company.
The circle of intellectuals I work with in the Department of Defense/War is all over the place on it. What using an LLM to help write does to me (especially someone with so many ADHD and OCD tendencies) is a lot different, cognitively, from what it does to my kids. And that's why AI is at a really dangerous inflection point right now.
Another friend of mine wrote a great white paper about how vibe coding is dangerous and also amazingly powerful. And yet three other friends ran a vibe-coding workshop and put out some ultra-powerful tools.
I can say this: I wrote this blog without any LLM help.
But I wrote a lot of code over the past few weeks in my free time with a ton of LLM help. Vibe-coded the crap out of it. And I can say this: all my friends (Jeff Bailey of Nike, and then Bryon, Adam and Carlo of Rise8) were totally right, but they also all know WHY they are right.
Fundamentals.
If you start slinging code into a compiler or a Kubernetes orchestrator or container or whatever, without understanding design principles, code basics, and data design principles, you'll be lost in no time, and suffering greatly for it. But man, with that skill set in place, vibe coding makes productivity astronomical. I built my own app and a Kubernetes deployment on Google that does day trading for me automatically. I used LLM agents within my code to be the "good idea fairy" for certain behaviors I'm looking for: did someone known to be corrupt in Congress invest in something? Are behaviors for a certain stock or cryptocurrency over the last month better suited to channelization (and even then, do I use Keltner, Bollinger, or Donchian methods)? I'm in the process now of writing a penny-stock trading portion, and then another portion that will add shorting (all my positions now are long positions) as a capability... Then seed some capital and let it run.
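For readers who haven't run into those three channel families: they're standard textbook indicators, each drawing an upper and lower band around price. A minimal sketch of the generic definitions follows; this is not the author's implementation, and all function names, window sizes, and multipliers here are hypothetical defaults.

```python
# Generic textbook definitions of the three channel families mentioned
# above -- Bollinger, Donchian, and Keltner. Hypothetical sketch, not the
# blog's actual trading code.
from statistics import mean, pstdev


def bollinger(closes, n=20, k=2.0):
    """Bollinger Bands: simple moving average of closes +/- k std devs."""
    window = closes[-n:]
    mid = mean(window)
    band = k * pstdev(window)
    return mid - band, mid, mid + band


def donchian(highs, lows, n=20):
    """Donchian Channel: highest high and lowest low over the window."""
    upper = max(highs[-n:])
    lower = min(lows[-n:])
    return lower, (upper + lower) / 2, upper


def keltner(highs, lows, closes, n=20, k=2.0):
    """Keltner Channel: EMA of closes +/- k * average true range."""
    # Seed the EMA at the start of the window, then smooth forward.
    ema = closes[-n]
    alpha = 2 / (n + 1)
    for c in closes[-n + 1:]:
        ema = alpha * c + (1 - alpha) * ema
    # True range uses the prior close, so pair each bar with the close
    # one step behind it (requires n + 1 bars of history).
    trs = [max(h, pc) - min(l, pc)
           for h, l, pc in zip(highs[-n:], lows[-n:], closes[-n - 1:-1])]
    atr = mean(trs)
    return ema - k * atr, ema, ema + k * atr
```

The practical difference is what each band reacts to: Bollinger widens with close-to-close volatility, Keltner with intraday range, and Donchian simply tracks recent extremes, which is part of why picking among them per-instrument is a real tuning decision.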
The fun part has been that I know the scope of trading okay. I'm certainly no professional, though. And this is all being done in Python... about which I'm clueless. I'm an old-school GNU C guy who remembers when Linux libs were a.out, before ELF, before lib, back in the long-long ago.

But software design principles? The database structures? These are all things I'm extremely comfortable with. So when wiring something like this together, the LLM becomes part of the solution that makes me more productive.
And yet I can't help but think that if I'd tried to do this otherwise, without basic skills, I'd have gotten garbage out. The LLM missed certain parameters I found while copy/pasting procedures that would have made a purely vibe-coded toolset either worthless or, worse, actively harmful. It also randomly hallucinated on certain things.
Instead, I'm making steady profit (we'll see if it's enough to justify my Google bill; my k8s engine costs are currently pretty steep). But I also know I would have missed a lot of )'s and ;'s if it hadn't been for an LLM helping me code the whole time.
As to LLMs and AI in general, relative to deep decision making? I think there's a lot of opportunity, especially in the military space, where we need to keep moving forward. Every time I hear a "horror story" about putting the genie back in the bottle, I'm reminded of how foolish everyone is being by trying to avoid reality and growth. There are real market forces pushing us forward, and there's going to be development on it. You can't stop it any more than you can make nuclear fission un-learned. It's reality and it's marching forward. The questions are tougher than that; you can't say "I don't want AI autonomous systems involved in killing people." It already happens. The question is whether you want people in Beijing or Moscow or Washington or Brussels in charge. And even then, you can't really decouple the decisions in Beijing's war rooms from the work in Shanghai's boardrooms, or the decisions in the Pentagon from the decisions in garages in Silicon Valley. These things are going to move forward.
I'm more worried that we'll end up with cognitive deficits in human development because we put too much faith in the technology before it's ready than I am that we'll fail to recognize its strengths and weaknesses relative to the cognitively developed humans doing the work.
I guess we'll see.
We'll also see if I get any growth in my market system... Strategy B (Congressionally tracked), C (LLM-informed channelization of equities using long positions, with different risk buckets, channelization methodologies, and volume/liquidity weights), and E (LLM-informed channelization of crypto) are live. They've all generated a little profit this week. All the trading is purely algorithmically deterministic; I only use AI models for tuning suggestions that I implement myself through an Admin API/UI I wrote with my own OAuth tied to my own accounts (I even managed to make it two-factor pretty easily).

Strategy A is next (OTC penny-stock trading, which is going to require a whole new set of containers and structures in a pretty robust model, one that shares only the Admin API/UI with the rest of the system), then Strategy D (the ability to short; this one is obviously dangerous compared to the rest). When all are done, I'll let them run for a few months, and once the system is making enough each month in rolled-over profits to operate the entire k8s architecture (I'll be subsidizing it for a few months), I'll write an overall orchestrator to move funds between strategies based on their successes; right now, all three live ones and the two new ones I'm planning live in their own isolated universes.
