The Day Lightning Struck
- Jul 11
By Tom Nault, CEO, Hudson Cloud Systems

When ChatGPT went live on November 30, 2022, we didn’t pay much attention until sometime that December, and there was no talk of it at the Consumer Electronics Show (CES) in January 2023. Large language models weren’t mentioned in any form, anywhere, as if they had no bearing on anything at the show. We talked about it among ourselves, but that was it.
I didn’t begin using ChatGPT regularly until one night I woke up like I’d been struck by lightning and thought: what if I asked it to edit my work? I grabbed my laptop from bed, fed it a draft, asked it to clean it up, and it did a beautiful job. I knew at that moment my life had changed forever. I didn’t sleep that night. The next day I was feeding it everything!
That wasn’t long ago, but in the world of LLMs, it may as well have been ancient history. Today at Hudson Cloud, we use LLMs across our workflows, and the idea that this started only 2.5 years ago still astounds me. Where will we be in another 2.5?
People tend to think of technological progress as linear. They assume LLMs and related technologies will improve each year by some predictable percentage. But that’s not how this works. Progress here is exponential, not incremental. And for some reason, our human brains struggle to wrap themselves around compounding progress.
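The gap between linear and compounding improvement is easy to underestimate, so here is a toy illustration. The numbers are purely hypothetical, not measurements of any real model: both scenarios start from the same baseline and improve by the same amount in year one, then diverge sharply.

```python
# Toy comparison of linear vs. compounding progress.
# Linear: a fixed +20 units per year. Compounding: +20% on the
# previous year's level. Illustrative numbers only.
baseline = 100.0
years = 10

linear = [baseline + 20.0 * t for t in range(years + 1)]
compound = [baseline * 1.20 ** t for t in range(years + 1)]

for t in (1, 5, 10):
    print(f"year {t:2d}: linear={linear[t]:.0f}, compound={compound[t]:.0f}")
# After one year the two look identical (120 vs. 120);
# after ten years, 300 vs. roughly 619.
```

Early on the two curves are indistinguishable, which is exactly why intuition trained on the first few years of a trend gets the later years so wrong.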
From late 2022 to mid-2024, OpenAI’s models went from GPT-3.5 to GPT-4 to GPT-4o—each leap reducing latency, improving multimodal input, and compressing what used to take minutes into seconds. GPT-4o can respond to speech in as little as 232 milliseconds, about 320 milliseconds on average, which is roughly on par with natural human conversation. The model handles vision, audio, and text in a single system and can outperform GPT-4 in many use cases, while being cheaper and faster to operate.
That level of advancement in under two years is not evolutionary; it’s a new pace of invention. If that same rate continues, the systems we’ll be using by 2027 will make today’s models look primitive.
Ray Kurzweil has long predicted we’ll achieve true Artificial General Intelligence by 2029, and he may be right. NVIDIA has projected that the total addressable market for AI infrastructure will reach $600 billion annually by 2027, up from roughly $100 billion in 2023. If you’re wondering whether LLM development will slow down, it won’t. The incentives are too big, and the market is now racing at full speed.
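To put those market figures in compounding terms, here is the growth rate they imply. This is simple arithmetic on the numbers quoted above, not an independent forecast:

```python
# Implied compound annual growth rate (CAGR) for AI infrastructure
# spending: roughly $100B in 2023 growing to $600B by 2027.
start, end = 100e9, 600e9
periods = 2027 - 2023  # four compounding periods

cagr = (end / start) ** (1 / periods) - 1
print(f"Implied growth rate: {cagr:.1%} per year")
# A sixfold increase over four years works out to roughly 57% per year.
```

Sustaining anywhere near that rate is what makes a slowdown so unlikely: the money chasing the field grows faster than the field itself matures.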
This is why I love being the CEO of Hudson Cloud Systems. We are a product company that benefits from every leap in LLM or AGI development. We can push updates across our entire client base faster than any on-prem server environment ever could. And we do it without fear of becoming obsolete, because someone still has to physically make the hardware work. That’s where we come in.
AI, in turn, helps us plan, upgrade, and future-proof systems at speed and scale. We aren’t just along for the ride; we are out front. We comb overnight AI updates every morning before we start work. If something new is released, we know about it instantly or, at worst, within a few hours. And yes, it’s all tremendous fun for us.
Every PC sitting on a desk is aging out by the day, and more and more fall behind with nothing but expensive options for staying current. In a world where desktops were installed at different times and in different states of readiness, staying current becomes a nightmare. That’s why we feel lucky to be in the niche we’re in.
For us, this is fun. For many IT departments still stuck with outdated desktop models, it’s anything but.
For the schools and institutions that want to stay ahead of the AI curve, we’re ready for you. We can carry you with us. But for those choosing to wait and see, time is working against you. Meanwhile we’re loving all of this and are excited about what we can do for our customers. That’s what makes Hudson Cloud Systems unique and why I’m so excited about the work we’re doing here.
Stay tuned for some exciting AI developments as they unfold. xAI is releasing a new version of Grok later today. See what we mean? You know I’ll be up late working with it tonight.