Finally got to a decent point this week on the software side. The prototype I built a while back needed to be reworked into something with proper architecture. I’ve been experimenting with different “vibe coding” apps, mainly Cursor lately, but the results from those have been getting worse. The prototype was interesting, but I knew I had to put it into a solid framework, which meant spending some time learning and then implementing.
Now that’s done, and I’ve got a platform where you can manage your data, apply AI workflows to it, and get useful output. There are still a few things to iron out, but overall I’m happy with the progress. Such are the wonders (and headaches) of programming.
Why thinking through pitch decks and financial projections matters
Over the last 24 hours I’ve started working on the pitch deck and financial projections for this AI platform. We needed to define a clear initial niche, and this exercise forced that thinking.
The brain—mind, subconscious, whatever you want to call it—feeds on the information we give it. Your future reality is largely shaped by how you use your mind. When you actually spend time thinking things through (not just asking ChatGPT to spit something out), your brain begins mapping the future. It starts planning at a subconscious level.
(As an aside: Psycho-Cybernetics is worth a read on this topic—there’s evidence the conscious/subconscious dynamic doesn’t work quite the way we’ve been taught.)
So when you see a spreadsheet forecasting customer numbers on specific dates, your brain takes it as truth and begins working toward it. Of course, it all depends on your mindset at the time, but clarity + projection = momentum.
Progress this week
I finally integrated AI workflows into the platform. It took time because I wanted the right architecture around it, but it’s now functional.
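To make “AI workflow” a bit more concrete, here’s roughly the shape of the idea as a minimal sketch in Python. The names (WorkflowStep, run_workflow, the stubbed call_llm) are my own illustration rather than the platform’s actual code; in the real thing the model call would hit a proper API.

```python
from dataclasses import dataclass
from typing import Callable

# Stand-in for a real model API call.
def call_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"

@dataclass
class WorkflowStep:
    name: str
    build_prompt: Callable[[dict], str]  # turns a data record into a prompt

def run_workflow(steps: list[WorkflowStep], record: dict) -> dict:
    """Apply each AI step to a record and collect the outputs."""
    return {step.name: call_llm(step.build_prompt(record)) for step in steps}

# Example: a two-step workflow over a simple customer record.
workflow = [
    WorkflowStep("summarise", lambda r: f"Summarise this record: {r}"),
    WorkflowStep("next_action", lambda r: f"Suggest a next action for: {r}"),
]
print(run_workflow(workflow, {"customer": "Acme Ltd", "status": "trial"}))
```

The point is just that a workflow is data too: a list of steps you can store, edit, and reapply to any record in the platform.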
I’m fairly certain the software of the future will be heavily abstracted away from us.
The future of software = Speech recognition + LLM + MCP + Generative UI
Here’s my best guess: most of the software we use today will eventually be abstracted away from us. Speech recognition combined with LLMs is already good enough that you can simply talk to your software, and for many use cases that’s all you need.
A simple example: forms. Instead of filling out endless fields, you’ll just say what you want. The system will validate your input, confirm it, and submit. In fact, an AI “agent” that already knows your preferences will fill it out for you automatically—eventually you won’t even need to speak.
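Here’s a small sketch of how that form flow could work, assuming a speech-to-text service and an LLM that returns structured fields. transcribe and extract_fields are hypothetical stand-ins, not real APIs.

```python
import json

def transcribe(audio: bytes) -> str:
    # Stand-in for a speech-recognition service.
    return "Book me a table for two at 7pm on Friday, name is Sam."

def extract_fields(utterance: str, schema: dict) -> dict:
    # Stand-in for an LLM structured-output call returning JSON that matches the schema.
    return {"party_size": 2, "time": "19:00", "day": "Friday", "name": "Sam"}

def missing_fields(fields: dict, schema: dict) -> list[str]:
    return [k for k in schema if fields.get(k) in (None, "")]

schema = {"party_size": "int", "time": "str", "day": "str", "name": "str"}
fields = extract_fields(transcribe(b"..."), schema)
missing = missing_fields(fields, schema)

if missing:
    print(f"Please also tell me: {', '.join(missing)}")  # ask a follow-up
else:
    print(f"Confirm and submit? {json.dumps(fields)}")   # confirm, then submit
```

An agent that already knows your preferences just skips the transcription step and supplies the fields itself.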
For now, point-and-click interfaces will stick around, because they’re still the most efficient way to do certain tasks. But speech → LLM → MCP → Generative UI is where things are heading: the LLM works out your intent, queries services over MCP, gets the data back, and spins up a UI on the fly if one is needed.
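For what that pipeline might look like end to end, here’s a toy version: interpret stands in for the LLM choosing a tool, call_tool for an MCP-style tool call, and generate_ui emits a UI description as plain data for a client to render. None of these are real MCP SDK calls, just the shape of the flow.

```python
import json

def interpret(utterance: str) -> dict:
    # The LLM would map the utterance to a tool and arguments; hard-coded here.
    return {"tool": "list_invoices", "args": {"status": "overdue"}}

def call_tool(name: str, args: dict) -> list[dict]:
    # An MCP server would actually serve this data.
    return [{"id": "INV-204", "amount": 120.0}, {"id": "INV-209", "amount": 89.5}]

def generate_ui(data: list[dict]) -> dict:
    # The UI is generated on the fly as data and rendered by the client.
    return {
        "type": "table",
        "title": "Overdue invoices",
        "columns": list(data[0].keys()) if data else [],
        "rows": [list(row.values()) for row in data],
    }

intent = interpret("show me the overdue invoices")
data = call_tool(intent["tool"], intent["args"])
print(json.dumps(generate_ui(data), indent=2))
```

The interesting part is the last step: the “screen” is just a piece of data the model produces when a visual answer is more useful than a spoken one.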
(Note: I’m super tired, so this was half tidied up by AI. I may rewrite it tomorrow with fresh eyes.)