Cursor can now do automations.
Target:
Custom platform -> web hooks … users will provide feature requests and bugs found … agents will attempt to implement … humans will QA and merge PRs.

On 13th February 2025 I embarked on a new adventure. That’s over a year ago now, so it’s worth taking some time to re-appraise. I’d given myself until October 2025 to make something worthwhile. In the last year, AI has improved incredibly rapidly – at the time of writing, Opus 4.6 is out and it really is excellent. Programming as a career is changing, but I am less sad about the consequences now. For me at least, it unlocks a whole new world of productivity, and the key is being willing to take the leap, work smart and learn the art of using agents and sub-agents.
Keeping consistent online updates is a challenge
The main task last year was to learn AI and build something out of it before October 2025. At that point, if I wasn’t satisfied with the direction I was going in, I would happily concede and move back to employment. I also said I would post regular LinkedIn, YouTube and Instagram updates. I did pretty well at keeping this blog up to date in the beginning, but as I got more embedded in other things, keeping my public profile current was not something I could keep up with.
The key thing is – I have ended up somewhere in the middle: I didn’t completely fail and I didn’t completely succeed. I have recently taken an employment role at a local company that I’m very happy with. It’s great to be back working in a team, in a company that is well organised, with all the social good that you miss out on when you’re self-employed. It’s a somewhat part-time role at three days a week, but I’m working quite intensely throughout those days – last week, for the first time, I saw the light and had half a dozen Opus agents working alongside each other on the same codebase. The only challenge is keeping on top of them all and making sure they are behaving themselves.
So, having gone back to employment, I have fixed the looming financial problems that were mounting. Rather than stop in October 2025, I went on for a few more months to see how far I could push things. Whilst nothing is certain in life, having some structure and a regular paycheque this year is truly welcome – as is having some good projects to get my teeth into.
Alongside this, I have developed an AI platform called AffiliateFactory in partnership with Digital Fuel Performance. Its foundations are based on things that I have been working on for a while, and whilst it is essentially a website management system … it is evolving into a custom agent marketing platform. After a few more iterations over the next few months I will be releasing more information on it.
My intention is to continue working on this platform alongside my regular employment.
If you’re building AI-powered features like semantic search or working with Large Language Models (LLMs), you’ve probably encountered terms like “vectors,” “embeddings,” “token embeddings,” and “neural network weights.” These concepts are often confusing because they’re related but serve very different purposes.
This guide will clarify how these terms relate and where they differ.
A vector is simply a list of numbers. In mathematics, it’s an array of numerical values.
Common Misconception: “Vectors are always 3D (three numbers) representing points in 3D space”
Reality: Vectors can have any number of dimensions (any number of values), not just 3!
Examples:
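A few illustrative vectors, written as plain Python lists (the numbers are made up – the point is only that the “dimension” is the length of the list):

```python
# A vector is just a list of numbers; its "dimension" is how many numbers it holds.
v2 = [3.0, 7.0]                                # 2 dimensions (a point on a flat map)
v3 = [1.5, -2.3, 0.8]                          # 3 dimensions (the familiar [x, y, z] case)
v5 = [0.1, 0.9, -0.4, 0.0, 2.2]                # 5 dimensions
v768 = [0.123, -0.456, 0.789] + [0.0] * 765    # 768 dimensions, like a text embedding

for v in (v2, v3, v5, v768):
    print(len(v), "dimensions")
```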
Why the Confusion?
3D graphics (video games, 3D modeling) popularized the concept of vectors as `[x, y, z]` coordinates, but that’s just **one use case** of vectors.
Vectors are used everywhere in computing – graphics coordinates, audio samples, machine-learning features and text embeddings are all just lists of numbers.
The “Space” Concept
While 3D vectors represent points in 3D space, higher-dimensional vectors represent points in higher-dimensional spaces.
You can’t visualize 768-dimensional space, but mathematically it works the same way – it’s just more dimensions!
An embedding vector is a specific type of vector that represents text (or other data) in a way that captures its semantic meaning.
Key Point: An embedding vector IS a vector, but it’s a vector with a specific purpose – to encode meaning.
The Relationship:
– ✅ An embedding vector **is** a vector (it’s a list of numbers)
– ❌ Not all vectors are embedding vectors (vectors can represent many things)
Think of it like this:
Vector = A container (like a box)
Embedding vector = A specific type of box (one that contains meaning-encoded numbers)
An embedding vector is a numerical representation of text that captures its semantic meaning. Think of it as converting words into a list of numbers that represent what the text “means” rather than what it “says.”
When you vectorize text like “Government announces new climate policy,” the embedding model converts it into a list of numbers:
Original: “Government announces new climate policy”
Vector: [0.123, -0.456, 0.789, 0.234, -0.567, …] (768 numbers for nomic-embed-text)
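As a rough sketch of how you might produce such a vector yourself, assuming a local Ollama install serving the nomic-embed-text model (the endpoint and field names below follow Ollama’s embeddings API; the printed numbers will differ from the illustration above):

```python
import requests

# Ask a local Ollama instance (assumed to be running on its default port)
# to embed a sentence with the nomic-embed-text model.
resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={
        "model": "nomic-embed-text",
        "prompt": "Government announces new climate policy",
    },
    timeout=30,
)
resp.raise_for_status()

embedding = resp.json()["embedding"]   # a plain list of floats
print(len(embedding))                  # 768 for nomic-embed-text
print(embedding[:5])                   # first few of the 768 numbers
```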
In practice, people often say:
– “Vector” when they mean “embedding vector” (in AI/ML context)
– “Embedding” when they mean “embedding vector”
These are usually interchangeable in conversation, but technically:
– Vector = General term (any list of numbers)
– Embedding = The process of converting data to vectors
– Embedding vector = The resulting vector from embedding
Think of it like a fingerprint:
– A fingerprint uniquely identifies a person
– But you can’t reconstruct the entire person from just their fingerprint
– Similarly, a vector captures the “essence” of text meaning, but not the exact words
Mathematical Reason: The transformation is lossy – information is compressed and discarded. Multiple different texts could theoretically produce similar (or even identical) vectors, so reversing would be ambiguous.
Use Case: Semantic Search
Embedding vectors excel at finding semantically similar content:
Example:
– You search for: “climate policy changes”
– The system finds:
– “Government announces new carbon tax legislation” (high similarity)
– “Parliament debates environmental protection bill” (high similarity)
– “Manchester United wins match” (low similarity – correctly excluded)
Even though these articles don’t contain the exact words “climate policy changes,” they’re semantically related.
How Similarity Works in High Dimensions:
Just like you can measure distance between two points in 3D space:
– 3D: Distance = √[(x₁-x₂)² + (y₁-y₂)² + (z₁-z₂)²]
You can measure “distance” (similarity) between two points in 768D space:
– 768D: Similarity = cosine of angle between vectors
The math works the same way, just with more dimensions!
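As a small, self-contained sketch (with made-up toy vectors rather than real 768-dimensional embeddings), cosine similarity is just the dot product of the two vectors divided by the product of their lengths – the same code works no matter how many dimensions the lists have:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors of the same dimension."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" (real ones would be 768-dimensional).
query    = [0.9, 0.1, 0.0, 0.3]   # "climate policy changes"
article1 = [0.8, 0.2, 0.1, 0.4]   # "Government announces new carbon tax legislation"
article2 = [0.1, 0.9, 0.7, 0.0]   # "Manchester United wins match"

print(cosine_similarity(query, article1))  # close to 1.0 -> semantically similar
print(cosine_similarity(query, article2))  # much lower   -> correctly ranked below
```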
The models have already been trained on billions of words and their ‘closeness’ to each other, which is why, when you vectorise an article (or anything else), a vector search will find semantically similar content.
Whilst exact keyword search is slightly faster, vector search enables search through meaning, which means it’s a lot more flexible.
For companies to win during this major transformation, the key principle is having someone at C level who is responsible for integrating AI. This person must understand the business workflows and combine that with AI knowledge. The creation of a Head of AI role is the first step to take in your AI transformation.
Initially the AI role is focused on becoming more effective at what the company already does, i.e. making X widgets faster or better, or looking after more customers for less money in customer service. But in reality, the real winners will be the ones who innovate with AI.

Whilst I’m enjoying traditional web programming enhanced by my AI partner (still using Cursor, though I may move to another tool soon since it feels like it’s got worse) … I am constantly aware that the software world is going to be dramatically different very soon. I don’t think I, in particular, will be out of a job … but it is definitely a possibility. However, the inertia of software stacks will probably keep me employed for the next decade or so.
I don’t know this with 100% certainty, but I believe some financial and insurance institutions are still running on COBOL. These are extreme examples, but I think there is going to be caution about the uptake of new forms of software in many areas – there will be inertia and things will change slowly in some industries. So there will be web software that needs to be maintained … I don’t know how big this market is, and it may well shrink drastically as companies merge, but I’m fairly certain old-school web tech will be a niche job that can potentially pay well. Coupled with the ongoing worries about juniors finding it incredibly difficult to get jobs, I think an entire generation of programmers will go missing, which will decrease the supply of traditional web development talent. So hopefully good for me.
I was speaking to someone who worked in the printing industry many decades ago. He transitioned from traditional magazine printing, using HUGE rollers in factories, to digital printing … and *almost* overnight (months to within a year) most of the workforce lost their jobs because technology steamrolled them. And we lost all that clever skill that went into putting magazines together – if you aren’t aware of the skill that went into magazine printing, it was a labour-intensive process … now we can just colour laser print onto glossy paper and think nothing of it.
And so it will be with programming … so you need to stay up to date.
If I had to start all over again, and had the energy to do it … I’d be focusing now on applying automation tools to companies, which is a first stepping stone to AI’ing companies… the tools are so simple now that almost anyone can make really quite good solutions or prototypes for their companies.
Then I’d be focusing on learning actual Machine Learning, for instance image/video recognition, i.e. counting the number of chickens going past a certain point in a video; the potential for smart cities is also very much there already. And finally, cyber security will just get more and more important.
In other cases, I think software will leap ahead. We haven’t even touched upon the brain-input mechanisms which would change everything in mind-bending ways … there’s the story of a teenager with a disability able to use Musk’s brain chip to play Call of Duty with his friends; as with most things these days, I take them with a pinch of salt; but I do think the blending of the biological and digital (eventually quantum…) worlds will remove the input restrictions that we currently face. Sometimes you can think far faster than being able to actually implement.
It’s not that we’ve been in stealth, we’ve just been working on something quietly. It’s still traditional web albeit blended with AI workflows.

Another week flies by. At the moment I’m working on one platform which is a very thin layer that interacts with data sources and LLMs. This week I got it to a very solid state and was able to do some preliminary demos as we are now going to start applying it to some real world scenarios.
Ultimately, all IT stuff is basically IPO – input, process, output. Back in the 90s, when you were doing computer science lessons, this was taught as the fundamental thing that makes software valuable to people. People put information in, the system does something, and it outputs something of value to that person. That value is then monetised.
LLMs have actually changed the whole IPO paradigm. The inputs are now completely different – you can talk in natural language and the LLM can guess intent reasonably well (even without ‘intelligence’, the prediction is good enough) … some of the processing can be done via MCP … and finally, the output can be voice, or a UI generated on the fly.
I expect LLMs to become more optimised to the point where the bigger devices like MacBook Pros have them running locally, so you really can have more privacy when using ChatGPT-esque UIs … once we’ve got local LLMs running and you can plug into them how you want, I think that’s going to be really interesting from a creation point of view.
All fun and games.
Whilst I haven’t played with the other LLM APIs yet, the thing I really like about the Responses API is structured output, where you can enforce a response in a specific format. So if you want information back as an article, or a car, or an animal, or whatever, you can enforce the attributes that you want.
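As a rough sketch of what that looks like with the OpenAI Python SDK (the model name and schema here are placeholders, and the exact parameter shape is worth double-checking against the current docs):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A JSON Schema describing the shape we want back – here, a simple "article".
article_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "summary": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "summary", "tags"],
    "additionalProperties": False,
}

response = client.responses.create(
    model="gpt-4o",  # placeholder model name
    input="Write a short article about running LLMs locally.",
    text={
        "format": {
            "type": "json_schema",
            "name": "article",
            "schema": article_schema,
            "strict": True,
        }
    },
)

print(response.output_text)  # a JSON string matching the schema above
```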
After initiating a connection, it’s really easy to request what tools are available:
{ "jsonrpc": "2.0", "id": 2, "method": "tools/list" }
I’m not entirely sure what the ID corresponds to at this point, but I’ll figure it out.
The server will respond with something like:
{
"jsonrpc": "2.0",
"id": 2,
"result": {
"tools": [
{
"name": "calculator_arithmetic",
"title": "Calculator",
"description": "Perform mathematical calculations including basic arithmetic, trigonometric functions, and algebraic operations",
"inputSchema": {
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression to evaluate (e.g., '2 + 3 * 4', 'sin(30)', 'sqrt(16)')"
}
},
"required": ["expression"]
}
},
{
"name": "weather_current",
"title": "Weather Information",
"description": "Get current weather information for any location worldwide",
"inputSchema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "City name, address, or coordinates (latitude,longitude)"
},
"units": {
"type": "string",
"enum": ["metric", "imperial", "kelvin"],
"description": "Temperature units to use in response",
"default": "metric"
}
},
"required": ["location"]
}
}
]
}
}
The server responds with a list of available tools, and as you can see it describes how you are meant to call those tools. name, title and description are fairly straightforward … inputSchema describes the inputs and data types each tool needs.
The OpenAI docs recommend that you run a ‘tool discovery’ routine that cycles through all the MCP servers you’re connected to and puts their tools in a tool registry. The LLM can then understand what it has access to.
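Here’s a minimal sketch of that discovery-plus-registry idea, assuming two hypothetical MCP servers reachable over plain HTTP (real transports add more ceremony – the initialize handshake shown further down, headers, sessions and so on – but the tools/list and tools/call messages themselves follow the spec):

```python
import itertools
import requests

# Hypothetical endpoints for two MCP servers; real URLs and transport will differ.
MCP_SERVERS = ["http://localhost:8001/mcp", "http://localhost:8002/mcp"]

_ids = itertools.count(1)

def rpc(server_url: str, method: str, params: dict | None = None) -> dict:
    """Send a JSON-RPC 2.0 request to an MCP server and return its result."""
    payload = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        payload["params"] = params
    resp = requests.post(server_url, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()["result"]

# Tool discovery: ask every server what it offers and collect the answers
# into one registry keyed by tool name, so the LLM can be told what exists.
tool_registry = {}
for server in MCP_SERVERS:
    for tool in rpc(server, "tools/list")["tools"]:
        tool_registry[tool["name"]] = {"server": server, "schema": tool["inputSchema"]}

# Later, when the LLM decides to use a tool, call it with arguments that
# match that tool's inputSchema.
result = rpc(
    tool_registry["calculator_arithmetic"]["server"],
    "tools/call",
    {"name": "calculator_arithmetic", "arguments": {"expression": "2 + 3 * 4"}},
)
print(result)
```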
Finally got to a decent point this week on the software side. The prototype I built a while back needed to be reworked into something with proper architecture. I’ve been experimenting with different “vibe coding” apps—mainly Cursor lately—but that’s been getting worse. The prototype was interesting, but I knew I had to put it into a solid framework. That meant spending some time learning, then implementing.
Now that’s done, and I’ve got a platform where you can manage your data, apply AI workflows to it, and get useful output. There are still a few things to iron out, but overall I’m happy with the progress. Such are the wonders (and headaches) of programming.
Over the last 24 hours I’ve started working on the pitch deck and financial projections for this AI platform. We needed to define a clear initial niche, and this exercise forced that thinking.
The brain—mind, subconscious, whatever you want to call it—feeds on the information we give it. Your future reality is largely shaped by how you use your mind. When you actually spend time thinking things through (not just asking ChatGPT to spit something out), your brain begins mapping the future. It starts planning at a subconscious level.
(As an aside: Psycho-Cybernetics is worth a read on this topic—there’s evidence the conscious/subconscious dynamic doesn’t work quite the way we’ve been taught.)
So when you see a spreadsheet forecasting customer numbers on specific dates, your brain takes it as truth and begins working toward it. Of course, it all depends on your mindset at the time, but clarity + projection = momentum.
I finally integrated AI workflows into the platform. It took time because I wanted the right architecture around it, but it’s now functional.
I’m fairly certain the software of the future will be heavily abstracted away from us.
Here’s my best guess: most of the software we use today will eventually be abstracted away. Speech recognition + LLMs is already good enough that you can just talk to your software, which is sufficient for many use cases.
A simple example: forms. Instead of filling out endless fields, you’ll just say what you want. The system will validate your input, confirm it, and submit. In fact, an AI “agent” that already knows your preferences will fill it out for you automatically—eventually you won’t even need to speak.
For now, point-and-click interfaces will still exist, because they’re efficient for certain tasks. But speech → LLM → MCP → Generative UI is where things are heading. LLMs understand intent, query services, get the data, and then spin up a UI on the fly if needed.
— note … super tired so this is half tidied up by AI … may rewrite it tomorrow with fresh eyes
The client starts the handshake by sending an initialize request:
{
"jsonrpc": "2.0",
"id": 1,
"method": "initialize",
"params": {
"protocolVersion": "2025-06-18",
"capabilities": {
"elicitation": {}
},
"clientInfo": {
"name": "example-client",
"version": "1.0.0"
}
}
}
The server will respond like this:
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"protocolVersion": "2025-06-18",
"capabilities": {
"tools": {
"listChanged": true
},
"resources": {}
},
"serverInfo": {
"name": "example-server",
"version": "1.0.0"
}
}
}
Notes
Once the client has checked the server’s response, it sends a one-way notification, expecting no immediate reply:
{
"jsonrpc": "2.0",
"method": "notifications/initialized"
}
Google completely wiped the floor with independent content publishers a few years ago, instead promoting websites that venture capitalists (potentially) had shared interests in.
I wrote about that here in my post ‘Google’s Move Kills Small Independents & Keeping Going…‘
This week Google effectively won an important court ruling, meaning they won’t have to take the harmful step of breaking up the company. I’m not always a fan of forcing large companies to break up because, after all, free-market American capitalism is about competition and winning (at all costs) … but as I’ve grown older I’ve also realised that corporations are essentially psychopaths with no real reason to do anything good for humanity … so there do need to be checks in place, and the anti-monopoly laws are a good start.
The legal case focused on Google’s dominance in search, which – since we now know that Google is inherently biased and throttles any information that might give you a different perspective on things – is a major problem if we want to at least maintain a free and open society.
Google has done a great deal of good but seems to have got worse as a product over the years. It’s not just the preferential treatment given to some topics, but also how internet marketers have used it to try and sell you something at every corner. It’s difficult to find good, well-loved websites on Google that aren’t backed by highly profitable entities.
Anyway, because of the case, they don’t need to sell their Chrome Browser or break anything else up.
The main problem now, though, is the new Google AI summaries (AI Overviews) and the complete overhaul of the Google results page, currently only available in the USA.
DMG Media, owner of MailOnline, Metro and other outlets, said AIO resulted in a fall in click-through-rates by as much as 89%, in a statement to the Competition and Markets Authority made in July.
This is an astounding drop, but not unexpected. The future for getting traffic through search for the average joe is going to be increasingly difficult. Of course, we may well see new attempts at search engines filling this gap as more people start realising Google isn’t helping the little guy anymore.
Social networks and communities will continue to be a good source of traffic for the independent publishers.
That’s all for now.
MCP Clients have their own set of primitives:
I need to test this out, but it seems there is a sampling/createMessage request that a server can send to a client, which essentially asks the client to run an LLM request on the server’s behalf. The reason for this is seemingly that the MCP server doesn’t want to handle an LLM itself … so it passes the work off to the client.
There’s also a straightforward elicitation/create method where the server asks the client to gather further information from the user.
For debugging and monitoring, servers can send log messages to clients.
The protocol also has facility for real-time updates from server to client, in the form of JSON-RPC 2.0 notifications. Nothing super new there.
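I haven’t tested this yet, but from my reading of the spec a sampling request from the server would look roughly like the other JSON-RPC messages above – sketched here as a Python dict, with the field names being my best reading of the spec rather than something I’ve verified:

```python
# Roughly what a server-to-client sampling request might look like,
# based on my (untested) reading of the MCP spec.
sampling_request = {
    "jsonrpc": "2.0",
    "id": 5,
    "method": "sampling/createMessage",
    "params": {
        "messages": [
            {"role": "user", "content": {"type": "text", "text": "Summarise this document"}}
        ],
        "maxTokens": 200,
    },
}
# The client runs this through whichever LLM it controls and returns the
# completion in the matching JSON-RPC response.
```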