Blog

  • Day 210 – MCP #4 – Lifecycle Management

    The client will send this initialize request:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "initialize",
      "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {
          "elicitation": {}
        },
        "clientInfo": {
          "name": "example-client",
          "version": "1.0.0"
        }
      }
    }

    The server will respond like this:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {
          "tools": {
            "listChanged": true
          },
          "resources": {}
        },
        "serverInfo": {
          "name": "example-server",
          "version": "1.0.0"
        }
      }
    }

    Notes

    • If the protocol versions differ, it’s advised to terminate the connection to avoid any incompatible requests being made.
    • The capabilities object lists the supported primitives, although I still need to get more definitive on the full array of potential options.
    • The server, in this example, has a tools object within capabilities. At first glance that looks limited to just the listChanged notification… but it actually means the entire set of tool primitive methods is available AS WELL AS the listChanged notification.
    • The resources object means the entire resources primitive is available, i.e. resources/list and resources/read (there’s a rough sketch of a resources/list call at the end of this post).

    Once the client checks the response from the server, it sends out a one-way notification, expecting no immediate response:

    {
      "jsonrpc": "2.0",
      "method": "notifications/initialized"
    }
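
    With initialization complete, the client is free to start calling the primitives the server declared. As a rough sketch (I haven’t run this against a real server yet), listing the available resources should look something like this:

    {
      "jsonrpc": "2.0",
      "id": 2,
      "method": "resources/list"
    }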

  • Day 209 – Google’s Quest For Dominance Continues

    Google completely wiped the floor with independent content publishers a few years ago, instead promoting websites that venture capitalists (potentially) had shared interests in.

    I wrote about that here in my post ‘Google’s Move Kills Small Independents & Keeping Going…’

    This week Google effectively won an important ruling in court: they won’t have to take the harmful step of breaking up the company. I’m not always a fan of forcing large companies to break up because, after all, free market American capitalism is about competition and winning (at all costs) … but as I’ve grown older I’ve also realised that corporations are essentially dead psychopaths with no real reason to do anything good for humanity … so there do need to be checks in place, and the anti-monopoly laws are a good start.

    The legal case focused on Google’s dominance in search, which, since we now know that Google is inherently biased and throttles any information that might give you a different perspective on things, is a major problem when trying to at least maintain a free and open society.

    Google has done a great deal of good but seems to have got worse as a product over the years. It’s not just the preferential treatment given to some topics, but also how internet marketers have used it to try and sell you something at every corner. It’s difficult to find good, well-loved websites on Google that aren’t backed by highly profitable entities.

    Anyway, because of the case, they don’t need to sell their Chrome Browser or break anything else up.

    The main problem now though is the new Google AI summaries and the complete overhaul of the Google results service currently only available in the USA.

    DMG Media, owner of MailOnline, Metro and other outlets, said AIO resulted in a fall in click-through-rates by as much as 89%, in a statement to the Competition and Markets Authority made in July.

    This is an astounding drop, but not unexpected. The future for getting traffic through search for the average joe is going to be increasingly difficult. Of course, we may well see new attempts at search engines filling this gap as more people start realising Google isn’t helping the little guy anymore.

    Social networks and communities will continue to be a good source of traffic for the independent publishers.

    That’s all for now.

  • Day 208 – Learning MCP #3 – Client Primitives

    MCP Clients have their own set of primitives:

    • Sampling
    • Elicitation
    • Logging

    MCP Client Primitive Sampling

    I need to test this out, but it seems the spec defines a sampling/createMessage method that a server can send to a client, which essentially asks the client to run an LLM request on the server’s behalf. The reason for this is seemingly that the MCP server doesn’t want to handle an LLM itself … so it passes the work off to the client.
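
    A hedged sketch of what that server-to-client request might look like, going off the spec docs (the message text and the maxTokens value here are just made up for illustration):

    {
      "jsonrpc": "2.0",
      "id": 5,
      "method": "sampling/createMessage",
      "params": {
        "messages": [
          {
            "role": "user",
            "content": {
              "type": "text",
              "text": "Summarise the last three log entries."
            }
          }
        ],
        "maxTokens": 200
      }
    }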

    MCP Client Primitive Elicitation

    There’s also a straightforward elicitation/create method (the spec’s name for it, as far as I can tell), where the server will ask the client for further information from the user.
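
    A rough sketch of that request, assuming I’m reading the spec correctly (the message and the projectName field are made-up examples):

    {
      "jsonrpc": "2.0",
      "id": 6,
      "method": "elicitation/create",
      "params": {
        "message": "Which project should this apply to?",
        "requestedSchema": {
          "type": "object",
          "properties": {
            "projectName": { "type": "string" }
          },
          "required": ["projectName"]
        }
      }
    }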

    MCP Client Primitive Logging

    For debugging and monitoring, servers can send log messages to the clients.
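
    These arrive as notifications rather than requests. Something like this, going by the spec (the logger name and data payload are illustrative):

    {
      "jsonrpc": "2.0",
      "method": "notifications/message",
      "params": {
        "level": "error",
        "logger": "database",
        "data": {
          "error": "Connection failed",
          "details": "Timed out after 30 seconds"
        }
      }
    }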

    Notifications

    The protocol also has a facility for real-time updates from server to client, in the form of JSON-RPC 2.0 notifications. Nothing super new there.
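
    For example, if a server declares the listChanged flag under its tools capability, it can tell the client the tool list has changed with a notification like this (based on my reading of the spec):

    {
      "jsonrpc": "2.0",
      "method": "notifications/tools/list_changed"
    }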

  • Day 207 – Learning MCP Part #2

    MCP is slightly different from a standard web API, in that the server can also request data from the client, to which the client will respond. When no response is required, notifications are sent instead.

    Communication between client and server is stateful – meaning that the history of communication is kept and used as part of the response, thereby building up a larger context over time.

    MCP uses primitives to describe the data and capabilities a server exposes. Primitives cover tools (functionality that servers offer to AI applications), resources (data sources) and prompts (prompt templates). A sketch of how a client asks a server for its tools is just below.
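
    As a rough illustration (the get_weather tool, its description and its schema are entirely made up here), the client asks:

    {
      "jsonrpc": "2.0",
      "id": 3,
      "method": "tools/list"
    }

    and the server responds with something like:

    {
      "jsonrpc": "2.0",
      "id": 3,
      "result": {
        "tools": [
          {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "inputSchema": {
              "type": "object",
              "properties": {
                "city": { "type": "string" }
              },
              "required": ["city"]
            }
          }
        ]
      }
    }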

    MCP is probably one of the extra technologies alongside LLMs that will start breaking a lot of industries. When LLMs can just ‘talk’ directly to services, there will be a lot less requirement for user interfaces … and by that I mean a ton of SaaS products. Much of the functionality a SaaS product provides will simply be abstracted away by automation and MCP. So demand in the web dev industry for such things will certainly start to drop.

  • Day 206 – MCP Part #1

    One of the interesting things that emerged soon after LLMs, was the creation of the Model Context Protocol.

    It does exactly what it sounds like it does… LLMs provide answers to users based on context … so if you have chatted to ChatGPT for months … it will remember what you have said… i.e. it has context on you/your situation.

    For instance, you might have input all your business ideas into ChatGPT, and now its answers will be more personalised to you because it has a better idea of the background (context). It’s like getting to know someone new, the more time you spend with them, the more context you have to understand them and refine your conversation around these understandings.

    So all that was great, and then developers realised they wanted LLMs to have even more context … because there are these wonderful things called databases which hold all our information. So they needed a way to ‘talk to these databases’ rather than just rely on a traditional API.

    Hence MCP was born.

    Model Context Protocol (MCP) is basically a very cool, standardised way of linking Language Models (‘AI’) to existing data and information systems.

    Protocols have always been fundamental to building the internet. Having standards that we all agree on makes things a lot simpler. Obviously, it doesn’t always work out that way, and you end up with walled gardens.

    But MCP quickly emerged after LLMs to address the issue of how to make AI more useful, by getting it to co-ordinate actually doing something rather than just outputting text or data.

    For companies, being able to ‘talk to your data’ is pretty awesome, but most don’t have that capability currently.

    MCP operates between three different ‘things’:

    • A host
    • A client
    • A server

    The host is basically a typical application, like a mobile, web or desktop app, but with AI programming in it.

    The host application has a component that maintains a connection to another server – the MCP server – and this component is called the MCP client.

    The server provides the context to the client.

    An application (aka MCP host) will, as part of its functionality, manage an internal ‘client’ that maintains a connection to a server (aka MCP server). Each server will have its own client … 1:1.

    An MCP host can run locally on your computer, or on a remote server.

    The architecture is very straightforward – two layers: a data layer and a transport layer.

    The data layer is described in the JSON-RPC 2.0 format, and is used by both the host and the server to request and provide information. For instance, the server can send a request back to the host asking for certain input from the user.

    The transport layer is either standard input/output (STDIO) for local servers, or HTTP POST for remote ones.
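
    To make the data layer concrete, the simplest exchange in the spec seems to be the ping request, which either side can send and which just gets an empty result back:

    {
      "jsonrpc": "2.0",
      "id": 9,
      "method": "ping"
    }

    {
      "jsonrpc": "2.0",
      "id": 9,
      "result": {}
    }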

    More soon…

  • Day 205 of AI Startup – Recommencing blog with some light artwork

    Well summer is over and I got knocked off track for various reasons from doing this blog. But I have continued working on the AI world.

    Sometimes AI can just be for a bit of fun. Today’s work doodle was this lovely fella.

    In the future I will bring you up to date on what I have been working on.

    So I asked ChatGPT to redraw it, and I’m not sure why I was so shocked by it, but it was really surprising how good it was. This one was in the style of ‘cyberpunk’.

    ‘Claymation style’

    The next ‘street art / graffiti’ style is probably my favourite

    Renaissance style

    ‘hyper realistic in new york dark wet night setting’

    and then erm this kind of went wrong

  • Day 149 – Cursor decreasing in quality

    I’ve been using Cursor a lot recently and in the last few weeks the quality of the suggestions has really left a lot to be desired.

    Most of the time I find myself having to reject or rework all of its suggestions. I do think that over time, as the training gets more specialised, framework-specific LLMs will bridge this gap. At that point, when we have a Laravel-specific LLM for instance, that’s when I think we will know for sure how many software engineering jobs are going to get impacted.

    Cursor is great for prototyping your ideas, but it quickly gets confused.

  • Day 148 – Vibe Coding Worsening?

    I’ve noticed that once you are up and running with a proper web application framework like Laravel, blindly using Cursor on it turns it into a nightmare. It feels as if the quality of its answers has gone down over the last few weeks, and I’m frequently rejecting its suggestions. Often it gets the separation of concerns (SoC) wrong.

    Anyway, potentially better prompt engineering would fix it, or more training on the Laravel framework specifically. Once it is fully trained on the Filament package, it will be amazing.

    Some notes

    • I am finding myself rejecting a lot of the Cursor agent’s advice… potentially because it needs training on some newer libraries I’m using; but I also do see the glaring errors it makes sometimes.
    • Whilst LLMs are awesome, to mistake them right now for anything approaching intelligence is silly, considering ARC-AGI-2 is being passed at rates of less than 5% by AI/LLMs. That really has blown me away. We will see in fall 2025 if anyone has come close. If they do, it’ll be an incredibly scary step forward.
    • For traditional web frameworks, Laravel Filament is still looking great. Today’s development included:
      • Mostly smooth configuration of import/export to CSVs on models within the Filament table panels.
      • Got job queues set up
      • Quality of life improvements to UI
      • Notifications work great
    • Laravel Pulse is quite good. I was having a few problems with CSV import jobs failing and then infinitely retrying, which made me realise I wanted monitoring software asap. Pulse still requires someone to be observing, however, and Sentry would still be the obvious choice.

    I’ll shortly be in a position to launch the very early version.

  • Day 147 – Current ARC-AGI-2 progress proves ‘AI’ is not intelligent (well, duh).

    The Arc Prize is a programming competition to drive progress towards Artificial General Intelligence (AGI). It is now in its second iteration: ARC-AGI-2.

    What I find really interesting is that the latest challenge is really easy for humans, but LLMs have a 0% success rate, and other AI reasoning systems get less than 5% success!

    If there is ANY proof that the hype over LLMs being ‘Artificial Intelligence’ is somewhat misleading … it’s the fact that current LLMs cannot get anywhere close to a decent success rating.

    Take the human test for yourself.

    Technical guide here


  • Days 115 – 146 – Placeholder

    This post is a placeholder to discuss progress during this timeframe.