Category: Daily Report

  • Day 322 – Do businesses need to create a Head of AI C Level role?

    For companies to win during this major transformation, the key principle is having someone at C level who is responsible for integrating AI. This person must understand the business workflows and combine that with AI knowledge. The creation of a Head of AI role is the first step to take in your AI transformation.

    Initially the AI role is focused on becoming more effective at what the company already does, i.e. making X widgets faster or better, or looking after more customers for less money in customer service. But in reality, the real winners will be the ones who innovate with AI.

  • Day 214 – 216

    Whilst I'm enjoying traditional web programming enhanced by my AI partner (still using Cursor, though I may move to another tool soon since it feels like it's gotten worse)… I am constantly aware that the software world is going to be dramatically different very soon. I don't think I, in particular, will be out of a job… but it is definitely a possibility. However, the inertia of software stacks will probably keep me employed for the next decade or so.

    Inertia & Junior Software Roles

    I don't know this with 100% accuracy, but I believe some financial and insurance institutions are still running on COBOL software. These are extreme examples, but I think there is going to be caution in the uptake of new forms of software in many areas – there will be inertia and things will change slowly for some industries. So there will be web software that needs to be maintained… I don't know how big this market is, and it may well shrink drastically as companies merge, but I'm fairly certain old-school web tech will be a niche job that can potentially pay well. Coupled with the ongoing worries about juniors finding it incredibly difficult to find jobs, I think an entire generation of programmers will go missing, which will decrease the supply of traditional web development talent. So hopefully good for me.

    I was speaking to someone who worked in the printing industry many decades ago. He lived through the transition from traditional magazine printing, using HUGE rollers in factories, to digital printing… *almost* overnight (months to within a year) most of the workforce lost their jobs because technology steamrolled them. And we lost all that clever skill that went into putting magazines together – if you aren't aware of the skill involved in magazine printing, it was a labour-intensive process… now we can just colour laser print onto glossy paper and think nothing of it.

    And so it will be with programming … so you need to stay up to date.

    Future Software – Automation & Machine Learning… + Cyber Security

    If I had to start all over again, and had the energy to do it… I'd be focusing now on applying automation tools to companies, which is a first stepping stone to AI'ing companies… the tools are so simple now that almost anyone can build really quite good solutions or prototypes for their company.

    Then I'd be focusing on learning actual Machine Learning, for instance image/video recognition, i.e. counting the number of chickens going past a certain point in a video; and the potential for smart cities is very much there already. And finally, cyber security will just get more and more important.

    Some thoughts on Brain Inputs

    In other cases, I think software will leap ahead. We haven't even touched on the brain-input mechanisms, which would change everything in mind-bending ways… there's the story of a teenager with a disability able to use Musk's brain chip to play Call of Duty with his friends; as with most things these days, I take it with a pinch of salt; but I do think the blending of the biological and digital (and eventually quantum…) worlds will remove the input restrictions that we currently face. Sometimes you can think far faster than you can actually implement.

    That’s it for now, demo coming soon.

    It's not that we've been in stealth; we've just been working on something quietly. It's still traditional web, albeit blended with AI workflows.

  • Day 212 – 213 AI Workflows and Structured Data & MCP Stuff

    Another week flies by. At the moment I’m working on one platform which is a very thin layer that interacts with data sources and LLMs. This week I got it to a very solid state and was able to do some preliminary demos as we are now going to start applying it to some real world scenarios.

    Ultimately, all IT stuff is basically IPO – input, process, output… back in the 90s when you were doing computer science lessons, that was the fundamental thing that makes software valuable to people. People put information in, the system does something, and it outputs something of value to that person. That value is then monetised.

    LLMs have actually changed the whole IPO paradigm. The inputs are now completely different – you can talk in natural language and the LLM can guess intent reasonably well (even without 'intelligence', the prediction is good enough)… some of the processing can be done via MCP… and finally, the output UI can be voice, or a UI generated on the fly.

    I expect LLMs to become more optimised, to the point where the bigger devices like MacBook Pros have them running locally, so you really can have more privacy when using ChatGPT-esque UIs… once we've got local LLMs running and you can plug into them however you want, I think that's going to be really interesting from a creation point of view.

    All fun and games.

    OpenAI structured output

    Whilst I haven't played with the other LLM APIs yet, the thing I really like about the Responses API is structured output, where you can enforce a response in a specific format. So if you want information back as an article, or a car, or an animal, or whatever, you can enforce the attributes that you want.
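
    For instance, here's a minimal sketch in Python of asking for an 'article' back in a fixed shape. This assumes the official openai SDK; the schema and prompt are made up, and the exact shape of the text/format parameter is from my reading of the docs, so treat it as an approximation rather than gospel.

    # Sketch: enforce a JSON schema on Responses API output (assumed parameter shape).
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    article_schema = {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "summary": {"type": "string"},
            "tags": {"type": "array", "items": {"type": "string"}},
        },
        "required": ["title", "summary", "tags"],
        "additionalProperties": False,
    }

    response = client.responses.create(
        model="gpt-4o-mini",
        input="Write a short article about running LLMs locally on a MacBook Pro.",
        text={
            "format": {
                "type": "json_schema",
                "name": "article",
                "schema": article_schema,
                "strict": True,
            }
        },
    )

    print(response.output_text)  # a JSON string matching article_schema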

    MCP Stuff – Tool discovery

    After initiating a connection, it’s really easy to request what tools are available:

    { "jsonrpc": "2.0", "id": 2, "method": "tools/list" }

    Unsure entirely what the ID corresponds to at this point – I believe it's just the JSON-RPC request id, which the response echoes back so requests and responses can be matched up, but I'll confirm as I go.

    The server will respond with something like

    {
      "jsonrpc": "2.0",
      "id": 2,
      "result": {
        "tools": [
          {
            "name": "calculator_arithmetic",
            "title": "Calculator",
            "description": "Perform mathematical calculations including basic arithmetic, trigonometric functions, and algebraic operations",
            "inputSchema": {
              "type": "object",
              "properties": {
                "expression": {
                  "type": "string",
                  "description": "Mathematical expression to evaluate (e.g., '2 + 3 * 4', 'sin(30)', 'sqrt(16)')"
                }
              },
              "required": ["expression"]
            }
          },
          {
            "name": "weather_current",
            "title": "Weather Information",
            "description": "Get current weather information for any location worldwide",
            "inputSchema": {
              "type": "object",
              "properties": {
                "location": {
                  "type": "string",
                  "description": "City name, address, or coordinates (latitude,longitude)"
                },
                "units": {
                  "type": "string",
                  "enum": ["metric", "imperial", "kelvin"],
                  "description": "Temperature units to use in response",
                  "default": "metric"
                }
              },
              "required": ["location"]
            }
          }
        ]
      }
    }

    The server responds with a list of available tools, and as you can see it describes how you are meant to call those tools. So name, title and description are fairly straightforward… inputSchema describes the inputs and data types needed.

    The OpenAI docs recommend doing a 'tool discovery' routine that cycles through all the MCP servers you're connected to and puts their tools into a tool registry. The LLM can then understand what it has access to.
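
    Roughly, that discovery loop might look like the sketch below. It's hand-rolled for clarity: send_request stands in for however you actually talk to each server (stdio or HTTP), and the naming scheme for the registry keys is just my own convention.

    # Sketch: ask every connected MCP server for its tools and build one registry.
    def discover_tools(servers, send_request):
        registry = {}
        request_id = 1
        for server_name, connection in servers.items():
            request_id += 1
            result = send_request(connection, {
                "jsonrpc": "2.0",
                "id": request_id,
                "method": "tools/list",
            })
            for tool in result.get("tools", []):
                # Keep enough for the LLM to pick a tool and build valid arguments.
                registry[f"{server_name}.{tool['name']}"] = {
                    "description": tool.get("description", ""),
                    "inputSchema": tool.get("inputSchema", {}),
                }
        return registry

    The registry can then be translated into whatever tool/function format the LLM you're using expects.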

  • Day 211 – A Happy Programmer, Why Pitch Deck & Financial Projections Program Your Brain When You Actually Think About Them … and a brief future of software

    Finally got to a decent point this week on the software side. The prototype I built a while back needed to be reworked into something with proper architecture. I’ve been experimenting with different “vibe coding” apps—mainly Cursor lately—but that’s been getting worse. The prototype was interesting, but I knew I had to put it into a solid framework. That meant spending some time learning, then implementing.

    Now that’s done, and I’ve got a platform where you can manage your data, apply AI workflows to it, and get useful output. There are still a few things to iron out, but overall I’m happy with the progress. Such are the wonders (and headaches) of programming.

    Why thinking through pitch decks and financial projections matters

    Over the last 24 hours I’ve started working on the pitch deck and financial projections for this AI platform. We needed to define a clear initial niche, and this exercise forced that thinking.

    The brain—mind, subconscious, whatever you want to call it—feeds on the information we give it. Your future reality is largely shaped by how you use your mind. When you actually spend time thinking things through (not just asking ChatGPT to spit something out), your brain begins mapping the future. It starts planning at a subconscious level.

    (As an aside: Psycho-Cybernetics is worth a read on this topic—there’s evidence the conscious/subconscious dynamic doesn’t work quite the way we’ve been taught.)

    So when you see a spreadsheet forecasting customer numbers on specific dates, your brain takes it as truth and begins working toward it. Of course, it all depends on your mindset at the time, but clarity + projection = momentum.

    Progress this week

    I finally integrated AI workflows into the platform. It took time because I wanted the right architecture around it, but it’s now functional.

    I’m fairly certain the software of the future will be heavily abstracted away from us.

    The future of software = Speech recognition + LLM + MCP + Generative UI

    Here’s my best guess: most of the software we use today will eventually be abstracted away. Speech recognition + LLMs is already good enough that you can just talk to your software, which is sufficient for many use cases.

    A simple example: forms. Instead of filling out endless fields, you’ll just say what you want. The system will validate your input, confirm it, and submit. In fact, an AI “agent” that already knows your preferences will fill it out for you automatically—eventually you won’t even need to speak.

    For now, point-and-click interfaces will still exist, because they’re efficient for certain tasks. But speech → LLM → MCP → Generative UI is where things are heading. LLMs understand intent, query services, get the data, and then spin up a UI on the fly if needed.
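
    To make the form example concrete, here's a rough Python sketch of that pipeline. Every callable passed in (transcribe, extract_intent, call_mcp_tool, render_ui) is a hypothetical stand-in rather than a real library; it's just the shape of the flow.

    # Sketch of the flow: speech -> LLM -> MCP -> generated UI.
    def handle_spoken_request(audio, transcribe, extract_intent, call_mcp_tool, render_ui):
        transcript = transcribe(audio)        # speech recognition
        intent = extract_intent(transcript)   # LLM with structured output, e.g.
                                              # {"action": "book_table",
                                              #  "fields": {"people": 4, "date": None}}
        missing = [k for k, v in intent["fields"].items() if v in (None, "")]
        if missing:
            # Classic validation still applies: confirm or complete the form.
            return render_ui("confirm_form", fields=intent["fields"], missing=missing)
        result = call_mcp_tool(intent["action"], intent["fields"])  # MCP does the work
        return render_ui("confirmation", result=result)             # UI spun up on the fly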

    — note … super tired so this is half tidied up by AI … may rewrite it tomorrow with fresh eyes

  • Day 210 – MCP #4 – Lifecycle Management

    The client will send this:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "initialize",
      "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {
          "elicitation": {}
        },
        "clientInfo": {
          "name": "example-client",
          "version": "1.0.0"
        }
      }
    }

    The server will respond like this:

    {
      "jsonrpc": "2.0",
      "id": 1,
      "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {
          "tools": {
            "listChanged": true
          },
          "resources": {}
        },
        "serverInfo": {
          "name": "example-server",
          "version": "1.0.0"
        }
      }
    }

    Notes

    • If the protocol versions differ, it's advised to terminate the connection to avoid any incompatible requests being made.
    • The capabilities object lists the primitives supported, although I need to get more definitive on the full array of potential options.
    • The server, in this example, has a tools object within capabilities. Rather than being limited to just the listChanged notification (as it might appear at first glance), it means the entire set of tool primitive methods is available AS WELL AS listChanged.
    • The resources object means the entire resources primitive is available, so /list and /read.

    Once the client checks the response from the server, it sends out a one way notification, expecting no immediate response:

    {
      "jsonrpc": "2.0",
      "method": "notifications/initialized"
    }
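
    Putting the whole handshake together, here's a minimal Python sketch of a client doing this against a local server over stdio. It assumes newline-delimited JSON messages (my understanding of the stdio transport), and the server command is hypothetical.

    # Sketch: initialize handshake with a local MCP server over stdio.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["python", "my_mcp_server.py"],   # hypothetical server command
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
    )

    def send(message):
        proc.stdin.write(json.dumps(message) + "\n")
        proc.stdin.flush()

    send({
        "jsonrpc": "2.0", "id": 1, "method": "initialize",
        "params": {
            "protocolVersion": "2025-06-18",
            "capabilities": {"elicitation": {}},
            "clientInfo": {"name": "example-client", "version": "1.0.0"},
        },
    })
    reply = json.loads(proc.stdout.readline())

    # If the versions don't match, terminate rather than risk incompatible requests.
    assert reply["result"]["protocolVersion"] == "2025-06-18"

    # One-way notification: no id, so no response is expected.
    send({"jsonrpc": "2.0", "method": "notifications/initialized"})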

  • Day 209 – Google’s Quest For Dominance Continues

    Google completely wiped the floor with independent content publishers a few years ago, instead promoting websites that venture capitalists had (potentially) shared interests in.

    I wrote about that here in my post 'Google's Move Kills Small Independents & Keeping Going…'

    This week Google effectively won an important ruling in court: it won't have to take the harmful step of breaking up the company. I'm not always a fan of forcing large companies to break up because, after all, free-market American capitalism is about competition and winning (at all costs)… but as I've grown older I've also realised that corporations are essentially psychopaths with no real reason to do anything good for humanity… so there do need to be checks in place, and the anti-monopoly laws are a good start.

    The legal case focused on Google's dominance in search, which – since we now know that Google is inherently biased and throttles any information that might give you a different perspective on things – is a major problem if we want to at least maintain a free and open society.

    Google has done a great deal of good but seems to have got worse as a product over the years. It's not just the preferential treatment of some topics; it's also how internet marketers have used it to try to sell you something at every corner. It's difficult to find good, well-loved websites on Google that aren't backed by highly profitable entities.

    Anyway, because of the case, they don’t need to sell their Chrome Browser or break anything else up.

    The main problem now, though, is the new Google AI summaries (AI Overviews) and the complete overhaul of the Google results service, currently only available in the USA.

    DMG Media, owner of MailOnline, Metro and other outlets, said in a statement to the Competition and Markets Authority in July that AI Overviews resulted in click-through rates falling by as much as 89%.

    This is an astounding drop, but not unexpected. The future for getting traffic through search for the average joe is going to be increasingly difficult. Of course, we may well see new attempts at search engines filling this gap as more people start realising Google isn’t helping the little guy anymore.

    Social networks and communities will continue to be a good source of traffic for the independent publishers.

    That’s all for now.

  • Day 208 – Learning MCP #3 – Client Primitives

    MCP Clients have their own set of primitives:

    • Sampling
    • Elicitation
    • Logging

    MCP Client Primitive Sampling

    I need to test this out, but it seems there is a sampling/createMessage method that a server can send to a client, which essentially asks the client to complete an LLM request on its behalf. The reason for this is seemingly that the MCP server doesn't want to handle an LLM itself… so it passes the request off to the client.

    MCP Client Primitive Elicitation

    There's also a straightforward elicitation method (elicitation/create) where the server asks the client to gather further information from the user.

    MCP Client Primitive Logging

    For debugging and monitoring, servers can send log messages to clients.

    Notifications

    The protocol also has a facility for real-time updates from server to client, in the form of JSON-RPC 2.0 notifications. Nothing super new there.
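
    Under the hood this is all plain JSON-RPC 2.0, so the main thing a client has to do is tell server→client requests (which carry an id and expect a reply) apart from notifications (which don't). A rough sketch, with the handler table left hypothetical:

    # Sketch: route one incoming message from an MCP server on the client side.
    # `handlers` maps method names (e.g. a sampling or elicitation method) to
    # functions that produce a result; `send` writes a JSON-RPC message back.
    import json

    def handle_incoming(line, handlers, send):
        message = json.loads(line)
        method = message.get("method")
        if "id" in message:
            # Server -> client request: compute a result and echo the same id back.
            result = handlers[method](message.get("params", {}))
            send({"jsonrpc": "2.0", "id": message["id"], "result": result})
        else:
            # Notification (including log messages): act on it, never reply.
            print(f"notification: {method}")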

  • Day 207 – Learning MCP Part #2

    MCP is slightly different from a standard web API, in that the server can also request data from the client, to which the client will respond. If no response is required, a notification is sent instead.

    Communication between client and server is stateful – meaning that the history of communication is kept and used as part of the response, thereby getting a larger context over time.

    MCP uses primitives to describe the data / capabilities of the server. Primitives cover tools (functionality the server offers to AI applications), resources (data sources) and prompts (prompt templates).

    MCP is probably one of the extra technologies, alongside LLMs, that will start breaking a lot of industries. When LLMs can just 'talk' directly to services, there will be a lot less need for user interfaces… and by that I mean a ton of SaaS products. Put simply, the functionality a SaaS product provides will be abstracted away by automation and MCP. So demand in the web dev industry for that kind of work will certainly start to drop.

  • Day 206 – MCP Part #1

    One of the interesting things that emerged soon after LLMs, was the creation of the Model Context Protocol.

    It does exactly what it sounds like it does… LLMs provide answers to users based on context … so if you have chatted to ChatGPT for months … it will remember what you have said… i.e. it has context on you/your situation.

    For instance, you might have input all your business ideas into ChatGPT, and now its answers will be more personalised to you because it has a better idea of the background (context). It’s like getting to know someone new, the more time you spend with them, the more context you have to understand them and refine your conversation around these understandings.

    So all that was great, and then developers realised they wanted LLMs to have even more context… because there are these wonderful things called databases which have all our information in them. So they needed a way to 'talk to these databases' rather than just rely on a traditional API.

    Hence MCP was born.

    Model Context Protocol (MCP) is basically a very cool standardised way of linking Language Models (‘AI’) to existing data information systems.

    Protocols have always been fundamental to building the internet. Having standards that we all agree on makes things a lot simpler. Obviously, it doesn’t always work out that way, and you end up with walled gardens.

    But MCP quickly emerged after LLMs to address the issue of how to make AI more useful, by getting it to co-ordinate actually doing something rather than just outputting text or data.

    For companies, being able to ‘talk to your data’ is pretty awesome, but most don’t have it currently.

    MCP operates between three different ‘things’:

    • A host
    • A client
    • A server

    The host is basically a typical application, like a mobile, web or desktop app, but with AI programming in it.

    The host application has a component that maintains a connection to another server – the MCP server – and this component is called the MCP client.

    The server provides the context to the client.

    An application (aka MCP host) will, as part of its functionality, manage an internal 'client' that maintains a connection to a server (aka MCP server). Each server has its own client… 1:1.

    An MCP server can run locally on your computer, or on a remote machine.

    The architecture is very straightforward – two layers: a data layer and a transport layer.

    The data layer is described in the JSON-RPC 2.0 format and is used by both sides to request and return information. For instance, the server can send a request back to the host asking for certain input from the user.

    The transport layer is either standard input/output (STDIO) for local servers, or HTTP POST for remote ones.

    More soon…

  • Day 149 – Cursor decreasing in quality

    I’ve been using Cursor a lot recently and in the last few weeks the quality of the suggestions has really left a lot to be desired.

    Most of the time I find myself having to reject or rework all its suggestions. I do think that over time, as the training gets more specialised, framework-specific LLMs will bridge this gap. At the point when we have a Laravel-specific LLM, for instance, that's when I think we will know for sure how many software engineering jobs are going to be impacted.

    Cursor is great for prototyping your ideas, but it quickly gets confused.