  • Day 225 – 231 – Software Milestone reached!

    It’s quite funny thinking how far things have come in 231 days – in terms of AI and my own progress.

    I started this process off with a desire to use some runway I had to build something for myself: some sort of system that I only had a vague idea of at the time.

    In those 230-odd days I have worked a couple of months freelance to bring in some cash, and had another couple of months off during the summer to deal with some personal issues. I’ve taken my health habits to a new level, which has resulted in much more vitality, energy and drive; and I’ve also helped a local company resurrect their platform, which means I now have a very solid base for an AI platform.

    The Future Of Software

    Ultimately, as I’ve said many times now on this blog… I think software is going to radically change. Voice and gesture activation, Facebook glasses, etc all change the game in terms of user interface … as well as MCP servers changing how applications can interact with each other (with security caveats!) … add automation ‘flowgramming’ tools to that and suddenly we’ve got businesses that will have processes effectively running in the background with only minor oversight. Or to put it another way, businesses will have a fully automated schematic process flow which will prompt humans for input and then carry on. Any company that doesn’t add this digital backbone to their business will not take advantage of the cost (time/money/energy) savings that this will enable. It will require, however, upfront consideration of how the business works, and for that – people will need to think!

    How AI Has Stopped People Thinking

    This is a really serious topic and is fundamentally taking us into the age of stupid. Children can’t write properly anymore because they do all their typing on laptops. I’m shocked by how poor adult handwriting is anyway, to the point where my standard handwriting is considered ‘beautiful’ and I get many compliments on it. Children can barely think through answers to questions any more … instead they just google it, or these days, just ask ChatGPT!

    I’ve only really been using ChatGPT Plus so far (I haven’t used Grok yet, or any of the other ones), but it has got really good. If you give it a great prompt to begin with, to template the level of response you want, it can give some very, very good answers that mimic wisdom. I can clearly see how it is good for people to talk their problems through with AIs, but I do worry about giving that sort of private data away to private corporations. Thankfully, open-source language models are available, and we’re only a few years away from hardware evolving to the point where many people can run LLMs locally.

    Local platform

    This year I was asked to look at a platform built on top of the Graphile framework. It’s a high-end, opinionated stack that works well for experienced JavaScript devs, but in the end the complexity and lack of support/documentation for such an obscure framework meant that it was very difficult for developers to understand the system. Overall, I knew from the beginning that it was going to be an uphill task; we did TRY, but it ultimately failed. I was able to rebuild a better system that did the same thing within two months, and that’s the AI platform we are moving forward with now.

    This week I finished a long sprint (scope creep lol!) that really got the system to a very solid working state. It currently allows the user to run and manage a website by curating content with AI from news sources, and also to build layouts using AI. This may not sound very groundbreaking (and from a narrow perspective it isn’t), but the infrastructure is now in place for sending any item of data to AI workflows (with structured responses available) and then looking at the output. It’s back to the basics of Input-Process-Output.
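
    To make that concrete, here is a rough Python sketch of one Input-Process-Output step (the platform itself is Laravel, so this is purely illustrative): one item of data goes in, an AI call returns a structured JSON response, and the output is ready for whatever comes next. The model name, prompt and field names are just placeholders.

    ```python
    # Minimal Input-Process-Output sketch: take one item of data (a news article),
    # run it through an AI step that returns a structured response, and hand the
    # result on. Model, prompt and field names are illustrative placeholders.
    import json
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def process_item(article_text: str) -> dict:
        """INPUT: raw article text -> PROCESS: LLM call -> OUTPUT: structured dict."""
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},  # ask for JSON back
            messages=[
                {"role": "system",
                 "content": "Summarise the article as JSON with keys "
                            "'headline', 'summary' and 'tags' (a list)."},
                {"role": "user", "content": article_text},
            ],
        )
        return json.loads(resp.choices[0].message.content)

    output = process_item("Example article text about a local business event...")
    print(output["headline"], output["tags"])
    ```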

    This will eventually link with flowgramming tools like N8N and Zapier to take fuller advantage of what those ecosystems offer.
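
    Most of that linking is really just webhook calls; something like the following could push a curated item into an n8n or Zapier workflow (the URL below is a placeholder, not a real endpoint):

    ```python
    # Push a processed item into an external workflow tool. Both n8n and Zapier
    # can trigger a workflow from an incoming HTTP POST; the URL is a placeholder.
    import requests

    WEBHOOK_URL = "https://example-n8n-instance/webhook/curated-article"

    def push_to_workflow(item: dict) -> None:
        resp = requests.post(WEBHOOK_URL, json=item, timeout=10)
        resp.raise_for_status()  # surface failures rather than silently dropping items

    push_to_workflow({"headline": "Example", "summary": "…", "tags": ["local", "ai"]})
    ```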

    Whilst I wasn’t entirely happy about replicating much of the functionality of WordPress, and I’ve subsequently had many thoughts that I should have just written a WordPress plugin, I think the Laravel infrastructure is a far better bet. I will provide a WP integration at some point so posts and pages can be pumped to websites for people who don’t want to switch CMSs (which is a very good decision in most cases).
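
    For what it’s worth, the WP side of that integration is fairly straightforward via the WordPress REST API; something along these lines (the site URL and credentials are placeholders, and real code would need proper error handling):

    ```python
    # Push a post into an existing WordPress site via the WP REST API, so content
    # can be syndicated without the site owner switching CMS. Uses a WordPress
    # application password for auth; site URL and credentials are placeholders.
    import requests

    WP_SITE = "https://example.com"
    WP_USER = "api-user"
    WP_APP_PASSWORD = "xxxx xxxx xxxx xxxx"

    def publish_to_wordpress(title: str, html_body: str, status: str = "draft") -> int:
        resp = requests.post(
            f"{WP_SITE}/wp-json/wp/v2/posts",
            auth=(WP_USER, WP_APP_PASSWORD),
            json={"title": title, "content": html_body, "status": status},
            timeout=15,
        )
        resp.raise_for_status()
        return resp.json()["id"]  # WordPress returns the new post's ID

    post_id = publish_to_wordpress("Curated round-up", "<p>Generated content…</p>")
    print("Created WP post", post_id)
    ```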

    Having deployed this latest iteration internally, I’m very happy with it. There’s a ton of things we can now put on the roadmap and deploy very quickly. I’ve got a snag-list of quality-of-life improvements for the workflow, since it’s a bit clunky at the moment. Once those are done we’ve got some internal projects planned that will use the system – we want to use what we build not only as demos but as fully fledged side businesses/hustles alongside our main product. We’ve also got some demos planned for a couple of companies that have shown some interest. And we’ve started up a secondary platform niche in the affiliate market.

    This all sounds pretty good, but there’s lots of work ahead to continually refine the product to something that will suit the market. But at least we’re getting somewhere.

    That’s it for today!

  • Days 217 – 224

    Work doodle. The days are flying by as I’m working on getting this AI web application together.

    This was AI’d from:

  • Day 205 of AI Startup – Recommencing blog with some light artwork

    Well, summer is over, and for various reasons I got knocked off track with this blog. But I have continued working in the AI world.

    Sometimes AI can just be for a bit of fun. Today’s work doodle was this lovely fella.

    In a future post I will bring things up to date on what I have been working on.

    So I asked ChatGPT to redraw it, and I’m not sure why I was still so shocked, but it was really surprising how good it was. This one was in a ‘cyberpunk’ style.

    ‘Claymation style’

    The next ‘street art / graffiti’ style is probably my favourite

    Renaissance style

    ‘hyper realistic in new york dark wet night setting’

    and then erm this kind of went wrong

  • Day 147 – Current ARC-AGI-2 progress proves ‘AI’ is not intelligent (well, duh).

    The Arc Prize is a programming competition to drive progress towards Artificial General Intelligence (AGI). It is now in its second iteration: ARC-AGI-2.

    What I find really interesting is that the latest challenge is really easy for humans, but LLMs have a 0% success rate, and other AI reasoning systems get less than 5% success!

    If there is ANY proof that the hype over LLMs being ‘Artificial Intelligence’ is somewhat misleading … it’s the fact that current LLMs cannot get anywhere close to a decent success rate.

    Take the human test for yourself.

    Technical guide here


  • Days 115 – 146 – Placeholder

    This post is a placeholder to discuss progress during this timeframe.

  • Day 114 – Self Awareness

    Self awareness is the path to wisdom. True self awareness only comes from learning what it means to quieten the mind – your thoughts. I remember years ago my life was literally run by my thoughts – my mind would never shut up or stop imagining some idea. Through meditation, and maybe just general growing up, I’ve grown beyond the mind level. It takes away the sharpness of the mind, and can lead to an inward collapse as you realise your thoughts were just the programming of a mask you took on as a very young person; but that’s life. Also, not everyone is due to go through this journey in this lifetime.

    Self awareness doesn’t always mean bliss. In fact, the more self awareness you get, the trickier things can become. But overall, I would rather be more aware of my nature than unaware. Through self-awareness you have control and choice over your thoughts and decisions in the moment. The silence of self-awareness can be overwhelming. When you become fully aware, it’s a challenge to realise you are completely responsible for what’s going on in your life and life situation.

    Self awareness is a nice place to be though. The noise of the mind that doesn’t shut up can go on for years, and then when you come out of that through plenty of time spent in meditation, you realise your mind *was* your personality. You were a programmed mind, and that’s the way it had to be. But if you keep walking the path of awareness, it can get a little messy here. You literally obtain a new identity.

    This identity is often completely shattered by the growth pains it had to go through to get to this point. The soul’s awakening, or whatever you want to call it – the soul’s realisation – or is it just the human body reaching another plane of consciousness in mid-life? At any rate, the beauty of self awareness is you only really ever need to worry about what’s going on in the moment. You can plan ahead, but you still only do that now. Now you deal with things in the moment; which can incidentally be difficult for people who are still on calendar time, so you need to bridge the gap.

    No real idea why I’ve talked about this today on my AI blog. But why not.

  • Day 113 – Just some thoughts on LLM files

    What are these things called ‘parameters’?

    Are they sentences? Are they words? Are they something else?

    What actually is a language model file?

    I was under the impression that LLMs were single files, but I realised today that I hadn’t actually double checked this.

    Then I wondered: is there a really simple example of how to build your own tiny, tiny LLM? Even if it wasn’t predicting words correctly yet, it would be good to know how that works.

    So, here are my thoughts on this:

    • I am assuming some sort of tool (maybe PyTorch?) is used for building this? There’s a rough sketch of the idea just below.
    • How would I prepare the original data set of files?
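
    For the PyTorch question above, here’s the tiniest sketch I can think of: a character-level bigram model. It is nowhere near a real LLM, but it shows the moving parts (tokenise, embed, predict the next token, train, sample), and its ‘parameters’ are just the learned weights – the same thing real LLMs have, only billions of times more of them.

    ```python
    # A deliberately tiny character-level "language model" in PyTorch.
    # It only learns which character tends to follow which, but the steps
    # (tokenise -> embed -> predict next token -> train -> sample) are the
    # same shape as in a real LLM.
    import torch
    import torch.nn as nn

    text = "the quick brown fox jumps over the lazy dog. " * 50
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}   # char -> integer token id
    itos = {i: c for c, i in stoi.items()}       # token id -> char
    data = torch.tensor([stoi[c] for c in text])

    class BigramLM(nn.Module):
        def __init__(self, vocab_size: int):
            super().__init__()
            # one row of logits per token: "given this char, how likely is each next char"
            self.table = nn.Embedding(vocab_size, vocab_size)

        def forward(self, idx):
            return self.table(idx)  # (batch,) -> (batch, vocab_size) logits

    model = BigramLM(len(chars))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(500):
        i = torch.randint(0, len(data) - 1, (64,))
        x, y = data[i], data[i + 1]              # the target is simply the next character
        loss = loss_fn(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()

    # sample 80 characters from the trained model
    idx = torch.tensor([stoi["t"]])
    out = "t"
    for _ in range(80):
        probs = torch.softmax(model(idx), dim=-1)
        idx = torch.multinomial(probs, 1).squeeze(1)
        out += itos[int(idx)]
    print(out)
    ```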

    There’s the Ollama package, which lets you run just about any open LLM locally.
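
    As far as I can tell, using it from Python is about as simple as it gets, assuming the Ollama server is running locally and the model has already been pulled (e.g. ollama pull llama3.2 on the command line):

    ```python
    # Talk to a locally running model through Ollama's Python client
    # (pip install ollama). Assumes the Ollama server is running and the
    # model has been pulled beforehand.
    import ollama

    response = ollama.chat(
        model="llama3.2",  # any model from the Ollama library will do
        messages=[{"role": "user",
                   "content": "In one sentence, what is a parameter in an LLM?"}],
    )
    print(response["message"]["content"])
    ```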

    • Are they singular files?
    • Are there different types or formats of LLM files?
    • Are these LLMs stored in RAM and/or GPU memory?
    • What resources (time, energy) does it take to train LLMs?
    • Who is training LLMs at the moment?
    • What does customising a model mean?

    Moondream 2

    On my travels today I discovered Moondream 2 … it was on the list of Ollama models that I was reading through. It’s a micro LLM for vision. Will look into this another time.

  • Day 112

    Just taking a bit of time on this Saturday morning to update the blog.

    I’ve not been in the headspace of daily updates for a good few weeks/months now, for personal reasons. But that’s life and, alas, normal service will eventually be resumed. I have the vision for the next stage of this blog site – initially I just wanted a basic journal platform, and WP is always going to win at that.

    So, what’s going on with AI?

    • It’s still shocking people at what it can do
    • It’s still very hard to keep up to date with all the developments
    • It’s still improving

    For me, the big change is being able to write Python scripts without having a real grounding in Python … having a generic programming grounding now means I can produce at least small, useful tools or scripts in other languages that I’m not traditionally familiar with.

    I’ve been doing quite a bit with Playwright – absolutely phenomenal for data trawling … and this is without using LLMs at the moment to look at that data. But it’s all very fascinating when you consider the value you can now add for companies on the data-mining front.
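
    For anyone curious, a bare-bones trawl looks something like this: headless browser, load a page, pull some text out, no LLM involved. The URL and the h2 selector are placeholders; the right selector obviously depends on the site.

    ```python
    # Minimal Playwright trawl: load a page in a headless browser and pull out
    # some text. Requires: pip install playwright && playwright install chromium
    from playwright.sync_api import sync_playwright

    def fetch_headlines(url: str) -> list[str]:
        with sync_playwright() as p:
            browser = p.chromium.launch(headless=True)
            page = browser.new_page()
            page.goto(url, wait_until="networkidle")
            headlines = page.locator("h2").all_inner_texts()  # selector is site-specific
            browser.close()
        return headlines

    for h in fetch_headlines("https://example.com/news"):
        print(h)
    ```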

    Devices will eventually have the equivalent of Cursor running as their own containerised environment, from which they can safely run things like Python scripts. So you will be able to talk to your laptop, ask it to go and get the latest information from (insert any website here) and interpret it as per some pre-chosen rules you’ve created, and it will write a Python script that goes out, gets that data for you, and puts it into some sort of data pipeline.

    LLM Output Containers

    I’m not an expert at containers, but I can clearly see that LLMs will eventually want to be able to start executing code on the user’s behalf. They already do this to an extent, but it will be abstracted away more. For instance, Cursor will write your Python program and run it, but you still need to set all that up, and it does this within your own local environment … whereas it will probably have its own container quite soon.
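
    One way this could look today, just to sketch the idea: write the generated code to a temp directory and run it in a throwaway Docker container with no network access. The image name and resource limits here are illustrative, not recommendations.

    ```python
    # Run generated Python code in isolation: write it to a temp file and execute
    # it inside a disposable Docker container with no network access.
    import subprocess
    import tempfile
    from pathlib import Path

    def run_in_container(python_source: str, timeout: int = 30) -> str:
        with tempfile.TemporaryDirectory() as tmp:
            script = Path(tmp) / "generated.py"
            script.write_text(python_source)
            result = subprocess.run(
                [
                    "docker", "run", "--rm",
                    "--network", "none",          # no network inside the sandbox
                    "--memory", "256m",           # cap memory use
                    "-v", f"{tmp}:/work:ro",      # mount the script read-only
                    "python:3.12-slim",
                    "python", "/work/generated.py",
                ],
                capture_output=True, text=True, timeout=timeout,
            )
        return result.stdout or result.stderr

    print(run_in_container("print('hello from inside the container')"))
    ```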

  • Day 111 – AI & The Content Model

    The existing model for marketing was:

    • make a website
    • put useful content on that website
    • when people search with Google, Google will show your website
    • monetise

    Google AI is now giving the answers to its users without sending them to the end website. This means Google will index your content, possibly whitewash it, and then just give it to its customers.

    This is a fair simplification of where we are.

    The issue is that people are now interacting with LLMs (full ongoing spoken conversations are now imminent) and the LLMs already have 85%–95% of the desired answers within them. This inherently means people will use search engines less.

    I don’t know exactly what the way forward is at the moment, but it’s important for people to understand what’s going on.

  • Day 110

    I’m glad I started this process, but I’m starting to think it has served the purpose it was intended for.

    I will say that consistently producing original content every day is going to be the only way for many people to compete in this increasingly crazy, content-abundant world.

    In most cases I want either news or information digested; for the rest of the content, I am only interested in original thoughts.

    Most AI reports that I get sent, or anything that is “someone’s thinking” but is actually AI … is not something I am going to engage with. I do like to see what it says, but the point of writing a report is to do the thinking.