
  • Day 114 – Self Awareness

    Self-awareness is the path to wisdom. True self-awareness only comes from learning what it means to quieten the mind – your thoughts. I remember years ago my life was literally run by my thoughts – my mind would never shut up or stop imagining some idea. Through meditation, and maybe just general growing up, I’ve grown beyond the mind level. It takes away the sharpness of the mind, and can lead to an inward collapse as you realise your thoughts were just the programming of a mask you took on as a very young person; but that’s life. Also, not everyone is due to go through this journey in this lifetime.

    Self-awareness doesn’t always mean bliss. In fact, the more self-awareness you gain, the trickier things can become. But overall, I would rather be aware of my nature than unaware. Through self-awareness you have control and choice over your thoughts and decisions in the moment. The silence of self-awareness can be overwhelming. When you become fully aware, it’s a challenge to realise you are completely responsible for what’s going on in your life and your life situation.

    Self-awareness is a nice place to be, though. The noise of the mind that won’t shut up can go on for years, and then, when you come out of that through plenty of time spent in meditation, you realise your mind *was* your personality. You were a programmed mind, and that’s the way it had to be. But if you keep walking the path of awareness, it can get a little messy here. You literally take on a new identity.

    This identity is often completely shattered by the growing pains it had to go through to get to this point. Call it the soul’s awakening, the soul’s realisation, or whatever you like – or is it just the human body reaching another plane of consciousness in mid-life? At any rate, the beauty of self-awareness is that you only ever really need to worry about what’s going on in the moment. You can plan ahead, but you still only do that now. Now you deal with things in the moment; which can, incidentally, be difficult for people who are still on calendar time, so you need to bridge the gap.

    No real idea why I’ve talked about this today on my AI blog. But why not.

  • Day 113 – Just some thoughts on LLM files

    What are these things called parameters?

    Are they sentences? Are they words? Are they something else?

    What actually is a language model file?

    I was under the impression that LLMs were single files, but I realised today that I hadn’t actually double checked this.

    Then I wondered: is there a really simple example of how to build your own tiny, tiny LLM? Even if it wasn’t predicting words correctly yet, it would be good to know how that works.

    So, what are my thoughts on this:

    • I am assuming some sort of tool (maybe PyTorch?) is used for building this
    • How would I prepare the original data set of files? (A minimal sketch of the whole idea follows below.)
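
    Here is a minimal sketch of that tiny, tiny LLM idea in PyTorch – a character-level model trained on a throwaway sentence. The corpus, layer sizes and training loop are all placeholder choices, and real LLMs use transformer layers rather than a little GRU, but the shape of the thing is the same: turn text into numbers, learn to predict the next token, save the parameters to a file.

    ```python
    # A deliberately tiny character-level language model in PyTorch.
    # Everything here is illustrative: the "corpus", sizes and training
    # length are placeholders, not a recipe for a real LLM.
    import torch
    import torch.nn as nn

    text = "the cat sat on the mat. the dog sat on the log. "  # toy dataset
    chars = sorted(set(text))
    stoi = {c: i for i, c in enumerate(chars)}   # character -> integer id
    itos = {i: c for c, i in stoi.items()}       # integer id -> character
    data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

    class TinyLM(nn.Module):
        def __init__(self, vocab_size, dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)   # learned weights = "parameters"
            self.rnn = nn.GRU(dim, dim, batch_first=True)
            self.head = nn.Linear(dim, vocab_size)       # scores for the next character

        def forward(self, x):
            h, _ = self.rnn(self.embed(x))
            return self.head(h)

    model = TinyLM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    # Training: predict each next character from the ones before it.
    x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)
    for step in range(300):
        logits = model(x)
        loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Sample a few characters to see what it has (sort of) learned.
    ctx = data[:4].unsqueeze(0)
    for _ in range(40):
        next_id = model(ctx)[0, -1].argmax()
        ctx = torch.cat([ctx, next_id.view(1, 1)], dim=1)
    print("".join(itos[int(i)] for i in ctx[0]))

    # The whole model can be saved as a single file of numbers (its parameters).
    torch.save(model.state_dict(), "tiny_lm.pt")
    ```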

    There’s the Ollama package, which lets you run a wide range of open models locally.

    • Are they singular files?
    • Are there different types or formats of LLM files?
    • Are these LLMs stored in RAM and/or GPU memory? (A small illustration follows after this list.)
    • What resources (time, energy) does it take to train LLMs?
    • Who is training LLMs at the moment?
    • What does customising a model mean?
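
    For a few of these, the quickest way to get a feel is to poke at a small open model. A rough sketch, assuming the Hugging Face transformers library is installed, and using GPT-2 purely because it is small and freely downloadable:

    ```python
    # Poking at a small open model with the Hugging Face transformers library.
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # "Parameters" are just the learned numbers (weights), stored in tensors.
    n_params = sum(p.numel() for p in model.parameters())
    print(f"gpt2 has {n_params:,} parameters")   # roughly 124 million

    # On disk the download is a weight file (or several) plus small config files;
    # once loaded, those weights sit in ordinary RAM...
    print(next(model.parameters()).device)       # cpu

    # ...or in GPU memory, if you move the model there.
    if torch.cuda.is_available():
        model = model.to("cuda")
        print(next(model.parameters()).device)   # cuda:0
    ```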

    Moondream 2

    On my travels today I discovered Moondream 2 … it was on the list of Ollama models that I was reading through. Will look into this another time. It’s a micro LLM for vision.

  • Day 112

    Just taking a bit of time on this Saturday morning to update the blog.

    I’ve not been in the headspace for daily updates for a good few weeks now – a month, maybe – for personal reasons. But that’s life, and normal service will eventually be resumed. I have a vision for the next stage of this blog site – initially I just wanted a basic journal platform, and WP is always going to win at that.

    So, what’s going on with AI?

    • It’s still shocking people with what it can do
    • It’s still very hard to keep up to date with all the developments
    • It’s still improving

    For me, the big win is being able to write Python scripts without having a proper grounding in Python. Having a generic programming grounding means I can now produce at least small, useful tools and scripts in languages I’m not traditionally familiar with.

    I’ve been doing quite a bit with Playwright – absolutely phenomenal for data trawling… and this is without even using LLMs to look at that data yet. It’s all very fascinating when you consider the value you can now add for companies on the data-mining front.
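
    To give a flavour of the kind of trawling I mean, here is a minimal Playwright sketch in Python. The URL and the selector are placeholders – swap in whatever site and elements you actually care about:

    ```python
    # Minimal Playwright example: open a page and pull out some text.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://example.com")          # placeholder URL
        page.wait_for_load_state("networkidle")

        # Grab the text of every matching element on the page.
        headings = [el.inner_text() for el in page.query_selector_all("h1, h2")]
        for h in headings:
            print(h)

        browser.close()
    ```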

    Devices will eventually have the equivalent of Cursor running as their own containerised environment, from which they can safely run things like Python scripts. So you will be able to talk to your laptop, ask it to go and get the latest information from (insert any website here), interpret it according to rules you’ve chosen in advance, and it will write a Python script that goes out, gets that data for you, and puts it into some sort of data pipeline.

    LLM Output Containers

    I’m not an expert at containers, but I can clearly see that LLMs will eventually want to start executing code on the user’s behalf. They already do this to an extent, but it will be abstracted away more. For instance, Cursor will write your Python program and run it, but it still needs all of that setting up, and it does this within your own local environment… whereas it will probably have its own container quite soon.
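
    As a rough sketch of what that might look like (assuming Docker is installed locally – the file name, image and generated code below are all placeholders, not anyone’s actual implementation): write the generated script to disk, then run it in a throwaway container with no network access instead of directly in your own environment.

    ```python
    # Sketch of running LLM-generated code inside a disposable container.
    import subprocess
    from pathlib import Path

    generated_code = 'print("hello from inside the container")'  # pretend an LLM wrote this
    workdir = Path("sandbox")
    workdir.mkdir(exist_ok=True)
    (workdir / "script.py").write_text(generated_code)

    # --rm: delete the container afterwards; --network none: no internet access;
    # the mount is read-only, so the script can't touch the rest of the machine.
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",
            "-v", f"{workdir.resolve()}:/work:ro",
            "-w", "/work",
            "python:3.12-slim",
            "python", "script.py",
        ],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)
    ```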

  • Day 111 – AI & The Content Model

    The existing model for marketing was:

    • make a website
    • put useful content on that website
    • when people search with Google, Google will show your website
    • monetise

    Google’s AI is now giving answers to its users without sending them on to the end website. This means Google will index your content, possibly whitewash it, and then just give it to its own users.

    This is a fair simplification of where we are.

    The issue is that people are now interacting with LLMs (full, ongoing spoken conversations are now imminent), and the LLMs already hold 85–95% of the answer they’re after. That inherently means people will use search engines less.

    I don’t know exactly what the way forward is at the moment, but it’s important for people to understand what’s going on.

  • Day 110

    I’m glad I started this process, but I’m starting to think it has served the purpose it was intended for.

    I will say that consistently producing original content every day is going to be the only way for many people to compete in this increasingly crazy, content-abundant world.

    In most cases I just want some news or information digested; for the rest of the content I consume, I’m only interested in original thoughts.

    Most AI reports that I get sent – or anything presented as “someone’s thinking” that is actually AI – are not something I’m going to engage with. I do like to see what the AI says, but the point of writing a report is to do the thinking.

  • Day 109

    Been observing vibe coding, and also doing some of my own.

    The backbone of the new AI boom will be the processing power. I don’t need to know much about this, but it’s worth taking a quick look at things.

    So, for instance, do OpenAI or any of the other AI companies release information on their power consumption?

    Is there any official power consumption data?

    Does the hardware vary between the enterprise companies, or are they all pretty much on the same setups?

    I suppose at this point you have two ways of looking at it:

    1 – How good are the micro-level language models and frameworks going to get, i.e. the ones aimed at low-powered devices?
    2 – People are going to get lazier and overuse AI for everything, which will create a radical increase in energy demand.

    It turns out that, in the main, the energy used to actually produce a single ChatGPT response is fairly negligible.
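
    A rough back-of-envelope makes the point, with the caveat that both figures below are illustrative assumptions rather than official data – published estimates for a single response range from roughly 0.3 Wh to a few Wh:

    ```python
    # Back-of-envelope only: illustrative assumptions, not official figures.
    WH_PER_RESPONSE = 0.3      # assumed energy per ChatGPT-style response, in watt-hours
    WH_PER_KETTLE = 100.0      # roughly what boiling a litre of water takes

    print(f"~{WH_PER_KETTLE / WH_PER_RESPONSE:.0f} responses per kettle boil")  # ~333

    # The picture changes at scale: a billion responses a day at the same rate.
    daily_mwh = 1_000_000_000 * WH_PER_RESPONSE / 1_000_000
    print(f"~{daily_mwh:.0f} MWh per day at a billion responses a day")         # ~300
    ```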

  • Day 108 – I don’t know how vibe coding ends

    My friend has recently been hammering Cursor, and has gone truly into the world of vibe coding. One thing I have realised is that if you have ever done any sort of serious development, you will never truly be a vibe coder.

    A vibe coder is not constrained by the traditional restraints of the programmer’s mindset. This is both an advantage and a disadvantage. A programmer will be unhappy about the generation of duplicate components and functionality, but the AI will always keep on trying to create your solution until you give up trying to prompt it. Most developers are simply too ‘programmed’ to see ‘reality’ – whereas someone new is just watching AI build something in front of them, in a way they have never, ever been able to do before. As developers we are inherently biased – we see a computer doing the job that we have traditionally done… and you have to be quite disciplined, and self-aware enough, to recognise that bias and stop it colouring your judgement.

    Anyway, the point is… they aren’t doing the job quite right yet, and that’s because we haven’t yet got the prompt engineering right.

  • Day 107 – Mac Mini Me

    Still a few days behind on the blogging, so I’m adding a general thought for day 107.

    Recently bought a new Mac Mini M4 Pro. Entry level.

    I potentially could have upgraded the RAM and storage (especially since 500 GB gets eaten up pretty quickly).

    Overall, the experience of going back to a desktop with two monitors, rather than a laptop, feels quite different. I don’t have to keep unplugging wires every time I go out to a meeting, which happens at least every other day.

    It’s also really nice just having a blank new OS to start with. As with a nice new notebook, you try to keep it all clean to start with – that’s the feeling I’ve got with it right now.

    The Mac Minis are excellent value for money, and I was even surprised that it has a speaker that pretty much does the job for everyday, work-related audio. A built-in microphone would have been welcome, but it’s probably impractical.

    That’s it for now.

  • Day 106 – Back To The Tech Specs with AI

    Today I realised that technical specifications have come full circle in importance.

    We’ve been agile for so long, with a focus on short sprints, and I don’t think we need to throw away those benefits.

    But with AI, you can now plan the system on a very wide scale from the beginning.

    I also realised today that I have been thinking too small when it comes to AI. Whatever you think the capabilities of AI are, you are probably underestimating it.

    Today we put together a loose specification of about 3,000 lines. Broadly speaking, it included:

    • database schemas
    • user stories
    • features
    • UIs
    • component structures

    Using Cursor, we asked it to amend a technical specification MD file as we mapped out the idea from scratch again. Thanks to Shaun, who pointed this idea out – I’ve sort of tweaked it since.

    From this technical specification we asked it to build a vanilla HTML/JS frontend prototype.

    The purpose of this was to see whether the AI could correctly interpret the spec.

    So we went back and forth between the tech spec and the prototype, and were able to hone it after a few iterations. This allowed us humans to actually see what we were building.

    I won’t share the prototype UI yet, but suffice to say it reduced several days of work down to an hour or so.

    Crazy times.

  • Day 105 – Cursor

    Today my business partner took his codebase and gave it to Cursor.

    The thing is, since he wasn’t hooked up to GitHub, we took the easy route and downloaded the zip file, and I just told him to go play with it.

    He spent the entire day upgrading the application and from the sounds of it integrated a ton of awesome new features.

    And then when it came to showing it to me, it had stopped working.

    It turned out, after all that, that he had lost the work. Something went radically wrong – either some files were deleted, or it just got mega confused.

    No Git commits!

    At any rate, his mind was blown. We will pick it up again tomorrow.