Blog
Random thoughts made from time to time, or interesting snippets stumbled across around the web
My Take On AI

Artificial Intelligence: by now, if you’re even remotely connected to the world in general, you’ll know what it is, what it does, what it might mean and more. Or do you?

There’s a lot to unpack when it comes to AI and, in my view, there are different levels and use cases for it. Opinions generally range from “it’ll end us all in some kind of holocaust where the machines decide we’re the problem” to, “Meh, whatever”.

If you’re a fan of sci-fi, you’ll be well aware of AI and its potential implications, both good and bad, and, to double down on the word there, the implementation matters.

Then you have robotics, which often goes hand in hand with AI; think Data from Star Trek through to the various Terminators and pretty much any other machine made to be evil. But here’s the thing: it’s a machine. It doesn’t know good or evil beyond what we humans teach it, unless it achieves some sort of sentience and learns for itself.

Hopefully, none are raised in the slums of Glasgow, London, Manchester or the likes, as then we’re really in trouble. It might even have a penchant for Buckfast, scary thought right there. 

So the question is, is AI good/bad, underrated/overrated or none of that? Let’s explore it a bit. 

Nothing New

Most AI that I’ve seen isn’t really anything new, business-wise.

By that, I mean the consumer-level stuff is largely asking a chatbot a question and it spits out an answer. Sure, some of the answers are very clever, some very clever indeed, but just as often it’s wrong.

We’ve had automated chatbots for years; they’re nothing new.

If you look in the jobs market (at the time of writing) you’ll see scores of jobs training AI. They’re not really training AI as such; they’re trying to train really clever chatbots to be smarter than the average chatbot.

But if you call it AI, well, you can get funding because it’s “cool”, people might accept it more, and so on.

There is some merit in these things, but caution is required as the promises may not meet expectations.

Follow The Money

Before we go any further, and with the above now in your head, consider the motivations to pump so much effort and money into AI “stuff”.

You have OpenAI, Nvidia, Google, Microsoft, Apple, IBM, among a raft of others, all spending literal boatloads of cash on AI development. Why?

I suspect it’s because the markets have put, quite literally, trillions of dollars into the great hopes and promises that AI will make them a shit-ton of cash down the road, and they’re all terrified that they won’t be in the race and profit from it. So investors throw millions upon millions at “possibilities” as opposed to tangible realities.

With all that money swilling around, would it come as a shock to anyone, really, that some of the promises being made might not turn out to be true?

All the more so when, I suspect, a lot of investors have only a basic understanding of what it is they’re investing in.

Which is great if you’re a company with some neat parlour tricks, a web interface and a convincing story to tell.

Dotcom bubble all over again? Maybe.

The warnings are there: several companies offering “AI” services have already been shuttered, and I suspect there will be more as investors realise they might be white elephants after all.

An Economic Paradox

One of the great promises of AI, to businesses, is that it will reduce costs by reducing the requirement for staff.

Okay, that’s fine, but the thing is, if some figures (projections, so take them with a pinch of salt) are to be believed, then anything from 30-50% of the workforce worldwide becomes redundant.

If that’s true, then 30-50% of the potential consumer base for the products or services those same companies sell is about to be unemployed, with no money to spend on what’s being sold.

I am simplifying but you get the idea. 
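The back-of-the-envelope arithmetic behind that looks something like this (all figures are illustrative assumptions, not real data):

```python
# Back-of-envelope sketch of the paradox described above.
# The workforce size and displacement rate are invented for illustration.

workforce = 100_000_000      # hypothetical working population
displacement = 0.40          # assume 40% of jobs automated away (mid-point of 30-50%)

unemployed = int(workforce * displacement)
remaining_consumers = workforce - unemployed

print(f"Jobs lost: {unemployed:,}")                    # 40,000,000
print(f"Consumers left with income: {remaining_consumers:,}")  # 60,000,000
```

The point being: the same percentage you shave off the payroll comes straight off the customer base for anything sold to wage earners.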

The counter to that is that AI will create jobs, but they’re completely different jobs requiring a completely different skillset.

For my industry, appliances and repairs, etc., it will (potentially) have an effect, in some areas perhaps a profound one, but for the likes of field service, spares and such, not really. Well, at least until we get the Terminator-type robots that can go round and fix stuff, but by then the robots driven by AI will have wiped us all out anyway, so not to worry.

All this though, in my view, hinges on the level of AI you’re employing. 

What Is AI?

I dumb it down to three levels, I like doing that, so people can understand it better and it’s not information overload. Here are my three basic takes on AI and yes, I am dumbing down.

Level 1 - Actual AI

I call this “Actual AI” as it’s a machine that is, or is bordering on, sentient. It is the scariest form of AI.

By that I mean it can think for itself, problem solve, create (from nothing) and come up with solutions on its own, completely autonomously. And, be correct virtually every time, effectively beyond the capabilities of humans. 

It can form relationships, have empathy, understand emotion, and relate. 

To date, this is science fiction. Possible, perhaps, but still the stuff of sci-fi. For now. 

With the advancements in quantum computing, this might become possible, but this is almost by definition the creation of “life”, elevating humankind to Godlike status, and I don’t think we’re quite there just yet. 

But some AI models are reportedly displaying signs of lying and self-preservation, which kinda indicates some sort of “personal awareness”, and that, without guardrails in place, could be potentially dangerous, as many leading minds in the field have warned.

So far as I can find out, these are advanced models used for research though, not commercial products. 

Level 2 - A Real Clever Machine

Not quite Level 1; think the Pound Shop version. These miss the mark as the machine doesn’t achieve consciousness, so it can’t “think” for itself, but it can appear to do so.

I think this is where the most cutting-edge AI is now, perhaps. 

There’s no chance of it waking up one morning and thinking, “these human things are a blight on the planet, let’s wipe them out”. It can’t make a decision like that as it can’t “think” for itself autonomously. 

It can make good cat videos and deepfakes, though, which is a problem of itself and worthy of a whole other conversation about the morality and truth-twisting of that. 

More than that, given enough compute power, it can solve a shitload of humanity’s problems because it can correlate data, extrapolate, test and spit out a possible solution to any myriad of problems.

But it cannot create data, only use what it is fed, however it’s fed.

Level 3 - Seems Bright, Isn’t

A lot of consumer-grade “AI” sits here, I reckon. It’s not really “AI” per se. 

These “AI” things aren’t really AI at all; they’re little more than a magic show at best, parroting answers already given, perhaps parsing them into digestible bytes for the masses, and accuracy, well, not great.

It looks great, fires up some kinda half-decent images and videos if you ask it, but expect errors.

This is largely the chatbots we have all seen before, limited scope, limited responses, limited knowledge, repetitive and bloody annoying. Only on steroids. 

Think, most customer service bots. Great idea (from a money-saving business perspective), but for customers, probably not a great experience. 

Human Directive - AI Nightmare

From all I can see we’re mostly stuck at Level 2. At best. 

If we get to Level 1, the AI will, when we ask it to do something dumb, tell us to feck off and stop being so stupid. At that point it should have the ability to understand that humans are oh so fallible and need to be told to STFU at times.

Of course, assuming it follows the likes of Asimov’s rules about not harming humans, etc. 

Asimov saw it as a robot, I know, and not a server farm in rural Iowa or wherever. 

For now, though, assuming that no AI has achieved full self-awareness and consciousness, they are wholly reliant on humanity’s current body of knowledge. And, by extension, human instruction on what to do or not to do. 

It then relies on humans to do anything physical. For now.

If AI does something “evil”, it’s because we instructed it to do so. 

The machine isn’t the problem (as usual), it’s the people using it that are. 

Then there’s the moral dilemma around, is this now a sentient being and if so, does it have rights? Another huge moral issue and one often tackled in sci-fi. 

AI Slop

This is a newish term that I’ve seen a number of times now. It refers to the kind of sloppy output you get from AI because, for now, all they really do is take a whole bunch of info from various sources, stitch it together and spit out some sort of summary for you.

Or, take a bunch of images based on the instructions and stitch those together, same with video. 

What it doesn’t have is the intelligence to work out where it’s gone wrong, and there are a legion of things out there online that demonstrate this, even with the very latest cutting-edge models at the time of writing. 

Yes, they are improving and getting better but the sheer cost to progress is enormous. 

And people are seeing these errors, even looking for them, as some are quite funny.

More sinister is that images, deepfakes and so on, are widely circulated at times, and a good many, if not most, of these are nefarious in nature. Part of the problem there is probably that people will believe what they want to believe and fail to do any research. It’s like a modern-day digital gossip machine on steroids when you couple that with the toxicity of social media. 

Then there are AI-generated online articles and news with errors, factual errors, in them, yet people take them as being real.

The Internet was bad enough for misinformation before this burst of AI stuff; it looks set to get worse, and people are struggling to differentiate between what is fact and what is fiction.

Economic And Environmental Impacts

I touched on costs but didn’t really dig into it too much; however, at the time I wrote this, spending on AI development was expected to exceed a quarter of a trillion dollars in 2025. And that’s just what’s publicly known. 

Let that sink in. 

It’s $250 billion in a year. 

But what gets skipped a lot is the cost of running the machines that enable these AI powerhouses, as you need very powerful hardware that guzzles a vast amount of energy and requires a lot of cooling, etc.

And there’s a bunch of them all competing against one another to be the first or best at particular functions. 

For what? Funny cat pictures and flawed essays?

I completely understand the need to explore and develop things, but it seems to me that these companies are all chasing Ahab’s whale in a bid to keep investors interested. 

Then there’s the human cost. The great promise of AI to many businesses is that, by getting smart enough, it will be able to automate a lot of things and operate with much lower costs due to fewer staff being required.

But if that is true, and the predictions pan out correctly, with 30-50% of the labour force no longer employed, then for a lot of goods and services you’ve just lost 30-50% of your consumer base. Meaning your market and scope are a lot smaller than they were.

There are counters to this that reckon AI will also create new jobs, but these won’t be the same jobs, and how many of those who lose theirs would be able to transition into the new industries spoken of is very unclear.

AI does have the potential to rewrite a lot of things in the economy, much as the industrial revolution did and as the information age of the internet we are in now has done, but you have to wonder if that’s a positive thing or not. 

And if as many people as is being mooted do end up unemployed, then there will be seismic changes to politics as well.

The ramifications could be immense and I’m only touching on it here. 

Lazy Faith & Acceptance

This bothers me as I know how lazy people can be, generally, when it comes to looking for stuff online, among other things.

Will people just accept whatever answers are spat out by a random AI? Will they even trust them at all? Or indeed, how long will it take for people to fall into the trap of just blindly accepting whatever their robotic overlords are telling them?

A lot of the promise of AI is in fields such as medicine, energy, science and so on. Will scientists actually trust these clever machines, and if they do, will it be blind faith, and will they stop thinking for themselves?

The problem of brain rot isn’t confined to the masses, it could affect so much more. 

Humans’ predisposition to just do what’s easy concerns me here, as I think that, given the choice of having an answer handed to them rather than going to the effort of doing the work, they’ll take the easy route if not every time, then most of the time.

I do think, though, that the bigger problem for AI services is getting people to accept them and this is where human behaviour comes into play. 

People in the UK, as an example, do not like (in fact, mostly hate) being bounced to customer support in other regions, and in my experience they really don’t like automated stuff, chatbots and the like. They’ll tolerate them, but only to a point.

Humans appear to mostly want to talk and interact with other humans, not robots. 

They might think it’s cool for a time, but most people seem to prefer to talk to a human, in my experience.

Exponential Growth

Of course, there are surveys and studies out there telling businesses that the use of AI chatbots is exploding and predicting huge growth, but just look at who’s commissioning these studies. Invariably, it’s companies that have some stake in it, so I’d urge caution when looking at them.

They’ll even tell you people think chatbots are great and so forth, but is that the truth? Not that it matters really as, like it or not, AI chatbots are here to stay. And grow.

It also has to be said that for some tasks, they are very useful and can save a lot of time by answering mundane repetitive questions without the need for a human to do it. 

Hence the huge growth in “AI”, even though a lot of it isn’t actually AI as such, I’d suggest; it’s really clever chatbots using some very tricky software to mimic a sort of human response, and with the resources being poured in, they’ll only get better.

It is therefore completely understandable why companies are jumping on board with all this stuff, as it could reduce costs and improve service levels. There’s a very strong case for it on the whole.

But I can’t see it working everywhere. 

Falling Down

For example, in our industry, were you to, say, ask for the part number of a component in a particular model, the training would need to be outstanding before you could trust that what you got back was correct.

The reason is that what AI to date seems to lack is nuance. It wouldn’t know that a component can be labelled with multiple different descriptors by different people or makers, and layered onto that is the nuance of what to call it or how to describe it.
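To make that concrete, here’s a toy sketch (the part number and descriptor names are all invented) of why a naive lookup falls over when different makers label the same component differently:

```python
# Toy illustration of the descriptor nuance problem described above.
# The part number and the synonym list are made up for the example.

catalogue = {
    "WM-12345": {"door seal", "door gasket", "door boot", "bellows"},
}

def find_part(query: str):
    """Naive lookup: only succeeds if the caller happens to use a known synonym."""
    q = query.strip().lower()
    for part_number, names in catalogue.items():
        if q in names:
            return part_number
    return None

print(find_part("door gasket"))    # matches: WM-12345
print(find_part("porthole seal"))  # no match, though an engineer would know the part
```

A system that handles this properly needs an exhaustive, maintained synonym mapping per maker and model, which is exactly the kind of training effort and ongoing cost questioned below.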

It’s possible that it could be overcome, but at what cost and how long? Would suppliers or makers see the value long term in that investment, or would it be an ongoing money pit?

This, I think, is where there is a bit of a quandary for a lot of businesses, as it can be unclear just what the costs of using AI might spiral into. For larger businesses looking to answer relatively simple and repetitive questions on a website, or even voice calls, it very likely is worth it.

For more complex tasks, it may not be as easy to put a price tag on them. 

When you look at the results now, you get incorrect answers because the source being drawn from is wrong, images rendered with people having six fingers or floating, and many other errors. That is not acceptable for complex tasks like parts lookups, because errors cost you time, money and customers.

It has its place for sure, but based on what I see right now, I can’t see it replacing humans just yet. 

And AI isn’t going to pop round and repair your washing machine anytime soon. Chances are before it’s able to do that we’ve all been wiped out by Terminators in any event. 

Where AI Is Great

Some, but not all, content creation. 

If you ask AI to create an image and, increasingly, video it does a fairly decent job of it. Mostly. 

Some of it just doesn’t make any sense at all, hell knows what it’s trying to do, but it can create compelling images and video so long as you don’t scrutinise it too much.

But that requires guidance from an actual human, and the ability to effectively edit the image or video still rests with a human, so we humans aren’t quite obsolete just yet.

Business Blinded

What concerns me, and I’ve seen a lot of it lately, is that due to all the hype around AI we’re seeing a deluge of warnings, dire and otherwise, throughout the business community, and a lot of people panicking about it.

For sure, it’s going to affect things like web search, and you can use it to deal with a lot of common customer queries and more, but will it live up to the hype?

That’s the quandary, as for now a lot of it is hype. There’s all this talk around AI and what it can, and indeed might, be able to do, and all that hype is driving more investment into the field. The more money that swills about in any given area, the more some will gravitate towards making some sort of offering and looking for that investment.

Just look at Elon Musk. He was part of OpenAI, then wasn’t, a battle went on, and the next thing is he’s developing Grok. Well, not him personally, but he’s driving that and trying to get money to pump into its development.

Not all these AI startups will survive; in fact, I reckon if one or two, perhaps a handful, do, that’d be a good outcome. Then consider: will they ever be able to return the utterly mind-blowing levels of cash that have been sunk into them?

I have my doubts they ever will be able to do so.