Here be dragons: Use AI responsibly

Intro

I’m not an expert in GenAI. I’m a power user, a tech enthusiast, and a realist, with a thirst for knowledge and experience regularly using this stuff since February 2023.

I’m also a Security Automation Engineering Lead at a FTSE 100 company, so:

  • I’ve worked with some incredibly talented people in this space (who share my views).

  • I’ve seen how non-technical people - including executives and senior management - perceive, use, and advocate for AI usage.

  • I’ve worked directly with some of the most notable vendors in the AI space, so I’m familiar with how they push AI.

In what seems to be a new campaign of mine, this is everything I feel needs to be said but hardly anyone is saying.

The tips and guidance I’ve written here have been broadly stable over the past 3 years, so hopefully they’ll stay good for a while, despite the unprecedented speed of AI development.

TL;DR: Deploy it, but do so responsibly, not blindly. It absolutely has its uses, but do not under any circumstances treat it as anything close to a silver bullet, or it’ll go straight into your own foot. Here be dragons. Believe me, I’m Welsh.

Index

Learn capabilities

To be extremely simplistic, GenAI (LLMs specifically) works by predicting words. That’s it. Models are created by having them read pretty much everything that’s ever been written down, building a neural network of which words come up in which contexts. And we don’t know how it works (link), but it does.
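To make that concrete, here’s a toy sketch of the idea. This is absolutely not how real models work - they use neural networks over vast corpora, not word counts - but the core objective is the same: given the words so far, predict the most likely next one.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus. Count which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    # Return the most frequent follower of `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - seen twice, vs "mat" and "fish" once each
```

Notice there’s no understanding anywhere in there - just statistics about what tends to follow what. Scale that up by a few trillion words and a few billion parameters and you get something that looks a lot like understanding, which is exactly why the flaws below exist.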

Because of this - and because these models still have to run on a computer somewhere - there are a number of problems, with mitigations of varying degrees of success.

Here’s what I mean:

  • Flaw: It has no innate understanding of facts, and it regularly makes things up (or “hallucinates”).
    Mitigation: Use web search and/or "thinking" modes. Efficacy: High. Cost: High.

  • Flaw: It does not learn and is very general.
    Mitigation: Use the latest models, if possible ones with domain-specific training. Use "memory" features and/or Retrieval Augmented Generation (RAG) so it can always reference information unique to you and your environment. Efficacy: Medium. Cost: Low.

  • Flaw: It can only keep track of so much context.
    Mitigation: Break up sessions / chats into dedicated ones. Efficacy: Medium. Cost: Low.
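For anyone wondering what RAG actually is under the hood, here’s a minimal sketch of the idea: before the model sees your question, retrieve your most relevant notes and prepend them to the prompt. Real systems use embeddings and a vector store rather than word overlap, and every name and note below is made up for illustration.

```python
import re

# Hypothetical internal notes the model could never have been trained on.
notes = [
    "Our incident tickets live in ServiceNow queue SEC-TRIAGE.",
    "Playbooks must log every action to the audit channel.",
    "The on-call rota rotates every Monday at 09:00.",
]

def words(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question, docs, top_k=1):
    # Rank documents by how many words they share with the question.
    q = words(question)
    ranked = sorted(docs, key=lambda d: len(q & words(d)), reverse=True)
    return ranked[:top_k]

def build_prompt(question):
    # Prepend the retrieved notes so the model can reference them.
    context = "\n".join(retrieve(question, notes))
    return f"Context:\n{context}\n\nQuestion: {question}"

print(build_prompt("Where do incident tickets live?"))
```

The point is that the model itself never changes - you’re just stuffing the right reference material into its limited context at the right moment, which is why the efficacy is only ever as good as your retrieval.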

↑ Back to Index

Be sceptical

It’s vital to be sceptical because:

  • Again, AI has no innate understanding of facts, and this flaw is only countered by optional, more-costly modes (read: often not enabled by default).

  • AI was designed to appear confident. It can be useful to think of it as someone who just wants to impress you.

  • Tech companies are heavily biased and incentivised to get you to use AI.

When using it yourself, double check the content it generates if the topic is important.

When viewing content produced by the average person, be aware that due diligence may simply not be part of the equation - more often than you’d think.

When content is produced by a baddie, remember that ethics are non-existent and the scale of their influence goes through the roof - there’s little cost and much to gain. That means that you can no longer rely on old adages like “the email is well-written and seems genuine, so it’s probably not phishing” or even how a product looks on a listing (see “AI Experts Break Down the Latest AI Scams”).

When considering a sales proposition from a vendor, be aware that the product is often nowhere near as capable as they make it out to be. For just one example, I went to a Microsoft AI Innovation Lab day where a flurry of representatives spent all morning telling us that the sky is the limit. So, in the workshop time, I showed them a Copilot agent I’d created - using the most advanced model, thinking mode, RAG with our best practices and template, and a comprehensive prompt - to generate Python code for our SOAR, but it simply didn’t work as instructed. Their collective response was that I’d have to dumb it down and hold its hand more.

↑ Back to Index

Save yourself

Simple facts of life:

  • Use it or lose it.

  • We learn through struggle.

  • Just because you can, doesn’t mean you should.

This is no different.

AI can do some genuinely surprising and useful things for us. But if you start to develop the habit of turning to it first, your critical thinking and skills will atrophy. I’ve noticed it in myself many times over the years, and I have to actively counteract it. We all do.

Use it to replace busywork or when you’ve run out of ideas. Keep your brain active.

And write the email / message yourself, for the love of God. Communication is a human skill, and I’m sure it’s not just me whose brain disengages when we sense that content wasn’t created by a person.

↑ Back to Index

Measure efficacy

I broadly subscribe to the idea that automated systems only need to be better at something than the average person. For example, driverless cars need to have fewer incidents than the average driver.

However, it’s crucial that you have a reliable, accurate way to identify whether this is actually the case. Otherwise you’re worse off, in all sorts of ways. Don’t just believe the hype or hope it’ll be fine.
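The comparison itself can be dead simple - the hard part is gathering honest numbers for both sides. Here’s a bare-bones sketch; the figures are entirely made up:

```python
# Reviewed outcomes sampled from both processes (hypothetical numbers).
human = {"errors": 14, "total": 200}  # baseline: the average person
ai = {"errors": 9, "total": 200}      # candidate: the AI-assisted flow

def error_rate(stats):
    return stats["errors"] / stats["total"]

print(f"human: {error_rate(human):.1%}, ai: {error_rate(ai):.1%}")
# prints "human: 7.0%, ai: 4.5%"
print("beats baseline" if error_rate(ai) < error_rate(human) else "keep the humans")
```

One caveat: with samples this small, a gap like that could easily be noise, so a proper comparison needs a big enough sample (and ideally a significance test) before anyone declares victory.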

↑ Back to Index

Clearly reference

Make it clear when content has been generated using AI, either in part or in full. There’s nothing inherently wrong with doing so, but people deserve to know that something may require extra scrutiny or care.

↑ Back to Index

Prioritise talent

Yes, AI may be able to do some entry-level / tier-1 jobs.

But speaking on behalf of people like me - who started their careers at the bottom and grew into very capable professionals who regularly deliver highly complex and valuable work - cutting those roles will be doing your organisation a disservice. Maybe not immediately, but definitely over the years.

It’s not all about the bottom line. We - as a society - need to stop thinking so much in the short term.

↑ Back to Index

Plan B

Over at least the past year, I’ve read posts like https://www.wheresyoured.at/wheres-the-money/; seen countless news articles on the eye-watering talent, training, and running costs which aren’t being (and perhaps can’t be) covered by paying users or by selling data or ads; watched videos like “Has Generative AI Already Peaked? - Computerphile” (on the first scientific research into diminishing returns and data scarcity); etc.

All of this has led me to the conclusion that one of the following things will happen over the next 2 or so years:

  1. There will be a breakthrough which will fix all of the problems with AI - the insanely high resource requirements, the hallucinations, etc.

  2. The investor money will run out, no one will want to or be able to pay for the actual run cost, AI usage will fall into a niche, and the Gartner Hype Cycle will continue to be true.

There has been truly incredible innovation (good and bad) thanks to AI services being free or cheap for anyone and everyone.

But it’s only been free or cheap because investors have been footing the (staggering) bill, and if there’s one thing certain about investors, it’s that they want a return. If that return doesn’t come sooner rather than later, the loss-cutting begins.

So, why am I saying all of this?

Because I’ve seen businesses completely rebuild their processes with AI as the foundation.

That would be bad enough given that the technology is fallible to say the least, but there’s no guarantee that it will even still be accessible in a few years’ time.

And it’s not just little old me saying this. I found it deeply validating when I read “A CIO’s Guide to Redesigning the SOC Analyst Job in the Age of AI” by Gartner - the go-to consulting firm in the tech industry - and it said:

AI is still evolving, and many capabilities and use cases are unproven. Before developing a dependence on AI, CIOs should build a way back to doing things manually in case the technology becomes unreliable or too expensive to scale.

We saw a very similar thing with the transition from on-premises to the cloud. Sometimes I feel like we never learn.

↑ Back to Index

The environment

Try not to use AI frivolously.

As with so many other things in our world, I’m sure most people don’t know the true cost of the products they use (almost certainly because it’s kept from them), so let’s be clear: usage of most AI - especially image and video generation, but definitely also text generation - is insanely resource intensive and environmentally damaging.

And it’s easy to forget, but let me just state the obvious: The resources it uses are not unlimited - the electricity, water, and specialised hardware that it uses means they’re not available for use elsewhere.

While this advice is getting less and less impactful with changes such as Google enabling AI Overview on all searches by default, I have to believe that it still matters and needs to be said. Especially when companies that claim to be “green” are relentlessly pushing AI adoption for no good reason and with seemingly no awareness of the cost.

↑ Back to Index

Still amazing

Everything above may seem overly negative, but sometimes when there’s an overly strong push for something, there needs to be a firm push back to balance things out.

I do genuinely think GenAI is incredible and fascinating, and I’m really interested to see where it goes. For example:

  • It holds incredible promise for science. Just read “AI cracks superbug problem in two days that took scientists years”.

  • It’s great for learning and training because it can adapt to your level. I’ve been using Codex for learning Swift in Xcode for iOS development, and the progress I’ve made in 2 weeks probably would have taken me 2 months otherwise, if not longer.

  • It’s great for diagnostics. Like a rubber duck with a brain.

  • Speak to it. While humans can struggle with understanding the messy, unstructured, back-and-forth verbal explanation of a problem, it turns out that AI is remarkably good at it.

↑ Back to Index

Summary

Whenever I have this conversation, the question I always end up asking again and again is simply “for what?”

The technology is genuinely impressive and the stuff of sci-fi. But what exactly are we getting in return for the costs?

The costs are:

  • Accelerated climate change (links above).

  • Scalable and automated:

    • Scamming (link above).

    • Cyber attacks (link)

    • Disinformation / propaganda, and in a time where we need peace (link).

  • Not being able to trust our eyes or ears for the first time in history.

  • Accelerated wealth inequality (link).

  • Generation of sexual content without consent (link).

  • Accelerated language extinction (link).

  • Job losses (link).

  • Skyrocketing computer hardware costs (link).

  • Possibly the destruction of the human race. No, seriously. The “alignment” problem is incredibly difficult and unintuitive, and we’re going full steam ahead anyway. See “AI Will Try to Cheat & Escape (aka Rob Miles was Right!) - Computerphile” by PhD AI safety researchers, one of whom (Robert Miles) advises the UK government.

The benefits that I can think of are, in rough order:

  • Improved scientific research.

  • Faster learning.

  • Easier and faster web searching.

  • Amusing content.

  • Improved video gaming.

So, is it worth it? You tell me. Time will tell.

Anyway, I hope this is useful to someone.
