Sarah Turnbull

Is ChatGPT all it's cracked up to be?



AI writing tools have been around for a few years, but ChatGPT has recently exploded onto the market and is gaining huge amounts of attention. It’s being touted as one of the most advanced language models in the world.

But is ChatGPT all it's cracked up to be? In this blog post, I'll take a closer look at the capabilities and limitations of ChatGPT to see if it lives up to the hype.


So, is it any good?

In a word, yes. It’s good, and no doubt it’ll get better.


ChatGPT (other AI tools are available!) is a conversational language model developed by OpenAI. It's based on something called Generative Pre-trained Transformer architecture – that’s where the GPT comes in – and is fine-tuned on a huge dataset of human conversations and available content.

The goal is to generate 'human-like' responses to text-based questions and prompts, which makes ChatGPT ideal for tasks such as chatbots, question answering and text generation.


It’s capable of generating logical, understandable and realistic responses to a wide range of questions and prompts. It’s got vast knowledge and it can handle a variety of topics with ease.


In addition to responding to prompts, it can also create text on its own, producing everything from short phrases to full-length articles. And this is where writers are getting nervous… because its ability to generate coherent and grammatically correct text makes it a useful tool for tasks like content creation and copywriting. Uh oh.


So, what are the limitations?


While it’s incredibly powerful, it's not perfect. Like all AI models, it's only as good as the data it's trained on and it can still make mistakes and generate ridiculous responses.


It lacks common sense. It hasn't been trained to understand the world around us in the same way that we understand it. So it can sometimes generate responses that are inconsistent with reality. It also lacks an understanding of nuance, humour and subtlety!


It doesn't have feelings - and this one for me is the biggie. AI can produce words, and lots of them, and pretty well - but without feeling it's never really going to resonate with the target audience.


It also has a potential for bias. Since it’s trained on data from the internet, it may reflect the bias present in that data. For example, if the data contains gender stereotypes, ChatGPT may generate responses that reinforce those stereotypes. Not good.


So, for me, it’s a tool and should be used as one. It’s amazing for research, for generating ideas and answering questions and prompts. No doubt about it. And, as I say, it’ll get better.


But it is a tool that’s generating its responses from data that’s already out there. Content that’s already been written by someone else.

And you could argue that very little totally original content is being generated anyway. But human attributes like nuance, humour, common sense and feeling are what set copy and content written by humans apart from copy produced by apps and machines.

What does it mean for businesses and writers?


There’s no doubt that some businesses are going to use AI to generate their content. But I fear this is content for content’s sake. Because this content is a mishmash (I don’t think you’d get AI using that terminology, but who knows?) of what’s already out there, it’s not original and not ‘human-first.’


If you’ve read my post on Google’s Helpful Content Update from September ’22, you’ll know that Google will penalise machine-generated/AI content. It’s considered spam, and their webmaster team is authorised to act on it and move content like this down the rankings – or even shadow-ban it.




They want human-generated content that focuses on delivering quality info that’s relevant and answers their searchers’ questions.

So, I think AI content should be used wisely – maybe as a ‘base’ for content generation, then humanised by a real person tapping on a keyboard. That way you get the tone and the feeling right, not just the content.


I've recently been contacted by a client who's said openly that they're considering AI-generated content and would like to chat about me reviewing and improving what AI produces. I think this is a pretty good idea. Content is king, there's masses of it being produced – and if it's produced using AI, it should be checked and improved upon.


There are businesses that won't, though. They'll just have ChatGPT produce it and whang it on their website, which I think is a bit of a mistake.


I really rate ChatGPT, but do think it needs to be managed!


And what about websites?


I don’t think that AI understands – yet – what’s needed for great websites, or what it is that makes people take action (tip – rhymes with ‘schmenefits’). But it’s the same argument here: ChatGPT has a huge database of information at its disposal, but the ‘copy’ it generates is based on stuff that’s already out there. Not unique, possibly machine-generated… Google considers it spam. See ya!


In summary, yes, ChatGPT is a brilliant tool and has the ability to generate human-like responses to a wide range of questions and prompts.


Its ability to produce understandable and grammatically correct text makes it an incredibly powerful and useful tool for lots of different things.


It’s not perfect (what is?)... It has limitations, and they must be taken into consideration. As with any AI tool, it's important to use ChatGPT responsibly and ethically, playing to its strengths and keeping its weaknesses in mind.


It's good. I've said that - and mean it. Just proceed with a little caution. Know what it can do and build on that. That's a winner!




 
