Use AI; don’t be used by it
Manuel says: “Use AI and don’t be used by it.”
What’s an example of being used by AI?
“There is a lot of stress around the use of AI to generate content, meaning text generation. For SEO, that initially meant a lot of people thought you could use AI to just generate magic content that would solve all your issues and suddenly make your site rank higher for whatever you are trying to achieve.
It’s not like that because it’s also quite risky. You need to make good use of AI by understanding the tools, the benefits, the limitations, and the risks, and then embed it as part of a process.
You need to build the right ecosystem, and you need human validation to ensure the content aligns with the brand’s voice, has been fact-checked, and is current when you create it.”
What kind of AI should SEOs be using?
“Again, it’s about using AI as part of a process.
You can train language models to understand your brand. For example, you can train the model to know the brand voice, your competitors, and the business’ goals, and then create something that aligns with what you’re trying to achieve.
It’s usually time-consuming for an SEO: you spend a lot of time doing the analysis upfront, where you train the model, but then you can start to generate content at scale. You’ve trained the model once, and then it’s there. It knows the business already, so you can create content at scale, and with consistency.”
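“Training” here means giving the model persistent context rather than fine-tuning it. As a minimal sketch of the idea, assuming the OpenAI Python SDK and a placeholder brand brief (none of this is Manuel’s actual setup), a reusable system prompt can carry the brand voice, competitors, and goals into every generation call:

```python
# Minimal sketch: a reusable "brand brief" passed as the system prompt.
# Brand details, goals, and the model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BRAND_BRIEF = """You write for ExampleBrand (placeholder).
Tone of voice: friendly, expert, no jargon.
Main competitors: CompetitorA, CompetitorB.
Business goal: grow organic traffic for mid-funnel informational queries."""

def generate(task: str) -> str:
    """Run any content task with the stored brand context prepended."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": BRAND_BRIEF},  # "trained" once, reused every time
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

print(generate("Draft three blog title ideas about sustainable packaging."))
```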
Do you have a preferred model that you’re using at the moment?
“I’m just testing all the models. ChatGPT is key, of course. I started testing GPT-3.5 back in the day, and I’ve been testing all the different models, such as Gemini 1.5, for a year now.
I would say that GPT-4 is the best model that I’m using nowadays, in terms of language adaptability, currency, translations, and also human fluency when you generate content. You do all of your analysis upfront, you provide all the inputs to the model, and then you get the output. For this, OpenAI’s latest ChatGPT model, GPT-4, is the best one so far.”
Do you do a lot of work outside of ChatGPT initially, to make sure it provides what you need?
“I’ve been working on a project for brightonSEO. Essentially, I built an entire site using ChatGPT, not only for text generation but also for images, videos, etc.
I’ve been doing the nitty-gritty of the analysis upfront, in terms of keyword research, keyword tagging, and understanding user intent and what the user was searching for. Then, I input everything via the prompts.
I’ve been instructing ChatGPT through multiple prompts, not just one. The first prompt provides information about the tone of voice and business goals, then you provide the keyword research, ask it to produce a content outline or a draft, and then expand on each section. Nowadays, I’m automating all of that.
I work at Publicis now, and we are creating tools that allow us to provide all the contextual information upfront in a way that will be stored. We train the model, and the model knows about your business goals, tone of voice, etc., so you don’t need to retrain it.
Each time you use it, you can then give it all the keywords, links, FAQs, and specific inputs you require to adapt it for the specific output that you require. It essentially gives you a user interface.”
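The tooling Manuel describes at Publicis is proprietary, but the idea of storing the contextual setup once and supplying only the job-specific inputs each time can be sketched roughly as follows. The file name, fields, and prompt wording are illustrative assumptions, not the actual tool:

```python
# Illustrative only: the brand context is saved once and reloaded for every job,
# then merged with per-job SEO inputs (keywords, links, FAQs).
import json
from openai import OpenAI

client = OpenAI()

with open("brand_context.json") as f:  # e.g. {"brief": "...", "tone": "...", "goals": "..."}
    context = json.load(f)

SYSTEM_TEXT = "\n".join(f"{key}: {value}" for key, value in context.items())

def briefed_draft(keywords: list[str], links: list[str], faqs: list[str]) -> str:
    """Generate a draft from the stored brand context plus job-specific inputs."""
    job_inputs = (
        f"Target keywords: {', '.join(keywords)}\n"
        f"Internal links to include: {', '.join(links)}\n"
        f"FAQs to answer: {'; '.join(faqs)}\n"
        "Produce a content outline first, then a full draft."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_TEXT},  # stored once, reused
            {"role": "user", "content": job_inputs},      # varies per piece of content
        ],
    )
    return response.choices[0].message.content
```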
Are you still having challenges with hallucinations?
“Yes, a lot. There are also specific words that keep recurring: words that are always there but don’t really make sense in context. That’s why it’s key that you set up your process and allocate time for humans to review the output.
In terms of fact-checking, you may have hallucinations. Sometimes, ChatGPT generates examples or ‘facts’ that don’t make sense for what you’re trying to produce.
You need to be careful. You need to know the limitations of the model and the risks. One is hallucinations and another is security. You may have some specific niches where security is paramount. If you work in a medical or legal business, for example, then you must be sure you aren’t sharing any secure information.
An article came out about a year and a half ago about JP Morgan banning the use of ChatGPT for internal employees. There was a real concern about feeding all their information to OpenAI. When you’re feeding and training the model, that information could potentially be shared and stored who knows where.
We are solving this problem by using a sandbox. We are building private environments where the information stays. We don’t use the public OpenAI service; we set up a private environment, and the information that’s shared stays there. The model can still learn, but it’s more secure.”
Can that be done using ChatGPT, and can anyone set this up?
“Yes. We have a partnership with OpenAI, so they can set up a private API for us. Anyone can set up a private environment, though, like a sandbox where you share the information, and it stays there. The model can still learn, based on what you share – you can train the model – but that information is not shared publicly.
Also, they say that GPT-4 is more secure: less information is shared openly, and sensitive information is not shared publicly even when you train the model. That’s what they say but, when you work with big firms and big businesses, you have to be especially careful.
If you have a partnership with OpenAI, you can discuss this with them and ask them to set it up for you. However, there is also guidance online on how to build a private instance of ChatGPT. If you are on the Plus plan and pay to use ChatGPT, then you can create a private instance of it.
However, if you have a partnership, it’s on a bigger scale. You can have a private API and build a bigger environment. That is for their very big clients.”
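Manuel’s private API is a partnership-level arrangement, but a more widely accessible analogue of the same idea is a dedicated Azure OpenAI deployment, where prompts and completions stay within your own tenant. A minimal sketch, assuming such a deployment already exists; the endpoint, API version, and deployment name below are placeholders:

```python
# Rough analogue of a "private environment": calling GPT-4 through your own
# Azure OpenAI deployment instead of the public endpoint.
# The endpoint, API version, and deployment name are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://your-private-resource.openai.azure.com",
)

response = client.chat.completions.create(
    model="your-gpt4-deployment",  # your deployment name, not a public model ID
    messages=[{"role": "user", "content": "Summarise these internal SEO briefing notes: ..."}],
)
print(response.choices[0].message.content)
```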
What’s best practice for being as efficient as possible with the use of ChatGPT?
“It is always best practice to have manual checks. However, the optimal approach is to spend more time at the beginning refining the process. Spend more time with the client initially to understand the requirements and how you need to train the model, then train the model, then start developing the tool or the prompt. A lot of time should be spent on prompt engineering because you need to understand how to perfect the prompt beforehand.”
What do effective prompts look like?
“It’s never one prompt. Usually, you get a bad output when you use just one long prompt.
You’re better off using a series of shorter prompts. That also cuts down on the number of tokens that you use. Shorter prompts mean less cost because OpenAI charges you based on tokens, which roughly correspond to the words and word fragments in your prompt.
The shorter the prompt, the better. It’s cheaper, more efficient, and more consistent in terms of output. If you use a single long prompt, then the output is often inconsistent, it’s usually not very accurate, and it can have a lot of hallucinations.
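As a quick sanity check on that cost point, the tiktoken library can count the tokens a prompt will consume before you send it. The prompt text and the per-token price below are placeholders; check current pricing:

```python
# Count the tokens a prompt will consume before sending it.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")
prompt = "Act as an SEO specialist. Write a 100-word meta description for our packaging guide."
num_tokens = len(enc.encode(prompt))

price_per_1k_input_tokens = 0.03  # placeholder figure, not current pricing
print(f"{num_tokens} tokens, ~${num_tokens / 1000 * price_per_1k_input_tokens:.4f} input cost")
```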
If you split the process into multiple prompts, you can use the first 2 or 3 prompts to train the model. You can specify the point of view or the voice you want in the output, then you can specify business goals, targets, and the role of the model. For example, I usually start a prompt with, ‘Act as an SEO specialist’ or ‘Act as a copywriter.’
Then, after you’ve trained the model with the first 2 or 3 prompts, you can also see the output in the log. If you do it manually, you have the reply from ChatGPT so you can see, straight away, whether you are going in the right direction. If you automate it, though, it’s key to log the answers automatically from ChatGPT, so that you can review whether the prompt is going in the right direction.
After you do that, then you give it the specific information. For SEO, that would be your keywords, FAQs, intent, relevant pages, the links you want to add, etc. If you do all of this in one prompt, it would be a mess, and it wouldn’t be consistent.
If you build a process as efficiently as possible, and the prompts are perfectly engineered, then you will improve the cost-efficiency and the consistency of the output. It’s not magic, though, so you still have to review it.”
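A minimal sketch of that split-prompt flow, assuming the OpenAI Python SDK: a couple of short setup turns first, every reply logged so the direction can be reviewed, and the job-specific inputs only at the end. The prompt wording, keywords, and links are illustrative:

```python
# Illustrative split-prompt flow: short setup prompts first, each reply logged,
# then the job-specific SEO inputs. Prompt wording and model name are placeholders.
import json
from openai import OpenAI

client = OpenAI()
log = []       # keep every reply so the direction can be reviewed later
messages = []  # running conversation history

def ask(prompt: str) -> str:
    """Send one short prompt, keep it in the history, and log the reply."""
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    log.append({"prompt": prompt, "answer": answer})
    return answer

# 1) Role and voice
ask("Act as an SEO specialist and copywriter. Confirm you understand the role.")
# 2) Business goals and targets
ask("The business goal is to grow organic traffic for mid-funnel queries. Confirm.")
# 3) Job-specific inputs
draft = ask("Target keywords: eco packaging, sustainable boxes. "
            "Include a link to /guides/packaging and answer the FAQ 'Is it recyclable?'. "
            "Produce a content outline, then a 600-word draft.")

with open("prompt_log.json", "w") as f:
    json.dump(log, f, indent=2)  # review whether each step went in the right direction
```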
What kind of output are you looking for?
“In the input, when we pass information to the model, we specify the type of output we want.
If you want to generate content for Facebook, you don’t want something extensive. You might want 100 words or fewer, for example. If you want to write an article, a recipe, or a product description, the word count is key.
You also need to specify the language for the audience. If the language is English but the audience is Australian rather than British or American, the output needs to change.
If you use the same prompt every time, or an old model, you won’t have consistency. You might see hallucinations or mistakes. Even if you specify that you want 1,000 words in the output, it might go lower or higher. It won’t necessarily follow your prompt. If you split up those prompts and use the latest model, then it’s way more accurate.”
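Those output constraints (platform, length, audience variant of English) can be stated explicitly rather than implied. A small illustrative template, with example values, might look like this:

```python
# Illustrative template that makes the output constraints explicit:
# platform, word limit, and audience variant of English. Values are examples.
OUTPUT_SPEC = (
    "Output type: {platform} post\n"
    "Maximum length: {max_words} words\n"
    "Language: English ({audience} audience; use local spelling and idiom)\n"
    "Topic: {topic}\n"
)

prompt = OUTPUT_SPEC.format(
    platform="Facebook",
    max_words=100,
    audience="Australian",
    topic="our new recycled-cardboard packaging range",
)
print(prompt)
```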
What do you tend to be looking for in the review process?
“For SEO output and content that’s generated for specific platforms like social media, you should be reviewing the quality of the content – whether it’s human-readable and SEO-optimized. You should also be fact-checking whether it’s accurate or suffering from hallucinations, and checking whether it’s well written or has a lot of repetition.
Then you’re checking whether it’s including what you specified. Is the word count correct? Did it include links? Then, based on those different elements, you’re giving it a score.
While you develop the tool, you plan reviews in order to engineer the right prompts. Once you reach a good point, you can proceed with the full development of the tool. Most likely, the tool will keep providing good output going forward, but there still needs to be human review. Using the tool should always include feeding the human edits back to the model, so the model keeps learning going forward.”
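Some of those checks can be automated as a first pass before the human review. A sketch with placeholder rules (the word-count tolerance, required links, and recurring filler words are all examples):

```python
# Sketch of automated first-pass checks to run before the human review:
# word count, required links, and recurring filler words. Rules are examples.
def review(text: str, target_words: int, required_links: list[str],
           banned_phrases: list[str]) -> dict:
    words = len(text.split())
    checks = {
        "word_count_ok": abs(words - target_words) <= 0.1 * target_words,  # within 10%
        "links_included": all(link in text for link in required_links),
        "no_recurring_filler": not any(p.lower() in text.lower() for p in banned_phrases),
    }
    score = sum(checks.values()) / len(checks)
    return {**checks, "score": round(score, 2)}

print(review(
    text="... generated article text ...",
    target_words=1000,
    required_links=["/guides/packaging"],
    banned_phrases=["delve", "in today's fast-paced world"],
))
```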
If an SEO is struggling for time, what should they stop doing right now so they can spend more time doing what you suggest in 2025?
“SEOs tend to spend a lot of time trying to understand how to build their own tools. Now, though, the technology is evolving fast. Every week, you will find more and more tools out there.
Instead of trying to build your own tool, just look at what’s out there. Be aware of what problem you’re trying to solve and see if there is a solution available, then just use the tools that already exist.”
Manuel Madeddu is Account Director at Publicis, and you can find him over at ZeroDueDesign.co.uk.