AI Text Generation Block

The AI block connects to ChatGPT and executes your AI generation request.

As with all other modules, the AI module can have variables inserted into the prompt; these are filled in before ChatGPT is called.
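
To make the substitution idea concrete, here is a rough Python sketch of how {{placeholder}} filling might work before the prompt is sent off (the fill_prompt helper is purely illustrative, not StudioPret's actual code):

import re

def fill_prompt(template: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with its value (illustrative only).
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = fill_prompt(
    "Conjugate the French verb {{verb}} in the future tense.",
    {"verb": "manger"},
)
print(prompt)  # Conjugate the French verb manger in the future tense.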

Options

Prompt – The request you’ll send to ChatGPT. E.g.: Conjugate the French verb {{verb}} in the future tense.

Model – Specify the model you want to use. GPT-4o is the latest and greatest as of 2024, but pricier. GPT-3.5 Turbo is still very good and a lot cheaper. GPT-3.5 Turbo Instruct gives you much more concise answers without the chatbot niceties.

My default is to use the Instruct model when tasks are simple and don’t need a large context.

To see what each model will cost you, see this page on OpenAI pricing.

Alternatively, you can test each prompt a few times and the sample output will give you an idea of how much each run will cost you.
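
If you want to check costs yourself outside StudioPret, here is a rough Python sketch using the openai library directly; it runs one prompt and reports the token counts, with the per-token prices deliberately left as placeholders for you to fill in from the pricing page:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.chat.completions.create(
    model="gpt-4o",  # or "gpt-3.5-turbo" for a cheaper run
    max_tokens=200,  # optional cap on output length (see Maximum Tokens below)
    messages=[{"role": "user", "content": "Conjugate the French verb manger in the future tense."}],
)

usage = response.usage
PRICE_PER_INPUT_TOKEN = 0.0   # placeholder -- copy the current rate from OpenAI's pricing page
PRICE_PER_OUTPUT_TOKEN = 0.0  # placeholder
cost = usage.prompt_tokens * PRICE_PER_INPUT_TOKEN + usage.completion_tokens * PRICE_PER_OUTPUT_TOKEN

print(response.choices[0].message.content)
print(f"{usage.total_tokens} tokens used, estimated cost {cost:.6f} USD")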

Stop Project if Module Fails – Set this if you want the project to stop completely, for example if something goes wrong with ChatGPT. If not set, the article will be skipped and the next job will continue as planned.
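
StudioPret handles this internally, but conceptually the setting behaves something like this sketch (all names here are invented for illustration, not StudioPret's actual code):

def run_project(articles, modules, stop_on_failure: bool):
    # Illustrative control flow only.
    for article in articles:
        try:
            for module in modules:
                module.run(article)
        except Exception as err:
            if stop_on_failure:
                raise  # Stop Project if Module Fails: abort the whole project
            print(f"Skipping article {article!r}: {err}")  # otherwise skip this article and carry on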

Do Not Output – Set this if you DON’T want the system to output the generated text. If checked, you will need to include the output (using its shortcode) in another module somewhere, otherwise it won’t be used!

Red Flag Text – Use with caution! If this text is found in the output, StudioPret will skip generating this article. This is only useful if no one is auditing the output (we highly advise you have someone check it over, at least at the start!).

An example of a red flag might be ChatGPT replying “I don’t know what that is”. However, ChatGPT is fickle and may use slight variations on these “I dunno” phrases, so you can try the following:

Include in your prompt something like: “If you don’t know the answer, please just say ‘I don’t know’.”

That way you will actually get the red flag you expect!
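
Under the hood a red-flag check only needs to be a simple substring test; here is an illustrative sketch in Python (not StudioPret's actual implementation):

def should_skip(generated_text: str, red_flag: str = "I don't know") -> bool:
    # Skip the article if the red-flag text appears anywhere in the output (illustrative).
    return red_flag.lower() in generated_text.lower()

print(should_skip("I don't know the answer to that."))         # True  -> article is skipped
print(should_skip("Je mangerai, tu mangeras, il mangera..."))  # False -> article is kept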

Advanced Options

Cut off ChatGPT extra text: ChatGPT sure likes to talk a lot, so this option tries to erase those chatty giveaways. The goal is to be invisible to the search engines, and a phrase like “My AI brain found the answer” is an easy clue. I recommend you leave this option checked.
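
As a rough idea of what this kind of clean-up looks like, here is an illustrative sketch; the list of chatty openers is made up for the example, not the module's real filter list:

import re

CHATTY_OPENERS = [
    r"^(sure|certainly|of course)[,!.]?\s*",
    r"^as an ai( language model)?[^.]*\.\s*",
]

def strip_chatty_preamble(text: str) -> str:
    # Remove common ChatGPT lead-in phrases from the start of the output (illustrative).
    for pattern in CHATTY_OPENERS:
        text = re.sub(pattern, "", text, flags=re.IGNORECASE)
    return text.strip()

print(strip_chatty_preamble("Sure! Je mangerai, tu mangeras, il mangera."))
# Je mangerai, tu mangeras, il mangera.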

Maximum Tokens: This limits the output length. It’s usually not necessary, but it can be helpful if you want to keep answers short.

Temperature: AI is simply a fancy next-word predictor. Adjusting temperature down gives it less freedom when selecting the next word, and adjusting it up gives it more freedom.

Low temperature example:

“I go” is pretty much always followed by “to”

High temperature example:

“I go” may be followed by all kinds of options like “for”, “with”, “up”, etc.

I recommend you leave this at 0.8 (default).
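
If you want to see the mechanics, here is a tiny self-contained Python sketch with made-up next-word scores (not real model output) showing how temperature reshapes the probabilities:

import math

logits = {"to": 5.0, "for": 2.0, "with": 1.5, "up": 1.0}  # toy scores for the words after "I go"

def softmax_with_temperature(scores: dict, temperature: float) -> dict:
    # Lower temperature sharpens the distribution; higher temperature flattens it.
    scaled = {word: score / temperature for word, score in scores.items()}
    total = sum(math.exp(v) for v in scaled.values())
    return {word: math.exp(v) / total for word, v in scaled.items()}

for t in (0.2, 0.8, 1.5):
    probs = softmax_with_temperature(logits, t)
    print(t, {word: round(p, 3) for word, p in probs.items()})
# At 0.2 nearly all the probability lands on "to"; at 1.5 the other words get a real chance.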

