ProductGen AI Recommended Practices

When adjusting the system and instruction text of your prompts, test the changes against a batch of test products to confirm the output matches your expectations. Send a portion of these results to your content approvers to validate the output before automating content generation within workflows or bulk operations.
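As a rough illustration, the batch test and approver sampling described above might look like the following Python sketch. The `generate_content` helper, the test product list, and the sample rate are hypothetical placeholders for this sketch, not part of ProductGen AI.

```python
import random

SAMPLE_RATE = 0.2  # portion of results sent to content approvers for validation


def run_batch_test(test_products, generate_content):
    """Run the adjusted prompt against a batch of test products.

    `generate_content` stands in for whatever call produces content for a
    single product; it is a hypothetical placeholder for this sketch.
    """
    results = [
        {"product": product, "output": generate_content(product)}
        for product in test_products
    ]

    # Send a random portion of the results to approvers before automating
    # the prompt in workflows or bulk operations.
    if not results:
        return [], []
    sample_size = max(1, int(len(results) * SAMPLE_RATE))
    for_review = random.sample(results, sample_size)
    return results, for_review
```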

Where applicable, use a text generation prompt to guide the LLM to return a confidence score based on the rules the content should or should not follow. This confidence score can be used as a metric within sufficiency scores to route insufficient content to manual review workflows. See below for example instructions on how to generate a confidence score in a retail environment.

Example prompt

For product ${NAME} with product data ${PROMPT_DATA}

Determine whether the quality of this product data is good enough to sell in the North American clothing retail market. The values should not contain poorly written or low-quality content. Avoid typos, grammatical mistakes, or phrasing that can be misunderstood. Avoid overly generic filler statements such as "Great for all occasions". Avoid terminology specific to factory specs, such as "yarn #34 blend", so that the text is more refined and readable for an upscale audience.

Avoid content that is off-brand or attribute descriptions that contradict each other. Avoid unverified sustainability claims or medical claims that are not relevant to fashion retail. Based on adherence to the above requirements, generate a confidence score for the value. The score should be between 0 and 100 percent. If the confidence score is 80 percent or higher, output the value unchanged.

If the score is below 80 percent, provide the score along with the reasons for it and no more than five (5) suggestions for how to correct the data.
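The following is a minimal Python sketch of how a confidence score returned by the LLM could be used to route low-confidence content to manual review, as described above. It assumes the prompt is extended to return a JSON object with confidence, value, reasons, and suggestions fields; that response shape, the function name, and the returned status labels are illustrative assumptions, not part of ProductGen AI's API.

```python
import json

CONFIDENCE_THRESHOLD = 80  # matches the 80 percent cutoff in the example prompt


def route_generated_content(llm_response: str) -> dict:
    """Parse the LLM response and decide whether the content is sufficient.

    Assumes the response is JSON such as:
    {"confidence": 72, "value": "...", "reasons": [...], "suggestions": [...]}
    """
    result = json.loads(llm_response)
    confidence = result.get("confidence", 0)

    if confidence >= CONFIDENCE_THRESHOLD:
        # Sufficient: keep the generated value and continue the workflow.
        return {"status": "approved", "value": result["value"]}

    # Insufficient: route to a manual review queue with the LLM's feedback.
    return {
        "status": "needs_review",
        "confidence": confidence,
        "reasons": result.get("reasons", []),
        "suggestions": result.get("suggestions", [])[:5],  # prompt caps suggestions at five
    }
```

In practice, the confidence value could feed into an existing sufficiency score, with the needs-review outcome triggering the manual review workflow mentioned earlier.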