One of the hottest topics in tech right now is generative AI. Depending on who you ask, some will say it’s the answer to all their copy and imagery needs. Others are skeptical and believe it’s a fad that will pass within a few months.
As experts in AI data and advocates for AI advances that improve how we work and live, our Marketing team of creators decided to test the hype for ourselves and see whether generative AI is as impressive as the masses claim it to be. Our Head of Content, Senior Brand Designer, Senior Manager of Global PR, Product Manager, and Copywriter spent several weeks in January testing generative AI programs. We’re sharing their experiences: which programs they tried, what they asked the programs to do, and the results of their experiments.
Product Manager
Our Product Manager tested ChatGPT and DALL-E, both created by OpenAI. The requests were both work-related and personal.
- Suggest Christmas gifts
- Create KPIs for a role she hadn’t worked in before
- Rewrite technical jargon so investors can understand it
- Provide a starting framework for various projects
- Create product names based on a provided description
- Suggest PowerPoint layouts
- Create icons
- Generate images that demonstrate Appen’s services to clients while keeping all current projects confidential
Our Product Manager said ChatGPT was great and gave it an 8.5/10. It was best at reducing the time needed to complete daily tasks; however, she noted she was unable to use the outputs as final deliverables. The platform’s suggestions are a great starting point she can then revise into a final document; she likened them to “sticky notes on a board for ideas.” Our PM also noted that ChatGPT is still relatively new and has a lot to learn: when asked for Christmas gift ideas, its recommendations were predictable rather than unique or creative.
The experience with DALL-E was less successful for our PM, earning a 2/10. The program had a difficult time understanding requests, such as the keyword “realistic” in a prompt. She asked DALL-E to create two side-by-side images of a realistic human, one wearing glasses and one not; both generated images showed the same human wearing glasses. When the word “realistic” was removed, DALL-E completed the request correctly, except the human was a cartoon. DALL-E also fell short on PowerPoint layout suggestions, generating outputs that were very generic or impossible to use.
Senior Manager, Global PR
Due to all the media buzz around ChatGPT, our Senior Manager, Global PR decided to test the platform out to see how it could support her at work.
- Summarizing a video script into an internal communication piece
- Writing a job description to post on social media
The outcome was positive for our PR Manager, earning ChatGPT a 7/10. She summed up her experience by noting the program delivered within the requested word count, and only minor edits were needed before the generated text could be used. For the job description, she “provided vague inputs to write a PR position job posting and was surprised at how good the result was.” The final piece was then refined to match her personal voice and tone.
Head of Content
ChatGPT was also the program our Head of Content tested. As a budget-conscious department head, she liked that it was a free tool, which created a risk-free environment. All requests were work-related.
- Write social media posts
- Edit a blog post
- Generate SEO-friendly headlines
- Write target audience descriptions
“I absolutely love this tool. As a content strategist and creator, ChatGPT saved me hours of research to build out a content strategy.” Our Head of Content put ChatGPT to the test by building an outline of a content plan and asking it to fill in the needed content. The program even helped generate SEO-friendly headlines. The content was factual and easy to read, but final deliverables still required human input: she edited for our brand’s tone and restructured the blogs and social media posts.
“From a content perspective, this tool is a dream.” ChatGPT earned high praise and a 7/10 (it got her 70% of the way there) from our Head of Content, who said the experience was “like having my own content coordinator to draft work for me.” While thoroughly impressed with the program, she doesn’t see the tool taking over writers’ jobs. If anything, it showed it’s a great platform for creating a first draft, but a human will always need to add the finishing touches, adjust for the style and tone they need, and contribute the intrinsic qualities only humans possess.
Senior Brand Designer
Our Senior Brand Designer tried out ChatGPT and Midjourney. For this test, he focused on programs capable of generating visual content. The experiment was to design an ice cream website.
- Create a visual layout for an ice cream website
- Write headlines
- Write subheads
- Write calls to action
- Create the general structure for the website
Our designer chose Midjourney because he knew it could produce concepts in the desired aesthetic. It took several iterations of edits and description changes to create a web layout he was happy with. One downside: the images of ice cream Midjourney generated looked unmistakably AI-created (which, of course, they were) when the goal was realism. This earned Midjourney a 7/10 from our designer.
“I was extremely impressed with the amount of time saved in the development and planning stages of this project. The mockup website looked very professional. Better images or photography would still be needed to showcase the product being sold.” Of the two programs, our designer preferred ChatGPT, also giving it a 7/10. “It was as if I had my own personal Marketing Copywriter.” Part of the experiment’s success came from our designer leveraging his own skill set to get the best possible results from the programs. “Although these platforms helped generate quick layouts and definitely saved time, it still requires someone with basic skills to know and understand what needs to be asked in order to obtain the best results.”
Copywriter
Our Copywriter had her eye on Jasper but ended up using ChatGPT and DALL-E, as they were free to try. Requests were both work-related and for fun.
- Writing blogs
- Writing headlines
- Creating social media posts
- Writing generic Valentine’s-themed cards
- Generating Valentine’s-themed images
Our Copywriter used both programs in an experiment to see whether AI could express emotion. She focused on cards, asking ChatGPT to create verbiage for a Valentine’s Day card; an article with the results will be available next month. She also had ChatGPT write a generic card, one in the voice of Appen, one in the voice of Mr. T, and one in her own voice.
While using the program, an interesting error occurred. After asking it to write a card in the voice of Appen, she asked it to write a card in her own voice; it incorrectly assumed the Copywriter was the CEO and started the card with “as the CEO of Appen”. She then refreshed the program and asked again, and this time it wrote something she thought was similar to her actual voice. Note that this won’t work for everyone: it works better for public figures, and the program may deny the request for individuals it has no data on. ChatGPT also matched the style of Mr. T perfectly, starting with, “I pity the fool.”
Testing DALL-E was a bit of a challenge for our Copywriter, who gave the program a 4/10. She asked it to create an image of lipstick on top of a chocolate cake and, while the program could generate lipstick, it could not generate chocolate cake in the same image. It could, however, pair lipstick with strawberries and other fruit. She rephrased the original request several ways without success.
“I prefer ChatGPT and DALL-E for fun. For work, I’d rather create the entire piece of content myself. I found editing the content ChatGPT wrote took me just as long, if not longer, than writing it from scratch.” Our Copywriter admits she had a hard time getting the information she wanted but still gives ChatGPT a 7/10. One clear example came when she and the Head of Content asked ChatGPT nearly the same question and got very different answers.
- Head of Content: “can you write me a LinkedIn post based on this copy”
- Copywriter: “can you write a LinkedIn post on this”
The difference was minor, but minor enough for ChatGPT to deny our Copywriter’s request while complying with our Head of Content’s. What puzzled our Copywriter was that she had asked the exact same question for Twitter, and ChatGPT had complied then. It shows the program still needs some training.
ChatGPT is a Clear Winner
Our Marketing team enjoyed testing generative AI programs and was especially glad it gave ChatGPT a chance. The tool aided each team member in various aspects of their job and provided valuable time savings, allowing them to get more done each day. It was also a great platform for brainstorming and for inspiration when they hit writer’s block.
Using the program takes some getting used to, and all agreed you need to be careful with how you word requests to get the results you’re looking for. It’s also imperative to edit the generated copy, not just to adjust for a preferred style or tone, but because ChatGPT can occasionally make mistakes: it’s a young program that’s still learning. As the model is fed more training data and the program’s capabilities expand, what it can accomplish will evolve.
We’re eager to see if the trend of using generative AI will be just as popular by the end of the year. Will businesses incorporate it into their everyday work? Will schools continue to ban it in classrooms? Only time will tell.
We believe the future is bright for generative AI.