Customers decry wonky Wonka experience

Easy access to AI image generators has made it almost trivially simple to produce lifelike renditions of scenes that have never existed and never will. I’ve seen dreamy “libraries” and even entire “houses” rendered by AI tools like Midjourney that were – at first glance – convincingly real. All you need is a prompt; the algorithm does the rest. Recently, parents and their children were drawn in by such imagery to what was billed as a “Willy Wonka experience” in Glasgow. The promotional materials, created using AI, depicted a wondrous event: a magical landscape in a kaleidoscope of colors, reminiscent of a high-budget film set.

However, when the families arrived, they found a mostly bare warehouse with cheap plastic props and a small bouncy house. Social media posts contrasting what was promised with what was delivered made the gap painfully clear. It was so bad that someone called the police! The event organizers ended up apologizing and refunding ticket prices, but that does not completely make up for the disappointment those children felt (ironically, a fairly accurate parallel to the original story). I suppose the moral of this tale is that you can’t believe everything you see in promotional materials, especially now that entire scenes can be conjured in seconds.

To show how easy it is to make this kind of promo, I entered the prompt “Willy Wonka experience chocolate river Oompa Loompas” into a free AI image generator, and the image in this post is what I received a few seconds later. Like many AI-generated images, it isn’t half bad if you don’t look too closely – a whimsical scene that evokes the fantasy of the original book and movie. Upon closer inspection, however, the details are unrealistic, odd, and, in my opinion, downright disturbing (the eyes!).

While we usually illustrate EYB blog posts with recipe images from the EYB Library or photographs we take ourselves, we have occasionally purchased the rights to a stock image. One company we have used for this purpose has been promoting – nay, pushing – its new AI image generator for several months, but I have not used the tool (and will not, save for this post), because I would rather pay a real person for a quality image than use a technology that makes it harder for photographers to make a living, even if it does save a few bucks. The same holds true for AI copy in blog posts, which, as I have mentioned before, is not something EYB is interested in using. We think our Members deserve better than a word salad assembled by a bot or a creepy image designed by AI. The downside for Members is that they must occasionally endure bad puns in blog titles like this one. That’s a small price to pay, don’t you think?

5 Comments

  • janecooksamiracle  on  February 29, 2024

    Bring on the puns!

  • Indio32  on  February 29, 2024

    Love your stand on AI, which should be applauded, but I think if you come back in 10 years or so, human photographers will be so expensive compared to AI that it’ll make no financial sense to use them. Will the membership pay extra for non-AI images?
    Looking at it in a wider context, go back 30 years and look at what supermarkets were selling compared to now. There are a huge number of ingredients that it’s no longer possible to get, as slowly, slowly supermarkets have swapped them for much more profitable ultra-processed variants.

  • pitterpat4  on  February 29, 2024

    Thank you for not using AI generated images. AI has some good uses but this is not one of them. The AI generated images are always disturbing to me. When it comes to food, it makes people expect something that is unattainable. Sure, one day they will be more realistic but I would support individual artists. I’ve been in the IT world for 40 years but still think there are some things that technology should not replace.

  • TeresaRenee  on  March 1, 2024

    Another vote for puns!

    I use AI at work. We’ve been using ChatGPT for some specific tasks and have hilarious stories about its errors. I’ve told my kids to never, ever use it for essays or anything factual because it’s designed to be “creative”. I once asked for the top 10 Canadian Olympic high jumpers from the 1950s and it got all 10 wrong: age, sport, nationality, etc. I wish I were allowed to make that many errors and still be considered cutting edge.

    My husband uses it at work to do first drafts of presentations. Then he goes through the content and ensures everything is factually correct. Last week he tried to get it to create a poster for a family ski trip. It never got the family name correct (each iteration used a different, incorrect name), and one of the posters showed the skier using skis as poles, which my kids found hilarious.

    An online retailer I use started using artificially generated photos of models for its clothing. I get creeped out looking at them and noticed I buy less clothing because I hate the images. I can’t figure out what’s wrong with them but they just don’t look like real people.

  • Fyretigger  on  March 1, 2024

    A STEM-oriented college professor friend of mine visited recently, and we got to talking about the AI problem. He said that in academic circles there are early indications of AI starting to self-destruct. Basically, AI has trained on the internet or some subset of it. As has been shown, nobody is acting as a teacher for the AIs. The AIs themselves have no basis to discern truth, ethics, and so on; no one is deciding for them what is good content and what is bad. This is how AIs rapidly become every “ist” you can name: racist, misogynist, sexist, ageist, etc. As low-quality AI content proliferates, the AI itself taints the learning pool, and you get a downward spiral. Unfortunately, it has the potential to ruin the internet for us all. Wikipedia is already dealing with this, as has been nationally reported.

    AI does have its positive sides here as well. A San Francisco-based AI company used its AI to analyze Wikipedia against published news articles and journals to detect underrepresentation on Wikipedia, particularly of women in science. The AI found 40,000 entries that should exist, based on Wikipedia’s own criteria, and it created fully cited summaries for human review and possible inclusion. Those further interested can search for the Wired article “Using Artificial Intelligence to Fix Wikipedia’s Gender Problem”.