Is Apple Intelligence Making Up Words Now?



As powerful as LLMs can be, they all share one weakness: hallucination. For reasons beyond our full understanding, AI models have a habit of making things up, seemingly out of the blue. A response can be specific, with well-cited sources and relevant information; then, suddenly, the AI pushes a false claim, or mistakes an ironic forum comment for fact. (This is how you end up with Google's AI Overviews recommending you add glue to your pizza.) Some LLMs may hallucinate less than others, but none are immune. That's why every time you use a chatbot, you'll see some kind of warning on the screen letting you know the AI might make mistakes.

Apple's AI platform, Apple Intelligence, is no exception. When the company first released its AI, it included notification summaries as a perk. Apple had to quickly walk the feature back, however, once it started summarizing news alerts incorrectly, as in one case when Apple Intelligence misread a brief BBC headline to claim that Luigi Mangione, the suspect in the UnitedHealthcare shooting, had committed suicide in prison. The company later reinstated the feature with some additional guardrails, such as rendering news summaries in italics.

Apple Intelligence may be inventing new words

I stumbled across this post Thursday on the r/iOS subreddit, which adds an interesting wrinkle to the AI hallucination debate. The post reads, "Anyone else getting fake words in their AI summary?" with an attached screenshot showing a notification summary for the Acme Weather app. The first sentence reads: "Light rain for hours." The poster jokes: "Ah, the inbixtent rain. At least it's only for an hour. Wait, inbixtent?"

Despite sounding plausibly like a real word, "inbixtent" is, in fact, completely made up. The poster didn't share exactly what the original notification said, so we can't know what words Apple Intelligence was working from here. What we do know is that the poster saw "inbixtent" three times, and they're not alone. Setting aside the jabs at the weather app the OP uses, some of the comments on the post confirm that others have seen Apple Intelligence make up fake words in its notification summaries. One commenter said they saw "flammating" in one summary and "tranquified" in a Mail summary; another shared that they'd run into similar invented words on two separate occasions.


I couldn't find any other examples of this phenomenon online, and I don't personally use notification summaries on my iPhone, so I haven't seen the problem myself. I can't say for sure how widespread it is, or whether it's limited to a specific version of iOS, a specific device, or one app over another. One of the commenters has a theory, however: they think that when the on-device model Apple Intelligence uses can't fit the original phrase into a shorter summary, it creates a portmanteau to compensate. In their words, the AI "yolos" a "vibes-word," one that merely sounds about right. They say this happens to them most often with weather app summaries.

Does Apple Intelligence make up words in your summaries?

Again, there's no telling whether this affects a large number of Apple users or just a small fraction. The fact that I can only find one post about it, with two commenters sharing similar experiences, leads me to believe it's the latter, but I'd love to hear from anyone with a similar experience. If you use Apple Intelligence's notification summaries, let me know if you've seen made-up words on your end. I may need to turn the feature on just to keep track.





