Personal Voice on iOS 17: Apple’s most prominent AI feature yet

Are we a step closer to virtual immortality?


  • Apple’s advancement in consumer-facing AI has been slow compared to Google and Microsoft.
  • The Personal Voice feature on iOS 17 impressed the author and renewed their faith in Apple’s progress in AI.
  • While not actively relied upon in day-to-day life, Personal Voice showcases Apple’s efforts in AI and could pave the way for more advanced features in the future.

When compared to Google and Microsoft, Apple’s advancement in the consumer-facing AI department has been slow. While the Cupertino firm has reportedly been testing Apple GPT in its secret labs, iPhone users still have no access to a modern AI chatbot. For now, those on iOS can tap into Siri’s ancient knowledge to get some basic tasks done. But even then, the assistant often fails to comprehend unsophisticated commands. After using the Personal Voice feature on iOS 17, however, my faith in Apple and its progress in the AI field was renewed.

What’s Personal Voice?

For those unfamiliar, Apple first announced Personal Voice a few months ago as one of several accessibility features coming to its platforms later this year. The feature is now available in beta to those running iOS 17 and iPadOS 17, and it requires a roughly 15-minute setup. During setup, you’re prompted to read short phrases aloud in a quiet environment. After you’ve recorded over a hundred of them, iOS analyzes the recordings while your iPhone is locked and connected to its charger. Note that this preparation phase took my iPhone 14 Pro several days to conclude, so even on the latest iPhone, don’t expect the process to be super fast. Once it’s ready, your iPhone or iPad will be able to read text aloud using your virtual voice.
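Apple has also said that third-party apps can tap into Personal Voice with the user’s permission. To make the mechanics concrete, here’s a minimal Swift sketch of how an app might request access and speak through a Personal Voice, using the AVSpeechSynthesizer additions Apple introduced in iOS 17. Treat it as an illustration of the flow rather than production-ready code.

```swift
import AVFoundation

// Keep a strong reference; a locally-scoped synthesizer can be
// deallocated before it finishes speaking.
let synthesizer = AVSpeechSynthesizer()

// Ask the user for permission to use their Personal Voice (iOS 17+).
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else {
        print("Personal Voice access not granted: \(status)")
        return
    }

    // Personal Voices appear alongside the system voices,
    // tagged with the .isPersonalVoice trait.
    guard let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) else {
        print("No Personal Voice has been created on this device.")
        return
    }

    // Speak a phrase in the user's own (virtual) voice.
    let utterance = AVSpeechUtterance(string: "Hello! This is my Personal Voice.")
    utterance.voice = personalVoice
    synthesizer.speak(utterance)
}
```

Notably, synthesis runs entirely on-device, which is why the feature keeps working offline.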

Hands-on with Personal Voice


Initially, I had very low expectations for the end result, as Apple isn’t exactly famous for its AI smarts. But boy, my mind was blown after testing it for the first time. Yes, there is an inevitable robotic layer to the virtual voice, but it certainly sounds like me. And considering that this feature works completely offline, it’s really impressive how far Apple has come in this field.

To get a more objective read on the results, I even sent virtual voice messages to a number of contacts. Some were completely unaware that the voice was artificial, while others could detect the robotic layer accompanying it. Everyone, however, agreed that Personal Voice indeed sounds like my real voice.

Do I actively rely on Personal Voice in my day-to-day life? Not really. After all, it’s an accessibility feature designed for people with speech difficulties or those at risk of losing their voices down the road. For everyone else, it’s a cool party trick to show off to those unfamiliar with the latest mobile technologies, but not something you’ll use much. Nonetheless, it’s a promising materialization of Apple’s not-so-secret AI efforts, especially since reports indicate that we could see some major AI-focused offerings from Apple next year.

How it could pave the way for more advanced features

With Personal Voice becoming accessible to millions of iPhone users later this year, it makes you wonder whether people will eventually be able to generate full virtual identities of themselves. While I don’t see Apple executing it, it’s already possible to manipulate video, facial expressions included, not to mention replicate a person’s voice. Feed a virtual version of yourself a few years of your chat history, and it could plausibly replicate your personality, speech tone, and style. View the result through an Apple Vision Pro, and you’ve got a virtual twin that mirrors your appearance, voice, and mentality.