What enhanced AI can mean for the next Tensor chip

Lately, AI has been at the heart of many features integral to the user experience, with companies seeking to leverage its power in nearly every new feature under development. From better speech recognition to fixing blurred photos and everything in between, many of the features we take for granted nowadays were built around AI. Arguably, though, no company leans on the prowess of AI more than Google. While many will point to Google’s in-house Tensor SoCs as the first indicator of Google’s increasing dependence on AI, in reality, that dependence goes back well before the inception of the Tensor SoCs. And with Google announcing its intentions to lean heavily on AI at this year’s Google I/O, that integration will only become more important.

Ambient Computing: Google’s ultimate goal

Google Tensor G2 graphic on lime background.

Back in 2019, Google’s SVP of devices and services, Rick Osterloh, first introduced the term “ambient computing” to the public at the Made by Google ‘19 event. Much to the bewilderment of the audience, Osterloh defined ambient computing as the concept of having the end user at the center of the system, not their phones or any other devices they own. “Help is anywhere you want it, and it’s fluid,” he said. “The technology just fades into the background when you don’t need it.”

Essentially, Google’s objective is to develop a system that readily and seamlessly handles the user’s queries as effectively as possible with minimum intrusion. Think of it as Iron Man’s Jarvis, except that it caters to normal users instead of a billionaire superhero. Likewise, a voice assistant – Google Assistant, in our case – will be at the center of this ambitious vision. At this point, many would be forgiven for interpreting Google’s so-called ambient computing as putting Google Assistant in every device and calling it a day.

Even before debuting its Tensor SoC, Google was heavily invested in AI to enhance the user experience.

Fast-forward to Google I/O 2022, and ambient computing was once again front and center, with Osterloh reiterating that “In a multi-device world, people don’t want to spend their life fussing with technology.” As the keynote went on, he emphasized how Google’s endeavors with its Pixel devices are built with ambient computing in mind. A cornerstone of Google’s vision of ambient computing is, of course, the Tensor SoC. While it might not boast the highest horsepower, its biggest strength is its TPU, Google’s integrated machine-learning engine, which leverages Google’s expertise in AI enhancements.

Essentially, a TPU is a piece of hardware purpose-built to perform the massive matrix operations at the heart of neural network workloads at much faster speeds. These neural network workloads represent the core of AI-based applications. On other chipsets, they’re typically processed by either the CPU or the GPU. While both can handle these tasks without major issues, neither can do so as quickly as a TPU can.
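To make the connection between neural networks and matrix operations concrete, here is a minimal sketch of a single dense layer in NumPy. The shapes and names are illustrative assumptions, not tied to any real model; the point is that the layer boils down to one big matrix multiply, and deep networks chain thousands of these.

```python
import numpy as np

def dense_layer(x, weights, bias):
    # One matrix multiply plus a bias add, followed by a ReLU.
    # Matmul throughput dominates, which is exactly what a TPU accelerates.
    return np.maximum(x @ weights + bias, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 512))   # a batch of 32 input vectors (illustrative sizes)
w = rng.standard_normal((512, 256))  # layer weights
b = np.zeros(256)

out = dense_layer(x, w, b)
print(out.shape)  # (32, 256)
```

Running a full model is just many such layers back to back, which is why a chip that multiplies matrices faster speeds up the whole workload.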

Pixel 7a voice typing in action

Primarily, the reason a TPU is faster is that both the CPU and GPU rely, to varying extents, on accessing memory while processing such tasks. Compared to calculation speed, memory access is substantially slower (this is referred to as the von Neumann bottleneck), which can hinder the throughput of the CPU and GPU when performing these matrix operations. It must be noted, however, that the GPU is considerably faster than the CPU in this regard. Thanks to the way a TPU is designed, memory access is not required while it processes these matrix operations, resulting in much higher throughput than either of them. The only downside is that a TPU is fit only for this purpose, meaning it cannot replace either the CPU or the GPU at their respective tasks.
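A rough back-of-the-envelope calculation shows why matrix multiplication rewards hardware that avoids trips to memory. An N×N matmul performs about 2·N³ floating-point operations over only 3·N² matrix elements, so the FLOPs-per-byte ratio grows with N, and a design that keeps operands flowing on-chip (as a TPU does) pulls further ahead. The numbers below are illustrative, not measurements of any real chip.

```python
def arithmetic_intensity(n, bytes_per_element=4):
    # Idealized model: read A and B, write C exactly once each.
    flops = 2 * n**3                             # multiply-accumulates
    bytes_moved = 3 * n**2 * bytes_per_element   # total memory traffic
    return flops / bytes_moved

for n in (64, 512, 4096):
    print(n, round(arithmetic_intensity(n), 1))
# 64 10.7
# 512 85.3
# 4096 682.7
```

The bigger the matrices, the more compute you get per byte moved, which is why sidestepping the memory bottleneck matters so much for these workloads.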

Given the significance of the Tensor SoC, it was not much of a surprise to see Google’s Pixel 6a – Google’s midrange phone of that year – retain the same Tensor SoC as its flagship siblings, even at the expense of something as significant as a higher refresh rate screen. If anything, this shows how crucial the Tensor SoC is to Google’s ultimate goal. While it might have sounded earlier like a mere afterthought or an overly ambitious project, it now sounds more credible than ever, especially with generative AI and natural language processing (NLP) engines taking the world by storm.

Google Bard: AI at the helm

Google Bard hero image

Source: Google

Despite being renowned for its cutting-edge AI research, it wasn’t Google that started the most recent wave of AI-driven applications. With AI-based chatbots like ChatGPT soaring in popularity, Google was bound to release its own version. In the most unimpressive fashion, Google finally unveiled Bard, its own take on generative AI.

Like ChatGPT, Bard is another AI-driven chatbot that utilizes a language model to respond to the end user’s queries in a natural, conversational way. Where it differs from its contender is the model it is trained on, a difference more significant than most people might think.

Instead of OpenAI’s GPT, Bard utilizes Google’s homegrown language model, LaMDA, which has been developed behind closed doors. Prior to Bard, we only got a glimpse of it back at Google I/O 2021. Big things were, of course, expected from that announcement, and it is hard to argue that Google did not deliver what it promised. The problem is that Google is not alone in this space. For the first time in many years, Google is not the first architect of a particular innovation.

ChatGPT generating a description about itself

Indeed, OpenAI got the ball rolling with ChatGPT. Apart from being released to the public first, ChatGPT has already undergone some significant upgrades in this relatively short time, including the introduction of OpenAI’s newest GPT-4 language model. Even more worrying was how Microsoft breathed new life into Bing by incorporating this technology. If soaking up the limelight of AI technology did not worry Google enough, then a threat to its dominant position in the search engine market will surely have it on its toes. This was evident from the moment Google hastily took the wraps off Bard, which sometimes struggled to answer basic questions, like naming the months of the year, or jokingly suggested that the service had already been shut down.

It must be noted, however, that Bard is still in its infancy, and growing pains were bound to happen, especially given the big promises of such a technology. Also noteworthy is that crossing the line first does not necessarily guarantee success. It was not all smooth sailing for OpenAI either, with ChatGPT occasionally going off the rails. In fact, Google still has a golden opportunity not only to catch up with OpenAI’s chatbot but even to reinstate itself firmly as the one to beat. At this year’s Google I/O, the company announced a ton of new features and improvements while touting how it was being “responsible.”

How can Bard integrate into Google devices?

Pixel 7 Pro selfie camera with speech enhancement settings

Bard can take advantage of many aspects of the Pixel and wider Google Android experience. For one, Bard could thrive on the unique capabilities of the Tensor SoC inside Google’s Pixel devices.

It’s not the first time we have seen Google commit to AI-reliant features. Even before debuting its Tensor SoC, Google was heavily invested in AI to enhance the user experience. One of the highlights of Pixel devices, Now Playing, made its debut back in 2018. Another cornerstone of the Pixel experience, Google’s brilliant HDR+ processing, broke into the scene long before Google contemplated the idea of developing its own SoC. Of course, Google later integrated its own Pixel Visual Core chip to assist with its sophisticated HDR+ post-processing. However, it was Google’s post-processing algorithms that turned the heads of many tech enthusiasts, so much so that some Android developer communities took a keen interest in porting Google’s Gcam app to other devices, improving photo quality substantially. Even Magic Eraser, a feature that was released much later, was soon brought to all Pixel devices and Google One members.

Tensor wasn’t the bedrock of those features, but it’s hard to argue that these features do not benefit from the unique abilities of Tensor’s dedicated TPU unit. Besides boosting the performance of existing features, this could open up the opportunity for Google to add even more AI-intensive features, and one of these features could well be none other than Bard AI. In fact, it has been reported that Bard AI could be coming to Pixel devices as an exclusive feature before potentially being rolled out to all Android phones.

Perhaps Google is still testing the waters via a standalone implementation of Bard on Android before ultimately integrating it into something like Google Assistant. This way, Google can bring the best of both worlds – Google Assistant’s refined experience and Bard’s capabilities as a generative AI engine.

Google Search generative AI results image

In general, Google Assistant is an excellent area in which to integrate Bard. For starters, since most Android phones already come with Google Assistant pre-installed, such a move would quickly increase Bard’s adoption. Google Assistant would also get substantially smarter and more useful, thanks to Bard’s ability to churn out more sophisticated responses. With Bard tied to Google Assistant, this could also facilitate integrating it with any other smart device that supports Google Assistant. This way, not only will your phone get smarter, but so will all your smart devices. Curiously, though, Google didn’t mention Assistant even once at I/O.

Should Google knit Bard and Google Assistant together, it could also boost Bard’s performance by tapping into Tensor’s potential. If Google could optimize LaMDA (or PaLM 2) to run on its devices’ TPU, it could be a big game-changer. Not only would this tip the scales in favor of its Pixel devices, but it could also induce a big shift in focus when designing upcoming SoCs, further diluting the ever-growing emphasis on raw CPU and GPU performance while highlighting the significance of a capable, dedicated TPU.

Given how heated the competition is, there is virtually no room for Google to be too cautious to give it a try.

Of course, tying Bard to Google Assistant will present its own challenges that Google has to work on. For instance, Google will surely have to work on reducing the possibility of misinformation to practically zero. Failing that, Google could risk undermining Google Assistant’s reliability, arguably its biggest strength in the virtual assistant space. It is fair to say that the stakes are incredibly high. Given the head start Google has in this regard, however, not committing to that plan would mean wasting too good an opportunity.

You can see that Google already has a large foundation to build on. Google Assistant ships with nearly every Android phone and is supported by a wealth of smart devices on the market. Google also now has its very own Tensor chipset, designed from the ground up for AI-based applications. These are two key areas where Google already has Microsoft beaten. Given how heated the competition is, there is virtually no room for Google to be too cautious to give it a try.

Microsoft has the advantage for now…

This is one of the first times we’ve seen Google seemingly lagging behind. For a company that has always prided itself on cutting-edge AI research, it feels strange to see Google playing catch-up in this particular arena. And of all its contenders, it’s Microsoft that has the lead, thanks to its integration of OpenAI’s newest GPT-4 language model, which revived Bing in the process. Yet there is still that sense of inevitability that Google will soon take back the lead, even if there is currently a sizable gap between the two companies. If anything, Google has yet to play the ace up its sleeve, whereas Microsoft has already jumped the gun.

It won’t be a straight line to the finish, though. Google will have to clear some hurdles first should it decide to integrate Bard into its Pixel devices. While the Google Assistant route might sound like an easy win, it still presents its own challenges that Google must navigate. Having said that, with Google yet to leave the starting line, it really is Microsoft’s race to lose. It might look that way now, but it probably won’t be for long.

Looking at Google I/O, it seems that Google, while rattled, isn’t ready to throw in the towel just yet.