Google Apps, Tools Aim to ‘Democratize AI’

To me, the biggest theme at last week’s Google I/O conference was “democratizing AI”—in other words, making AI accessible both to end-users through its use in a variety of Google services, and to developers through new tools, programs, and even hardware designed around Google’s TensorFlow AI framework.

Google CEO Sundar Pichai opened the conference with a keynote in which he again stressed that the company is moving from a mobile-first to an AI-first approach, echoing his message from last year.

He said Google was "rethinking all our products and applying machine learning and AI to serve users' problems." Machine learning algorithms already influence the ranking of search results, he noted, and Street View now automatically recognizes signs. Other services are getting smarter because of AI as well: Google Home now supports multiple users, and Gmail is rolling out a "smart reply" feature that automatically suggests responses to emails.

To that end, he announced a number of AI products, for both consumers and developers.

Lens, Assistant, and Photos Use AI Features

For end-users, the most visible of these new efforts is Google Lens, a set of vision-based computing capabilities that can understand what you are seeing and take action, both in the Google Assistant and in Google Photos.

For instance, he demonstrated how you can take a picture of a flower and have Google Lens identify it. More prosaically, it can take a picture of a Wi-Fi username and password, understand that you want to connect, and do that for you. Other examples include taking a picture of a restaurant's storefront and having the software recognize it, then show you user reviews and menus. None of this is completely new, but I can imagine it will be quite useful, the kind of thing we'll all be using routinely in a few years. Google says Lens will roll out in the coming months.

Google Assistant continues to get smarter and will incorporate Google Lens, though the biggest news on that front is that Assistant is now coming to the iPhone.

The popular Google Photos app is also getting a number of new AI-driven features, including "suggested sharing," in which it automatically selects the best pictures and suggests sharing them with the people in the photos. Google Photos is also adding a feature that lets you automatically share all or part of your library, so that if you take photos of your kids, those photos automatically become part of your partner's library as well. And it can suggest the best photos for a photo book.

AI-First Data Centers and New Development Tools

On the internal side, Pichai talked about how the company is "rethinking" its computational architecture to build "AI-first data centers." He said Google uses its current Tensor Processing Units (TPUs) across all its services, from basic search to speech recognition to AlphaGo.

I was particularly intrigued by the company's introduction of the TPU 2.0, the second generation of the chip, which Pichai said is capable of 180 teraflops (180 trillion floating-point operations per second) per four-chip board, or 11.5 petaflops in each "pod" of 64 such boards. These are available to developers now as "Cloud TPUs" on Google Compute Engine, and the company said it would make 1,000 Cloud TPUs available to machine learning researchers via its new TensorFlow Research Cloud.
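The quoted pod figure follows directly from the per-board number, as this quick arithmetic check shows (illustrative only, using the figures from the keynote):

```python
# Arithmetic behind the quoted TPU pod figures.
tflops_per_board = 180             # 180 teraflops per four-chip TPU 2.0 board
boards_per_pod = 64                # boards in one TPU "pod"

pod_tflops = tflops_per_board * boards_per_pod   # 11,520 teraflops
pod_pflops = pod_tflops / 1000                   # 1 petaflop = 1,000 teraflops

print(pod_pflops)  # 11.52, matching the ~11.5 petaflops Pichai cited
```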

This is part of an increasing push on TensorFlow, the company's open source machine learning framework for developers, and the conference had a variety of sessions aimed at getting more developers to use it. TensorFlow appears to be the most popular of the machine learning frameworks, but it's only one of a number of choices. (Others include Caffe2, backed by Facebook, and MXNet, backed by Amazon Web Services.)

I went to a packed session on "TensorFlow for Non-Experts," designed to evangelize the framework and the Keras deep learning library. It's fascinating stuff, but far less familiar than traditional development tools. All the big companies say they have trouble finding enough developers with machine learning expertise, so it's no surprise to see each of them pushing its own framework. The tools are getting better, but building models is still complicated. Of course, simply calling an existing model is much easier, and Google Cloud Platform, Microsoft, and AWS all offer a variety of such ML services developers can use.
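To give a flavor of what libraries like Keras abstract away, here is a minimal sketch in plain Python (illustrative only, not code from the session) of the forward pass through a single fully connected layer, the basic building block that a high-level API collapses into a one-line layer definition:

```python
def dense(inputs, weights, biases):
    """One fully connected layer with ReLU:
    output[j] = max(0, biases[j] + sum_i inputs[i] * weights[i][j])"""
    outputs = []
    for j in range(len(biases)):
        z = biases[j] + sum(inputs[i] * weights[i][j] for i in range(len(inputs)))
        outputs.append(max(0.0, z))  # ReLU activation
    return outputs

# Toy layer: 2 inputs, 2 units, hand-picked weights for illustration.
x = [1.0, 2.0]
W = [[0.5, -1.0],
     [0.25, 1.0]]
b = [0.0, 0.1]
print(dense(x, W, b))  # [1.0, 1.1]
```

A framework like Keras handles this (plus training, gradients, and hardware acceleration) behind a declarative layer API, which is exactly the accessibility pitch the session was making.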

Because developing such services is so hard, Pichai spent a lot of time talking about "AutoML," an approach in which neural nets design new neural networks. He said Google hopes AutoML will take an ability only a few PhDs have today and, within three to five years, make it possible for hundreds of thousands of developers to design neural nets for their particular needs.

This is part of a larger effort called Google.ai to bring AI to more people, and Pichai described a variety of initiatives that use AI in health care, including pathology and cancer detection, DNA sequencing, and molecule discovery.

Continuing the theme, Dave Burke, head of Android engineering, announced a new version of TensorFlow optimized for mobile, called TensorFlow Lite. The new library will let developers build leaner deep learning models designed to run on Android smartphones, and he said mobile processor designers are working on specific accelerators in their processors or DSPs for neural network inference and even training.
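One standard way such "leaner" mobile models shrink is by quantizing 32-bit float weights down to 8-bit integers. The sketch below (a simplified linear quantization in plain Python, not TensorFlow Lite's actual scheme) shows the core idea: map each weight onto one of 256 levels, cutting storage roughly 4x at the cost of a small reconstruction error.

```python
def quantize_8bit(weights):
    """Simplified linear 8-bit quantization: map floats onto integer
    levels 0..255, then reconstruct approximate floats from them.
    Assumes the weights are not all identical (so hi > lo)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255
    q = [round((w - lo) / scale) for w in weights]   # ints in 0..255
    dequant = [lo + qi * scale for qi in q]          # approximate floats
    return q, dequant

w = [-1.0, -0.2, 0.0, 0.7, 1.0]
q, approx = quantize_8bit(w)
print(q)       # five integers in the 0..255 range
print(approx)  # close to the original weights, within half a step
```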

[Photo: Fei-Fei Li at Google I/O 2017]

In the developer keynote, Fei-Fei Li, the Stanford professor who heads AI at Google Cloud, said she joined Google "to ensure that everyone can leverage AI to stay competitive and solve the problems that matter most to them."

She talked a lot about "democratizing AI," covering the tools Google makes available to developers for specific applications (vision, speech, translation, natural language, and video intelligence), as well as tools for creating your own models, such as TensorFlow, which is becoming easier to use through higher-level APIs.

She noted that developers can now use CPUs, GPUs, or TPUs on Google Compute Engine, and gave an example of the substantial speedup some models see when running on TPUs, saying the research implications are significant.

Echoing Pichai, she touted the new TensorFlow Research Cloud and encouraged students and Kaggle users to apply to use it. She concluded by saying the firm created its Cloud AI team to make AI democratic, to meet developers where they are with Google's most powerful AI tools, and to share the journey as they put those tools to use.

http://www.pcmag.com/article/353992/google-apps-tools-aim-to-democratize-ai
