As a developer, Google I/O was everything I could have hoped for. It was a great networking event, there were plenty of new tools to help us do our day-to-day jobs, and there was sunshine. Now that the hype is over, what are the main takeaways?
The keynote was primarily a scorecard of Google’s work over the previous 12 months. They revelled in their achievements in mobile, AI, VR, the web, and their charitable work.
Android is up to 2 billion monthly active users. There was little dwelling on the next Android release, ‘O’. It seems to be primarily a stability release, with only a handful of user-facing features like ‘Picture-in-picture’ video and ‘Channels’ for an app’s notifications. The vast majority of system apps and Google’s own Play Services have been steadily unbundled from the OS over the last few years, so there was no need to talk about them either. Emoji and sticker packs were barely mentioned! In fact, a surprising number of announcements affected Android, the web, and iOS equally.
Overwhelmingly, the major thread running through the keynote was AI (a term I will use as shorthand for artificial intelligence, machine learning, and neural networks). It underpins Google Assistant (slowly rolling out to more devices than just Pixels), Google Home, and the many AI services provided through Google’s Cloud Platform.
Google Assistant is going to be available everywhere! It’s on Android phones, it’s now available on iPhones, it’s on Android TV, Android Wear, and Google Home, and it’s available through an SDK for anything else! Assistant is also getting a visual search feature, called ‘Lens’, which uses the camera to give the user contextual information about their live feed. It seems to combine the best of previous services such as ‘Google Now on Tap’, ‘Google Goggles’, and the camera mode of ‘Google Translate’. They’ve also added visual result cards to some types of voice query: the Assistant may direct a user to the results on their phone or (Chromecast-enabled) TV screen. Assistant apps (‘Actions’) don’t need to be installed before use (unlike Alexa skills) and are surfaced through explicit or implicit phrases (e.g. “let me talk to My Fitness Friend” or “I want to exercise”). This will be the voice platform to beat, and users are quickly going to assume that they can get stuff done with your brand.
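To make the explicit/implicit distinction concrete, here is a toy sketch of how a spoken query might be routed to an Action either by its invocation name or by a capability phrase. This is illustrative Python, not the real Actions SDK; the action id and phrase tables are made up:

```python
from typing import Optional

# Hypothetical routing tables: both an explicit invocation name and an
# implicit capability phrase resolve to the same (made-up) Action.
EXPLICIT_NAMES = {"my fitness friend": "fitness_action"}
IMPLICIT_PHRASES = {"i want to exercise": "fitness_action"}

def route(query: str) -> Optional[str]:
    """Return the Action id a spoken query resolves to, if any."""
    q = query.lower().strip()
    for name, action in EXPLICIT_NAMES.items():
        if f"talk to {name}" in q:   # e.g. "let me talk to My Fitness Friend"
            return action
    return IMPLICIT_PHRASES.get(q)   # e.g. "I want to exercise"
```

The point is that discovery is conversational: the user never installs anything, they just ask, and the platform decides which Action answers.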
AI is already improving apps and websites through a variety of mechanisms. Google offer off-the-shelf APIs for natural language processing, audio transcription (in 80+ languages), text translation (between any two of 100+ languages), automatic video tagging, and for identifying objects, faces, landmarks, indecent material, and much more. What’s more, they provide these in a seamless way when working with Firebase, their app development platform (best known for its real-time database).
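As a flavour of how these APIs are consumed, here is a minimal sketch of the request body for the Cloud Vision `images:annotate` REST endpoint. The API key, network call, and a real image are assumed; this just builds the JSON payload, with feature names as per the public REST reference:

```python
import base64
import json

def build_annotate_request(image_bytes: bytes) -> dict:
    """Build the JSON body for POST https://vision.googleapis.com/v1/images:annotate."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 5},  # tag objects/scenes
                {"type": "SAFE_SEARCH_DETECTION"},             # flag indecent material
            ],
        }]
    }

# No real image here; any bytes demonstrate the payload shape.
payload = build_annotate_request(b"<image bytes>")
print(json.dumps(payload, indent=2))
```

The response comes back as JSON with one annotation set per requested feature, so the same pattern covers faces, landmarks, and the other detection types.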
Google also cater for more custom applications through TensorFlow, their open-source, high-level AI library, along with the cloud processing power to run it. TensorFlow allows us to create insights from datasets that go beyond simple queries: maybe you want to detect certain behaviours or do some clever image processing inside your app. Many clients are sitting on interesting datasets that are ripe for building new experiences through AI. Since AI is the new frontier, we should be actively working with our clients to determine what we can do in this area.
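What TensorFlow automates is, at heart, numerical optimisation at enormous scale. As a minimal illustration of that underlying idea (plain Python, no TensorFlow), here is gradient descent fitting a straight line y = w·x + b to some toy data:

```python
# Toy data generated from y = 3x + 0.5; the loop below "learns" w and b back
# by gradient descent on the mean squared error, TensorFlow-style but by hand.
data = [(x / 10.0, 3.0 * (x / 10.0) + 0.5) for x in range(-10, 11)]

w = b = 0.0
learning_rate = 0.1
for _ in range(1000):
    grad_w = sum(2 * ((w * xi + b) - yi) * xi for xi, yi in data) / len(data)
    grad_b = sum(2 * ((w * xi + b) - yi) for xi, yi in data) / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(round(w, 2), round(b, 2))  # converges to 3.0 0.5
```

Real models swap the straight line for a deep network and the 21 points for a large dataset; TensorFlow then computes the gradients for you and runs the loop on GPUs or Google's cloud hardware.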
Android Things was another big push at I/O. This is Google’s IoT play and, while still only in developer preview, it allows developers to create great experiences on low-cost hardware (like the Raspberry Pi) using the majority of the Android toolchain. There was a lot of crossover between this, AI, Assistant, and cloud services like Firebase.
The conference did not disappoint, and over the coming months a lot of great business cases will surface that showcase the power of Google's new releases. Make sure you stay tuned to the Somo blog for updates.