The 2016 Google I/O Conference proved Google's commitment to providing solutions that continue to blend technology into users' daily lives. As in recent years, Google presented new technology offerings with the intent of giving developers access and the ability to continue improving products in an open environment. With their visions seemingly driven by artificial intelligence and machine learning, Google's announcements this year showed a continued push for powerful computing and connectivity that is universally accessible.
Our mobile team at Productive Edge is particularly interested in the new capabilities of Android N, Google’s new mobile platform, which will be named later this summer. We’ve also been keeping our eye on a variety of other updates from Google, including capabilities for running apps prior to install, contextual learning APIs, Android Wear 2.0 updates, new messaging products, and enhancements to Google’s development tools.
Here’s a look at our team’s top takeaways from the 2016 Google I/O Conference:
- Google Assistant and the Awareness API: Unlocking the Power to Turn Every App into a Smart Assistant
This year Google unveiled its newest creation, Google Assistant, a virtual assistant tool that provides users with a more personalized, interactive experience through two-way conversation. The personal assistant expands upon Google Search, allowing you not only to ask it questions, but also to make dinner reservations, or even purchase concert tickets, within the context of a naturally flowing conversation with the assistant.
What’s even more fascinating to us is that Google has unlocked the capabilities of their assistant by releasing their new Awareness API. Over the years, Google has worked to develop nine powerful APIs that use mobile device sensors to gather data about a user’s location, activities, and environment. Google Awareness provides a centralized framework of sensor data that allows developers to easily add contextually aware features to their applications, a task that was previously quite an undertaking. With greater contextual awareness, apps can now be built to serve us in new ways based on predictive metrics or understanding the context of the user’s surroundings.
With this centralized framework and data comes the possibility of easily creating smarter, more assistive applications. At a simple level, this could be your alarm understanding your work schedule and traffic patterns, then allowing you an extra hour in bed on a light day. At PE, we see even more innovative possibilities for this new technology - we could combine this data with health biometrics to build powerful applications that signal a user to make smart (or critical) health decisions. We could even use this new technology to create an app that helps manage a workforce’s daily tasks in the most optimal manner on a given day.
For developers, harnessing data and using APIs universally has always presented a challenge that sometimes leads to poor application functionality, diminished hardware performance, and a poor user experience. With the new Awareness API, Google enables our team to combine the power of multiple APIs, gather data to better understand users and, ultimately, build more engaging applications.
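To make the smart-alarm idea above concrete, here is a simplified sketch in plain Java. The class and method names are our own invention, and the inputs (first meeting time, typical commute, current traffic delay) are stand-ins for signals a real app would gather via the Awareness API and the user’s calendar - this illustrates the contextual logic, not the API itself.

```java
import java.time.Duration;
import java.time.LocalTime;

// Hypothetical smart-alarm logic: the inputs stand in for signals a
// real app would read from Awareness API snapshots (location,
// activity, places) combined with calendar data.
public class SmartAlarm {

    // Time needed to get ready before leaving, assumed fixed here.
    private static final Duration PREP_TIME = Duration.ofMinutes(45);

    /**
     * Computes when the alarm should fire: work backwards from the
     * first meeting by the commute (including the current traffic
     * delay) plus a fixed preparation window.
     */
    public static LocalTime wakeTime(LocalTime firstMeeting,
                                     Duration typicalCommute,
                                     Duration trafficDelay) {
        return firstMeeting
                .minus(typicalCommute)
                .minus(trafficDelay)
                .minus(PREP_TIME);
    }

    public static void main(String[] args) {
        // Light day: first meeting at 10:00, 30-minute commute, no delay.
        System.out.println(wakeTime(LocalTime.of(10, 0),
                Duration.ofMinutes(30), Duration.ZERO));
        // Heavy traffic: same meeting, 25 minutes of extra delay.
        System.out.println(wakeTime(LocalTime.of(10, 0),
                Duration.ofMinutes(30), Duration.ofMinutes(25)));
    }
}
```

On the light-traffic day the alarm fires at 8:45; with 25 minutes of delay it moves up to 8:20. A production version would swap the hard-coded inputs for live Awareness snapshots.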
- Instant Apps Deliver App Functionality Without a Full Install
Another buzz-worthy rollout is Instant Apps, which enables access to mobile app content and services without having to install the entire app. With Instant Apps, if you click on a URL link to see a video or make a purchase, your device will utilize deep links to quickly access and temporarily install only the function of an app you need to use, rather than the whole application.
The benefits of this feature span across many different audiences:
- Users will have the ability to instantly access features and utilities that once required the more cumbersome process of downloading a full application from Google Play. Additionally, since the app is only installed temporarily, the psychological commitment of installing another application on their device is reduced. However, if the user really likes the app, Google has plans to make it simple for the user to install the full application.
- Product owners will begin to see increased adoption of applications that once were only installed by users confident they wanted to add the app to their device. The new capability of only temporarily accessing an individual feature of an app could attract an entirely new group of users to a mobile application.
- Google Search will also improve as a result of Instant Apps, as it will now return instantly actionable features and content deep within an application. With the ability to interact directly with the content and features of an application through a Google Search without having to download the full application, search results will be even more meaningful and useful for searchers.
In order for the feature to work, apps will need to be restructured into a modular form that allows users to install just the module they need. This will open up possibilities for publishers, retailers, and more to quickly get users “hooked” on the content and features available within their app.
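The modular restructuring described above implies a mapping from deep-link URLs to small feature modules, so the device only fetches the module a link needs. Here is a hypothetical sketch of that routing idea in plain Java - the module names and URL paths are made up for illustration and are not part of the Instant Apps tooling.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of URL-to-module routing in a modular app:
// each deep-link path prefix is served by exactly one small feature
// module, so an Instant App only has to fetch that module.
// All module names and paths here are invented for illustration.
public class ModuleRouter {

    private final Map<String, String> routes = new LinkedHashMap<>();

    public ModuleRouter() {
        routes.put("/watch/", "video-player-module");
        routes.put("/checkout/", "purchase-module");
        routes.put("/profile/", "account-module");
    }

    /** Returns the feature module that serves the given URL path,
     *  falling back to the full app when no module matches. */
    public String moduleFor(String path) {
        for (Map.Entry<String, String> route : routes.entrySet()) {
            if (path.startsWith(route.getKey())) {
                return route.getValue();
            }
        }
        return "full-app";
    }

    public static void main(String[] args) {
        ModuleRouter router = new ModuleRouter();
        System.out.println(router.moduleFor("/watch/abc123"));
        System.out.println(router.moduleFor("/settings"));
    }
}
```

The point of the sketch: a link to a video resolves to only the small video-player module, while unmapped paths still fall back to installing the full application.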
Best of all, this feature isn't baked into a new version of Android, but will be supported through the Play Store, meaning any semi-modern Android device (supporting Jelly Bean and up) can use Instant Apps.
- Allo and Duo - Google Puts a New Focus on Smarter Messaging - Rivaling (Beating?) Apple’s FaceTime
Google revealed its plan to launch two new mobile applications this summer: the messaging app Allo and the video chat app Duo. Allo is an excellent demonstration of the power of machine learning on mobile. One big feature is Allo’s ability to offer suggested replies to any message, recommendations built from the user’s texting habits. Through Allo, you can even chat directly with Google Assistant, getting instant access to Google apps (Google Maps, Search, etc.) in message conversations. To keep privacy concerns at bay, Google also managed to incorporate an “incognito” mode for chats, similar to the Chrome browser capability. Using end-to-end encryption, incognito chat lets you message securely and even edit settings to allow your private messages to self-destruct after a certain time.
The other new product, Duo, is a video chat app that rivals Apple’s FaceTime in both integration and speed. One aspect Google was excited to present was Duo’s Android-specific “Knock Knock” feature, which gives the user a visual peek into what’s going on with the caller prior to answering a call.
For all of the diehard iOS fans out there, it’s also worth mentioning that both Allo and Duo will be compatible with both Android and iOS devices!
- Android N: Quicker Performance, Split-Screens, and More Productivity
The newest Android OS, Android N, was also previewed at the 2016 keynote, demonstrating improved productivity and performance. We particularly liked the new split-screen feature that allows users to run multiple applications simultaneously. Google also introduced boosted performance with new support for the Vulkan API, a powerful 3D graphics API that gives games and applications alike low-overhead access to the GPU.
Android N now offers automatic and seamless system updates as well. When an update is available, Android N downloads and installs it in the background. This is made possible by using two system partitions: the update is written to the inactive partition while the device runs normally, and the OS simply switches partitions on the next reboot, minimizing downtime and install waits.
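The two-partition mechanism can be illustrated with a small conceptual toy model in Java. This is our own simplification of the idea - not Android’s actual implementation - but it shows why the running system is never disturbed: updates are only ever written to the slot not currently in use.

```java
// Conceptual toy model of Android N's two-partition ("A/B") seamless
// updates: the new OS image is staged to the inactive slot in the
// background, and booting simply flips to that slot. Version strings
// are invented for illustration; this is not Android's real code.
public class SeamlessUpdate {

    // Two system slots; both start on the same installed version.
    private final String[] slots = {"7.0-beta1", "7.0-beta1"};
    private int activeSlot = 0;

    public String runningVersion() {
        return slots[activeSlot];
    }

    /** Background install: writes the update to the slot NOT in
     *  use, so the running system is never touched. */
    public void stageUpdate(String newVersion) {
        slots[1 - activeSlot] = newVersion;
    }

    /** On reboot the device flips to the freshly written slot -
     *  there is no separate "installing update" phase. */
    public void reboot() {
        activeSlot = 1 - activeSlot;
    }

    public static void main(String[] args) {
        SeamlessUpdate device = new SeamlessUpdate();
        device.stageUpdate("7.0-beta2");             // user keeps working
        System.out.println(device.runningVersion()); // still the old version
        device.reboot();
        System.out.println(device.runningVersion()); // now the new version
    }
}
```

Note how `stageUpdate` never writes to `slots[activeSlot]`; the switch is a single index flip at boot, which is what makes the update feel seamless.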
One of the bigger Android improvements was the updated Just In Time (JIT) compiler. This is great news for developers and users alike, because it means quicker installs, better software performance, and no more waiting on the “Optimizing apps” step after system updates.
- Standalone Capability Delivered with Android Wear (2.0)
In the sixth platform revision of Android Wear, Google placed heavy emphasis on creating a product that is even more adaptable and customizable. The biggest new element in Android Wear 2.0 is standalone application capability. This means no more relying on keeping your phone nearby, or even powered on, for wearable applications to function.
Another new highlight is Android Wear’s customization power. Users now have the freedom to decide what apps and data to display on the watch face. Text input in Android Wear 2.0 has also been adapted to allow for easier auto-reply and different input methods, including a full keyboard and handwriting recognition.
These customization options, and the ability to break wearables away from the smartphone tether, are necessary to drive widespread adoption of wearable technology.
- Daydream VR Presents a Completely Virtual Ecosystem
The 2016 Google keynote also brought news of the virtual reality product Daydream, a new mobile virtual reality platform that utilizes Android N. Daydream software incorporates low-level graphics through the Vulkan API and can also be paired with newly designed VR hardware, including modified headwear (sorry, no more Cardboard headgear) and an accompanying controller to use within apps. Through Daydream, users can access games and media from the Google Play Store as well as from major subscription services like Netflix, Hulu, and HBO Now. One thing to keep in mind is that Daydream will only be compatible with newer phones that have capable sensors, sufficient processing power, and suitable screen specifications.
- Major Expansion to Firebase - Evolving into a Comprehensive Development Platform
In 2014, Google acquired the back-end service Firebase, a real-time data management service that helps developers build applications efficiently and sync data across the web, Android, and iOS devices. This year, Google announced a major expansion, making Firebase a unified platform with integrated services like cloud messaging (a free push notification service), crash reporting, virtual device testing, and analytics.
Before now, developers relied on multiple platforms and services to complete all of these tasks of creating and managing an app. Firebase provides a single platform that makes it easier to bring performance and user analytics into the development process. These improvements are huge for the simple reason that they reduce the number of external dependencies developers rely on to build and manage apps. They also put the power to manage an application’s backend and operations in the hands of the developer, rather than another team - we believe this will lead to greater accountability from development staff, rather than reliance on other teams to extract and then push this information. Firebase can now provide all of the backend functionality and dev ops needed for most applications, making it easier to build apps efficiently and manage comprehensive, significant user data for clients.
- Google Home: Google’s Integrated Smart-Device
Similar to Amazon’s Echo and Alexa, Google Home is a voice-activated device with the capability to sync your home’s Google products and your smartphone. With seamless integration, Google Home can stream media and music to your Chromecast, tablets, and any other mobile device. A unique aspect of Google Home is that once you identify which rooms in your house your devices are in, it can play music in whichever rooms you specify. With simple voice activation, users get hands-free access to many Google capabilities, from Google Maps to Google Search.
Combining this technology with Google Assistant and the Awareness API, we see Google Home as a great step toward moving users into a more connected (and smarter!) world.
Google has raised the stakes in smart technology and virtual reality in an attempt to continue changing the way we interact with technology and with each other. These new developments will surely help us to design and develop applications with smarter, centralized tools and provide users with more insightful, responsive products that adapt to their lifestyle.
Tim Arnold - Director of Mobile Solutions
Sarah Siderius - Marketing Coordinator