
01.06.2017

7 min read

Jobs, Lens & AI: Google’s future and what it all means for marketing

This article was updated on: 07.02.2022

At the time of writing, we’re two weeks down the line from Google IO 2017, Google’s annual developer conference. Now that we’ve had time to digest the various announcements that were made, we can look ahead to the new state of play that Google is ushering into the tech world and its implications for digital marketers.

In this post, we’ll take a look at several new developments and trends from Google, most of which were at least mentioned at Google IO 2017. If you’re here for something specific, use the links below to take you to the right section.

  • Google for Jobs
  • Personal Tab
  • Machine Learning
  • Google Lens

Google for Jobs is looming…

…for the USA, anyway. Still, if the platform is a success in the States, we could be seeing it in Europe soon after. The announcement confirms what we began speculating on in February: Google is going to shake up the way that job postings appear in results pages.

At present, we know for sure that the search engine will pull results from big recruitment sites like Monster, CareerBuilder, LinkedIn and ZipRecruiter, with Google actually working with some of them to hone their listings. In our first post on the topic, we asked how recruitment companies would respond to Google’s advances on their territory, but this evidence suggests that they’ve been fairly cooperative, which could mean that they will continue to dominate the SERPs as before.

At least for now, Google for Jobs is fairly limited in scope, not only geographically but functionally. As described by USA Today, job listings will appear above organic results when users search for job openings, with some functionality allowing results to be filtered. Clicking on a listing will take you directly to an application on the job site.

There is no sign yet of paid job advertising, or of Google allowing candidates to apply without leaving the search engine, but those future developments can’t be ruled out. For now, however, our recommendation for jobs schema implementation still looks like the best way to prepare for Google for Jobs and to compete with the big companies that Google has worked with to make it possible.
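To make that preparation concrete, below is a minimal sketch of what JobPosting structured data might look like for a listing page. It is built in Python purely for illustration and serialised as JSON-LD; the organisation, role, salary and address details are hypothetical placeholders rather than values from Google's documentation, so swap in your own listing data.

```python
import json

# A minimal sketch of schema.org JobPosting structured data for a single vacancy.
# All details below (role, organisation, location, salary) are hypothetical placeholders.
job_posting = {
    "@context": "https://schema.org",
    "@type": "JobPosting",
    "title": "Digital Marketing Executive",
    "description": "<p>Plan and run paid search campaigns for a range of clients.</p>",
    "datePosted": "2017-06-01",
    "validThrough": "2017-07-01T00:00",
    "employmentType": "FULL_TIME",
    "hiringOrganization": {
        "@type": "Organization",
        "name": "Example Agency",
        "sameAs": "https://www.example.com"
    },
    "jobLocation": {
        "@type": "Place",
        "address": {
            "@type": "PostalAddress",
            "streetAddress": "1 Example Street",
            "addressLocality": "Leeds",
            "addressRegion": "West Yorkshire",
            "postalCode": "LS1 1AA",
            "addressCountry": "GB"
        }
    },
    "baseSalary": {
        "@type": "MonetaryAmount",
        "currency": "GBP",
        "value": {"@type": "QuantitativeValue", "value": 25000, "unitText": "YEAR"}
    }
}

# Emit the JSON-LD block ready to drop into the listing page's <head>.
print('<script type="application/ld+json">')
print(json.dumps(job_posting, indent=2))
print('</script>')
```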

Search is becoming more personal than ever

In the days since Google IO, some users have noticed a brand new ‘Personal’ tab, a feature that wasn’t mentioned at the conference. But what does the tab do that Google’s personalised organic results don’t do already?

Rather than using personal data to give you the most relevant organic results, the Personal tab shows you things from across your Google account (which you have to be logged into in order to use this new feature). You can see emails relating to the search term, calendar events and your Google Photos, giving you access to private files in one convenient results page.

For digital marketers, the opportunities offered by the new tab are currently limited, especially as it has only been rolled out to a small number of users so far. The main opportunity at the moment seems to be paid advertising; the Personal tab does display some ads at the bottom of the results, though it remains to be seen how these ads will perform compared to the other formats across the search engine.

It doesn’t look like there’s much that can be done organically to make use of the new tab, though that could change. If, for example, previously visited web pages started showing up in the tab, there could be an even greater benefit to create pages that users want to stay engaged with and keep coming back to.

It all comes back to machine learning

Almost everything that Google is doing in the foreseeable future revolves around AI and machine learning (ML). Most of the initiatives discussed at Google IO involve ML in some way, which is why the company has ploughed so much time and money into leading and defining the industry.

One of the most significant announcements at the conference was the new Tensor Processing Unit (TPU). TPUs are designed to provide the processing power needed to accelerate the learning of the neural networks that already power so much of what Google does, including search, Street View and Translate. The current model is up to 30x faster than the most advanced CPUs and GPUs.

Google needs all that processing power because it is doing everything it can to be the world leader in AI, an endeavour that is only possible with enormous amounts of processing power to train and run neural networks. One of the coolest pieces of information (in my opinion) to come out of Google IO was the little insight CEO Sundar Pichai gave into AutoML, Google’s ‘learning to learn’ system where neural nets are used to design better neural nets.

But Google’s AI development is not simply an intellectual curiosity. Digital marketers need to be aware of the capabilities of machine learning, because it is already having an impact on the way Google organises information for consumers and will continue to do so to an even greater extent in the future. For just one example of this, take a look at Dr Pete’s blog on RankBrain and topicality in organic search.

Real time information with Google Lens

A shining example of applied AI and one of the most talked-about announcements at Google IO, Google Lens is essentially augmented reality built into your phone, allowing information on places and things to be displayed and used just by taking photos of them.

Sundar Pichai gave three examples of how Lens could be used in his keynote speech at the conference:

  • To identify the name and key properties of a flower.
  • To access wifi just by taking a picture of the login details.
  • To see reviews, menus and table bookings for nearby restaurants.

Google Lens is a perfect example of ML’s applications and limitations. On the one hand, it has the potential to be a fantastic tool that organises information in a new and accessible way (exactly what Google tries to do as a company). On the other hand, it is limited by the data it gets fed, and requires a critical mass of users constantly feeding it data for it to be really accurate. By installing it first in Photos, Google will be hoping to use the photos that already exist to make Lens even better in real time.

Some uses of Lens are purely informational and of little value to marketers – there’s not much you can do with a wifi login, after all! Other applications, however, should make us take note. The information displayed when you point the camera at a restaurant comes from Google’s Knowledge Graph, which can surface all sorts of information about the business, including reviews and menus.

If you needed more incentive to gather reviews for your business or client, or to implement schema markup, this is it. We could soon live in a world where, if Google Lens can’t display reviews for your independent restaurant, customers will go to the competitor next door instead. For more information on how to implement schema, check out Darol’s post on the topic.
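As a companion to the jobs example above, here is a minimal sketch of the kind of Restaurant markup that makes reviews and menus machine-readable. Again, Python is used only to emit the JSON-LD, and the business name, rating and contact details are invented placeholders, not real data.

```python
import json

# A minimal sketch of schema.org Restaurant markup exposing a rating and menu URL.
# Every value below is a hypothetical placeholder for illustration only.
restaurant = {
    "@context": "https://schema.org",
    "@type": "Restaurant",
    "name": "Example Trattoria",
    "url": "https://www.example-trattoria.co.uk",
    "telephone": "+44-113-000-0000",
    "servesCuisine": "Italian",
    "hasMenu": "https://www.example-trattoria.co.uk/menu",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "2 Example Lane",
        "addressLocality": "Leeds",
        "postalCode": "LS1 2BB",
        "addressCountry": "GB"
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "128"
    }
}

# Emit the JSON-LD block for embedding in the restaurant's homepage.
print('<script type="application/ld+json">')
print(json.dumps(restaurant, indent=2))
print('</script>')
```

Once markup like this is embedded in the page, it can be checked with Google’s Structured Data Testing Tool before going live.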

Where next for Google?

It’s impossible to predict exactly what Google will be rolling out in a couple of years’ time, but two trends stand out from this year’s conference: increasing focus on consumer search experience and an emphasis on machine learning. The Personal tab and Google for Jobs feature highlight the search giant’s continuing desire to shape the way we consume information across all devices, while ML is being used to support countless initiatives in everything from DNA sequencing to local search.

It can be hard for us marketers to know what’s worth paying attention to and what we can safely ignore, but you can be sure that everything worth knowing about will be covered right here.