Does Googlebot Support HTTP/2? Challenging Google’s Indexing Claims – An Experiment

Originally published on: http://feedproxy.google.com/~r/seomoz/~3/wFRhyjJq67c/challenging-googlebot-experiment

Posted by goralewicz

I was recently challenged with a question from a client, Robert, who runs a small PR firm and needed to optimize a client’s website. His question inspired me to run a small experiment in HTTP protocols. So what was Robert’s question? He asked…

Can Googlebot crawl using HTTP/2 protocols?

You may be asking yourself, why should I care about Robert and his HTTP protocols?

As a refresher, HTTP protocols are the basic set of standards that allow the World Wide Web to exchange information. They are the reason a web browser can display data stored on another server. The first version was initiated back in 1989, which means that, just like everything else, HTTP protocols are getting outdated. HTTP/2 is the latest major version of the protocol, created to replace its aging predecessors.

So, back to our question: why do you, as an SEO, care to know more about HTTP protocols? The short answer is that none of your SEO efforts matter or can even be done without a basic understanding of HTTP protocol. Robert knew that if his site wasn’t indexing correctly, his client would miss out on valuable web traffic from searches.

The hype around HTTP/2

HTTP/1.1 is a 17-year-old protocol (HTTP 1.0 is 21 years old). Both HTTP 1.0 and 1.1 have limitations, mostly related to performance. When HTTP/1.1 was getting too slow and out of date, Google introduced SPDY in 2009, which was the basis for HTTP/2. Side note: Starting from Chrome 53, Google decided to stop supporting SPDY in favor of HTTP/2.

HTTP/2 was a long-awaited protocol. Its main goal is to improve a website’s performance. It’s currently used by 17% of websites (as of September 2017). Adoption rate is growing rapidly, as only 10% of websites were using HTTP/2 in January 2017. You can see the adoption rate charts here. HTTP/2 is getting more and more popular, and is widely supported by modern browsers (like Chrome or Firefox) and web servers (including Apache, Nginx, and IIS).

Its key advantages are:

- Multiplexing: The ability to send multiple requests over a single TCP connection.
- Server push: When a client requests a resource (let’s say, an HTML document), the server can push the associated CSS and JS files to the client’s cache. This reduces network latency and round trips.
- One connection per origin: With HTTP/2, only one connection is needed to load the website.
- Stream prioritization: Requests (streams) are assigned a priority from 1 to 256 to deliver higher-priority resources faster.
- Binary framing layer: HTTP/2 is easier to parse (for both the server and the client).
- Header compression: This feature reduces the plain-text overhead of HTTP/1.1 headers and improves performance.

For more information, I highly recommend reading “Introduction to HTTP/2” by Surma and Ilya Grigorik.

All these benefits suggest pushing for HTTP/2 support as soon as possible. However, my experience with technical SEO has taught me to double-check and experiment with solutions that might affect our SEO efforts.

So the question is: Does Googlebot support HTTP/2?

Google’s promises

HTTP/2 represents a promised land, the technical SEO oasis everyone was searching for. By now, many websites have already added HTTP/2 support, and developers don’t want to optimize for HTTP/1.1 anymore. Before I could answer Robert’s question, I needed to know whether or not Googlebot supported HTTP/2-only crawling.

I was not alone in my query. This is a topic which comes up often on Twitter, Google Hangouts, and other such forums. And like Robert, I had clients pressing me for answers. The experiment needed to happen. Below I’ll lay out exactly how we arrived at our answer, but here’s the spoiler: it doesn’t. Google doesn’t crawl using the HTTP/2 protocol. If your website uses HTTP/2, you need to make sure you continue to optimize the HTTP/1.1 version for crawling purposes.

The question

It all started with a Google Hangouts in November 2015.

When asked about HTTP/2 support, John Mueller mentioned that HTTP/2-only crawling should be ready by early 2016, and he also mentioned that HTTP/2 would make it easier for Googlebot to crawl pages by bundling requests (images, JS, and CSS could be downloaded with a single bundled request).

“At the moment, Google doesn’t support HTTP/2-only crawling (…) We are working on that, I suspect it will be ready by the end of this year (2015) or early next year (2016) (…) One of the big advantages of HTTP/2 is that you can bundle requests, so if you are looking at a page and it has a bunch of embedded images, CSS, JavaScript files, theoretically you can make one request for all of those files and get everything together. So that would make it a little bit easier to crawl pages while we are rendering them for example.”

Soon after, Twitter user Kai Spriestersbach also asked about HTTP/2 support:

His clients had started dropping HTTP/1.1 connection optimization, just like most developers deploying HTTP/2, which at the time was already supported by all major browsers.

After a few quiet months, Google Webmasters reignited the conversation, tweeting that Google won’t hold you back if you’re setting up for HTTP/2. At this time, however, we still had no definitive word on HTTP/2-only crawling. Just because it won’t hold you back doesn’t mean it can handle it — which is why I decided to test the hypothesis.

The experiment

For months as I was following this online debate, I still received questions from our clients who no longer wanted to spend money on HTTP/1.1 optimization. Thus, I decided to create a very simple (and bold) experiment.

I decided to disable HTTP/1.1 on my own website (https://goralewicz.com) and make it HTTP/2 only. I disabled HTTP/1.1 from March 7th until March 13th.

If you’re going to get bad news, at the very least it should come quickly. I didn’t have to wait long to see if my experiment “took.” Very shortly after disabling HTTP/1.1, I couldn’t fetch and render my website in Google Search Console; I was getting an error every time.

My website is fairly small, but I could clearly see that the crawling stats decreased after disabling HTTP/1.1. Google was no longer visiting my site.

While I could have kept going, I stopped the experiment after my website was partially de-indexed due to “Access Denied” errors.

The results

I didn’t need any more information; the proof was right there. Googlebot wasn’t supporting HTTP/2-only crawling. Should you choose to duplicate this experiment at home with your own site, you’ll be happy to know that my site recovered very quickly.

I finally had Robert’s answer, but felt others may benefit from it as well. A few weeks after finishing my experiment, I decided to ask John about HTTP/2 crawling on Twitter and see what he had to say.

(I love that he responds.)

Knowing the results of my experiment, I have to agree with John: disabling HTTP/1 was a bad idea. However, I was seeing other developers discontinuing optimization for HTTP/1, which is why I wanted to test HTTP/2 on its own.

For those looking to run their own experiment, there are two ways of negotiating a HTTP/2 connection:

1. Over HTTP (insecure) – Make an HTTP/1.1 request that includes an Upgrade header. This seems to be the method to which John Mueller was referring. However, it doesn’t apply to my website (because it’s served via HTTPS). What’s more, this is an old-fashioned way of negotiating, and it isn’t supported by modern browsers. Below is a screenshot from Caniuse.com:

2. Over HTTPS (secure) – Connection is negotiated via the ALPN protocol (HTTP/1.1 is not involved in this process). This method is preferred and widely supported by modern browsers and servers.
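If you want to see which of these two paths a given server actually takes, curl is a quick way to check, assuming your curl build includes HTTP/2 support (the hostname below is just a placeholder):

# Over HTTPS: ALPN decides the protocol; -w prints the version that was actually negotiated
curl -sI --http2 -o /dev/null -w '%{http_version}\n' https://example.com/

# Over plain HTTP: curl attempts the h2c Upgrade dance; the verbose output shows the Upgrade header it sends
curl -svI --http2 -o /dev/null http://example.com/ 2>&1 | grep -iE 'upgrade|HTTP/'

Neither command proves anything about how a particular crawler negotiates, of course; it only tells you what your server is willing to speak.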

A recent announcement: The saga continues

Googlebot doesn’t make HTTP/2 requests

Fortunately, Ilya Grigorik, a web performance engineer at Google, let everyone peek behind the curtains at how Googlebot is crawling websites and the technology behind it:

If that wasn’t enough, Googlebot doesn’t support the WebSocket protocol. That means your server can’t send resources to Googlebot before they are requested. Supporting it wouldn’t reduce network latency and round-trips; it would simply slow everything down. Modern browsers offer many ways of loading content, including WebRTC, WebSockets, loading local content from drive, etc. However, Googlebot supports only HTTP/FTP, with or without Transport Layer Security (TLS).

Googlebot supports SPDY

During my research and after John Mueller’s feedback, I decided to consult an HTTP/2 expert. I contacted Peter Nikolow of Mobilio and asked him to see if there was anything we could do to find the final answer regarding Googlebot’s HTTP/2 support. Not only did he provide us with help, Peter even created an experiment for us to use. Its results are pretty straightforward: Googlebot does support the SPDY protocol and Next Protocol Negotiation (NPN), and thus it can’t support HTTP/2.

Below is Peter’s response:

I performed an experiment that shows Googlebot uses SPDY protocol. Because it supports SPDY + NPN, it cannot support HTTP/2. There are many cons to continued support of SPDY:

- This protocol is vulnerable.
- Google Chrome no longer supports SPDY in favor of HTTP/2.
- Servers have been neglecting to support SPDY. Let’s take NGINX as an example: as of version 1.9.5, it no longer supports SPDY.
- Apache doesn’t support SPDY out of the box. You need to install mod_spdy, which is provided by Google.

To examine Googlebot and the protocols it uses, I took advantage of s_server, a tool that can debug TLS connections. I used Google Search Console Fetch and Render to send Googlebot to my website.

Here’s a screenshot from this tool showing that Googlebot is using Next Protocol Negotiation (and therefore SPDY):

I’ll briefly explain how you can perform your own test. The first thing you should know is that you can’t use scripting languages (like PHP or Python) for debugging TLS handshakes. The reason for that is simple: these languages see HTTP-level data only. Instead, you should use special tools for debugging TLS handshakes, such as s_server.

Type in the console:

sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -WWW -tlsextdebug -state -msg

sudo openssl s_server -key key.pem -cert cert.pem -accept 443 -www -tlsextdebug -state -msg

Please note the slight (but significant) difference between the “-WWW” and “-www” options in these commands. You can find more about their purpose in the s_server documentation.

Next, invite Googlebot to visit your site by entering the URL in Google Search Console Fetch and Render or in the Google mobile tester.

As I wrote above, there is no logical reason why Googlebot still supports SPDY. This protocol is vulnerable; no modern browser supports it. Additionally, servers (including NGINX) have been dropping support for it. It’s just a matter of time until Googlebot is able to crawl using HTTP/2. Just implement HTTP/1.1 + HTTP/2 support on your own server (your users will notice the faster loading) and wait until Google is able to send requests using HTTP/2.

Summary

In November 2015, John Mueller said he expected Googlebot to crawl websites by sending HTTP/2 requests starting in early 2016. We don’t know why, as of October 2017, that hasn’t happened yet.

What we do know is that Googlebot doesn’t support HTTP/2. It still crawls by sending HTTP/1.1 requests. Both this experiment and the “Rendering on Google Search” page confirm it. (If you’d like to know more about the technology behind Googlebot, then you should check out what they recently shared.)

For now, it seems we have to accept the status quo. Our recommendation to Robert (and to you readers as well) is to enable HTTP/2 on your websites for better performance, but to continue optimizing for HTTP/1.1. Your visitors will notice and thank you.
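If you want to confirm that your server really does answer on both protocols (so crawlers get a well-served HTTP/1.1 response while modern browsers negotiate HTTP/2), a quick spot check with curl looks like this. The hostname is a placeholder, and a reasonably recent curl build with HTTP/2 support is assumed:

# Force HTTP/1.1 and make sure the site still responds normally
curl -sI --http1.1 -o /dev/null -w 'negotiated HTTP version: %{http_version}\n' https://example.com/

# Let ALPN negotiate and confirm HTTP/2 is picked when it's allowed
curl -sI --http2 -o /dev/null -w 'negotiated HTTP version: %{http_version}\n' https://example.com/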


Sign up for The Moz Top 10, a semimonthly mailer updating you on the top ten hottest pieces of SEO news, tips, and rad links uncovered by the Moz team. Think of it as your exclusive digest of stuff you don’t have time to hunt down but want to read!


Google Shares Details About the Technology Behind Googlebot

Originally published on: http://feedproxy.google.com/~r/seomoz/~3/NZbOrrcEFg4/google-shares-details-googlebot

Posted by goralewicz

Crawling and indexing has been a hot topic over the last few years. As soon as Google launched Google Panda, people rushed to their server logs and crawling stats and began fixing their index bloat. All those problems didn’t exist in the “SEO = backlinks” era from a few years ago. With this exponential growth of technical SEO, we need to get more and more technical. That being said, we still don’t know how exactly Google crawls our websites. Many SEOs still can’t tell the difference between crawling and indexing.

The biggest problem, though, is that when we want to troubleshoot indexing problems, the only tools in our arsenal are Google Search Console and its Fetch and Render feature. Once your website includes more than HTML and CSS, there’s a lot of guesswork involved in how your content will be indexed by Google. This approach is risky and expensive, and it can fail multiple times. Even when you discover the pieces of your website that weren’t indexed properly, it’s extremely difficult to get to the bottom of the problem and find the fragments of code responsible for the indexing issues.

Fortunately, this is about to change. Recently, Ilya Grigorik from Google shared one of the most valuable insights into how crawlers work:

Interestingly, this tweet didn’t get nearly as much attention as I would expect.

So what does Ilya’s revelation in this tweet mean for SEOs?

Knowing that Chrome 41 is the technology behind the Web Rendering Service (WRS) is a game-changer. Before this announcement, our only option was to use Fetch and Render in Google Search Console to see our page as rendered by the WRS. Now we can troubleshoot technical problems that would otherwise have required experimenting and creating staging environments: all you need to do is download and install Chrome 41 to see how your website loads in that browser. That’s it.

You can check the features and capabilities that Chrome 41 supports by visiting Caniuse.com or Chromestatus.com (Googlebot should support similar features). These two websites make a developer’s life much easier.

Even though we don’t know exactly which version Ilya had in mind, we can find Chrome’s version used by the WRS by looking at the server logs. It’s Chrome 41.0.2272.118.
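If you’d like to verify that against your own logs, one rough way to do it is to pull the Chrome token out of Google’s user-agent strings. This sketch assumes an Nginx- or Apache-style combined access log at the path shown (adjust it for your setup) and that Google’s rendering fetches, such as the smartphone crawler or Fetch and Render, have actually hit your site:

# Count the Chrome versions announced in Google's user-agent strings
grep -i 'googlebot' /var/log/nginx/access.log | grep -oE 'Chrome/[0-9.]+' | sort | uniq -c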

It will be updated sometime in the future

Chrome 41 was created two years ago (in 2015), so it’s far removed from the current version of the browser. However, as Ilya Grigorik said, an update is coming:

I was lucky enough to get Ilya Grigorik to read this article before it was published, and he provided a ton of valuable feedback on this topic. He mentioned that they are hoping to have the WRS updated by 2018. Fingers crossed!

Google uses Chrome 41 for rendering. What does that mean?

We now have some interesting information about how Google renders websites. But what does that mean, practically, for site developers and their clients? Does this mean we can now ignore server-side rendering and deploy client-rendered, JavaScript-rich websites?

Not so fast. Here is what Ilya Grigorik had to say in response to this question:

We now know the WRS’ capabilities for rendering JavaScript and how to debug them, which lets us troubleshoot and better diagnose problems. However, remember that not all crawlers support JavaScript crawling. As of today, JavaScript crawling is only supported by Google and Ask (and Ask is most likely powered by Google). And even if you don’t care about social media or search engines other than Google, keep in mind that even with Chrome 41, not all JavaScript frameworks can be indexed by Google (read more about JavaScript frameworks crawling and indexing).

Don’t get your hopes up

All that said, there are a few reasons to keep your excitement at bay.

Remember that version 41 of Chrome is over two years old. It may not work very well with modern JavaScript frameworks. To test it yourself, open http://jsseo.expert/polymer/ using Chrome 41, and then open it in any up-to-date browser you are using.

The page in Chrome 41 looks like this:

The content parsed by Polymer is invisible (meaning it wasn’t processed correctly). This is also a perfect example for troubleshooting potential indexing issues. The problem you’re seeing above can be solved if diagnosed properly. Let me quote Ilya:

“If you look at the raised Javascript error under the hood, the test page is throwing an error due to unsupported (in M41) ES6 syntax. You can test this yourself in M41, or use the debug snippet we provided in the blog post to log the error into the DOM to see it.”

I believe this is another powerful tool for web developers willing to make their JavaScript websites indexable. We will definitely expand our experiment and work with Ilya’s feedback.

The Fetch and Render tool is the Chrome v. 41 preview

There’s another interesting thing about Chrome 41. Google Search Console’s Fetch and Render tool is simply the Chrome 41 preview. The right-hand view (“This is how a visitor to your website would have seen the page”) is generated by the Google Search Console bot, which is… Chrome 41.0.2272.118 (see screenshot below).


There’s evidence that both Googlebot and Google Search Console Bot render pages using Chrome 41. Still, we don’t exactly know what the differences between them are. One noticeable difference is that the Google Search Console bot doesn’t respect the robots.txt file. There may be more, but for the time being, we’re not able to point them out.

Chrome 41 vs Fetch as Google: A word of caution

Chrome 41 is a great tool for debugging Googlebot. However, sometimes (not often) there’s a situation in which Chrome 41 renders a page properly, but the screenshots from Google Fetch and Render suggest that Google can’t handle the page. It could be caused by CSS animations and transitions, Googlebot timeouts, or the usage of features that Googlebot doesn’t support. Let me show you an example.

Chrome 41 preview:

Image blurred for privacy

The above page has quite a lot of content and images, but it looks completely different in Google Search Console.

Google Search Console preview for the same URL:

As you can see, Google Search Console’s preview of this URL is completely different than what you saw on the previous screenshot (Chrome 41). All the content is gone and all we can see is the search bar.

From what we noticed, Google Search Console renders CSS a little bit differently than Chrome 41 does. This doesn’t happen often, but as with most tools, we need to double-check whenever possible.

This leads us to a question…

What features are supported by Googlebot and WRS?

According to the Rendering on Google Search guide:

- Googlebot doesn’t support IndexedDB, WebSQL, or WebGL.
- HTTP cookies, local storage, and session storage are cleared between page loads.
- All features requiring user permissions (like the Notifications API, clipboard, push, and device-info) are disabled.
- Google can’t index 3D and VR content.
- Googlebot only supports HTTP/1.1 crawling.

The last point is really interesting. Despite statements from Google over the last 2 years, Google still only crawls using HTTP/1.1.

No HTTP/2 support (still)

We’ve mostly been covering how Googlebot uses Chrome, but there’s another recent discovery to keep in mind.

There is still no support for HTTP/2 for Googlebot.

Since it’s now clear that Googlebot doesn’t support HTTP/2, if your website supports HTTP/2 you still can’t drop HTTP/1.1 optimization. Googlebot can crawl only using HTTP/1.1.

There were several announcements recently regarding Google’s HTTP/2 support. To read more about it, check out my HTTP/2 experiment here on the Moz Blog.

Via https://developers.google.com/search/docs/guides/r…

Googlebot’s future

Rumor has it that Chrome 59’s headless mode was created for Googlebot, or at least that it was discussed during the design process. It’s hard to say if any of this chatter is true, but if it is, it means that to some extent, Googlebot will “see” the website in the same way as regular Internet users.

This would definitely make everything simpler for developers who wouldn’t have to worry about Googlebot’s ability to crawl even the most complex websites.

Chrome 41 vs. Googlebot’s crawling efficiency

Chrome 41 is a powerful tool for debugging JavaScript crawling and indexing. However, it’s crucial not to jump on the hype train here and start launching websites that “pass the Chrome 41 test.”

Even if Googlebot can “see” our website, there are many other factors that will affect your site’s crawling efficiency. As an example, we already have proof showing that Googlebot can crawl and index JavaScript and many JavaScript frameworks. It doesn’t mean that JavaScript is great for SEO. I gathered significant evidence showing that JavaScript pages aren’t crawled even half as effectively as HTML-based pages.

In summary

Ilya Grigorik’s tweet sheds more light on how Google crawls pages and, thanks to that, we don’t have to build experiments for every feature we’re testing — we can use Chrome 41 for debugging instead. This simple step will definitely save a lot of websites from indexing problems, like when Hulu.com’s JavaScript SEO backfired.

It’s safe to assume that Chrome 41 will now be a part of every SEO’s toolset.




Writing Headlines that Serve SEO, Social Media, and Website Visitors All Together – Whiteboard Friday

Originally published on: http://feedproxy.google.com/~r/seomoz/~3/_6Iee7-NMlY/writing-headlines-seo-social-media

Posted by randfish

Have your headlines been doing some heavy lifting? If you’ve been using one headline to serve multiple audiences, you’re missing out on some key optimization opportunities. In today’s Whiteboard Friday, Rand gives you a process for writing headlines for SEO, for social media, and for your website visitors — each custom-tailored to its audience and optimized to meet different goals.

Writing headlines that serve SEO, Social Media, and Website Visitors

Click on the whiteboard image above to open a high-resolution version in a new tab!


Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about writing headlines. One of the big problems that headlines have is that they need to serve multiple audiences. So it’s not just ranking and search engines. Even if it was, the issue is that we need to do well on social media. We need to serve our website visitors well in order to rank in the search engines. So this gets very challenging.

I’ve tried to illustrate this with a Venn diagram here. So you can see, basically…

SEO

In the SEO world of headline writing, what I’m trying to do is rank well, earn high click-through rate, because I want a lot of those visitors to the search results to choose my result, not somebody else’s. I want low pogo-sticking. I don’t want anyone clicking the back button and choosing someone else’s result because I didn’t fulfill their needs. I need to earn links, and I’ve got to have engagement.

Social media

On the social media side, it’s pretty different actually. I’m trying to earn amplification, which can often mean the headline tells as much of the story as possible. Even if you don’t read the piece, you amplify it, you retweet it, and you re-share it. I’m looking for clicks, and I’m looking for comments and engagement on the post. I’m not necessarily too worried about that back button and the selection of another item. In fact, time on site might not even be a concern at all.

Website visitors

For website visitors, both of these are channels that drive traffic. But for the site itself, I’m trying to drive the right visitors, the ones who are going to be loyal, who are going to come back, and hopefully who are going to convert. I want to not confuse anyone. I want to deliver on my promise so that I don’t create a bad brand reputation and detract from people wanting to click on me in the future. For those of you who have visited a site like Forbes or maybe even a BuzzFeed, you have an association of, “Oh, man, this is going to be that clickbait stuff. I don’t want to click on their stuff. I’m going to choose somebody else in the results instead of this brand that I remember having a bad experience with.”

Notable conflicts

There are some notable direct conflicts in here.

- Keywords for SEO can be really boring on social media sites. When you keyword-stuff, or even just go keyword-heavy, your social performance tends to go terribly.
- Creating mystery on social (essentially not saying what the piece is truly about, just creating an inkling of what it might be about) harms the clarity that you need for search in order to rank well and drive those clicks from a search engine. It also generally hurts your ability to do keyword targeting.
- The need for engagement and brand reputation that you’ve got for your website visitors is really going to hurt you if you’re trying to develop those clickbait-style pieces that do so well on social.
- In search, ranking for low-relevance keywords is going to drive very unhappy visitors, because just happening to rank for something doesn’t necessarily mean you should rank for it; if you didn’t serve the visitor intent with the actual content, those people won’t care for it.

Getting to resolution

So how do we resolve this? Well, it’s not actually a terribly hard process. In 2017 and beyond, what’s nice is that search engines and social and visitors all have enough shared stuff that, most of the time, we can get to a good, happy resolution.

Step one: Determine who your primary audience is, your primary goals, and some prioritization of those channels.

You might say, “Hey, this piece is really targeted at search. If it does well on social, that’s fine, but this is going to be our primary traffic driver.” Or you might say, “This is really for internal website visitors who are browsing around our site. If it happens to drive some traffic from search or social, well that’s fine, but that’s not our intent.”

Step two: For non-conflict elements, optimize for the most demanding channel.

For those non-conflicting elements, so this could be the page title that you use for SEO, it doesn’t always have to perfectly match the headline. If it’s a not-even-close match, that’s a real problem, but an imperfect match can still be okay.

So what’s nice on social is that you have things like Twitter cards and Facebook’s Open Graph markup. That markup means you can have slightly different content there than what you might be using for your search snippet, your meta description. So you can separate those out, or choose to keep them distinct, and that can help you as well.

Step three: Author the straightforward headline first.

I’m going to ask you to author the most straightforward version of the headline first.

Step four: Now write the social-friendly/click-likely version without other considerations.

This is to write the opposite of that: the most social-friendly or click-likely/click-worthy version. It doesn’t necessarily have to worry about keywords, and it doesn’t have to worry about accuracy or telling the whole story; write it without any of those other considerations.

Step five: Merge 3 & 4, and add in critical keywords.

We’re going to take three and four and just merge them into something that will work for both, that compromises in the right way, compromises based on your primary audience, your primary goals, and then add in the critical keywords that you’re going to need.

Examples:

I’ve tried to illustrate this a bit with an example. Nest, which Google bought years ago, later became part of Alphabet, the corporation Google evolved into. So Nest is a separate company owned by Alphabet, Google’s parent company. Nest came out with this new alarm system. In fact, the day we’re filming this Whiteboard Friday, they came out with a new alarm system. So they’re no longer just a provider of thermostats inside of houses. They now have something else.

Step one: So if I’m a tech news site and I’m writing about this, I know that I’m trying to target gadget and news readers. My primary channel is going to be social first, but secondarily search engines. The goal that I’m trying to reach, that’s engagement followed by visits and then hopefully some newsletter sign-ups to my tech site.

Step two: My title and headline in this case probably need to match very closely. So the social callouts, the social cards and the Open Graph, that can be unique from the meta description if need be or from the search snippet if need be.

Step three: I’m going to do step three, author the straightforward headline. That for me is going to be “Nest Has a New Alarm System, Video Doorbell, and Outdoor Camera.” A little boring, probably not going to do tremendously well on social, but it probably would do decently well in search.

Step four: My social click-likely version is going to be something more like “Nest is No Longer Just a Thermostat. Their New Security System Will Blow You Away.” That’s not the best headline in the universe, but I’m not a great headline writer. However, you get the idea. This is the click-likely social version, the one that you see the headline and you go, “Ooh, they have a new security system. I wonder what’s involved in that.” You create some mystery. You don’t know that it includes a video doorbell, an outdoor camera, and an alarm. You just hear, “They’ve got a new security system. Well, I better look at it.”

Step five: Then I can try and compromise and say, “Hey, I know that I need to have video doorbell, camera, alarm, and Nest.” Those are my keywords. Those are the important ones. That’s what people are going to be searching for around this announcement, so I’ve got to have them in there. I want to have them close to the front. So “Nest’s New Alarm, Video Doorbell and Camera Are About to Be on Every Home’s Must-Have List.” All right, resolved in there.

So this process of writing headlines to serve these multiple different, sometimes competing priorities is totally possible with nearly everything you’re going to do in SEO and social and for your website visitors. This resolution process is something hopefully you can leverage to get better results.

All right, everyone, we’ll see you again next week for another edition of Whiteboard Friday. Take care.

Video transcription by Speechpad.com




3 Simple Ways to Make Your Blog Posts More Conversational

Originally published on: http://feedproxy.google.com/~r/ProbloggerHelpingBloggersEarnMoney/~3/FOwb-1nRBZs/


This post is by ProBlogger writing expert Ali Luke

You’ve probably heard that your blog posts need to be “conversational”.

You may also have been told why: to create a sense of connection with your reader, keep them engaged, and make your blog sound less like a lecture and more like a discussion.

That’s all true. But making your writing “conversational” can be tricky – especially if you come from a business or academic writing background.

If your blog posts tend to sound a little dry and stilted, here are three simple ways to change things.

#1: Talk Directly to Your Reader

Write your post as if you’re talking to a specific reader. Picturing an actual person may help – someone you know in real life, or who comments on your blog. You could even imagine you’re emailing them, or writing a Facebook post or comment.

And use words like “I” and “you”, even though you were probably taught not to at school or work. When you’re blogging it’s totally fine to write from your personal experience, and to invite the reader to step into your post.

Here’s an example from Jim Stewart’s post 9 Tips for Recovering Your Google Rankings After a Site Hack. (I’ve highlighted each use of “you” and “your”.)

If your WordPress site has been hacked, fear not. By following these tips you can fortify your site and kick wannabe hackers to the kerb.

And provided you act quickly, your WordPress site’s SEO traffic—and even its reputation—can recover within 24 hours.

This is clear, direct writing that speaks to the reader’s problem. And it’s easy to read and engage with: it’s almost like having Jim on the phone, talking you through fixing things.

Note: As Jim does here, always try to use the singular “you” rather than the plural “you”. Yes, you hopefully have more than one reader. But each one will experience your blog posts individually. Avoid writing things like “some of you” unless you’re deliberately trying to create a sense of a group environment (perhaps in an ecourse).

#2: Use an Informal Writing Style

All writing exists somewhere on a spectrum from very formal to very informal. Here are some examples:

Very formal: Users are not permitted to distribute, modify, resell, or duplicate any of the materials contained herein.

Formal: Your refund guarantee applies for 30 calendar days from the date of purchase. To request a refund, complete the form below, ensuring you include your customer reference number.

Neutral: Once you’ve signed up for the newsletter list, you’ll get a confirmation email. Open it up, click the link, and you’ll be all set to get the weekly emails.

Informal: Hi Susan, could you send me the link to that ProBlogger thingy you mentioned earlier? Ta!

Very informal: C U 2morrow!!!

With your blogging, it’s generally good to aim for an informal (or at least a neutral) register, as if you were emailing a friend. This makes you seem warm and approachable.

Typically, you’ll be using:

- Contractions (e.g. “you’ll” for “you will”)
- Straightforward language (“get” rather than “receive” or “obtain”)
- Chatty phrases (“you’ll be all set”)
- Possibly slang, if it fits with your personal style (“thingy”, “ta!”)
- Short sentences and paragraphs
- Some “ungrammatical” features where appropriate (e.g. starting a sentence with “And”)

You might want to take a closer look at some of the blogs you read yourself. How do they create a sense of rapport through their language? How could you rewrite part of their post to make it more or less formal? What words or phrases would you change?

#3: Give the Reader Space to Respond

Conversations are two-way, and that means letting your readers have a say too. If you’ve decided to close comments on your blog, you may want to consider opening up a different avenue for readers to get involved, such as a Facebook page or group.

When you’re writing your post, don’t feel you need to have the last word on everything. You don’t have to tie up every loose end. It’s fine to say you’re still thinking about a particular subject, or that you’re still learning. This gives your readers the opportunity to chime in with their own expertise or experiences.

Often, you can simply ask readers to add to your post. For instance, if you’ve written “10 Great Ways to Have More Fun With Your Blogging”, ask readers to contribute their own ideas in the comments. Some people won’t feel confident about commenting unless explicitly invited to do so, ideally with a suggestion of what they could add (e.g. “What would you add to this list?” or “Have you tried any of these ideas?”)

On a slightly selfish note, if you’re not sure about the value of comments, remember it’s not just about your readers getting more out of your blog. Some of my best blog post ideas have come from a reader’s suggestion or question in a comment. And many other comments have prompted me to think in a more nuanced way about a particular topic.

There’s no one “right” way to blog, and some blogs will inevitably be more conversational than others. If you’d like to make your own posts a bit more conversational, though, look for opportunities to:

- Use “you” and “I”. Talk directly to your reader, and share your own experiences where appropriate.
- Make your language fairly informal. Don’t worry about everything being “correct” – just let your voice and style shine through.
- Open up the conversation by inviting readers to comment, or encouraging them to pop over to your Facebook page (or join your Facebook group).

Have you tried making your blog more conversational? Or is it something you’re just getting started with? Either way, leave a comment below to share your experiences and tips.


The post 3 Simple Ways to Make Your Blog Posts More Conversational appeared first on ProBlogger.

      


6 SEO Tests You Need to Try

Originally published on: http://feedproxy.google.com/~r/WordStreamBlog/~3/1OF8qidvl-g/seo-tests

Nobody actually knows anything about SEO with 100% certainty.

There are ~200 ranking factors. We think. Give or take. Links, content, and RankBrain top the list. We infer.

seo ranking factors

(Image source)

But never, ever, ever, does Google come out and say, “Here’s exactly what you should do. Step 1, Step 2, Step 3.”

The deck is also stacked against most of us. The algorithm (we think) rewards well-known brands. Things like frequent brand mentions, “high-quality” backlinks, and amazing content.

New or underfunded? Good luck.

Hitting all 16 on-page optimization points you read about on some blog post won’t cut it. Those are table stakes.

Instead, if you want to grow site traffic in a serious way, you need to experiment, do what big brands won’t do, and discover what works best for you (instead of reading someone else’s best guess.)

Ready to get your hands dirty? Here are six SEO tests you need to try on your own site.

But first, let’s cover one thing…

How to Run SEO Tests Responsibly

Every marketer worth their salt knows about testing.

You test landing pages to see which one drives a higher conversion rate. You test offers to see which ones result in more leads. You test headlines to see which brings in more readership. You test your button colors. (Even though they don’t do anything in the long run.)

And yet, SEO?

“Meh. Just sprinkle some keywords on this page before it goes live, please.”

how to run seo tests

Obviously, that’s not ideal. It’s not even good. It’s mediocre at best.

We’re not in this for mediocre. Your competition isn’t mediocre. They’re spending 2x on this stuff. In response, you need to be running tests for SEO.

First, check out Rand Fishkin’s advice for running successful SEO tests. Then follow these simple steps:

- Experiment vs. control: You’d never, ever, ever change every single headline on all landing pages for paid campaigns. So don’t do it organically, either. You tweak a single element and run it against the control group to limit your risk.
- Segmentation: Similarly, you’d never throw up a new landing page for each paid campaign. Instead, you’d pick one keyword. One campaign. Or 10% of the traffic. Again, you’d use a much smaller-than-usual segment to control variations.
- Repeatability: Got some decent results? Good. Do it again. One-time blips won’t pay the bills.

Point is, proceed with caution on this stuff. You don’t want to do something you can’t undo. You don’t want to de-index your site if you don’t know exactly how to roll back those changes.

Now, roll up your sleeves and get ready to run some SEO experiments.

1. Remove bold tags

Keyword density used to be a thing. You wanted to place oh-so-many keywords into a piece to hit the 1-2% that guaranteed nirvana.

Keyword stuffing quickly became a thing, too. (In fact, it still works on YouTube.)

The theory is, if a little of something works, a lot of it will work even better.

Fast forward a few years, and we’re still going for the same tricks. For example, bolding keywords.

You know, try to work them into the H2 if you can. Then slap at least a few bolds on before they go out the door. This sounds silly. It can’t work… can it? Because it starts to look ridiculous, too.

Turns out, one SEO experiment showed that, unsurprisingly, being overzealous with <strong> tags can backfire.

Alistair Kavalt at Sycosure decided to put this to the test after reading about it from SEOPressor. Here’s what happened: On September 7, 2016, Alistair decided to add bold tags to primary keywords on the following page. Take a wild guess at what that keyword was:

bold tags for seo

(image source)

Looks like About.com, right?

He made these changes to just a single page on the site and used a combination of manual rank checks with SerpBook to see how results would fluctuate.

Alistair didn’t have to wait long. Only three days later, the page “dropped 53 positions,” virtually disappearing from the SERPs for “parasite SEO.”

why remove bold tags

(image source)

A few days later he’d had enough. He removed the bold tags and again waited to see what (if anything) would happen.

On September 17, one week after disappearing from the rankings and only a few days after removing the bold tags, the page shot back up to the first position for “parasite SEO.”

seo test removing bold tags

(image source)

The page eventually settled back down, but the implications were clear.

Bold tags do matter – but not quite in the way you’d think. If you’re still bolding keywords on a lot of your pages, try removing those bold tags and see what happens to your rankings.

2. Strip dates from URLs

Chances are, you created your blog years ago. Before you knew what you were doing – or maybe your business started it before you were even around.

The bummer is that decisions made back then can (and often do) come back to haunt you today.

Take permalinks. It’s normal to assign a custom permalink structure in WordPress when you’re first getting started.

For example, select one of the following, and you’re stuck with dates in your URLs for good:

dates in urls for seo

The reason? Changing permalink settings later would cause mass 404 errors to ripple throughout your site. It would be like SEO suicide unless you knew exactly what you were doing (and how to fix it).

So. What, exactly, is the optimal permalink structure? Are dates in the URL good or bad?

Harsh from ShoutMeLoud decided to find out. Initially, he claimed that “removing dates had a positive impact on the overall search engine ranking.”

Then he tested it.

The reasoning here comes back to content relevancy. Some of his old blog posts dated back to 2008. The content was evergreen. It was still legit. But anyone seeing that “2008” in the URL string would immediately question its validity.

So he experimented with both approaches: date and date-less (like me in high school) blog posts.

Adding dates, it turns out, only drove down his traffic:

seo test dates in urls

(image source)

Removing the dates caused rankings and traffic to come back up:

how do dates in url strings affect seo

(image source)

One potential hypothesis comes back to SERP click-through rates. Outdated content inevitably looks outdated.

If you see two equally compelling results, all other things being equal, you might skip over the old one in favor of the new:

seo tests

Removing dates from your post metadata is usually fairly easy. You might need some technical help, but it’s usually just removing a line of code from your site or theme.

When removing dates from your permalinks, proceed with caution. Make sure you know your way around redirects.
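For instance, if you 301 the old dated permalinks to their date-less equivalents, a quick spot check with curl (the URLs here are hypothetical) tells you whether the redirect is permanent and lands where you expect:

# Confirm the old dated URL returns a 301 (not a 302) and points at the new location
curl -sI https://example.com/2008/05/my-evergreen-post/ | grep -iE '^(HTTP|Location)'

# Follow the chain and make sure it ends with a 200 on the date-less URL
curl -sIL -o /dev/null -w '%{http_code} %{url_effective}\n' https://example.com/2008/05/my-evergreen-post/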

3. Optimize for dwell time

Originally introduced by Duane Forrester, previously of Bing, dwell time refers to the length of time a visitor spends on a page before heading back to the search engine that sent them there. (We all know it was Google. Sorry Bing.)

Ideally, the longer the dwell time, the better.

It makes perfect sense when you think about it.

SEO isn’t about rankings, keywords, etc., contrary to popular belief. It’s about answering search queries. It’s about being the best at giving people what they’re looking for.

Your goal is to match search intent.

Someone hitting your site and then the back button seconds later would result in a low dwell time. And it’s a bad sign that you haven’t been able to give them what they were looking for.

So it’s kinda like the Bounce Rate or Time on Page you’re already used to. But not really. A little more nuance is involved.

Dwell time is an important concept because it dictates how we should design pages and what should go on them. It’s why long-form posts tend to rank better than short-form ones – not because people like reading (they don’t), but because it helps keep people glued to your screen a little longer.

Yeeeears ago (one e for each year), Dan Shewan at WordStream pointed to two different examples that suggest Google can measure your dwell time.

The first was the option for visitors to block results from a specific domain:

google dwell time

And the second is the reverse: the ability to get more content from that source.

google author results

Since then, we’ve had several more studies come out confirming that dwell time does have some sort of impact on rankings.

Testing this one can be relatively easy. Start by improving the content. Take a post that ranks well, but not that well. Think, “top of the second page.”

dwell time and seo

Look for evidence that it’s not quite meeting the searcher’s intent, like high bounce and exit rates with low time on the page.

Then do nothing but improve content quality.

- Update the stats.
- Enhance readability and scannability.
- Add new sections.
- Upgrade the visuals.
- Insert a table of contents to help people jump around.
- Use audio or video to better summarize the page information.
- Include internal links for related articles to create ‘webs’ of content.

Now, monitor results.

4. Prune your site

Bigger is better. More pages = more traffic. Right?

Not exactly. Counterintuitively, less can bring in more, according to one SEO test featured on Moz.

Everett Sizemore makes a case for “pruning” your site by proactively removing stuff that doesn’t enhance quality. Brian Dean has suggested a similar pruning idea.

The theory goes that the less junk you have, the higher the overall quality signal of your site. This would work similarly to AdWords’ Quality Score, giving an indication of how well your ads and campaigns are aligning with users.

Everett uses QualityRank as a way to explain how this signal works. “Pruning” your low-quality, low-traffic pages increases your site’s overall average score.

For example, you cut the bottom 30-50% of your site’s low-quality content. Now, you’re only left with the middle and upper portions. Fewer pages overall, but killing the low-value pages drives the average page quality up with it.

site audit and pruning

(image source)

Here’s what that same example would look like after removing the bottom half of their less-than-useful content.

organic quality rank

You’ve instantly raised the overall “QualityRank” by ten points without doing anything else. The junk, therefore, was only holding the rest of your stuff down.

Sounds crazy, right?

Everett shares a few case studies and examples to prove it works.

First up, 1800doorbell used a combo of technical tweaks + pruning low-quality content to increase revenue from organic search by 96%.

how to prune old site content

(image source)

Ahrefs shared their own results of a similar test that showed a massive lift over the course of a year. Again, they placed a big emphasis on pruning (in addition to other technical improvements.)

site cleanup how-to

(image source)

Everett recommends running a content audit first to uncover your worst performing pages. For example, you can go through to find pages with:

- No organic search traffic
- Ranking > 50
- No backlinks
- No social shares

Removing the content entirely could create unintended consequences and broken links.

Instead, start by just adding a noindex tag on these low-quality pages. That way, they technically still exist on the site – but not from a search engine’s point of view.
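Once those noindex tags are in place, it’s worth spot-checking that the directive is actually being served. Here’s a minimal check with curl; the URL is hypothetical, and it assumes the directive lives either in a robots meta tag or in an X-Robots-Tag response header:

# Look for a robots meta tag in the returned HTML
curl -s https://example.com/low-value-page/ | grep -io '<meta[^>]*robots[^>]*>'

# Or check for an X-Robots-Tag response header
curl -sI https://example.com/low-value-page/ | grep -i 'x-robots-tag'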

5. Don’t sleep on nofollow links

You already know that links matter. You already know that link quality matters.

For example, links from the New York Times are worth more than ones from Bob’s SEO Fairy Dust Farm.

In the same vein, “followed” links are worth more than “nofollowed” ones. (More on the difference between these here.)

Picture blog comments. All of those people spamming their way to get links would be disappointed to find out that many commenting systems automatically “nofollow” their links.

It’s basically a way for sites to tell search engines not to attribute value to those links. (Because they’re leaving spammy comments. In 2017.)

The common thought is that “nofollowed” links are completely useless when it comes to SEO value. However, that might not always be the case. Those comment spammers might be onto something (God help us).

Rand and his rag-tag IMEC Lab group confirmed this suspicion in a series of tests.

Forty-two nofollow links were pointed towards a page ranking in the ninth position for a “low competition query.”

nofollow link seo test

(image source)

So what happened?

The page started climbing to the sixth position after those links were indexed.

seo value of nofollow links

(image source)

Removing the nofollow tags on the links helped the page rise to the fifth position.

This experiment suggests that at least for queries that aren’t super competitive, nofollow links aren’t as useless as previously thought. And it does give some credence to the idea that, “the more links, the better.”

6. Improve site loading times

Google released a mobile page speed industry benchmark report in February. Turns out, “The probability of someone bouncing from your site increases by 113 percent if it takes seven seconds to load.”

page loading times and seo

(image source)

People really, REALLY don’t like waiting around for pages to load. Especially on mobile devices.

The problem is that the same report found that most mobile pages take three times that long to load (22 seconds).

Slow page loading times have a trickle-down effect. The longer a page takes to load, the less traffic, the more bounces, and the fewer conversions you’ll see.

“Similarly, as the number of elements—text, titles, images—on a page goes from 400 to 6,000, the probability of conversion drops 95 percent.”

It’s not just the page loading time that drives worse performance. It’s something a little more geekily referred to as Time to First Byte (TTFB).

Billy Hoffman worked with Moz years ago to run an experiment. They collected 100,000 pages to evaluate, then used 40 different page loading metrics as the basis for their analysis.

They then recorded the median page loading time based on average search position to see if there was any correlation.

In other words, they expected to see better-ranking pages have a lower average page loading time. But that wasn’t always the case.

load time seo test

(image source)

Instead, they found a much stronger correlation between rankings and TTFB.

The higher a page ranked, the lower its TTFB.

time to first byte vs. google organic ranking

(image source)

Billy reasoned that “TTFB is likely the quickest and easiest metric for Google to capture,” which can help explain why it seems to be such an influential factor.

Ok, great. That would actually mean something if you knew what Time to First Byte was in the first place.

Basically, it’s the time it takes for search engines to “receive the first byte of data from the server.”

what is time to first byte

(image source)

Someone types in your web address and hits “Enter.” That request is sent to your server, asking it to return the appropriate data. Your server processes the request, gathers data from different places, and assembles it for transmission. Then it’s sent back to the original client or browser that requested it in the first place.

Now, multiply that sequence, by tens of thousands of visitors, spread across the planet, at all hours of the day. Every additional or redundant issue, like massive image files or poor code, can throw a wrench in this process. Even a bad WiFi connection can slow it down.

In other words, page loading isn’t the only issue to be aware of or test. Delivering relevant content, faster, is too.

That’s why Lazy Loading images or using a Content Delivery Network can help. They can both speed up page loading times, sure. But they do that through lessening the initial load that’s transmitted each time someone requests content from your site.
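If you want a rough read on where your own TTFB stands before and after changes like these, curl exposes its own timing variables. The URL is a placeholder, and a single run over one network connection is only a ballpark, so repeat it a few times:

# time_starttransfer is curl's time to first byte; time_total is the whole transfer
curl -s -o /dev/null -w 'TTFB: %{time_starttransfer}s  total: %{time_total}s\n' https://example.com/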

Keep on testin’

PPC is fairly transparent in comparison to SEO. You know how much keywords cost. Many tools can show you what your competitors are paying or what their ad text and landing pages look like.

However, in SEO, there’s no shortage of myths and urban legends out there.

We can look for best practices to help guide the way. But at the end of the day, we have to test and run experiments for ourselves to really know for sure.

The good news is that you don’t have to start in the dark. You can begin with these six SEO tests that have already worked for others to start finding out what works, and what doesn’t, for you.


Listen to MozPod, the Free SEO Podcast from Moz

Originally published on: http://feedproxy.google.com/~r/seomoz/~3/s3aNXUkaasM/mozpod

Posted by BrianChilds

We’re marketers. We know from firsthand experience that there aren’t enough hours in the day to do everything that needs to get done. And that’s even more true once you commit to leveling up and learning new skills.

The learning curve for developing digital marketing skills can be steep, and staying informed as things evolve and change (thanks, Google) can feel like a full-time job. Our Moz Training has classes to help accelerate the learning process, but as startup folks ourselves, we understand the importance of multitasking.

Learn SEO on the go

We’re thrilled to introduce MozPod, an SEO podcast focused on sharing lessons from digital marketing experts. Episodes are led by instructors from Moz Academy and we discuss a wide variety of digital marketing concepts, from common terminology to recent changes and best practices.

Check it out on iTunes

Where can I listen in?

- iTunes
- The MozPod homepage

Upcoming episodes

Our first series covers conversion rate optimization, PageRank, and link building:

Ep. 1: The Science of Crawling and Indexing
Guest: Neil Martinsen-Burrell of Moz

Dr. Neil Martinsen-Burrell shares his perspective as a statistician on the development of Page Authority and Domain Authority. Great data and interesting stats.

Ep. 2: What’s a Good Conversion Rate?
Guest: Carl Schmidt of Unbounce

Carl discusses the Unbounce Conversion Rate Benchmark Report and what SEOs can learn from an analysis of over 74 million landing page visitors. Great for content writers.

Ep. 3: Link Building Fundamentals
Guest: The PageOnePower team

MozPod interviews PageOnePower about how search engines place value on links. Collin, Cody, and Nicholas share the personal wisdom they’ve gained from working at a link building company.

Want to be a guest on MozPod?

If you’d like to share your recent SEO analysis or have a topic you think MozPod listeners would find valuable, please send us your ideas! MozPod is a place for our community of SEOs and digital marketers to learn. We’d love to hear from you.

Simply fill out this form to share your idea: Be on MozPod

Give it a listen and let us know what topics you’d like to hear about in the comments!

Listen to MozPod on iTunes




How to Turn Low-Value Content Into Neatly Organized Opportunities – Next Level

Originally published on: http://feedproxy.google.com/~r/seomoz/~3/VLnlsWKZTKw/low-value-content-next-level

Posted by jocameron

Welcome to the newest installment of our educational Next Level series! In our last post, Brian Childs offered up a beginner-level workflow to help discover your competitor’s backlinks. Today, we’re welcoming back Next Level veteran Jo Cameron to show you how to find low-quality pages on your site and decide their new fate. Read on and level up!

With an almost endless succession of Google updates shaking up the search results, it’s pretty clear that substandard content just won’t cut it.

I know, I know — we can’t all keep up with the latest algorithm updates. We’ve got businesses to run, clients to impress, and a strong social media presence to maintain. After all, you haven’t seen a huge drop in your traffic. It’s probably OK, right?

So what’s with the nagging sensation down in the pit of your stomach? It’s not just that giant chili taco you had earlier. Maybe it’s that feeling that your content might be treading on thin ice. Maybe you watched Rand’s recent Whiteboard Friday (How to Determine if a Page is “Low Quality” in Google’s Eyes) and just don’t know where to start.

In this edition of Next Level, I’ll show you how to start identifying your low-quality pages in a few simple steps with Moz Pro’s Site Crawl. Once identified, you can decide whether to merge, shine up, or remove the content.

A quick recap of algorithm updates

The latest big fluctuations in the search results were said to be caused by King Fred: enemy of low-quality pages and champion of the people’s right to find and enjoy content of value.

Fred took the fight to affiliate sites, and low-value commercial sites were also affected.

The good news is that even if this isn’t directed at you, and you haven’t taken a hit yourself, you can still learn from this update to improve your site. After all, why not stay on the right side of the biggest index of online content in the known universe? You’ll come away with a good idea of what content is working for your site, and you may just take a ride to the top of the SERPs. Knowledge is power, after all.

Be a Pro

It’s best if we just accept that Google updates are ongoing; they happen all.the.time. But with a site audit tool in your toolkit like Moz Pro’s Site Crawl, they don’t have to keep you up at night. Our shiny new Rogerbot crawler is the new kid on the block, and it’s hungry to crawl your pages.

If you haven’t given it a try, sign up for a free trial for 30 days:

Start a free trial

If you’ve already had a free trial that has expired, write to me and I’ll give you another, just because I can.

Set up your Moz Pro campaign — it takes 5 minutes tops — and Rogerbot will be unleashed upon your site like a caffeinated spider.

Rogerbot hops from page to page, following links to analyze your website. As it hops along, it builds a beautiful database of pages and flags issues you can use to find those laggards. What a hero!

First stop: Thin content (Site Crawl > Content Issues > Thin Content)

Thin content could be damaging your site. If it’s deemed to be malicious, then it could result in a penalty. Things like zero-value pages with ads or spammy doorway pages — little traps people set to funnel people to other pages — are bad news.

First off, let’s find those pages. Moz Pro Site Crawl will flag a page as “thin content” if it has fewer than 50 words (excluding navigation and ads).

Now is a good time to familiarize yourself with Google’s Quality Guidelines. Think long and hard about whether you may be doing this, intentionally or accidentally.

You’re probably not straight-up spamming people, but you could do better and you know it. Our mantra is (repeat after me): “Does this add value for my visitors?” Well, does it?

Ok, you can stop chanting now.

For most of us, thin content is less of a penalty threat and more of an opportunity. By finding pages with thin content, you have the opportunity to figure out if they’re doing enough to serve your visitors. Pile on some Google Analytics data and start making decisions about improvements that can be made.

Using moz.com as an example, I’ve found 3 pages with thin content. Ta-da!

I’m not too concerned about the login page or the password reset page. I am, however, interested to see how the local search page is performing. Maybe we can find an opportunity to help people who land on this page.

Go ahead and export your thin content pages from Moz Pro to CSV.

We can then grab some data from Google Analytics to give us an idea of how well this page is performing. You may want to look at comparing monthly data and see if there are any trends, or compare similar pages to see if improvements can be made.

I am by no means a Google Analytics expert, but I know how to get what I want. Most of the time that is, except when I have to Google it, which is probably every second week.

Firstly: Behavior > Site Content > All Pages > Paste in your URL

Pageviews – The number of times that page has been viewed, even if it’s a repeat view.
Avg. Time on Page – How long people are on your page.
Bounce Rate – Single-page views with no interaction.

For my example page, Bounce Rate is very interesting. This page lives to be interacted with. Its only joy in life is allowing people to search for a local business in the UK, US, or Canada. It is not an informational page at all. It doesn’t provide a contact phone number or an answer to a query that may explain away a high bounce rate.

I’m going to add Pageviews and Bounce Rate to a spreadsheet so I can track this over time.

I’ll also add some keywords that I want that page to rank for to my Moz Pro Rankings. That way I can make sure I’m targeting searcher intent and driving organic traffic that is likely to convert.

I’ll also know if I’m being outranked by my competitors. How dare they, right?

As we’ve found with this local page, not all thin content is bad content. Another example may be if you have a landing page with an awesome video that’s adding value and is performing consistently well. In this case, hold off on making sweeping changes. Track the data you’re interested in; from there, you can look at making small changes and track the impact, or split test some ideas. Either way, you want to make informed, data-driven decisions.

Action to take for tracking thin content pages

Export to CSV so you can track how these pages are performing alongside GA data. Make incremental changes and track the results.

Second stop: Duplicate title tags (Site Crawl > Content Issues > Duplicate Title Tags)

Title tags show up in the search results to give human searchers a taste of what your content is about. They also help search engines understand and categorize your content. Without question, you want these to be well considered, relevant to your content, and unique.
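If you’re newer to the markup side of things, the title tag is just the <title> element in the <head> of a page. Here’s a minimal, made-up example (the wording and shop name are purely hypothetical):

<!-- Inside the page's <head>: each page should get its own descriptive, unique title -->
<title>Red Waterproof Jackets for Spring Hiking | Example Outdoor Shop</title>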

Moz Pro Site Crawl flags any pages with matching title tags for your perusal.

Duplicate title tags are unlikely to get your site penalized, unless you’ve masterminded an army of pages that target irrelevant keywords and provide zero value. Once again, for most of us, it’s a good way to find a missed opportunity.

Digging around your duplicate title tags is a lucky dip of wonder. You may find pages with repeated content that you want to merge, or redundant pages that may be confusing your visitors, or maybe just pages for which you haven’t spent the time crafting unique title tags.

Take this opportunity to review your title tags, make them interesting, and always make them relevant. Because I’m a Whiteboard Friday friend, I can’t not link to this title tag hack video. Turn off Netflix for 10 minutes and enjoy.

Pro tip: To view the other duplicate pages, make sure you click on the little triangle icon to open that up like an accordion.

Hey now, what’s this? Filed away under duplicate title tags I’ve found these cheeky pages.

These are the contact forms we have in place to contact our help team. Yes, me included — hi!

I’ve got some inside info for you all. We’re actually in the process of redesigning our Help Hub, and these tool-specific pages definitely need a rethink. For now, I’m going to summon the powerful and mysterious rel=canonical tag.

This tells search engines that all those other pages are copies of the one true page to rule them all. Search engines like this, they understand it, and they bow down to honor the original source, as well they should. Visitors can still access these pages, and they won’t ever know they’ve hit a page with an original source elsewhere. How very magical.
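If you haven’t written one before, the tag itself is a single line in the <head> of each duplicate page, pointing at the page you want treated as the original. A minimal sketch for the contact-form example, using the URL from the action items below:

<!-- In the <head> of each tool-specific contact page -->
<!-- Tells search engines that the main contact page is the original source -->
<link rel="canonical" href="https://moz.com/help/contact" />

The duplicate pages stay live for visitors; the tag only changes how search engines consolidate them.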

Action to take for duplicate title tags on similar pages

Use the rel=canonical tag to tell search engines that https://moz.com/help/contact is the original source.

Review visitor behavior and perform user testing on the Help Hub. We’ll use this information to make a plan for redirecting those pages to one main page and adding a tool type drop-down.

More duplicate titles within my subfolder-specific campaign

Because at Moz we’ve got a heck of a lot of pages, I’ve got another Moz Pro campaign set up to track the URL moz.com/blog. I find this handy if I want to look at issues on just one section of my site at a time.

You just have to enter your subfolder and limit your campaign when you set it up.

Just remember we won’t crawl any pages outside of the subfolder. Make sure you have an all-encompassing, all-access campaign set up for the root domain as well.

Not enough allowance to create a subfolder-specific campaign? You can filter by URL from within your existing campaign.

In my Moz Blog campaign, I stumbled across these little fellows:

https://moz.com/blog/whiteboard-friday-how-to-get-an-seo-job

https://moz.com/blog/whiteboard-friday-how-to-get-an-seo-job-10504

This is a classic case of new content usurping the old content. Instead of telling search engines, “Yeah, so I’ve got a few pages and they’re kind of the same, but this one is the one true page,” like we did with the rel=canonical tag before, this time I’ll use the big cousin of the rel=canonical, the queen of content canonicalization, the 301 redirect.

All the link equity is passed along to the page you’re redirecting to, as are all the actual human visitors.

Action to take for duplicate title tags with outdated/updated content

Check the traffic and authority for both pages, then add a 301 redirect from one to the other. Consolidate and rule.

It’s also a good opportunity to refresh the content and check whether it’s… what? I can’t hear you — adding value to my visitors! You got it.

Third stop: Duplicate content (Site Crawl > Content Issues > Duplicate Content)

When the code and content on a page look the same as the code and content on another page of your site, it will be flagged as “Duplicate Content.” Our crawler will flag any pages with 90% or more overlapping content or code as having duplicate content.

Officially, in the wise words of Google, duplicate content doesn’t incur a penalty. However, it can be filtered out of the index, so still not great.

Having said that, the trick is in the fine print. One bot’s duplicate content is another bot’s thin content, and thin content can get you penalized. Let me refer you back to our old friend, the Quality Guidelines.

Are you doing one of these things intentionally or accidentally? Do you want me to make you chant again?

If you’re being hounded by duplicate content issues and don’t know where to start, then we’ve got more information on duplicate content on our Learning Center.

I’ve found some pages that clearly have different content on them, so why are these duplicate?


So friends, what we have here is thin content that’s being flagged as duplicate.

There is basically not enough content on these pages for bots to distinguish them from one another. Remember that our crawler looks at all the page code, as well as the copy that humans see.

You may find this frustrating at first: “Like, why are they duplicates?? They’re different, gosh darn it!” But once you pass through all seven stages of duplicate content and arrive at acceptance, you’ll see the opportunity you have here. Why not pop those topics on your content schedule? Why not use the “queen” again, and 301 redirect them to a similar resource, combining the power of both resources? Or maybe, just maybe, you could use them in a blog post about duplicate content — just like I have.

Action to take for duplicate pages with different content

Before you make any hasty decisions, check the traffic to these pages. Maybe dig a bit deeper and track conversions and bounce rate, as well. Check out our workflow for thin content earlier in this post and do the same for these pages.

From there you can figure out if you want to rework content to add value or redirect pages to another resource.

This awesome video from the ever-impressive Whiteboard Friday series talks about republishing. Seriously, you’ll kick yourself if you don’t watch it.

Broken URLs and duplicate content

Another dive into Duplicate Content has turned up two Help Hub URLs that point to the same page.

These are no good to man or beast. They are especially no good for our analytics — blurgh, data confusion! No good for our crawl budget — blurgh, extra useless page! User experience? Blurgh, nope, no good for that either.

Action to take for messed-up URLs causing duplicate content

Zap this time-waster with a 301 redirect. For me this is an easy decision: add a 301 to the long, messed up URL with a PA of 1, no discussion. I love our new Learning Center so much that I’m going to link to it again so you can learn more about redirection and build your SEO knowledge.

It’s the most handy place to check if you get stuck with any of the concepts I’ve talked about today.

Wrapping up

While it may feel scary at first to have your content flagged as having issues, the real takeaway here is that these are actually neatly organized opportunities.

With a bit of tenacity and some extra data from Google Analytics, you can start to understand the best way to fix your content and make your site easier to use (and more powerful in the process).

If you get stuck, just remember our chant: “Does this add value for my visitors?” Your content has to be for your human visitors, so think about them and their journey. And most importantly: be good to yourself and use a tool like Moz Pro that compiles potential issues into an easily digestible catalogue.

Enjoy your chili taco and your good night’s sleep!




Yes, Competitors Can Edit Your Listing on Google My Business

Originally published on: http://feedproxy.google.com/~r/seomoz/~3/wAkEj6UPCu4/competitors-edit-listing-google-my-business

Posted by JoyHawkins

I decided to write this article in response to a recent article that was published over at CBSDFW. The article was one of many stories about how spammers update legitimate information on Google as a way to send more leads somewhere else. This might shock some readers, but it was old news to me since spam of this nature on Google Maps has been a problem for almost a decade.

What sparked my interest in this article was Google’s response. Google stated:

Merchants who manage their business listing info through Google My Business (which is free to use), are notified via email when edits are suggested. Spammers and others with negative intent are a problem for consumers, businesses, and technology companies that provide local business information. We use automated systems to detect for spam and fraud, but we tend not to share details behind our processes so as not to tip off spammers or others with bad intent.

Someone might read that and feel safe, believing that they have nothing to worry about. However, some of us who have been in this space for a long time know that there are several incorrect and misleading statements in that paragraph. I’m going to point them out below.

“Merchants are notified by email”

Google just started notifying users by email last month. Their statement makes it sound like this has been going on for ages. Before September 2017, there were no emails going to people about edits made to their listings.

Not everyone gets an email about edits that have been made. To test this, I had several people submit an update to a listing I own to change the phone number. When the edit went live, the Google account that was the primary owner on the listing got an email; the Google account that was a manager on the listing did not.

Similarly, I am a manager on over 50 listings and 7 of them currently show as having updates in the Google My Business dashboard. I haven’t received a single email since they launched this feature a month ago.

“Notified […] when edits are suggested”

Merchants are not notified when edits are “suggested.” Any time I’ve ever heard of an email notification in the last month, it went out after the edit was already live.

Here’s a recent case on the Google My Business forum. This business owner got an email when his name was updated because the edit was already live. He currently has a pending edit on his listing to change the hours of operation. Clearly this guy is on top of things, so why hasn’t he denied it? Because he wouldn’t even know about it since it’s pending.

The edit isn’t live yet, so he’s not receiving a notification — either by email or inside the Google My Business dashboard.

Edits show up in the Google My Business dashboard as “Updates from Google.” Many people think that if they don’t “accept” these edits in the Google My Business dashboard, the edits won’t go live. The reality is that by “accepting” them, you’re just confirming something that’s already live on Google. If you “don’t accept,” you actually need to edit the listing to revert it back (there is no “deny” button).

Here’s another current example of a listing I manage inside Google My Business. The dashboard doesn’t show any updates to the website field, yet there’s a pending edit that I can see on the Google Maps app. A user has suggested that the proper website is a different page on the website than what I currently have. The only way to see all types of pending edits is via Check the Facts on Google Maps. No business owner I’ve ever spoken to has any clue what this is, so I think it’s safe to say they wouldn’t be checking there.

Here’s how I would edit that original response from Google to make it more factually correct:

Merchants who manage their business listing info through Google My Business (which is free to use) are notified when edits made by others are published on Google. Sometimes they are notified by email and the updates are also shown inside the Google My Business dashboard. Google allows users (other than the business owner) to make edits to listings on Google, but the edits are reviewed by either automated systems or, in some cases, actual human beings. Although the system isn’t perfect, Google is continually making efforts to keep the map free from spam and malicious editing.

Do you manage listings that have been edited by competitors? What’s your experience been? Share your story in the comments below!




Getting SEO Value from rel="nofollow" Links – Whiteboard Friday

Originally published on: http://feedproxy.google.com/~r/seomoz/~3/tdZL1mLwg08/seo-value-nofollow-links

Posted by randfish

Plenty of websites that make it easy for you to contribute don’t make it easy to earn a followed link from those contributions. While rel=nofollow links reign in the land of social media profiles, comments, and publishers, there are a few ways around it. In today’s Whiteboard Friday, Rand shares five tactics to help you earn equity-passing followed links using traditionally nofollow-only platforms.

How to get SEO value from rel="nofollow" links

Click on the whiteboard image above to open a high-resolution version in a new tab!

Video Transcription

Howdy, Moz fans, and welcome to another edition of Whiteboard Friday. This week we’re going to chat about how you can get SEO value from nofollowed links. So in the SEO world, there are followed links. These are the normal ones that you find on almost every website. But then you can have nofollowed links, which you’ll see in the HTML code of a website. You will see the normal thing is a href=somewebsite in here. If you see this rel=nofollow, that means that the search engines — Google, Bing, Yahoo, etc. — will not count this link as passing link equity, at least certainly not in the same way that a followed link would.
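To make that markup concrete, here’s roughly what the two flavors look like in a page’s HTML (the URL is just a placeholder):

<!-- A normal, followed link: passes link equity -->
<a href="https://somewebsite.com/">Check out this site</a>

<!-- A nofollowed link: search engines see it but don't pass equity the same way -->
<a href="https://somewebsite.com/" rel="nofollow">Check out this site</a>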

You can spot these by looking in the source code yourself, or you can turn on the MozBar and use the “Show nofollow links” option on the Page button to see them.

What sort of links use rel=nofollow?

But the basic story is that you’re not getting the same SEO value from them. But there are ways to get it. Recently you might have seen in the SEO news world that Inc. and Forbes and a few other sites like them, last year it was Huffington Post, started applying nofollow tags to all the links that belong to articles from contributors. So if I go and write an article for Inc. today, the links that I point out from my bio and my snippet on there, they’re not going to pass any value, because they have this nofollow applied.

A) Social media links (Facebook, Twitter, LinkedIn, etc.)

There are a bunch of types of links that use this. Social media, so Facebook, Twitter, and LinkedIn, which is one of the reasons why you can’t just boost your link profile by going to these places and leaving a bunch of links around.

B) Comments (news articles, blogs, forums, etc.)

Comments, so from news articles or blogs or forums where there’s discussion, or Q&A sites: those comments, and all the links in them that you leave, are again nofollowed.

C) Open submission content (Quora, Reddit, YouTube, etc.)

Open submission content, so places like Quora where you could write a post, or Reddit, where you could write a post, or YouTube where you could upload a video and have a post and have a link, most of those, in fact almost all of them now have nofollows as do the profile links that are associated. Your Instagram account, for example, that would be a social media one. But it’s not just the pictures you post on Instagram. Your profile link is one of the only places in the Instagram platform where you actually get a real URL that you can send people to, but that is nofollowed on the web.

D) Some publishers with less stringent review systems (Forbes, Buzzfeed, LinkedIn Pulse, etc.)

Some publishers now with these less stringent publishing review systems, so places like Inc., Forbes, BuzzFeed in some cases with their sponsored posts, Huffington Post, LinkedIn’s Pulse platform, and a bunch of others all use this rel=nofollow.

Basic evaluation formula for earning followed links from the above sources


The basic formula that we need to go to here is: How do you contribute to all of these places in ways that will ultimately result in followed links and that will provide you with SEO value? So we’re essentially saying I’m going to do X. I know that’s going to bring a nofollowed link, but that nofollowed link will result in this other thing happening that will then lead to a followed link.

Do X → Get rel=nofollow link → Results in Y → Leads to followed link

5 examples/tactics to start

This other thing happening can be a bunch of different things. It could be something indirect. You post something with your site on one of these places. It includes a nofollow link. Someone finds it. We’ll just call this guy over here, this is our friendly editor who works for a publication and finds it and says, “Hmm, that link was actually quite useful,” or the information it pointed to was useful, the article was useful, your new company seems useful, whatever it is. Later, as that editor is writing, they will link over to your site, and this will be a followed link. Thus, you’re getting the SEO value. You’ve indirectly gained SEO value essentially through amplification of what you were sharing through your link.

Google likes this. They want you to use all of these places to show stuff, and then they’re hoping that if people find it truly valuable, they’ll pick it up, they’ll link to it, and then Google can reward that.

So some examples of places where you might attempt this in the early stages. These are a very small subset of what you could do, and it’s going to be different for every industry and every endeavor.

1. Quora contributions

But Quora contributions, especially those if you have relevant or high value credentials or very unique, specific experiences, that will often get picked up by the online press. There are lots of editors and journalists and publications of all kinds that rely on interesting answers to Quora questions to use in their journalism, and then they’ll cite you as a source, or they’ll ask you to contribute, they’ll ask you for a quote, they’ll point to your website, all that kind of stuff.

2. Early comments on low-popularity blogs

Early comments especially in, I know this is going to sound odd, but low-popularity blogs, rather than high-popularity ones. Why low popularity? Because you will stand out. You’re less likely to be seen as a spammer, especially if you’re an authentic contributor. You don’t get lost in the noise. You can create intrigue, give value, and that will often lead to that writer or that blogger picking you up with followed links in subsequent posts. If you want more on this tactic, by the way, check out our Whiteboard Friday on comment marketing from last year. That was a deep dive into this topic.

3. Following and engaging with link targets on Twitter

Number three, following and engaging with your link targets on Twitter, especially if your link targets are heavily invested in Twitter, like journalists, B2B bloggers and contributors, and authors or people who write for lots of different publications. It doesn’t have to be a published author. It can just be a writer who writes for lots of online pieces. Then sharing your related content with them or just via your Twitter account, if you’re engaging with them a lot, chances are good you can get a follow back, and that will lead to a lot of followed up links with a citation.

4. Link citations from Instagram images

Instagram accounts. When you post images on Instagram, if you use the hashtags — hashtag marketing is kind of one of the only ways to get exposure on Instagram — but if you use hashtags that you know journalists, writers, editors, and publications of any kind in your field are picking up and need, especially travel, activities, current events, stuff that’s in the news, or conferences and events, many times folks will pick up those images and ask you for permission to use them. If you’re willing to give it, you can earn link citations. Another important reason to associate that URL with your site so that people can get in touch with you.

5. Amplify content published on your site by republishing on other platforms

If you’re using some of these platforms that are completely nofollow or platforms that are open contribution and have follow links, but where we suspect Google probably doesn’t count them, Medium being one of the biggest places, you can use republishing tactics. So essentially you’re writing on your own website first. Writing on your own website first, but then you are republishing on some of these other places.

I’m going to go to Forbes. I’m going to publish my column on Forbes. I’m going to go to Medium. I’m going to publish in my Medium account. I’m going to contribute to Huffington Post with the same piece. I’m republishing across these multiple platforms, and essentially you can think of this as it’s not duplicate content. You’re not hurting yourself, because these places are all pointing back to your original. It’s technically duplicate content, but not the kind that’s going to be bothersome for search engines.

You’re essentially using these the same way you would use your Twitter or Facebook or LinkedIn, where you are pushing it out as a way to say, “Here, check this out if you’re on these platforms, and here’s the original back here.” You can do that with the full article, just like you would do full content in RSS or full content for email subscribers. Then use those platforms for sharing and amplification to get into the hands of people who might link later.

So nofollowed links, not a direct impact, but potentially a very powerful, indirect way to get lots of good links and lots of good SEO value.

All right, everyone, hope you’ve enjoyed this edition of Whiteboard Friday, and we’ll see you again next week. Take care.

Video transcription by Speechpad.com




The Beginner’s Guide to Duplicate Content

Originally published on: http://feedproxy.google.com/~r/WordStreamBlog/~3/hn0TfUNQTqo/duplicate-content

One of the most frequent challenges I come across as a digital marketer is clients who can’t seem to get a good grasp of what duplicate content really is, how to avoid it, and why it matters to them.

In this article, I’m going to dispel a few myths about duplicate content and SEO that are still lingering in a post-Panda world, and give a few tips on how to stay on the right side of Google’s guidelines so that search engines and users love your content.


What is duplicate content?

From the horse’s mouth, a Google Search Console Help Centre article states:

“…substantive blocks of content within or across domains that either completely match other content or are appreciably similar.”

Which doesn’t seem so difficult, but what we need to know is how this affects your website.

Some examples of duplicate content include:

Ecommerce product descriptions. Specifically, generic descriptions provided by a supplier and used across multiple sales outlets. For example, this section on the Nespresso website about a coffee machine…


…has been repeated word for word on Amazon India to sell the same product:


Use of the same page in multiple areas of your site. Again, this is usually a problem for ecommerce sites, e.g. you’ll see:

myfictionalshop.com/jackets/red-jacket.html

which has the same content as:

myfictionalshop.com/sale/red-jacket.html

Multiple service pages on your website which are too similar to each other.

Your site doesn’t handle the www and non-www versions of your site effectively.

You use another website’s content on your own site. Press Releases are a good example of content that is written once and distributed multiple times. Another would be sites that syndicate content and publish nothing original.

You own several domains that sell similar product lines to different target audiences – to both consumers and trade for example.

Why should I care about duplicate content on my website?

Let’s dispel the biggest myth that still gets circulated, the Google penalty myth. Here’s the truth: There is NO Google penalty for duplicate content.

This was addressed in a Google Q&A in June of last year. You can watch the whole video here.

However: Google MAY prevent some of your content from showing as a search result if your site has duplicate content issues, and as with all content, it will aim to show the most relevant content to the user at the time.

Google will still index those pages. If it can see the same text across several pages and decides they are the same, it will show only the one it deems most relevant to the user’s query.

There is a distinction between content which has been duplicated by your CMS generating new URLs, for example, and users who replicate content on a large scale and re-publish it for financial reward, or to manipulate rankings.

Google’s Guidelines for quality are clear on this subject. If you use illicit tactics for generating content, or create pages with no original content, you do run the risk of being removed from search engine results pages (SERPs).

In ordinary cases such as those listed above, the worst that will happen is your site simply won’t be shown in SERPs.

How to check for duplicate content on your site

There are several tools which will help identify areas to improve on your own site such as:

Moz’s crawler tool will help you identify which pages on your site are duplicates, and of which other pages. It is a paid tool, but it does have a 30-day free trial available.

Siteliner will give you a more in-depth analysis of which pages are duplicated, how closely related they are, and which areas of text are replicated. This is useful where large bodies of text are used but the whole page may not be a complete replication:



 Copyscape’s plagiarism checker will also check for copies of your pages being used on the wider web:


If you can’t access these tools for any reason but are concerned that duplicate content may be influencing your site, try selecting a snippet of text and searching for it to see if any direct duplications are returned in the results.

What to do about content duplications?

This really depends on the type of duplication. Some of the techniques I’ll talk about now aren’t really for the beginner. You may need an SEO agency to hold your hand through this part of the process.

The Problem: Generic product descriptions provided by a supplier

The fix: This one is easy to tackle, but it can be resource-heavy. The advice is about as simple as it gets: make your content unique, useful, and interesting for your audience. Usually a manufacturer’s description will tell you what the product is, whereas you need to think about why your customer needs it and why they need to buy it from you.

There’s nothing stopping you from using the specification of a product and then adding your own wording around it. Add in your tone of voice and personality. Think about your specific audience and their personas. Think about why they would want to buy your product and then tell them your unique selling proposition. What problem or need does it satisfy that they will relate to?

The Problem: Same page in multiple places on your site

The fix: In this instance, you should include a canonical URL on the duplicated pages, referring to the original as the preferred version of the page. In my ecommerce example, where a red jacket appears in both “sale” and “jackets” categories, one of them should include a canonical link in the code of the page to acknowledge the duplication. An example would be as follows:

On the jacket contained in the “Sale” page:

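A minimal sketch of what that canonical tag could look like, using the fictional shop URLs from earlier (the https:// protocol is assumed here):

<!-- In the <head> of myfictionalshop.com/sale/red-jacket.html -->
<!-- Points search engines at the "Jackets" category version as the preferred page -->
<link rel="canonical" href="https://myfictionalshop.com/jackets/red-jacket.html" />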

The Problem: Service pages on your website which are too similar to each other

The fix: There are a couple of options here. You can try to make the pages sufficiently different; however, if the pages are largely around the same subject with only slight differences, you may be better served using just one page to talk about both subjects. I would advise removing the least valuable page and applying a 301 redirect back to the most valuable page. One valuable page is certain to be more successful than two weak or conflicting pages.

The Problem: Your site doesn’t handle www. and non-www. versions of your site effectively

The fix: The easiest way to test for this is to remove the www. portion of a URL on your site in your browser and see what happens when you try to load the page. Ideally, a redirect should take place from one to the other.

Note: it doesn’t matter which you go with, just pick one way and be consistent. Also make sure you have identified your preferred version in Google Search Console.

The Problem: You use another website’s content openly on your website

The fix: This scenario tends to happen if you use press releases or if you use feeds to populate certain areas of your site, to show the latest events in a specific region, for example.

There’s no real hard-and-fast rule to this. If you are sure that this type of content provides value to your users, you can either accept that you’re never going to rank well for that content (but the rest of your site might), or you can take the time to make the content unique to your audience.

The Problem: Having two websites selling the same goods to different audiences

The fix: This one is somewhat complex. The best way to combat this, from a search point of view, is to combine your online presences into one site. There may be good business reasons for having two separate brands which cater to different audiences. You still need to be aware that they will ultimately be competing for attention in the search engine results pages (SERPs).

In Summary…

Simply adhering to Google’s quality guidelines will help. Create content which is useful, credible, engaging and, wherever possible, unique.

Google does a decent job of spotting unintentional duplications but the tips above should give you an idea of how to get search engines and users to understand your site.

About the author

Jean Frew is a Digital Marketing Consultant at Hallam Internet. Jean has worked in Ecommerce and Digital Marketing since 2007 and is experienced in driving online growth, as well as managing budgets and projects of all sizes. She has a broad knowledge of Digital Marketing and utilises analytics to make data-driven decisions.