Sunday, October 23, 2011

Integrating SEO & PPC: 3 Areas to Explore

A plethora of articles have been written about the convergence of SEO and PPC, most of them fairly elementary. I think everyone understands that, at least from a keyword level, each channel can (and should) inform and reinforce the other. That idea of "reinforcement" is a bit tricky, however.

Some studies have shown a largely reinforcing effect between organic and paid listings, except on brand queries, where cannibalization is more likely to occur. These searches are often navigational in nature, so users are more liberal about clicking on paid results since they already know which brand they're interested in.

The basic SEO and PPC integration points written about usually encompass the following:

   1. Use PPC keyword data to inform the organic content and optimization strategy. Use organic referral data to inform the creation and optimization of new PPC ad groups.
   2. Leverage PPC ad copy in organic snippet text to optimize click-through rates. It's much easier to test what's working in paid campaigns, which always put relevant messaging and a call to action forward, than to iterate and refine in organic placements.
   3. More visibility is better. A bigger footprint in search wins. Etc.

These are all useful points to think about. But to realize a competitive advantage, and to really integrate learnings from each channel, we need to dig a little deeper.
Toward an Understanding of Practical SEO & PPC Integration

Much of this is still being discovered. There aren't best practices developed around PPC and SEO synergies, aside from the fact that companies know the two sides need to talk to each other.

Conferences often have "SEO vs. Paid Search" sessions which, to be honest, are pretty silly. This isn't a topic you can simply throw pointed remarks at, like "SEO is way better than paid search!" or "PPC is SO much more awesome than SEO." Each is a unique, and complementary, online marketing channel.

Paid search spend doesn't have to "take away" anything from organic search budgets. If there's positive ROI, then there shouldn't be a budget cap anyway (would you put a limit on giving Google $10 if they gave you $11 in return each time?).

Conversely, organic search doesn't necessarily suffer because its ROI is inherently more difficult to capture. With solid strategy, benchmarking, and reporting, the ROI should be self-evident. Sure, it's not as clean, but the data is there if you know how to get it.

Because organic search has traditionally taken the lion's share of clicks on any given SERP (up to 80 percent, depending on the query and who's doing the talking), Google has recently been experimenting with the interface to encourage more clicks on paid results.

You can see where we're headed: toward a more "organic" paid search experience (to borrow a phrase from the articles just cited). It's in Google's best financial interest to move click share toward the paid listings, which in essence creates increased competition for attention in the organic results.

This is an open topic, one that will continue to evolve. It's also an area I'm keenly interested in exploring over the next few months. Here are a few areas that are of particular interest to me right now:
1. Landing Page Snippet Text in PPC Ad Copy

Google was recently spotted testing the display of snippet text from landing pages in paid ads.

That text was taken directly from the landing page, directly tying PPC to SEO techniques. With the interface blending paid and organic together more than ever, and with changes such as the above showing where we're headed, it's only a matter of time before more PPC tactics are necessary for SEO (page load times anyone?).


2. Focusing on Competitive Search

It’s amazing how little discipline is put toward reporting on branded and non-branded search as distinctly separate segments, with different implications.

On the PPC side of the fence, it seems pretty simple and clear that branded (often navigational) search is already 'owned' by the brand, so should not be leaned on too heavily when calculating the success of the campaigns. Sure, branded search is usually an important revenue driver. It just needs to be cleanly separated and tracked so as not to cloud the overall view.

Non-branded (what we call “competitive”) search is where the true game is played. While branded search is largely driven by offline efforts, with little to no leverage to obtain more, competitive search is driven by demand in the category itself. As such, sales are largely incremental.

Slicing out a company's competitive search share is often an excellent way to judge the health of their search marketing efforts. In fact, the following data points represent an interesting way for companies to find opportunities:

    * For PPC spend, does the competitive search slice fall within efficiency targets?
    * Does steady growth exist, in both SEO and PPC, for competitive search sales volume? It's not enough for your SEO campaigns to drive traffic. They have to drive relevant traffic that converts. And the true mark of success, in my opinion, is incremental increases in competitive (i.e., non-branded) search traffic.
    * Is there growth in the competitive slice as a percentage of site sales? In other words, how are site sales for competitive PPC and SEO segments faring as a percentage of total site sales? For large, established brands, you don't necessarily want the PPC slice to be too high (every click costs you). But you also don't want to be too dependent on SEO, especially if there are unsustainable tactics in play (algorithm changes happen!).
    * How is new customer acquisition growing through these segments?

There are several interesting insights that can come from these analyses. And going deeper to a more granular level, you can even look at terms being bought in PPC campaigns that don't have strong visibility in organic search. And vice versa, you can identify competitive terms that have strong organic visibility but aren't being actively bid in paid campaigns. I have yet to see this kind of gap analysis performed well, probably because the greatest insights will happen at scale and getting to that data is a tough problem.
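
As a rough sketch of that gap analysis (the keyword lists below are hypothetical; in practice you would export them from your PPC platform and your rank-tracking data), a simple set comparison is enough to surface both kinds of opportunity:

    # Hypothetical sketch: compare terms bought in PPC against terms with strong organic visibility.
    ppc_terms = {"running shoes", "trail running shoes", "mens running shoes"}
    organic_top10_terms = {"running shoes", "waterproof running shoes"}

    # Competitive terms we bid on but lack organic visibility for (SEO opportunities):
    seo_gaps = ppc_terms - organic_top10_terms

    # Competitive terms with strong organic visibility that we aren't bidding on (PPC opportunities):
    ppc_gaps = organic_top10_terms - ppc_terms

    print(sorted(seo_gaps))   # ['mens running shoes', 'trail running shoes']
    print(sorted(ppc_gaps))   # ['waterproof running shoes']

The hard part, as noted above, is doing this at the scale where it becomes genuinely insightful.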
3. Understanding Traffic Value

SEO can learn a lot from PPC, including how valuable unique queries are to the business and how efficient they are within organic campaigns. For example, does it make sense to rank No. 1 for "shoes" if the resource cost to get there is significant? Much about SEO is unknown and hard to quantify. How much spillover is there from head terms such as "shoes" to revenue from other queries, even brand-related ones?

For PPC, this is easy. Since traffic value doesn't vary by position, as posited by Hal Varian and Google (and also discovered through RKG's own testing), this is measurable and predictable.

Sure, the CPC landscape itself is highly mutable and impossible to predict. But we should be able to know, and predict, the value of traffic bid at an efficient cost.

Well, if traffic value in the organic results mirrors this behavior, and doesn't vary by position, then can we place a marketing efficiency on queries like they’re PPC bids? If we know, for example, that when we spend $1 per click for “green mens wide running shoes” we get $3 in sales in our PPC campaigns, can we apply this to the organic segment as well?
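
To make that concrete, here is the back-of-the-envelope math using the hypothetical numbers from the paragraph above (the organic click estimate is invented purely for illustration):

    # Hypothetical projection: applying PPC economics to an organic segment.
    ppc_cost_per_click = 1.00       # $1 per click in the paid campaign
    ppc_revenue_per_click = 3.00    # $3 in sales per paid click
    target_efficiency = ppc_cost_per_click / ppc_revenue_per_click   # ~0.33, i.e. a 33% cost-to-sales ratio

    estimated_organic_clicks = 5000   # assumed monthly organic clicks for the same query
    projected_organic_revenue = estimated_organic_clicks * ppc_revenue_per_click   # $15,000

    # The SEO effort stays "on efficiency" if its monthly resource cost is below this:
    max_justifiable_seo_cost = projected_organic_revenue * target_efficiency       # $5,000
    print(round(max_justifiable_seo_cost, 2))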

Certainly, SEO work takes a long time to bear fruit. But there must be some sort of projection to justify the resource cost and apply a revenue estimate to the efforts.

I’m not positing that we should apply a PPC approach to SEO. Rather, what I'm proposing is that we apply more rigor to the accountability of our SEO recommendations and the tracking of results.

This is really a technology problem. It isn't a conceptual or strategic problem to be solved: we all know that this data would be powerful to have!

One problem is that we don't really know anything about impression data in organic search. Sure, Google Webmaster Tools gives us a fleeting taste of impression data, for a sample set, but I'm a bit skeptical of its accuracy based on everything else we get from GWT. The toolset is fantastic. I only question the accuracy of the data provided, based on Google’s own disclaimers.

Wouldn’t it be sweet to have this integrated into Google Analytics, and trended, for all organic search traffic?

At a minimum, SEOs should be applying the fundamentals around testing PPC ad copy to snippet text in the organic results. These directly influence CTR, which is certainly tracked (among other behavioral signals) by Google's algorithms.
So Much More

I could keep going.

I could talk about using PPC to build visibility for high-quality resources and content pieces, which can drive traffic and links.

I could talk about gathering user behavior signals, such as bounce rate, time on site, and average pages per visit, from PPC campaigns and applying that to improve the quality of the site (which should improve its SEO).

I could even talk about building out extensive long-tail campaigns for PPC, and how that keyword data can be fed into related link modules on a site (for content that already exists), or fed into a strategy to develop said content.

But I'll stop here.

Conclusion

There's an exciting field of study in the interplay of organic and paid listings. Hopefully this will help further the dialogue. Looking forward to hearing your comments.

How To Take Your Keyword Research To A Higher Level

Good keyword research is laborious. So it is no wonder that once you understand the basics, you may be tempted to use whichever tool you have at hand and try to automate or speed up the process. But should you?

Keyword research is your time to understand the market you are competing in and how people search. It is your opportunity to comb through the competition and learn their keywords, content and link-building strategies. It is your opportunity to map out what you should track for your website, your market competitors and your keyword competitors.

If you plan a months- or years-long relationship with a client or website, you owe it to them to spend several hours or days getting this right.

Keyword Research Tips

Keywords are the foundation of search engine optimization. It is all about getting traffic from relevant search queries, the keywords people use to find our products or services or whatever we offer that our target market is looking for.

Placing a handful of words into the Google Keyword Tool, exporting the results, then calling it a keyword chart is skimming; it is not keyword research. Quality keyword research takes time and investment.

    * Before you open one keyword tool, study the topic you are researching, at least to the point that you can explain it intelligently to someone else and answer basic questions.
    * Study your marketplace competitors. These are basic sources for seed words and phrases to put into keyword tools.
    * Seeds are the words and phrases you enter into the keyword tools. Track these and use the same ones in every keyword research tool.
    * Use multiple keyword research tools. Every service has its strengths and weaknesses. Using different keyword tools is like seeking differing points of view. You want to be certain you have the best information possible.
    * When you have some good keyword candidates, begin studying the search engine results for rankings competition and additional keywords. In SEO, the real competitors are the websites that rank for your keyword targets, not just your marketplace competitors.
    * Revisit the keyword tools. Look up additional seed keywords you may have added along the way. Create a complete dataset for every keyword research tool you use.


Keyword Selection Tips

Once you have a list of keyword candidates, you must cull through it to find your keywords. This is where a lot of people throw up their hands and give up or try to over-simplify the process.

Going back and forth between dozens of export files from different keyword tools is not practical, so I use a database to compile a master table that I later export into Excel.

Even if you are not a database wizard, you can learn to combine data into a table using Microsoft Access. You can do this in Excel too, though I find that more taxing.

    * Compile your research into a master table so you can sort it and filter it.
    * I sort my keyword candidates by the number of words in each keyword or phrase first, then by the number of searches, using a word-count column created with an Excel formula (a common version is sketched after this list).


    * Set aside or check off relevant one-, two- and three-word phrases.
    * Set aside or check off embedded keywords. Before Chris Anderson coined "the long tail," I used "embedded keywords" to describe longer key phrases that contain shorter keywords. Search for each relevant one-, two- and three-word keyword, then mark the longer keywords that contain the shorter keywords.
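
The exact formula behind that word-count column isn't reproduced here, but a commonly used version (assuming the keyword phrase sits in cell A2) looks like this:

    =LEN(TRIM(A2))-LEN(SUBSTITUTE(TRIM(A2)," ",""))+1

TRIM collapses stray spaces, the two LEN values differ by the number of spaces between words, and adding 1 turns that into a word count you can copy down the column and sort on.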

At this point, sorting through what is left will be like looking for diamonds in a trash heap. There will be lots of non-relevant words and words with too little traffic. Comprehensive research is important, but now it is time to get practical.

    * Set some limits. Depending on how much traffic the website I am optimizing already receives, I will set a lower traffic limit between 100 and 1,000. The more traffic my website is getting, the higher the limit I set. Anything below the limit gets culled (see the sketch after this list).
    * Review each keyword candidate you have left. If it is relevant, mark it or set it aside.
    * At the end, copy all the keywords you marked or set aside into one table. These are your keyword candidates.
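
As a sketch of that culling step (the keywords, search counts and the 500-search limit below are made up for illustration), the same filter takes only a few lines once the master table is exported:

    # Hypothetical sketch: cull keyword candidates below the traffic limit.
    candidates = [
        {"keyword": "running shoes", "searches": 74000, "relevant": True},
        {"keyword": "running shoes size 14 narrow", "searches": 90, "relevant": True},
        {"keyword": "free shoes", "searches": 8100, "relevant": False},
    ]

    traffic_limit = 500   # somewhere between 100 and 1,000, depending on the site's existing traffic

    keyword_candidates = [
        row for row in candidates
        if row["relevant"] and row["searches"] >= traffic_limit
    ]
    # Only "running shoes" survives in this toy example.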



A keyword is not a target until you assign it to a page and begin optimizing for it.
Keyword Tracking Tips

You may be tempted to toss all your keyword candidates into a ranking tracker and monitor them. I advise against this.

Keyword candidates are not the same as keyword targets. Any keyword you have not matched to a specific web page will just be a distraction.

Reports filled with unassigned keywords tend to go unused. Also, if your boss or your client sees a bunch of keywords that are not on their website or for which the rankings and traffic are low, it will open you up to uncomfortable questions. Don’t put yourself in a position where you have to say, “Oh, just ignore those.”

What should you track? In my book the most important measurements are:

    * The number of organic visitors that each assigned target keyword brings from each search engine.
          o Exact Match
          o Phrase Match
    * The total number of organic search visits.
    * Keyword diversity: the number of different keywords bringing traffic to the website from each search engine (a small sketch follows this list).
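
As a small sketch of that diversity measurement (the analytics rows below are hypothetical; in practice they would come from an export of organic keyword traffic), counting distinct keywords per engine is straightforward:

    # Hypothetical sketch: keyword diversity = distinct keywords sending organic visits, per engine.
    visits = [
        {"engine": "google", "keyword": "running shoes"},
        {"engine": "google", "keyword": "trail running shoes"},
        {"engine": "bing", "keyword": "running shoes"},
    ]

    diversity = {}
    for row in visits:
        diversity.setdefault(row["engine"], set()).add(row["keyword"])

    for engine, keywords in diversity.items():
        print(engine, len(keywords))   # google 2, bing 1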

I still track rankings for high-tail keywords, grudgingly, once I assign them to a specific URL and start optimizing for that keyword. Actual search engine rankings are difficult to track. Too many different things influence the rankings.

Universal search result inserts (news, images, video, etc.), local search results, Query Deserves Freshness, Query Deserves Diversity and other factors all change the contents and display of the organic listings.

If I search for coffee house in Seattle and then in New York, I will get two completely different sets of results. Track traffic as your key performance indicator and use rankings as an imperfect estimate to determine things you cannot measure directly, like how much traffic your competitors may be receiving for the same keywords.
What About Long Tail Keywords?

If you are pursuing a long-tail content strategy, you may want to add rankings for recently used long-tails to your reports, but I would remove older long-tails from your reports as you add new keywords and content.

Unlike high-tail SEO, long-tail optimization tends to be “set it and forget it.” If you are not actively working on optimizing a keyword, it probably should not be on your reports.

The search query reports in Google’s and Bing’s webmaster tools are a good alternative to traditional ranking reports. These show average rank, clicks and impressions. This data is now in Google Analytics as well, under Reports » Traffic Sources » Search Engine Optimization.

I like having clicks and impressions because, if a targeted keyword gets lots of impressions but few clicks, either the listing needs work or the keyword is a poor target. The downside is that these offer no competitor data.
Keyword Assignment Tips

It is important to be realistic when assigning keywords to URLs as search engine optimization targets.

    * Understand keyword difficulty. Blindly optimizing for the most heavily searched keywords can lower non-paid search traffic.  When a website has nowhere near enough authority to earn a top ranking for a high-volume keyword, but optimizes its most linked-to pages for that same keyword, this may steal an opportunity to optimize for a more realistic keyword.

Be honest in your perspective.

    * Use embedded keywords. Optimizing a page for a medium-tail keyword that contains a high-tail keyword can earn rankings and traffic now, while the page builds the authority to rank for the high-tail keyword later.

A medium-tail keyword is defined as much by whether you can successfully compete for a ranking as by the number of search queries it receives.

    * Use long-tails to support mid-tail and high-tail keywords. Not every page can be an SEO hub page — a page targeted for a specific, competitive keyword. The classic content strategy is to create lots of pages for long-tail content. As you do this, try to create content related to the higher-tail keyword hub pages and link from your articles — using optimized anchor text — to those pages.
    * Do not try to optimize every page for a keyword. Every website has pages that will never receive search engine traffic. For example, a webpage showing a process chart may be great for people but thin content for web search. Think carefully before assigning a decent keyword to such a page.
    * Consider searcher intent. People may search for information or entertainment or acquisition. Don’t optimize a sales page for an information keyword. Try to match page content and keyword assignments with intent. Start by looking at what types of pages already rank.

At the start, I wrote that we owe it to our websites and clients to get keyword research right. With all the research and reporting tools out there these days, it is easy to spend too little time preparing or researching, and then to attempt to track more keywords and metrics than we can ever use.

Invest the time and thoughtfulness up front. Select which keywords you will optimize for. Choose pages for those keywords. Remove distractions. Do this and you will always focus on things that matter.

Opinions expressed in the article are those of the guest author and not necessarily Search Engine Land.

7 Technical SEO Wins for Web Developers

As an SEO agency, we often work with external web developers.  They may be in-house working for the client or another agency like ourselves but focusing on web design and development.  The level of SEO knowledge varies greatly from agency to agency and sometimes we are brought in to train developers on the parts of SEO they often influence without knowing it.  Below I’m going to talk about seven elements of SEO that all developers should at least have an awareness of.

I just wanted to quickly clarify Technical SEO v Onsite SEO.  For me, Onsite SEO elements are those which the user can see without looking at the source code.  So I’d include things such as -

    * Title Tag
    * URLs
    * Headers
    * Body Text etc

Technical SEO involves the elements of a page that the user can’t see without looking at the source code.  These will include elements such as -

    * IP Detection / Redirection
    * Site Speed
    * 301 and 302 Redirects
    * HTTP Headers
    * Crawler Access
    * Javascript
    * Flash

These are the seven things I’ll cover below.

1) IP Detection / Redirection

I recently experienced this problem first hand on a client project and it was very, very messy.  For those of you unfamiliar with this, IP detection and redirection involves determining the IP address of a user on your site, then showing them content (or redirecting them to a new URL) based on their location.  To give an example, if someone landed on www.domain.co.uk and their IP address indicated they were in France, you could redirect them to www.domain.fr which contains French content.

For the user, this isn’t actually a terrible thing.  It isn’t foolproof as IP detection isn’t always 100% accurate, but usually it means you can show users content which is more relevant to their location and language.  Sounds good and makes sense.

However, for the search engine crawlers, this can be very troublesome, mainly because they will usually crawl from a US-based IP, which means they may never see some of your content. A recent client of mine was redirecting based on IP and ended up redirecting the search engines to their .us domain. This meant that the search engines were not seeing the content for the other countries they were targeting, including the UK and Australia.

John Mu backed this up on a Webmasters thread:

Yes, many search engine crawlers are currently based in the US……One thing which I believe you may be hinting at is automatically redirecting users based on their location — we’ll get to that later in the blog post series, but generally speaking, yes, that can cause problems for search engine crawlers and because of that, we’d strongly recommend either not doing that or limiting it to a certain set of pages (say the homepage) while still allowing crawlers to access all of the rest of the content on the sites.

Whilst developers (and some SEOs) often think there is nothing wrong with IP detection and redirection, you can see the problems it can cause. So you will need to speak to them and let them know the impact it can have on the crawling and indexing of the site. There are times when IP detection does make sense, but I'd advise lots of caution: make sure you are not accidentally stopping the search engines from seeing your pages.
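
As a rough sketch of the safer pattern John Mu describes (auto-redirect limited to the homepage, with all deeper content left crawlable for everyone), here is what that might look like in a small Flask app; the country lookup is a placeholder, not a real GeoIP integration:

    # Sketch only: geo-redirect restricted to the homepage.
    from flask import Flask, redirect, request

    app = Flask(__name__)

    COUNTRY_SITES = {"FR": "http://www.domain.fr/", "US": "http://www.domain.us/"}

    def lookup_country(ip):
        # Placeholder: in production this would query a GeoIP database.
        return None

    @app.route("/")
    def homepage():
        country = lookup_country(request.remote_addr)
        if country in COUNTRY_SITES:
            return redirect(COUNTRY_SITES[country], code=302)
        return "Default (UK) homepage content"

    @app.route("/<path:page>")
    def content(page):
        # Deep pages are served to every visitor, including crawlers, and never redirected.
        return "Content for " + page
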
2) Site Speed

This should definitely be high on the list of priorities for your developers to be looking at.  Not just because we know it to be a ranking factor, but mainly because a fast site is better for users and ultimately, conversions.  How long do you stick around on a site that takes longer than a few seconds to load?  Users are the same.

From an SEO point of view, you need to care about site speed because Google is obsessed with speed. I recently read In The Plex, which gives an insight into the early days of Google; it describes instances where Larry Page measured the speed of products in his head and was accurate to within tenths of a second. Every product that he gives feedback on needs to be super fast for it to stand a chance of moving forward. Google understand how much users care about speed, so you should too.


Sharing that kind of evidence with developers can often give them the nudge they need to take site speed a bit more seriously. In terms of specifics, take a look at this epic guide from Craig about Site Speed and SEO to get hands-on tips and tools for improving site speed.
3) 301 and 302 Redirects

Sorry, but lots of developers (and SEOs) get this wrong. Right now, you only need to implement two types of redirect – a 301 or a 302. That's it. No 303s, 307s or anything else. There are two main ways that this can be messed up; the first I'm going to talk about is using the wrong type of redirect.

Getting 301s and 302s Mixed Up

To give some background and context, a 301 redirect is usually used in SEO for one of the following reasons -

    * A page has moved somewhere or been taken down, so you want to redirect users and search engines to an appropriate new page
    * Somehow you have created some duplicate content and want to remove the duplicates from Google's index by redirecting them to the main canonical version

A 301 redirect will usually pass nearly all of the link juice and equity across to the URL it is pointing to. So it can be a good way of giving some strength to different pages and making sure you're not losing any link juice on pages that 404, etc. This is why it's so useful for SEOs.

Despite not being SEO friendly (we’ll cover why in a moment), there are some genuine reasons for an SEO using a 302 redirect -

    * A page may just be temporarily unavailable, for example a product that is out of stock on an ecommerce website but will be back in stock very soon
    * You may want to test moving to a new domain to get some customer feedback but not want to damage the old domain's history and rankings

A 302 redirect works here because you are confident that the move isn’t permanent.  Because of this, Google will not pass link juice across the redirect, nor will they remove the old URL from their index.  These are the very same reasons why getting mixed up with 301s and 302s can hurt your SEO performance.

The common reason why some SEOs and developers get this wrong is that the user doesn't notice any difference; they are redirected either way. But the search engines will notice the difference. I've seen an example of a client moving their entire site to a new domain with all redirects being 302s. This is bad because -

    * Link juice will not be passed across to the new URLs, meaning they are unlikely to rank well in the short term and possibly long term
    * Google will not get rid of old URLs from its index which means you can have multiple URLs from the old and new domains indexed at the same time


Redirecting all URLs back to the Homepage

This is another problem I've come across more than once. Google advises that when you implement redirects, you do it on a one-to-one basis. For example -

You should redirect:

http://www.old-site.com/product-name-12345 to http://www.new-site.com/product-name-12345

http://www.old-site.com/product-name-10000 to http://www.new-site.com/product-name-10000

What some people do wrong is redirect -

http://www.old-site.com/product-name-12345 to http://www.new-site.com

http://www.old-site.com/product-name-10000 to http://www.new-site.com

Redirecting all pages back to the homepage, or even a single top-level category, is bad for users and can sometimes look manipulative. It also fails to pass the much-needed link juice to the deep pages within your site; in the example above, the product pages are not getting the juice they need.

Again, I've seen many sites lose a lot of traffic by doing this, mainly because they lose rankings for their deep pages, which are usually long tail.
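
To illustrate the one-to-one approach in code, here is a minimal sketch, assuming the old domain still serves requests and that a redirect map has been built for it (the paths come from the examples above; the framework choice is just for illustration):

    # Sketch: one-to-one 301 redirects from old URLs to their direct equivalents.
    from flask import Flask, abort, redirect

    app = Flask(__name__)

    # One entry per page, never a blanket redirect to the new homepage.
    REDIRECT_MAP = {
        "/product-name-12345": "http://www.new-site.com/product-name-12345",
        "/product-name-10000": "http://www.new-site.com/product-name-10000",
    }

    @app.route("/<path:old_path>")
    def legacy_redirect(old_path):
        new_url = REDIRECT_MAP.get("/" + old_path)
        if new_url:
            # 301 = permanent: passes equity and tells Google to drop the old URL.
            return redirect(new_url, code=301)
        abort(404)
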
4) HTTP Header Codes

Chances are that many developers will know what these HTTP header codes mean, but in relation to SEO, they may not know what effect they have or how the search engines treat them. There are lots and lots of status codes out there (did you know that the 418 status code means "I'm a teapot"?!), but as an SEO, you should certainly get to know the following HTTP headers well and know what impact they can have.


To find the HTTP header code of a page, you can use a tool like Web Sniffer, the SEOmoz Toolbar or, at scale, Screaming Frog.
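
If you prefer to script the check, a quick sketch with Python's requests library does the same job for a single URL (example.com is a stand-in; some servers handle HEAD requests oddly, in which case swap in requests.get):

    # Sketch: check the status code and redirect target of a URL without following redirects.
    import requests

    response = requests.head("http://www.example.com/some-page", allow_redirects=False)
    print(response.status_code)                # e.g. 200, 301, 302, 404...
    print(response.headers.get("Location"))    # redirect target if there is one, otherwise None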

200 Success – This means that the page has loaded successfully.  For the users and search engines, this means that the page is working fine and should be indexed and ranked.

301 Permanently Moved – This has been covered in more detail above, but in summary it means that a page has permanently moved elsewhere. Both users and search engines are redirected, most link juice is passed across the redirect, and the old page is removed from the index.

302 Temporarily Moved – Again, this is described above, but means that a page has temporarily moved elsewhere.  Users will not notice the difference between a 301 and a 302, but search engines will not pass link juice across it nor will they de-index the old URL.

404 Page Not Found – You are probably familiar with this one.  For users and the search engines, this means that the page being requested could not be loaded.  If an indexed page suddenly becomes a 404 page, over time the search engines will stop ranking it (from my experience and tests anyway).

Quick sidenote here from experience. Something I've come across a few times is the situation where a page is displayed to the user which looks like a 404, but when you look at the HTTP header, it shows a 200 Success code instead. This is not good, and Google have classified these as soft 404s. The reason they are not good is that it's difficult for you to spot these errors using server logs or Google Webmaster Tools. Although Google does try to spot them, it's best not to rely upon Google to do this for you.

Best practice is to make sure that a 404 page actually returns a 404 HTTP header.  By the way, you should check out the Distilled 404 page which our intern Andrew built.
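
In a framework like Flask, for example, making the friendly error page return a genuine 404 header is a one-liner (a generic sketch, not the Distilled implementation):

    # Sketch: a custom "not found" page that also returns a real 404 status code.
    from flask import Flask

    app = Flask(__name__)

    @app.errorhandler(404)
    def page_not_found(error):
        # Returning (body, 404) sets the HTTP status, so this never becomes a soft 404.
        return "Sorry, we couldn't find that page.", 404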

410 Page Permanently Not Found – I'm not sure why I'd use this rather than a 301 redirect, but there may be some good uses and it's worth knowing how Google treat it.

500 Internal Server Error – This is a generic error page and isn’t very helpful as it doesn’t usually give much detail as to why the error occurred.  You should definitely try and keep these errors to an absolute minimum.

503 Service Unavailable – Whilst this isn't a code you should commonly use, it is useful when your site is temporarily down and will be back shortly. For example, if you are relaunching a site or a new design, you may have to do this by taking the site offline. It's better to return a 503 so that the search engines know to come back later. John Mu also confirmed Google's position on this:

“Optimally, all such “the server is down” URLs should return result code 503 (service unavailable). Doing that also prevents search engines from crawling and indexing the error pages :-) . Sometimes I’m surprised at how many large sites forget to do this…“
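
A maintenance response along those lines might look like this sketch (the maintenance flag and the one-hour Retry-After value are just examples):

    # Sketch: answer every request with 503 Service Unavailable during a planned outage.
    from flask import Flask

    app = Flask(__name__)
    MAINTENANCE_MODE = True   # hypothetical flag flipped on during the relaunch window

    @app.before_request
    def maintenance_gate():
        if MAINTENANCE_MODE:
            return "Down for maintenance, back soon.", 503, {"Retry-After": "3600"}
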
5) Crawler Access

For me, restricting crawler access and optimising your crawl allowance is an overlooked part of SEO, probably because it can be a bit hard to implement and it's not really an exact science. To understand this and optimise it, you must first become comfortable with the concept of crawl allowance. Rand wrote a great post over on SEOmoz about this following some comments from Matt Cutts on the topic.

At a basic level, Google will crawl roughly based on PageRank, as Matt Cutts has explained previously.

Bottom line for SEOs – don't think that Google will automatically crawl and index every page on your site. Whilst Google have made it obvious that they want to find every piece of information on the web, they do have limited resources and must be selective about which pages they crawl over and over.

The learning here is to make sure that you are not wasting your crawl allowance on pages which you do not care about. Try to make sure that when Google does crawl your site, they are spending time on your important pages. There are a number of ways to do this; just having a good site architecture is a pretty powerful way in itself.

Unfortunately, many SEOs are not always in the position of being able to work with developers and define a site architecture from scratch. Usually it's a case of working with an existing site architecture and trying to fix and optimise what you can. There are a few ways to do this, and you need to work with developers to use these techniques effectively.

Robots.txt – This is the first file that a search engine will request when they crawl your site. Within this file, they will try to see if there are any areas of your site or specific URLs that they should not crawl. There is some debate as to how strictly the search engines obey what's contained in the robots.txt file, but I still think you should use it and feel it's reliable in most cases. A typical example is the one at Amazon.co.uk (www.amazon.co.uk/robots.txt).

If you are unfamiliar with how to write a robots.txt file, it's best to get comfortable with it prior to speaking with developers on the topic. Read this guide from Google and test on your own sites.

The action to take here is to take a good look at your site and decide which sections you would not like the search engines to crawl.  Use some caution here though, as you don’t want to block pages by accident.
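
One useful safety net is to test the rules before (and after) they go live. Python's built-in robots.txt parser makes this easy; the URLs below are hypothetical:

    # Sketch: check which URLs a robots.txt file blocks for Googlebot.
    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser()
    parser.set_url("http://www.example.com/robots.txt")
    parser.read()

    for url in ["http://www.example.com/", "http://www.example.com/checkout/basket"]:
        allowed = parser.can_fetch("Googlebot", url)
        print(url, "->", "crawlable" if allowed else "blocked")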

Rel=Canonical – I’d never recommend using the rel=canonical tag on a new site.  Some may disagree with me, but I see rel=canonical as a last resort in solving site architecture issues.  If you can avoid these issues in the first place, do it.  Don’t think of rel=canonical as a tool to help you. 

The key reason for this is that this tag is not a strict rule.  The search engines do not have to take notice of it and can change how they treat it at any time they want.  Current evidence suggests that the search engines take notice of it and use it quite strongly.  But I’d still advise against relying on it for the long term.

It is worth making your developers aware of this tag and making sure they know the implications of using it. It can be a great help in solving duplicate content issues, but at the same time, it can easily go wrong if you do not use it correctly. The big advantage of rel=canonical from a development point of view is that (in theory) it is easier to implement than a 301 redirect.
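
For reference, the tag itself is a single line placed in the head of the duplicate page, pointing at the version you want indexed (example.com is a stand-in):

    <link rel="canonical" href="http://www.example.com/preferred-page/" />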

Take a look at this guide from Lindsay over on SEOmoz for a fuller guide to rel=canonical as well as my own post on choosing between rel=canonical and a 301 redirect.
6) Javascript

Javascript can be a great thing, it can add great functionality to your website and enhance user experience.  However, the search engines struggle with understanding Javascript.  They are getting better all the time and are actively trying to execute Javascript so they can access more content.  Because of this, there are two key things you need to remember -

Do not place valuable content within javascript – you want to make sure that the search engines can read all of the content you have produced.  Don’t let all your effort go to waste by putting your content inside javascript which the search engines can’t get to. 

A common way I've seen this done in the past is when navigating between tabs on an ecommerce product page. When a user clicks on a tab such as "More Information", the content is loaded in using javascript. This is fine for the user, but the search engines may not be able to find this additional content. Whilst they may be able to get to it, it's best practice to be safe and load the content onto the page without requiring any additional javascript to be executed.
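
A quick way to sanity-check this is to fetch the raw HTML, i.e. without executing any JavaScript, and look for the tab's content; a sketch with hypothetical values:

    # Sketch: verify tabbed content is present in the raw HTML rather than injected by javascript.
    import requests

    html = requests.get("http://www.example.com/product-page").text
    snippet = "Dimensions: 30cm x 20cm"   # hypothetical text from the "More Information" tab

    if snippet in html:
        print("Content is in the page source - crawlers can see it.")
    else:
        print("Content is probably loaded via javascript - crawlers may miss it.")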

Something I'd definitely advocate is the use of jQuery where possible. jQuery (which is itself javascript) can make elements of a page look just as beautiful, and when it is used to enhance content that already exists in the HTML, you can usually still see that content when you view source, which is great for search engines.

Do not place content or links within javascript to intentionally hide it from Google – this was an old tactic that SEOs would use for various reasons.  This was back in the day when the search engines genuinely struggled to see what was inside javascript.  So it could be used for old practices like PageRank sculpting or just blatant hiding of content.
7) Flash

Most of you should know this one, but just for completeness, let's talk briefly about flash. As with javascript, the search engines struggle to understand flash elements of a page. Despite various improvements, they are not quite as advanced with flash as they are with javascript. However, Google can extract some text and URLs from within flash elements, though not all search engines can do this.

The bottom line here is that you can’t allow clients, developers or designers to build their entire site based on flash elements.  Enhancing a page using flash is fine, but I would be mindful of how search engines can see it.
Conclusion

I think there are a few key takeaways:

    * Love your developer!  They can do some awesome work for you and do not underestimate their ability to do cool stuff to help SEO
    * Don’t assume they know everything – be prepared to help them understand the SEO implications of their work
    * Give developers credit where it's due – if they make a change to a site and it helps results, tell them

The Moz Top Ten | What You Need to Know in the World of SEO

Expand your mind with the Moz Top 10. From analytics and SEO to social media and link building, check out the list below for some of the best tips and advice on inbound marketing from around the web.
 
   1. Making search more secure: Accessing search query data in Google Analytics
      BREAKING NEWS: Google is removing access to organic keywords data if the user is logged in while searching. Good posts on the impact of this for SEOs: Dear Google: This is war and Google Invests in Privacy for Profit.

   2. Stevey's Google Platforms Rant
      Google employee Steve Yegge shakes up the platform world as he compares Google, Amazon, Facebook, and Microsoft and how these giants approach their products from an insider's perspective.

   3. How to Optimize 7 Popular Social Media Profiles for SEO
      Aligning your personal and business social media profiles getting you down? This guide will set you on the right path for simple SEO success and symmetry across all platforms.

   4. New SEO Reports in Google Analytics Now Here
      Google has incorporated Webmaster Tools directly into Analytics. And here are some actionable reports that you can pull to make your website even better.

   5. 10 Reasons (in Pics) Why Distilled's SearchLove 2011 NYC is a Must Attend
      Looking for a great conference with unparalleled speakers, actionable talks and workshops, and one amazing Halloween bash? Go no further than NY SearchLove 2011.

   6. 7 Technical SEO Wins for Web Developers
      Building a powerful new website or putting a spit-shine on your current one? Make sure your devs are on-board with you and that you're both talking the same language about what they can do to build (or fix) a solid foundation for your site.

   7. It's Not Too Late to Learn How to Code
      Coding's not just for kids or college students. Jean Hsu gives some encouragement for those of us who are a little more experienced and ready to dive into gaining a new skill.

   8. How To Take Your Keyword Research To A Higher Level
      Just scratching the surface with your keyword research? Level up your SEO fu and dig deep. Your websites, business, and/or clients will thank you.

   9. Integrating SEO & PPC: 3 Areas to Explore
      Double your efforts. SEO and PPC can be friends and make your life (and your colleague's) more awesome while improving your ROI in both paid and organic.

   10. How to Evaluate Links – A Checklist + Tools
      Do I or don't I? A simple checklist to help you save time while you evaluate the links you want to pursue for your link building campaigns.

Making search more secure: Accessing search query data in Google Analytics

As search becomes an increasingly customized experience, particularly for signed in users, we believe that protecting these personalized search results is important. As part of that effort, today the Google Search team announced that SSL Search on https://www.google.com will become the default experience for signed in users on Google.com (see the Official Google Blog post to learn more). Protecting user privacy is important to us, and we want to take this opportunity to explain what the Google Analytics team is doing to help you continue measuring your website effectively in light of these changes.

How will this change impact Google Analytics users?
When a signed in user visits your site from an organic Google search, all web analytics services, including Google Analytics, will continue to recognize the visit as Google “organic” search, but will no longer report the query terms that the user searched on to reach your site. Keep in mind that the change will affect only a minority of your traffic. You will continue to see aggregate query data with no change, including visits from users who aren’t signed in and visits from Google “cpc”.

What is Google Analytics doing about it?
We are still measuring all SEO traffic. You will still be able to see your conversion rates, segmentations, and more.

To help you better identify the signed in user organic search visits, we created the token “(not provided)” within Organic Search Traffic Keyword reporting. You will continue to see referrals without any change; only the queries for signed in user visits will be affected. Note that “cpc” paid search data is not affected.

Our team continues to explore ways that we can surface relevant information, like search query data, to help you measure the effectiveness of your website and marketing efforts, and as always, we welcome any feedback or comments that you have. Thank you for continuing to help us improve Google Analytics.

Source: http://analytics.blogspot.com