AudienceWise has been Acquired by SEOmoz

January 29th, 2013
by Tim Resnik


Matthew and I are very happy to announce that AudienceWise has been acquired by SEOmoz. Most of the details can be found in this blog post by Rand, the CEO of SEOmoz.

As many or all of you know, SEOmoz is the preeminent developer of software that helps marketers make decisions on how to optimize their web properties for search and social. What you may not know is that many “BIG” new features and products are coming. In fact, since Matthew and I are power users of the SEOmoz toolset, we will initially focus much of our time on providing feedback and input on these new initiatives.

As for AudienceWise, we won’t be taking on new clients, but will continue to consult for a limited number of our current clients on an ongoing basis. SEOmoz will not be jumping back into the consulting game and will continue to focus on building great software to help marketers attract customers via inbound channels: SEO and Social Media. If anyone wants more detail, please don’t hesitate to reach out.

Presentations in the Age of SlideShare

October 8th, 2012
by Matthew

One of the marketing platforms that has emerged in the last few years has been SlideShare. If you haven’t seen the site yet, it’s where you can post your slide decks for others to view, and follow a group of people so you know when they’ve posted new presentations. The site provides a way for people to see what you’ve presented if they weren’t at your speaking engagement, or as an easily linked-to repository for your slides after the presentation. To get an idea of the additional reach SlideShare can provide, consider a presentation I gave last month in Portland on Structured Data and Semantic SEO. This was at an SEMPDX event where I spoke to a crowd of about 75 people. On SlideShare, this presentation deck has 3,847 views. On a grander scale, this presentation on the Top Tools for Learning has over 55,000 views and was posted only five days before reaching that number.

Giving audience members an easy place to access your slide decks is a clear benefit. However, I’m not sure how much people can get out of viewing the decks without actually having been at the presentation. There’s almost universal agreement that the best presentations use slide decks as a visual sidekick to the real attraction: listening to the speaker. How useful can it be to view the slide decks without the critically important context of hearing the speaker? Can a deck serve the dual purposes of being an entertaining presentation as well as coherent reference material on SlideShare?

Speakers in the search marketing world have come a long way

I think it’s a good sign that as presenters, we are even thinking about this. I certainly feel like I’ve come a long way since I started speaking in 2005. When I was getting acclimated to the SEO world, I felt like the two best speakers were Marshall Simmonds and Bill Hunt (Danny Sullivan doesn’t count; by this point he was mostly doing keynote-style presentations that could afford to be more entertaining than informational. Also, he went to a terrible high school). Marshall and Bill were (and still are) able to present actionable search tactics in an engaging way, using PowerPoint as an enhancement rather than as the engine of their talk.

Fast forward to 2012, and Rand Fishkin and Wil Reynolds are two of our industry’s best speakers. Both have the ability to be passionate and entertaining in their talks while also providing higher-level strategic takeaways as well as tactical tips. It’s easier said than done. I tend to pack my presentations with info, but they can be very dry. If you’re stepping the audience through PowerPoint instead of engaging them, even your best takeaways may be glossed over.

The bullet point era

That’s not to say our slide decks have necessarily evolved at the same rate.  I’m still seeing (and using) a lot of very dense screenshots and slides full of bullet points. Check out this beauty from one of my earliest decks in that era:

Oof. Thankfully, I didn’t read the entire list of spiders aloud. As I refined my decks, I moved on to more screenshots than lists of bullet points, but these also lacked obvious context. Like many others in our industry, I keenly watched the evolution of Rand Fishkin’s PowerPoint strategy of striving for one thematic point per slide, with a contextual section at the bottom containing a snippet of text and any relevant links. Here’s what this looks like in one of my current decks:

Rich Snippets Data

If you saw this slide on SlideShare, you’d get the basic gist of it without hearing me speak. When I present the slide, I have a few useful talking points around it that aren’t immediately obvious, but people in the audience shouldn’t be focused on reading the slide instead of listening to what I’m saying about it. Above the callout at the bottom? That’s a densely packed screenshot from hell. I don’t consider it a problem on SlideShare, but I really don’t want my live audience squinting to figure out what the hell is actually on that screenshot.

There are exceptions to this general trend in our industry. Check out Mike King’s deck from Mozcon this year. That’s a beautiful deck. It has a narrative, easily captured takeaways at the bottom of each slide, and it even works pretty well on SlideShare. I’d wager that this deck took Mike (and his team) close to a hundred hours to construct. It’s not trivial to build a deck that works live as a visually compelling aid to your talk, yet still has value online after the presentation.

The new golden age of Haiku Deck

This brings me to Haiku Deck. In a nutshell, Haiku Deck gives you a wide array of well-designed templates to choose from. You write a headline and subhead, and then you can search for Creative Commons-licensed pictures that match keywords used in your text. Here’s an example slide I made for the SMX East show:

Haiku Deck Example

Haiku Deck’s visuals are high resolution, and you can usually find an image that resonates around the theme of your slide. People aren’t going to be trying to read through a mess. This is a huge improvement over the onslaught of Times New Roman bullet points that is the bane of every conference. There’s a catch though:

“The best way to design slides for SlideShare isn’t the same as the best way to create slides to actually use in a presentation.” - This whole interview with Joby Blume on the Haiku Deck blog is worth a read. His point is that if the slides are self-explanatory, they’re great for SlideShare but audience members can just read them and ignore the presenter. But if they consist of abstract visuals, you are in essence giving a speech, and the residual value of the deck on SlideShare is basically zero. Let’s hear from Joby again:

“We sometimes need our slides to help us get the point across, but we can’t do that if we put up a beautiful picture of a snow-capped mountain when we are talking about complex derivatives.”

Especially when thousands of folks are going to download the deck on SlideShare afterwards. That particular truth is why I’m struggling to do an entire deck with Haiku Deck without challenging screenshots or diagrams. I couldn’t find an easy way within Haiku Deck to include even a small numbered list or pop-out, so it doesn’t seem ideal for screenshots or pointing out additional resources. Since the better SEO presentations tend to revolve around tactics and tools, it’s tough to convey actionable information with only a pretty picture and a couple of headlines. So while I do believe Haiku Deck is an improvement over the long national nightmare that is PowerPoint, I’m not sure how to use it for presentations that aren’t in the “entertaining/inspirational speech” category. One solution is to use only one slide per ‘Do/Don’t’ or tactic. That means if I were going over how to choose an appropriate structured data markup, I might use 20 slides to convey my talking points. If that’s just one part of my talk, we could be looking at 100+ slides to get all the information across in a 20-minute presentation. Oy vey.

Is it worth it to make a version just for SlideShare?

Which brings me back around to SlideShare. I believe my best slide decks are geared towards audience members, but are kind of lousy for folks on SlideShare who view them without any context. Upgrading my visuals with Haiku Deck may improve the transitional slides, but will probably make my instructional slides worse. They’ll appear to be full of one or two basic instructions at best, and total cliches at worst. Given that maybe 10x-100x as many people will view your material online as will see it in person, it seems like a worthwhile effort to make a coherent online version.

Here’s what I think my answer is, to provide the best live presentations as well as useful recaps on SlideShare:

1. Make two decks. I know, even making one deck is a complete pain in the ass. The deck for the live presentation should be heavy on the visual aid slides and contain fewer PowerPoint diagrams and less text. The SlideShare deck should provide slides that have screenshots with extended text and even actionable bullet points if necessary. On SlideShare, your bullet points aren’t fighting with anyone for attention.

2. Better screenshots. Someone please come up with a way to provide enhanced screenshots with legible text and beautiful popouts/callouts. For now, I’ll still be using Snagit and annotating the screenshot at the bottom of the slide, but it’s still visually unappealing compared with what you can generate with a tool like Haiku Deck. Big screenshots full of text look terrible. Smaller screenshots with big ungainly pixelated text? That can be even worse.

3. Fewer speaking gigs. Putting 100+ hours into the deck and talk for each conference is not a small endeavor. I can’t see myself doing this for a dozen conferences per year, not to mention that you need to maintain this quality level for any client or in-house presentations. I’m thinking I’ll have to be very picky about where I speak if I’m going to put in significantly more work on the deck(s).

The reality is that you never know which speaking engagement is the game changer for you, whether it opens a door to a new career path, a partnership opportunity, or any number of positive outcomes. Only now this could all happen via SlideShare or another online location for your presentation. None of us were satisfied with the bullet point era of last decade. Now we also need to avoid handing thousands of people online a deck of pretty pictures without any context.

I’d love to hear what other people are doing with SlideShare, or how they’re using Haiku Deck to create informational slides.

CampaignPop – Twitter Stats for the 2012 Presidential Election

September 4th, 2012
by Matthew

Earlier this year, we were building tools for our news clients so that they could have a dashboard for tracking Twitter activity on reporters and writers (we released free versions of some of these social media tools).

With our longtime colleague Jay Leary, we were discussing different datasets that we thought would be interesting to track. All of us follow politics pretty closely, so we decided to build out a social media dashboard for the 2012 presidential election. The result is CampaignPop, where we measure both the output and the social media sentiment for each candidate:

CampaignPop Twitter Sentiment Scores

 

Jay and Tim did a very nice job getting the site up, and my best contribution was staying out of their way and making offhand suggestions. We’re still figuring out the best way to promote the site and use it to surface insights about the effects of social media participation on the campaign. With any luck, we’ll be able to build a predictive model for the results of the election that is useful to political pundits and major media sites.

Have a look and tell us what you think.

Twitter Sentiment Tool for Excel

July 26th, 2012
by Tim Resnik

I’m here at MozCon, the annual SEO conference hosted by SEOmoz. If you’re a sucker for data, APIs, and Excel, there isn’t a better speaker in our industry than Richard Baxter. This morning he did not disappoint as he took the audience through live Excel demos of tools he built using various APIs (MozScape, SharedCount, WIPMania).

At AudienceWise we have built several internal tools on the Twitter API, and recently have been using various APIs to analyze sentiment of tweets. Richard’s presentation this morning inspired me to hack together an Excel tool that performs sentiment analysis on tweets based on Twitter search queries.

It can be downloaded here and used for free.

It’s pretty straightforward. There are only two prerequisites for getting it to work:

1. Niels Bosma’s SEOTools for Excel. If you use his tools as much as we do, make sure to kick him a donation. He’s planning some neat looking upgrades.

2. A free API key from ViralHeat for the sentiment call. Once you have logged in go here to grab your API key.
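If you’re curious what the spreadsheet is doing under the hood, here is a rough Python sketch of the same flow: pull tweets from a Twitter search, then ask ViralHeat to score each one. Both endpoints reflect the 2012-era APIs (and have since changed), and the exact ViralHeat path and response fields are assumptions from memory, so confirm them against the ViralHeat docs.

```python
import requests

VIRALHEAT_KEY = "YOUR_API_KEY"  # placeholder -- use your own key

# Pull recent tweets matching a search query (2012-era unauthenticated
# Twitter search endpoint, since retired).
tweets = requests.get(
    "http://search.twitter.com/search.json",
    params={"q": "mozcon", "rpp": 10},
).json()["results"]

# Score each tweet's sentiment via ViralHeat (path and fields assumed).
for tweet in tweets:
    sentiment = requests.get(
        "https://app.viralheat.com/social/api/sentiment/review.json",
        params={"api_key": VIRALHEAT_KEY, "text": tweet["text"]},
    ).json()
    print(tweet["text"][:60], sentiment.get("mood"), sentiment.get("prob"))
```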

sentiment screen shot

Catching Up With Structured Data at SMX Advanced

June 8th, 2012
by Matthew

Here’s my presentation from SMX Advanced 2012 on structured data use, tools, and how it fits into the SEO ecosystem. I also included a bit of background on how to get started with semantic technology:

SMX Advanced 2012 – Catching up with the Semantic Web

One follow-up question I’ve been asked a lot since the presentation is what tools and solutions there are for setting up an RDF database, since that’s the natural first step once you get a handle on writing SPARQL queries, as well as what tools exist for converting data sources into linked data objects.

A good general list of updated semantic web development tools can be found at http://www.w3.org/2001/sw/wiki/Tools, which is also a good place to find RDF browsers.

URIBurner is a utility that takes URLs as data sources, runs them through its middleware, and then gives you output in a variety of data formats that conform to various standards (CSV, Turtle, RDF/XML, JSON). For example, here’s the output for this blog post:

URIBurner output for this AudienceWise post

One of the newest RDF database options is Stardog. It’s commercial grade, and I’m not sure of the fee structures yet, but it is designed for midscale use (re: speed) rather than NASA-level use (re: size). Given the people behind it, this might end up being the ‘web scale’ RDF database that provides the hooks most web publishers are used to working with.

4Store is an RDF platform released under the GNU license.

Mulgara is an open source RDF database written in Java.

Sesame is probably the most popular consumer-level framework for working with RDF data. It supports connections to remote databases via API, the full SPARQL query set, and compatibility with most RDF file formats (RDF/XML, Turtle, N-Triples, etc.).
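If you just want to kick the tires before standing up a database, a small in-memory graph is enough to start practicing SPARQL. Here’s a minimal sketch using Python’s rdflib library; the data and URIs are made up for illustration:

```python
import rdflib

# A tiny Turtle document describing a blog post (made-up data).
turtle_data = """
@prefix dc: <http://purl.org/dc/terms/> .

<http://example.com/posts/semantic-web>
    dc:title   "Catching Up With Structured Data" ;
    dc:creator "Matthew" .
"""

g = rdflib.Graph()
g.parse(data=turtle_data, format="turtle")

# A first SPARQL query: every resource with a title, plus its creator.
query = """
PREFIX dc: <http://purl.org/dc/terms/>
SELECT ?post ?title ?creator
WHERE { ?post dc:title ?title ; dc:creator ?creator . }
"""

for post, title, creator in g.query(query):
    print(post, "|", title, "|", creator)
```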

Hopefully this will help you get started with rolling your own structured data.

 


Twitter Data Scraping and Insights Using Excel

June 7th, 2012
by Tim Resnik

This is a follow-up to a post I did a few weeks ago about using Excel to scrape key user data from Twitter. In that post I set up a spreadsheet that was tracking 20 of the most followed people in the SEO industry. After collecting that data on a daily basis over the last few weeks, I am going to use this post to demonstrate some basic analysis that can be done. It is not meant to be an Excel tutorial. For one of those, I recommend you check out Distilled’s guide on Excel for SEO, and of course Richard Baxter has a wealth of Excel tips and tutorials on his blog.

Step 1: Format Your Data

Here’s the spreadsheet that goes along with this post - Twitter Scrape Analysis – Excel

I’ve been collecting Twitter data on these 20 SEOs for about 30 days. That’s about 600 rows of data in my Excel spreadsheet. Pretty modest from a data analysis perspective, but we still need to make sure that things are formatted cleanly. There are really only two things we’ll be doing with the data: 1) adding a few columns for daily trending, i.e. the number of followers gained or lost per day, and 2) creating a pivot table. In order to add the trending we are going to do some data sorting, so I suggest that you put your data into a table (select all the data and press CTRL + L).

Step 2: Basic Analysis Inline with your Data

The spreadsheet of our data is attached, but for quick review, our columns look like this:

data shot of twitter handles

 

There are two columns I added next for trending: one calculating the daily followers gained or lost, and one counting the number of tweets broadcast. You could add similar columns for following and listed. Next, I use a quick-and-dirty formula to calculate the daily trend. There is probably a more elegant solution, but I found this to work, so I went with it (a scripted equivalent follows the list below):

  • Sort by handle and then by date (oldest to newest)
Excel Sort
  • Formula: =IF(Table2[[#This Row],[handle]]=A1,Table2[[#This Row],[followers]]-C1,"start"). This formula checks whether the current row and the row above belong to the same handle. If they do, it subtracts the previous day’s follower count from the current day’s, giving you the day-over-day difference; if they don’t, it writes “start” to mark that handle’s first day.
  • Clean-up: this is an important step. Copy the entire table and paste values in order to overwrite the formulas with the values. If you don’t, the numbers will change when sorting. Next, sort the follower gain/loss column and delete all the fields that say “start”. We want to make sure we only have numbers in this column so the pivot tables and graphs translate properly when we are building our visualizations.
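If you’d rather script this step, here’s what the same computation looks like in Python with pandas. This is a minimal sketch; the file name and column names (handle, date, followers, tweets) are assumptions based on the spreadsheet described above, so adjust them to match your sheet.

```python
import pandas as pd

# Assumed file and column names -- adjust to match your spreadsheet.
df = pd.read_csv("twitter_scrape.csv", parse_dates=["date"])

# Sort by handle, then by date (oldest to newest), mirroring the Excel sort.
df = df.sort_values(["handle", "date"])

# Day-over-day change per handle. groupby().diff() leaves NaN on each
# handle's first row, playing the same role as "start" in the Excel formula.
df["follower_gain_loss"] = df.groupby("handle")["followers"].diff()
df["daily_tweets"] = df.groupby("handle")["tweets"].diff()
```

Unlike the Excel version, there’s no clean-up pass: the computed values don’t shift when you re-sort a DataFrame.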

Step 3: Visualize

Let’s make the data more meaningful and present it in a simple way that could bubble up some insights. There are a hundred different tools and visualization methods that you could choose, but I am going to keep it very simple and use a pivot table for this example. The main benefit of a pivot table is to summarize like data points; in this case we want to see basic trending information for each Twitter handle that we are tracking.

In the data tab select the whole table, and under the “Insert” ribbon tab click on “PivotTable”. A new pivot table tab will be generated with the field selector open. In the field selector, drag handle into the Row Labels area and followers into the Values area, then add follower gain/loss and daily tweets to the Values area as well. Make sure to select “Value Field Settings” and then select “Average” for each. Excel defaults to SUM, which would provide the sum of all the entries in the data table for each handle.

Excel Pivot Table Field Settings

To make it more readable, sort by followers: click “More Sort Options” next to “Row Labels”, then select “Descending (Z to A) by:” “Average of Followers”:

Excel Pivot Sort

 

Next, add some conditional formatting to each row so some of the outliers pop out. You have to format each row separately, or it will use the largest number (in this case dannysullivan’s followers) as the baseline for all fields.

Excel Conditional Formatting

The keen observer has probably noticed that using follower gain/loss as a measuring stick to compare one Twitter account to another is not an apples-to-apples comparison. Followers beget more followers, and Danny Sullivan and Matt Cutts have the advantage in this group. What is more telling is to normalize the data and compare each account’s average daily follower gain to its total follower count. This can be done by selecting a single field of the pivot table under the ‘Average of follower gain/loss’ column and then selecting Options (in the ribbon) > Formulas > Calculated Field. It looks something like the below: simply double-click ‘follower gain/loss’, divide it by ‘followers’, and hit OK.

Excel Calculated Field
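For reference, the pivot, the sort, and the calculated field above come out to a few lines of pandas, continuing the earlier sketch (again, the column names are assumptions):

```python
# Continuing the earlier sketch: df holds one row per handle per day.
pivot = df.pivot_table(
    index="handle",
    values=["followers", "follower_gain_loss", "daily_tweets"],
    aggfunc="mean",  # matches the "Average" value field setting in Excel
)

# Sort descending by average followers, like the pivot table sort above.
pivot = pivot.sort_values("followers", ascending=False)

# The normalized "calculated field": average daily gain as a share of followers.
pivot["gain_rate"] = pivot["follower_gain_loss"] / pivot["followers"]
print(pivot)
```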

Make sure to format the new column as a percentage, or it will just round off to zero, which is not a very useful insight. Your end product should look something like the below, and it will provide some insight into which handles are the outliers in your group.

twitter trending visualization

In this case there are a few things that stand out, including the fact that Aaron Wall has been losing followers at the fastest rate. He also has not tweeted in nearly a month. Is his lack of engagement with his followers causing unfollows? It could be, but correlation is not causation, as our friends at SEOmoz remind us. A deeper dive into the data may help us reveal something… Ah-ha! In looking at the raw data we see that aaronwall got dropped by 284 people in an 8-day period (note: he no longer uses this account to tweet, favoring the SEObook account instead, but the decrease in followers over this 8-day period was dramatically higher than on the other days in this data set). In doing a little digging, a blog post by Aaron Wall on SEObook entitled “Educating the Market: Is outing & writing polarizing drivel hate baiting or a service to the community” caused a bit of an uproar. Could the post on SEObook have had a negative impact on the @AaronWall Twitter profile? A similar analysis could be done by a publisher on a journalist who wrote a controversial piece of content. However, to really understand what’s going on we need to explore tone and sentiment. I’ll be mashing this data up with open-source sentiment analysis tools in future posts. Stay tuned…


Mining Twitter Data with Excel and other tips for Twitter Analytics

April 23rd, 2012
by Tim Resnik

We work with several publishing clients that manage a multitude of Twitter accounts. We frequently run across the problem of spending too much time compiling a large cross-section of key Twitter stats for multiple accounts. For example, we have a client that has over 20 Twitter accounts for unique brands. Trying to collect and analyze follower and status update counts is a tedious manual process, and not recommended.

There are two methods to accomplish data scraping from Twitter:

1. Develop a web application using the Twitter API

2. Use a mashup of tools mixed with Excel or Google Docs.

Building a web application to perform this scraping function has obvious financial and time barriers, but may be worth the effort or licensing fee in the long run.

Using Google Docs and ImportXML can handle this function nicely and has been covered thoroughly (I mean thoroughly) by Distilled in their ImportXML guide. Also, John Doherty makes good use of the API and Google Docs in the link prospecting tool he created. What I want to cover in this post is how to collect basic Twitter following/follower stats across several accounts using Excel and Niels Bosma’s badass plugin: SEO Tools for Excel. Since the SEO Tools plugin is doing all the heavy lifting in this example, Niels should really get all the credit.

Step 1: Install SEO Tools for Excel

There are a lot of other sweet features, but for this example we are just going to use the scraping function:

 SEO Tools for Excel

Step 2: Build a list of Twitter handles that you would like to collect via the Twitter API

First you need to figure out whose data you are interested in grabbing. This could be your competition, your employees, different products within your company, punk bands, or whatever group of tweeters you want to analyze. You can use services like WeFollow and Listorious to generate ideas. Compile the list and put the handles in the first column of your spreadsheet (don’t include a preceding @). With this particular XML feed, here are some of the relevant things you can pull:

  • ID: the numerical unique identifier for the account. This is a handy key to have because it will never change while the screen name can be changed by the owner of the account
  • Name: the name of the person who registered the account
  • Location: the location listed on the account’s profile
  • Description: the user-created description for the account that shows up right under the screen name on Twitter.
  • Profile img URL: the location of the profile image for that account
  • Followers count: number of followers that this account has
  • Friends count: number of accounts that this account is following
  • Created at: the date and time that this account was created
  • Status count: number of tweets since the account was created
  • Listed count: the number of times that this account has been included in other accounts’ lists
Or, check out all the gory details.

Here is what my column headers look like:

 Excel Column Headers

You can download the example XLS file here. The formulas are all pre-populated so all you need to do is enter the Twitter names (30 max due to API rate limiting) that you want the information for in the “Twitter Handle” column. If you want to build it out yourself, below are the details.

Step 3: Construct the API Request URL (sheet 2)

We must create the proper URL to be able to pull this information from Twitter. In this case we are using the Twitter REST API resource: GET users/show. This is one of a handful of API calls that does not require user authentication.

There are two required components to the URL and one optional one.

Required:

  • Request URL: https://api.twitter.com/1/users/show.xml?screen_name=

+

  • Screen name: the Twitter handle you would like information for

So, it will end up looking like this:

https://api.twitter.com/1/users/show.xml?screen_name=mattcutts

Optional: include entities. This API request returns the latest tweet for the requested user. If you would like to include information about that tweet, such as user mentions, hashtags, or associated URLs, then also append the following to the end of the URL: &include_entities=true. The final URL would look like this:

https://api.twitter.com/1/users/show.xml?screen_name=mattcutts&include_entities=true

We now have to construct the URL for each handle. In sheet 2 of the spreadsheet you can see that I am simply concatenating two fields: the Twitter handle from sheet 1 and the base URL in the current sheet.

Step 4: Write Your Formulas and Populate the Array

Nothing too fancy here, thanks to SEO Tools for Excel. There is a pre-built function that does all the heavy lifting. You can select either the XPath (reads XML) or the JSON option with the Twitter API. In this example we will use XML.

The XPath function has two inputs: the URL that returns the XML file, which we constructed above and is in sheet 2, and the instructions for selecting the right information in the file (the proper node). Here is a very basic write-up of how to use XPath to select nodes within an XML file.
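If you want to sanity-check a call outside of Excel, here is a short Python sketch of the same request and node selection. Note that this unauthenticated v1 XML endpoint has since been retired by Twitter, so treat it as illustrative of the approach rather than something to run today.

```python
import requests
import xml.etree.ElementTree as ET

# The same call the spreadsheet constructs in sheet 2.
handle = "mattcutts"
url = "https://api.twitter.com/1/users/show.xml?screen_name=" + handle

root = ET.fromstring(requests.get(url).content)

# Select the same nodes used for the spreadsheet columns.
for node in ("id", "name", "location", "description", "followers_count",
             "friends_count", "statuses_count", "listed_count", "created_at"):
    print(node, "=", root.findtext(node))
```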

 

Once you have your formula written for each column, fill in the rest of your array and wait a bit as the data loads. The Twitter REST API is limited to 150 requests per hour, and each cell in the array you have just created is an API call. In this example we have 5 columns and 25 Twitter handles, so filling out this array once takes 125 calls, just 25 short of the hourly cap.

Step 5: Some Basic Analysis

Now that we have an array of data for a group of Twitter users we can do some very basic analysis. In follow up blogs, I will collect the data over time and do a little deeper look at trending analysis.

In sheet 3 you will find a few graphs based on this data. It is important to note that the data in sheet 1 is formatted as text because Excel is keying off of the formula. This makes analysis difficult on that tab itself. I copied and pasted the values to sheet 3 (Copy > Paste Special > Values) and then converted the text to numbers.

I wanted to compare the number of followers juxtaposed with the average number of tweets per day. Twitter returns the date as (day of week, month, day, time stamp, and year). In order to format the date so Excel can understand it, you must pull the right information out of the ‘Date Created’ field:

Format Dates in Excel

Excel still can’t understand it, because it is reading the formula instead of what is visually shown in the cell. To get it to work, use Copy > Paste Special > Values to overwrite the formula, then go to Format Cells > Date and format as 04/23/2012. Create a new column called Avg. Tweets per Day. You now need to figure out how many days have passed since the account was created and divide the total number of tweets by that.

Average Tweets per Day
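If you’re scripting instead, Python’s datetime can parse Twitter’s created_at string directly, and the same division gives you the average (the statuses count below is a hypothetical value):

```python
from datetime import datetime, timezone

# Twitter's v1 created_at format, e.g. "Mon Apr 23 18:01:00 +0000 2012".
created = datetime.strptime("Mon Apr 23 18:01:00 +0000 2012",
                            "%a %b %d %H:%M:%S %z %Y")

statuses_count = 24500  # hypothetical tweet total pulled from the API

# Average tweets per day: total statuses divided by the account's age in days.
age_days = (datetime.now(timezone.utc) - created).days
print(round(statuses_count / age_days, 2))
```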

Next, you can select the data in the Followers and Avg. Tweets per Day columns to generate your graph. I won’t go over the details on chart formatting here, but here is the end result:

followers to tweets graph

 

Follow-up post: Twitter Data Scraping and Insights Using Excel

 

Obama Tries for Geek Cred with ASCII Art

March 9th, 2012
by Tim Resnik

It’s not exactly like he came up with the idea, but at least he didn’t stop it.

If you view the source of barackobama.com, you will see the following piece of ASCII art commented at the top of the HTML doc.

Obama ASCII art

 

The technological difference between Obama’s and Mitt Romney’s websites is stark. Some may even say symbolic, but that’s not for this type of blog. Barry’s web team is using HTML5 and has enough time (and money) on hand to make ASCII art. If you jump over to Mitt Romney’s site and see what is going on, you’ll notice that it is coded up in a more antiquated version of HTML (XHTML 1.0) – not quite cutting edge. It also uses the free, open-source content management system Drupal (great product, by the way), which if nothing else is financially in line with fiscally conservative values; perhaps Newt should have taken a page out of this book.

 

On a separate but somewhat related note: in the 2008 election we saw the Barack Obama campaign leverage social media very effectively to build a grass-roots campaign that raised a record amount of money. I predict that in this election we will see the Obama campaign innovate using geo-social/mobile applications. After all, he did join Foursquare a few months back.

How to implement Rel=”publisher” and Musings on the Authorship Markup Landscape

February 8th, 2012
by Tim Resnik

Shortly after the Google+ beta launch in July of 2011, Google began promoting authorship markup to webmasters, publishers, and bloggers. The markup enables Google to semantically build connections between disparate pieces of content and the individuals who wrote them (provided they have a Google+ profile). You might be saying: well, that’s all well and good for Google, but what do I get out of it? Google’s answer to that question today would be: you *may* receive authorship information along with your listings in the search results, such as a headshot, rich snippets from your Google+ profile, and even your own author SERP (such as Bianca’s below). Their likely answer tomorrow: it will be used as a key cog in determining “Author Rank,” which will greatly influence rankings and the SERP landscape. (Here is a nice piece by John Doherty discussing Author Rank.)

Example of SERP when Authorship Markup is Implemented Properly

Now to the three authorship tags (a combined markup sketch follows the list):

1. Rel=”author”: a link, usually from the byline, from a piece of content to the profile page of the author who created it.
2. Rel=”me”: a link from the author’s profile page to the author’s Google+ profile, plus a reciprocal link back from the author’s Google+ profile, under “Contributor to”.
3. Rel=”publisher”: a link in the <head> of the webpage to an organization’s Google+ Page.
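Pulled together, the three tags look roughly like this. This is a minimal sketch; the domain, paths, and Google+ IDs are placeholders, not real values:

```html
<!-- 3. On the site's home page: link the site to its Google+ Page -->
<head>
  <link rel="publisher" href="https://plus.google.com/YOUR_PAGE_ID/" />
</head>

<!-- 1. On an article: link the byline to the author's profile page -->
<a rel="author" href="http://www.example.com/authors/jane">Jane Doe</a>

<!-- 2. On the author's profile page: link out to the author's Google+
     profile (with a reciprocal "Contributor to" link back from Google+) -->
<a rel="me" href="https://plus.google.com/YOUR_PROFILE_ID/">Jane Doe on Google+</a>
```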

I’m going to focus on the implementation of the publisher tag in this blog. To learn more about the other two check out AJ Kohn’s very thorough write-up on implementation steps, or check out these other resources: Google’s official guidelines (recently made a lot easier by allowing an email address verification from G+ to be used in place of rel=”me”), WordPress implementation, Matt Cutts YouTube video explaining authorship markup.

We know the value of the “author” and “me” markup, but what is the value of the rel=”publisher” tag? Again, the answer today may be a little different than the answer tomorrow. Today, it makes your site *eligible* for Google Direct Connect, which is a navigational search using “+” followed by the organization name that sends the searcher directly to your Google+ Page. For example, if you do a search for +Pepsi, instead of seeing a search result you will be navigated directly to Pepsi’s Google+ Page. At this point, eligibility is determined algorithmically by Google based on relevance and popularity; if you don’t think you qualify, you probably shouldn’t implement it yet. I have recently seen several branded SERPs that include Google+ page information right below the sitelinks. I am not sure if this is a direct result of the rel=”publisher” verification or some other algorithm. NYTimes.com has it, yet CNN does not, and neither site has rel=”publisher” implemented:

Google+ Page Showing up in Publisher SERP

 Implementing rel=”publisher” is not exactly a tough coding job, but there are a few quirks and incongruities.

The first step is to determine if you need the rel=”publisher” tag. If you have a high traffic content-rich website AND a Google+ Page for your business (not to be confused with a personal page on Google+), then rel=”publisher” is the markup that you want to use to let Google know that your site LOLcorp.com owns the Google+ page LOL Corp.

Next, add the rel=”publisher” tag to the <head> of your homepage. Google has a tool where you can generate the code and a Google+ badge for your site. This is the step where the waters get a little murky for me, and perhaps ESPN, but I’ll get to that in a minute.

At AudienceWise we work with clients that have many sub-brands, sometimes on a single domain. The instructions from Google are to put the code in the <head> of the document of your “main page.” However, they have not been clear about using multiple rel=”publisher” tags on a single domain. I have scoured the Google forums, as well as reached out directly to a few folks, but to no avail. No one seems to know for sure.

Undeterred, I looked around to try to find an analogous situation and came across the ESPN implementation. As far as I can tell, ESPN has two verified Google+ pages: NBA on ESPN and ESPN. I first checked ESPN.com for the rel=”publisher” tag and did not find it. I was then a little surprised to find it on the NBA page, but noticed that it was in the body and not the head. ESPN even left Google’s commented-out instructions in place:

It’s not surprising, then, that ESPN NBA, a site that should be eligible for Direct Connect, is not triggering direct navigation to its Google+ Page.

Once you have figured out the right place to put the tag, you can optionally put the G+ badge anywhere in your document. Next, make the connection from your Google+ page to your webpage. Simply select ‘edit’ and navigate to the ‘about’ tab and add your website. Make sure to use the canonical version of the url, or it won’t work. For example, www.pepsi.com is the canonical location of the website, not pepsi.com or subdomain.pepsi.com.

You should be ready to test at this point. Jump over to the Google Rich Snippet testing tool and see if Google likes you or not. If you have already implemented your rel=”author” and rel=”me” tags, and they exist on the same page as your rel=”publisher” tag, you will get the warning below. However, Google has confirmed that this is just a bug, and you can indeed have both tags on the same page. In fact, Mashable receives this error in the testing tool (but Direct Connect works), so this is evidently not a problem.

At a glance, authorship markup seems a bit insignificant in the grand scheme of Google changes in the last year: Search+, the freshness algo, “secure search”, continued Pandalties, a massive privacy policy overhaul, and Google+ Pages for businesses. However, a time will come when these tags (and other microformats) become increasingly important in rankings and SERP display, so it will likely pay off to be ahead of the curve and get it done now. Hopefully Google will provide clearer implementation guidelines, better testing tools, and equal inclusion for the publishers in the middle class.


Interview with Matthew Brown by PPC Associates

November 28th, 2011
by Tim Resnik

PPC Associates posted an interview with AudienceWise co-founder Matthew Brown about the current state of SEO and inbound marketing. 

Questions include:

1. Please tell me your background and what you do for a living.

2. Please complete the phrase “PPC is to SEO as _____ is to ______.” (And please explain your answer.)

3. If businesses are raking in money via paid search, why should they care about SEO?

4. Many objections to SEO revolve around the indefinite, unpredictable nature of the results (which contrasts with the highly precise ROI from PPC). How would you answer that?

5. How can an SEO client determine whether a prospective SEO provider is knowledgeable and capable of achieving excellent results for them?

6. What is a typical SEO engagement for you?

