
Free SEO Analysis of Your Site

If you want my personal help optimizing your website, please contact this Richmond SEO company and I will provide a free initial SEO analysis of your site!


Optimize My Website

Welcome to OptimizeMyWebsite.com. My goal is to provide the information you need to understand the Search Engine Optimization (SEO) industry so that you can either optimize your own site or knowledgeably hire someone to perform this service for you.

SEO is more than a one-time, do-it-and-forget-about-it service. SEO requires continued updates, tweaking and loving care. The industry is ever-changing, and what worked yesterday for good search engine positioning may not work today.

Please check back often. We strive to provide up-to-date optimization tricks that are endorsed by the search engines themselves, and we try to keep our article pages current with important happenings. There is also a dictionary section for when you come across a term that you are not familiar with.

We hope that you find this resource useful.

Convertable Contact Form Tracks the Source of Each Lead

There is a cool new tool called Convertable that you should probably check out. It automatically tracks the source of every web lead submitted through a contact form, so you can then follow the lead all the way through to the end of the sales cycle. There is also a Convertable WordPress plugin, which you can see at http://convertable.com!

17 Myths About SEO in 2013 – Free eBook Download!

HubSpot just released a Free eBook entitled “17 Myths About SEO in 2013.” This is the first step I would recommend taking in learning how to optimize your website in today’s world. Check it out below!

Just take a look at 17 SEO Myths You Should Leave Behind in 2013 brought to you by HubSpot.com. This ebook debunks the most common myths and assumptions about how SEO works.

Avoiding Spam-like Techniques When You Optimize Your Website

Definition of Spam: When speaking of search engines, spam is loosely defined as any technique used to give your web page(s) an unfair ranking advantage over other pages.

The definition is pretty vague, and thus changes depending on who you ask. For our purposes, let us add to the above definition: anything not easily seen by the visitor (invisible text, or pages that load faster than can be detected) or any automated process used to trick the search engines.

If your work can be viewed by both search engine and site visitor, reads like well-written text and is not overdone, then you are OK.

What Search Engines are Looking For
The search engines are not allowed to show obvious favoritism. So, in order to ensure that the sites that deserve to be listed first actually are, they create rules that apply to every page on the internet and that can separate out those that truly belong on top. For instance, when a user searches for a computer operating system, the search engines need to display companies like Microsoft, Apple, Sun and IBM first so that the user gets the best return for their search. But the search engines can NOT just put whoever they want on top. So, they build mathematical algorithmic processes that analyze all sites the same way and return values that ensure those that should be on top are.

Since these search engine processes are automated, and thus prone to error, people have found loopholes in the system and take advantage of them when they optimize their websites, giving their sites unwarranted positioning. As time goes on, the engines add more equations to filter out these mistakes. If a site is obviously using spamming techniques, they either ignore the technique or ban the site (ouch!). Some spamming techniques require too many computer resources on the search engines' end to filter out, so for now they are allowed… at least until the engines get more powerful computers. Of late, by banning sites that use obvious spamming techniques, the engines have effectively put many SEO companies out of business.

Some Known Spamming Techniques that are Blocked
If a website page is on the topic of Dalmatian Puppies, you would rightly see the phrase Dalmatian Puppies within the title of the page or site, in the meta-description tag, within headings, and used throughout the page. This would all be acceptable and even encouraged. But if the title were in all capital letters and repeated the phrase 4 times in succession, or the phrase were used many dozens of times within a couple hundred words of text and always highlighted or made bold, then you would be spamming.

Some other techniques to avoid when you optimize your website:

The over-use of meta tags
Text that is the same color as the background
Graphic links that occupy a single pixel
Multiple pages that load faster than you can see

You get the idea. If your site is worded with proper, well-written copy and nothing is hidden, then you have a good page.

Latent Semantic Indexing or Silo Structures

Never have I seen so much confusion in explaining such a simple process. Siloing methods are even referred to as “Advanced SEO” tactics by many in the industry. Well, you be the judge as to the complexity of this method.

Simply put: divide the content of your website up into subjects. Create a folder for each subject, containing a main definition page (a folder index page) that explains that subject. Now, have that definition page link to more pages in its folder that further describe the subject, and have each of those pages link back to its main definition page and to the site index. Voila, Siloing.
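
As a concrete sketch of the scheme above (the folder and page names here are invented for illustration), a small Python script can spell out exactly which internal links each page in a silo should carry:

```python
# Hypothetical silo map: each subject folder holds detail pages plus an index page.
# All folder and page names are made up for illustration.
site = {
    "dog-training/": ["clicker-basics.html", "leash-manners.html"],
    "dog-grooming/": ["coat-care.html", "nail-trimming.html"],
}

def silo_links(site):
    """Return the internal links each page should carry under a simple silo scheme."""
    links = {"index.html": []}  # the site index links down to every folder index
    for folder, pages in site.items():
        folder_index = folder + "index.html"
        links["index.html"].append(folder_index)
        # the folder index page links to each detail page in its folder
        links[folder_index] = [folder + p for p in pages]
        # each detail page links back to its folder index and the site index
        for p in pages:
            links[folder + p] = [folder_index, "index.html"]
    return links

for page, targets in silo_links(site).items():
    print(page, "->", targets)
```

Running this prints the link map: the site index points down into each subject, and every detail page points back up, which is the whole trick.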

If your website covers extremely different subjects, then build the site with sub-domains for each of these VERY different subjects. Or rather, use different sites with good domain names if your subjects are that different from each other. Remember, though, that each sub-domain is treated as a different site, so only go this route IF you have hundreds of pages of content for each sub-domain, as each has to stand on its own with the search engines.

Skip this paragraph unless you really know a lot about SEO and HTML… Those who were interested in SEO tactics when Google was still primarily using PageRank as its main defining algorithm might remember the ranking calculator at http://www.webworkshop.net/pagerank_calculator.php. This calculator was used to estimate PageRank per page and per site based on internal page and external site linking. Using it, you can tweak the internal linking structure of your site so that the pages you want rank the way you want, based on the total rank a site possesses and its number of pages. Granted, this can get complex, so only investigate the calculator if you want to go crazy and micro-manage your individual page ranking; since this relies on technology that Google NO longer uses as its main ranking algorithm, it will probably be a waste of time. Doing the above (as in, above this paragraph) will probably be more than enough for implementing Siloing.

In years past, the issue found, and thus the need for Siloing, was that the search engines had trouble defining the overall purpose or subject of a website, and in some very unique cases they may still have trouble. The search engines have become much smarter over the years, though, so the sub-domain method will probably be overkill. If you are looking at rebuilding your website to use the sub-domain method, you may even run the risk of each sub-domain being put into Google’s sandbox, as sub-domains are treated as separate sites. Here again, Google is getting smarter and starting to relate sub-domains to each other, but there are still oddities in Google’s cross-relating. As an example, consider Amazon’s and Microsoft’s websites. They get away with a lot of differing subjects with little to no Siloing, and the search engines do fine with these sites.

As is the case with everything, you can overdo Siloing. Examples would be using too many levels of folders or not having enough pages for a given sub-domain.

Good organization is key, as is NOT overdoing optimization.

Web Analytics – Website Traffic Statistics

Site traffic statistics are an important part of any SEO program as they provide proof of visitor traffic. Most hosting companies provide a free stats option or include any stats fees in the hosting package. If you are not sure how to view your site’s stats then please make the effort to contact your hosting provider and inquire as to your options.

There are a few things to keep in mind when viewing your stats, and a few options to consider when choosing a stats program and/or whether to upgrade your current program.

Any stats program is going to display the number of visitors, hits and bandwidth that a site sees. What is interesting is that not every stats program defines a visitor in the same manner. For instance, is an automated, non-human computer inquiry from one of Google’s spiders considered a visitor? I personally only consider living, breathing humans as visitors, but not all stats programs agree with me. Here is an example of what I am talking about. For the month of November 2007, this website saw a record number of visitors. But I am not exactly sure what that number is. I used 3 stats programs on this website:
awStats – this program works with this site’s server logs and provides what I believe are the most accurate numbers
joomlaStats – this site used Joomla to build its pages dynamically and these stats should be as good as awStats, but as you can see below, they are not
Google Analytics – any page visited on the site triggers a javascript which forwards information to Google

In my opinion, each of these three programs should display the same visitor information, as all three know the difference between human visitors and automated program visitors such as Google’s spiders. But I was wrong. Look at these numbers for the month of November 2007:
awStats – 1780 human visitors
joomlaStats – 907 human visitors
Google Analytics – 423 human visitors

Yeah, I was surprised too. Now I understand that it is difficult determining unique visitors from return visitors, so I forgive these numbers:
awStats – 833 unique visitors
joomlaStats – 684
Google Analytics – 358
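
Much of the discrepancy comes down to how each tool decides who counts as human. A minimal sketch of the server-log approach, assuming Apache's "combined" log format and a deliberately incomplete list of bot markers (real tools use far longer lists):

```python
import re

# A tiny, incomplete list of substrings that identify common crawlers.
BOT_MARKERS = ("googlebot", "slurp", "msnbot", "bot", "spider", "crawler")

# Apache "combined" log format: IP ident user [date] "request" status size "referer" "user-agent"
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[.*?\] ".*?" \d+ \S+ ".*?" "(.*?)"$')

def count_human_visitors(lines):
    """Count unique IPs whose user-agent does not look like a known bot."""
    humans = set()
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue  # skip lines that do not match the expected format
        ip, agent = m.groups()
        if not any(marker in agent.lower() for marker in BOT_MARKERS):
            humans.add(ip)
    return len(humans)

sample = [
    '1.2.3.4 - - [01/Nov/2007:00:01:02 -0500] "GET / HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows)"',
    '5.6.7.8 - - [01/Nov/2007:00:02:03 -0500] "GET / HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
]
print(count_human_visitors(sample))  # prints 1: the Googlebot hit is filtered out
```

Every stats program draws this bot/human line differently, and differently again for unique-versus-return visitors, which is exactly why the numbers above disagree.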

Quite a difference.

Each of these programs offers different strengths and different levels of accuracy. My recommendation is to pick a stats tool and stick with it. All three programs show very similar gains and losses when viewing monthly totals over the course of a year, so each is accurate in tracking trends even though the numbers themselves vary greatly between them. The strengths, though, are worth considering. Here is a basic breakdown:
awStats – compiles every morning and gives the most data, tracking the most amount of raw options.
joomlaStats – real time tracking allowing you to see when someone is currently looking at your site and at which page, even showing the order of pages viewed.
Google Analytics – works with Google’s Pay Per Click ads data, incorporating the PPC stats with the visitors. Not real time, but close; often about 30 minutes delayed. This program also displays a nice overlay so you can visually see where people click on your site pages, and it is getting better all the time. The one thing I wish it provided is a bot visitor count.

Whatever program you choose, use it understanding that program’s strengths. If you ever decide to change programs, allow the two programs to run together for a few months and analyze the differences in their numbers. It would not be fair to quote traffic numbers using one program for a month against traffic numbers from a different program for a following month.

I never like to promote products, but this one is too fun not to mention: VisitorVille. Your site pages are viewed as buildings, with people walking around your site/town. Pages viewed most often are represented by larger buildings. If a visitor arrives after finding your site from a search engine, they arrive in a bus with the name of the search engine painted on its side. It also integrates PPC options and tracks visitors purchasing your products and even returning for later visits. You even have the option of live chatting with any particular visitor, all in real time. Analytics meets SimCity; very fun and very useful… but at a monthly price.

Submitting a Website to the Search Engines

If you already own your website, then you have received the spam emails promising submission to thousands of search engines. Or you may have looked at the tools that automatically submit for you. You could actually be placing your site in danger by using these services. Some of the search engines and directories can tell when a submission was made with a tool instead of by hand, and they will either ignore the effort or worse. The important engines require extra steps that make automated submission impossible. Simply put: avoid these auto tools.

The Submission Myths
One of the more popular advertisements promises submission to 50,000 search engines. Sounds good, right?

In reality, there are only a handful (less than a dozen) of search engines that will bring you traffic. Add to these a few (as in 3 or 4) directories and you have well over 95% of the search market at your disposal. The thousands of additional engines are made up of free-for-all (FFA) link pages or special interest webpages.

Submitting your web page to thousands of FFA sites means that you will end up with thousands of spam email messages.

In addition, you will not receive a single visitor when you submit your website about office equipment to a directory of artists in Antarctica.

Who do I Submit to? And How?
As always, it is quality that matters, not quantity. For this reason, you should concentrate on the popular search engines when you submit your website.

Here things actually get easier. As is true with most major corporations, search engines purchase one another and eliminate competition. Here is a rundown of the more important engines in the US of A:

Google: This engine supplies results to AOL & Netscape. From Google’s main page they have directions to their free submission tool, and by submitting once you just submitted to AOL & Netscape too.
MSN: These guys are running solo and are now self-sufficient. As with Google, use MSN’s main page to find their free submission tool.
Yahoo: The corporate buyout king. Yahoo now owns Inktomi, AltaVista, AlltheWeb, Overture and some others. Like the previous engines, this submission is also free, although they also offer paid submissions ranging from $100 to $300.
AskJeeves: Inclusion here is done through the Teoma directory for a fee.
Foreign Engines: Each country has its own local search engines, and even the ones listed above have branches located in different regions. Submission to these should not be overlooked if you sell products or offer services outside the US of A.
Open Directory/DMOZ: This is the main free directory. The trick here is to already be getting results from the above engines before submitting to DMOZ. This directory then boosts your ranking score with AOL, Google, AskJeeves and others… at least in theory. In the days of Yahoo being king this was a powerful tool; now DMOZ is just kind of there.
Specialty Directories: There are countless subject driven directories for every imaginable topic. Some are well used by members in that field and well worth submitting to. You will need to find these on your own.
PPC: Overture, Google, Look Smart and others offer sponsored listings or Pay-Per-Click campaigns. If you are selling product then this could be a definite consideration, as would Google’s Froogle engine or eBay.

Submission is quite simple: fill in the blanks provided and maybe pay a fee. For the PPC type of submission, an account will need to be applied for and then maintained, which is a little more work.

There are two other options available for the more cost-conscious user:

Linking: This is a good idea anyway for further maintaining and building your site’s ranking. But initially, if another site has a link pointing to your site (this is called a backlink) and that site is known by the search engines, then the next time the search engines spider that site they will find the reference to yours and freely add you to their submission list. This is absolutely the best way to be submitted. And do not stop at one backlink; get as many as you can.
Toolbars: I hate this option, but it works well. If you add a Google bar to your Windows Internet Explorer program, then Google will know everywhere you go on the internet, thus giving all your destinations free submission. Most of the search engines have their own toolbar. This is free, but now big brother(s) know everything you do when browsing.

In Conclusion
Submission is a must for your site to be known by the search engines. But please do not stop with only submission. If your site is not search engine friendly then the submission is worthless.

Backlinking – What are Backlinks and Why Do You Need Them for SEO?

Backlinking, by definition: also referred to as inbound links, these are links from another website to your website.

Backlinking Usage
Encouraging websites to link to your site maintains and builds your site’s ranking and is a great way to submit a website to the search engines. Google’s Sandbox delay can even be minimized, to a degree, by using backlinking properly. As such, backlinking is an incredibly powerful tool for any Search Engine Optimization campaign.

Some site owners, viewing their website server logs, have noticed that other sites have linked to theirs, which would normally be something to be happy about. The web was designed to share information, linking sites together so that when information on one website is updated, everyone linking to that source stays current. The search engines recognize this. When your site is linked to by others, it shows the search engines and the rest of the world that your website is worthy of quoting. You want people linking to your website… usually…

Backlinking Usage… that can be bad
It should be noted that the content on your website is yours. The linking on your website is of your creation. If you link to a website, nobody should have coerced the creation of that link; you add it so that others can benefit from the content on that site. You have little control, and typically NO control, over what others place on their websites. If you do not like a website linking to yours, you are probably powerless to do anything about it.

The issue arises from the discovery that the search engines value these links, which leads people to try to manipulate the engines’ rankings of their sites by building artificial links. Of course, the search engines then fight back by devaluing some websites, or groups of websites such as link farms. For a while, Google was even devaluing the sites linked to by devalued sites; ouch!!

As stated above, you have little to no control over who links to you. The trick is to fill your website with incredible content, convincing the world that you are an authority on a subject and worthy of being quoted. If weird sites are linking to you, then build your content to encourage linking from a greater number of legitimate websites, or at least ones you are not embarrassed by. An online presence can be a scary experience, but when handled knowledgeably, the positive aspects will outweigh the oddities.

Thinking Like a Search Engine

Most search marketers are used to looking at things from a single perspective: that of a search marketer. Some of the more savvy marketers know enough to look at things from the perspective of end users as well, since those are the people they are ultimately trying to influence. The savviest of search marketers know that it’s also important to step back from time to time and try to think like a search engine engineer.

By stepping into the shoes of the men and women who spend their days developing ways to improve search results, we can gain a unique perspective on the world of search engine marketing. To do that, we are going to examine what search engines are trying to do, consider their goals, and look at how those goals affect their interaction with the webmaster and SEO community.

First, a disclaimer: none of this represents the official position of any search engine. This is my interpretation of what a search engineer is thinking, based on what I have seen out there and on my discussions with a variety of search engine engineers over the years.

I have found adopting this way of thinking to be an extremely effective technique for reviewing a Web site strategy. I don’t have to agree with the thinking of the search engine engineer, but understanding it makes me better equipped to succeed in a world where they define the rules.

So we can more completely adopt the mindset, this article is written in the first person, as if I am the search engine engineer.

The Basic Task of a Search Engine

Our goal is to build a search engine that returns the most relevant results to searchers. To do this, we need a comprehensive index that is as spam-free as possible. We also need ranking algorithms that can determine the value to searchers of a given site in relation to their query.

We build our index through these four simple steps:

  • Crawl the entire Web
  • Analyze the content of every crawled page
  • Build a connectivity map for the entire Web
  • Process this data so that we can respond to arbitrary user queries with the best answers from the Web in less than one second

OK, so I am being a bit facetious in calling it simple. But if you had set upon this task yourself, you would need a sense of humor too. In fact, to accomplish this task we have had to build and manage the largest server farms the world has ever seen.
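
To get a feel for why the analysis and query-answering steps are so demanding, here is a toy version of the core data structure behind them, an inverted index. Real engines layer ranking, stemming and massive distribution on top of this; the pages below are invented for illustration:

```python
from collections import defaultdict

# Toy corpus: page URL -> page text (invented for illustration).
pages = {
    "a.html": "dalmatian puppies for sale",
    "b.html": "grooming tips for dalmatian owners",
    "c.html": "used cars for sale",
}

# Analysis step: split content into terms and record which pages contain each term.
index = defaultdict(set)
for url, text in pages.items():
    for term in text.lower().split():
        index[term].add(url)

def search(query):
    """Return pages containing every term in the query (a boolean AND search)."""
    results = None
    for term in query.lower().split():
        hits = index.get(term, set())
        results = hits if results is None else results & hits
    return sorted(results or [])

print(search("dalmatian"))  # ['a.html', 'b.html']
print(search("for sale"))   # ['a.html', 'c.html']
```

Because the index is precomputed, answering a query is a handful of set lookups rather than a rescan of every page, which is what makes sub-second responses possible at all.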

Search Engine Quality Goals

Like all businesses, we want to make money. The great majority of our money is made by selling ads within our search results and across the rest of our ad network. However, we can’t make money on our ads if users don’t use our search engine to search.

So for search engines, relevant search results are king. In simple terms, if our search engine provides the best answers to users, those users will continue to come to us to search, and we’ll make money by serving them ads. So we try to gain more users by providing the best search results they can find, hoping that new users coming online will search with us and continue to use our search engine for the rest of their lives.

Making matters more complicated in our quest to provide the best results is that a large percentage of user queries require disambiguation. By that, I mean the query itself does not provide enough information for us to understand what the user is looking for. For example, when a user searches on “Ford”, they may be searching for corporate information on the Ford Motor Company, performance details of the latest Ford Mustang, the location of a local Ford dealer, or information about ex-President Gerald Ford. It is difficult for us to discern the user’s intent.

We deal with this by offering varied answers in the top 10 results, to try to provide the user with the answer they want in the top few results. For our Ford example above, we include in the top 10 results information on the Ford Motor Company and its vehicles, as well as on Gerald Ford.

We also implement new programs to provide disambiguation. To see examples of this, try searching on Cancer in Google and notice the “Refine results for cancer” links, or try searching on Beatles on Ask.com, and see how they have formulated their Smart Answer with links to music, images, products, and a drop-down box listing each of the four band members.

To summarize, providing the best answers leads to increased market share. More searches mean more clicks on our ads, which means more revenue and profit. And it all flows from having the highest quality (including the best disambiguation) in our search results.

Modeling Webmaster Behavior on the Web

The best ranking algorithms we can use depend on models of Webmaster behavior on the network, where the Webmasters are not cognizant of the effect that their behavior has on the search engines. As soon as Webmasters become aware of the search engines, the model starts to break. This is the source of the famous “design sites for users, not search engines” stance that you have heard us talk about.

The Webmasters who do not follow this policy range from black hat spam artists who will try any trick to improve their rankings to those who merely bend the rules gently. All of this behavior makes it more difficult for us to improve our index and to increase our market share.

Links as a Voting System

As an example of this, let’s talk a bit about how links play a role in building our index. We use inbound links as a major component of evaluating how to rank sites in response to a particular user query.

If the search is for the term “blue widgets,” then we evaluate the number and quality of relevant links that each page in the index has pointing to it. While there are over a hundred other factors, you can oversimplify this and say that the page with the best mix of relevant (to the query), quality links to it wins.

However, this concept is very fragile. It is heavily dependent on the person providing the link doing so because they really like the quality of the content they are linking to. Fundamentally, the value of this algorithm for ranking content is based on observations about natural Webmaster behavior on the Web: natural behavior in a world without search engines.

As soon as you compensate someone for a link (with cash, or a link exchanged purely for barter reasons), you break the model. It doesn’t mean that all these links are bad, or evil; it means that we can’t evaluate their real merit. We are slaves to this fact and can’t change it. This leads to the stance we take against link purchasing, and the corresponding debate we have with Webmasters who buy links.
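
The link-voting idea can be sketched with the textbook PageRank iteration. This is an illustration of the general principle only, not any engine's production algorithm, and the three-page "web" is invented:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: page -> list of pages it links to. Returns a rank score per page."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with equal rank
    for _ in range(iterations):
        # every page keeps a small base amount of rank...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...and passes the rest to the pages it links to, split evenly
        for p, outs in links.items():
            if not outs:
                continue
            share = damping * rank[p] / len(outs)
            for q in outs:
                new[q] += share
        rank = new
    return rank

# Hypothetical three-page web: both A and C link to B, so B collects the most votes.
ranks = pagerank({"A": ["B"], "B": ["C"], "C": ["B"]})
print(max(ranks, key=ranks.get))  # prints B
```

Each inbound link acts as a weighted vote, and rank flows along links until the scores settle. That is also why a purchased link breaks the model: the vote no longer reflects a genuine endorsement, yet the arithmetic cannot tell the difference.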

The Role of FUD

All probabilistic models work best when the subjects being evaluated are not aware that they are being evaluated. However, we do not have that luxury. So the next-best thing is to make it difficult for the Webmaster to understand the nature of the algorithms used. Doing this still provides a certain amount of randomness, the foundation of all probabilistic models.

This is one big reason why we don’t publish lots of clear guidelines about how our algorithms work. A little bit of FUD (fear, uncertainty and doubt) improves overall quality. For example, you will never see anything that clearly defines how we identify a paid link.

This sounds a bit nasty, but we don’t mean it to be so. Once again, we are just trying to provide the best possible search results for our end users, and this approach helps us do that.


So now let me jump back out of our fictitious search engine engineer’s mind and explain why this is all useful. Simply put, it’s always useful to understand the goals and aspirations of the dominant businesses in your space. Without a doubt, when search engines roll over, there are lots of casualties.

You don’t have to endorse their mindset, just understand it. You should seek to protect yourself from becoming an incidental casualty. Have an idea of how the search engines think, and use this knowledge to evaluate new search engine marketing strategies.

Understanding the way search engineers think can help you decide whether or not a new idea is worth trying. Is it at odds with the goals of the search engine? Does it help the search engine understand your site better? Knowing when you are taking risks, or making the decision to avoid them, can help scale your search engine marketing strategy to new heights.

Dangerous SEO Techniques to Avoid

Some webmasters and Search Engine Optimization companies will use any means they can think of to get a high rank in search engine listings. That is not always a good idea. Detailed here is a list of eight search engine optimization techniques that should not be used, and why.

Every site owner wants to be at the top of the search rankings. Better search engine rankings mean more site visitors, which means more business. Naturally, with the stakes as high as they are, site owners look for any means possible to get into the coveted top ten placements on the main search engines, especially Google.

There are a million and one optimization techniques out there that claim to guarantee placement in the top ten results. Obviously, they all work with varying degrees of success. However, while many popular strategies may get you short-term success and temporarily boost your rankings, some tactics can actually end up hurting your placement, resulting in penalties or the outright banning of your site from search engines, as well as alienating the very site visitors you are trying to reach.

It is therefore in your best interest to look at any potential optimization technique with a critical eye and to be sure you know what is being done to your site. If you do your own search engine optimization, be very careful before trying any of the following techniques. If you hire an optimization firm, you will want to have a serious talk with your consultant before using any of the following strategies.

Invisible Keywords Embedded on the Page


Invisible keyword spamming is one of the oldest tricks in the book. Using text that is the same color as the background of a site, webmasters place “invisible” keywords on a site to increase the instances of targeted search terms and artificially inflate the relevance ranking for a particular search. It is a common trick used by adult sites and Internet casinos, but it is also used by a surprising number of mainstream sites and businesses. A similar tactic involves placing the keywords in a very small, sometimes unreadable font size at the bottom of a page.

Previously, keyword spamming had a downside in that the block with the invisible keywords took up space on the page and had to be tacked on in white space someplace, but now with DHTML many site owners place keywords in an invisible layer so that it does not affect page layout and can actually be invisible to users.

Reason to Avoid

Search engines aside, keyword spamming is annoying to users who pull up your page and discover it is not actually relevant to the terms they were searching for. Worse, search engines are starting to scan for this automatically and penalize sites that use the technique by dropping them from the database for a specific period of time.
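
A crude version of that automated scan is easy to imagine: compare each element's text color to the page background. The sketch below only handles inline styles and six-digit hex colors, and the sample page is invented; real detection handles full stylesheets, layers, font sizes and much more:

```python
import re

def find_hidden_text(html):
    """Flag inline-styled spans whose text color matches the page background.
    Illustrative only: handles inline styles and #rrggbb colors, nothing else."""
    bg = re.search(r'<body[^>]*background-color:\s*(#[0-9a-fA-F]{6})', html)
    if not bg:
        return []
    bg_color = bg.group(1).lower()
    flagged = []
    # look at each span with an inline color and compare it to the background
    for m in re.finditer(r'<span[^>]*color:\s*(#[0-9a-fA-F]{6})[^>]*>(.*?)</span>',
                         html, re.S):
        if m.group(1).lower() == bg_color:
            flagged.append(m.group(2).strip())
    return flagged

page = ('<body style="background-color: #ffffff">'
        '<span style="color: #ffffff">dalmatian puppies dalmatian puppies</span>'
        '<span style="color: #000000">Welcome to our site</span>'
        '</body>')
print(find_hidden_text(page))  # flags only the white-on-white span
```

If a twenty-line script can catch the naive version of this trick, it is safe to assume the search engines catch far more.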

Repeating Keywords Excessively


Regular occurrence of a keyword is supposed to mean that a page has high relevance to that search term. If a search user is looking for a page about dog grooming, ideally a page that is highly related to dog grooming will repeat the phrase “dog grooming” with a high amount of frequency. However, some site owners take it a step too far and repeat the phrase dozens of times in a row in the META tags or in the content to try to boost the relevance rating. The keywords may also be packed in via invisible text, mentioned above. Some sites go so far as to hire copywriters to replace instances of pronouns in otherwise fine site copy in order to get the maximum repetition of key terms.
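
A simple density check makes this kind of abuse easy to quantify. No engine publishes an official "safe" threshold, so treat the sample copy and the resulting number as purely illustrative:

```python
import re

def keyword_density(text, phrase):
    """Fraction of the words in `text` accounted for by occurrences of `phrase`."""
    words = re.findall(r"[a-z']+", text.lower())
    phrase_words = phrase.lower().split()
    count = 0
    # slide a window over the text and count exact phrase matches
    for i in range(len(words) - len(phrase_words) + 1):
        if words[i:i + len(phrase_words)] == phrase_words:
            count += 1
    return count * len(phrase_words) / max(len(words), 1)

# Deliberately stuffed sample copy: 4 of the 13 words' worth is "dog grooming" x4.
copy = ("dog grooming dog grooming dog grooming is our passion "
        "because dog grooming matters")
print(round(keyword_density(copy, "dog grooming"), 2))  # prints 0.62
```

When a two-word phrase accounts for over 60% of the copy, no reader (and no filter) will mistake it for natural writing; well-written text that uses the phrase where it belongs lands far lower.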

Reason to Avoid

While this may be a good idea in theory, it is regularly abused. Directory editors frown on this technique, and your site could be banned from a directory if an editor notices you using this technique on the page. Packing the copy with excessive repetition also hurts the readability and makes your site awkward for potential visitors, which may hurt their opinion of your site and reduce the odds of a repeat visit. In addition, some search engines now have filters to screen for overly repeated keywords, which could result in a penalty – even in cases where you may not have intentionally packed the keywords.

Completely Irrelevant Keywords


Everyone knows that certain terms are very popular. Some site owners scan the lists of frequently searched terms and insert those terms as keywords for a page regardless of whether they actually relate to the site content, causing the page to show up in the rankings for a common search term and tricking visitors into clicking on the link.

Reason to Avoid

Many search engines consider this spam, but it would be a bad idea even if they did not. For example, “sex” is a common search term, but if a search engine user is actually searching on the term “sex” and you have stuffed the keywords for a site about golf with excessive references to this term, you are not exactly gathering your target audience to the site anyway, and this is not useful traffic.

Hidden Links


Invisible links are hidden in images or in text on pages and exist for the benefit of the spider rather than the user. The idea behind them is to provide additional links within the page for the spider to find and index or to artificially boost link popularity.

Reason to Avoid

This is another attempt to trick search engines that many frown upon and will ban you for if you are caught. As always, search engines’ primary concern is maintaining highly relevant search results for their users and this tactic is frequently abused to boost rankings for irrelevant sites. In fact, it is quite common for some optimization firms to hide links to their own sites within clients’ websites – which can definitely backfire for the clients.

Cloaking or Using Redirect Pages


It has become increasingly common for site owners to serve search engine spiders a different page than the one shown to viewers. When the spider hits the site, it crawls a specific robot version, but when actual visitors click through to the link they are automatically redirected to a different version of the site. This allows the site owner to create a version of the site optimized for rankings without affecting the readability or content of the actual site intended for viewers.

Reason to Avoid

This tactic is difficult for spiders to detect, and it has great potential for abuse. Site owners could create a page to optimize a site for terms that are not actually relevant to the content. Ideally your site should gain rankings through the merit of its content and not through this type of deception. In addition, most search engines are very much against this and if they detect you using it, they may ban your site, which eliminates your rankings entirely.

Link Farms


Link farms are pages that consist solely of links to other pages but fail to offer any useful content themselves. Site owners typically place their links on farms to artificially boost link popularity and thus rankings on search engines like Google. Some link farms are automatically generated, while site owners manage others as repositories of reciprocal links.

Reason to Avoid

Human-edited link farms are certainly preferable to automatically generated ones, but site visitors gain little from a page full of links with little to no relevance to your actual site content. The major search engines are also starting to crack down on link farms by analyzing the patterns of inbound links to a site, and in some cases being listed on a link farm can hurt your page rankings.
It is important not to confuse link farms with actual pages of well-chosen relevant links. It is an excellent idea to email site owners that cover similar content to your site and ask them to post a link, but it is a bad idea to try to get a link posted to your site on every URL you can find that is willing to post a link regardless of subject matter.

Over Submitting Your Site to Search Engines


This is the practice of repeatedly submitting your site to crawlers, on a schedule that could be as frequently as weekly or daily, despite the fact that the crawler has already indexed your site.

Reason to Avoid 

More is not better for submissions to search engines. Submitting multiple times is a waste of time and has no bearing on your search rankings. Most search engines simply delete multiple submissions, and they may end up ignoring your site altogether.

Using Untrained SEO Consultants


If you have looked around at all for a search engine optimization firm, you have noticed that there is a wide range of options with an associated wide range in prices. Some firms are run by well-trained and knowledgeable experts, who often charge accordingly, and other firms are run by people who are out to make a quick buck but may not have your site’s long term interests at heart.

Reason to Avoid

Sometimes untrained consultants will be cheaper, but as with all things, you get what you pay for. It is far better to fork out the money for a qualified professional firm to optimize your rankings through legitimate, long-term methods that will not cost you a penalty than to risk unsavory techniques that may get you good short-term placements but could result in bans and penalties if their methods are detected. Since many untrained or unethical SEO consultants may use the above tactics, you could end up with terrible return on investment if you pay for an optimization firm that ends up getting your site banned from the major directories.


Many of the above tactics may temporarily increase your search engine positioning, but for the reasons mentioned, none of them is a good idea. This is not a case in which the end justifies the means. You do not want people to visit your site because you tricked them into it by outwitting the search engines. You want them to visit because the content of your site is interesting and relevant to what they were looking for. As search engines increasingly grow wise to these spamming tactics, they will rightfully penalize the webmasters who use them, so it is best to play by the rules and put in the time to gain traffic the right way.

K Words

SEO Dictionary

KEI (Keyword Effectiveness Index) – The higher the KEI, the more popular a keyword is and the less competition it has. KEI is a common SEO metric of limited accuracy, but it at least provides a baseline for judging the strength of a phrase.
Keyword (Key Phrase) – A word or phrase typed into a search engine in order to find web pages that contain that word or phrase. A web page can (and should be) optimized for specific keywords/phrases that are relevant to the content on that page.
Keyword Density – An old measure of search engine relevancy based on how prominently keywords appear within the content of a page. Keyword density is no longer a valid measure of relevancy over a broad open search index, though.
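As a rough illustration, keyword density is simply a phrase's share of the total words on a page. The sketch below (the function name and sample copy are hypothetical) counts phrase occurrences with a sliding window:

```python
def keyword_density(text: str, phrase: str) -> float:
    """Return the phrase's share of the page's words, as a percentage."""
    words = text.lower().split()
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    if not words or n == 0:
        return 0.0
    # Count occurrences of the phrase as a sliding window over the words
    occurrences = sum(
        words[i:i + n] == phrase_words
        for i in range(len(words) - n + 1)
    )
    # Each occurrence accounts for n of the page's words
    return 100.0 * occurrences * n / len(words)

copy = "dog grooming tips for easy dog grooming at home"
print(round(keyword_density(copy, "dog grooming"), 1))  # 44.4
```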
Keyword Funnel – The relationship between various related keywords that searchers search for. Some searches are particularly well aligned with others due to spelling errors, poor search relevancy, and automated or manual query refinement.
See also: MSN Search Funnels – shows keywords people search for before or after they search for another keyword.
Keywords Meta Tag – An HTML meta tag that lists all of the main keywords and key phrases that are contained on that web page. Some search engines use the keyword meta tag to help rank web pages in their databases. Google does not.
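A keywords meta tag sits in the page's head section; the phrases below are hypothetical:

```html
<head>
  <!-- Hypothetical example; Google ignores this tag for ranking purposes -->
  <meta name="keywords" content="dog grooming, pet grooming, dog care tips">
</head>
```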
Keyword Research – The process of discovering relevant keywords and keyword phrases to focus your SEO and PPC marketing campaigns on.
Example keyword discovery methods:
  • using keyword research tools
  • looking at analytics data or your server logs
  • looking at page copy on competing sites
  • reading customer feedback
  • placing a search box on your site and seeing what people are looking for
  • talking to customers to ask how and why they found and chose your business
Keyword Stuffing – Writing copy that uses excessive amounts of the core keyword.
When people use keyword stuffed copy it tends to read mechanically (and thus does not convert well and is not link worthy), plus some pages that are crafted with just the core keyword in mind often lack semantically related words and modifiers from the related vocabulary (and that causes the pages to rank poorly as well).
See also: Search Engine Friendly Copywriting – What Does ‘Write Naturally’ Mean for SEO?.
Keyword Suggestion Tools – See Keyword Research Tools.

J Words

SEO Dictionary

JavaScript – A client-side scripting language that can be embedded into HTML documents to add dynamic features.
Search engines do not index most content in JavaScript. In AJAX, JavaScript has been combined with other technologies to make web pages even more interactive.

I Words

SEO Dictionary

IDF (Inverse Document Frequency) – A measure used to help determine the weight of a term in a vector space model.
IDF = log ( total documents in database / documents containing the term )
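A minimal sketch of that formula (using the natural logarithm; the document counts are made up):

```python
import math

def idf(total_docs: int, docs_with_term: int) -> float:
    """Inverse document frequency: log(total docs / docs containing the term)."""
    return math.log(total_docs / docs_with_term)

# A term found in 100 of 1,000,000 documents is far more
# discriminating than one found in half of them.
print(idf(1_000_000, 100))      # high weight: rare term
print(idf(1_000_000, 500_000))  # low weight: common term
```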
Image Map – Placing separate hyperlinks on different areas of the same image. Clicking on different parts of the image will take the user to different web pages. Not very search engine friendly.
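For illustration, a simple image map with two clickable regions (the file names and coordinates are hypothetical):

```html
<img src="navigation.png" alt="Site navigation" usemap="#navmap">
<map name="navmap">
  <!-- Clicking each rectangle leads to a different page -->
  <area shape="rect" coords="0,0,100,50" href="/products.html" alt="Products">
  <area shape="rect" coords="100,0,200,50" href="/contact.html" alt="Contact">
</map>
```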
Inbound Links – See Backlinks.
Index – The collection of data that a search engine searches through to find matches to a user's query. The larger search engines have billions of documents in their catalogs.
When search engines search, they query reverse indexes by word and return results based on matching relevancy vectors. Stemming and semantic analysis allow search engines to return near matches. Index may also refer to the default document (such as index.html) at the root of a folder on a web server.
Indexing – After a search engine has crawled the web, it ranks the URLs found using various criteria (see algorithm) and places them in the database, or index.
Information Architecture – Designing, categorizing, organizing, and structuring content in a useful and meaningful way.
Good information architecture considers both how humans and search spiders access a website. Information architecture suggestions:
  • focus each page on a specific topic
  • use descriptive page titles and meta descriptions which describe the content of the page
  • use clean (few or no variables) descriptive file names and folder names
  • use headings to help break up text and semantically structure a document
  • use breadcrumb navigation to show page relationships
  • use descriptive link anchor text
  • link to related information from within the content area of your web pages
  • improve conversion rates by making it easy for people to take desired actions
  • avoid feeding search engines duplicate or near-duplicate content
Information Retrieval – The field of science based on sorting or searching through large data sets to find relevant information.
Informational Query – a query about a topic where the user expects to be provided with information on the topic, such as “alzheimer disease”, “programming languages”, etc.
Inktomi – Search engine which pioneered the paid inclusion business model. Inktomi was bought by Yahoo! at the end of 2002.
Internal Link – Link from one page on a site to another page on the same site.
It is preferential to use descriptive internal linking to make it easy for search engines to understand what your website is about. Use consistent navigational anchor text for each section of your site, emphasizing other pages within that section. Place links to relevant related pages within the content area of your site to help further show the relationship between pages and improve the usability of your website.
Internal Navigation – See: Navigation
Internet – Vast worldwide network of computers connected via TCP/IP.
Internet Explorer – Microsoft’s web browser. After Microsoft beat out Netscape’s browser in market share, they failed to innovate on any level for about five years, until Firefox forced them to.
Inverted File – See: Reverse Index
Invisible Web – Portions of the web which are not easily accessible to crawlers due to search technology limitations, copyright issues, or information architecture issues.
IP Address (Internet Protocol Address) – A unique numerical address assigned to every computer that connects to the internet. IP addresses can be either static (unchanging) or dynamic (changing with every internet connection).
Your computer’s IP address is what enables it to be found on the internet in order to receive email, web pages, etc.
IP delivery – See: cloaking
IP Spoofing – Returning an IP address that is different from the one that is actually assigned to the destination website. This is often done with redirects. A huge no-no (it’s even a criminal offense when done under certain circumstances).
ISP (Internet Service Providers) – sell end users access to the web. Some of these companies also sell usage data to web analytics companies.
Italics – See: emphasis

H Words

SEO Dictionary

Header Tags – HTML tags that help outline a web page or draw attention to important information. Keywords located inside header tags can provide a rankings boost in the search engines.
<h1>This is an H1 tag.</h1>
<h2>This is an H2 tag.</h2>
Hidden Text and Hidden Links – Using a text font that is the same (or nearly the same) color as the background color, rendering the text or link invisible or very difficult to read. The same effect can also be achieved by using various HTML tricks.
Hidden text and hidden links are often used to artificially increase a web page’s keyword density for a keyword or keyphrase and/or to artificially boost the link popularity of other pages on your site(s).
The use of hidden text and hidden links is frowned upon by Google and most other search engines. Using them will most likely result in your web page(s) incurring a penalty by the search engines.
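For illustration only, this is the kind of markup such penalties target; white text on a white background is invisible to visitors but readable by spiders (do not use this on a real site):

```html
<body style="background-color: #ffffff">
  <!-- Invisible to visitors, visible to spiders -->
  <p style="color: #ffffff">dog grooming dog grooming dog grooming</p>
</body>
```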
Hits – The term hits is commonly misused. Many people think of a hit as a visit to one of their web pages. This is incorrect. A hit takes place every time a file is accessed on your website.
For example, let’s say your friend’s home page has a logo gif and 12 pictures on it. Every time a visitor loads that page, 14 hits are recorded: 1 for the logo gif, 12 for the pictures, and one for the page itself. So don’t be all that impressed if he boasts that his site receives 1000 hits a day. In our example, those 1000 hits could have been generated by as few as 72 visitors to the site.
The only meaningful way to evaluate the traffic flow of a site is to consider the average daily or monthly number of unique visitors and page views a site receives.
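The arithmetic from the example above can be sketched as:

```python
import math

# Hits recorded per page view: 1 for the HTML page itself,
# 1 for the logo gif, and 12 for the pictures
hits_per_view = 1 + 1 + 12  # 14

# Fewest visitors that could generate 1000 hits,
# assuming each visitor loads the page once
min_visitors = math.ceil(1000 / hits_per_view)
print(min_visitors)  # 72
```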
Home Directory – The main directory where your site’s main index page is located. The index page in your home directory can be accessed like this: http://www.yoursite.com
Headings – The heading element briefly describes the subject of the section it introduces.
Heading elements go from H1 to H6 with the lower numbered headings being most important. You should only use a single H1 element on each page, and may want to use multiple other heading elements to structure a document. An H1 element source would look like:
<h1>Your Topic</h1>
Heading elements may be styled using CSS. Many content management systems place the same content in the main page heading and the page title, although in many cases it may be preferential to mix them up if possible.
Headline – The title of an article or story.
Hidden Text – SEO technique used to show search engine spiders text that human visitors do not see.
While some sites may get away with it for a while, generally the risk to reward ratio is inadequate for most legitimate sites to consider using hidden text.
HITS – Link based algorithm which ranks relevancy scores based on citations from topical authorities.
See also: Jon Kleinberg’s Authoritative Sources in a Hyperlinked Environment [PDF].
Hijacking – Making a search engine believe that another website exists at your URL. Typically done using techniques such as a 302 redirect or meta refresh.
Home Page – The main page on your website, which is largely responsible for helping develop your brand and setting up the navigational schemes that will be used to help users and search engines navigate your website.
As far as SEO goes, a home page is typically going to be one of the easier pages to rank for some of your more competitive terms, largely because it is easy to build links at a home page. You should ensure your homepage stays focused and reinforces your brand though, and do not assume that most of your visitors will come to your site via the home page. If your site is well structured many pages on your site will likely be far more popular and rank better than your home page for relevant queries.
Host – See Server
.htaccess – Apache directory-level configuration file which can be used to password protect or redirect files.
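A minimal .htaccess sketch showing both uses; the file paths and URLs are placeholders:

```apache
# Password protect this directory
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /home/user/.htpasswd
Require valid-user

# Permanently redirect a moved page
Redirect 301 /old-page.html http://www.yoursite.com/new-page.html
```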
HTML (HyperText Markup Language) – is the language in which pages on the World Wide Web are created.
Some web pages are also formatted in XHTML.
HTTP (HyperText Transfer Protocol) – is the foremost used protocol to communicate between servers and web browsers. Hypertext transfer protocol is the means by which data is transferred from its residing location on a server to an active browser.
Hubs – Topical hubs are sites which link to well-trusted sites within their topical community. A topical authority is a page which is referenced from many topical hub sites. A topical hub is a page which references many authorities.

G Words

SEO Dictionary

GAP (Google Advertising Professional) – is a program which qualifies marketers as being proficient AdWords marketers.
See also: Google Advertising Professional program.
Gladwell, Malcolm – Popular author who wrote the book titled The Tipping Point.
Godin, Seth – Popular blogger, author, viral marketer and business consultant.
Google Base – Free database of semantically structured information created by Google.
Google Base may also help Google better understand what types of information are commercial in nature, and how they should structure different vertical search products.
See also: Google Base.
Google Bombing – Making a page rank well for a specific search query by pointing hundreds or thousands of links at it with the keywords in the anchor text.
See also: Google search, miserable failure.
Google Bowling – Knocking a competitor out of the search results by pointing hundreds or thousands of low trust low quality links at their website.
Typically it is easier to bowl new sites out of the results. Older established sites are much harder to knock out of the search results.
Google Checkout – Payment service provided by Google which helps Google better understand merchant conversion rates and the value of different keywords and markets.
See also: Google Checkout.
Google Dance – In the past Google updated their index roughly once a month. Those updates were named Google Dances, but since Google shifted to a constantly updating index, Google no longer does what was traditionally called a Google Dance.
Major search indexes are constantly updating. Google refers to this continuous refresh as everflux.
The second meaning of Google Dance is a yearly party at Google’s corporate headquarters which Google holds for search engine marketers. This party coincides with the San Jose Search Engine Strategies conference.
See also: Matt Cutts Google Terminology Video – Matt talks about the history of Google Updates and the shift from Google Dances to everflux.
Google Keyword Tool – Keyword research tool provided by Google which estimates the competition for a keyword, recommends related keywords, and will tell you what keywords Google thinks are relevant to your site or a page on your site.
See also: Google Keyword Tool – tool offering all the above mentioned features.
Google OneBox – Portion of the search results page above the organic search results which Google sometimes uses to display vertical search results from Google News, Google Base, and other Google owned vertical search services.
Google Sitemaps – Program which webmasters can use to help Google index their contents.
Please note that the best way to submit your site to search engines and to keep it in their search indexes is to build high quality editorial links.
See also: Google Webmaster Central – access to Google Sitemaps and other webmaster related tools.
Google Sitelinks – On some search results where Google thinks one result is far more relevant than other results (like navigational or brand related searches) they may list numerous deep links to that site at the top of the search results.
Google Supplemental Index – Index where pages with lower trust scores are stored. Pages may be placed in Google’s Supplemental Index if they consist largely of duplicate content, if the URLs are excessively complex in nature, or the site which hosts them lacks significant trust.
Google Toolbar – A downloadable toolbar for Internet Explorer that allows a user to do a Google search without visiting the Google website. The toolbar also displays the Google PageRank (PR) of the page currently displayed in the browser. The latest version also includes a very good popup-blocker. The Google Toolbar is a must have for every serious webmaster.
Google Traffic Estimator – Tool which estimates bid prices and how many Google searchers will click on an ad for a particular keyword.
If you do not submit a bid price the tool will return an estimated bid price necessary to rank #1 for 85% of Google’s queries for a particular keyword.
See also: Google Traffic Estimator.
Google Trends – Tool which allows you to see how Google search volumes for a particular keyword change over time.
See also: Google Trends.
Google Website Optimizer – Free multi variable testing platform used to help AdWords advertisers improve their conversion rates.
See also: Google Website Optimizer.
Google.com – The leading search engine on the internet today with approximately 80% of all search traffic. When people speak of search engine optimization (SEO), they’re often referring specifically to Google.
Googlebot – The crawler that Google uses on a daily basis to find and index new web pages.
Guestbook Spam – A type of low quality automated link which search engines do not want to place much trust on.

F Words

SEO Dictionary

Filters – A filter is a software routine that examines web pages during a robot’s crawl looking for search engine spam. If the filter detects the use of spam on the page, a ranking penalty is assessed.
Common filters look for hidden text, links to bad neighborhoods, and many other SEO techniques that the search engine doesn’t like.
Fair Use – The stated exceptions of allowed usage of work under copyright without requiring permission of the original copyright holder. Fair use is covered in section 107 of the Copyright code.
See also: US Copyright Office Section 107.
Favicon (Favorites Icon) – is a small icon which appears next to URLs in a web browser.
Upload an image named favicon.ico in the root of your site to have your site associated with a favicon.
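Browsers also honor an explicit link tag in the page's head section, which is useful if the icon lives somewhere other than the root:

```html
<link rel="icon" href="/favicon.ico">
```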
Favorites – See bookmarks
Feed – Many content management systems, such as blogs, allow readers to subscribe to content update notifications via RSS or XML feeds. Feeds can also refer to pay per click syndicated feeds, or merchant product feeds. Merchant product feeds have become less effective as a means of content generation due to improving duplicate content filters.
Feed Reader – Software or website used to subscribe to feed update notifications.
FFA (Free for all) – pages are pages which allow anyone to add a link to them. Generally these links do not pull much weight in search relevancy algorithms because many automated programs fill these pages with links pointing at low quality websites.
Filter – Certain activities or signatures which make a page or site appear unnatural might make search engines inclined to filter / remove them out of the search results.
For example, if a site publishes significant duplicate content it may get a reduced crawl priority and get filtered out of the search results. Some search engines also have filters based on link quality, link growth rate, and anchor text. Some pages are also penalized for spamming.
Firefox – Popular extensible open source web browser.
Flash – Vector graphics-based animation software which makes it easier to make websites look rich and interactive in nature.
Search engines tend to struggle indexing and ranking flash websites because flash typically contains so little relevant content. If you use flash ensure:
  • you embed flash files within HTML pages
  • you use a noembed element to describe what is in the flash
  • you publish your flash content in multiple separate files such that you can embed appropriate flash files in relevant pages
It is a common misconception that Flash cannot be optimized for use with the search engines. If optimized properly, there is no disadvantage to using Flash on your website. In fact, Flash may improve the user experience and thus make your optimization work better, as people will be more inclined to stay on the site after arriving from the search engines.
Forward Links – See Outbound Links
Frames – A technique created by Netscape used to display multiple smaller pages on a single display. This web design technique allows for consistent site navigation, but makes it hard to deep link at relevant content.
Given the popularity of server side includes, content management systems, and dynamic languages there really is no legitimate reason to use frames to build a content site today.
Fresh Content – Content which is dynamic in nature and gives people a reason to keep paying attention to your website.
Many SEOs talk up fresh content, but fresh content does not generally mean re-editing old content. It more often refers to creating new content. The primary advantages to fresh content are:
  • Maintain and grow mindshare: If you keep giving people a reason to pay attention to you more and more people will pay attention to you, and link to your site.
  • Faster idea spreading: If many people pay attention to your site, when you come out with good ideas they will spread quickly.
  • Growing archives: If you are a content producer then owning more content means you have more chances to rank. If you keep building additional fresh content eventually that gives you a large catalog of relevant content.
  • Frequent crawling: Frequently updated websites are more likely to be crawled frequently.
FTP (File Transfer Protocol) – is a protocol for transferring data between computers.
Many content management systems (such as blogging platforms) include FTP capabilities. Web development software such as Dreamweaver also comes with FTP capabilities. There are also a number of free or cheap FTP programs such as Cute FTP, Core FTP, and Leech FTP.
Fuzzy Search – Search which will find matching terms when terms are misspelled (or fuzzy).
Fuzzy search technology is similar to stemming technology, with the exception that fuzzy search corrects misspellings at the user’s end, while stemming searches for other versions of the same core word within the index.
