SEARCH ENGINE STRATEGY

Most people find what they're looking for on the World Wide Web by using search engines such as Yahoo!, AltaVista, or Google. According to InformationWeek, searching for information with search engines was the second most popular Internet activity in the early 2000s, behind only checking e-mail. Because of this, companies develop and implement strategies to make sure people can consistently find their sites during a search. These strategies are often part of a broader Web site or Internet marketing plan. Different companies have different objectives, but the main goal is to obtain good placement in search results.

TYPES OF SEARCH ENGINES

In the early 2000s, more than 1,000 different search engines were in existence, although most Webmasters focused their efforts on getting good placement in the leading 10. This, however, was easier said than done. InfoWorld explained that the process was more art than science, requiring continuous adjustment and tweaking, along with regularly submitting pages to different engines, to achieve good or excellent results. The reason is that every search engine works differently. Not only are there different types of search engines (those that use spiders to obtain results, directory-based engines, and link-based engines), but engines within each category are unique. Each has its own rules and procedures that companies need to follow in order to register their sites with the engine.

SPIDER-BASED SEARCH ENGINES.

Many leading search engines use software programs called spiders or crawlers to find information on the Internet and store it in giant databases or indexes for use in search results. Some spiders record every word on a Web site for their respective indexes, while others report only certain keywords listed in title tags or meta tags.

Although they usually aren't visible to someone using a Web browser, meta tags are special codes that provide keywords or Web site descriptions to spiders. Keywords and how they are placed, either within actual Web site content or in meta tags, are very important to online marketers. The majority of consumers reach e-commerce sites through search engines, and the right keywords increase the odds a company's site will be included in search results.
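
Conceptually, a spider of this kind reads each page's title and meta tags and files the keywords it finds in the engine's index. The following Python sketch is illustrative only; the sample page, site, and keywords are invented, and real crawlers are far more elaborate. It simply shows how a program might pull a title and keyword/description meta tags out of a page's markup:

```python
from html.parser import HTMLParser

class MetaSpider(HTMLParser):
    """Minimal parser that records the page title and any keywords or
    description meta tags, roughly as a simple spider might."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta":
            name = (attrs.get("name") or "").lower()
            if name in ("keywords", "description"):
                self.meta[name] = attrs.get("content", "")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

# Invented sample page for the example.
page = """<html><head>
<title>Acme Widgets - Industrial Widgets and Gears</title>
<meta name="keywords" content="widgets, gears, industrial supply">
<meta name="description" content="Acme sells industrial widgets and gears.">
</head><body>...</body></html>"""

spider = MetaSpider()
spider.feed(page)
print(spider.title)  # Acme Widgets - Industrial Widgets and Gears
print(spider.meta)   # {'keywords': '...', 'description': '...'}
```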

Companies need to choose the keywords that describe their sites to spider-based search engines carefully, and continually monitor their effectiveness. Search engines often change their criteria for listing different sites, and keywords that cause a site to be listed first in a search one day may not work at all the next. Companies often monitor search engine results to see what keywords cause top listings in categories that are important to them.
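
As a rough illustration of that kind of monitoring, the Python sketch below records where a company's site appears in the results for each keyword it cares about. The domain and result lists are made up for the example; in practice they would come from actual queries run against each engine:

```python
from urllib.parse import urlparse

def rank_of(domain, result_urls):
    """Return the 1-based position of `domain` in an ordered list of
    search-result URLs, or None if the site does not appear."""
    for position, url in enumerate(result_urls, start=1):
        if urlparse(url).netloc.endswith(domain):
            return position
    return None

# Hypothetical result lists for two keywords, gathered however the
# company collects them (by hand or through a tracking service).
results_by_keyword = {
    "industrial widgets": [
        "http://www.widgetworld.com/catalog",
        "http://www.acmewidgets.com/",
        "http://www.gearsupply.com/widgets",
    ],
    "precision gears": [
        "http://www.gearsupply.com/",
        "http://www.machinetools.com/gears",
    ],
}

for keyword, urls in results_by_keyword.items():
    print(keyword, "->", rank_of("acmewidgets.com", urls))
# industrial widgets -> 2
# precision gears -> None
```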

In addition to carefully choosing keywords, companies also monitor keyword density, or the number of times a keyword is used on a particular page relative to the page's total word count. Keyword spamming, in which keywords are overused in an attempt to guarantee top placement, can be dangerous; some search engines will not list pages that overuse keywords. Marketing News explained that a keyword density of three to seven percent was normally acceptable to search engines in the early 2000s. Corporate Webmasters often try to figure out the techniques different search engines use to weed out spammers, creating a never-ending game of cat and mouse.
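
To make the density figure concrete, here is a minimal Python sketch of the calculation. The page text and keyword are invented for the example, and real engines count and weight terms in their own, undisclosed ways:

```python
import re

def keyword_density(text, keyword):
    """Percentage of the words in `text` that match a single-word keyword."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word == keyword.lower())
    return 100.0 * hits / len(words)

page_text = (
    "Acme widgets are precision widgets for industrial use. "
    "Browse our widget catalog or contact Acme for custom widgets."
)

density = keyword_density(page_text, "widgets")
print(f"{density:.1f}%")  # about 17%, well above the 3-7% range cited above
```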

Sometimes, information listed in meta tags is incorrect or misleading, which causes spiders to deliver inaccurate descriptions of Web sites to indexes. Companies have been known to deliberately misuse keywords in a tactic called cyber-stuffing. In this approach, a company includes trademarks or brand names from its competitors within the keywords used to describe its site to search engines. This is a sneaky way for one company to direct traffic away from a competitor's site and to its own. In the early 2000s, this was a hot legal topic involving the infringement of trademark laws.

Because spiders are unable to index pictures or read text contained within graphics, relying too heavily on such elements was a risk for online marketers; home pages consisting only of a large graphic risked being passed over entirely. A content description language called extensible markup language (XML), similar in some respects to hypertext markup language (HTML), was emerging in the early 2000s. An XML standard known as synchronized multimedia integration language (SMIL) was expected to allow spiders to recognize multimedia elements on Web sites, such as pictures and streaming video.
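
The promise here is simply that media declared in markup can be read by a crawler, unlike text buried inside a graphic. The Python sketch below is a hedged illustration: the SMIL-style fragment is simplified and invented, and real SMIL documents carry additional declarations and timing markup. It shows how a spider could catalog media elements and their descriptions directly from the markup:

```python
import xml.etree.ElementTree as ET

# A small SMIL-style presentation, simplified and invented for illustration.
smil_doc = """
<smil>
  <body>
    <par>
      <img src="factory.jpg" alt="Acme factory floor" />
      <video src="product-tour.mpg" alt="Streaming tour of the widget line" />
      <audio src="jingle.wav" />
    </par>
  </body>
</smil>
"""

root = ET.fromstring(smil_doc)
# Because the media objects are declared in markup rather than hidden
# inside a graphic, a crawler can list them and their descriptions.
for element in root.iter():
    if element.tag in ("img", "video", "audio"):
        print(element.tag, element.get("src"), "-",
              element.get("alt", "no description"))
```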

DIRECTORY-BASED SEARCH ENGINES.

While some search engines use spiders to provide results to searchers, others, like Yahoo!, use human editors. This means that a company cannot rely on technology and keywords to obtain excellent placement, but must provide content the editors will find appealing and valuable to searchers. Some directory-based engines charge a fee to review a site for potential listing. In the early 2000s, more leading search engines were combining human editors with results obtained by spiders. LookSmart, Lycos, AltaVista, MSN, Excite, and AOL Search relied on providers of directory data to make their search results more meaningful.

LINK-BASED SEARCH ENGINES.

A third kind of search engine provides results based on hypertext links between sites. Rather than basing results on keywords or the preferences of human editors, it ranks sites on the quality and quantity of the other Web sites that link to them. In this case, links serve as referrals. The emergence of this kind of search engine called for companies to develop link-building strategies. By finding out which sites are listed in the results for a certain product category in a link-based engine, a company could then contact the sites' owners (assuming they aren't competitors) and ask them for a link. This often involves reciprocal linking, in which each company agrees to include links to the other's site.
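
A toy Python sketch of the idea follows. The link graph and scoring formula are invented for illustration; actual link-based engines used far more sophisticated mathematics. It ranks sites by the quantity of their inbound links, weighted crudely by the standing of the sites doing the linking:

```python
from collections import defaultdict

# Hypothetical link graph: each site maps to the sites it links out to.
links_out = {
    "widgetworld.com": ["acmewidgets.com", "gearsupply.com"],
    "gearsupply.com":  ["acmewidgets.com"],
    "tradeassoc.org":  ["acmewidgets.com", "widgetworld.com", "gearsupply.com"],
    "acmewidgets.com": ["tradeassoc.org"],
}

# Quantity: which sites link in to each site.
inbound = defaultdict(list)
for source, targets in links_out.items():
    for target in targets:
        inbound[target].append(source)

# Quality, crudely: a referral counts for more when the referring site
# itself has many inbound links.
def score(site):
    return sum(1 + len(inbound[referrer]) for referrer in inbound[site])

for site in sorted(links_out, key=score, reverse=True):
    print(site, len(inbound[site]), "inbound links, score", score(site))
```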

Besides focusing on keywords, providing compelling content, and monitoring links, online marketers rely on other ways of getting noticed. In late 2000, some used special software programs or third-party search engine specialists to maximize results for them. Search engine specialists handle the tedious, never-ending tasks of staying current with the requirements of different search engines and tracking a company's placement. This trend was expected to take off in the early 2000s: research from IDC and Netbooster found that 70 percent of site owners planned to use a specialist by 2002. Additionally, some companies pay for special or enhanced listings in different search engines.

FURTHER READING:

Briones, Maricris. "Found on the Information Superhighway." Marketing News, June 21, 1999.

Coopee, Todd. "Simple Service Brings Surfers to Your Site." InfoWorld, August 14, 2000.

Greenberg, Karl. "Spiders Weave a Tangled Web." Brandweek, September 11, 2000.

Kahaner, Larry. "Content Matters Most in Search Engine Placement." InformationWeek, June 12, 2000.

McLuhan, Robert. "Search for a Top Ranking." Marketing, October 19, 2000, 47.

Retsky, Maxine. "Cyberstuffing: A Dangerous Strategy." Marketing News, January 3, 2000.

Schwartz, Matthew. "Search Engines." Computerworld, May 8, 2000.

Sherman, Chris. "Search Engine Strategies 2000." Information Today, October 2000.

SEE ALSO: Results Ranking