A Survey of Techniques and Applications for Search Engine Optimization

 

Dr. Sachin Kumar1, Dr. Pratishtha Gupta2

1Sr. Assistant Professor, Computer Science/IT, Fairfield Institute of Management & Technology (Affiliated to GGSIP University, New Delhi), India

2Assistant Professor, Department of Computer Science, Banasthali Vidyapith, Jaipur, India

*Corresponding Author E-mail: Sachinks.78@gmail.com; Pratishtha11@gmail.com

 

Abstract:

This paper is a comprehensive survey of search engine optimization techniques and a large number of related techniques in diverse disciplines, including the online advertising market, anchor text, cloaking, link farming, duplicate domains, redirection, keyword stuffing, hidden text, link popularity spam, blog spam, guest-book spam, FFA pages, negative impacts on web ranking, content in images, HTML links, and dynamic URLs. The applications discussed are: making your site easily searchable and taking time to develop your website's content; making the overall design user friendly; varying the keywords for each individual web page; making sure that your hyperlinks are visible, accessible and informative; and learning how to lead traffic to your site. Finally, we relate the various techniques to application areas and explain their future scope. We intend this paper to be useful to researchers and practitioners interested in search engine optimization.

 

KEY WORDS: Search engine optimization, SEO, SERP, FFA, HTML, URL.

 

 


INTRODUCTION:

Search Engine Optimization (SEO) is the study of techniques and methodologies intended to affect or improve the visibility of web documents in search engine results. An informal discipline, SEO theory is a natural outgrowth of the widespread interest in and practice of search engine optimization. Search engines answer tens of millions of queries every day, yet despite their importance on the web, very little academic research has been done on them. Furthermore, due to rapid advances in technology and web proliferation, creating a web search engine today is very different from creating one five years ago.

 

The research literature provides an in-depth description of one large-scale web search engine, the first such detailed public description to date. Apart from the problems of scaling traditional search techniques to data of this magnitude, there are new technical challenges involved in using the additional information present in hypertext to produce better search results. That work addresses the question of how to build a practical large-scale system that can exploit the additional information present in hypertext, and also looks at the problem of how to deal effectively with uncontrolled hypertext collections where anyone can publish anything they want. With 90 percent of all Internet users choosing to access a search engine during a given session, this first point of contact often becomes decisive in the success or failure of an online marketing concept.

 

Nevertheless, there is an inter-dependency between a website's attractiveness and the ability to locate it, i.e. its "findability". A website's content may be highly interesting and appropriate to the user's needs, but it will be condemned to failure if it cannot be correctly sourced (i.e. ranked) by a search engine. Conversely, even a top-ranked website will be rejected if its content and usability disappoint expectations. Search engine optimization highlights the importance of keyword decisions in attracting customers who secure high conversion rates and thus increase sales, be it online or otherwise.

 

Google is designed to be a scalable search engine whose primary goal is to provide high-quality search results over a rapidly growing World Wide Web. Google employs a number of techniques to improve search quality, including PageRank, anchor text, and proximity information. Furthermore, such a search engine is a complete architecture for gathering web pages, indexing them, and performing search queries over them.
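
The PageRank idea mentioned above is publicly documented: a page's score reflects the chance that a "random surfer" who follows links (with probability d) or jumps to a random page lands on it. Below is a minimal, illustrative Python sketch of that computation, not Google's production algorithm; the damping factor d = 0.85 and the fixed iteration count are conventional textbook choices.

```python
# Minimal PageRank sketch: iterate the random-surfer distribution to a fixed point.
def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - d) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                 # dangling page: share rank with everyone
                for p in pages:
                    new_rank[p] += d * rank[page] / n
            else:
                for target in outlinks:      # each outlink passes an equal share
                    new_rank[target] += d * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Tiny example web: A and C both link to B, so B ends up ranked highest.
print(pagerank({"A": ["B"], "B": ["C"], "C": ["A", "B"]}))
```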

 

At present, SEO leans more towards commercial operations and the websites of commercial, profit-making organizations. The growth of the Internet has driven companies to adopt e-commerce and trade online. However, a website that cannot be found by users is practically worthless; with the assistance of search engines, users are able to find these websites on the Internet. E-commerce web designers have realized the commercial potential of appearing at the top of search engines' result lists, and strategies are adopted to increase the visibility and rankings of websites in search engines. As a rule, search engines have policies that determine which websites will be included in their index. Some search engine facts:

·      93% of consumers worldwide use search engines to find and access websites.

·      57% of internet users search the web every day, and 46% of those searches are for product information or services.

·      Attracting a loyal audience to your website is best achieved through top search engine listings.

·      85% of qualified Internet traffic is driven through search engines; however, 75% of search engine users never scroll past the first page of results.

·      Users looking for products online are far more likely to type the product name into a search engine (28%) than to go into an engine's "shopping" channel (5%) or click on banner ads (4%).

 

All crawler-based search engines follow sets of rules called algorithms. Though every search engine keeps parts of its algorithm a closely maintained secret, there are basic rules that all of them follow.

 

TECHNIQUES

1.    Online Advertising Market

The online advertising market is becoming a popular area of academic research. Among other types of advertising, search engine advertising is leading the growth in terms of revenue. In general, there are two types of search engine advertising: paid placement and search engine optimization (SEO). Studies in this area analyze the conditions under which SEO exists and, further, its impact on the advertising market; with analytical models, several interesting insights have been generated. These results fill the gap on SEO in academic research and help managers in online advertising make informed advertising decisions. SEO is an interesting but less understood area of the online advertising industry. Research has attempted to analyze the sustainability of SEO firms and the impact of SEO and other factors on search engine profit, and several interesting findings have been derived that concur with other SEO research and explain phenomena in the online advertising industry. First, a search engine can optimize its pricing policies for higher-type advertisers to reap higher profit. Second, investment in algorithm robustness has the effect of protecting the investment in algorithm effectiveness.

 

Claire S. et al. [1] discussed a report identifying the phenomenon that "ad-free zones will be popping up on the Internet soon, a reflection of consumers' increasing antipathy towards advertising". It examines the opportunities for effective online marketing that go beyond the now omnipresent banner advertisements, and provides guidelines for marketers on how to harness the "new set of capabilities" through planning and evaluation.

 

Christopher C. [2] presented and analyzed the results of several Facebook advertising campaigns conducted by an academic library in Hong Kong. Statistics were gathered from the advertising application integrated into the social networking platform, and conclusions are drawn from a comparison of the performance metrics of the different advertising approaches that were employed.

 

Carol K. et al. [3] reported the results of a survey of UK electronic information vendors regarding the importance of marketing to them, together with an examination of the readability, structure and information content of a sample of marketing materials produced by those vendors. The analysis covered advertisements from professional journals and direct mail literature comprising leaflets, folders and brochures. The results demonstrated that the 'average' adult ought to have no problem reading and understanding the marketing literature. Additionally, the study demonstrated that the marketing literature is feature-oriented; benefits are of much lesser significance. The UK-based electronic information industry regards marketing as of crucial importance and believes that the marketplace is becoming more competitive. The favored promotional techniques of the industry are identified.

 

Gerard H. et al. [4] proposed that the fiduciary duty of the corporation means that all its efforts – including any social marketing campaigns or corporate social responsibility – must be focused first and foremost on the success of the business and the enhancement of shareholder value; any wider public health benefits will inevitably be subjugated to this core purpose. There is good evidence to show that the principal beneficiaries of apparently public-spirited campaigns run by tobacco and alcohol companies are the sponsors. In the hands of a corporation, then, social marketing will always transmute into commercial marketing.

 

2.    Anchor Text

Anchor text techniques associate the text of a link with both the page the link is on and the page the link points to. The advantages: anchors often provide a more accurate description of a page than the page itself; anchors may exist for documents that cannot otherwise be indexed (i.e. images, programs, and databases); and propagating anchor text helps provide better quality results. One early crawl of 24 million pages yielded over 259 million anchors.
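
To illustrate the propagation idea, here is a small Python sketch, assuming the third-party BeautifulSoup (bs4) library is available: it collects each link's text and files it under the target URL, so that a target page can later be indexed under words it may never contain itself.

```python
# Sketch of anchor-text propagation: map target URL -> list of anchor texts.
from collections import defaultdict
from bs4 import BeautifulSoup

def collect_anchor_text(html, anchor_index):
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("a", href=True):
        text = link.get_text(strip=True)
        if text:                                  # ignore empty/image-only anchors
            anchor_index[link["href"]].append(text)

index = defaultdict(list)
collect_anchor_text('<a href="https://example.com/">space agency homepage</a>', index)
print(dict(index))   # {'https://example.com/': ['space agency homepage']}
```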

 

Jane Y. [5] described the importance of career anchor data for organizational as opposed to individual use, reporting on a study of the career anchors of 374 employees in the UK. The results show that age, gender and length of service have no significant effect on the distribution of anchors, although there are grade-related differences. Suggestions are made on how career anchor distribution data could be used by organizations to determine appropriate career development strategies.

 

Hongwei H. [6] contributed to the literature by identifying potential corporate attributes that are relevant to corporate identity (CI). These findings expand the traditional view of the CI mix and represent significant progress toward the identification and mapping of the construct of CI.

 

Ivo L. et al. [7] proposed a new approach to named entity disambiguation exploiting various context representation models (bag of words, linguistic and structural representations). The authors constructed a comprehensive dataset based on all English Wikipedia articles for named entity disambiguation, evaluated and compared the individual context representation models on this dataset, and evaluated support for multiple languages.

 

3.    Cloaking

In cloaking, one page is customized for indexing while entirely different pages are served to visitors. This enables SEOs to optimize their cloaked pages for the search engines, to the frustration of the visitors.
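
A rough way to probe a page for cloaking is to fetch the same URL once with a crawler-like User-Agent and once with a browser-like one, and compare what comes back. The Python sketch below, assuming the third-party requests library, does exactly that; real detection is subtler, since dynamic pages legitimately vary between fetches, so a mismatch is only a hint, not proof.

```python
# Naive cloaking probe: does the page differ for a "bot" vs a "browser"?
import requests

def looks_cloaked(url):
    bot_ua = {"User-Agent": "Googlebot/2.1 (+http://www.google.com/bot.html)"}
    browser_ua = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
    as_bot = requests.get(url, headers=bot_ua, timeout=10).text
    as_browser = requests.get(url, headers=browser_ua, timeout=10).text
    return as_bot != as_browser      # mismatch is a hint only; ads/timestamps vary too

print(looks_cloaked("https://example.com/"))
```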

 

Jeffrey S. [8] explored techniques used to hide assets and cloak ownership, ranging from simple nominee arrangements through to complex financial transactions.

 

Kay G. [9] found that in addition to determining equal pay, in some cases job evaluation has acted as a barrier or weapon against those making such a claim. The standards set for job evaluation appear to have been applied variably in determining that jobs are not equal in value, under the guises of no reasonable grounds, material factor defences and Tribunal decision making.

 

Lois L. K. [10] examined four questions: Was the US workforce diverse in previous times? What were the origins of its diversity? How did management scholars of the past view the diversity of the US workforce? Why did they view diversity as they did? While the workforce was diverse, particularly in the era 1880-1930, its diversity was addressed exclusively in the early practitioner literature, not in the theoretical literature. Five intellectual trends contributed to the "invisibility" of diversity in the theoretical literature: ethnocentrism, the USA's vision of itself, nativism (especially racial nativism), assimilation, and convergence theory.

 

4.    Link Farming

Link farming is the process of exchanging reciprocal links with other websites in order to popularize links. Sometimes this may prove fatal: a link from a strong site to a low-ranked site may affect the high-ranking site one way or another.

 

Peter H. et al. [11] focused on the effect of socio-demographic characteristics and farm structural variables in examining differences in farm indebtedness, extending this literature by specifically examining the role of farming attitudes. Obtaining a deeper understanding of the factors that affect the level of farming debt is important, as the degree of indebtedness has been found to affect farmers' management decisions. Beyond explaining farm credit use, farming attitudes and motivations may have an important impact on farmers' behavior in relation to a variety of farm activities.

 

Koen M. et al. [12] performed a meta-analysis of the literature comparing the environmental impacts of organic and conventional farming and linking these to differences in management practices. The studied environmental impacts relate to land use efficiency, organic matter content in the soil, nitrate and phosphate leaching into the water system, greenhouse gas emissions and biodiversity.

 

5.    Duplicate domain

Duplicate domains means having duplicate content on two or more domains. Though search engine algorithms may not be able to detect such sites immediately, the practice is always discouraged.
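
Search engines detect duplicate content with near-duplicate fingerprints. A minimal illustration of the idea, in plain Python, is word shingling plus Jaccard similarity; production systems use more robust schemes such as MinHash or SimHash, but the principle is the same: heavily overlapping shingle sets suggest duplicated content.

```python
# Near-duplicate check: compare the sets of k-word shingles of two pages.
def shingles(text, k=4):
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

page1 = "search engine optimization is the study of ranking techniques"
page2 = "search engine optimization is the study of ranking methods"
print(f"similarity: {jaccard(shingles(page1), shingles(page2)):.2f}")
```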

 

Eleni K. et al. [13] followed a structured approach comprising five distinct steps: define the domain to be investigated; collect domain knowledge from both existing online community building and collaboration platforms and domain experts; analyse the gathered knowledge; and develop and evaluate the domain model.

 

Orestes V. et al. [14] attempted to formulate an ontological proposition for the intellectual capital (IC) domain. The study is motivated by the debate in contemporary thinking between different IC research streams (IC1 ostensive versus IC2 performative) and their different ontological perceptions of IC. The proposed ontological proposition aims to serve the epistemological requirements for developing a commonly accepted generic IC theory.

 

Stephen M. J. et al. [15] noted that new and expensive software upgrades are not always a step forward and can, in fact, be costly mistakes if they are not what the buyer wanted in the first place. They introduce the concepts of 'shareware' – software that can be given a trial run before being purchased – and 'public domain' software, which is available free of charge to any member of the public, and list a few of the better shareware programs and places where they can be obtained.

 

Mike T. [16] noted that there have been many attempts to study the content of the Web, through either human or automatic agents. He describes five different previously used Web survey methodologies, each justifiable in its own right, but presents a simple experiment that demonstrates concrete differences between them. The concept of crawling the Web also bears further inspection, including the scope of the pages to crawl, the method used to access and index each page, and the algorithm for identifying duplicate pages. The issues involved will be well known to many computer scientists but, with the increasing use of crawlers and search engines in other disciplines, they now require public discussion in the wider research community. He concludes that any scientific attempt to crawl the Web must make available the parameters under which it operates, so that researchers can in principle replicate experiments or take into account differences between methodologies, and introduces a new hybrid random page selection methodology.

 

6.    Redirection

Redirects are auto-forwards performed either at the browser level or at the server level; when a page is moved, there may still be links pointing to the old page. Problems arise when 302 redirects and meta refreshes are used: 302 redirects operate at the server level and meta refreshes at the browser level. The greatest criticism of redirects is that they have been exploited on a very large scale to hijack pages, with the original page replaced in the index by the hijacker's page. Yahoo was the first search engine to fix this bug. Such redirects are therefore not advisable.
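
The difference between a permanent 301 hop and a temporary 302 hop (the status once abused for page hijacking) can be inspected directly. The sketch below, assuming the third-party requests library, prints a URL's redirect chain; response.history holds the intermediate responses.

```python
# Inspect a redirect chain and label each hop as permanent (301) or not.
import requests

def show_redirect_chain(url):
    response = requests.get(url, timeout=10)
    for hop in response.history:                 # intermediate redirect responses
        kind = "permanent" if hop.status_code == 301 else "temporary/other"
        print(f"{hop.status_code} ({kind}): {hop.url}")
    print(f"final: {response.status_code} {response.url}")

show_redirect_chain("http://example.com/")
```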

 

Hsiang F. Y. et al. [17] noted that the World Wide Web (WWW) has become an extremely popular information service in which large HTTP packets result in network congestion. Proxy cache servers are widely deployed on the Internet to overcome this obstacle. However, the approach yields an undesirable phenomenon: a small set of users misuse proxy servers to mirror the entire contents of Web sites. This behavior wastes network resources, increases WWW servers' loads, increases users' waiting time, and violates copyrights. Approaches to designing a proxy server with WWW usage control, and to making the proxy server effective on local area networks, are proposed to prevent such abnormal WWW access and to prioritize WWW usage. Finally, a system, Proxy Breaker, is implemented to demonstrate the approaches; the implementation reveals that they are effective, such that the abnormal Web access does not reoccur.

 

Mike T. [18] covered the same ground as [16], reviewing five crawler-based Web survey methodologies, the issues involved in crawling the Web, and a new hybrid random page selection methodology.

 

S. Mary P. B. [19] explored the issue of changing URLs, provided a brief analysis of the degree to which change is occurring, and examined the range of potential solutions, discussing the reasons for outdated and inaccurate URLs.

 

7.    Keyword stuffing

Keyword stuffing means repeating certain keywords and phrases; indeed, SEOs base entire campaigns on keywords. It has a negative aspect too: optimizers may repeat keywords excessively on a page. Stuffing can be overdone and destroy the nature of the content itself.
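
A crude way to spot stuffing is keyword density. The sketch below is plain Python; the notion that repetition far above a few percent of a page's words looks unnatural is an informal heuristic for illustration, not a documented threshold of any search engine.

```python
# Keyword density: fraction of a page's words that are the given keyword.
import re

def keyword_density(text, keyword):
    words = re.findall(r"[a-z0-9]+", text.lower())
    hits = words.count(keyword.lower())
    return hits / len(words) if words else 0.0

page = "cheap flights cheap flights book cheap flights now cheap flights"
density = keyword_density(page, "cheap")
print(f"density of 'cheap': {density:.0%}")   # 4 of 10 words = 40%, clearly stuffed
```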

 

Jon F. R. [20] documented the sequence of steps taken in setting up a cross-cultural management course, making extensive use of the Internet to add to the reality of the experience for fourth-year and MBA students who, like most university students, have access to the Internet, a communication medium that allows inexpensive contact with other cultures.

 

8.    Hidden text

Hidden text is content that is visible to search engines but not to visitors. In this context, meta tags can be seen as hidden text: they can be read by search engines but are not seen by visitors. However, stuffing hidden content with keywords is always spamming, which a company should avoid at any cost.
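
The crudest hidden-text tricks use inline styles. The following Python sketch, assuming the third-party bs4 library, flags elements whose inline style hides them; text hidden via external CSS or background-matching colours would need a full renderer to catch, so this is only a first-pass check.

```python
# Naive scan for inline-style hidden text.
from bs4 import BeautifulSoup

SUSPECT = ("display:none", "visibility:hidden", "font-size:0")

def find_hidden_text(html):
    soup = BeautifulSoup(html, "html.parser")
    flagged = []
    for tag in soup.find_all(style=True):              # only elements with inline style
        style = tag["style"].replace(" ", "").lower()
        if any(marker in style for marker in SUSPECT):
            flagged.append(tag.get_text(strip=True))
    return flagged

print(find_hidden_text('<div style="display: none">cheap flights cheap flights</div>'))
```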

 

Chengli Z. et al. [21] focused, with an understanding of the characteristics of internet resources, on solving the problem of text resource aggregation in an open environment and the emergence it shows during aggregation over time. The authors process these text resources in both the space and time dimensions by viewing them as an event stream evolving over time, and attempt to discover evolutionary event patterns and, furthermore, to mine the emergence of text content.

 

Stanley L. et al. [22] presented an approach for performing knowledge discovery in texts through qualitative and quantitative analyses of high-level textual characteristics. Instead of applying mining techniques to attribute values, terms or keywords extracted from texts, the discovery process works over concepts identified in texts. Concepts represent real-world events and objects, and they help the user to understand ideas, trends, thoughts, opinions and intentions present in texts. The approach combines a quasi-automatic categorisation task (for qualitative analysis) with a mining process (for quantitative analysis).

 

Robyn S. et al. [23] described the design of a stemming algorithm for searching databases of Latin text. The algorithm uses a simple longest-match approach with some recoding but differs from most stemmers in its use of two separate suffix dictionaries (one for nouns and adjectives and one for verbs) for processing query and database words. These dictionaries and the associated stemming rules are arranged in such a way that the stemmer does not need to know the grammatical category of the word being stemmed. It is very easy to overstem in Latin: the stemmer developed here tends, rather, towards understemming, leaving sufficient grammatical information attached to the resulting stems to enable users to pursue very specific searches for single grammatical forms of individual words.

 

9.    Link popularity spam

Optimizers are always searching for new ways to link to their sites, sometimes adopting unwarranted means such as blog spam, guest-book spam, wiki spam and forum spam. Google, Yahoo, MSN and a few others took a collective step to combat link popularity spam: the introduction of a new link attribute, rel="nofollow". The reasoning behind this attribute was the belief that spammers would stop comment spam once such links no longer passed on ranking credit.
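
The effect of rel="nofollow" can be illustrated by the kind of rewriting that blog and wiki software applies to user-submitted comments. A sketch, assuming the third-party bs4 library:

```python
# Add rel="nofollow" to every link in user-submitted comment HTML.
from bs4 import BeautifulSoup

def nofollow_links(comment_html):
    soup = BeautifulSoup(comment_html, "html.parser")
    for link in soup.find_all("a", href=True):
        link["rel"] = ["nofollow"]        # bs4 stores rel as a list of tokens
    return str(soup)

print(nofollow_links('Nice post! <a href="http://spam.example/">visit me</a>'))
# Nice post! <a href="http://spam.example/" rel="nofollow">visit me</a>
```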

 

Amalia M. et al. [24] found that, perhaps surprisingly, a larger proportion of social scientists provided at least one outlink compared to the other disciplines investigated. By far the most linked-to file type was PDF, and the most linked-to type of target website was scholarly databases, especially the Digital Object Identifier website. Health science and life science researchers mainly linked to scholarly databases, while scientists from engineering, the hard sciences and the social sciences linked to a wider range of target websites. Both book sites and social network sites were rarely linked to, especially the former. Hence, whilst successful researchers frequently use the Web to point to online copies of their articles, there are major disciplinary and other differences in how they do this.

 

Olof Sundin [25] explored how trustworthy knowledge claims in Wikipedia are constructed by focusing on the everyday practices of Wikipedia editors, looking particularly at the role of references to external sources in the stabilization of knowledge in Wikipedia.

 

Shirley A. W. et al. [26] first identified papers on Twitter; secondly, following a review of the literature, they established a classification of the dimensions of microblogging research; thirdly, papers were qualitatively classified using open-coded content analysis, based on each paper's title and abstract, in order to analyze method, subject, and approach.

 

10. Blog-spam

Blogs have comment pages and have become very popular in the last few years; therefore, very popular blogs are targeted by spammers, who use the comment boxes to link to their own sites.

 

Alex M. Andrew [27] found that spam is with us in increasing volume. Automatic face recognition has been achieved. The opacity of an important part of the PC operating system is partially remedied by available software.

 

Mike T. et al. [28] found that while blog searching is a useful new technique, the results are sensitive to the choice of search engine, the parameters used and the date of the search. The quantity of spam also varies by search engine and search type.

 

Wu He [29] reviewed disparate discussions in the literature on the security aspects of mobile social media through blog mining and an extensive literature search. Based on the detailed review, the author summarizes key insights to help enterprises understand the security risks associated with mobile social media.

 

11. Guest-book spam

Guest-book spam is like blog spam: visitors are allowed to comment, thus generating link spam. Another form is wiki spam. A wiki is a collective website consisting of the collaborative work of many authors; its concept of open editing is popular and anybody can write anything, so it becomes easy to place a hyperlink on a page. In fact, wikis have to fight spam every day.

 

Alfred Ogle [30] reviewed the literature on hotel guest questionnaires, also commonly known in the industry as comment cards. Considered a hotel tradition, the ubiquitous questionnaire remains the primary method employed by mainstream hotels to elicit and record guest feedback despite shortcomings in data reliability and response rates. Hence questionnaires play a key facilitation role in the collection of guest feedback.

 

Heather M. Makrez [31] observed that as the world connects in new ways, so does our student body, so do our graduates and, therefore, so do our alumni. We must be able to be part of the conversations, because they are happening whether we know about them or not, and we need to be where our constituencies get their information if we want to be productive in reaching out to them.

 

12. FFA pages

FFA means "free for all": when someone submits a URL, the web page's script automatically adds a link. Overall, the technique is no longer popular nowadays.

 

Arthur R. et al. [32] described the large-scale numerical simulation of fluid flow as a discipline within the field of software engineering. As an example of such work, a vortex flowfield is analysed for its essential physical flow features, an appropriate mathematical description is presented (the Euler equations with an artificial viscosity model), a numerical algorithm to solve the mathematical equations is described, and the programming methodology that allows a very high degree of vectorization on the CYBER 205 is discussed.

 

13. Negative impact of web ranking

Certain attributes of a website can have a negative impact, or no impact at all, on its positioning in the search results.

 

F. Rimbach et al. [33] examined the marketing and sales implications of page ranking techniques, in terms of how companies may use knowledge of their operation to increase the chances of attracting custom.

 

Fereshteh D. et al. [34] revealed that half of the universities studied did not score well on the indexes used by the webometrics ranking model. Among all universities, King Saud University of Saudi Arabia scored well on most indexes.

 

14. Contents in images

If content is part of an image, the search engine cannot read it; bots cannot extract text from a GIF or JPEG file. Therefore, putting too much text content into images will prevent search engine bots from indexing it.
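
Since bots cannot read pixels, the practical mitigation is descriptive alt text on every image. A small audit sketch, assuming the third-party bs4 library, that lists images missing it:

```python
# Flag <img> tags with no (or blank) alt text, so the indexer has nothing to read.
from bs4 import BeautifulSoup

def images_missing_alt(html):
    soup = BeautifulSoup(html, "html.parser")
    return [img.get("src", "?") for img in soup.find_all("img")
            if not img.get("alt", "").strip()]

html = '<img src="logo.gif"><img src="chart.jpg" alt="2016 sales by region">'
print(images_missing_alt(html))   # ['logo.gif'] - chart.jpg carries alt text
```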

 

Bonnie H. et al. [35] described how, under the direction of Conrad Atkinson, the University of California at Davis's Department of Art and Art History used QBIC (Query by Image Content), a visual query software program then under development by IBM, to determine how effective it might prove in retrieving art images based on what they look like, rather than relying on text indexing. Beginning in January 1993, in a joint research project with IBM, a pilot database of art images selected from the department's slide library was constructed; the paper describes the construction of this pilot database and presents preliminary findings.

 

Anne Rindell [36] investigated the influence of inputs from consumers' past experiences of a company on their current image construction processes, in the context of non-food retailing.

 

15. HTML links

If pages have no direct HTML links, the search engines will miss them. If the navigational structure is implemented in JavaScript, there is a risk that search engine bots will miss those pages. One way to overcome this is to build a sitemap: one or more pages containing hyperlinks to all of the site's pages. If the navigation of a website consists of plain HTML links, a search engine spider can reach all of its pages.
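
The sitemap idea can be made concrete with the public sitemaps.org XML schema, so a spider can reach pages that JavaScript navigation would otherwise hide. A minimal generator using only the Python standard library:

```python
# Build a minimal XML sitemap (sitemaps.org schema) listing every page URL.
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    urlset = ET.Element("urlset",
                        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

print(build_sitemap(["https://example.com/", "https://example.com/contact"]))
```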

 

John D. et al. [37] proposed that a marketing-centric view of the connected enterprise implies that qualitative information in its systems and general document structures shares a marketing-based vocabulary, which they propose should be founded on POSIT. As any system needs to be accessed and understood by people, the basis of its construction and navigation principles should be transparent even though many component processes will be automated. Based on the use of natural language, a user-defined glossary stems from a selection of primitives and the relationships between them.

 

Mike Sands [38] looked at the potential for developing customer relationship strategies using the Internet (electronic customer relationship management, or ECRM), with particular relevance for SMEs. The work is based on qualitative research and attempts to integrate the two Internet technologies of the Web and e-mail into a push-pull strategy. Aspects of "control" of the message in ECRM are examined, in particular whether democratic e-communities have a part to play for companies looking to improve their ECRM. In arriving at conclusions regarding the implications for commercial organisations, it draws on published work in the educational arena.

 

16. Dynamic URLs

Sites backed by a dynamic database may have long dynamic URLs with many variables, such as:
http://www.--------/International Journal of science and management/July15/-----------/call for papers
On the other hand, compare this to the ----------- site:
http://www.----------------------.com/July15/-------------/call for papers

 

Dynamic URLs like the above can cause problems with indexing, according to Google's guidelines. In fact, if we rewrite the URLs of a website to look like those of a static site, they become acceptable.
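
To illustrate the rewriting idea, the sketch below maps a parameter-laden dynamic URL onto a static-looking path using Python's standard urllib.parse. The parameter names (issue, page) and the example URL are hypothetical; real sites usually do this with web server rewrite rules rather than application code.

```python
# Rewrite a dynamic query-string URL into a static-looking path.
from urllib.parse import urlparse, parse_qs

def to_static_url(dynamic_url):
    parts = urlparse(dynamic_url)
    params = parse_qs(parts.query)
    issue = params["issue"][0]          # assumed parameter names for illustration
    page = params["page"][0]
    return f"{parts.scheme}://{parts.netloc}/{issue}/{page}"

print(to_static_url("https://example.com/view.php?issue=July15&page=call-for-papers"))
# https://example.com/July15/call-for-papers
```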

 

Olivier F. et al. [39] argued that the Internet is promised a brilliant future among the favorite tools of marketing researchers. They develop a typology of Internet marketing surveys showing the existence of eight different designs that can be used by marketers. However, researchers who plan to develop research using the Internet need to be aware of several problems related to this new tool; in particular, the nature of the Internet creates distinct sampling problems.

 

Catherine D. et al. [40] used a theory-building approach to understand how consumers perceive their experience of navigating an online shopping environment and identified the facets that make up its experiential intensity. The paper first reviews the literature on the experiential attributes of web sites, then outlines the methodology and explains the use of a "shopping with consumers" approach to uncover consumer perceptions.

 

APPLICATIONS

A company mostly relies on its official web site to build a reliable customer base, to interact with and get feedback from customers, and to use the site for advertising purposes. With SEO, the applications discussed below are employed so that a web site will be listed at the top of the list of results when a user types a keyword into Google, Yahoo, MSN or another search engine.

 

1.    Make your site easily searchable, and take time developing your website's content.

For most websites, content is the key. When writing the content of your web pages, insert vital keywords that relate to the products and services your company offers. Make it brief and concise, yet informative. Internet users do not actually read a web site's content word for word; they skim through the text and concentrate on the articles they need and the data they find interesting. Keep your facts up-to-date so that users will visit your web site more often.

Andrew Cox [41] discussed content management and its requirements beyond text and website management, such as bibliographic data, digital images, learning materials and links, suggesting that librarians have already been doing this for years. He concludes that content management has overtaken knowledge management as a buzzword.

 

Andrew K. et al. [42] noted that social websites have become a major medium for social interaction. From Facebook to MySpace to emergent sites like Twitter, social websites are increasing exponentially in user numbers and unique visits every day. How do these websites encourage sociability? What features or design practices enable users to socialize with other users? The paper explores sociability on the social web and details how different social websites encourage their users to interact.

 

Sam Paltridge [43] provided brief overviews of new interactive tools and indicators surrounding content analysis on the web, focusing on those windows and tools that are publicly accessible. He contends that emerging Internet indicators need to be further analysed if they are to be applied usefully in building strategies, and finishes with a table showing the structure of different domains mapped by AltaVista.

 

2.    Make the overall design user friendly.

In most cases, the rule "the simpler, the better" applies. Adapt the design to your target market. If your target audience is of a younger age, make sure that the colors and designs are funky, youthful and attention-grabbing. For a more mature audience, you can use subdued colors and a more elegant theme and overall web site design.

 

Stephen J. Lukasik [44] described how the success of the rapid introduction of digital information technology and networking, replacing analog telephony and inflexible technical rules governing the use of the electromagnetic spectrum, resulted from relatively minor modifications in the staffing of a technical planning office that lacked currency with the innovations in the technology supporting the communication and broadcasting industries. The support of the chairman and the commissioners, and their confidence in the leadership of the office, were critical to success.

 

Ross S. et al. [45] found that using social networking websites (SNWs) to screen applicants offers benefits to organizations in the form of a large amount of information about applicants, which may be used to supplement other information (e.g. a resume). It may also help a firm address "negligent hiring" legal concerns. However, other legal considerations, as well as issues pertaining to information accuracy, privacy, and justice, argue against using such information.

 

3.    Vary the keywords for each individual web page.

Think of every possible keyword or phrase that could apply to your products and services. Each user is different and may not use the keyword that you expect them to type into the search bar. Search engines offer tools that will guide you through finding the correct keywords for search engine optimization.

 

Bing T. et al. [46] observed that the dynamic nature of information content on the Web has posed a serious problem for users who constantly need to keep track of the latest updates to specific information. Traditional search engines enable users to retrieve potentially relevant Web information, but they do not track and monitor Web pages based on users' interests. Web information monitoring systems, on the other hand, are designed specifically to help users track and monitor Web information.

 

Timothy C. Craven [47] found that the Yahoo! set showed generally no significant difference in the inclusion of descriptions and keywords between generator-identifying and other pages. The geocities.com set did show a significant difference for both keywords and descriptions. Exact repetition of descriptions or keywords between pages on the same site did not generally correlate significantly with identified generators.

 

4.    Make sure that your hyperlinks are visible, accessible and informative.

Providing users with hyperlinks or HTML links that are useful and easy to find is yet another way to optimize your web site.

 

Jocelyn C. et al. [48] extended the concept of boundary crossing to crossings in a polycontextual online environment. It updates literature on communities of practice by outlining the dynamics of a complex online community system. It provides an explanation for how personal knowledge evolves to fit emerging trends and considers how information systems can support deep knowledge transfer.

 

5.    Learn how to lead traffic to your site

Seek the advice of experts in this field if you do not know how to lead traffic to your site. Develop a plan and learn how to use the necessary tools to pull traffic your way. Gather all the statistical data you can use to keep your web site at the top of the search engine's list. Vary your web site's contents and keywords to adapt to the current trend; if there is a particular word or phrase that you have not used when searching for keywords, use it to lead more traffic your way. Be flexible enough to change the web site's contents and fulfill the ever-changing needs of online users. All in all, you will have your work cut out for you if you want to employ do-it-yourself search engine optimization. With proper research, creativity and enough knowledge, you can definitely optimize your web site and yield very promising results for your business.

 

Lan Anh Tran [49] explored an evaluation approach and developed a model of web site evaluation that includes the specification of evaluation criteria, key issues to discuss, and recommendations for improving the web site.

 

Junga K. et al. [50] found that, with regard to environmental factors, the more users perceive their audience to be a collection of weak ties, the more likely they are to share information on SNSs, independent of the size of their networks. Personal factors such as information self-efficacy, positive social outcome expectations, and feelings of sharing enjoyment were found to be significant predictors of sharing activities. In addition, a significant interaction effect was found, such that the effects of social outcome expectations on sharing activities on SNSs are manifested to a greater extent when users perceive their audience as weak ties rather than strong ties.

 

ISSUES AND CHALLENGES:

·      SEO does not necessarily require financial investments, other than the salary of the labour involved in the optimization process.

·      SEO can be done either in-house or it can be outsourced to a specialized company.

·      SEO may be the most cost-effective marketing method for some companies and increase sales tremendously.

·      Based on several different studies by GVU, Forrester Research, and the Pew Internet and American Life Project, it can be estimated that the percentage of web users who rely on search engines is between 50 and 85.

·      High search engine rankings have a positive effect on branding.

·      It may take weeks, months or even years for a web site to obtain good organic ranking. This fact makes it nearly impossible to execute quick marketing campaigns with SEO.

·      A company may put a lot of effort and money into SEO, but receive no return, because there is no guaranteed method to get to the top of the natural results. Therefore, SEO is not as reliable as some other marketing methods.

·      It is hard to determine the value of high organic ranking, prior to putting in the effort and money. The estimated traffic from a particular term may be much lower than initially expected, thus lowering the value of the term. It should be noted that it might also be much higher.

·      Maintaining a stable position at the top of the natural SERPs is not guaranteed. A web site may rank number one in the organic results of a search engine on one day and disappear from the SERPs completely the next day.

 

CONCLUSION:

The use of search engines follows only email and instant messaging as the most popular activity among internet users worldwide. In terms of traffic, they can easily account for 75% or more of visitors to a popular site. It therefore only makes sense to ensure that any information authored specifically for the web is appropriately presented for indexing. Search engine optimization is a most effective advertising tool for many organizations and can improve their rankings relative to competing firms, though it requires extensive knowledge and time to implement. A novel algorithm based on Phrase Prioritization has been proposed: emphasis and priority are given to phrases, so that when a user types more than one word, the words may be considered together and documents containing those phrases are given higher priority. A data structure supporting phrase search has been proposed, namely a linked implementation of a sparse matrix that maps the relationship between documents and the words of the dictionary. Moreover, emphasizing the need to categorize words into specific and general words is also a new and beneficial idea, which can increase the relevance of documents containing specific words as compared to general words.
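
To make the phrase-search idea concrete, here is a generic positional inverted index in Python that answers exact phrase queries; it is an illustrative sketch of the general technique, not the sparse-matrix structure proposed above.

```python
# Positional inverted index: word -> {doc_id: [positions]}, supporting phrase queries.
from collections import defaultdict

index = defaultdict(dict)

def add_document(doc_id, text):
    for pos, word in enumerate(text.lower().split()):
        index[word].setdefault(doc_id, []).append(pos)

def phrase_search(phrase):
    words = phrase.lower().split()
    docs = set(index[words[0]])
    for w in words[1:]:                 # candidates must contain every word
        docs &= set(index[w])
    hits = []
    for doc in docs:                    # keep docs where the words appear adjacently
        if any(all(p + i in index[w][doc] for i, w in enumerate(words))
               for p in index[words[0]][doc]):
            hits.append(doc)
    return hits

add_document(1, "search engine optimization survey")
add_document(2, "optimization of engine search")
print(phrase_search("search engine"))   # [1] - only doc 1 has the exact phrase
```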

 


 

 

TECHNIQUES, APPLICATIONS AND FUTURE SCOPE

1.    Categorization

Application: SEO is the process of getting the best possible rankings or visibility within the search results displayed by a search engine after a search is performed for a particular keyword or phrase. All major search engines, such as Google, Bing, Yahoo and AOL, have such results. Today, online marketing has become the most successful form of business, as the Internet has grown into a major platform. Any kind of activity performed online for the promotion of your business and website is called 'online marketing' or 'Internet marketing'. SEO is not only the process of boosting your visibility on search engines, but also a helpful tool for conversions.

Future scope: The search engine should ask the user, before searching, for the field in which they are seeking information. By doing so, the user will get appropriate information in the appropriate field. It would also be helpful for ranking, as unrelated sites will not come up and other sites will get a chance to come up. The categorization of different fields can be shown in a combo box on the home page itself; after the user clicks on the particular field with the specified topic, the search engine starts searching and shows the results.

2.    Rating

Application: SEO considers how search engines work, what people search for, the actual search terms or keywords typed into search engines, and which search engines are preferred by the targeted audience. Optimizing a website may involve editing its content, HTML and associated coding, both to increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines. Promoting a site to increase the number of backlinks, or inbound links, is another SEO tactic.

Future scope: Search engines could show a rating for each site alongside the site name, helping users find the appropriate site. Ratings can be computed on a click-and-stickiness basis: if users click on and spend a lot of time on a particular site, that site gets a higher rating than others.

3.    Handling abbreviations

Application: If you are searching for something using an abbreviation among your query terms, it is not a bad idea to try the same query with the abbreviation expanded, especially if you think there is a chance that you might miss something. If you are searching for information about a space agency, searching for NASA without also searching for National Aeronautics and Space Administration may not matter much; but if you were searching for information about the North American Saxophone Alliance and only used "NASA", it would. If you are publishing something on the Web that contains abbreviations, it is often wise to use both the abbreviation and the expanded version on the same page, and to check what else the abbreviation might stand for and what its search results look like. In the first 20 results for "NASA" in Google, all pages refer to the space agency except the 9th result, about the DJ collective N.A.S.A., and the 10th, about the racing organization, the National Auto Sport Association (no saxophonists in a quick look at the top 50 results). A related patent application is from Yahoo, but researchers at Google and Bing may be considering many of the same ideas.

Future scope: The use of abbreviations is a very common practice among users. Instead of typing the full name, users try to search on the abbreviation and, as a result, get results from miscellaneous fields. If categorization were applied, most of this problem would be solved, but some problems would still remain.

4.    Meta search engines and meta tags

Application: Want top search engine rankings? Just add meta tags and your website will magically rise to the top, right? Wrong. Meta tags are one piece in a large algorithmic puzzle that major search engines look at when deciding which results are relevant to show users who have typed in a search query. While there is still some debate about which meta tags remain useful and important to search engines, meta tags are definitely not a magic solution to gaining rankings in Google, Bing, Yahoo or elsewhere, so let us kill that myth at the outset. However, meta tags do help tell search engines and users what your site is about, and when they are implemented incorrectly, the negative impact can be substantial.

Future scope: It is good to use a meta search engine when searching for a term: you submit keywords in its search box, and it transmits the search simultaneously to several individual search engines and their databases of web pages. Within a few seconds you get back results from all the search engines queried. Meta search engines do not own a database of web pages; they send search terms to the databases maintained by search engine companies. Today, users are unaware of this and use only what they know, such as Google or Yahoo, for searching; making use of meta search engines is good practice.

 

 


 

 

REFERENCES

1.     Claire Spencer, Nick Giles, (2001) "The planning, implementation and evaluation of an online marketing campaign", Journal of Communication Management, Vol. 5 Iss: 3, pp.287 – 299

2.     Christopher Chan, (2012) "Marketing the academic library with online social network advertising", Library Management, Vol. 33 Iss: 8/9, pp.479 - 489

3.     Carol King, Charles Oppenheim, (1994) "Marketing of online and CD-ROM databases", Online and CD-ROM Review, Vol. 18 Iss: 1, pp.15 - 26

4.     Gerard Hastings, Kathryn Angus, (2011) "When is social marketing not social marketing?", Journal of Social Marketing, Vol. 1 Iss: 1, pp.45 - 53

5.     Jane Yarnall, (1998) "Career anchors: results of an organisational study in the UK", Career Development International, Vol. 3 Iss: 2, pp.56 – 61

6.     Hongwei He, (2012) "Corporate identity anchors: a managerial cognition perspective", European Journal of Marketing, Vol. 46 Iss: 5, pp.609 - 625

7.     Ivo Lašek, Peter Vojtáš, (2013) "Various approaches to text representation for named entity disambiguation", International Journal of Web Information Systems, Vol. 9 Iss: 3, pp.242 - 259

8.     Jeffrey Simser, (2008) "Money laundering and asset cloaking techniques", Journal of Money Laundering Control, Vol. 11 Iss: 1, pp.15 - 24

9.     Kay Gilbert, (2005) "The role of job evaluation in determining equal value in tribunals: Tool, weapon or cloaking device?", Employee Relations, Vol. 27 Iss: 1, pp.7 - 19

10.  Lois Landis Kurowski, (2002) "Cloaked culture and veiled diversity: why theorists ignored early US workforce diversity", Management Decision, Vol. 40 Iss: 2, pp.183 - 191

11.  Peter Howley, Emma Dillon, (2012) "Modelling the effect of farming attitudes on farm credit use: a case study from Ireland", Agricultural Finance Review, Vol. 72 Iss: 3, pp.456 - 470

12.  Koen Mondelaers, Joris Aertsens, Guido Van Huylenbroeck, (2009) "A meta-analysis of the differences in environmental impacts between organic and conventional farming", British Food Journal, Vol. 111 Iss: 10, pp.1098 - 1119

13.  Eleni Kaliva, Eleni Panopoulou, Efthimios Tambouris, Konstantinos Tarabanis, (2013) "A domain model for online community building and collaboration in eGovernment and policy modelling", Transforming Government: People, Process and Policy, Vol. 7 Iss: 1, pp.109 - 136

14.  Orestes Vlismas, George Venieris, (2011) "Towards an ontology for the intellectual capital domain", Journal of Intellectual Capital, Vol. 12 Iss: 1, pp.75 – 110

15.  Stephen M. Jackson, Serafim Halkias, (1990) "The Truth Behind Shareware And Public Domain Software", OCLC Micro, Vol. 6 Iss: 6, pp.17 - 20

16.  Mike Thelwall, (2002) "Methodologies for crawler based Web surveys ", Internet Research, Vol. 12 Iss: 2, pp.124 - 138

17.  Hsiang-Fu Yu, Li-Ming Tseng, (2002) "Abnormal Web usage control by proxy strategies", Internet Research, Vol. 12 Iss: 1, pp.66 - 75

18.  Mike Thelwall, (2002) "Methodologies for crawler based Web surveys ", Internet Research, Vol. 12 Iss: 2, pp.124 - 138

19.  S. Mary P. Benbow, (1998) "File not found: the problems of changing URLs for the World Wide Web", Internet Research, Vol. 8 Iss: 3, pp.247 - 250

20.  Jon Franklin Ramsoomair, (1997) "The Internet in the context of cross-cultural management", Internet Research, Vol. 7 Iss: 3, pp.189 - 194

21.  Chengli Zhao, Dongyun Yi, (2012) "Text resource emergence: discovering evolutionary event patterns from web texts", Kybernetes, Vol. 41 Iss: 9, pp.1386 - 1395

22.  Stanley Loh, José Palazzo M. de Oliveira, Fábio Leite Gastal, (2001) "Knowledge discovery in textual documentation: qualitative and quantitative analyses", Journal of Documentation, Vol. 57 Iss: 5, pp.577 - 590

23.  Robyn Schinke, Mark Greengrass, Alexander M. Robertson, Peter Willett, (1996) "A stemming algorithm for Latin Text databases", Journal of Documentation, Vol. 52 Iss: 2, pp.172 - 187

24.  Amalia Más-Bleda, Mike Thelwall, Kayvan Kousha, Isidro F. Aguillo, (2014) "Successful researchers publicizing research online: An outlink analysis of European highly cited scientists' personal websites", Journal of Documentation, Vol. 70 Iss: 1, pp.148 - 172

25.  Olof Sundin, (2011) "Janitors of knowledge: constructing knowledge in the everyday life of Wikipedia editors", Journal of Documentation, Vol. 67 Iss: 5, pp.840 - 862

26.  Shirley A. Williams, Melissa M. Terras, Claire Warwick, (2013) "What do people study when they study Twitter? Classifying Twitter related academic papers", Journal of Documentation, Vol. 69 Iss: 3, pp.384 - 410

27.  Alex M. Andrew, (2008) "Spam, biometrics and a blog", Kybernetes, Vol. 37 Iss: 8, pp.1091 - 1093

28.  Mike Thelwall, Laura Hasler, (2007) "Blog search engines", Online Information Review, Vol. 31 Iss: 4, pp.467 – 479

29.  Wu He, (2013) "A survey of security risks of mobile social media through blog mining and an extensive literature search", Information Management and Computer Security, Vol. 21 Iss: 5, pp.381 - 400

30.  Alfred Ogle (2009), Morphology of a hotel tradition: The guest questionnaire, in Arch G. Woodside, Carol M. Megehee, Alfred Ogle (ed.) Perspectives on Cross-Cultural, Ethnographic, Brand Image, Storytelling, Unconscious Needs, and Hospitality Guest Research (Advances in Culture, Tourism and Hospitality Research, Volume 3) Emerald Group Publishing Limited, pp.169 - 214

31.  Heather M. Makrez (2011), Am I invited? Social media and alumni relations, in Laura A. Wankel, Charles Wankel (ed.) Higher Education Administration with Social Media (Cutting-edge Technologies in Higher Education, Volume 2) Emerald Group Publishing Limited, pp.229 - 248

32.  Arthur Rizzi, Charles J. Purcell, (1985) "Large CYBER 205 model of the Euler equations for vortex-stretched turbulent flow around delta wings", Engineering Computations, Vol. 2 Iss: 1, pp.63 - 70

33.  F. Rimbach, M. Dannenberg, U. Bleimann, (2007) "Page ranking and topic-sensitive page ranking: micro-changes and macro-impact", Internet Research, Vol. 17 Iss: 1, pp.38 - 48

34.  Fereshteh Didegah, Marzieh Goltaji, (2010) "Link analysis and impact of top universities of Islamic world on the world wide web", Library Hi Tech News, Vol. 27 Iss: 8, pp.12 - 16

35.  Bonnie Holt, Laura Hartwick, (1994) "Retrieving art images by image content: the UC Davis QBIC project", Aslib Proceedings, Vol. 46 Iss: 10, pp.243 - 248

36.  Anne Rindell, (2013) "Time in corporate images: introducing image heritage and image-in-use", Qualitative Market Research: An International Journal, Vol. 16 Iss: 2, pp.197 - 213

37.  John Driver, Panos Louvieris, (2002) "Integrating the enterprise: the role of a language system for a marketing conception", Qualitative Market Research: An International Journal, Vol. 5 Iss: 3, pp.172 – 187

38.  Mike Sands, (2003) "Integrating the Web and e-mail into a push-pull strategy", Qualitative Market Research: An International Journal, Vol. 6 Iss: 1, pp.27 - 37

39.  Olivier Furrer, D. Sudharshan, (2001) "Internet marketing research: opportunities and problems", Qualitative Market Research: An International Journal, Vol. 4 Iss: 3, pp.123 - 129

40.  Catherine Demangeot, Amanda J. Broderick, (2006) "Exploring the experiential intensity of online shopping environments", Qualitative Market Research: An International Journal, Vol. 9 Iss: 4, pp.325 - 351

41.  Andrew Cox, (2002) "Content management: introduction to VINE 127 (1)", VINE, Vol. 32 Iss: 2, pp.3 - 5

42.  Andrew Keenan, Ali Shiri, (2009) "Sociability and social interaction on social networking websites", Library Review, Vol. 58 Iss: 6, pp.438 - 450

43.  Sam Paltridge, (1999) "Mining and mapping web content", info, Vol. 1 Iss: 4, pp.327 – 342

44.  Stephen J. Lukasik, (2009) "Unleashing innovation: making the FCC user-friendly", info, Vol. 11 Iss: 5, pp.76 - 85

45.  Ross Slovensky, William H. Ross, (2012) "Should human resource managers use social media to screen job applicants? Managerial and legal issues in the USA", info, Vol. 14 Iss: 1, pp.55 - 69

46.  Bing Tan, Schubert Foo, Siu Cheung Hui, (2001) "Web information monitoring: an analysis of Web page updates", Online Information Review, Vol. 25 Iss: 1, pp.6 - 19

47.  Timothy C. Craven, (2005) "Web authoring tools and meta tagging of page descriptions and keywords", Online Information Review, Vol. 29 Iss: 2, pp.129 - 138

48.  Jocelyn Cranefield, Pak Yoong, (2009) "Crossings: Embedding personal professional knowledge in a complex online community environment", Online Information Review, Vol. 33 Iss: 2, pp.257 - 275

49.  Lan Anh Tran, (2009) "Evaluation of community web sites: A case study of the Community Social Planning Council of Toronto web site", Online Information Review, Vol. 33 Iss: 1, pp.96 - 116

50.  Junga Kim, Chunsik Lee, Troy Elias, (2015) "Factors affecting information sharing in social networking sites amongst university students: Application of the knowledge-sharing model to social networking sites", Online Information Review, Vol. 39 Iss: 3, pp.290 - 309

 

 

 

 

Received on 03.05.2016       Modified on 17.05.2016

Accepted on 10.06.2016      ©A&V Publications All right reserved

Research J. Science and Tech. 2016; 8(2):59-70

DOI: 10.5958/2349-2988.2016.00008.5