
Increasing Your Analytics Productivity With UI Improvements

We’re always working on making Analytics easier for you to use. Since launching the latest version of Google Analytics (v5), we’ve been collecting qualitative and quantitative feedback from our users in order to improve the experience. Below is a summary of the latest updates. Some you may already be using, but all will be available shortly if you’re not seeing them yet. 


Make your dashboards better with new widgets and layout options



Use map, device, and bar chart widgets to create a dashboard perfectly tailored to your audience. Get creative with these and produce, share, and export custom dashboards that look exactly how you want, with the metrics that matter to you. We have also introduced improvements for customizing the layout of your dashboards to better suit individual needs. In addition, dashboards now support advanced segments!
Get to your most frequently used reports quicker

You’ll notice we’ve made the sidebar of Google Analytics even more user-friendly, including quick access to your all-important shortcuts:


If you’re not already creating Shortcuts, read more about them and get started today. We have also enabled shortcuts for real-time reports, which lets you, for example, set up a shortcut for a specific region and see its traffic in real time.
Navigate to recently used reports and profiles quicker with Recent History


Ever browse around Analytics and want to go back to a previous report? Instead of digging for it, use Recent History to jump straight back.
Improving search functionality



Better Search lets you search across all of your reports, shortcuts, and dashboards at once to find what you need.
Keyboard shortcuts

In case you've never seen them, Google Analytics does have some keyboard shortcuts. Be sure you’re using them to move around faster. Here are a few useful ones:

Search: s or / (open the quick search list)
Account List: Shift + a (open the quick account list)
Set date range: d + t (set the date range to today)
On-screen guide: Shift + ? (view the complete list of shortcuts)
Easier YoY Date Comparison


The new quick-selection option lets you choose "previous year" to prefill the date range, making year-over-year analysis much faster.
Export to Excel & Google Docs 

Exporting keeps getting better, and now includes native Excel XLSX support and export to Google Docs:


We hope you find these improvements useful. As always, feel free to let us know how we can make Analytics even more usable, so you can get the information you need and take action faster.

Guidelines for Breadcrumb Usability And SEO

What Is a Breadcrumb?
A breadcrumb, or breadcrumb trail, is a navigation aid used in user interfaces. It lets users keep track of their location within a program or document. The term comes from the trail of breadcrumbs left by Hansel and Gretel in the popular fairy tale.


Importance of Breadcrumbs For Usability

Nielsen and other usability experts make various pro-breadcrumb arguments, among them that breadcrumbs:

  • Help users visualize their current location in relation to the rest of the web site.
  • Enable one-click access to higher site levels, and thus help users who enter a site through search or deep links.
  • Take up very little space on the page.
  • Reduce the bounce rate. In fact, breadcrumbs can be an excellent way to entice first-time visitors to continue browsing through a web site after having viewed the landing page.
  • Enable users to reach pages and complete tasks faster.

Guidelines for Breadcrumb Usability And SEO
While several UI techniques can be used to render breadcrumbs, there are generally agreed-upon usability and SEO guidelines (a minimal rendering sketch follows the list). Breadcrumbs should:


  • Never replace primary navigation. They have been devised as a secondary navigation aid and should always be used as such.
  • Not be used if all the pages are on the same level. Breadcrumbs are intended to show hierarchy.
  • Show hierarchy and not history. To go back, users use the browser’s back button. Replicating this facility defies the purpose of having breadcrumbs.
  • Be located in the top half of your web page. The trail can sit at the very top of the page, just below the main navigation bar, or just above the headline of the current page.
  • Not be too large. The breadcrumb trail is a secondary navigation aid and hence its size and prominence should be less than that of the primary navigation.
  • Progress from the highest level to the lowest, one step at a time.
  • Start with the homepage and end with the current page.
  • Have a simple link for each level (except for the current page). If the trail includes non-clickable elements such as page titles, include them but clearly differentiate which parts are clickable.
  • Have a simple, one-character separator between each level (usually “>”).
  • Not be cluttered with unnecessary text such as “You are here” or “Navigation”.
  • Include the full navigational path from the homepage to the current page. Not displaying certain levels will confuse users.
  • Include the full page title in the breadcrumb trail. Also ensure consistency between the page address and the breadcrumb. If the page titles include keywords, then this will make your breadcrumbs both human and search engine friendly.
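To make the markup side of these guidelines concrete, here is a minimal sketch in Python (the page titles and URLs are made up for illustration) that renders a trail with a simple ">" separator, links every level except the current page, and starts at the homepage:

```python
# Hypothetical helper (titles and URLs are placeholders) that renders a breadcrumb
# trail per the guidelines: ">" separator, every level linked except the current page.
from html import escape

def render_breadcrumb(trail):
    """trail: ordered list of (title, url) tuples from the homepage to the current page."""
    parts = []
    for position, (title, url) in enumerate(trail):
        if position == len(trail) - 1:
            parts.append(escape(title))  # current page: plain text, no link
        else:
            parts.append(f'<a href="{escape(url)}">{escape(title)}</a>')
    return " &gt; ".join(parts)

print(render_breadcrumb([
    ("Home", "/"),
    ("Guides", "/guides/"),
    ("Breadcrumb Usability", "/guides/breadcrumb-usability/"),
]))
```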

Document Search Engines to Search for SEO Documentation

A while ago I listed a few ways to find PDF tutorials, and later shared my collection of SEO cheat sheets, which showed that SEJ readers are definitely interested in SEO documentation. So today I am adding a few document search engines to your arsenal.
Brupt is a Google custom search engine that searches for .doc, .pdf, Excel, and PowerPoint documents (.doc files by default). The search results interface looks like Google’s. You can choose a different file extension to search right from the SERPs.
Voelspriet lets you search for .doc, .pdf, Excel, PowerPoint, RTF, TXT, WRI, PS, and BAT files. Results open in a new tab as a Google advanced filetype: search:
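If you want to see roughly what these tools are doing under the hood, here is a minimal sketch (the query and file extensions are just examples) that builds a Google advanced filetype: search URL yourself:

```python
# A sketch of the advanced query these document search engines run for you:
# the standard Google "filetype:" operator, OR-ed across several extensions.
from urllib.parse import urlencode

def filetype_search_url(query, extensions=("pdf", "doc", "xls", "ppt")):
    operators = " OR ".join(f"filetype:{ext}" for ext in extensions)
    return "https://www.google.com/search?" + urlencode({"q": f"{query} ({operators})"})

# e.g. search for SEO checklists published as documents
print(filetype_search_url("seo checklist"))
```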
DocJax is a more fun tool. It has search-as-you-type suggestions and is powered by Google and Yahoo. It also lets you search all four file types (.doc, .pdf, Excel, and PowerPoint) simultaneously or separately, and gives a preview link. The site also has a community; if you are willing, you can join and save your favorite documents in your account area.

SEO & Website Redesign: Relaunching Without Losing Sleep

Redesigns can make an ugly site pretty, but they can also make a high traffic site invisible. Keep these tips and no-nos in mind and you can keep yourself out of the CEO’s office.
SEO Redesign: Teamwork First
It should go without saying, but SEOs, developers and designers must work together cohesively during the site redesign process.
Too often, companies look to refresh the look of their site and, in the end, destroy their search engine presence. How? This can happen for any of a myriad of reasons, from coding errors and SEO-unfriendly design practices to even more disastrous ones (e.g., content duplication, URL rewriting without redirection, information architecture changes that abandon search-engine-friendly techniques).
Starting the redesign process with a collaborative call between the SEO team, designer, developer, and company decision maker(s) is always the best first step.
Often there are two attitudes present. Either, “We are redesigning our site and are not open to your ideas…but don’t let us do anything wrong,” or the other attitude (and my favorite), “Let’s work together to achieve a refreshed look and functionality and instill any missing SEO opportunities if possible.” 
To satisfy both scenarios, your job as the SEO is to inform designers and developers of the mistakes to avoid, and to tell all parties which SEO revisions should be made to the site and what search engines have recently been paying attention to.
Page Load Time 
A site redesign gives you the opportunity to re-code, condense externally referenced files, and achieve faster load times.
Don’t let the designer use the word “Flash” during your call(s). In an attempt to make a new site look pretty, heavy reliance on multimedia can have a negative effect on site speed. Ignoring this is bad: Google has stated in the last year that site speed is a ranking consideration, and slower sites annoy users.
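One simple way to keep the team honest is to benchmark response time with a quick script. Below is a minimal sketch (the URL is a placeholder; this measures server response and download time, not full browser rendering) that you can run before and after the redesign:

```python
# Time how long a page takes to respond and download, averaged over a few runs.
import time
import requests  # third-party: pip install requests

def average_fetch_time(url, runs=3):
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        response = requests.get(url, timeout=30)
        response.raise_for_status()
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

print(f"Average fetch time: {average_fetch_time('https://www.example.com/'):.2f}s")
```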
Content Duplication
Ensure that your development environment or beta sections of the site are excluded from search engines' view. Relaunching your site after these elements have been indexed by the engines means your cool new site is a duplicate, and you will be in a mad dash trying to redirect the development environment that leaked. Also, make sure there are no live copies on other servers that are visible to the search engines.
Another form of content duplication is the creation of new URLs without properly redirecting old URLs via a 301 permanent redirect. This will leave search engines wondering which page should be ranked.
It's also worth mentioning that 301s are a must and that 302 temporary redirects should not be used. Make it commonplace in the redesign process that no one uses the word "delete" in reference to site content. You should never delete any pages; they should be permanently redirected to the most relevant page on the new site.
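A practical safeguard is a script that walks your old-to-new URL map and confirms that every old URL answers with a 301 (not a 302) and lands on the intended page. A minimal sketch, with placeholder URLs:

```python
# Verify each old URL issues a 301 as its first hop and ends up at the expected new URL.
import requests  # third-party: pip install requests

REDIRECT_MAP = {
    "https://www.example.com/old-page.html": "https://www.example.com/new-page/",
    "https://www.example.com/old-category/": "https://www.example.com/new-category/",
}

for old_url, expected in REDIRECT_MAP.items():
    response = requests.get(old_url, allow_redirects=True, timeout=30)
    first_hop = response.history[0].status_code if response.history else response.status_code
    ok = first_hop == 301 and response.url == expected
    print(f"{old_url}: first hop {first_hop}, landed on {response.url} -> {'OK' if ok else 'CHECK'}")
```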
Content Restrictions 
Before you push the site live, it’s important to identify which pages shouldn't be crawled.
Are there new parts of the site that shouldn’t be seen by search engines, login pages, etc.? Does the new site utilize dynamic URL creation or parameters that will need to be restricted?
Conversely, what pages might be restricted that shouldn’t be? Is there a folder in the robots.txt file that is inaccurately excluding pages that should be visible? Have meta robots tags been placed on pages that shouldn’t have them?
Tracking
Make sure that your analytics tracking code is placed back in the page source before the site goes live. Additionally, any conversion pages should have the appropriate conversion tracking code appended. Nothing makes an SEO want to cry like lost data.
Information Architecture
A redesign is the perfect time to rethink the direction of the site. Go beyond the need for a refreshed look and analyze the hierarchy of your content. Google is looking at this so be sure there is a clear view of the overall site theme as well as sub-themes flowing into the site through an appropriate folder structure.
URL Rewrite
If you're redesigning and shaking a site down to its core, there's no better time than now. You have the attention and devotion of the site developer to make your URLs right.
This is a continuation of the Information Architecture revisions. Be mindful of folder structure as well as relevant, keyword-rich text usage in page names.
Want to go the extra mile? Have the filename extensions removed so that, down the road, if you redesign the site again and use a different scripting language, you won’t have to do another URL rewrite.
Lastly, make sure all rewritten URLs include a 301 permanent redirect from the old URL to the new URL.
W3C/Section 508/Code Validation
Take advantage of this period to address code issues and how your site adheres to W3C and Section 508 compliance factors. Clean, standards-compliant, accessible code makes the visit successful for search engine crawlers as well as your human site visitors, and now is your chance to get it right.
Usability
Can you make the intended visit funnel shorter or easier? This is a great time to think about what you want visitors to do. You may be able to remove a step in the purchase/goal funnel and increase your site’s ability to convert.
Benchmarking
To truly assess the success of the redesign from an SEO and sales standpoint, make sure you record several site statistics before launch and keep up focused monitoring post-launch. You will be happy you did, because it will either document a visible success story or be a lifesaver for finding problems once the site launches.
These include:
  • Run a ranking report.
  • Check your pages indexed in Google and Bing.
  • Run a page load time test.
  • Perform a W3C code validation report.
  • Note the bounce rate, time on site, pages per visit, and goal completions. Granted, this can be reviewed in analytics after launch, but be mindful that you should be watching it.
  • Run a site spider crawl of the live site to get a good list of URLs on the current site (see the sketch after this list). You may need this for any cleanup of missed redirects.
  • Note the average time for Google to download a page and the average pages crawled per visit in Google Webmaster Tools. Also, “Fetch as Googlebot” so you have a previous copy of what Google used to see.
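For the spider crawl item above, a minimal same-domain crawler (the start URL is a placeholder, and a dedicated crawling tool will do this more thoroughly) can record the URL list for you:

```python
# Breadth-limited crawl of one domain to capture the list of URLs on the current site.
from urllib.parse import urljoin, urlparse
import requests                    # third-party: pip install requests
from bs4 import BeautifulSoup      # third-party: pip install beautifulsoup4

def crawl_site(start_url, max_pages=500):
    domain = urlparse(start_url).netloc
    to_visit, seen = [start_url], set()
    while to_visit and len(seen) < max_pages:
        url = to_visit.pop()
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=30).text
        except requests.RequestException:
            continue
        for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
            absolute = urljoin(url, link["href"]).split("#")[0]
            if urlparse(absolute).netloc == domain and absolute not in seen:
                to_visit.append(absolute)
    return sorted(seen)

for url in crawl_site("https://www.example.com/"):
    print(url)
```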
Conclusion
Taking into account all of the mistakes you or the others on the redesign team shouldn’t be making will ultimately leave you much less stressed after the site launches. Meanwhile, minding all the opportunities that a redesign presents from an SEO and usability standpoint can lead to a successful launch and a fruitful post-launch environment.

Now get out there and show them how it’s done!

4 Ways to Find Out Why Your Website Traffic Died After a Relaunch

A site redesign and relaunch can be an exciting and busy time in the life of a company’s web marketing program. It's a great time to shake a site down to its core, revamp the message, look, and feel, and, most importantly, structure the site for SEO success (assuming you read my article on how to relaunch without losing sleep).
On the other hand, if done improperly, a relaunch or site update can have disastrous consequences. For every team anxiously awaiting increased traffic and conversions from the updated site, there is another greeted with tanking traffic post-launch.
Frantically assessing the site to find out what's gone wrong and why can be the most nerve-wracking part of a post-launch failure. Below is a quick assessment to diagnose post-launch issues.

1. Check Google Analytics

Has all site traffic ceased? If so, maybe the analytics tracking code didn't make it to the new site. Check this manually.
If you are receiving organic traffic, just at a reduced rate, run the site through Analytics Checkup. It could be that a certain section of the site, such as the blog, doesn't have proper tracking code placement. Scraping all pages for tracking placement will identify issues.
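A minimal sketch of that scrape (the page list and tracking ID are placeholders; it simply checks whether the tracking ID appears in each page's source):

```python
# Flag any page whose HTML does not contain the analytics tracking ID.
import requests  # third-party: pip install requests

TRACKING_ID = "UA-XXXXXX-X"  # replace with your property ID
PAGES = [
    "https://www.example.com/",
    "https://www.example.com/blog/",
    "https://www.example.com/contact/",
]

for url in PAGES:
    html = requests.get(url, timeout=30).text
    print(f"{url}: {'OK' if TRACKING_ID in html else 'MISSING TRACKING CODE'}")
```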

2. Check robots.txt

If analytics passed inspection, the traffic loss is real and something else is wrong. The first consideration is deindexation.
Check the robots.txt file for "Disallow: /", and check the head of the page source code for a meta robots tag declaring noindex. If your site is typically crawled very frequently, this can do damage very quickly and start killing rankings. If your site doesn't enjoy frequent crawling, this culprit can take days to a week before killing your online presence.
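Both checks are easy to script. Here is a minimal sketch (the site and sample pages are placeholders) that looks for a blanket Disallow rule in robots.txt and a meta robots noindex tag on a few key pages:

```python
# Check robots.txt for a blanket "Disallow: /" and sample pages for meta robots noindex.
import requests                  # third-party: pip install requests
from bs4 import BeautifulSoup    # third-party: pip install beautifulsoup4

SITE = "https://www.example.com"
SAMPLE_PAGES = [f"{SITE}/", f"{SITE}/blog/", f"{SITE}/products/"]

robots = requests.get(f"{SITE}/robots.txt", timeout=30).text
if any(line.strip().lower() == "disallow: /" for line in robots.splitlines()):
    print("robots.txt contains a blanket 'Disallow: /' -- investigate!")

for url in SAMPLE_PAGES:
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")
    tag = soup.find("meta", attrs={"name": "robots"})
    if tag and "noindex" in tag.get("content", "").lower():
        print(f"{url}: meta robots noindex found -- investigate!")
    else:
        print(f"{url}: no noindex directive")
```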

3. A Deeper Check of Google Analytics

OK, all the factors above are fine, where to next? It's time to review Google Analytics again, but go deeper.

Page Names Changed During the Relaunch

Was the URL rewrite architected well, so that old URLs are 301 redirected to new pages? Review organic traffic by landing page, using a date range of the week prior to launch, to find the pages with the largest losses. Have the landing pages that showed as top performers last week been redirected to new URLs?
(Note: You can also analyze Google Webmaster Tools for 404 error pages. However, it can take days for this information to appear, and we don’t have that much time.)
Next, move to the Content section and the sub-category of All Pages in Google Analytics. Choose the Primary Dimension of All Pages while also choosing a date range of post-launch.
Now, knowing the text rendered in the title element of your 404 page, filter on this text and see how many pageviews on the site are rendering 404 pages. Then open a secondary dimension of Landing Page to find these 404ing pages.
When you redirected pages, did you do a simple bulk redirect of pages to the homepage or a site section, or detailed one-to-one redirects? The latter is the preferable choice, because bulk-redirected pages may have no thematic correlation with the search terms they ranked for and thus get washed out of the rankings for those terms.

Page Names Didn't Change During the Relaunch

Once again, look at organic traffic by Landing Page. Look at post-launch vs. a comparable time pre-launch.
You still see the drop, but now open a secondary dimension by Keyword. Make an assessment of the keyword losses paired to their respective landing pages.
Review the current landing page vs. the pre-launch landing page. Have the suffering keywords in question disappeared from the focus/theme of the page?
Assuming you ran a keyword ranking report before launch, run one again and see if there are noticeable ranking drops already. Again, review pre-launch and post-launch pages, as you did a moment ago, for keyword theme differences.

4. Check for Host or Server Issues

Analytics are fine, there are no de-indexation issues, all redirects (if applicable) are fine, and all keyword focus per page is fine. What gives?
Did you change hosting or servers? Communication issues between visitors, the host, and the server can lead to delays in content delivery or even time-outs. This leaves a search engine with no way to view the page.
You can review this in Google Webmaster Tools under Crawl Errors and assess DNS errors and Server Connectivity. This may take days to show, though, and time is something we don’t have.
Run the site through Pingdom’s DNS Health and the Ping/Traceroute tool. This will help identify potential content delivery and server communication issues that may exist.
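While you wait for Webmaster Tools data, a quick local check of DNS resolution and response time can rule out the basics. A minimal sketch, with a placeholder hostname:

```python
# Quick DNS-resolution and connectivity check for the site's hostname.
import socket
import time
import requests  # third-party: pip install requests

HOST = "www.example.com"

start = time.perf_counter()
addresses = {info[4][0] for info in socket.getaddrinfo(HOST, 443)}
print(f"DNS resolved {HOST} to {sorted(addresses)} in {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
response = requests.get(f"https://{HOST}/", timeout=30)
print(f"HTTP {response.status_code} in {time.perf_counter() - start:.3f}s")
```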

Finding Resolution

While there may be alternative methods for finding post-launch issues, following the tips above should help you quickly run through your site to pinpoint your traffic's cause of death.
If everything above checked out OK for you and you still don’t know why you're experiencing a grave organic search exposure loss, then you may have a less common issue that requires a deeper dive. Perhaps there is a design/code flaw or flagrant over-optimization.


2013 Search Engine Ranking Factors

Correlations

To compute the correlations, we followed the same process as in 2011. We started with a large set of keywords from Google AdWords (14,000+ this year) that spanned a wide range of search volumes across all topic categories. Then, we collected the top 50 organic search results from Google-US in a depersonalized way. All SERPs were collected in early June, after the Penguin 2.0 update.
For each search result, we extracted all the factors we wanted to analyze and finally computed the mean Spearman correlation across the entire data set. Except for some of the details that I will discuss below, this is the same general process that both Searchmetrics and Netmark recently used in their excellent studies. Jerry Feng and Mike O'Leary on the Data Science team at Moz worked hard to extract many of these features (thank you!):
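For readers who want to see the mechanics, here is a minimal sketch with synthetic data (not Moz's data or code) of the mean Spearman correlation calculation: correlate a factor with rank position within each keyword's SERP, then average the coefficients across keywords. The sign is flipped so that a positive value means the factor is higher at better-ranked positions, which is an assumption about how such results are usually reported.

```python
# Mean Spearman correlation across many SERPs, using synthetic data for illustration.
import numpy as np
from scipy.stats import spearmanr  # third-party: pip install scipy

rng = np.random.default_rng(0)

def mean_spearman(serps):
    """serps: list of (ranks, factor_values) pairs, one per keyword."""
    coefficients = []
    for ranks, factor in serps:
        rho, _ = spearmanr(factor, ranks)
        coefficients.append(-rho)  # flip sign: positive = higher factor at better (lower) ranks
    return float(np.mean(coefficients))

# Synthetic example: 100 keywords, top 50 results each, factor weakly related to rank.
serps = []
for _ in range(100):
    ranks = np.arange(1, 51)
    factor = -ranks + rng.normal(scale=40, size=ranks.size)
    serps.append((ranks, factor))

print(f"Mean Spearman correlation: {mean_spearman(serps):.2f}")
```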
When interpreting the correlation results, it is important to remember that correlation does not prove causation.
Rand has a nice blog post explaining the importance of this type of analysis and how to interpret these studies. As we review the results below, I will call out the places with a high correlation that may not indicate causation.

Enough of the boring methodology, I want the data!

Here's the first set, Mozscape link correlations:
Correlations: Page level
Correlations: Domain level
Page Authority is a machine learning model inside our Mozscape index that predicts ranking ability from links, and it is the highest-correlated factor in our study. As in 2011, metrics that capture the diversity of link sources (C-blocks, IPs, domains) also have high correlations. At the domain/sub-domain level, sub-domain correlations are larger than domain correlations.
In the survey, SEOs also thought links were very important:
Survey: Links

Anchor text

Over the past two years, we've seen Google crack down on over-optimized anchor text. Despite this, anchor text correlations for both partial and exact match were also quite large in our data set:
Interestingly, the surveyed SEOs thought that an organic anchor text distribution (a good mix of branded and non-branded) is more important than the number of links:
The anchor text correlations are one of the most significant differences between our results and the Searchmetrics study. We aren't sure exactly why this is the case, but suspect it is because we included navigational queries while Searchmetrics removed them from its data. Many navigational queries are branded, and will organically have a lot of anchor text matching branded search terms, so this may account for the difference.

On-page

Are keywords still important on-page?
We measured the relationship between the keyword and the document both with the TF-IDF score and the language model score and found that the title tag, the body of the HTML, the meta description and the H1 tags all had relatively high correlation:
Correlations: On-page
See my blog post on relevance vs. ranking for a deep dive into these numbers (but note that this earlier post uses an older version of the data, so the correlation numbers are slightly different).
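As a toy illustration of the TF-IDF side of that measurement (this is not Moz's scoring pipeline; the documents and query are synthetic), here is how a keyword/document relevance score can be computed:

```python
# Score keyword/document relevance with TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer  # pip install scikit-learn
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "best running shoes for trail running and road running",
    "a history of the marathon and its most famous runners",
    "how to choose hiking boots for rough terrain",
]
query = "running shoes"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

for doc, score in zip(documents, cosine_similarity(query_vector, doc_matrix)[0]):
    print(f"{score:.3f}  {doc}")
```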
SEOs also agreed that the keyword in the title and on the page were important factors:
Survey: On-page
We also computed some additional on-page correlations to check whether structured markup (schema.org or Google+ author/publisher) had any relationship to rankings. All of these correlations are close to zero, so we conclude that they are not used as ranking signals (yet!).

Exact/partial match domain

The ranking ability of exact and partial match domains (EMD/PMD) has been heavily debated by SEOs recently, and it appears Google is still adjusting their ranking ability (e.g. this recent post by Dr. Pete). In our data collected in early June (before the June 25 update), we found EMD correlations to be relatively high at 0.17 (0.20 if the EMD is also a dot-com), just about on par with the value from our 2011 study:
This was surprising, given the MozCast data that shows EMD percentage is decreasing, so we decided to dig in. Indeed, we do see that the EMD percent has decreased over the last year or so (blue line):
However, we see a see-saw pattern in the EMD correlations (red line) where they decreased last fall, then rose back again in the last few months. We attribute the decrease last fall to Google's EMD update (as announced by Matt Cutts). The increase in correlations between March and June says that the EMDs that are still present are ranking higher overall in the SERPs, even though they are less prevalent. Could this be Google removing lower quality EMDs?
Netmark recently calculated a correlation of 0.43 for EMD, and it was the highest overall correlation in their data set. This is a major difference from our value of 0.17. However, they used the rank-biserial correlation instead of the Spearman correlation for EMD, arguing that it is more appropriate to use for binary values (if they use the Spearman correlation they get 0.15 for the EMD correlation). They are right, the rank-biserial correlation is preferred over Spearman in this case. However, since the rank-biserial is just the Pearson correlation between the variables, we feel it's a bit of an apples-to-oranges comparison to present both Spearman and rank-biserial side by side. Instead, we use Spearman for all factors.
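To make the distinction concrete, here is a minimal sketch with synthetic data (not either study's data set) that computes both statistics for a binary EMD flag against rank position; as noted above, the rank-biserial is computed here as the Pearson correlation with the binary flag:

```python
# Spearman vs. rank-biserial (Pearson with a binary flag) for a synthetic EMD signal.
import numpy as np
from scipy.stats import spearmanr, pearsonr  # third-party: pip install scipy

rng = np.random.default_rng(42)

ranks = np.arange(1, 51)                                   # one SERP of 50 results
probability = np.clip(0.25 - 0.004 * ranks, 0.02, None)    # EMDs slightly likelier near the top
emd_flag = rng.binomial(1, probability)

rho, _ = spearmanr(emd_flag, ranks)
r_rb, _ = pearsonr(emd_flag, ranks)

# Signs flipped so positive = EMDs tend to hold better (lower-numbered) ranks.
print(f"Spearman: {-rho:.2f}, rank-biserial: {-r_rb:.2f}")
```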

Social

As in 2011, social signals were some of our highest correlated factors, with Google+ edging out Facebook and Twitter:

SEOs, on the other hand, do not think that social signals are very important in the overall algorithm:
This is one of those places where the correlation may be explainable by other factors such as links, and there may not be direct causation.
Back in 2011, after we released our initial social results, I showed how Facebook correlations could be explained mostly by links. We expect Google to crawl their own Google+ content, and links on Google+ are followed so they pass link juice. Google also crawls and indexes the public pages on Facebook and Twitter.

Takeaways and the future of search

According to our survey respondents, here is how Google's overall algorithm breaks down:
We see:
  1. Links are still believed to be the most important part of the algorithm (approximately 40%).
  2. Keyword usage on the page is still fundamental, and other than links is thought to be the most important type of factor.
  3. SEOs do not think social factors are important in the 2013 algorithm (only 7%), in contrast to the high correlations.
Looking into the future, SEOs see a shift away from traditional ranking factors (anchor text, exact match domains, etc.) to deeper analysis of a site's perceived value to users, authorship, structured data, and social signals:

History of World Wide Web and Web Evolution

Introduction
The World Wide Web (WWW) is the system of interlinked hypertext documents containing text, images, audio, video, animation, and more. Users can view and navigate through these documents using hyperlinks or navigation elements, which reference another document or a section of the same document. In a broader sense, "The World Wide Web is the universe of network-accessible information, an embodiment of human knowledge."
History of World Wide Web
The WWW was first proposed in 1990 by Tim Berners-Lee and Robert Cailliau while working at CERN, the European Organization for Nuclear Research. Each came up with an individual proposal for a hypertext system, and later they united and offered a joint proposal; the term "World Wide Web" was first introduced in that joint proposal. The history of every invention has a lot of pre-history, and the World Wide Web likewise rests on the gradual development of hypertext systems and internet protocols that made it possible. That development started in 1945 with the Memex, a device based on microfilm for storing huge numbers of documents and facilitating their organization. Later, in 1968, "hypertext" was introduced, which made linking and organizing documents fairly easy. In 1972 DARPA (the Defense Advanced Research Projects Agency) started a project to connect research centers and facilitate data exchange, which was later adopted for military information exchange. In 1979 SGML (Standard Generalized Markup Language) was invented to enable document sharing for large government projects by separating content from presentation, thereby enabling the same document to be rendered in different ways. In 1989 Tim Berners-Lee came out with a networked hypertext system from the CERN laboratory. In 1990 the joint proposal for a hypertext system was presented, and the term "World Wide Web" was first introduced. In 1992 the first portable browser was released by CERN, which picked up industry interest in internet development. Today the web has become so popular and so woven into our lives that it is almost impossible to imagine the world without it.
Web Evolution - What and How?
Each technology has certain distinguishing characteristics and features. Similarly, the web has features such as data, services, mashups, APIs, social platforms, and more. These features are continuously and progressively evolving in distinct stages, with qualitative improvements over what exists. Web evolution has been categorized and hyped with fancy marketing terms like "Web 1.0", "Web 2.0", "Social Web", "Web 3.0", "Pragmatic Semantic Web", "Pragmatic Web" and many more.
Yihong Ding, a PhD candidate at Brigham Young University, explained the development of the web in his article on the evolution of the web by analogy with human growth. Yihong Ding stated: "The relationship between web pages and their webmasters is similar to the relationship between children and their parents. As well as parents raise their children, webmasters maintain and update their web pages. Human children have their normal stages of development, such as the newborn stage, pre-school stage, elementary-school stage, teenage stage, and so on. Analogically, web has its generations, such as Web 1.0, Web 2.0, and so on."
Along with technological advancement, web design has also changed over time. The initial design was a simple read-only hypertext system that allowed users to read the information; the user was just a viewer of what was presented on the web. Gradually, images and tables were added with the evolution of HTML and web browsers, which allowed better designs. The development of photo-editing tools, web authoring tools, and content management tools enabled designers to begin creating visually appealing website layouts. In the next phase of development, web design changed along with usability, and the focus shifted to the users rather than the content of the website. User interaction and a social touch were applied to web design. Now the user is not just a viewer: users can drive the web with feedback, information sharing, rating, and personalization. Gradually we got a mature blend of function, form, content, and interaction, called the Read/Write Web. Continuing this evolution, meaning is added to the information presented on the web so that online virtual representatives of humans can read and interpret it. This kind of web, where user agents imitating human behavior can read and understand the information using artificial intelligence, is called the semantic web.
Web 1.0 (Read-Only Web)
The World Wide Web evolved in stages. The first stage was the basic "read-only" hypertext system, retroactively termed Web 1.0 after the hype around Web 2.0. In fact, in the originally proposed web model, Tim Berners-Lee envisioned the web as a read/write model with the HTTP PUT and HTTP DELETE methods. These methods went almost unused, largely for security reasons.
Some of the Characteristics of Web 1.0
1. In Web 1.0 the webmaster is constantly engaged with the responsibility of managing the content and keeping users updated. The majority of hyperlinks to the content are manually assigned by the webmaster.
2. Web 1.0 does not support mass publishing. The content on the website is published by the webmaster and thereby does not leverage the collective intelligence of users.
3. Web 1.0 uses basic Hypertext Markup Language (HTML) for publishing content on the internet.
4. Web 1.0 pages do not support machine-readable content. Only human readers can understand the content.
5. Web 1.0 provides contact information (email, phone number, fax, or address) for communication. Users have to fall back on the offline world for further communication using this contact information.
6. In Web 1.0, web pages are designed to react instinctively based on programmed conditions: a specific result or response is generated when the programmed condition is satisfied. The Web 1.0 model does not understand remote requests and cannot prepare responses for potential requests in advance.
To illustrate these characteristics of Web 1.0, Yihong Ding, in his article on the evolution of the World Wide Web, analogically correlated the world of Web 1.0 with the world of a newborn baby.
Newborn Baby : I have parents
Web-1.0 Page : Webmasters
Newborn Baby : Watch me, but I won't explain
Web-1.0 Page : Humans understand, machines don't
Newborn Baby : Talk to my parents if you want to discuss about me
Web-1.0 Page : Contact information (email, phone number, fax, address, ...)
Newborn Baby : My parents decide who my friends are. Actually, I don't care
Web-1.0 Page : Manually specified web links
Newborn Baby : Hug me, I smile; hit me, I cry (conditional reflex)
Web-1.0 Page : Reactive functions on web pages
Source: analogy from the article by Yihong Ding, http://www.deg.byu.edu/ding/WebEvolution/evolution-review.html#w1:1 ("The Web 1.0 pages are only babies.")
Web 2.0 (Read/Write Web)
"Web 2.0 is the understanding that the network is the platform, and on the network as platform the rules for business are different. The cardinal rule is this: users add value. Figuring out how to build databases that get better the more people use them is the secret of Web 2.0.
Web 2.0 is the business revolution in the computer industry caused by the move to the internet as platform, and an attempt to understand the rules for success on that new platform."[4]
In Web 2.0 the distinction between consumer (user) and producer (webmaster) is dissolving. Web 2.0 is more about communication and user interaction; Web 2.0 is all about participation. "Content is king," an often-cited quote from the early Web 1.0 days, has turned into "the user is king" in Web 2.0. In Web 2.0, users communicate through blogging, wikis, and social networking websites. Everything on the web is tagged to facilitate easy and quick navigation. Web 2.0 is also about combining it all on one single page by means of tagging and AJAX, with better usability via lots of white space and a cleaner layout. The availability of APIs makes it possible for programmers to mash up data feeds and databases to cross-reference information from multiple sources on one page. In contrast with Web 1.0, Web 2.0 has the collective intelligence of millions of users.
Web 2.0 is an improved version of the World Wide Web, with changing roles and an evolving business model in which users learned to communicate with other users instead of just with the publisher of the content.
Some of the Characteristics of Web 2.0
1. Web 2.0 is the second version of the web, providing RIAs (Rich Internet Applications) by bringing desktop experiences such as drag-and-drop to the web page in the browser.
2. SOA (Service-Oriented Architecture) is a key piece of Web 2.0. Buzzwords around SOA are feeds, RSS, web services, and mashups, which define how a Web 2.0 application exposes functionality so that other applications can leverage and integrate that functionality, providing a much richer set of applications.
3. Web 2.0 is the social web. Web 2.0 applications tend to interact much more with the end user. End users are not only users of the application but also participants, whether by tagging content, contributing to a wiki, or podcasting and blogging. Due to the social nature of these applications, the end user is an integral part of the data, providing feedback and allowing the application to improve as more people use it.
4. The Web 2.0 philosophy and strategy is that "the web is open." Content is available to be moved and changed by any user. Website content is not controlled by the people who made the website but by the users who are using it.
5. In Web 2.0, data is the driving force. Users are spending much more time online and have started generating content in their spare time. Web 2.0 requires certain key technologies in the development of web pages; one of the most important is AJAX, which supports the development of rich user experiences.
6. Web 2.0 websites typically include some of the following key technologies.
- RSS (Really Simple Syndication), which allows users to syndicate and aggregate data and set up notifications using feeds (see the sketch after this list).
- Mashups, which make it possible to merge content from different sources, allowing new forms of reuse of information via public interfaces or APIs.
- Wikis and forums to support user-generated content.
- Tagging, which allows users to specify and attach human-readable keywords to web resources.
- AJAX (Asynchronous JavaScript and XML), a web development technique allowing interactive data to be exchanged behind the scenes without reloading the web page.
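As a concrete illustration of the syndication item above, here is a minimal sketch (the feed URL is a placeholder) that aggregates the items of an RSS 2.0 feed using only Python's standard library:

```python
# Fetch an RSS 2.0 feed and print each item's title and link.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://www.example.com/feed.xml"

with urllib.request.urlopen(FEED_URL) as response:
    root = ET.fromstring(response.read())

# RSS 2.0 layout: <rss><channel><item><title>, <link>, ...
for item in root.findall("./channel/item"):
    title = item.findtext("title", default="(no title)")
    link = item.findtext("link", default="")
    print(f"{title} - {link}")
```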
To illustrate these characteristics of Web 2.0, Yihong Ding, in his article on the evolution of the World Wide Web, analogically correlated the world of Web 2.0 with the world of a pre-school kid.
Pre-School Kid : I have parents
Web-2.0 Page : Webmasters (blog owners)
Pre-School Kid : Parents teach me knowledge (though often not well organized)
Web-2.0 Page : Tagging
Pre-School Kid : I understand but maybe imprecise and incorrect
Web-2.0 Page : Folksonomy
Pre-School Kid : I can deliver and distribute messages, especially for my parents
Web-2.0 Page : Blogging technology
Pre-School Kid : Who my friends are is primarily determined by my parents' social activities and their teaching
Web-2.0 Page : Social network
Pre-School Kid : Multiple of us can be coordinated to do something beyond individual's capabilities
Web-2.0 Page : Web widget, mashup
Pre-School Kid : I can do suggestion based on my communication with friends
Web-2.0 Page : Collective intelligence
The following table highlights the differences between Web 1.0 and Web 2.0:
Web 1.0 is about : Reading
Web 2.0 is about : Reading /Writing
Web 1.0 is about : Publishing
Web 2.0 is about : Feedbacks, Reviews, Personalization
Web 1.0 is about : Linking Content using Hyperlinks
Web 2.0 is about : mashup
Web 1.0 is about : Companies
Web 2.0 is about : Community
Web 1.0 is about : Client-Server
Web 2.0 is about : Peer to Peer
Web 1.0 is about : HTML
Web 2.0 is about : XML
Web 1.0 is about : Home Pages
Web 2.0 is about : Blogs and Wikis
Web 1.0 is about : Portals
Web 2.0 is about : RSS
Web 1.0 is about : Taxonomy
Web 2.0 is about : Tags
Web 1.0 is about : Owning
Web 2.0 is about : Sharing
Web 1.0 is about : Web form
Web 2.0 is about : Web Application
Web 1.0 is about : Hardware Cost
Web 2.0 is about : Bandwidth Cost
Web 3.0 (Semantic Web)
The web is no longer just the linking and tagging of information and resources. With the advent of the semantic web concept, special information is attached to resources so that machines can read and understand it just as humans do.
Tim Berners-Lee envisioned:
"I have a dream for the Web [in which computers] become capable of analyzing all the data on the Web - the content, links, and transactions between people and computers. A 'Semantic Web', which should make this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The 'intelligent agents' people have touted for ages will finally materialize."
The Semantic Web derives from his vision of the web as a universal medium for the exchange of data, information, and knowledge. Web 3.0, or the Semantic Web, is an executable phase of web development in which dynamic applications provide interactive services and facilitate machine-to-machine interaction. Tim Berners-Lee further stated:
"People keep asking what Web 3.0 is. I think maybe when you've got an overlay of scalable vector graphics - everything rippling and folding and looking misty - on Web 2.0 and access to a semantic Web integrated across a huge space of data, you'll have access to an unbelievable data resource." The semantic web is an extension of the World Wide Web in which web content is expressed in a machine-readable language, not just natural language, so that user agents can read, process, and understand the content using artificial intelligence imitating human behavior. In other words, the Semantic Web is an extension of the web where the expressed content can be processed independently by intelligent software agents.
One can program several agents within the context of the vocabulary of a vertical domain.
For example:
A "Travel Agent", which keeps searching for the cheapest air tickets based on your criteria and notifies you when it finds the perfect one.
A "Personal Shopper Agent", which keeps looking for a specific product on eBay and gets it for you once it finds one that matches all of your criteria.
Similarly, we can have a "Real Estate Agent", a "Personal Financial Advisor Agent", and many more.
All the user does is create a personal agent that talks to publicly exposed web services, thereby taking care of lots of repetitive tasks.
Precisely: Web 3.0 = every human + every device + every piece of information.
Characteristics of Semantic Web
1. Unlike database-driven websites, in the Semantic Web the database is not centralized.
2. The Semantic Web is an open system in which the schema is not fixed, as it may take in any arbitrary source of data.
3. The Semantic Web requires meta-description languages such as the Web Ontology Language (OWL) and the Resource Description Framework (RDF); a minimal annotation sketch follows this list. Annotation requires a lot of time and effort.
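As a small illustration of RDF annotation (the resource and property choices are made up; rdflib is just one convenient library for this), here is a minimal sketch that attaches machine-readable statements to a resource and serializes them:

```python
# Build a tiny RDF graph describing a resource and serialize it as Turtle.
from rdflib import Graph, Literal, URIRef   # third-party: pip install rdflib
from rdflib.namespace import FOAF, RDF

graph = Graph()
person = URIRef("http://example.org/people/alice")

# Machine-readable statements: type, name, and homepage of the resource.
graph.add((person, RDF.type, FOAF.Person))
graph.add((person, FOAF.name, Literal("Alice")))
graph.add((person, FOAF.homepage, URIRef("http://example.org/alice")))

print(graph.serialize(format="turtle"))
```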
Web n.0 - a Glimpse of the Future
Let me add one more element to the previous formula:
Web 3.0 = every human + every device + every piece of information = everything, in the context of current technological advancement.
Web 3.0 is still evolving, and it is going to encompass everything. One cannot envision anything beyond Web 3.0 given the current state of technological advancement.
Going beyond all current technological capabilities, Raymond Kurzweil, the inventor of omni-font OCR (optical character recognition), envisioned Web 4.0 as a Web OS with intelligent user agents acting in parallel to the human brain. A figure from Nova Spivack and Radar Networks illustrates the evolution of the web along with technological advancement and the semantics of social connections.
Conclusion
The evolution of the web has gone through the phases described in this article, and it has introduced numerous technologies and concepts in many areas: software, communication, hardware, marketing, advertising, content sharing, publicity, finance, and more.
In a way, the World Wide Web has changed the way people look at things. I believe this evolution is never-ending and moving toward excellence.