Web Analytics Overview
Southern Polytechnic State University, USA

Manuscript only – published in Encyclopedia of Information Science and Technology, Third Edition, IGI
Web analytics is the technology and method for the collection, measurement, analysis, and reporting of website and web application usage data (Burby & Brown, 2007). Web analytics has been
growing ever since the development of the World Wide Web. It has grown from a simple function of
HTTP (Hypertext Transfer Protocol) traffic logging to a more comprehensive suite of usage data tracking,
analysis, and reporting. The web analytics industry and market are also booming with a plethora of tools,
platforms, jobs, and businesses. The market was projected to reach $1 billion in 2014 with an annual growth rate of more than 15% (Lovett, 2009).
Web analytics technologies are usually categorized into on-site and off-site web analytics. On-site
web analytics refers to data collection on the current site (Kaushik, 2009). It is used to effectively measure many aspects of direct user-website interactions, including the number of visits, time on site, click
path, etc. Off-site analytics is usually offered by third party companies such as Twitalyzer
(http://twitalyzer.com) or Sweetspot (http://www.sweetspotintelligence.com). It includes data from other
sources such as surveys, market reports, competitor comparisons, public information, etc. This chapter
provides an overview of on-site web analytics, with a focus on categorizing and explaining data, sources,
collection methods, metrics and analysis methods.
Log files have been used to keep track of web requests since the World Wide Web emerged and the first widely used browser, Mosaic, was released in 1993. One of the pioneers of web log analysis was
WebTrends, a Portland, Oregon based company, which conducted website analytics using data collected
from web server logs. In the same year, WebTrends created the first commercial website analytics
software. In 1995, Dr. Stephen Turner created Analog, the first free log file analysis software. In 1996,
WebSideStory offered a hit counter as a service for websites that would display a banner. Web server logs have limits in the types of data they can collect. For example, they cannot provide information about visitors' screen sizes, user interactions with page elements, mouse events such as clicking and hovering, etc. The newer technique of page tagging overcomes these limitations and has become more popular in recent years.
The fundamental basis of web analytics is the collection and analysis of website usage data. Today,
web analytics is used in many industries for different purposes, including traffic monitoring, e-commerce
optimization, marketing/advertising, web development, information architecture, website performance
improvement, web-based campaigns/programs, etc. Some of the major web analytics usages are:
1. Improving website/application design and user experience. This includes optimizing website
information architecture, navigation, content presentation/layout, and user interaction. It also
helps to identify user interest/attention areas and improve web application features. A particular
example is a heat map that highlights areas of a webpage with a higher than average click rate and helps determine if the intended link/content is in the right place.
2. Optimizing e-Commerce and improving e-CRM on customer orientation, acquisition and
retention. More and more companies analyze website usage data in order to understand customers' needs, increase traffic, and ultimately increase their revenue. Different sites can have
different goals like selling more products and attracting more users to generate more income
through advertisements. Websites want to keep visitors longer (reducing the bounce rate), to encourage users to return, and to make every visit end with the completion of a targeted action.
3. Tracking and measuring success of actions and programs such as commercial campaigns. To
bring value, web analytics must differentiate between a wide variety of traffic sources, marketing
channels, and visitor types. A common question is: “where did visitors learn that information?”
For example, parameters used in tracking direct traffic from email, social media, or mobile
devices allow correlation of traffic sources with marketing campaign cost, which helps to
evaluate return on investments.
4. Identifying problems and improving performance of web applications. A study performed by TagMan shows a significant correlation between page-load time and the likelihood that a user will convert (TagMan, 2012). Web analytics helps to address this issue. Page loading metrics such as
average page load time by browser and geographic location are used to measure performance.
Both real-time and historical performance analysis allow proactive detection, investigation, and
diagnosis of performance issues. Improvements may range from simple image optimization to
modification of the expiration date in the HTTP headers to force browsers to use cached website
content. A heat map might also help to reveal website errors, such as users clicking on buttons or images that have no links. The same techniques can be used by developers of web based applications
and games to add/modify software features.
DATA COLLECTION AND ANALYSIS
Data and Sources
The fundamental goal of web analytics is to collect and analyze web traffic and usage patterns. A
common way to study this data is to use the dimensional model (Hu & Cercone, 2004). Under this model,
there are two major types of data: facts or measurement data and dimensional data that describe facts
from different aspects and levels. Fact data are mainly about usage counts and time. The most basic measure is a page view, which is a single request for a web page. Counts of user actions such as mouse clicks can also be used as measures. Various metrics are calculated based on basic measures and dimensions. Dimensional data are much more complex. Major types of dimensions include time, content, location, user client information (such as operating system, browser type, screen size, etc.), and user or visitor information.
Both measurement data and dimensional data come from a number of sources, which can be
categorized into the following four types:
1. Direct HTTP request data
2. Application level data sent with HTTP requests
3. Network level and server generated data associated with HTTP requests
4. External data
Direct HTTP request data directly come from HTTP request messages. An HTTP request is a
message sent by a web client (browser) to a web server to request a resource (a web page or a web page
element like an image). Traditionally, web traffic measurement is directly based on web resource visits (commonly called page views). Each request is further described by a number of dimensions, such as
page, visitor, technology, etc. The format of the HTTP 1.1 request is specified in IETF RFC 2616
(Fielding, Gettys, & Mogul, 1999). A typical HTTP request message is shown in Figure 1.
Figure 1: HTTP request header sample displayed with Chrome v22
An HTTP request consists of a request command (the first line) and HTTP headers. The request command includes the required URI (uniform resource identifier) information. A URI generally includes a host's domain or IP and a directory path. If the host information is not included as part of the URI, then the "host" header has to be provided. The URI is the key information that leads to the count of page/resource views. HTTP headers are pairs of field names and values. The HTTP 1.1 specification defines a
set of headers that can be included. These headers describe request and client characteristics. Most of the
header data are dimensional data used in web analytics. Some commonly used header fields for web analytics are:
The User-Agent field holds client information such as browser type and operating system type. This
information can be used to profile client technologies.
The Referer (not "referrer") field keeps the previously visited URL that led to the current URL. This header can be used for clickstream analysis, where user visiting paths are constructed by chaining a series of requests. It can also be used for metrics like entry rate, exit rate, etc.
The Accept-Language field contains the list of natural languages that are preferred in the response. The list is typically determined by the OS default locale. This can be used to track the user's language, e.g. en, en-US, es (Spanish), zh-CN (Simplified Chinese).
The Cookie field holds application level information stored at the client side. This can hold various kinds of data that are beyond HTTP's role, such as keyboard and mouse actions.
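As a rough illustration, these header fields can be lifted out of a raw request with a few lines of Python; this is a minimal sketch (the sample request is contrived, and real collectors do considerably more validation):

```python
# Minimal sketch: extract common analytics dimensions from a raw HTTP request.
raw_request = (
    "GET /itdegrees HTTP/1.1\r\n"
    "Host: www.spsu.edu\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 6.1) Chrome/22.0\r\n"
    "Referer: http://www.spsu.edu/\r\n"
    "Accept-Language: en-US,en;q=0.8\r\n"
    "\r\n"
)

def parse_request(raw):
    """Split the request command and headers into analytics dimensions."""
    lines = raw.split("\r\n")
    method, uri, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        if not line:               # blank line ends the header section
            break
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return {
        "uri": uri,                                  # counted as a page/resource view
        "user_agent": headers.get("user-agent"),     # client technology dimension
        "referer": headers.get("referer"),           # clickstream / entry analysis
        "language": headers.get("accept-language"),  # user language dimension
    }

dims = parse_request(raw_request)
```

Each extracted value maps directly onto a dimension discussed above.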
Application level data is generated and processed by application level programs (such as server side or client side scripts). Common types include:
Session data identify a client interaction with a website consisting of one or more related requests
for definable unit of content in a defined time period (Burby & Brown, 2007). HTTP itself is
stateless and cannot provide session information. Thus, this data is managed at the application
level. Session data are usually sent as URL parameters or session cookies. They are important for calculating metrics like the number of visits, time on site, number of page views per visit, etc.
Referral data is different from the "referer" header in HTTP requests. The HTTP referer is at the page request level and is usually a URL. An application level referral represents different sources leading to the current web resource and is usually a coded value. It can be used to analyze traffic levels from expected and unexpected sources, or to gauge channel effectiveness in advertising.
User action data mainly include keyboard actions (e.g. user input of search terms) and mouse
actions (e.g. cursor coordinates and movements). They also include application specific actions such as voting, playing of video/audio, bookmarking, etc.
Client/browser side data include computer status information like display resolution and color
depth, or any other information a user chooses to make available.
Application level data is usually embedded in HTTP requests. There are three common places to hold this information. First, it can be appended to a request URL as URL parameters. Server side
programs can parse these parameters. For example, Google uses specifically constructed URLs in their
search results to redirect users to the target while capturing extra information (Figure 2). Second,
application data can be sent as the HTTP cookie header. Cookies are small text files that usually store
user profile and activity data. The type of data that can be stored is directly determined by the client
software and settings (Tappenden & Miller, 2009). Third, application data can also be included in the
HTTP request body when an HTTP “POST” method is used (common for form submission).
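As a sketch of the first approach, Python's standard urllib.parse module can recover such parameters on the server side (the URL below is made up; utm_source and utm_medium follow a common campaign-tagging convention, but any application defined names work the same way):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical landing URL carrying campaign tracking parameters.
url = "http://www.example.com/landing?utm_source=newsletter&utm_medium=email&id=42"

# parse_qs maps each parameter name to a list of values.
params = parse_qs(urlparse(url).query)

# A server side program would log these alongside the page view:
referral = {
    "source": params.get("utm_source", ["(direct)"])[0],
    "medium": params.get("utm_medium", ["(none)"])[0],
}
```

The same parsing applies whether the parameters arrive in the URL or in a POST body encoded as a query string.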
Figure 2: Google uses a transmission URL when redirecting a link to an external target
Network level data is not part of an HTTP request, but it is required for successful request transmission. The most prominent example is the IP address of a requester. The requester's IP address
and port number are required in order to return a response. This information is sent at the TCP/IP level
and is logged by a web server. Server generated data is usually used for internal reference and is recorded
in server log files. The log file commonly records file size, processing time, server IP, request events
other than HTTP request, etc. (see the next section for more details).
External data can be combined with on-site data to help interpret web usage. For example, IP addresses are usually associated with geographic regions and internet service providers. Third party
databases or services provide such mappings, e.g., MaxMind’s GeoIP and GeoLite
(http://www.maxmind.com), IPInfoDB (http://ipinfodb.com), GeoBytes (http://www.geobytes.com), and
hostip.info (http://www.hostip.info). Another example is user information that was collected and stored
during a separate process (e.g. registration). If user identity information is required in a visit, then this
profile data can be associated with usage data. Revenue and profit can be classified as external data if
they can be associated with particular webpages. Search term and advertisement keyword requests are also external data and are usually provided by third party services.
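Combining external data is often just a lookup keyed on a request field. The sketch below stands in for a real GeoIP service with a tiny in-memory prefix table; the prefixes and regions are purely illustrative, not real allocations:

```python
# Hypothetical prefix-to-region table standing in for a GeoIP database.
GEO_TABLE = {
    "130.218.": ("US", "Georgia"),
    "81.2.": ("GB", "England"),
}

def lookup_region(ip):
    """Return (country, region) for the longest matching prefix."""
    for prefix, region in sorted(
        GEO_TABLE.items(), key=lambda kv: len(kv[0]), reverse=True
    ):
        if ip.startswith(prefix):
            return region
    return ("??", "unknown")
```

A production system would query a service such as MaxMind's GeoLite instead of a static table.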
Table 1: Web Analytics Major Data and Source Summary
Data | Source
Client profile/User-Agent (browser, OS) | HTTP request ("User-Agent" header)
User action (keyboard and mouse) | Application, HTTP request
Visit or session | Application, HTTP request
Referrer (preceding webpage) | HTTP request ("referer" header)
Referral (channel identification) | Application, HTTP request
Client profile (screen size, color depth) | Application, HTTP request
Revenue or profit | External data
Collection and Tracking Methods
There are two major methods to collect usage data: web server logging and page tagging.
Web server logging is a traditional method of usage data collection. A log file is generated by a
web server to record server activities and HTTP headers in a textual format. There are various formats of
log files. The most commonly logged data in the NCSA Common Log Format (http://www.w3.org/Daemon/User/Config/Logging.html) are the client host IP, date/time, HTTP request command, response status, and response size. Figure 3 shows an example of the Common Log Format
implemented in Apache Web Server 2.2. Additional data, such as HTTP headers, process id, scripts,
request rewrite, etc., can be logged in proprietary formats or Extended Log File Format
(http://www.w3.org/TR/WD-logfile.html). Log analysis software can be used to extract and analyze log
files. Popular tools are Analog (http://www.analog.cx), Deep Log Analyzer (http://www.deep-software.com), Webalizer (http://www.webalizer.org), and AWStats (http://awstats.sourceforge.net).
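Log analysis tools take such lines apart field by field. A minimal Python sketch using a regular expression (it assumes well formed entries; the sample line includes the status and size fields):

```python
import re

# NCSA Common Log Format: host ident authuser [date] "request" status bytes
CLF = re.compile(
    r'(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

line = ('127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] '
        '"GET /apache_pb.gif HTTP/1.0" 200 2326')

entry = CLF.match(line).groupdict()
```

Once parsed, each field feeds a measure (the request, counted as a page view) or a dimension (host, time, status).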
Figure 3: Common Log Format Example in Apache Web Server 2.2

Log format configuration in Apache Web Server:
LogFormat "%h %l %u %t \"%r\" %>s %b" common
CustomLog logs/access_log common

An example log entry produced by the configuration above:
127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0"

The second and more recent method uses client side programs such as embedded scripts: a browser script included in a page tracks user activity and stores information in a cookie. The information is sent to a
processing server (not necessarily the same server that hosts the website) using web beacons or web
services. This method is commonly used by third party service providers such as Google Analytics and
Open Web Analytics. For many organizations, it has become a major type of web usage data collection.
Web server logging is less invasive and does not require page modifications (Ganapathi & Zhang,
2011). Compared to the web server logging method, page tagging has a number of advantages (Clifton,
2012). First, client scripts may have access to additional information about the client, such as keyboard presses and mouse clicks. This is particularly useful in today's context of rich internet
applications (RIA). RIAs support many client side user interactions that do not communicate with the
server; therefore server side logging cannot track these actions. Last but not least, data management and
reporting become simpler as many of these services are provided through a Software-as-a-Service (SaaS)
model without local maintenance. This is a preferred method for small and medium websites.
A third method of data collection, application level logging, has been on the rise recently. Application level logging is tightly coupled with an application and is a functional feature of the application itself. This is an expansion of traditional web analytics, which focuses on generic HTTP requests and user actions.
An application can be a shopping site, a web portal, a blog service, a learning management system, a
forum, or a social networking service. Each of these applications has its own unique usage data that is
collected beyond generic web requests or user actions. The usage data is processed by the application
itself or by a functional module tightly coupled with the application, but not by independent logging or
analytics services. For example, SharePoint 2010 provides framework specific analytics data, like usage
of templates and web parts (Zampatt, 2011).
Common Analyses and Reports
Meaningful and measurable metrics must be defined in order to analyze web traffic and relate it
to business goals. The most common traditional metrics used in web analytics are (Kaushik, 2009):
Visit count: page view, visit, unique visitor.
Visit duration: time on page, time on site.
Bounce rate and exit rate.
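These basic metrics reduce to simple aggregation once visits have been identified. A sketch over made-up visit records of the form (visitor id, pages viewed, seconds on site):

```python
# Each visit record: (visitor_id, pages_viewed, seconds_on_site) -- made-up data.
visits = [
    ("v1", 5, 320),
    ("v1", 1, 15),    # single-page visit: counts as a bounce
    ("v2", 3, 140),
    ("v3", 1, 8),     # another bounce
]

page_views = sum(pages for _, pages, _ in visits)
visit_count = len(visits)
unique_visitors = len({vid for vid, _, _ in visits})
bounce_rate = sum(1 for _, pages, _ in visits if pages == 1) / visit_count
avg_time_on_site = sum(secs for _, _, secs in visits) / visit_count
```

Note that unique visitors (3) differ from visits (4) because one visitor returned; this distinction underlies loyalty metrics.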
The most basic analysis is the dimensional analysis involving measures and dimensions. The
basic metrics mentioned above and other derived metrics are aggregated by dimensions at different levels.
For example, we can use dimensional analysis to answer the question: “What are the total visits by month (or day of the week) and by website section (or page)?” Dimensional analysis is the fundamental piece of
other analyses and reports. Most common types of analyses include:
Trend analysis looks at data along the time dimension and shows the chronological changes of
selected metrics. For example, data can show how the percentage of mobile client access has changed for
the past two years.
Distribution analysis is about metric value breakdown. Values are usually calculated as
percentages of the total by one or more dimensions. It is often used to analyze visitor and client profiles.
For example, the percentages of browser types for the past month give information about client diversity.
Other commonly used dimensions in this type of analysis are traffic source (e.g. referral source analysis
reveals the campaign effectiveness), location, technical data that includes information about browser, OS,
device, screen resolution and color depth, client technology support, etc.
User activity or behavior analysis analyzes how users interact with websites. Typical examples
are engagement analysis, clickstream analysis, and in-page analysis.
Engagement analysis is one of the most frequently used analyses in the industry. It measures the depth of visitor interaction with a website by answering questions such as:
How many pages were visited per session?
What is the duration of a visit?
How often do new visitors become returning visitors?
How often do visitors return to the site (loyalty)?
The goal of visitor engagement analysis is to find out why the many operations performed on a website did not end in a conversion. There have been several attempts to create engagement calculators that distinguish between different kinds of user visits. For example, one user came from a Google search, visited two pages in five minutes, and downloaded a needed document; another user came from the main site, visited twenty pages in 40 minutes, and downloaded five documents (Peterson & Carrabis, 2008).
Clickstream analysis, also known as click path analysis, analyzes the navigation path a visitor takes through a website. A clickstream is a list of all the pages viewed by a visitor, presented in viewing order, also defined as the "succession of mouse clicks" that each visitor makes (Opentracker, 2011).
Clickstream analysis helps to improve the navigation and information architecture of websites.
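At its core, clickstream analysis counts ordered page-to-page transitions. A minimal sketch over hypothetical viewing orders:

```python
from collections import Counter

# Hypothetical clickstreams: each list is one visitor's pages in viewing order.
streams = [
    ["/home", "/products", "/cart"],
    ["/home", "/about"],
    ["/home", "/products", "/products/widget"],
]

# Count page-to-page transitions to find the most common navigation steps.
transitions = Counter(
    (a, b) for stream in streams for a, b in zip(stream, stream[1:])
)

most_common = transitions.most_common(1)[0]
```

Transition counts like these are the raw material for path diagrams and navigation redesign decisions.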
Visitor interest/attention analysis (in-page analysis) analyzes users' attention on a web page. It uses client scripts to track user mouse movements and clicks, and shows the results in a heat map. It can also
show how far down visitors scroll the page. Analysis of link popularity and areas of attention helps to
develop content placement strategies. For example, it helps determine what navigational items should be
placed on the top of the page or find the best places for advertisements.
Conversion analysis is one of the key analyses in e-commerce and other sectors. The conversion rate is calculated by dividing the number of completed targeted actions (e.g. purchases) by the number of unique users who visited the site. All web analytics providers strive to improve conversion tracking. For
example, Google Analytics provides Multi-Channel Funnels conversion reports that show what
campaigns, sources, or channels have contributed to a visitor's multi-visit conversion.
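The calculation itself is straightforward division; the figures below are illustrative only:

```python
# Conversion rate: completed targeted actions divided by unique visitors.
unique_visitors = 2000    # illustrative figures
completed_actions = 50    # e.g. completed purchases

conversion_rate = completed_actions / unique_visitors  # 0.025, i.e. 2.5%
```

The hard analytical work is not the division but correctly attributing each completed action to a visitor and a channel.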
Performance analysis helps reveal website performance issues (such as loading time) or linking
errors. For example, after a website redesign, indirect traffic volume needs to be watched. If there is less
indirect traffic, then some links from other sites and/or bookmarks were potentially broken during the redesign.

PRIVACY AND ACCURACY
Privacy and data accuracy are two major issues and concerns of web analytics. In most cases,
these two issues are related. Many privacy settings affect data tracking and collection accuracy. Concerns
about personal privacy have been rising since web analytics became commonly adopted. The use of
cookies is a major source of accuracy and privacy concerns. Cookies may contain private information that users do not want to share. For example, in a web beacon tracking method, cookies are used to track
customer behavior across different websites. A web beacon is a piece of third party tracking code
embedded in a webpage. The same provider collects data, reads cookies, and tracks user behavior across
several domains and websites. As soon as the first web beacon is displayed on a system, a unique number
is generated and saved in a cookie file on the user's system. When the user visits another website with
web beacon from the same provider, the provider reads the cookie and aggregates user's data and can
customize what advertisement to be displayed for this user.
If cookies are blocked at the client side, then part of the information is missing, which will affect the
accuracy of web traffic and usage. There are several ways that users can manipulate client application
settings to protect their privacy. All major current browsers provide an easy way to delete cookies and
prevent third party cookies, first party cookies, or scripting altogether. Users can choose to use the private
browsing mode in all three major browsers (Incognito in Google Chrome, InPrivate in Internet Explorer,
and Private Browsing in Firefox). To standardize privacy and tracking controls, W3C recommended the
use of the DNT (Do Not Track) HTTP header (http://www.w3.org/Submission/web-tracking-protection/). A website should honor the setting and should not track the user when the DNT option is explicitly set to true. However, this does
not force websites to comply, and a service provider may decide not to honor users' choice. For example, when Microsoft decided to set the DNT setting to true by default in IE 10, Yahoo announced that it would ignore IE 10's DNT settings (Schwartz, 2012).
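Honoring DNT on the server side amounts to one check on the request headers before any tracking code runs. A framework neutral sketch (representing headers as a plain dict is an assumption):

```python
def should_track(headers):
    """Return False when the client has explicitly opted out via DNT: 1."""
    return headers.get("DNT") != "1"
```

A tracking endpoint would consult this before logging a page view; absence of the header is treated as no expressed preference.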
Another issue is the identification of sessions and users. A visit/session can consist of multiple user actions and requests. However, HTTP is a stateless protocol, which makes each request and response independent of prior or later requests. This poses difficulty when we want to
correctly identify behavioral patterns. Sessionization is an attempt to group requests from each user over a
period of one visit. The configuration and definition of sessions will affect the accuracy of metrics like
number of visits. The only way to receive accurate visit statistics is to generate a new session when a user logs in and to terminate the session after the user logs out or stays idle for a period of time.
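Sessionization with an idle timeout can be sketched in a few lines: requests from one user are grouped into a visit as long as the gap between consecutive timestamps stays under a threshold (30 minutes here, a common but arbitrary choice):

```python
SESSION_TIMEOUT = 30 * 60  # seconds of idle time that ends a visit

def sessionize(timestamps, timeout=SESSION_TIMEOUT):
    """Group one user's request timestamps (in seconds) into visits."""
    sessions = []
    for t in sorted(timestamps):
        if sessions and t - sessions[-1][-1] <= timeout:
            sessions[-1].append(t)   # within the idle window: same visit
        else:
            sessions.append([t])     # gap too large: a new visit begins
    return sessions

# Requests at 0 s, 10 min and 50 min: the 40-minute gap splits the visit.
visits = sessionize([0, 600, 3000])
```

Changing the timeout directly changes the reported number of visits, which is why session configuration affects metric accuracy.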
A common approach to identifying visits is to use IP addresses, but this is not always reliable. If visitors come from the same organization and their network uses Port Address Translation, some visitors will be identified by the same public IP address. Conversely, if a user changes the IP address during
the session, a visit can be incorrectly counted as multiple visits. Cookies are also used to identify visits,
but as mentioned before, cookies can be deleted and blocked for privacy protection. That impacts the data
accuracy as well.
Web browser and proxy caching influence the accuracy of log file analysis. Caching is important
for user experience and effective use of resources. However, it changes host and visit tracking data. If a
proxy is used, then content might be cached and reused for subsequent user visits.
Other issues include incorrect configuration and setup of tracking codes, especially in page tagging methods; common factors are missing tags and improper placement of tags on websites. However, a browser delays rendering any content that follows a script tag until that script has
been downloaded, parsed and executed. This delay skews the user engagement statistics.
FUTURE TRENDS

Web 2.0 has brought many changes to the web analytics industry. AJAX changed how users interact with websites, and future analytics will focus more on event data rather than just on HTTP requests. This has made the page tagging method the dominant collection method for the future. Mobile
web has also become a major trend in the last two years (Meeker, 2012). However, there are several challenges: tracking scripts and cookies are not fully supported by many mobile browsers, and the collected statistics are not very reliable. Therefore, there is a need for a more robust method of mobile web data collection and analysis.
Higher application level analytics will not only collect generic HTTP request data or user action
data, but also domain and application specific data. Web analytics was traditionally used for e-commerce sites, but has recently expanded into other areas such as social media and education. The collection and
analysis of such application level data is usually labeled using application names, like learning analytics,
video analytics, search analytics, social media analytics, etc. For example, Google provides search and
advertising analytics; YouTube provides video analytics; LinkedIn and Facebook provide social analytics;
Blackboard provides learning analytics. Most of these application specific analytics combine on-site web
usage data and external data. This trend will continue with the introduction of more application specific analytics.

Diversity of client systems and the expansion of data sources have led some providers to replace the term
web analytics with digital analytics. It's no longer just about measuring website usage but instead
understanding the entire digital footprint of users (Stanhope, 2012). The web usage has become part of a
larger digital usage (e.g. mobile devices, smart TV, etc.). Recognizing this change, the Web Analytics Association (http://www.digitalanalyticsassociation.org/?page=aboutus) renamed itself the Digital Analytics Association in March 2012 to account for the analyst's changing role of combining data from
multiple sources and channels.
CONCLUSION

Web analytics is a field of web traffic data collection and analysis. It has gained wide adoption and has become one of the important tools for web application management and business analysis. With
the recent Web 2.0 and cloud service advancements, it has quickly evolved from simple system level data
logging to more comprehensive information collection and analysis. With the continuing expansion of
data sources, Web/digital analytics will play an even more important role in the future.
REFERENCES

Burby, J., & Brown, A. (2007, August 16). Web Analytics Definitions - Version 4.0. Retrieved from
Clifton, B. (2012). Advanced Web Metrics with Google Analytics (3rd ed.). Indianapolis, IN: John Wiley
Fielding, R., Gettys, J., & Mogul, J. (1999). Hypertext Transfer Protocol -- HTTP/1.1. Retrieved from
Ganapathi, A., & Zhang, S. (2011). Web Analytics and the Art of Data Summarization. In Managing
Large-scale Systems via the Analysis of System Logs and the Application of Machine Learning
Techniques (pp. 6:1–6:9). New York, NY, USA: ACM.
Hu, X., & Cercone, N. (2004). A Data Warehouse/Online Analytic Processing Framework for Web Usage
Mining and Business Intelligence Reporting. International Journal of Intelligent Systems, 19(7),
Kaushik, A. (2009). Web Analytics 2.0: The Art of Online Accountability and Science of Customer
Centricity (1st ed.). Indianapolis, IN: John Wiley & Sons.
Lovett, J. (2009). US Web Analytics Forecast, 2008 To 2014. Cambridge, MA: Forrester Research.
Meeker, M. (2012). Internet Trends. Retrieved from http://www.businessinsider.com/mary-meeker-2012-
Opentracker. (2011). Glossary. Retrieved December 15, 2012, from http://www.opentracker.net/glossary
Peterson, E., & Carrabis, J. (2008). Measuring the Immeasurable: Visitors Engagement. Web Analytics
Demystified. Retrieved from
Rapoza, J. (2010, December 2). Web Analytics: A New View. InformationWeek. Retrieved from
Schwartz, M. J. (2012, October 30). Yahoo To Ignore IE10 DNT Settings. InformationWeek. Retrieved
Stanhope, J. (2012, January 1). The new face of Web analytics. KMWorld Magazine, 21(1). Retrieved
TagMan. (2012, March 14). Just One Second Delay In Page-Load Can Cause 7% Loss In Customer
Conversions. Retrieved from http://blog.tagman.com/2012/03/just-one-second-delay-in-page-
Tappenden, A. F., & Miller, J. (2009). Cookies: A Deployment Study and the Testing Implications. ACM
Trans. Web, 3(3), 9:1–9:49.
Zampatt, G. (2011, September). SharePoint Best Practices Creating and Configuring Service Applications
With (and Without) PowerShell, Part2. The SolidQ Journal, 13. Retrieved from
ADDITIONAL READING

Overview and History
McManus, S. (2004). Count on Me: an Introduction to Web Analytics. Retrieved from
ClickTale (2010). A Brief History of Web Analytics. Retrieved from
Dems K. (2010). A Brief History of Web Analytics. Retrieved from
Ballardvale (2004). Market Trends - Web Analytics: History and Future. Retrieved from
Wikipedia (2013), Web Analytics. Retrieved from http://en.wikipedia.org/wiki/Web_analytics
Kaushik A. (2014). Occam’s Razor. Retrieved from http://www.kaushik.net/avinash/
ClickTale (2014). Web Analytics & Usability Blog. Retrieved from
GetElastic (2014). Web Analytics Blog. Retrieved from http://www.getelastic.com/category/web-
Clifton, B. (2014) Measuring Success - the blog. Retrieved from http://www.advanced-web-
Reports and Stats
Stanhope, J., Frankland, D., & Dickson, M. (2011). The Forrester Wave™: Web Analytics, Q4
2011. Forrester Research. Retrieved from
Stanhope, J., Frankland, D., & Dickson, M. (2012). Welcome To The Era Of Digital Intelligence.
KISSmetrics (2011). The 2011 Web Analytics Review. Retrieved from
Companies and Tools
TopTenReviews (2014). 2014 Web Analytics Product Comparisons. Retrieved from http://web-
Wikipedia (2014). List of web analytics software. Retrieved from
Google Analytics (2014). Retrieved from http://www.google.com/analytics
WebTrends (2014). Retrieved from http://webtrends.com
ClickTale (2014). Retrieved from http://www.clicktale.com
Open Web Analytics (2014). Retrieved from http://www.openwebanalytics.com
WASP (2014). Retrieved from http://webanalyticssolutionprofiler.com
IBM Digital Analytics (2014). Retrieved from http://www-
Digital Analytics Association (2014). Retrieved from http://www.digitalanalyticsassociation.org
Web Analytics Wednesday (2014). Retrieved from
Kaushik, A. (2007). Web analytics: an hour a day. Indianapolis, IN: Sybex.
Mashable (2014). Web Analytics. Retrieved from http://mashable.com/category/web-analytics/
Beyond Web Analytics (2014). Retrieved from http://www.beyondwebanalytics.com
KEY TERMS AND DEFINITIONS
Cookie: a small text file stored at the client side to record additional information that may be
shared by multiple requests and responses.
Digital analytics: an expansion of web analytics to include data from other sources.
Dimension: an attribute or a perspective to describe measures.
HTTP: the application level data transfer protocol for web applications.
HTTP request: a message sent from a client to a web server to request resources.
Metric: a key indicator of an objective we want to measure and track.
Web analytics: the technology and method for the collection, measurement, analysis and reporting of website and application usage data.
Web log: a text file generated by a web server to record server activity and communication data.