// To do: Add a page that talks about how synthetics and UE work together. Passive/Active, etc.

[[user-experience]]
= {user-experience}

{user-experience} provides a way to quantify and analyze the perceived performance of your web application.
Unlike testing environments, {user-experience} data reflects real-world user experiences.
Drill down further by looking at data by URL, operating system, browser, and location --
all of which can impact how your application performs on end-user machines.

Powered by the APM Real User Monitoring (RUM) agent, {user-experience} requires only a few lines of code
to begin surfacing key user experience metrics.
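
For example, if your application is bundled with npm, initialization might look like the following
sketch. The `init` call and its options come from the `@elastic/apm-rum` package; the service name,
server URL, and version shown here are placeholder values.

[source,js]
----
// Minimal sketch: initialize the RUM agent in a bundled application.
// All values below are placeholders -- substitute your own service name,
// APM Server URL, and version.
import { init as initApm } from '@elastic/apm-rum'

const apm = initApm({
  serviceName: 'my-frontend-app',      // placeholder
  serverUrl: 'http://localhost:8200',  // placeholder APM Server URL
  serviceVersion: '1.0.0'              // placeholder
})
----

The agent can also be loaded from a `<script>` tag if your site isn't bundled; the RUM agent
documentation covers that variant.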

[role="screenshot"]
image::images/user-experience-tab.png[User experience tab]

[discrete]
[[why-user-experience]]
== Why is {user-experience} important?

Search engines are placing increasing importance on user experience when organically ranking websites.
Elastic makes it easy to view your website data in the context of Google Core Web Vitals --
metrics that score three key areas of user experience: loading performance, visual stability, and interactivity.
These Core Web Vitals are set to become the main performance measurements in Google's ranking factors.
If you run a content-based site and want to appear in the “Top Stories” section of Google search results,
you must have good Core Web Vitals.

// We don't support business outcome capture yet. For now, this section should focus on CWV.
// Saving this, as it might be useful later:
// --------------------------------------------------------------------------------------------------------------
// Every website has goals -- some sites want users to buy a product, sign up for a mailing list, download an app,
// or share something on social media.
// But no matter how great your product is, a poor {user-experience} can negatively impact your goal completion rate.
// For example, in one study, 40% of users said they abandon a website if it takes more than three seconds to load.
// footnote:[Source and more info: https://neilpatel.com/blog/loading-time/[neilpatel.com]]
// In another, Amazon calculated that a page load slowdown of just one second would cut conversions by
// 7% -- costing them $1.6B in sales each year.
// footnote:[Source and more info: https://www.fastcompany.com/1825005/how-one-second-could-cost-amazon-16-billion-sales[fastcompany.com]]
// In short, a good {user-experience} keeps your users happy and improves your website's odds of success.
// --------------------------------------------------------------------------------------------------------------

[discrete]
[[how-user-experience-works]]
== How does {user-experience} work?

{user-experience} metrics are powered by the {apm-rum-ref}[APM Real User Monitoring (RUM) agent].
The RUM agent uses browser timing APIs, like https://w3c.github.io/navigation-timing/[Navigation Timing],
https://w3c.github.io/resource-timing/[Resource Timing], https://w3c.github.io/paint-timing/[Paint Timing],
and https://w3c.github.io/user-timing/[User Timing], to capture user experience
metrics every time a user hits one of your pages.
This data is stored in {es}, where it can be visualized using {kib}.
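
To get a feel for the raw data the agent works with, you can query these browser APIs yourself.
This is only an illustrative sketch of the standard Navigation Timing API -- the RUM agent collects,
aggregates, and ships this data for you.

[source,js]
----
// Sketch: reading navigation timing directly from the browser.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  // Time spent waiting for the first byte of the server response.
  console.log('Backend time (ms):', nav.responseStart - nav.requestStart);
  // Time until the DOM was parsed and is ready to be interacted with.
  console.log('DOM interactive (ms):', nav.domInteractive);
  // Total page load time, from navigation start to the load event.
  console.log('Page load (ms):', nav.loadEventEnd - nav.startTime);
}
----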

The RUM agent can be installed as a dependency of your application, or with just a few lines of JavaScript.
It only takes a few minutes to <<instrument-apps,get started>>.

[discrete]
[[user-experience-tab]]
== {user-experience} in {kib}

[discrete]
[[user-experience-page-load]]
=== Page load duration

This high-level overview is your analysis starting point and answers questions like:
How long is my server taking to respond to requests?
How much time is spent parsing and painting that content?
How many page views has my site received?

You won't be able to fix any problems by viewing these metrics alone,
but you'll get a sense of the big picture as you dive deeper into your data.

[role="screenshot"]
image::images/page-load-duration.png[User experience page load duration metrics]

[discrete]
[[user-experience-metrics]]
=== {user-experience} metrics

{user-experience} metrics help you understand the perceived performance of your website.
For example, first contentful paint is the timestamp when the browser begins rendering content.
In other words, it's around this time that a user first gets feedback that the page is loading.
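
As an illustration only -- the agent captures this metric for you -- first contentful paint can be
observed with the standard Paint Timing API:

[source,js]
----
// Sketch: reading First contentful paint from the Paint Timing API.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP (ms):', entry.startTime);
    }
  }
}).observe({ type: 'paint', buffered: true });
----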

[role="screenshot"]
image::images/user-exp-metrics.png[User experience metrics]

// This is collapsed by default
[%collapsible]
.Metric reference
====
First contentful paint::
Focuses on the initial rendering and measures the time from when the page starts loading to when
any part of the page's content is displayed on the screen.
The agent uses the https://www.w3.org/TR/paint-timing/#first-contentful-paint[Paint timing API] available
in the browser to capture the timing information.
footnote:[More information: https://developer.mozilla.org/en-US/docs/Glossary/First_contentful_paint[developer.mozilla.org]]

Total blocking time::
The sum of the blocking time (duration above 50 ms) for each long task that occurs between the
First contentful paint and the time when the transaction is completed.
Total blocking time is a great companion metric for https://web.dev/tti/[Time to interactive]
(TTI), which is a lab metric that isn't available in the field through browser APIs.
The agent captures TBT based on the long tasks that occur during the page load lifecycle.
footnote:[More information: https://web.dev/tbt/[web.dev]]

Long tasks::
A long task is any user activity or browser task that monopolizes the UI thread for an extended period
(greater than 50 milliseconds) and blocks other critical work, like frame rendering or input handling,
from being executed.
footnote:[More information: https://developer.mozilla.org/en-US/docs/Web/API/Long_Tasks_API[developer.mozilla.org]]
A sketch of how long tasks can be observed in the browser follows this metric reference.

Number of long tasks::
The number of long tasks.

Longest long task duration::
Duration of the longest long task on the page.

Total long tasks duration::
Total duration of all long tasks.
====
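
As mentioned above, long tasks and Total blocking time are derived from the browser's Long Tasks API.
The following sketch is only an approximation to show where the numbers come from -- the agent bounds
the calculation to the page load lifecycle, which this simplified version does not.

[source,js]
----
// Sketch: observing long tasks and approximating Total blocking time.
let totalBlockingTime = 0;
let longTaskCount = 0;

new PerformanceObserver((list) => {
  for (const task of list.getEntries()) {
    longTaskCount += 1;
    // Only the portion of each task above 50 ms counts as blocking time.
    totalBlockingTime += Math.max(0, task.duration - 50);
  }
  console.log('Long tasks:', longTaskCount, 'Approx. TBT (ms):', totalBlockingTime);
}).observe({ type: 'longtask' });
----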

These metrics tell an important story about how users experience your website.
But developers shouldn't have to become experts at interpreting and acting on these signals;
they should spend their time reacting to the opportunities that these metrics present.
For that reason (and many others), Elastic has embraced Google Core Web Vitals.

[discrete]
[[user-experience-core-vitals]]
==== Core Web Vitals

https://web.dev/vitals/[Core Web Vitals] is a recent initiative from Google to introduce a new set of
metrics that better categorize good and bad sites by quantifying real-world user experience.
This is done by looking at three key areas: loading performance, visual stability, and interactivity:

[role="screenshot"]
image::images/web-dev-vitals.png[Web dev vitals (image source: https://web.dev/vitals)]

Image source: https://web.dev/vitals/[web.dev/vitals]

Largest contentful paint (LCP)::
Loading performance. LCP is the timestamp when the main content of a page has likely loaded.
To users, this is the _perceived_ loading speed of your site.
To provide a good user experience, Google recommends an LCP of less than 2.5 seconds.
footnote:[Source: https://web.dev/lcp/[web.dev]]

First input delay (FID)::
Load responsiveness. FID measures the time between a user's first interaction with a page, like a click,
and when the page can respond to that interaction.
To provide a good user experience, Google recommends a FID of less than 100 milliseconds.
footnote:[Source: https://web.dev/fid/[web.dev]]

Cumulative layout shift (CLS)::
Visual stability. Is content moving around because of async resource loading or dynamic content additions?
CLS measures these frustrating, unexpected layout shifts.
To provide a good user experience, Google recommends a CLS score of less than `0.1`.
footnote:[Source: https://web.dev/cls/[web.dev]]
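
As an aside, each of these vitals can be observed in the browser with standard performance APIs.
The sketch below is a simplified illustration -- the RUM agent (and libraries like web-vitals)
handle edge cases, such as backgrounded tabs, that this version ignores.

[source,js]
----
// Largest contentful paint: the most recent entry is the current LCP candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  console.log('LCP candidate (ms):', entries[entries.length - 1].startTime);
}).observe({ type: 'largest-contentful-paint', buffered: true });

// First input delay: delay between the first interaction and its handler running.
new PerformanceObserver((list) => {
  const [firstInput] = list.getEntries();
  console.log('FID (ms):', firstInput.processingStart - firstInput.startTime);
}).observe({ type: 'first-input', buffered: true });

// Cumulative layout shift: sum of layout-shift scores not caused by user input.
let cls = 0;
new PerformanceObserver((list) => {
  for (const shift of list.getEntries()) {
    if (!shift.hadRecentInput) cls += shift.value;
  }
  console.log('CLS so far:', cls);
}).observe({ type: 'layout-shift', buffered: true });
----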

TIP: Beginning in 2021, Google will start using Core Web Vitals as part of its ranking algorithm
and will open up the opportunity for websites to rank in the "Top Stories"
position without needing to leverage https://amp.dev/[AMP].
footnote:[Source: https://webmasters.googleblog.com/2020/05/evaluating-page-experience.html[webmasters.googleblog.com]]

[discrete]
[[user-experience-distribution]]
=== Load/view distribution

Operating system, browser family, and geographic location can all have a massive impact on how visitors
experience your website.
This data can help you understand when and where your users are visiting from, and can help you
prioritize optimizations -- for example, improving performance for the most popular browsers visiting your site.

Don't forget that this data also influences search engine page rankings and placement in Top Stories
for content sites -- without requiring the use of AMP.

[role="screenshot"]
image::images/visitor-breakdown.png[User experience visitor breakdown]

[discrete]
[[user-experience-errors]]
=== Error breakdown

JavaScript errors can be detrimental to a user's experience on your website.
But variation in users' software and hardware makes it nearly impossible to test for every combination.
And as JavaScript applications continue to grow more complex,
the need for user experience monitoring and error reporting only increases.
Error monitoring removes this blind spot by surfacing JavaScript errors that are
occurring on your website in production.
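
The agent reports unhandled errors automatically. Handled errors can also be sent explicitly -- a
small sketch, where `apm` is the instance returned by `init()` in the setup example earlier and
`riskyOperation` is a hypothetical function in your application:

[source,js]
----
try {
  riskyOperation(); // hypothetical application code that may throw
} catch (err) {
  // Report the handled error so it appears alongside unhandled ones.
  apm.captureError(err);
}
----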

[role="screenshot"]
image::images/js-errors.png[User experience javascript errors]

Open error messages in APM for additional analysis tools,
like occurrence rates, transaction IDs, user data, and more.

[discrete]
[[user-experience-references]]
==== References