Resources
Transcript
Session slide deck
PDF · download
Synthetics documentation
Docs · embrace.io
Embrace Synthetic Monitoring — product overview
Product · embrace.io
Make the business case for web performance with just three visuals
Blog · embrace.io
Synthetic & RUM: Why you need both
On-demand webinar · get.embrace.io
0:00
Cliff Crocker
Alright, I'm gonna go first, but… I'm actually going to take a second to introduce Lindsey, our new Head of Product Marketing at Embrace. Lindsey's awesome, super fun to work with, and, threw all this together for us, so really appreciate it. So, I want to introduce Tammy. We decided we wanted to kind of introduce each other. I had written a bunch of stuff down because Tammy is so accomplished, and has done so much, and means a lot to so many of us in the community.
0:28
Cliff Crocker
We call her Perf Mom a lot of times. I call her Perf Sis a lot. Tammy's the author of Time Is Money, and she's written countless blog posts relevant to web performance during her time at Strangeloop, SOASTA, SpeedCurve, and now Embrace. And she's really been responsible for a lot of the greatest data that's out there in our field when it comes to tying performance
0:54
Cliff Crocker
back to user experience. She's keynoted Velocity, she's the co-chair of performance.now(), an awesome web performance conference that happens every year in Amsterdam, and she's spoken at more web performance events than any of us can count. I've had the privilege of working with Tammy across different jobs for many, many years.
1:13
Cliff Crocker
And I can say without hesitation, she is amazing. She's made my job much more enjoyable, and she's certainly made this space super welcoming and just awesome. She's in Nelson, BC. Last time I introduced her, I talked about how she was more famous for baked goods. Now I'll also say that she's known as the Britney Spears of Nelson, BC. So I'll leave that for everyone to figure out.
1:40
Cliff Crocker
I'm excited to be talking with Tammy again.
1:43
Tammy Everts
I'm still trying to remember where that Britney Spears reference comes from. I know it's based on something, but it escapes me.
1:50
Cliff Crocker
It's when you're, like, mic'd up with, like, the…
1:52
Tammy Everts
Oh, the foam… oh, that's right, yes, because, I live in a really small town, and so when people, friends have, like, searched for me online, and they see pictures of me with a mic, they think it's hilarious that that's what I do, and I, yeah, I've been told I look like Britney Spears with the mic. Anyways…
2:11
Tammy Everts
So yeah, I'm gonna introduce you, Cliff. Cliff and I have known each other for... like, 15 years, something like that, since I got involved in performance, and Cliff is one of the few people I know who's been doing this even longer than I have, going way back to your days at Keynote, Cliff, doing load testing. I've always had mixed feelings about the phrase "he's forgotten more about X than the rest of us will ever know," because that implies
2:41
Tammy Everts
forgetting things, and I don't think you forget anything, so I'm not going to use that expression. But, Cliff, it's funny you call me Perf Sis, because I call you my webperf BFF out in the world. So, yeah, we've worked together at…
2:57
Tammy Everts
SOASTA, SpeedCurve, and now Embrace, and yeah, Cliff makes coming to work every day really, really fun, so I'm really grateful for you, Cliff. And I will say, if you've seen glitches in how the slides are advancing: I might be good at some things, but I'm not great at slides, so that was just my finger going on the keyboard here. I'll try not to do that anymore. Let's get started.
3:24
Tammy Everts
So yeah, over the years, the nice thing about our Embrace-plus-SpeedCurve marriage is that we serve a lot of the same customers, so we've been really lucky to help some really great brands. These are a few of them over the years.
3:43
Tammy Everts
I like to feel like we help them, and also they help us because, you know, we learn from, kind of, real-world experience what pain points people have, and how to make tools better so that we can help folks with those pain points.
3:59
Cliff Crocker
And this is us. This is Embrace: user-focused observability. When SpeedCurve was acquired back in November, one of the most exciting things we felt around this was that we were really both aligned around user experience. Embrace on the mobile side, and much more tied into the observability space.
4:20
Cliff Crocker
And then us on the web performance side, from SpeedCurve, focused a lot on how we help front-end developers and businesses make their sites faster and attain more revenue through increased conversion rates, things like that, but really focusing on that quality of user experience. So we've got this really unique opportunity here that I personally am extremely excited about as we start to bring these two areas together, because really, they shouldn't be separate. We feel like there should be a unified view when we're thinking about user-focused observability across any screen, whether that's a mobile device
4:56
Cliff Crocker
and a mobile app, or that's a, you know, a web app, a desktop, mobile, whatever it might be. These are users. These are people that you're building sites to serve. These are people that you're trying to improve the experience for, and we get the opportunity to do both. So, pretty great.
5:13
Tammy Everts
Yeah, just to add on to that: I think about myself as a user, fluctuating between using a website on my desktop, using a website on my phone, then going to the mobile app on my phone, all for the same brand, and I'm not making excuses for the brand if something gets bogged down at some point. I expect a consistent experience across everything, so it kind of feels like a no-brainer, but
5:37
Tammy Everts
you know, it's interesting how those areas have become really siloed over the years, so we're really excited about bringing them together. So today, we're gonna talk about a few things: synthetic monitoring, of course; Core Web Vitals, for folks who are new to Core Web Vitals; some use cases and best practices that we're going to walk through; and then a little bit of conversation about what's around the corner, what's next for synthetic monitoring. And there's going to be some time for Q&A at the end. I want to take a quick moment just to acknowledge that there's a variety of people joining this session, or who've signed up to get the recording later on, and it's really exciting to see a lot of familiar names on the sign-up sheet, but also new folks. Our goal is that if you're really experienced and you've spent a lot of time in the performance space, there are still going to be some fun Easter eggs in here for you, maybe some things you didn't know, and some cool thinking to take with you about what we can all work together on next. And if you're new to performance, we're going to do our very best to get you up to speed, no pun intended.
7:00
Tammy Everts
So, just a… let's do a quick walkthrough, and I'm gonna just kind of blast through some of the cool things that synthetic monitoring lets us know about our sites.
7:16
Tammy Everts
So, really, really quickly: when you hear about "lab data," that's basically synthetic monitoring. I like to get people to think about synthetic monitoring as helping you understand your pages: how your pages are built, and what's changing on them. It also lets you be really proactive, so you can see and identify problems by analyzing your pages before they go out to your users, before you introduce any new code changes or anything like that.
7:47
Cliff Crocker
Yeah, I think that… I think when we think about, you know, synthetics in general, I like the lab analogy because we think about a clean room environment where you have control over the variables, you have control over what you're testing under, what conditions you're testing under, when you're trying to isolate problems with your code and improve the things that you can.
8:05
Cliff Crocker
Which is very different from RUM, and we'll be talking a little bit more tomorrow, Andy Davies and I, about RUM and about RUM at Embrace. But synthetic is a really great way for us to actively go out and monitor, so it is a form of active monitoring: controlling all your variables in a clean lab environment so you can actually isolate problems.
8:27
Tammy Everts
And when we think about synthetic monitoring… oh, sorry, was that, Cliff?
8:31
Cliff Crocker
No, no, that's why I was cutting you off, I apologize, go ahead.
8:33
Tammy Everts
Oh, no, sorry. Usually when Cliff and I interrupt each other, just for everyone's reference, it's because we're about to say the same thing that the other person's gonna say, so, there's a bit of a hive mind that's going on, so it always cracks me up. Okay, so where synthetic fits into your overall, sort of, spectrum.
8:52
Tammy Everts
We've just talked about synthetic. RUM, on the other hand, helps you understand your users' full spectrum of experiences. CrUX, the Chrome User Experience Report, which we're not going to get into very much until later in this session, is kind of a specific RUM dataset; it's very limited, but still really helpful. And that's all I'm going to say about that for now, unless there's anything you wanted to add, Cliff.
9:20
Cliff Crocker
Yeah, I guess the only thing here, and the reason for this slide, is that a lot of times, even today, just like 10 years ago, you hear people saying synthetic is so much better than RUM, or RUM is so much better than synthetic. I even saw some of these arguments in the last month, where people were like, well, I don't need RUM, I've got synthetic, and it's so much more detailed. Or, the RUM data's the real source of truth, so why should I care about synthetic? We obviously think that there's room for both, and a need for both. Our goal is to talk about the best of both: what synthetic does, and then what it doesn't, which we fill in a lot with RUM, or datasets like CrUX, to get more of a full picture of what's going on. These are not competing technologies. They're very complementary.
10:08
Tammy Everts
So, let's just do, like, a lightning tour of just some of the things that you can learn from synthetics. So, as Cliff already mentioned, it's a… it's a clean baseline in your lab where, you know, you can see, okay, something changed in your codebase, something changed in your design, and you suddenly violated a metric. In this case.
10:28
Tammy Everts
The metric that's been violated, very briefly, is Cumulative Layout Shift. And we'll get more into what the different metrics mean in a few minutes. The cool thing about synthetic, too, is it lets you measure not just your own pages, but any URLs, and there are so many helpful reasons to do that, including competitive benchmarking, which we'll also get to in a minute. You can script different scenarios, which can be really helpful: see how a page renders on a first view versus a repeat view, or how it renders if you block all the third parties, or specific third parties. So there's a lot of really cool scripting you can do to emulate different scenarios in synthetic.
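To give a flavor of what that scripting can look like: synthetic test scripts in the WebPageTest tradition (which SpeedCurve's scripting follows) are short, line-based commands. The sketch below blocks requests to a couple of third-party hosts before loading a page. The hostnames and URL are illustrative placeholders, and in real WebPageTest scripts the command and its parameters are tab-delimited:

```
block doubleclick.net googletagmanager.com
navigate https://www.example.com/
```

Running the same page with and without the `block` line gives you a before/after comparison of what those third parties cost.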
11:10
Cliff Crocker
And this has been one that I think has really popped up a lot over the last few years with cookie consent, and those cookie banners that pop up everywhere, as an example. Being able to script both those experiences: how much weight is the cookie consent actually adding to my page? But also, what does a logged-in user see, a user who has returned to my site several times, or has accepted that cookie consent prompt? So, being able to test under different scenarios, again, with some of that control; scripting is amazing for that.
11:41
Tammy Everts
You can also test pages, outside your production environment, so you can test pages in your staging or dev environments, test pages behind a login, so just kind of lots of cool things you can do. And why this is specifically helpful is that you can also, test
11:59
Tammy Everts
on a deploy. So basically, what these two features let you do is integrate testing into your entire development pipeline. You can do it manually, as we just talked about, where you trigger a manual test after a deploy; you can test in your staging environment before you push code out; or you can use plugins, or our API, or our GitHub integration
12:23
Tammy Everts
to really bake synthetic testing into your entire pipeline. You also get really great diagnostics, and that's always been a big strength of synthetic. We would never want people to move away from synthetic and use only RUM, because RUM can't give you this level of diagnostic waterfall. This is just the top part of this waterfall, where you can actually see what the page elements are, how long they take to render, and, at least in our charts, these hash marks showing which resources are render-blocking. So you can visualize all of that.
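The "test on a deploy" integration mentioned a moment ago can be as small as one post-deploy hook. As an illustrative sketch, here's a hypothetical GitHub Actions step that notifies SpeedCurve's deploy API after a production deploy, which triggers a fresh round of synthetic tests. The endpoint shape follows SpeedCurve's public API, but treat the site ID and secret name as placeholders and check the current API docs before using it:

```yaml
# Hypothetical post-deploy step: trigger a synthetic test round.
# SYNTHETIC_API_KEY and site_id are placeholders for illustration.
- name: Trigger synthetic tests after deploy
  if: github.ref == 'refs/heads/main'
  run: |
    curl -s -X POST "https://api.speedcurve.com/v1/deploys" \
      -u "${{ secrets.SYNTHETIC_API_KEY }}:x" \
      --data "site_id=12345" \
      --data "note=Deploy ${GITHUB_SHA}"
```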
13:09
Tammy Everts
And synthetic also shows you detailed performance audits and recommendations, along with your Lighthouse scores. We like to badge them so you can see which metric is affected by each audit, or which metric will hopefully improve if you act on our recommendation. So TBT stands for Total Blocking Time, FCP is First Contentful Paint, LCP is Largest Contentful Paint, and so on.
13:41
Tammy Everts
And this part is always really interesting: a detailed analysis of assets. How is your page built? How much HTML is on the page? How much, importantly, JavaScript is on the page? For this page, you can see that there are a total of 540 requests and almost 6 megs of total page size. And when you look at the breakdown over here in the sidebar, you can see that about 3 of those megs are JavaScript, which is pretty significant.
14:11
Tammy Everts
And another great thing is that you can actually see the third parties on the page, and the long tasks: the bits of JavaScript that take 50 milliseconds or more to execute. Just being able to get this level of granularity, seeing how the page is built and where the issues might be happening.
14:30
Tammy Everts
And then you also get really great visuals with synthetic. These visuals have been around forever, and they're so powerful. We talk about them all the time: you can see rendering timelines, you can get rendering videos. You can get rendering videos of your site alongside your competitor's site, and those can be really, really helpful for helping your organization understand where your page sits performance-wise compared to your competitors. And if you want to see an example of all of this, without having any kind of synthetic monitoring set up yet, or you just want to see our approach, you can check out speedcurve.com/benchmarks, poke around in there, drill down into some waterfall charts, and get all the visuals we've just talked about.
15:21
Tammy Everts
So now let's talk about Core Web Vitals.
15:25
Cliff Crocker
All right, so I can feel some of our friends' eyes rolling into the backs of their heads, going, guys, come on, these have been around forever. Which is true, but we've got a set of new friends as well who maybe haven't heard of these, and I think it's an important starting point. So, years ago, we were…
15:43
Cliff Crocker
tasked, even back in the Keynote days, way back when, by companies all the time: hey, just give me that golden metric. Give me the one metric I need to focus on that tells me site performance. Is it page load time? Is it Speed Index? People have been on the hunt for the one metric they can focus on that's going to tell them about user experience.
16:05
Cliff Crocker
Well, we got it to three. So, quite a few years back, we introduced this concept of Core Web Vitals. Core Web Vitals is an initiative that was driven a lot by Google, but also very much by the community. Whether it was vendors coming together to talk about this, or webperf experts, many of whom are probably on the call today, people contributed to come up with a set of metrics that were focused on
16:34
Cliff Crocker
not just diagnostics in network terms or technical terms, but really: what's the user experience, and how is that actually measurable in the browser? So, Largest Contentful Paint, representing the loading aspect of a page or a site. Interaction to Next Paint came in a little bit later; initially, we had First Input Delay. One of the things I love about Core Web Vitals is we've given ourselves permission to change over the years, whether that be thresholds or the metrics themselves. So, INP came along and replaced First Input Delay, which we think
17:10
Cliff Crocker
INP is a great metric, showing strong correlation with user behavior by measuring interactivity. And then there's CLS, Cumulative Layout Shift: visual stability. How is the page settling? How are things moving around on the page as I'm trying to interact with it? We all know the frustration of trying to click on a page and accidentally clicking on an ad banner that pops down in front of us. Those practices have caused a lot of frustration in users, and they're measurable through a metric like CLS. Point being, these are a great starting place, and where we'll focus a lot of our time today. They're not the end-all, be-all by any means, but they're certainly a good starting point, especially for those who are just getting into web performance.
17:57
Tammy Everts
And just for anyone who's just learning about Core Web Vitals: these thresholds that you see in the bars are thresholds Google has created based on looking at a lot of aggregated data. The thing I always caution people on is that this is a good starting point, but your thresholds are going to be different for your site. We're not going to get deep into that today.
18:20
Tammy Everts
But I always want to put that out there as a really big caveat. So, just to go through these real quick, Cliff already covered them, but one of the things that's important to talk about is what we can measure in synthetic versus what we can't. So, LCP, Largest Contentful Paint, we can measure in synthetic. Is something on the page loading? Is the largest visual element showing up?
18:46
Tammy Everts
Is it stable? That's CLS. Is the page janky or not? Are elements flying around? We can also measure that in synthetic. Is it interactive? That's INP, Interaction to Next Paint. We can't measure that in synthetic, because it's based on RUM data; we're actually measuring how long it takes a page to respond to a real user action. But we have a pretty good proxy called Total Blocking Time, which measures the total blocking time of the JavaScript on your page,
19:16
Tammy Everts
which is a pretty good synthetic proxy for INP. We also want to give you some metrics beyond Core Web Vitals. Core Web Vitals are a good starting point, but there are at least a couple of other metrics that we would consider to be vitals, if not Core Web Vitals. Is the server responding? You really want to know this; that's Time to First Byte.
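For readers who want the mechanics behind Total Blocking Time: every main-thread task longer than 50 ms is a "long task," and only the portion beyond 50 ms counts as blocking time. TBT is the sum of those overages (in the lab, between First Contentful Paint and Time to Interactive). A minimal sketch of that arithmetic, with hard-coded task durations for illustration:

```javascript
// Total Blocking Time: sum of each main-thread task's duration beyond 50 ms.
// Tasks at or under 50 ms contribute nothing.
const BLOCKING_THRESHOLD_MS = 50;

function totalBlockingTime(taskDurationsMs) {
  return taskDurationsMs.reduce(
    (sum, duration) => sum + Math.max(0, duration - BLOCKING_THRESHOLD_MS),
    0
  );
}

// Example: two long tasks, one borderline task, one short task.
console.log(totalBlockingTime([250, 90, 50, 35])); // → 240 (200 + 40 + 0 + 0)
```

This is why TBT tracks INP reasonably well in the lab: a main thread clogged with long tasks is the same main thread that would be slow to respond to a real user's tap or click.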
19:40
Tammy Everts
And is anything happening on the page? That's First Contentful Paint: is anything starting to render in the browser for the user? And this is a heat map. I love heat maps; they can tell you a lot. I don't want to go deep into it, because Cliff is going to be talking more about this shortly, but you can see in this heat map a whole bunch of pages, all of the metrics, and where the issues are. You can also see the little column where INP would live if we had RUM enabled for this set of sites. So I'm gonna throw things over to you, Cliff.
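One aside for newcomers on how the CLS number in that heat map is actually produced: browsers don't just add up every layout shift on the page. Shifts are grouped into "session windows" (shifts less than a second apart, with a window capped at five seconds), and the reported CLS is the largest window's summed score. A rough sketch of that grouping logic, simplified from the spec's exact edge-case handling:

```javascript
// CLS session windows: a shift joins the current window if it occurs within
// 1 s of the previous shift and the window is still under 5 s long.
// The reported CLS is the largest window's summed shift score.
function clsFromShifts(shifts) {
  // shifts: [{ time: ms, score: number }], sorted by time.
  let best = 0;
  let windowStart = -Infinity;
  let lastTime = -Infinity;
  let windowSum = 0;
  for (const { time, score } of shifts) {
    const startsNewWindow =
      time - lastTime > 1000 || time - windowStart > 5000;
    if (startsNewWindow) {
      windowStart = time;
      windowSum = 0;
    }
    windowSum += score;
    lastTime = time;
    best = Math.max(best, windowSum);
  }
  return best;
}

// Two small early shifts form one window (0.2); a later isolated
// shift forms its own window (0.3), which wins.
console.log(
  clsFromShifts([
    { time: 0, score: 0.1 },
    { time: 500, score: 0.1 },
    { time: 3000, score: 0.3 },
  ])
); // → 0.3
```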
20:14
Cliff Crocker
Okay, so we're gonna walk through product. We're gonna walk through a little bit of a demo. There's obviously a lot that we could be focused on when we're talking about, synthetic monitoring. Lots of different use cases. I'm actually gonna shrink my screen up just a little bit. This looks a little bit too…
20:32
Cliff Crocker
Too wide to me. Maybe I won't.
20:36
Tammy Everts
No, it looks good.
20:37
Cliff Crocker
Okay, cool. So what I decided to go through today was actually talk about competitive benchmarking. So, this is one of, like, the most primary use cases for synthetic that we've seen, but then we'll also follow up with how do you actually diagnose, find, and fix issues, specifically as it relates to Core Web Vitals.
20:55
Cliff Crocker
So the great thing about synthetic, like Tammy said, is that you can measure other people, right? You can measure your own site, of course, and you should, but getting context around how the site performs against its competitors, against the competition, is extremely important.
21:12
Cliff Crocker
So, when we think about that… alright, one second… My browser seems to have frozen up here. Apologies, everybody. Let me, do this.
21:28
Tammy Everts
Oh, that looks good.
21:29
Cliff Crocker
Okay.
21:30
Tammy Everts
Yeah, that looks better.
21:31
Cliff Crocker
Sorry, a little glitchy. So when we think about this, you can't go and measure a competitor with RUM, right? Arguably, you could do this with the Chrome User Experience Report, gathering bits and pieces of RUM data from Chrome browsers under certain conditions, but you can't go and put JavaScript on your competitor's site unless you're breaking some rules. So this is why I really like synthetic for illustrating this. And I'll start with this idea of a heat map. I picked some random sites in the retail space from the Page Speed Benchmarks that Tammy talked about earlier,
22:10
Cliff Crocker
just to give you an idea of a cross-section of how things are looking. So, at a glance, we can start to tell: hey, if you're Walmart, how is your site performing compared to Nike? It looks like you're doing a decent job on Time to First Byte, but maybe could improve a little bit when it comes to First Contentful Paint.
22:28
Cliff Crocker
Certainly can improve a lot when it comes to Total Blocking Time. So, you're starting to get context around those vitals as they compare to your competitors. You can obviously break this down by browser, whether you're measuring from a slow mobile device or a fast desktop device, as well as by region; for this example, we just measured from the US West Coast. But here's where it actually starts to come together, and really what I think we all enjoy so much about synthetic: this ability to compare screenshots and film strips, to be able to see what users are seeing as your site's loading. Because nothing tells the performance story like a picture or a video, especially when you're putting yourself next to your competition. So, film strips give us the ability to see: how does this page's start render look compared to the competitors as the page is visually loading? When does my Largest Contentful Paint actually happen relative to competitors? And when it's finally fully loaded, what does that experience actually look like? Are we throwing a lot of content out there, whether that's large videos or other things that continue to stream, or are we stopping with a pretty basic static page, like we can see here with Apple? But what I really like is the videos. Being able to come in and say, all right, how does it actually render when we're thinking about the page load? And we can see, for Nike there, it took quite a while for that Largest Contentful Paint image, the poster image for the video that was on the site, to load.
24:03
Cliff Crocker
Pretty slow there; it's something that maybe we wouldn't have the context for if we hadn't held it up against some of our other competitors. But again, a picture paints a thousand words, or says a thousand words, or whatever it is. We can get good information around the loading of the page, like we said, from film strips, but also from the metrics we look at: Time to First Byte, Largest Contentful Paint, start render, and then Time to Interactive. Time to Interactive is one that isn't talked about as much anymore, but I've certainly seen a lot of correlations with it when it comes to how things are actually being loaded into a page: when you can actually start scrolling around, and when you might be experiencing some of that jank, or the high INP times we've talked about before.
24:48
Cliff Crocker
We can also get CPU times, which correlate heavily with the JavaScript execution we see in terms of long tasks, or what we'll hear about tomorrow in terms of long animation frames in RUM. So seeing here that, hey, we actually see scripting CPU time pretty high for Lowe's, and we're gonna dig into that one here in a minute to see what's going on there. But there are other parts of CPU time, like time being spent in layout and loading, as well as paint timings, all associated with how busy the CPU is, and how active that main thread is as it's rendering.
25:26
Cliff Crocker
Content requests. We've talked about this a little bit already as well, but how is the page construction looking when I compare it to competitors? Am I loading way more JavaScript, or is this just the norm for my industry? Am I loading too many images in terms of the number of requests? All those things can give us a little more perspective and context when we're building a site. And then also, the size of those assets. Obviously, the green here, video, is huge, and it's taking up quite a bit of bandwidth when we're talking about people who might be on
26:00
Cliff Crocker
poorly connected networks or congested networks, areas where maybe we don't have the privilege of such fast download speeds and wide-open pipes to stream this kind of content. But also, JavaScript alone, right? JavaScript can be huge: 9 meg of JavaScript, in this case, for Lowe's, being loaded on the page.
26:20
Cliff Crocker
Which is quite a bit higher, even, than the images and video they're serving on the page. And then we'll talk a little bit more about Lighthouse and what comes from that as we dig into more examples, but that gives you a high-level view: hey, how do I benchmark myself and look at how I'm performing compared to my competitors? Now let's dig in a little bit with some examples. I'm going to be equal opportunity and pick on a few different sites here, but I want to dig into how you actually go about finding and fixing issues with Core Web Vitals. We'll start with Largest Contentful Paint, that Nike example we were talking about.
27:01
Cliff Crocker
I pre-baked the cake a little bit here. I'll go ahead and jump here to look at the test details in a little bit more detail. The nice thing is that, you know, we are trending this data. I should have talked about that a little bit before. I'm jumping right into the test details, but, you know, we can see if there's been changes that are happening over time, with any of these different vitals. We can see, again, specifics around the film strips and the pages that we might be measuring for this site. What's my lighthouse performance looking like over time, as well as what are some of the recommendations coming out of those
27:33
Cliff Crocker
audits that I can use to improve. And then from there, drilling in to actually look at the test details, and I pulled up a special one here to dig into. So now we're here for Nike, and we're trying to say, okay, what's going on? Why is my Largest Contentful Paint image taking so long to load? And honestly, I'll admit, before the call, when I was looking at this one, I was struggling a little bit, because not only are we seeing the LCP coming in pretty late relative to competitors,
28:01
Cliff Crocker
But we're also seeing this kind of weird behavior where it disappears, and then all of a sudden we don't get anything again. So, this is a bit of a double whammy. This is, again, a call-out to talk about why I think Core Web Vitals are a great starting point, but they don't always tell you the whole story when it comes to user experience. So we'll dig into, you know, what's going on here a little bit more. We can see high level from the waterfall, what this looks like in terms of, like, the milestones on the page, all the way up to fully loaded. Before we drill into the waterfall detail, I'm gonna actually…
28:34
Cliff Crocker
trim this to LCP. This is a shout out to Tammy, this was actually her idea for this waterfall, to be able to come in and say, alright, if I'm looking at LCP, there's too many requests on this page for me to kind of grok all at once, so I simply just want to look at what's in my critical path, like, what's in my critical path to LCP?
28:53
Cliff Crocker
And the answer is a lot, when we're looking at this page. So we can see, as we start to drill down, one of the unique things about the waterfall that we're using at Embrace is that we're showing you several different dimensions on one view of a waterfall. It's not just about, you know, the loading and the durations that we're looking at, but also we're doing things like showing you hash marks when something is render blocking. CSS, naturally going to be render blocking, but when we look at JavaScript.
29:20
Cliff Crocker
you know, really as a best practice, you should be deferring this JavaScript or loading it asynchronously, because otherwise it's going to do things like prevent you from loading other assets on the page. It's essentially stopping the world by being render-blocking.
29:37
Cliff Crocker
So we can see a lot of that happening here. We can also see just the React library here, and how much JavaScript execution is tied to this, which is kind of another light bulb that's going on. And as we scroll down, we can see there's just more JavaScript. This happens to be loaded synchronously, but as we start to get down here, I'm still scrolling, seeing a bunch of images that are being loaded here. Shoes, no surprise there for Nike. But as we get into this, I start to see there's all these poster images that are being loaded, before that actual, LCP element, which is all the way down here. So, we've managed to load up a ton of JavaScript, we've managed to load up a lot of images that aren't actually being used, yet. And we're actually deprioritizing this largest contentful paint image that's coming down way, way, way down here, which happens to be the poster image for that video. That's rolling on the page. So, just to get to this point, you know, we've had to download all this JavaScript, we've had to load all this stuff just to get a chance for that image to show up. The image is not discoverable in the HTML, because this is being loaded later, I believe, by the video player, as a holding image for the video as it starts to process. which is loaded way, way, way up here, so we can see, like, the player, that's being loaded, that's the CSS, and then here's the actual JavaScript. Was, like, the 21st element was loaded, you know, roughly, at the… whatever mark this was, yeah, 600 milliseconds, or whatever it was. But it wasn't until way, way, way further down, likely this is gated on onload or something like that, where the images actually started to load, and be presented to the page. So… they could do a lot here to improve this experience. Also, just probably simply by… simply by preloading this image, by putting that actually in the header and saying, preload this image, the preload scanner would then be able to find that, render this much, much earlier. 
Also, using preconnect, since this is on a different domain, static.nike.com, gets you a little bit more performance gain as well.
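To make that concrete, the preload and preconnect hints Cliff describes might look something like this in the document head. The domain and file name here are illustrative placeholders, not Nike's actual markup:

```html
<head>
  <!-- Open the connection to the image CDN early
       (hypothetical domain, standing in for the static host) -->
  <link rel="preconnect" href="https://static.example-cdn.com" crossorigin>
  <!-- Tell the preload scanner about the LCP poster image, which is
       otherwise only discoverable after the video player's JavaScript runs -->
  <link rel="preload" as="image" fetchpriority="high"
        href="https://static.example-cdn.com/video-poster.jpg">
</head>
```

With hints like these, the browser can start fetching the poster image while the player's JavaScript is still downloading, instead of waiting for the player to request it.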
31:48
Cliff Crocker
But that alone could make a big difference just in terms of the LCP. But it doesn't solve for that other problem that we were seeing. And I think that this is a big one. I'm actually going to show the video again, because I think that it's that important. This'll take a second to actually build and render. But nothing's happening, nothing's happening, nothing's happening. Okay, there's… oh wait, it's gone, it's gone, it's missing again, I'm not seeing anything until later, and then here it is again when the video actually starts to pre-roll. So, we've got… several issues that are going on here that are related to this video player. As I look down in the waterfall, the other really cool thing that we show is the height of the asset. It's actually showing you the size. So, this is all, video segments that are loaded, because this is supposed to be a streaming experience.
32:36
Cliff Crocker
But due to all the JavaScript and JavaScript execution that's happening, as well as the size, you know, of this segment, this one's over 3 meg, 3.5 meg, we're not loading anything until we get all the way down here, and we've started loading the video segments that are, you know, matched up with that poster image that we saw. So, we're creating a pretty bad experience here for the user, not just in delaying LCP, but also in the fact that this video can't start rolling much, much earlier, like when that video player was actually established in the JavaScript in the early part of the page.
33:11
Cliff Crocker
Now, this is not a scientific breakdown of how to make this page faster, necessarily, but just an indication of what you can do when you take a film strip and a picture of what's happening, and then, you know, throw that onto a waterfall to actually see, you know, the rendering order, when things are coming in, things that you can do to kind of find some low-hanging fruit and hopefully optimize this.
33:33
Cliff Crocker
Okay, shifting quickly back into, a couple other examples, and I know I'm a little bit… little bit late on time, but I'm gonna push through anyway.
33:43
Tammy Everts
Do it.
33:44
Cliff Crocker
I wanted to point out a few more. So, let's pick on Amazon, because they seem to have the worst CLS on the page. Again, pulled this up here so we can see, what the test result looks like. So again, we've got the same test result, you know, you get this for every single test that you run, you know, quite a bit of detail packed in here. This isn't an error. At first, I thought it was, but it's like, oh yeah, it's Amazon. They don't load any third parties, they're basically just using everything from the Amazon domain.
34:14
Cliff Crocker
And we can see that here when we're looking at the script execution. Lighthouse is telling us that there are some issues going on here. We're not going to dig into those as much, because really, there's only one that's impacting CLS here, and that's the fact that there's no explicit height and width set on several of these images.
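As a quick sketch, the fix Lighthouse is pointing at is simply reserving space for each image up front. The markup below is hypothetical, not Amazon's actual HTML:

```html
<!-- Shifts: the browser reserves no space until the image arrives -->
<img src="banner.jpg" alt="Promo banner">

<!-- Doesn't shift: width/height give the browser an aspect ratio,
     so space is reserved before the download finishes -->
<img src="banner.jpg" alt="Promo banner" width="1200" height="400"
     style="max-width: 100%; height: auto;">
```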
34:34
Cliff Crocker
Which is going to contribute to a lot of the shift that you see. But this is a really high CLS score. You know, seeing a layout shift score of, you know, over .1 is something to look into. Seeing something that's almost .4, there's a lot of room for improvement here, and it's really coming down to just this first shift that's happening, where we can see that this banner's loading, we've kind of caught this resize in the middle here, but we've had to resize the element to be able to start fitting in all these different cards that are loading. This is a little bit more obvious, where we can see the elements shifting around here. They're not of a predetermined width, they're actually being dynamically resized.
35:12
Cliff Crocker
And then here, another dynamic resize that we're seeing, that's causing issues with that banner image yet again. So, we've been able to really quickly get to: okay, what's my CLS? What are some of the recommendations that are coming out of Lighthouse to improve CLS? And give me a visual example of the actual rectangles and elements on the screen as they're moving around, so I can identify where those layout shifts are. Now, layout shifts are measured within a window, so you can see here, this is the, you know, .39, but there are other layout shifts that are happening. So as soon as this is addressed, the next question's going to be: how do I actually go about taking care of this one? So CLS is rarely one and done when you've got issues like this, so it's kind of nice to see some perspective about where layout shifts are happening, even if they're not included in that initial window that's contributing to the CLS score.
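A rough sketch of that windowing logic, with invented numbers rather than Amazon's real shift data: shifts less than a second apart land in the same session window (capped at five seconds), and CLS reports only the worst window.

```javascript
// Sketch of CLS "session windows": layout shifts within 1s of each
// other (and within a 5s window) are summed; CLS is the worst window.
// Times are in ms; the data below is invented for illustration.
function clsFromShifts(shifts) {
  let maxScore = 0;   // worst window seen so far (the CLS value)
  let winScore = 0;   // running total for the current window
  let winStart = 0;   // timestamp of the current window's first shift
  let prevTime = -Infinity;
  for (const { time, value } of shifts) {
    // Start a new window after a >1s gap, or if this window would exceed 5s.
    if (time - prevTime > 1000 || time - winStart > 5000) {
      winScore = 0;
      winStart = time;
    }
    winScore += value;
    prevTime = time;
    maxScore = Math.max(maxScore, winScore);
  }
  return maxScore;
}

// Two shifts in one window (0.25 + 0.125), then an isolated one later.
const shifts = [
  { time: 1200, value: 0.25 },
  { time: 1600, value: 0.125 },  // same window: totals 0.375
  { time: 8000, value: 0.0625 }, // new window after the gap
];
console.log(clsFromShifts(shifts)); // → 0.375
```

This is why fixing the first big shift doesn't always zero out CLS: once the worst window is addressed, the next-worst window becomes the score.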
36:02
Cliff Crocker
Finally, we're gonna jump back into our leaderboard here and take a look at total blocking time. So, I like Lowe's, I shop there a lot, but I also noticed that they're using quite a bit of JavaScript on their site. So, when I look at Lowe's, and again, now I'm in a test example for Lowe's, I can see that, you know, FCP is maybe a little bit on the high side. Time to first byte certainly is high, and again, we're measuring on a mobile connection in this case. LCP needs some improvement as well. But when I start to look at total blocking time and break this down here in the waterfall, I can see a breakdown of this, and I can start to see where these long tasks are executing on every script. So there's a lot going on here. There's stuff that's happening for first-party requests. There's quite a bit happening and associated with this third party as well, where we can see that this is coming in and contributing to almost 2 seconds of long task time, where essentially the main thread is dead and not responding. And it continues. More stuff from Lowe's CDN, which is technically classified as a third party, but is really more of a first party, if you think about it. But all of this… we kind of come down here and take a look at the activity timeline.
37:27
Cliff Crocker
Nope, let me reload that really fast. So all this red that's happening here, throughout the page, is causing quite a bit of interruption. Not only is the user, you know, not able to interact with the page, and nothing is able to happen while this is… while this is going, this is render-blocking, stop-the-world activity that's happening due to all that JavaScript that's executing, but it's also impacting your downstream metrics, when we're thinking about things like, you know, even page load time. There's even some that's happening before Largest Contentful Paint.
38:03
Cliff Crocker
So, again, we saw that their Largest Contentful Paint wasn't too bad. First Contentful Paint wasn't horrible. Most of that's pushed out by time to first byte, but look at everything else that has to happen on this page, and what's being interrupted in terms of that user experience. So… while the LCP metric might be telling us how this page is loading and what it looks like, once I want to interact, or click on something, or add to cart, or something that you really want your customers at Lowe's to do, it becomes difficult because you've got all this stuff that's executing, blocking the main thread.
38:34
Cliff Crocker
We can scroll down here and see that this is associated with, third parties as well as first parties, because we've already identified this is from Lowe's CDN here. But these are ones that, you know, you actually want to spend some time talking to your third-party vendors to say, hey, what's going on? Why are you blocking the main thread so much? Why is there such high JavaScript execution time? In some cases, it's misattributed, might be due to wrapping around JavaScript errors or things that are occurring, but in a lot of cases, it's
39:01
Cliff Crocker
JavaScript that's not even being used, or functionality that's not even being used. And more and more, I think third parties are starting to play along and give us optimized bundles and things that we can use when we're not using the full functionality, but we should be holding them accountable. We can also look at the size, and look at all that JavaScript that's being loaded. Again, 7 meg that's being served from this Lowe's CDN domain. Google Tag Manager is responsible for about 600K of the JavaScript, as well as a number of the requests that are coming across. And here's a breakdown of that. When we look at the script by domain, we can see that long task time was about 5.6 seconds, where the browser could do nothing until this was done. The overall time that we're seeing on the script execution was close to 12 seconds.
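As a sketch of how that long-task math works (the durations below are invented, not Lowe's actual profile): total blocking time counts only the portion of each main-thread task beyond 50 ms, which is why a handful of multi-hundred-millisecond scripts can add up to seconds of blocking.

```javascript
// Total Blocking Time: for each long task (>50ms) on the main thread,
// count the time beyond the 50ms budget. Durations are in ms and
// invented for illustration.
function totalBlockingTime(taskDurations) {
  return taskDurations
    .filter((d) => d > 50)                  // only "long tasks" count
    .reduce((sum, d) => sum + (d - 50), 0); // blocking portion only
}

// One short task (45ms) contributes nothing; the rest block the thread.
console.log(totalBlockingTime([120, 380, 45, 700, 90])); // → 1090
```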
39:50
Cliff Crocker
And again, this is happening, as the browser's loading, as other resources are loading, so like we saw in that, you know, example from Nike, where the video files and segments weren't able to load because all the JavaScript that was happening and executing on the thread, this is the same kind of thing that would impact that.
40:08
Cliff Crocker
So, some things to focus on there. Obviously, the audits can tell us very similar things when we look at that. I would imagine that, you know, reducing JavaScript execution time is one of the biggest recommendations that they have. Here are the actual scripts that you would want to look at and profile.
40:24
Cliff Crocker
Minimizing main thread work, as well as reducing the impact of third-party code, so a lot of the third parties that we talked about that were attributed to that. I think I'm gonna stop there, but wanted to give you a quick and dirty look at, one, how we think about synthetics and the use case of competitive benchmarking and optimizing for Core Web Vitals. Two, how this is looking in Embrace. We're pretty excited about it. We're pretty excited that, in the quarter we've been here, we've gotten synthetics into the product and integrated it with the RUM product that we'll dig into a little bit more. Wanted to make sure that you were able to see that and see if that prompted any questions about where we're going. But hopefully this was useful. Tammy, I'll go ahead and stop sharing so we can continue with the slides and get to some Q&A.
41:14
Tammy Everts
Alright. All right, everybody, bear with me. Try to make this as unbumpy as I can. Ta-da! Oh, here we go. Well, that was a little bit bumpy. We're just gonna quickly scroll through, all you speed readers. There we go. Alright, so one of the things that's been really interesting about this journey of joining forces with Embrace and talking with a lot of folks who are really new to synthetic is just the fact that we've internalized so many best practices over the years, but we've not, I think, done a great job of sharing them so that we can make sure people are really set up to do very successful and helpful testing. So, what I've been working a lot on over the past little while, with help from Cliff and Andy, is just a set of best practices for synthetic testing, and it's really been interesting having these
42:32
Tammy Everts
conversations, because we're… there's a lot of overlap, but then we also had our own little individual preferences, so it's been really interesting, actually, just getting back to, kind of, basics and talking about these things. So, really quickly, Cliff talked about some of these things earlier, so did I. I think of them as being, like, 4 flavors of synthetic testing. So we have scheduled tests. This is how you create your baseline: by having scheduled tests at specific times every day, so you can get that baseline.
43:00
Tammy Everts
Deployment tests, we talked about this, is when you integrate testing into your CI/CD pipeline. And then there's ad hoc testing, where you can just kind of test anytime by hitting a button. If you don't want to do a full-on CI integration, you can just say, like, oh, I just did a deploy, I'm going to hit a button and do a round of tests.
43:18
Tammy Everts
And then competitive benchmarking. So… best practices, most of the leading sites, like, most of the sites that I help out and that I speak with, benchmark themselves against at least a couple of competitors, and maybe kind of an aspirational site, like Amazon, which tends to always have really speedy start render and LCP times.
43:38
Tammy Everts
So those are kind of the four types of testing that we recommend everyone do. And then, kind of, the best practices. So, testing key page types. This is actually a really interesting conversation, because a lot of folks think that they need to test their homepage, because…
43:55
Tammy Everts
they want to benchmark themselves against their competitors, and so, like, comparing home pages feels really important. Actually, when we look at RUM data, and the relationship that different page types, and the speed of those pages, have with metrics like conversion rate or bounce rate…
44:11
Tammy Everts
The higher correlation is actually with things like your product pages, your category pages, if you're a retailer, or if you're, like, a news site, it's your actual article pages. So, you want to make sure that you are testing a good representation of, for retail: product, category, definitely search, checkout, and landing pages, and your homepage too, but that's almost more of a vanity metric than anything else. And for a news publisher, your homepage, yes, but also your section and article pages. And you're going to want to look at your RUM data, and good RUM data will actually show you what the most popular page types on your site are, and you should base your testing on that. You also want to make sure you're testing from probably multiple geographic regions, and again, this is where you're going to want to look at your RUM data and make sure you're aligning your test locations to where your visitors are actually coming from. And it sounds really obvious, but sometimes people just choose one test location because they want to minimize the number of checks that they're… the testing that they're doing, to kind of stay within a certain budget. But you really want to test where your people are. By default, like, we've set up our synthetic testing to basically do a minimum of 3 test runs per test, and then it plucks the median result, and that's what it shows you. So, don't… we don't necessarily… I don't see a lot of use cases where people do just one run per test, or more than three runs per test. Sometimes you could do, like, five. You want it to be an odd number, so you can pluck that median. Five, maybe, if you're finding your numbers are really, really spiky and you're getting a lot of outliers, but that's kind of outside the norm.
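The "odd number of runs, pluck the median" idea is easy to sketch: the reported result is always a real test run, and a single outlier can't drag it around the way an average would.

```javascript
// Pick the median of an odd number of test runs, so one spiky
// outlier doesn't distort the reported result.
function medianRun(results) {
  const sorted = [...results].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

// Three LCP measurements in ms; the 9800ms outlier is ignored.
console.log(medianRun([2100, 9800, 2300])); // → 2300
```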
45:56
Tammy Everts
And this was actually… Andy Davies and I had a really quite involved conversation about how often people should test. What we both agreed on is that testing once a day is not enough. What you really want to do is… testing at least every 6 hours is going to give you pretty good baseline monitoring.
46:16
Tammy Everts
Really, the more important thing to think about is, again, looking at your RUM data, when are users most likely to visit your site? What are the most popular times of day? And make sure you're testing at those times. Don't just choose a random number, because that number might not be when your visitors are there.
46:33
Tammy Everts
And Andy's recommendation is, you know, every 3 hours if you've got, like, a really critical page that you want to make sure you have some visibility into. And then make sure you're covering the right browser profiles. So in synthetic, you know, at Embrace, we have evergreen browser profiles, so you don't have to worry about always updating from, like, iPhone 14 to iPhone 16, and so on. We have these evergreen browser profiles. We actually update them in the background for you, so you don't have to worry about it. But at a minimum, you want to get coverage for desktop slow
47:07
Tammy Everts
And mobile fast, and ideally also mobile medium, so you get a sense of, kind of, like, what a lot of users are likely to experience. But I'd also recommend that if you are able to do it, also test for mobile slow, which is going to get you the sense of what a, like, a bad Android, a cheap Android on a poor connection is going to be, and also even desktop fast to give you, like, maybe the full range of user experiences.
47:35
Tammy Everts
And now I'm gonna throw things back over to Cliff to talk about what's coming up.
47:41
Cliff Crocker
Yeah, and this is specific to synthetics. We've got a lot cooking over at Embrace. We've got a lot that's been going on on the SpeedCurve platform as well, but wanted to focus a little bit more on what you can start to expect to see as things start to land in Embrace.
47:58
Cliff Crocker
We talked about CI/CD integration, something that we can still leverage today, given that Embrace is powered by SpeedCurve in terms of synthetics. So we can use our existing deploy APIs, we can use our existing integrations with GitHub. But we want to make this more first-class in Embrace, so bringing over deployment dashboards, bringing over annotations and notes and the things that you want to see and track with those deployments is coming soon. On top of that, competitive benchmarks. We showed you how you can use the product in its general sense right now to run benchmarks and get really great data. However, we've got to take that to the next step. So we've had a project in the backlog for about, you know, six to eight months now that we really want to bring forward, which will start to include, you know, the industry benchmark data by default, some
48:48
Cliff Crocker
templates based on, you know, whatever industry that you're measuring from, as well as being able to select your own competitors, but also supplement that with the CrUX data, with some, you know, views of what the performance looks like from the Chrome User Experience Report for you and your competitors. Finally, functional testing. Completely different type of testing than what we talked about today, which is more about performance baselining and performance testing. Functional testing is one of the things that a lot of our customers have been asking us for: the ability to also reuse scripts between functional and your performance testing, but doing more pass-fail type testing.
49:23
Cliff Crocker
Are my key flows actually running and operating? That's something that we want to supplement our synthetics offering with. This is just a very small sliver of the things that we're working on next, as it relates to synthetic, that we're super excited about.
49:40
Tammy Everts
So… really quickly, just because we're running up on time, and we want to make sure that we've got time for questions. We've talked about, kind of, what the great bits of synthetic are, but there are also some limitations, and this is where, kind of, real user monitoring picks up. So, with synthetic, you have limited test URLs: only your site, or sorry, only pages on your own sites and other sites. With real user monitoring, you get all of your data for all of your site, across real network and browser conditions, full geographic spread, it's always on, and you can correlate performance metrics with business metrics and UX metrics, like bounce rate and conversion rate. So, I don't want to give away the farm, so that's all I'm going to say there, and I'm going to throw the mic over to Lindsey.
50:31
Lindsey Ludwick
Thank you! Great session, you guys. Really appreciate it. We have a lot of commentary and questions, so I don't want to spend too much time, but if you're registered today, then you probably know that we have the web RUM session tomorrow, where you'll learn from Cliff and Andy how to see 100% of what's happening in the browser. So be sure to check that out if you're registered. And also on Thursday, we have a really exciting session hosted by our own Tammy, a live AMA on LinkedIn. All things web performance, so bring your toughest questions on over. You can scan that QR code, the post is live, so you can put your questions in there already, and they will be answered on Thursday.
51:16
Lindsey Ludwick
Tammy's a great follow on LinkedIn, if you don't already know that. But yeah, we're really excited about this AMA, so go ahead and navigate on over there to participate. And let's get to some of these questions. So, there's a really live discussion in the chat about, third-party consent banners, browser caching, repeat views, and I just want to point out, before we dive into the other questions, Tammy's link, to the post that Cliff authored, maybe we can drop that in there once more, and, acknowledge that, yes, Embrace does have scripting for repeat views, so that's happening in the chat. In the Q&A,
51:55
Lindsey Ludwick
We have a question from Ravi. Core Web Vitals were defined by Google. Do you think there's maybe a little bit of bias for Chromium V8 Engine?
52:08
Cliff Crocker
Do we think that there's bias for the Chromium V8 engine in the synthetic tests that we're doing?
52:12
Lindsey Ludwick
In Core Web Vitals, given the Google genesis of both.
52:17
Cliff Crocker
Yeah, I think, you know, a year ago, I would have said yeah. And, you know, that was because WebKit really wasn't kind of coming along and giving us visibility. After Interop, I guess it's 2025, they've added support for INP as well as LCP. They still haven't come along with CLS, but we're hoping that they will. So, well, yes, I think there was inherent bias, and I think there's inherent bias in the CrUX data, because that really is just Chrome data.
52:45
Cliff Crocker
Now we've got a much bigger picture of what's going on, especially when you think about Mobile Safari and just how prevalent that is. In the U.S., you know, it can make up 60% of some people's traffic. What used to be a big gaping black hole, we've now got visibility into. I would follow on to say that there are still issues, though, because I think Chrome still has a little bit of bias when you talk about synthetics, because we don't really have a great way right now of testing WebKit on iOS, for example. It's certainly possible, something that we're also looking into that I didn't mention on the roadmap, so we can start to get a better view of what's happening in synthetics as well, but…
53:23
Cliff Crocker
you know, unfortunately, or not unfortunately, however you think about it, we're gonna have some Chrome bias because of the fact that Chrome has given us so much to work with, which we're very appreciative of, but I think the industry's moving along and trying to close those gaps as soon as possible.
53:38
Lindsey Ludwick
Great.
53:39
Tammy Everts
Shout out to Mozilla.
53:41
Cliff Crocker
Sorry, Mozilla too, Firefox, gosh, I'm so bad. They've been awesome. They were actually… they were on it with INP and LCP before WebKit was, for sure.
53:49
Tammy Everts
Yep.
53:51
Lindsey Ludwick
Awesome. All right, Benicius had an observation in the comments about, third-party benchmarking. It seems they're experiencing some cases where they're being blocked from testing other sites. Is this something you're familiar with? Any advice or, anything to point their direction?
54:11
Cliff Crocker
Feel your pain, brother.
54:13
Tammy Everts
Oh my god.
54:13
Cliff Crocker
I, I think that what… well, I don't know, I don't want to take on all this. Tammy, please, you know, interject, but… it's a problem… not a problem, but… what we've seen with bot detection, especially in this world of, like, agentic traffic that's been going through, is that it's definitely stepped up quite a bit. Places where we weren't getting blocked are getting blocked now; we're getting the challenges and those responses. I think…
54:38
Cliff Crocker
What we're trying to do in this case, though, is identify ourselves clearly with our user agent, and in some cases, reaching out and talking to some of the bot management vendors that are out there to say, like, hey, we're good actors, you know, we're not malicious, we're not screen scraping, you know, we'll respect
54:55
Cliff Crocker
you know, the llms.txt and robots.txt and all those things on your site. But it is increasingly challenging. I know a lot of people try to script around this, look at changing their user agent string, measuring from different static IPs, things like that, but it is a challenge, and I think that that is something you run into when you're benchmarking competitors, and Tammy can attest to it after maintaining page speed benchmarks as well.
55:21
Tammy Everts
Yeah, yeah, it's a little disheartening to, you know, check into the benchmarks and see, oh, that's just throwing up failed tests. So, I mean, I guess the only good thing is that there's a lot of sites out there, so you just… change your competitors to something else, and kind of stay on top of it. But yeah, staying on top of your failed tests to make sure that that's not happening.
55:45
Cliff Crocker
Or call them up and say, whitelist me, man, come on.
55:49
Lindsey Ludwick
Alright. A question about Lighthouse. Do you believe Lighthouse reports are non-uniform? If yes, how do you use Lighthouse reports?
56:02
Cliff Crocker
That's a really good question. I think, yes, Lighthouse reports can be very non-uniform, especially when they're, you know, run from a developer's desktop or in Chrome, or run, you know, sporadically. Like, you can take one Lighthouse score, walk it over to somebody's desk, and when they're on a different, you know, laptop or whatever than you are, they're gonna have a completely different set of results.
56:26
Cliff Crocker
So I think there are two things that I would focus on there. One is that we do run it consistently from, you know, the same instance type, same locations, all that stuff. We keep our profile locked down as well, to match what we see in PageSpeed Insights.
56:45
Cliff Crocker
And also, I… I don't really love the whole idea of the Lighthouse score. I think a lot of people have come around to the sense that it's not so much about getting, like, 100 on your Lighthouse score, but looking at how you're trending and tracking over time, and really taking those audits into account.
57:01
Cliff Crocker
You'll see that in Embrace as well as in SpeedCurve, where we're not really highlighting, like, hey, here's the score, you gotta make it better. It's more, like, hey, what can you do to improve LCP, to improve your user's experience? So I think it's really kind of how you look at it. It's a great tool, it's a great source of information for how to make pages better. It is absolutely highly variable, especially when you're testing it from local.
57:25
Tammy Everts
I've… I've actually kind of just stopped looking at the number and the score. Like, I like the audits, but I've seen sites… like, Amazon often has, like, a really cruddy Lighthouse score, but pages seem like they're rendering really quickly, so if you care about user perceived performance, that score just…
57:42
Tammy Everts
doesn't necessarily mean that much. I've also seen, like, you know, there are a few blog posts out there where people have gamed the Lighthouse score and gotten 100 with a really non-performant site, just to show that you can. So, I don't trust the… I don't even look at the number, to be honest, anymore. And I don't know… like, I guess seeing it, if you're baselining on it, and you want to see, like, oh, it went up or it went down, what changed? Sure, but is the number itself
58:07
Tammy Everts
kind of an absolute value to care about? I don't know.
58:13
Lindsey Ludwick
Wonderful. Thank you both so much, Cliff and Tammy, and to our participants and attendees today, we're gonna close it there. We did not get to all of the questions, but we will make a point to answer those in some of the follow-up materials, so keep your eye out for the emails and landing pages and blog posts that come out of this week. We will incorporate a lot of these interesting questions into some of those.
58:35
Lindsey Ludwick
Thank you, everybody, for your attendance and attention today, and we hope to see you at other sessions this week!
58:44
Tammy Everts
Thanks, y'all. Bye.
Key takeaways
01
Synthetic and RUM aren't competitors. They answer different questions.
Synthetic gives you a controlled lab to isolate problems before users see them. RUM tells you what users actually experience in the wild. The teams getting real value run both, and use each purposefully.
02
Core Web Vitals are a starting point, not the whole story.
LCP, INP, and CLS give you a shared vocabulary and a baseline. But the Nike walkthrough showed how a passing-ish LCP score can still hide a broken loading experience. Treat Core Web Vitals as the entry door to diagnosis, not the verdict.
03
Competitive benchmarking is the synthetic use case most teams underuse.
You can't put RUM on your competitor's site. Synthetic lets you stack film strips, waterfalls, and metrics side by side against the sites your customers compare you to. It's also the fastest way to make the business case for performance work internally.
04
Test where your users are, when your users are there.
The most common mistake is testing once a day from one location on a homepage. Align test cadence (every 3–6 hours), geography, and page types to your actual RUM data. Homepage benchmarking is mostly vanity.
05
The biggest performance wins usually live in JavaScript and third parties.
Across all three teardowns — Nike, Amazon, Lowe's — the recurring villain was JavaScript blocking the main thread, often from third parties whose budgets nobody is auditing. Profile your long tasks and have a real conversation with your third-party vendors.
In this session
Tammy Everts
Sr. Director, Community
Cliff Crocker
VP of Product