So I stumbled upon this message in Google PageSpeed Insights: the page I am optimising has a lot of unused CSS.
And I learned that Chrome offers a nice, easy way to see which CSS rules from the stylesheet are not applied on the page (the Coverage report), but there is no equally easy way to remove those unused styles on a per-URL basis.
Of course, there are some CLI tools that can scan your source files for CSS selectors, but in my opinion, doing it as a service would be more convenient.
It's 2020 and we really shouldn't ship the styles for an entire website to EVERY web page. Of course, browsers nowadays can cache the styles, but why not deliver only the CSS rules that the current page actually needs?
So this is why I decided to start https://splitcss.com
It will be an API that returns only the CSS a given URL actually needs.
The consumer can pass the URL where the HTML is, as well as the CSS URLs.
The API will then fetch the HTML, wait for the JavaScript to execute and then scan for unused CSS selectors.
Lastly, the response will be cached and delivered to the consumer as either JSON or plain CSS, which they can load instead of loading the styles for the entire website.
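To make the flow above concrete, the request a consumer sends might look something like this. The endpoint and field names here are purely my guesses for illustration, not a documented SplitCSS contract:

```javascript
// Hypothetical request payload for such an API: the page URL whose
// rendered HTML should be scanned, plus the stylesheets to strip down.
// All names are illustrative assumptions, not a real SplitCSS schema.
function buildRequest(pageUrl, cssUrls) {
  return {
    url: pageUrl,          // page to render (JS executed) and scan
    stylesheets: cssUrls,  // CSS files to reduce for that page
    format: 'css',         // or 'json'
  };
}

const payload = buildRequest('https://example.com/pricing', [
  'https://example.com/assets/app.css',
]);
console.log(JSON.stringify(payload));
```

The consumer would POST something like this and get back only the rules the page uses.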
What do you guys & girls think? Would you find it useful?
I'd love to hear your thoughts.
A couple of concerns, the first being: what benefits does this bring over my current CLI implementation? IMO, setting up a flow where I now have to parse your JSON response or load the new CSS and deal with all that sounds more complicated than just having my build process handle it for me.
The next concern is speed. If we're after the small gains from removing dead CSS, then the latency of hitting your server, having it do the work, and then parsing the response is likely to take longer. Even with caching, there are some things that either seem slower or require a lot more integration work, going back to point 1.
Lastly, I'd also be concerned about the correctness of the response. In particular, situations where a style isn't used on initial page load, but might be after a user action. I'm not sure how your service would handle this type of situation.
Not trying to bash your idea, but as a dev those are the few questions I would have that would need to be answered before I'd consider using your service.
Hey Peter, thanks for your feedback, I really appreciate it!
The benefit over the CLI implementation is that it would be able to fully execute the JS (with Headless Chrome) before searching for any CSS selectors.
The speed gains would largely depend on how your CSS is written in the first place. If your app already has some code-splitting technique in place, then a service like this probably would not be very beneficial.
Whether the response needs to be parsed depends on how you intend to use the service. If you decide to have a proxy that caches the response on your end, it will be almost as fast as if your own server generated the CSS.
The last concern is particularly interesting. I haven't thought it through thoroughly, but I think as long as server-side rendering is not involved, your classes should be somewhere in the HTML or in the JS. That said, if you build your selectors in JavaScript with string concatenation, e.g. "btn" + " success", no tool is able to figure that out.
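To illustrate why concatenation defeats any scanner: the full class name never appears as a literal in the source, so a textual search cannot find it, even though the element really receives it at runtime.

```javascript
// The class name is assembled at runtime, so no literal "btn success"
// exists anywhere in the source text being scanned.
const source = `el.className = "btn" + " success";`;

// A naive scanner looking for the full class string finds nothing...
const naiveScanFindsIt = source.includes('btn success'); // false

// ...yet at runtime the element really does get exactly that class.
const runtimeClassName = 'btn' + ' success'; // "btn success"

console.log(naiveScanFindsIt, runtimeClassName);
```

This is the case where even executing the JS in Headless Chrome only helps if the code path that builds the class actually runs during the scan.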
I don't claim this would be the silver bullet for every web app - there are many ways to make a website and also I don't expect it to be perfect science. But if it works decently in many cases, then it should be fine.
I really appreciate your feedback - it makes me think about the whole thing in greater detail. I will make sure I explain and document it well on the website.
Hi Vlad. I like the idea. I'm new to web development, but I would definitely use your API. Btw, I like your page. Could you please tell me what tools you used to develop it?
Hey Ozzy, thanks, I am glad you like it! And welcome to web development!
I used PHP (Laravel) to build the web page. The actual API will be built with NodeJS and it will be using Headless Chrome.
I used Sublime Text as my code editor and designed it in Sketch.
I will definitely let you know as soon as it's live!
It's 2020 and most websites are bloated as hell because nobody cares. So I see little market need, as internet speeds are fast enough to hide loading an unnecessary 3KB gzipped CSS file.
Also, your description of the problem sounds to me like you do not understand how the technology works. If I use versioned filenames and a proper asset pipeline, then my CSS file will be called something like merged-123.min.css.gz, and whenever the content changes, so will the number. That means I can use a far-future header to have it cached on pretty much every ISP's proxy on the planet, as well as on the computer of any previous visitor. Having the file stored with the ISP (meaning 30+ms less latency) or on the user's computer (meaning no latency) is much, much faster than any API I could use, even if my file is 10x the size of your file.
Your returning CSS directly would be a cross-origin request and would break for many users, be blocked by Firefox privacy settings, and be problematic with the GDPR. So I'd need to proxy your request, thereby doubling its latency. Plus, your CSS files are un-cacheable, a big no-no for SEO.
And for less technical people, there are many free WordPress plugins to optimize their CSS.
Hey there, thank you for your feedback!
Websites are indeed bloated in 2020, and that's a pity, because software is a rare industry where suboptimal products are tolerated.
Nonetheless, I think we as software engineers should put some effort into improving the situation. "Just throw better hardware at it" shouldn't be the solution to most performance problems.
It really depends on how your web app is architected. I agree there are cases where putting an API on top would not bring much improvement, but it's not fair to assume that a lot of websites wouldn't benefit from using it.
Everything you described about transferring the data via your server can be done via mine as well. Returning a gzipped version of the file is also possible, so it can be stored within the network or cached too. Just because it's served from a different domain doesn't necessarily mean it cannot be cached, or that it violates CORS (or the GDPR).
If you decide to load the CSS via JavaScript after the page loads and the server includes the CORS header, it shouldn't be a problem.
Also, proxying can be as little as invoking the equivalent of file_get_contents() plus some caching technique, so I don't see an issue with latency here if you decide to write a proxy.

In my eyes, your SEO would improve if devs stopped feeding browsers unneeded CSS rules.
I can't imagine Google would be unhappy seeing less CSS on websites.
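A proxy with caching along those lines really can be tiny. Here is a minimal sketch of the caching core in Node; the upstream fetcher is a pluggable stand-in, nothing here is SplitCSS-specific:

```javascript
// Minimal caching-proxy core: wrap any "fetch the CSS" function so that
// repeat requests for the same URL are served from an in-memory cache
// instead of hitting the upstream service again.
function withCache(fetchCss) {
  const cache = new Map();
  return async function cachedFetch(url) {
    if (!cache.has(url)) {
      cache.set(url, await fetchCss(url));
    }
    return cache.get(url);
  };
}

// Example with a fake upstream that counts how often it is actually hit.
let upstreamHits = 0;
const fakeUpstream = async (url) => {
  upstreamHits += 1;
  return `/* css for ${url} */`;
};

const getCss = withCache(fakeUpstream);
```

In production the cache would more likely be Redis or plain HTTP caching with Cache-Control headers, but the idea is the same: the upstream API is hit once per URL, not once per visitor.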
I had been looking for this type of service which could provide only the CSS used in a page.
This will be very beneficial for my use case. I am using a heavy Bootstrap UI kit for a side project (landing page, admin dashboard, etc.). The UI kit CSS is about 20K lines of code, which is essential for the admin dashboard pages. The UI kit also includes landing page templates in the same CSS file.
Now when I try to load just the landing page, on the first load I will be fetching about 350 KB of unused CSS.
Very bad load time.
I requested beta access from your website. Good luck.
Thank you for sharing this, I really appreciate it!
I'm excited about the CSS savings as well!
I will definitely let you know when the API is live!
I’m a bit unclear on how this works and how it’s better than a CLI. Does the CSS file loaded by my users point to SplitCSS, which then does all that processing before returning CSS to the user’s browser? Or do I use this as part of a build step somehow?
Hey Gabe, thanks for your input and great question btw!
The difference from the CLI tools is that the API will run Headless Chrome for that particular URL (so the JS will be fully executed) before attempting to find the unused CSS, and it will cache the result in the end.
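A sketch of that flow using Puppeteer's CSS coverage API — my guess at the mechanism, not SplitCSS's actual implementation. Coverage returns, per stylesheet, the full text plus the character ranges that were actually used, and the used CSS can be sliced out of those ranges:

```javascript
// Slice the used CSS out of one coverage entry
// ({ text, ranges: [{ start, end }, ...] }), the shape returned by
// Puppeteer's page.coverage.stopCSSCoverage().
function extractUsedCss(entry) {
  return entry.ranges.map((r) => entry.text.slice(r.start, r.end)).join('\n');
}

// Collect coverage for a URL with Headless Chrome (not run here;
// requires `npm install puppeteer`).
async function usedCssFor(url) {
  const puppeteer = require('puppeteer');
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.coverage.startCSSCoverage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const entries = await page.coverage.stopCSSCoverage();
  await browser.close();
  return entries.map(extractUsedCss).join('\n');
}
```

Waiting for network idle is what lets JS-injected class names count as "used" before coverage is collected.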
On the consumer side, there will be two ways to go.
You can either load the resulting CSS response in a <link> / <style> tag, or
have a proxy which might, for example, do additional caching. In that case, the browser will basically load it from your origin domain, not from the splitcss domain.
Gotcha. What happens if a particular page doesn’t utilize a CSS selector until after a user action? Wouldn’t your system strip that CSS out? As far as I know, this is why CLI tools search code for used classnames.
If the page doesn't utilize a CSS selector until after a user action, the class should still be there to be found, either in the HTML or in the JS, unless the JS concatenates strings to build classes (e.g. "btn" + " success") or server-side rendering is involved / HTML partials are returned.
The service will respect CSS classes that are marked to be ignored. So in this case, the developer still has a way to make sure certain classes are included.
The idea is good but I think it's overkill for such a simple task.
Usually, the CSS is not THAT big unless you use the WHOLE Bootstrap CSS file.
Maybe you can put a benchmark/performance comparison before and after using your service (CLI vs your API) to validate your proposed solution; if the margin is significant, people might be interested in your service.
Thanks Irfan, good idea! I will add a performance comparison, that definitely makes sense.
For me, it makes little sense, because PageSpeed is mainly focused on speed, and using an API will only add loading time. Also, in case of an error, the website would be completely unusable and ugly without CSS.
Good luck
Hey Tim, thanks for sharing your thoughts!
CSS resources are generally pretty static, so you can cache the API response on your end, and there shouldn't be any increased loading time.
The API will wrap the call in try {} catch {}, so in case of an error it would simply return the input, and I will get a notification to fix it asap. The consumer code can actually do the same, so the end user would not see an unstyled website.

I don't think I'm in your target audience, because I'm crazy about load speed. And as you said, CSS resources are static, so I'm already using a library that does the job.
That's perfectly fine! Most libraries do a good job by performing a search within your files.
The difference would mainly be that the API will use a Headless Chrome instance and then search the HTML/JS.
But again, every case is different and the dev knows what's best for their app.
Not everything needs to be a service. I would rather focus on the root problem of finding unused CSS and helping devs clean it up before shipping to production, as opposed to dealing with the complexities of cross-domain/cross-geography CSS hosting.
Sure, that would be the best engineering approach.
How will that work with dynamic changes? Like a class that’s activated based on a user action or event? The CSS would not be there, since it was not used before, no?
Or is it something you need to request the api for every refresh?
In both cases, I don't really see the need. The first case would break the page and the second would add latency.
Also, what happens if the API has a bug and didn’t return anything?
To me, this introduces more hassle than help but I may be missing something.
If your class names are within the HTML and JS, they should be discoverable.
But if your app returns an HTML partial with possibly new CSS classes, you can mark those classes in your stylesheet to be ignored by the service and included anyway.
Caching the response (which will also be cached by the API) should help with the latency.
If the API throws an exception, it will return the original input and I will receive an email to fix it. The consumer code could/should do the same by using try {} catch {}.

Again, if you think your apps are structured in a way that they don't need this, that's also fine. It's not a silver bullet for every web app.
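The consumer-side fallback described above might look like this; fetchCss is a stand-in for however the stylesheet is actually loaded, and the URLs are hypothetical:

```javascript
// Load the optimized CSS, but fall back to the full stylesheet if the
// optimizing service errors out, so users never see an unstyled page.
async function loadStyles(fetchCss, optimizedUrl, fallbackUrl) {
  try {
    return await fetchCss(optimizedUrl);
  } catch (err) {
    // Service failed: serve the original, unoptimized CSS instead.
    return await fetchCss(fallbackUrl);
  }
}
```

The page stays styled either way; the only thing lost on an error is the size saving.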