Middleware with VCL?

Comments

4 comments

  • Andrew Betts

    Hi @darez81,

    It seems like you are describing the default behaviour of Fastly. If you are already requesting your website through our network, then all you need do is ensure that the assets you want to cache have appropriate Cache-Control headers to tell us whether we can cache them, and for how long.

    More info: https://docs.fastly.com/guides/tutorials/cache-control-tutorial
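
    For example, a response header like this one (the max-age value is only an illustration) tells any cache, including Fastly, that it may store the response and keep serving it for up to a day:

        Cache-Control: public, max-age=86400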

  • Daniele Tassone

    I don't have control over the origin, so I want to create that logic in Fastly: 1) if it is .js => cache it, 2) if it is .html => cache it, 3) if it is not .js or .html => don't cache it.

    I know how Fastly works; the problem I see is how to create the logic in Fastly to cache content depending on some condition, because I don't have control over the origin.

  • Andrew Betts

    Oh, I see, now I understand what you are trying to achieve. Yes, you can indeed modify the TTLs of cache objects in Fastly using edge logic, overriding the cache control directive from your origin server. To do this you just need to set beresp.ttl to a time of your choice.

    Rather than making this decision based on a file extension in the URL, it's better to do this based on the content-type of the response. I made a demo to help you try this out:

    https://fiddle.fastlydemo.net/fiddle/78c50399
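
    To give a flavour of it, here is a minimal sketch of the same idea in VCL (not the exact contents of the fiddle; the content types and the one-hour TTL are just examples):

        sub vcl_fetch {
        #FASTLY fetch
          # Cache HTML and JavaScript responses for an hour, regardless of
          # what the origin's Cache-Control header says; pass everything else.
          if (beresp.http.Content-Type ~ "^(text/html|application/javascript|text/javascript)") {
            set beresp.ttl = 1h;
            return(deliver);
          }
          return(pass);
        }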

  • Andrew Betts

    In short, yes; however, return(pass) means different things depending on where you use it.

    When used in vcl_recv, returning pass will directly move the flow to the fetch stage without performing a lookup. This means that when the backend object is fetched, there is no associated cache address, so regardless of what TTL you give it, it will not be saved in the cache. Next time that URL is requested, the same thing will happen again, assuming you continue to pass in recv.

    If you return lookup (the default) from recv, which will also happen if you have no custom recv code, then a cache lookup will be performed, and if there is no hit, we will create a cache entry in advance of the backend fetch. Therefore, by the time we get to fetch there is already an entry in the cache. If you return pass from fetch in this situation, we'll mark the new cache entry as a 'hit for pass' and save it. Next time the URL is requested, the lookup will hit that cache entry, and because it is marked for pass, we'll perform a backend request anyway.

    This difference may not be worth remembering in your case, but in edge cases it can be relevant.
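
    A small sketch to make the contrast concrete (the URL prefix and the Cache-Control check are only illustrative conditions):

        sub vcl_recv {
        #FASTLY recv
          # Passing here skips the lookup entirely, so the response is
          # never stored, whatever TTL it is given later.
          if (req.url ~ "^/account/") {
            return(pass);
          }
          return(lookup);
        }

        sub vcl_fetch {
        #FASTLY fetch
          # Passing here happens after the lookup has already reserved a
          # cache entry, so that entry is stored as a 'hit for pass' marker.
          if (beresp.http.Cache-Control ~ "private") {
            return(pass);
          }
          return(deliver);
        }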
