NICE AND ALL BUT..
When you use a third-party API, like fetching the weather forecast from `https://api.weather.com/$city`, every call travels across the internet to the API's server, retrieves the data, and then travels all the way back. This round trip, although fast in internet terms, adds up, introducing latency and cost with each call.
By strategically caching data close to your servers, you can dramatically reduce both the latency experienced by your end users and the cost incurred by continuously hitting the upstream API.
By swapping your API's URL from `api.weather.com/$city` to `cachething.com/api.weather.com/$city`, a proxy is automatically created for your requests to the original API. The first time around, it fetches the forecast just like before. But then a copy of this data is gradually moved closer to your origin servers so that subsequent requests are served faster.
The next time you request the same URL path, cachething checks its edge copy first. If there's a match (and the data is still fresh), it serves this local copy instantly, reducing wait time and latency significantly.
By serving a cached copy, you also save on the costs of calling the upstream API. This is especially useful when you have a high volume of requests or when the upstream API charges per call.
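Here is a minimal sketch of the URL swap described above, written in TypeScript. The city value, the use of `fetch`, and the JSON response body are assumptions made purely for the example.

```typescript
// Minimal sketch of the URL swap: same path, prefixed with the proxy host.
async function getForecast(city: string): Promise<unknown> {
  // Before: a direct call to the upstream API on every request.
  // const res = await fetch(`https://api.weather.com/${city}`);

  // After: the proxy forwards the first request upstream, then serves
  // subsequent requests for the same path from an edge copy.
  const res = await fetch(`https://cachething.com/api.weather.com/${city}`);
  return res.json(); // assuming the forecast is returned as JSON
}

getForecast("amsterdam").then(console.log);
```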
FEATURES
Need fresh data every 30 seconds? Just add `x-cache-ttl: 30s` to your request headers. Want the best of both worlds, with immediate responses and background data updates? `x-cache-mode: stale-while-revalidate` is your friend. And for those times when you need the absolute latest data, bypass the cache entirely with `x-cache-mode: skip-cache`.
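A sketch of how these headers could be set on a proxied request; the header names and values come from the list above, while the upstream URL and JSON body are example assumptions.

```typescript
// Sketch: controlling cache behaviour via the documented request headers.
async function getForecastWithCaching(city: string): Promise<unknown> {
  const res = await fetch(`https://cachething.com/api.weather.com/${city}`, {
    headers: {
      "x-cache-ttl": "30s",                     // refresh the cached copy every 30 seconds
      "x-cache-mode": "stale-while-revalidate", // answer from the edge, revalidate in the background
      // "x-cache-mode": "skip-cache",          // uncomment to bypass the cache entirely
    },
  });
  return res.json();
}
```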
CacheAnything.io goes beyond just caching. Need to clear the cached data for a fresh start? Hit `/__purge` for a clean slate, or use `/__purge/$id` to remove specific entries. These added paths, like spells in a wizard's book, give you control over the cache like never before.
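A quick sketch of calling those purge paths. Only `/__purge` and `/__purge/$id` come from the description above; the host layout and the HTTP method used here are assumptions.

```typescript
// Sketch: clearing cached entries via the purge paths mentioned above.
async function purgeAll(): Promise<void> {
  await fetch("https://cachething.com/__purge", { method: "POST" }); // method is an assumption
}

async function purgeEntry(id: string): Promise<void> {
  await fetch(`https://cachething.com/__purge/${encodeURIComponent(id)}`, { method: "POST" });
}
```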
Optimal caching is hard. We help you take the guesswork out of the equation. Monitor the dashboard to get insights into how your APIs are used and how to best tweak TTL values.
Only 2xx responses are cached, reducing the risk of serving stale or incorrect data. If the API you are calling is down, we don't cache the error response; instead, we serve the previously cached data, improving availability for your application.
IS IT FAST?
With over 300 edge locations at your service, we ensure your data is replicated where it matters most, bringing it closer to your users worldwide. This network of edges acts like a global web, catching and storing your data at strategic points, so every API call feels like it's coming from next door, not from across the globe.
Start caching your API calls in 1 minute. No credit card required.
Do you have questions or need help with a specific use case? Let's talk!
Get in touch