You have a bit of a misunderstanding: 400 (Bad Request) or 404 (Not Found) will not result in an HttpRequestException, unless you call the EnsureSuccessStatusCode method explicitly.

AddTransientHttpErrorPolicy will check the following:
- HttpRequestException (network failures)
- 5XX status codes (server errors)
- 408 status code (request timeout)

So as you can see, neither 400, 404, nor 429 (Too Many Requests, the typical response code in case of back-pressure) will cause your Polly policy to be triggered, unless you explicitly call EnsureSuccessStatusCode.
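If you do want 429 to trigger retries as well, you have to opt it in yourself. Here is a minimal sketch of that, assuming the Microsoft.Extensions.Http.Polly package; the "catalog" client name and the retry numbers are made up for illustration:

```csharp
using System;
using System.Net;
using Microsoft.Extensions.DependencyInjection;
using Polly;

public static class ResilienceConfig
{
    public static IServiceCollection AddResilientClient(this IServiceCollection services)
    {
        services.AddHttpClient("catalog") // hypothetical client name
            .AddTransientHttpErrorPolicy(policy => policy
                // HttpRequestException, 5XX and 408 are covered by default;
                // 429 has to be opted in explicitly
                .OrResult(response => response.StatusCode == HttpStatusCode.TooManyRequests)
                .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));

        return services;
    }
}
```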
Let's suppose we have a REST service which exposes removal functionality for a given resource (addressed by a particular URL and via the DELETE HTTP verb). From the consumer's point of view this removal can end up in one of 3 different states: Succeeded, Already done, or Failed.

Succeeded: You can find several arguments on the internet about which status code is the correct one for success. It can be either 200 (OK) with a body, 204 (No Content) without a body, or 202 (Accepted) if the deletion is asynchronous.

Already done: This state can occur when you try to delete an already deleted item. Without soft deletion it is hard to tell whether the given resource ever existed or was never part of your system at all. If you have soft deletion, then the service could return 404 for an already deleted resource and 400 (Bad Request) for an unknown resource.

Failed: Whenever something fails during request processing it can be treated as a temporary or a permanent failure. If there is a network issue then it can be considered a temporary/transient issue (this can manifest as an HttpRequestException). If there is a database outage and the service is able to detect it, then it can fail fast and return a 5XX response, or it can try to fail over. If there are too many pending requests then the service may throttle them and use back-pressure to shed the load; it might return 429 (Too Many Requests) along with an appropriate Retry-After header. Permanent errors, like a service that has been shut down forever or active refusal of network connection attempts, need human intervention to fix.

Whenever we are talking about the retry pattern we need to consider the following:
- The potentially introduced observable impact is acceptable.
- The operation can be redone without any irreversible side effect.
- The introduced complexity is negligible compared to the promised reliability.

The second criterion is usually referred to as idempotency. It says that if you call the method/endpoint multiple times with the same input then it should return the same output without any side effect. If your service's removal functionality can be considered idempotent then there is no such state as Already done: if you call it 100 times then it should always return with "yepp, that's gone". With this in mind it might make sense to return either 204 or 404 in the case of idempotent deletion.

Whenever we are talking about a strategy, to me it means a chain of resilience policies: if a former policy could not "fix" the problem then the latter would try to do so (so there is a policy escalation).

Client-side: You can have a timeout for each individual request, and you can apply a retry policy in case of temporary/transient failures. You can also define a global timeout for all your retry attempts. Or you can apply a circuit breaker to monitor successive failures and back off for a given period of time if the service is deemed overwhelmed or malfunctioning. (The first sketch below shows such a chain.)

Server-side: You can use a bulkhead policy to control the maximum number of concurrent calls; if the threshold has been exceeded then you can start to throttle requests. (The second sketch below shows this.)

My 2 cents: applying a single resilience policy on the client side might not be enough to have a robust and resilient system.
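To make the client-side escalation concrete, here is a minimal sketch of such a chain with Polly; the timeouts and retry counts are made-up numbers, and note that this breaker reacts to exceptions only, not to 5XX results:

```csharp
using System;
using System.Net.Http;
using Polly;
using Polly.Timeout;
using Polly.Wrap;

public static class ClientPolicies
{
    public static AsyncPolicyWrap<HttpResponseMessage> Build()
    {
        // Innermost: timeout for each individual request attempt.
        var perTryTimeout = Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(2));

        // Circuit breaker: after 5 successive failures, back off for 30 seconds
        // because the downstream service is treated as overwhelmed or malfunctioning.
        var circuitBreaker = Policy<HttpResponseMessage>
            .Handle<Exception>()
            .CircuitBreakerAsync(5, TimeSpan.FromSeconds(30));

        // Retry on transient failures, including per-try timeouts.
        var retry = Policy<HttpResponseMessage>
            .Handle<HttpRequestException>()
            .Or<TimeoutRejectedException>()
            .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(attempt));

        // Outermost: a global timeout capping all retry attempts together.
        var globalTimeout = Policy.TimeoutAsync<HttpResponseMessage>(TimeSpan.FromSeconds(10));

        // Escalation chain, outermost first: if an inner policy cannot "fix"
        // the problem, the next policy outward gets its chance.
        return Policy.WrapAsync(globalTimeout, retry, circuitBreaker, perTryTimeout);
    }
}
```

You would execute a request through the wrap, e.g. policies.ExecuteAsync(ct => httpClient.GetAsync(url, ct), CancellationToken.None); Polly's timeout is optimistic by default, so the delegate has to honor the supplied cancellation token. With HttpClientFactory you could instead attach each policy via AddPolicyHandler.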
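And on the server side, a minimal sketch of the bulkhead plus back-pressure idea; the handler shape and the numbers are illustrative, not any specific framework's API. It also treats deletion as idempotent, returning 204 whether or not the resource still existed:

```csharp
using System;
using System.Threading.Tasks;
using Polly;
using Polly.Bulkhead;

public class RemovalHandler
{
    // At most 10 concurrent deletions, 20 more may queue; the rest are rejected.
    private static readonly AsyncBulkheadPolicy Bulkhead =
        Policy.BulkheadAsync(maxParallelization: 10, maxQueuingActions: 20);

    public async Task<(int StatusCode, int RetryAfterSeconds)> DeleteAsync(string resourceId)
    {
        try
        {
            await Bulkhead.ExecuteAsync(() => RemoveFromStoreAsync(resourceId));
            return (204, 0); // No Content: the resource is gone (idempotent success)
        }
        catch (BulkheadRejectedException)
        {
            // Too many pending requests: shed load and tell the client when to retry.
            return (429, 5); // Too Many Requests + Retry-After: 5
        }
    }

    // Stub for the real data store call; removing an absent resource is a no-op.
    private Task RemoveFromStoreAsync(string resourceId) => Task.CompletedTask;
}
```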