Prevent Cache Limit from Causing Application Deployment Failure
Once the cache has filled, all subsequent application deployments fail until the cache self-cleans. In CAS.log you can see the client refusing to download the content because the cache, not the actual disk, is full.
This strikes me as a non-optimal design choice if the goal is to successfully install applications. If I want to install an application, I do not want it to fail because of an artificial limit that, until very recently, was set at the time of client install. Most of the time the cache clears based on the ‘Minimum duration before cached content can be removed’ client setting and there is no problem. However, there is one specific and prevalent use case where there is: stupidly large applications.
There are some applications (ex. CAD) that are just incredibly large (20+ GB) and will, all on their own, exceed what an organization wants to reserve for cache on a day-to-day basis. Admins are either abandoning the app model entirely for these apps or running temporary cache-modification scripts. That simply should not be necessary in a system designed to deploy applications.
So, what to do? One option would be to simply not cache apps that exceed the limit, and/or delete them immediately upon successfully detecting the app. Another option would be to do what Software Updates do: allow them to violate the cache limit. Maybe that becomes a deployment option: “Allow deployment to violate cache limits.” The only hard limit that should be allowed to break a deployment is the amount of free disk space itself.
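To make the ask concrete, here is a minimal sketch of the proposed decision logic, not the actual ConfigMgr client behavior: content that fits the cache is handled normally; content that exceeds the limit (or a deployment that opts in via a hypothetical "ignore cache limits" flag) still downloads as long as the disk itself has room, and just isn't retained in the cache afterwards. All names and sizes here are illustrative assumptions.

```python
def can_download(content_size_mb, cache_limit_mb, cache_used_mb,
                 free_disk_mb, ignore_cache_limit=False):
    """Return (allowed, keep_in_cache) for a pending deployment.

    Hypothetical policy sketch: the only hard stop is actual free disk
    space, never the configured cache size.
    """
    if content_size_mb > free_disk_mb:
        # The one legitimate hard failure: the disk cannot hold the content.
        return (False, False)
    if cache_used_mb + content_size_mb <= cache_limit_mb:
        # Fits within the cache limit: today's normal, working path.
        return (True, True)
    if ignore_cache_limit or content_size_mb > cache_limit_mb:
        # Over the limit: download anyway, then purge once the app
        # detection succeeds, instead of failing the deployment.
        return (True, False)
    # Current behavior being complained about: refuse and fail.
    return (False, False)
```

With a 10 GB cache limit, a 25 GB CAD package would download and install (but not stay cached) instead of failing outright, while a machine with only 20 GB of free disk would still correctly refuse it.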
TL;DR: Stop breaking application deployment just because the cache is full. No one wants to troubleshoot app failures.
Bryan Dam commented
Oh, add-on idea: if the ‘ignore limits’ option is selected, then delete that content first. This could help prevent large apps from wiping out the rest of the cache.
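That add-on amounts to an eviction ordering: when the cache needs space, purge content that was downloaded under the ‘ignore limits’ option before touching anything else, then fall back to oldest-first. A tiny sketch of that ordering, under the assumption that each cache entry is a dict with an `id` and an `age_days` (hypothetical field names, not the client's real cache schema):

```python
def purge_order(cache_entries, ignore_limit_ids):
    """Return cache entries in the order they should be evicted.

    Hypothetical policy: content that violated the cache limit goes
    first, so a giant one-off app cleans up after itself instead of
    forcing other cached content out; remaining entries go oldest-first.
    """
    return sorted(
        cache_entries,
        key=lambda e: (e["id"] not in ignore_limit_ids,  # False sorts first
                       -e["age_days"]))                  # then oldest first
```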