Prevent Cache Limit from Causing Application Deployment Failure
Once the cache has been filled, all subsequent application deployments fail until the cache self-cleans. In CAS.log you can see the client refusing to download the content because the cache, not the actual disk, is full.
This strikes me as a non-optimal design choice if the goal is to successfully install applications. If I want to install an application, I do not want it to fail because of an artificial limit that, until very recently, was set at the time of client install. Most of the time the cache clears within 24 hours and there is no problem, but there are two specific and prevalent cases where there is.
Task Sequences: During OSD we want every app and every update applied before the user gets the machine, and we want it done quickly. It is very easy to exceed a limit that is ideal day-to-day. This has forced SCCM admins either to abandon the application model during the TS (a common MVP suggestion) or to write custom scripts that temporarily increase the cache. Neither of these is ideal.
Large Applications: Some applications (e.g. CAD suites) are simply enormous (20+ GB) and, all on their own, will exceed what an organization wants to reserve for cache. Again, admins are either abandoning the app model entirely for these or running temporary cache-modification scripts. This simply shouldn't be necessary for a system designed to deploy applications.
So, what to do? One option would be to implement a rolling (first in, first out) cache: when cache space is needed, the oldest non-persistent content for a compliant application is removed to make room. If the new app's content is larger than the cache itself, simply don't cache it. The only hard limit should be the amount of free disk space to reserve before the SCCM client refuses to download content of any type.
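The eviction rule proposed above can be sketched as pseudocode; this is purely an illustration of the suggested policy, not how the SCCM client actually works, and the field names (`size`, `persistent`, `app_compliant`) are hypothetical:

```python
from collections import OrderedDict

def make_room(cache, needed_bytes, cache_limit_bytes):
    """Evict the oldest non-persistent content for compliant apps
    (first in, first out) until `needed_bytes` fits in the cache.

    `cache` maps a content ID to a dict with hypothetical fields
    'size', 'persistent', and 'app_compliant'; entries are kept in
    download order, oldest first. Returns True if the new content
    can be cached, False otherwise.
    """
    # Content larger than the whole cache is simply not cached.
    if needed_bytes > cache_limit_bytes:
        return False

    used = sum(item["size"] for item in cache.values())
    # Walk from oldest to newest, evicting only eligible entries.
    for cid in list(cache):
        if used + needed_bytes <= cache_limit_bytes:
            break
        item = cache[cid]
        if not item["persistent"] and item["app_compliant"]:
            used -= item["size"]
            del cache[cid]
    return used + needed_bytes <= cache_limit_bytes
```

For example, with a 10 GB limit holding 6 GB of old non-persistent content and 2 GB of persistent content, a 5 GB download would evict only the old content, while an 11 GB download would be passed through uncached.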
TL;DR: Stop breaking application deployment just because the cache is full. No one wants to troubleshoot app failures.
How about cleaning the cache at the moment an install needs more space? When a new machine is set up, many software titles are installed and the cache runs out of space even though 99% of its content is marked as deletable. (Adobe Creative Suite products are a problem here.)
Blake Erwin commented
+1 for the idea of deployment setting that allows us to ignore cache limit for a specific application.
Additionally, the client cache size should allow more flexibility and higher limits. If I set client settings to a cache limit of 100 GB, I notice that it actually sets the limit to 99,999 MB. The ability to increase the cache size further would be useful.
L U commented
I like Zeb's ideas. Also, let the persistence time be configurable in hours or even minutes, not just days, from when the install completes. Allow 0 minutes for instant removal after the final detection test passes.
Zeb Smith commented
I'd love to have a setting that allows the cache to use disk space until x% free space remains, and another to tune how long content persists in the cache.
The first could be system-wide and the second per Application.
That way, if I have a gigantic application to roll out, we can let it use up a large portion of disk for a short amount of time, and then shrink itself back down after a completed install.
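Zeb's two knobs could be modeled as two simple checks; this is a policy sketch only, with made-up function and parameter names, not anything the SCCM client exposes:

```python
def may_download(total_bytes, free_bytes, content_bytes, min_free_pct):
    """System-wide rule: allow a download only if at least
    `min_free_pct` percent of the disk would remain free afterwards."""
    free_after = free_bytes - content_bytes
    return free_after >= total_bytes * min_free_pct / 100

def eligible_for_removal(minutes_since_install, persist_minutes):
    """Per-application rule: content may be removed once the
    configured persistence window has elapsed after a successful
    install (0 means remove immediately)."""
    return minutes_since_install >= persist_minutes
```

Under this model a gigantic application could consume disk right up to the free-space floor, then become removable as soon as its (short) per-app persistence window expires.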
Eric Van Boven commented
We have seen this so many times at our school. Cannot believe SCCM operates like this currently.
Kevin Street commented
I would also like to suggest an option where, during deployment of an application, you can tick a box along the lines of "Temporarily increase the client cache size to accommodate this application", so that applications bigger than the usual cache size install without error.
L U commented
This is a great case for allowing Applications to deploy from distribution points ... See https://configurationmanager.uservoice.com/forums/300492-ideas/suggestions/8875516-allow-deployments-using-application-model-to-be-in
... and please would you add your votes if you haven't already? Thanks!