To avoid needing to close and re-open the console, I can move the folder to another folder, then back again and it then shows the right order.
But it locks up the console for a minute or so on each folder move (out, then back in) while it re-organises its internal guts.
1,884 votes · started · Admin Mark Silvey - ConfigMgr Product Team (Admin, System Center Configuration Manager) responded
I’m re-activating this one based on the fact that we don’t show the intrusive reboot countdown after the software is installed. We still have some work to do here to make this experience meet the requested behavior.
Anonymous (2019-01-11), wouldn't you want the updates' availability to be announced clearly, well before the deadline when they install; that is, make a long 'available' window before the deadline? Get them installed before your researchers commit to a long simulation. You really don't want the updates to be applied during the simulation and then sit pending restart for so long while the system is doing production work in which you have so much invested. Updates pending restart can cause crashes.
If you use a special collection for those machines with the long simulation workloads (deploy to them with long 'available' window with a later 'deadline') and use the main collection for the rest (with a shorter 'available' window to get the updates in sooner), then I think you'll get a better result.
To achieve that, we need noisy 'available' notifications.
PowerShell AppDeploy Toolkit has interesting smarts regarding notifications, not just for potential restarts, but managing user expectations around applications that must be closed during product installs. A truly integrated solution would look at the technical/user interaction of product install from end-to-end, not just at restart.
We have an Available period before the Deadline. We would like clearer messages at the time of Available, customizable to repeatedly and persistently inform users of our schedule and what to expect, to encourage co-operation for earlier, less painful compliance.
others reporting the same problem:
Yes please! (Commenting here because I'm out of votes. 10 votes are REALLY not enough.)
Dependencies - issue with revisions: when I remove a dependency, the depended-on application still shows a reference to the dependent application, but the link points to the previous (last-1) revision. Disconcerting and unhelpful.
Turning off Application Revisions would take this problem away.
Maybe this: if I create a sub-folder, I have to wait for the subfolder to appear in the GUI (takes 5-10 seconds, not intuitive), then select the folder before creating a new item. Now that I understand it, it's awkward. Needs a bit of work; for example, pre-selecting the new folder so new items can be created in it without the extra step of manually selecting it.
This is a BUG - just wanted to say that, as the existing operation effectively freezes up the user interface.
Run out of votes, but the slow 'edit membership' is a real problem for us too because our update group space has many automatic deployment rules being created all the time.
These suggestions (Deppentöter and Newman) would save us a lot of wasted time.
Like Zeb's ideas - and have the time persisting be in hours or even minutes, not just days, from when install completes. Allow 0 minutes for instant removal after final detection test is passed.
This is a great case for allowing Applications to deploy from distribution points ... See https://configurationmanager.uservoice.com/forums/300492-ideas/suggestions/8875516-allow-deployments-using-application-model-to-be-in
... and please would you add your votes if you haven't already? Thanks!
E.g. if we have an Office patch, we need to see the 'required' instances where it applies only to Windows 7 and not to the Windows 8 and 10 PCs. We need to know which machine/OS types a particular patch is 'required' on, so we don't deploy unneeded patches to machines of a particular OS.
We need to separate patches required for Windows 7 from those required for Windows 10; that is, differentiate updates needed only by our Windows 7 computers from those needed only by our Windows 10 computers. We want to do this through the search/filters in the Software Updates list. Our server team is starting to use SCCM, like Bill, and also needs to filter updates by target OS. We will create different Software Update groups for each and need to put only the right updates in each group.
I have really tried - having the application run a helper package to get its location (saved in a file) and using that source to install the huge Application - but it turns out that the WMI data holding the programs is not accessible to the SYSTEM account, so the script fails to find the helper package. A lot of work to find out that the guts are just missing in the SCCM client infrastructure.
We don't want to have to expand the cache size on our clients' small SSDs just to allow one or two unreasonably large packages to download before they install - costly in many ways. We have to use the package/program model for these applications that would otherwise work fine in the application model.
We can't use a single common share because our network is so widely distributed - we rely on SCCM distribution point replication to get the software close to the clients that will install it, and we're not willing to stand up alternative replication infrastructure that would only compete with it over our WAN.
At least could you let us distribute content as a Package to DPs, and give us a simple, reliable way to dynamically find the correct _local_ distribution point package contents with a script run on each client (eg powershell and/or WMI api), until a full solution can be implemented?
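As a stopgap along those lines, something like the following PowerShell sketch could run on each client to locate already-downloaded package content in the local CCM cache via WMI. This is an assumption-heavy example: the `root\ccm\SoftMgmtAgent` namespace and `CacheInfoEx` class are internal client plumbing that may vary between client versions, and `ABC00123` is a placeholder PackageID - treat this as an illustration of the kind of lookup we mean, not a supported API.

```powershell
# Sketch: find locally cached content for a given package on an SCCM client.
# Assumes the content has already been distributed and downloaded; class and
# namespace names are internal to the client and may change between versions.

param([string]$PackageId = 'ABC00123')   # placeholder PackageID

# CacheInfoEx rows describe items in %windir%\ccmcache
$cached = Get-WmiObject -Namespace 'root\ccm\SoftMgmtAgent' -Class 'CacheInfoEx' |
          Where-Object { $_.ContentId -like "$PackageId*" }

if ($cached) {
    # Location is the local folder holding the content, usable by a wrapper script
    $cached | Select-Object ContentId, ContentVer, Location
} else {
    Write-Warning "No cached content found for package $PackageId"
}
```

This only answers "where did the content land locally after download"; what we are actually asking for above is the reverse - resolving the nearest distribution point's share path so the client never has to cache the content at all.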
Having Application model able to install from distribution point share - would hit the mark so much better.
My vote +3 this week, but I've run out of votes. I really needed this last week, when deploying a suite of security applications that needed a restart between phases of the installation. If there are multiple restarts, this feature would need to re-evaluate after every restart. When I tested using Application deployment as it stands now, the uninstall worked, then the deployment just sat there. We cannot leave systems insecure - that is unacceptable. We require the process to keep going until it's done. Deploying via Task Sequence would lose Application management support.
We always use the feature for Software Updates, and would often use it for Application deployments.
This is terse. From the comments, can we assume we mean right-click on a device or device collection and select updates to exclude, from a list of deployed updates?
How would you track your exceptions down later when you want to clean up - will you be able to do it from the software update item?
Check out the new uninstall behavior in the 1804 Technical Preview.
Ah the good old AD software installation days.
But in SCCM, when an application is pushed to a user but installs in SYSTEM context, does it belong to the user or to the machine? If a corporate machine has a shared user history, we don't want a different user logging on to uninstall the software. We just want authorised users to be able to _run_ the software. We deal with this better with AppLocker and with App-V. We don't want a situation where software is installed, uninstalled, installed, uninstalled, installed ... as different people use the machine. We do, however, want to be able to pull all installations of the software when it is retired or superseded - even on machines where users are no longer using it and it would otherwise not be replaced by a user-targeted superseding application.
If the targeting was to a device and the device falls out of scope in AD / out of the query collection, then we would likely want the software pulled off the device. (provided it was not due to an error in collection hierarchy where the members were temporarily dropped and re-instated - does happen in operation but not often).
+ Advanced Group Policy Management (AGPM) integration?