I love working to a plan; I like to think through the best way to build something before implementation. However, this isn’t always possible, and in some situations it isn’t desirable. I’m not talking about always planning the full life-cycle of a product or project, but at least planning the first iteration, release, milestone or phase; we can of course use these labels interchangeably.
During my time at university I took the game development module, and my lecturer used to talk about the “fuzzy” period at the beginning of a project: the time when we haven’t fully decided what we want to create. This applies to any project. It is the period of confusion at the beginning, before we have fully dissected the problem; we don’t have a plan, we don’t know what tools to use, and sometimes we don’t even know where a successful project will take us. Luckily this last point is not always a problem, because for most projects we know precisely what the required result is. However, you can rarely determine in advance the most appropriate approach or tools for a project without first knowing the problem(s) intimately.
Sometimes people come to me half-way through a half-baked plan, facing multiple issues; this is a frustratingly common occurrence. I can handle my own lack of planning, but handling someone else’s means first figuring out what has been done so far.
We were recently approached by a customer facing a hard deadline enforced by the end of life of ITCM on the 31st of January 2016. They have three months to migrate to a new deployment architecture.
They have hired a consultant to sequence all the applications that can be sequenced; the remaining packages will be imported as they are. This work had already commenced, but when I asked whether they had applied a consistent naming convention, I was informed that they had not. Great.
This is both encouraging, because they have done something proactive, and frustrating, because they have implemented it without pre-planning; either way, it is not an insurmountable problem.
The first phase should be an application discovery and rationalisation exercise, to avoid wasting effort on applications that have overlapping functionality, are very old versions, or are simply no longer used within the organisation. You would be surprised how many applications can be excluded after this exercise. There are even scenarios where applications costing thousands of pounds in licence fees are simply not required.
This next phase is a little speculative, because I am not in possession of all of the facts and circumstances for this specific case, but for a mass migration of applications like this there are tools to facilitate the bulk of the grunt work. For instance, you could use InstallShield AdminStudio’s batch packaging tool to create a library of crude MSIs; these MSIs could then be imported into a tool like Citrix AppDNA (the customer already uses XenApp) or Dell ChangeBase, both of which can automatically generate App-V packages and highlight further issues.
However, we were engaged too late in the process to affect any of this so I am just venting.
The case against the manual method
This one is pretty easy: an SCCM administrator will spend an average of 3 hours importing and configuring each individual application.
Tasks, multiplied by 400 applications:
- Create the application in the SCCM console
- Customise the command line to include a log file (MSI only)
- Distribute the content to the distribution points
- Create the per application user collection with membership based upon an AD group
- Create the deployment to the new collection
- 400 applications x 3 hours = 1200 hours
- or 150 working days (at 8 hours per day)
- or 30 weeks
- or 7.5 months
Aside from the fact that this would be a horrendous experience for the administrator(s), it would be a massive waste of skilled resource.
Inspiration for Automation
David O’Brien has created a wonderful script to import MSI and legacy setup.exe applications based on an XML definition: here.
Deepak Singh Dhami has some useful examples for creating folders and moving applications into them here, and others for distributing the content to the distribution point, creating a collection and creating the deployment here and here.
Armed with this inspiration I put together a proof of concept to demonstrate the possibilities. You can see the successful POC script here.
However, PowerShell is not the only option: we can use the Configuration Manager SDK and C# to perform many of the same actions. You can see a basic example here, and this is also something that we will look at for this mini project.
Preparing for automation
In our environment applications are stored in folders according to vendor names as shown to the right, therefore we require the vendor name for each packaged application.
Given the absence of a naming convention we have a conundrum: I know from experience that we can extract MSI details using WMI queries, but it is a little trickier with App-V. We can extract information from the package (using something like this), but the application manufacturer is not stored as a separate entity within the package.
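As an illustration of the MSI side, the ProductName and Manufacturer properties can be read straight from an MSI’s Property table. The sketch below uses the Windows Installer COM object (an alternative to a WMI query, and one that doesn’t require the product to be installed); it is Windows-only, and the function name and example path are my own, purely for illustration.

```powershell
# Illustrative sketch: read a property (e.g. Manufacturer, ProductName) from an
# MSI's Property table via the Windows Installer COM API. Windows only.
function Get-MsiProperty {
    param(
        [Parameter(Mandatory)] [string] $Path,
        [Parameter(Mandatory)] [string] $Property
    )
    $installer = New-Object -ComObject WindowsInstaller.Installer
    # OpenDatabase mode 0 = open read-only
    $database = $installer.GetType().InvokeMember(
        'OpenDatabase', 'InvokeMethod', $null, $installer, @($Path, 0))
    $query = "SELECT Value FROM Property WHERE Property = '$Property'"
    $view = $database.GetType().InvokeMember(
        'OpenView', 'InvokeMethod', $null, $database, @($query))
    $view.GetType().InvokeMember('Execute', 'InvokeMethod', $null, $view, $null)
    $record = $view.GetType().InvokeMember('Fetch', 'InvokeMethod', $null, $view, $null)
    if ($record) {
        # StringData(1) is the single column returned by the query
        $record.GetType().InvokeMember('StringData', 'GetProperty', $null, $record, 1)
    }
}

# Hypothetical usage:
# Get-MsiProperty -Path '\\server\packages$\7-Zip\7-Zip\15.14\7z.msi' -Property 'Manufacturer'
```

For App-V packages there is no equivalent field to query, which is exactly why the directory structure has to carry the vendor name instead.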
This unfortunately means that the process cannot be fully automated, but we can delegate a more menial task: organising the directory structure.
We will define a strict naming structure for the directories that contain the packages; we can then traverse this to extract the relevant information. This also gives us a repeatable formula that can be reused in ongoing or future projects.
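For illustration, assume a three-level layout of Vendor\Application\Version beneath the package share (the exact convention is ours to define). A sketch of the traversal, with a crude integrity check on the version folder, might look like this; the function name and layout are assumptions, not the final design:

```powershell
# Sketch: traverse a package share laid out as <Root>\<Vendor>\<Application>\<Version>
# and emit one object per package, flagging anything that breaks the convention.
function Get-PackageInventory {
    param([Parameter(Mandatory)] [string] $Root)

    Get-ChildItem -Path $Root -Directory | ForEach-Object {
        $vendor = $_.Name
        Get-ChildItem -Path $_.FullName -Directory | ForEach-Object {
            $application = $_.Name
            Get-ChildItem -Path $_.FullName -Directory | ForEach-Object {
                [pscustomobject]@{
                    Vendor      = $vendor
                    Application = $application
                    Version     = $_.Name
                    Path        = $_.FullName
                    # Crude check: version folders should look like 1.0 or 1.0.0
                    Valid       = $_.Name -match '^\d+(\.\d+)+$'
                }
            }
        }
    }
}

# Hypothetical usage: list every package that fails the naming check
# Get-PackageInventory -Root '\\server\packages$' | Where-Object { -not $_.Valid }
```

The output objects map directly onto the application properties we need for the import, which is what makes the convention repeatable for future projects.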
Proposed automated solution
As with any early stage planning this could change but we will set out with the following process in mind.
As a prerequisite, all packages should be organised according to a pre-defined folder structure. A strict naming convention should be applied that clearly separates the vendor name, application name and version number.
- Packages scanned for structure and naming integrity.
- Report issues and break or continue.
- The package scan creates an XML document detailing application properties, ready for import.
- The XML is processed; each section imports an application into SCCM:
- Create the application.
- Move the application into the vendor folder.
- Create the deployment type.
- Distribute the content to the distribution point(s).
- Create the collection.
- Deploy the application to the collection.
- Report successful and unsuccessful imports.
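The steps above can be sketched end to end. The XML schema below is a hypothetical illustration (the element names are my own), and the import loop uses cmdlets from the ConfigurationManager PowerShell module, which requires the console to be installed and the site drive loaded; cmdlet availability varies by ConfigMgr version, and every name, path and collection below is illustrative rather than the final design:

```xml
<!-- Hypothetical output of the package scan; element names are illustrative. -->
<Applications>
  <Application>
    <Vendor>Mozilla</Vendor>
    <Name>Firefox</Name>
    <Version>42.0</Version>
    <Type>MSI</Type>
    <Content>\\server\packages$\Mozilla\Firefox\42.0</Content>
  </Application>
</Applications>
```

```powershell
# Sketch of the per-application import loop. Assumes the ConfigMgr console is
# installed, 'ABC:' is the site drive, and vendor folders plus per-application
# AD groups already exist. All names here are illustrative assumptions.
Import-Module "$env:SMS_ADMIN_UI_PATH\..\ConfigurationManager.psd1"
Set-Location 'ABC:'

$xml = [xml](Get-Content '\\server\packages$\applications.xml')
foreach ($app in $xml.Applications.Application) {
    $name = '{0} {1} {2}' -f $app.Vendor, $app.Name, $app.Version

    # Create the application and move it into the vendor folder
    $application = New-CMApplication -Name $name -SoftwareVersion $app.Version
    Move-CMObject -FolderPath "ABC:\Application\$($app.Vendor)" -InputObject $application

    # Create the deployment type (MSI shown; App-V packages would use a
    # different deployment type cmdlet)
    $msi = Get-ChildItem -Path $app.Content -Filter '*.msi' | Select-Object -First 1
    Add-CMMsiDeploymentType -ApplicationName $name -ContentLocation $msi.FullName

    # Distribute content, create the user collection, deploy
    Start-CMContentDistribution -ApplicationName $name -DistributionPointGroupName 'All DPs'
    New-CMUserCollection -Name "Install $name" -LimitingCollectionName 'All Users'
    New-CMApplicationDeployment -Name $name -CollectionName "Install $name" `
        -DeployAction Install -DeployPurpose Available
}
```

The collection membership query against the AD group, and the success/failure reporting, are omitted here for brevity; both would be part of the real script.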
Planning is one thing, but plans are rarely static; they evolve as we experiment and then evolve some more during testing. This is not a reason to omit planning: building upon solid foundations ensures that the most value is extracted from the exercise. For instance, I can already see possible uses for the code that we will create for this project; the import of future applications on an ad hoc basis could be automated, giving further cost savings on an ongoing basis, so we should develop the code with this in mind. A further advantage of automating future application imports is ensuring consistency in the environment.
Sometimes the first urge is to jump straight in and write the code, but putting in a little forethought is always worthwhile. I always try to write code with reusability in mind; it means that with every piece of code I write my job becomes a little easier, or, in case a customer is reading this, I become more efficient.
Next we will develop the script in PowerShell or C#; I will write a follow-up blog post detailing the decision process and the result. Please let me know your thoughts so far in the comments section, or whether this post has been helpful for you.