
Lakeside SysTrack – excluding applications from being captured


Lakeside SysTrack gathers a lot of data around application metrics and a ton of other cool stuff. However, on occasion you may find yourself wanting to exclude an application from the SysTrack agent's data collection. This might be, for instance, because you are having an issue with an application and suspect this may be down to the Lakeside SysTrack agent hooking into your application's executables.

Lakeside SysTrack uses a couple of DLLs for hooking into and collecting metrics for all applications by default: the 32-bit DLL is LSIHok32.dll and the 64-bit DLL is LSIHok64.dll. If you use Process Explorer https://docs.microsoft.com/en-us/sysinternals/downloads/process-explorer and view the DLLs associated with your processes, you will see that the Lakeside DLLs hook into pretty much everything.

Lakeside SysTrack LSIHok.dll application view.

Now there are a couple of things you can do here to test whether the SysTrack hook may be causing your application woes. Unfortunately the first one does not quite provide the full test. There is a SysTrack setting that you can configure to list executables to exclude from SysTrack data gathering. While this does stop data being gathered for the executable, it does not remove the hook from the executable.

In order to specify the executables you wish to exclude from metrics, open up the SysTrack Deploy tool and navigate to “Configuration\Alarms and Configuration”. Then select the configuration you wish to change and click Edit.

Now ensure you have ticked the Enable Advanced Settings tick box at the bottom left-hand side of the screen and select the “Policies and Settings” node.

Lakeside SysTrack LSIHok.dll configuration

Now expand the “Application Management” node and add your executable names (comma delimited) into the “Applications for which data is not recorded” setting. Then OK your way back to the Deploy tool, and you can either wait for the agents to pick up their new configuration or select all the relevant computers in the tool and select “Read configuration now”.

In a non-persistent world, though, you may find this is not enough, as you will generally want this configuration to be available before the SysTrack agent starts up and pulls its configuration from the master server. In these instances you can also add these settings into the master image via a registry key. The registry key in question is located in:

HKLM\Software\Wow6432Node\Lakeside Software\LSIAgent\HookThread

You can add the same executable list to the “FilteredApps” REG_SZ value and then seal up your image.
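
If you prefer to script this change into the master image, a minimal PowerShell sketch along these lines should do it (the executable names here are examples only):

# Set the comma-delimited exclusion list (creates or overwrites the REG_SZ value)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Lakeside Software\LSIAgent\HookThread' -Name 'FilteredApps' -PropertyType String -Value 'myapp.exe,otherapp.exe' -Force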

Now, as mentioned, if this doesn't help your cause and you still suspect SysTrack of causing issues with your application/executable, there is one more thing you can try, however it is generally a bad idea within a live production environment. If you want to remove the LSIHok hook from applications, then from the same “Alarms and Configuration\Policies and Settings\Application Management” section of the configuration you can set “Enable Application Hook” to False. Similarly, for a non-persistent desktop you can also set the following registry value to disable that functionality.

HKLM\Software\Wow6432Node\Lakeside Software\LSIAgent\HookThread and change the REG_DWORD value to 0 for the EnableHook key.
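
The equivalent PowerShell one-liner for the master image, assuming the same key path as above, would be:

# 0 disables the application hook entirely
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Lakeside Software\LSIAgent\HookThread' -Name 'EnableHook' -Value 0 -Type DWord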

This unfortunately does come with some negative behaviour in that you will no longer get any data for application/service hangs, logon process information and command line reporting, which limits the functionality of the visualisers and the Resolve tool quite considerably. You'll either want to do this on a small set of test machines or for a very short space of time for testing.

Author: Dale Scriven


Microsoft Teams machine-based VDI installer


Teams is actually a really good enterprise messaging and collaboration tool, and as it's included in many O365 subscriptions it's a no-brainer to use it.

However, Teams also has an unpleasant aftertaste for SBC/VDI admins (it's not the only app that does, mind you) in the way that it is installed. By default, running the installer for Microsoft Teams doesn't actually install the application; it extracts a package and a JSON file into the C:\Program Files directory. When a user then logs into their VDI instance, the package is extracted and installed directly into the user's profile (around 500MB natively).

For physical devices this doesn't cause too much hassle, but for VDI implementations it causes a massive headache. If you consider a typical non-persistent VDI environment which includes some kind of profile solution (Citrix UPM, VMware's Persona Management etc.), you get some highly undesirable effects.

You either have to persist the default installation locations within your profile management solution, adding at least 500MB to each profile (no thanks), or users have to accept that on each logon to a non-persistent fresh desktop the Teams installer will execute, providing a less than ideal user experience while the CPU is busy performing the installation actions on top of whatever else it has to worry about during a logon.

A great solution to this is FSLogix with Office 365 Containers or Profile Containers to containerise the installation, reducing the user impact by persisting the data natively as far as Windows is concerned. This is one of the reasons why Microsoft purchased FSLogix and then provided effectively free licences for anyone who purchases RDS, VDA, E3 and above O365 among others, which covers pretty much everybody. However, the problem remains that while Teams is containerised within FSLogix, that is still 500MB x the number of users of storage space that could be put to better use.

Despite the great FSLogix option, there's no denying that Teams is a badly written application for any kind of non-persistent solution, and everyone has been commenting on the situation for some time. It appears that Microsoft are now starting to do something about it.

Microsoft have released a version of Teams that is a machine-based install, which does not install the application into a profile location but into the correct C:\Program Files\ location, with the caveat that it is available for VDI instances only. Sorry SBC people, you'll have to wait a bit longer I think.

Microsoft have recently released this article, which includes download links to the x64 and x86 versions of Teams and a specific command line to run in order to install Teams as a VDI-friendly product. I wanted to have a look at this executable and see how it installed.

The command line you need to install Teams is:

msiexec /i Teams_Windows_x64.MSI /l*v Teams.log ALLUSER=1

The critical difference which determines whether Teams installs in the standard in-profile mode or VDI mode is the ALLUSER=1 property. DO NOT get ALLUSER confused with ALLUSERS=1, it's not a typo!

In order to find out a bit more about the Teams installer, I broke open Process Monitor and ran the command line without having preinstalled any typical VDI agent packages into a Windows 10 instance. Sure enough, the installer errors out stating “cannot install for all users when a VDI environment is not detected”.

Looking into the Process Monitor logs, it appears that the Teams installer looks for specific VDI agent registry locations to determine whether it will install or not.

The installer specifically looks for the following registry keys:

HKLM\SOFTWARE\Citrix\PortICA

HKLM\SOFTWARE\VMware, Inc\VMware VDM\Agent

These reg keys are obviously associated with the two big VDI vendors, so if you are using another vendor you may be out of luck for now. If a VDI agent is not installed then the installer looks for these keys only and then fails the install; if the VDA is installed it also looks for quite a few other keys, so at the moment it's not a case of creating a single key to fool the installer.
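
If you want to check up front whether the installer's VDI detection is likely to pass, here's a quick PowerShell sketch against those two baseline keys (remember, a real VDA sets more than just these):

# Report whether each VDI-detection key exists
$vdiKeys = 'HKLM:\SOFTWARE\Citrix\PortICA',
           'HKLM:\SOFTWARE\VMware, Inc\VMware VDM\Agent'
foreach ($key in $vdiKeys) {
    '{0} : {1}' -f $key, (Test-Path $key)
}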

However, once you have a standard VDI agent installed you will be able to run the installer command, and you will see that rather than putting only the package and JSON file within the Program Files location, it will now install the full application into Program Files. The only exceptions to the rule are that the Squirrel install log file is placed within the user profile, and a folder is created for Teams add-ins within C:\Users\%username%\appdata\local\Microsoft\TeamsMeetingAddin.

The changes to the MSI package are certainly welcome and a good initial step in providing a machine-based install for Microsoft Teams, which hopefully will also migrate across to the other apps that are guilty of the same behaviour (cough, OneDrive). I would personally like it to be a choice for the customer whether they go with the standard version of Teams, ensuring they stay up to date with the latest versions automatically across their estate, or take a steadier approach by using the machine-based install version without the VDI technology search behaviour. The latter would require more administrative effort for IT teams but offers a further degree of control that most companies find comforting.

A couple of things worth noting: like its non-machine-based counterpart, Teams is not yet optimised for VDI voice and video capabilities such as the HDX RealTime pack, so Microsoft recommend disabling the calling feature within Teams.

The machine-based install is also not automatically updated, so IT teams will need to manage the update procedure as they would for any other application.

Author: Dale Scriven

Citrix StoreFront and Workspace Beacon Probing


Beacon probing in Citrix StoreFront isn't that well documented, and I had a requirement to look into it a little more within some VDIs themselves. Consider this situation: you have a number of users that have VDIs, this could be 10 or 100,000 users, it doesn't really matter. You also have Citrix Receiver or Workspace installed within those VDIs, utilising the native single sign-on to a store for published applications etc. A pretty common scenario, I think you'll agree.

As we all know, beacon probing is used alongside the native Receiver feature to determine if the client machine is inside or outside of the network. It does this through the administrator configuring a number of internal and external beacons that Receiver attempts to communicate with. The client machine will determine itself to be inside the network if it can reach the URL defined as internal, while the external beacon points are used to verify that the client machine's network has internet connectivity and to assist in verifying that the client is outside of the network.

For most organisations there is no need to have separate Citrix StoreFront servers for internal load balancing and the external NetScaler Gateway proxy; separate stores in the same server group make much more sense. However, this affects beacon probing, as the configuration is global, so the URLs you specify will apply to nearly any and all stores within that group.

This is absolutely fine for client devices that roam, such as users' laptops, iPads etc., but consider this behaviour for the VDIs themselves, which will never be outside of the network. If you scale this feature up to around 10,000 VDIs, this activity doesn't make too much sense and will generate a fair amount of unnecessary traffic which may have the networks team asking questions.

Receiver and Workspace also poll at regular intervals for their location status, so it's not just the initial Receiver or Workspace SSO logon that will generate this traffic. From my testing it appears that these URLs are retested every 15 minutes.

Additionally, when you add an account to Citrix Receiver or Workspace, whether that's manually, through Group Policy or through the StoreFront configuration in Citrix Cloud or on-prem DDCs, it will also record the default external URL used by the store.

These details are stored within the registry of the VDI under HKCU\Software\Citrix\Receiver\SR\Store\12312312332\Beacons and HKCU\Software\Citrix\Receiver\SR\Store\12312312332\Gateway. The Beacons key has an Internal and an External subkey, and these have further subkeys named Addr0 onwards. The Internal key will generally only have Addr0, as you only specify a single URL for the service to determine internal connectivity, while the External subkey can have multiple keys (Addr0, Addr1, Addr2 etc.). Each of the Addr keys has a string value of “Address” containing the beacon configured.
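
To see what Receiver has recorded on a given VDI, you can enumerate those keys with something like this (the store ID under \Store\ is the random number mentioned above, hence walking every store):

# List every beacon address stored for every store
Get-ChildItem 'HKCU:\Software\Citrix\Receiver\SR\Store' | ForEach-Object {
    Get-ChildItem -Path (Join-Path $_.PSPath 'Beacons') -Recurse |
        Get-ItemProperty -Name Address -ErrorAction SilentlyContinue |
        Select-Object PSParentPath, PSChildName, Address
}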

Now there are a couple of ways you can stop the VDIs from generating this traffic. The first and easiest method is pointing your internal VDIs to an internal-only StoreFront store. When a store is internal only, Receiver only connects to the service URL for probing; it will not attempt to connect to the external ones. You configure stores to be internal only by unchecking the “Enable Remote Access” checkbox in the “Configure Remote Access Settings” option of the store.

If this is not possible and you need to use a store that is configured for both external and internal access, then you can use the FSLogix Rules Editor to disable the probing.

Firstly you'll need to install the Rules Editor if you have not already; it is included in the FSLogix agent download. Once installed, create a new rule set and configure the following Directory Rules:

HKCU\Software\Citrix\Receiver\SR\Store\*\Beacons\External

HKCU\Software\Citrix\Receiver\SR\Store\*\Gateways

You'll notice in the paths that a wildcard is specified; this is where FSLogix comes in handy, as part of the key path, if you remember from earlier, is a random string of numbers, which is tricky if not impossible to handle with some enhanced profile-management-type products.

Once that is configured, you just need to add the memberships, either by user or group or one of the many other ways available within FSLogix, then distribute the rule set, either as part of the next automated build process for your VDI or through Group Policy etc., to the %ProgramFiles%\FSLogix\Apps\Rules folder.
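
As a sketch, the distribution step can be as simple as a file copy dropped into the build task sequence (the source share here is hypothetical; a rule set consists of the .fxr rule file and its matching .fxa assignment file):

# Copy the rule set from a central share into the local FSLogix rules folder
$rulesShare = '\\fileserver\FSLogix\Rules'   # hypothetical source share
$dest = Join-Path $env:ProgramFiles 'FSLogix\Apps\Rules'
Copy-Item -Path (Join-Path $rulesShare '*') -Include '*.fxr','*.fxa' -Destination $dest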

Copying the rules into that folder produces instant results and the registry keys will be hidden straight away, so it is easy to test the effect by just exiting the Receiver or Workspace app, copying the rules into the folder and then opening it up again.

Once you have confirmed the effect, you can simply add the rules into your automated build process and you're done.

Author: Dale Scriven

Citrix StoreFront customisation script


I've been working on a PowerShell script to customise the Citrix StoreFront look and feel over and above what the GUI options provide. Things such as inconsistent colours can distract from a corporate branding scheme when using StoreFront.

This PowerShell script is hosted on GitHub and is designed to be executed on a Citrix StoreFront server where a new uncustomised store has been created.

Grab the current script from here.

-----Version-----

30/03/20 v0.1

  • Header bar colour scheme
  • Loading page colour scheme
  • Image import utility
  • UK English dictionary changes
  • Server name footer display

Citrix StoreFront Configuration Copy


I've written a small script that you can use against your Citrix StoreFront server group to copy, back up or replicate a group's configuration using native StoreFront PowerShell cmdlets. This allows you to either store a known working configuration for just-in-case moments, or to bundle up the config and port it over to a new server or server group if the current ones are being replaced.
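
The script wraps the native cmdlets; at their simplest they look something like this (paths are examples, and the StoreFront PowerShell modules need to be loaded on the server):

# Export the current server group configuration to a zip
Export-STFConfiguration -TargetFolder 'C:\Backup' -ZipFileName 'StoreFrontConfig' -NoEncryption

# Import it on the new server or server group
Import-STFConfiguration -ConfigurationZip 'C:\Backup\StoreFrontConfig.zip'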

The script is available from HERE. Please view the README.MD for more details.

Author: Dale Scriven

Citrix ADC 13 64.35 Cannot Complete your Request Error


With the release of the latest Citrix ADC firmware, 13 64.35, an update in the security policies will cause issues with Single Sign-On (SSO) to Citrix StoreFront, whether this is AD based, FAS etc. Release note-let (if you will) NSAUTH-7747 details this change, however it doesn't really make the actual effect too obvious to the casual observer.

When you authenticate with SSO to Citrix StoreFront through a Citrix ADC Gateway with this firmware applied, you will receive our old friend the “Cannot Complete your Request” message. Head scratching may also increase, as no hints of an issue are reflected within the Citrix StoreFront event logs; no errors are captured during the logon process by StoreFront.

However, the fix for the issue is very straightforward: all that is required is that a single traffic policy is created and bound to the Gateway virtual server.

Adding the following into your Citrix ADC config (changing %vservername% for your own one) will resolve the issue and your users will be able to sign in again.

add vpn trafficAction SSO_ACT http -SSO ON

add vpn trafficPolicy SSO_POL true SSO_ACT

bind vpn vserver %vservername% -policy SSO_POL -priority 100 -gotoPriorityExpression END -type REQUEST

save ns config

Author: Dale Scriven


Droplet Computing Containers


To say that the EUC workspace is changing rapidly is an understatement. Client operating systems now update regularly, and the endpoint choices have never been wider, with Apple, Microsoft and thin clients retaining their fan bases but an undeniable increase in systems utilising ChromeOS and even Linux.

Applications and endpoints update regularly with changes in architecture and function; the one exception to that rule is corporate applications. Customers can consume a vast number of subscription-based applications, SaaS apps and any number of modern and fast-paced ways to get work done. However, there is always that little percentage of the application estate which does not move so quickly and fails to keep up with change. Those are also typically the applications for which modern replacements are not available or suitable, or from which customers just are not able to migrate away.

These applications also cling onto their old system requirements, meaning that they do not play nicely, or at all, with what organisations need to use for supportability and the new features that they understandably want to use.

This grey area is where Droplet Computing comes in.

Droplet Computing provides a neat answer to these issues in the form of user accessible containerisation while maintaining a high degree of enterprise ready capabilities too.

The premise of Droplet containers is to use virtualisation and emulation to create the container. The software can then be run on Windows, macOS, ChromeOS and Linux, covering 99% of the endpoint use cases.

Droplet also provide guidance on using their solution within corporate environments, covering Citrix Virtual Apps and Desktops, VMware Horizon View, AWS and Azure.

The Droplet application has a small number of components. These are:

Droplet Installer – installer for the container framework

Licence File – no explanation needed 😊

Container image(s) – disk files containing base OS and applications

Apps.json File – configuration file for the visible apps

Settings.json – Container settings including container sizing and network and drive access rules

Credentials File – encrypted administrative credentials file

The containers of the platform offer two flavours of supportability: Modern and Legacy.

The Modern platform allows you to install applications that are compatible with Windows 7, while the Legacy container allows applications to run from a Windows XP type framework. Both container types are effectively locked-down versions of the operating systems, displaying only the administrator-configured applications and a single shared document location.

It's worth noting that both platforms utilise a dynamically expanding container which has a limit of 20GB, so if you have a hefty application or a number of applications then how you provision these containers will need some consideration, much the same way you might consider applications for layering technologies.

Client operating specs vary based upon the applications that you intend to containerise; however, the minimum recommendation is to ensure the client has 1GB of RAM spare for very light programs (Notepad++ type applications), scaling as necessary up to 8GB. For Windows 10 and Intel-based devices, HAXM can also be utilised to benefit from Intel VT/NX support.

Similarly, multiple CPUs can be assigned to the container if required. By default a single CPU is assigned, however heavy legacy application containers can use more if needed.

Applications are installed in the container by placing the installer within a shared folder that is created on the endpoint (C:\Users\%username%\AppData\Roaming\Droplet\Shared). When the container is running, files placed on the endpoint will be replicated into the container itself, making them available for installation.

The container can be configured to either allow or deny access to network resources and contains a firewall-type configuration which allows applications access to the bare minimum network resources needed to function correctly.

Droplet Computing

Additionally, local drive access is limited to the secure shared folder, allowing staff to move files in and out of the container using the same folder that administrators use to move installation files into the container.

Containers can also be domain joined if required (not ideal, but they can be) and options exist for using VPNs or other connectivity tools as required.

Applications are presented to users in a very familiar format for anyone who has used a published application interface, providing a simple click-to-launch experience. Once applications are installed they must be “published” in order for users to have access to them. Publishing an application requires you to enter the name of the application, the executable path, a description and optionally an icon. These entries are then written to the apps.json file, which can be exported and installed on the target endpoints.

Droplet is a great solution for full OS-based endpoints with dedicated resources; however, options are also available for remoting infrastructures such as the Citrix and VMware solutions previously mentioned. When adding this solution to one of these environment types, additional care and consideration must be given to sizing, as you are effectively running a virtual machine within a virtual machine, and the container plus installed applications will have their own requirements.

Consideration points for deploying Droplet should include but are not limited to:

  • Target platform VDI/SBC
  • Host OS type
  • Additional CPU overhead
  • Additional Memory Overhead
  • Disk IOPS
  • Container Storage requirements for x users (remember a single container for a single user)
  • Source Container copy for new users
  • Concurrency
  • Network and disk access requirements

Installation automation is a little limited at present; however, if you are using MDT or SCCM etc. you can use AutoIt to automate the installer and script the file movements, or use Group Policy Preferences. This obviously only covers a small subset of the use cases, as the product is more designed for disconnected, non-Windows and non-domain-joined devices.

I would like to see a few extra features within the product to make it a little more administrator friendly, such as improvements to publishing applications, security enhancements and automated installation options; however, Droplet Computing is a great string to have in an organisation's technology bow. It provides a level of portability and support to applications that would be challenging otherwise, especially in an offline setting.

To find out more, go to www.dropletcomputing.com

Author: Dale Scriven


The Great Workforce Reset


It's no surprise that as I write this, everyone's year has been that little bit different, and I hope you are all staying safe and well. We can't take away the tragedy and chaos Covid-19 has caused, however it has brought about some radical changes in many people's working practices and an opportunity that really cannot be ignored.

Covid-19 gives us a real chance to come out the other side with a better way of working for both employees and employers, handing over a world to our children that we are actually proud to have helped shape in our working lives.

If you are reading this then you will likely know that I am a Workspace (or End User Computing, whichever term you prefer) consultant and have been for many years now. I've always been interested in the technology solutions that widen options as well as improving the experience for staff, and have driven towards the any-device, any-location, anytime style mantra. I have seen the technology stacks I work with transition from internal-only infrastructure alongside thin clients, through the occasional and disaster recovery use case, to what they are capable of achieving today. I may be slightly biased here, but I see an opportunity for a mass and quick change of working practices, both for organisations that are already on board with this work style and for those that are resistant or were previously unable to allow it. This blog post is a bit of a read and I'm sorry about that, but there have been a good few Twitter conversations over the past few months on this topic, and I felt I had not managed to get all my current thoughts on the subject across.

Psychological and balance impacts

Over the last few years, questions have been coming to light around this way of working, with mental health and the work/life balance being two of the major talking points within pretty much every industry. A fantastically frank and honest account of the issue can be found here https://www.techradar.com/news/workplace-stress-a-major-technology-bug-to-fix . We should all pay close attention and look for signs of stress in ourselves and in others around us, and the stigma of just getting on with it because it's part of the job needs to go. Work/life balance is also a hot topic, with many workers spending almost a full working day just getting to and from work, and if there's a traffic jam or leaves on the line you can forget getting home in time to put the kids to bed. But I believe it doesn't have to be this way.

Traditional bricks and mortar approach

For many, pre-Covid working practices followed a tradition dating back to the nineteenth century: a 40-hour working week, travelling to and from an office 5 days a week, 9-5, with very little flexibility. This has slowly started changing over the last few years, with some employers introducing flexi-time and work-from-home days during the week; however, many organisations have stuck to the traditional requirement of staff needing to be sat at a specific desk, in a specific office, at a specific time for the whole working week. Believe it or not, this approach is hurting businesses and stifling the growth and success they are trying to achieve.

Taking the question of location into account, large cities like London attract people from all over the country to work, allowing capital-based organisations the ability to pick and choose their talent from a wide geographical boundary.

Organisations that are not London based and reside in smaller towns, villages etc., which do not benefit from the city draw, recruit their talent from a much smaller geographical boundary, and workers tend to be local to the office. These organisations are no less entitled to the best of the talent they are recruiting for; however, their selection of staff is reduced considerably by requiring staff to commute to that specific desk in a specific office at a specific time. Organisations are losing out on this front alone. Removing the requirement to travel to work removes that boundary and provides more choice for both employer and employee. Indeed, employment need not stop at the country's borders either; for companies who embrace this approach and are able to either use or ignore time differences, why not cast the hiring net internationally, giving organisations an even wider range of employees, all offering their valued experience and additional localised input.

Businesses can also benefit by decentralising from one or more offices. By adopting a work-from-home-first strategy, organisations can remove or downsize their physical bricks-and-mortar requirements, saving money which they can reinvest into the business process or staff.

Business and government obligations

The savings made from physical locations should not all be shareholder/CEO bound. With the forced work from home we are now experiencing, companies will have the time and the ability to assess and assist their home workers better, bolstering their processes and welfare for home workers. Companies should see it as a duty to provide resources or advice for home workers. When Covid suddenly shut offices across the globe, there were many reports of companies allowing their staff to take home IT equipment, chairs or whatever they could spare to help staff work effectively from home.

This scramble isn't really a new normal, rather a mass execution of a DR incident, and even when organisations test DR plans regularly, an actual incident always throws curve balls or issues that have not been planned for. However, these companies did have the correct general gist in accepting that providing a remotely accessible VDI or VPN is not enough to provide a good work-from-home experience. Companies will ideally provide a work-from-home budget allowing people to purchase items necessary to work effectively from home, whether that's a webcam, a chair or a new laptop, or ideally all three. By providing this budget, staff are able to obtain the items that work for them and their home working environment. Even with these extras, organisations will still benefit from the reduced reliance on office locations, a much wider reach of potential staff and greater capabilities for business growth and flexibility.

During the pandemic many companies, even my own, took a keener interest in people's well-being. Mine, as an example, allowed me to set up a weekly drop-in meeting with our department, providing a no-pressure avenue to discuss anything at all, from what people are working on to personal activities, worries etc. This has been very successful and has been running every week since this whole affair started. Additionally, we have had welfare checks from our management team and, more recently, care packages sent to everyone from the company in the guise of notepads, pens, things to eat and drink etc. Not all companies do this, but it is encouraging to see on social media that it's not only my employer taking this approach to staff welfare, which is definitely a good start.

Likewise, office safety laws such as DSE etc. need to be updated to reflect the greater percentage of work-from-home users, who are only covered at present by brief mentions within the documentation.

What about non-office workers?

We also can't ignore that not everyone works in an office-style environment. Taking the London example above, there is a huge supporting infrastructure around the capital based around restaurants, coffee shops, pubs, convenience stores etc. But if you really look at that infrastructure, what you will largely see is a Scooby-Doo repeating-background-style street view with the same organisations providing that critical service. Now I like a £4 cup of coffee as much as the next person, but these staff will also be commuting to work. Shifting our working style to predominantly working from home will benefit workers in those organisations too.

By moving people to predominantly work from home, they will be located within their home towns, villages etc., which have their own supporting infrastructure, and these locations are more likely to have independent shops, cafes etc. that will be able to benefit from a larger pool of local workers and customers looking for their caffeine fix or essential items.

One-size does not fit all

I know not everyone is able to work from home; for various reasons, such as no suitable space within the home environment, privacy, connectivity and other concerns, offices will sometimes be unavoidable. Also, for physical meetings where sometimes a video conference just will not do, I totally accept that the purpose of a company office will not just disappear, but we can do better for many workers by moving to this work style that has until now been creeping very slowly into view. Solutions are already coming to the fore, with options such as https://try.thryve.network/ and an increase in rent-a-desk style locations for people who only require occasional access to an office desk or a high-speed internet connection, again negating the need to commute to a central location. In my case I'd be happy to use a rent-a-desk or a coffee shop for their upload pipe when I have something substantial to upload; for my normal day-to-day usage of VDIs, Teams meetings etc., even my modest rural broadband can cope.

Industry right-sizing

Here's where things may get slightly controversial: post-Covid, I don't think all businesses should survive to the other side. Much like right-sizing your environment or coping with virtual machine sprawl, many industries have ballooned to take advantage of this old physical-presence requirement. For example, it is not unusual for people to fly to other countries for work; not only that, I have seen, with some degree of shall we say concern, people boasting about how many air miles or equivalent they have earned that year. That doesn't sound to me like something to be celebrated, rather something to be ironed out as an old way of meeting people and getting work done. It is nice to go places and see new things, and while I'm not a frequent out-of-country traveller (I've been to exactly 3 countries other than my own) I don't begrudge anyone a holiday, but travelling for work in this way, which so many people seem to take for granted, needs to be seen as a very last resort rather than part of the job. We are losing so much by doing this while also harming the environment around us.

This is not normal

The mass DR execution event was not the new normal; nothing about this pandemic is normal, as any of us who regularly work from home will tell you. Home working is not a constant battle between trying to concentrate on work and having the children around you asking about lockdown homework that you're embarrassed to admit you don't understand, or indeed that feeling of remorseless monotony because when you finish work you're not allowed to go anywhere to spend your free time. For those of you who are not used to working from home, you may not realise it, but that isn't what home working is like.

During non-pandemic times, working from home not only gives you time back because you are not travelling, it also allows a degree of flexibility in your day. I regularly move my working day around to fit my family's needs; Christmas plays, sports days and school meetings all occur during the working day, which, when working in an office, often meant burning an annual leave day for each one. Now all I do is either use my lunch break to attend or work later or earlier in the evening. Work can also benefit from my reduced travel time, as I am more available and more inclined to help out or perform tasks outside of the traditional working hours, which more organisations will see as a huge benefit of moving to this working style. Again, this is not a freebie, and as the Joker says, “If you're good at something, never do it for free”; however, the flexibility for both employee and employer cannot be ignored, and it would not exist with office-based work styles.

An example of how not to do things

Already companies are suggesting changes in the wrong direction for working from home. An example here from the https://www.theguardian.com/business/2020/nov/11/staff-who-work-from-home-after-pandemic-should-pay-more-tax Guardian newspaper details how Deutsche Bank believe that work-from-home staff should be charged 5% extra tax per day because it's cheaper. I'm not sure where they are getting that from; I think working from home works out at about the same cost as commuting into the office every day, as you still have to spend money you wouldn't normally on heating and lighting your home and charging your devices.

We can do better!

I really want to end my working career knowing that our generation took the opportunity to accelerate a monumental shift in working styles, one which was happening anyway but which, rather than taking decades, we can achieve much more quickly, bringing something positive out of this pandemic. I want my children to have more opportunity to work wherever and whenever they want without sacrificing their own personal lives, which will not only secure their financial future and increase choice but will go a great distance in avoiding the mental impact of this centuries-old method of working.

This brings me onto a last point: the 40-hour working week is a historical, arbitrary figure. Most people now perform particular duties/tasks which may or may not take 40 hours a week to complete. The issue with the 40-hour week is that it assumes productivity and value equal time spent, which as we all know is not the case. Some jobs may require more time and some a good bit less. Very few people have roles that take exactly the same time every week, so in the future I would like to see the end of the 40-hour working week and a move towards a productivity-based approach where, once tasks are completed, staff do not feel obliged to sit at their desk until clocking-off time. However, I believe that may be something the next generation of the workforce will have a greater capability to fix. 🙂

What do you think?

Author: Dale Scriven



Citrix Drive Mapping Session in Session


You may have found that when you launch a Citrix session inside a Citrix session, the local drives do not map through, and this does not change no matter what client drive redirection policy you set.

As an example, say we have a user who has a laptop and launches a Windows 10 desktop through Citrix into their corporate network, then within that session opens a browser, goes to a partner's Gateway and launches a session there using the Citrix Workspace app installed within the Windows 10 VDI. If the partner environment has client drive redirection set to “Allowed”, this should map the Windows 10 VDI's drives through. It would of course map correctly if a session were launched to the partner's environment directly from the laptop; however, drives will not map if a session is detected as running inside a session. At least not by default.

In order to enable this capability, the following registry value needs to be set within the Windows 10 VDI's Citrix Workspace registry location:

HKLM\Software\Wow6432Node\Citrix\ICA Client\Engine\Configuration\Advanced\Modules\ClientDrive

NativeDriveMapping REG_SZ TRUE
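
If you are scripting this into the image, a minimal PowerShell sketch (the key may need creating first on a clean install):

$path = 'HKLM:\SOFTWARE\Wow6432Node\Citrix\ICA Client\Engine\Configuration\Advanced\Modules\ClientDrive'
if (-not (Test-Path $path)) { New-Item -Path $path -Force | Out-Null }
Set-ItemProperty -Path $path -Name 'NativeDriveMapping' -Value 'TRUE' -Type String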

You can also combine this with targeted Group Policy Preferences or Ivanti conditional actions etc. to ensure that this setting only applies to the groups/users that need the access.

Once this has been completed, a parent Citrix session with this value will be able to map client drives through to child Citrix sessions if the target environment allows.

Author: Dale Scriven


SSLLabs SSL test script to check multiple URI’s with PowerShell


SSL Labs Server Test is a great tool for ensuring that your Citrix ADC Gateways are secured. Entering a URL into SSL Server Test (Powered by Qualys SSL Labs) and waiting gives you an indication of whether your VIP is exposed to known vulnerabilities, presented as a graded score (F to A+). Getting an A+ is the target; however, running the test manually only gives you the result for the time of the test.

If you have multiple administrators who have access to your ADCs, or the security goalposts move, your ADCs may become vulnerable again, which you would only discover by running the SSL test again.

Luckily for us, SSL Labs have created an API that we can use to perform tests and obtain some detail around the current score.

Armed with my rudimentary PowerShell and API skills, I wrote a script. It is designed to be run as part of a scheduled task: it takes a list of URIs from a CSV file within the same directory as the script and initiates the SSL server test against them. The script can be run ad hoc too, and provides PowerShell console output during the process.

Simply create a CSV, or use the one already within the directory, place a list of URIs within it, then run the script.

During the creation of the script I noted that the API returned errors which seemed to be some kind of rate limiting, so there are some countdown timers and sleeps etc. to ensure that the API does not deny multiple URI requests from the same source.
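
For context, the core of the script is little more than a polling loop around the SSL Labs analyze endpoint, roughly like this simplified sketch (single URI, no rate-limit handling):

# Kick off a fresh assessment, then poll until SSL Labs has finished
$target = 'gateway.example.com'   # example URI
$api = 'https://api.ssllabs.com/api/v3/analyze'
$result = Invoke-RestMethod -Uri "$api?host=$target&startNew=on"
while ($result.status -notin 'READY','ERROR') {
    Start-Sleep -Seconds 30       # polite polling interval
    $result = Invoke-RestMethod -Uri "$api?host=$target"
}
# One grade per endpoint (IP) behind the URI
$result.endpoints | ForEach-Object { '{0} : {1}' -f $_.ipAddress, $_.grade }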

The results are then written into a txt file and also displayed as a pop-up window. This script is at version 0.1, so it works just fine, but I will be looking to improve it shortly.

Head over to https://github.com/scifidale/SSL-Labs-Checker to grab a copy of it.

Author: Dale Scriven


Citrix Studio access during an outage


When you have a Citrix Virtual Apps and Desktops site with multiple zones, all zones are not created equal, and this can affect Citrix Studio access during an outage. Citrix Virtual Apps and Desktops uses a concept of primary and satellite zones. When a failure occurs within the primary zone, meaning that either all the delivery controllers are down or they are disconnected from the SQL databases, Citrix Studio becomes unavailable.

While restoring the primary zone controllers to a working state should be the main concentration point, the environment may have been designed to be highly available, so the satellite sites can continue to run and potentially require business-as-usual type access.

Luckily there is a simple PowerShell command that you can run on one of the surviving satellite zone controllers in order to reclassify the satellite zone as primary and reinstate Studio access.

Set-ConfigSite -PrimaryZone "SatelliteZoneName"

The Set-ConfigSite command also includes quite a few other capabilities that are worth exploring; more details can be found here.

Just remember to run it again to reverse the change back to the original primary zone when it is healthy again.
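
Putting both halves together, the sequence looks something like this (zone names are examples, and the Citrix snap-ins must be loaded):

Add-PSSnapin Citrix*

# During the outage, on a surviving satellite zone controller:
Set-ConfigSite -PrimaryZone 'SatelliteZoneName'

# Once the original primary zone is healthy again:
Set-ConfigSite -PrimaryZone 'OriginalPrimaryZone'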

Author: Dale Scriven


Windows 11 Start Menu Layout Group Policy


Windows 11 has launched today, and it wasn't long before I spotted some requests for rearranging the start menu from the new centre layout to the traditional start menu on the left, in a way that can be pushed through a Group Policy or similar.

Procmon to the rescue, which found the following registry location that handles this, so you can create a Group Policy Preference etc. to configure your preference for the new or traditional start button layout.

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced\TaskbarAl

Setting this DWORD value (note the value name ends in a lowercase L, not a one) to 1 puts the start layout in the centre, while setting it to 0 moves the start menu back to the left-hand side.
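
If you would rather script it than use a Group Policy Preference, a one-liner sketch (Explorer may need restarting to pick the change up):

# 0 = traditional left alignment, 1 = the new centred layout
Set-ItemProperty -Path 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced' -Name 'TaskbarAl' -Value 0 -Type DWord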

Author: Dale Scriven


Removing the Windows Domain login EULA for automated deployments


Pre-login EULAs are a common sight within organisations, forcing a user to “read” something then click OK, allowing the system to continue to login.

Automated build tools such as Ivanti or MDT, however, are not keen on these things, and often as part of a new environment it is recommended that a new Active Directory Organisational Unit is created with group policy inheritance blocked so that the EULA can be filtered out.

New systems built with automation tools are joined to the domain and added to these OUs to ensure that when systems reboot, the automation is not interrupted by waiting for a manual EULA acceptance.

But what happens in cases where you cannot create a staging OU and stop the EULA from applying, either by design or by organisational requirements?

Automating through MDT and Ivanti etc. suddenly becomes a lot harder. However, two registry values, a scheduled task and a simple script come into play here and can save the day in these situations. All are available from GitHub here.

These values are populated when a EULA group policy is defined and contain the header and the body text of the EULA.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\legalnoticetext

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\legalnoticecaption

These can simply be deleted in order to remove the EULA from the system; however, unfortunately, group policy will constantly reapply them, causing the issue to reoccur.

Combining the deletion of these values with a script and a scheduled task, however, ensures they will not reappear during your automated build process.
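
As a sketch of what the scheduled task ends up doing (my packaged version is a batch file; this is the PowerShell equivalent):

# Remove the EULA text and caption that group policy keeps writing back
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
Remove-ItemProperty -Path $key -Name 'legalnoticetext','legalnoticecaption' -ErrorAction SilentlyContinue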

A couple of things to note: taking MDT as an example, when using the task sequence's unattend.xml file, ensure that you remove the steps to domain join and instead rely on the Recover from Domain step to perform the domain join. Without doing so, you will see the system install the OS, join the domain and reboot before you have a chance to “install” the script as a step, meaning that you will already be seeing the EULA.



With that out of the way comes creating the scheduled task; for ease I've put the MDT package on GitHub here. All you need to do is import it into MDT and create the application that runs Disable-EULA.CMD.

When run, this copies a batch file, alongside its parent folder, to C:\Script and also creates a scheduled task using the XML task template included.
The scheduled task then runs at computer start under the system context, deletes the registry values associated with the EULA and allows the automated logon to continue uninterrupted.

All that remains, once you have completed the automated build process, is to delete the scheduled task, which you could automate as well using the following command:


Schtasks /delete /TN “Disable EULA” /F

Author: Dale Scriven


MDT Fails to resume on reboot (Failure 70)


When deploying with the Microsoft Deployment Toolkit (MDT) and the task sequence fails to resume after an MDT-initiated reboot, take a look at the BDD.log.

One of the last entries within that log may show errors registering “Microsoft.BDD.Utility.dll” and will log it as error code 70, as the below example shows.

RUN: regsvr32.exe /s “C:\Users\ADMINI~1\AppData\Local\Temp\Tools\x64\Microsoft.BDD.Utility.dll”  LiteTouch              

FAILURE (Err): 70: CreateObject(Microsoft.BDD.Utility) – Permission denied            LiteTouch

If so, this can be caused by User Account Control (UAC), so ensure that UAC is disabled and, in addition, that the following registry value is set within the image.

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System

FilterAdministratorToken REG_DWORD = 0
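
Scripted into the image, that looks roughly like this (the Policies\System key will already exist on a standard build):

Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System' -Name 'FilterAdministratorToken' -Value 0 -Type DWord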

This value can be fed by group policy or local system policy; ensuring that the following policy is set to Disabled should resolve the issue.

Computer Configuration\Security Settings\Local Policies\Security Option\User Account Control: Admin Approval Mode for the built-in Administrator account

Once configured, if you have a stuck MDT task sequence, reboot the image and the process should continue.

Author: Dale Scriven


Citrix Audio Over UDP Dropping mid session


The Citrix Audio over UDP feature is great for improving audio performance in Citrix sessions, and Ray Davis has a good article on how to get it working correctly, link HERE. One issue I've come across recently is reports of audio failing to work mid-session, often reported as audio dropping mid-session. The symptoms were that if a user was in a call, the audio was working fine; however, after the call finished and another started, all audio would drop from the session. No system audio, no voice app audio, nothing. During the issue, the audio devices attached to the Citrix session (headsets, mics, speakers etc.) were still showing as connected.

After a bit of digging we found the cause to be two things. Due to the UDP nature of the audio, after a period of audio inactivity the audio-over-UDP ports on the connecting firewalls shut down, terminating the audio. Also, VDA versions prior to 1912 CU4 have no capability to maintain the UDP connection during periods of inactivity. In order to resolve this, all the VDAs need to be upgraded to 1912 CU4 and a registry key needs to be configured. Not sure why this isn't in the default install for the VDA, but there we go.

Create a DWORD value of KeepAliveTimer in the following location: HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\Citrix\Audio and set the value to a number. This number represents the number of seconds between UDP keep-alive pings, which will keep the UDP sessions active during periods of inactivity.

This also means that you need to talk to your network team and confirm what the firewalls in between the clients and the endpoints have configured for UDP timeouts, and adjust the registry value to be inside that timeout. So, for instance, if networks say the UDP timeout is 30 seconds on their firewall, the registry value you would set could be:

KeepAliveTimer DWORD 15
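
Or, scripted (the value of 15 assumes the 30-second firewall timeout from the example above):

New-ItemProperty -Path 'HKLM:\SOFTWARE\WOW6432Node\Citrix\Audio' -Name 'KeepAliveTimer' -PropertyType DWord -Value 15 -Force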

Once those are configured you should find that your audio works perfectly.

Author: Dale Scriven


Migrate Citrix NetScaler ADC from pooled to permanent licence


Citrix NetScalers, or ADCs, have two different types of licencing available to them: either the traditional permanent licence, where you upload a licence file to the appliance, reboot it and it's done, or pooled licences. Pooled licences are more flexible and require Citrix ADM to host and manage them. They can be checked out by NetScalers, enabling you to quickly upscale/downscale and reallocate licences.

However, there are some problems when switching to pooled licences. It's a lot more administrative work, as these licences tend to expire and you have to “manage” them like you do CVAD licences. Also, I've yet to come across a real use case where you need to juggle licences for NetScalers (ADCs) in this manner, and I've worked with A LOT of use cases. Finally, once you switch to pooled licences you cannot simply switch that NetScaler (ADC) back to permanent licences in place.

It appears that future sales of NetScalers will have no choice but to go to pooled licences (I won't get into that discussion here, but safe to say I'm not a fan).

This post will centre on VPXs based in Azure (or any cloud for that matter); it is possible to switch a pair back to running permanent licences with some juggling.

The high-level tasks to put an HA pair back onto permanent licences are as follows:

  • Backup configs
  • Shutdown secondary appliance
  • Delete secondary appliance
  • Remove secondary appliance from HA config of the Primary
  • Reprovision Secondary appliance from cloud marketplace
  • Apply basic wizard config to appliance (including permanent licence)
  • Apply extra config to appliance
  • Set Primary appliance as stay primary in HA config
  • Set Secondary appliance as stay secondary in HA Config
  • Create HA pair FROM “PRIMARY APPLIANCE”
  • Monitor and confirm HA pair are functioning correctly
  • Failover the pair and test
  • Perform same steps above on remaining NetScaler (ADC)  
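
For reference, the stay-primary/stay-secondary and re-pairing steps above map to ADC CLI along these lines (a sketch; the NSIP is an example, and each command is run on the node indicated):

On the primary: set ha node -haStatus STAYPRIMARY
On the rebuilt secondary: set ha node -haStatus STAYSECONDARY
From the primary, re-add the pair: add ha node 1 10.0.0.5 (example secondary NSIP)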

For a pair of ADCs in the cloud you need to use INC mode, which means that each ADC has more settings that are individual to it. The actions listed above therefore carry slightly more risk, especially during a failover: if some configured items are missing then the appliance will not function correctly, and even worse, the failover can sometimes just delete that config from the ns.conf file, meaning that failing back may not resolve the issue as the config will be gone.

It is critical to plan and document each step of the process you need to follow, and more so, to have backups of everything before starting the process.

I've written the below steps to help you with the process of migrating a cloud-based NetScaler (ADC) back to permanent licences from pooled licence mode.

Take a full backup of the ADCs prior to the rebuild, and document the attached NICs, vNets and IPs etc., for example:

NICs (Names) | vNet/Subnet | IPs
Complete here | Complete here | Complete here (etc)

Item | Configuration
SNIPs |
PBRs |
Routes |
NetProfile |
VLANs |
  • Shutdown secondary node
  • Change primary node HA “ADC01” to “Stay Primary”
  • Remove HA node from the configuration and Save the configuration (**See notes below**)
  • Remove the Nics from the Azure Load Balancers
  • Reconfigure ALB Monitor to include port other than port 9000
  • Delete the secondary VM
  • Deploy a new ADC from the cloud marketplace using a BYOL image on firmware 13.0
  • Upgrade the firmware if necessary to the same revision as the primary
  • Obtain HOSTID and retrieve licence
  • Install licence
  • Add nics to VM
  • Add SNIPS from document above maintaining the previous names
  • Add PBR’s from document above maintaining the previous names
  • Add Routes from document above maintaining the previous names
  • Add Netprofile from document above maintaining the previous names
  • Configure VLAN’s from document above maintaining the previous names
  • Set ADC02 to Stay Secondary and save config
  • Add ADC02 to HA from ADC01
  • Ensure sync occurs correctly and examine the config to ensure it looks correct
  • Reattach and verify azure nics to the Azure Load Balancers ALB’s
  • Change both ADCs to actively participate in HA
  • Test authentication works with existing ADC
  • Failover ADC’s to newly built ADC and test authentication functions
  • Ensure all load balancing/GSLB etc is reporting healthy and is as expected
  • Test ICA connection through Azure and ensure functionality
  • Save configuration

**Important Note**

When rebuilding existing ADC environments which utilise Azure Load Balancers, it should be noted that removing a node from HA causes a problem. ALBs monitor ADCs on port 9000, which is only open and active on the primary ADC in an HA pair. When the pair is broken, the port is inactive and the ALB believes the service to be down.

The full issue when rebuilding is that if you delete and recreate a secondary ADC instance, you have to remove the HA configuration from the primary in order to re-add the new one, which then closes the port, stopping the ALB-fronted service from functioning. Please note this may be avoidable by configuring a different or additional port for the ALB to monitor on, perhaps port 80 etc., but this must then be removed or reverted quickly when the HA pair is rejoined, to avoid issues with the ALB determining which NetScaler (ADC) is the primary.

Author: Dale Scriven
