A list of those random things you find really useful when fixing IT issues. It takes ages to track them down again later, and you KNOW you'll come across the issue again. So here they are for me...and YOU...to refer back to any time.
To remove the OpenFrame client on a device that has been orphaned run the following command at an admin terminal:
openframe-client uninstall
Have you ever stopped to think about where your organisation’s data actually lives?
Have you considered how robust your device PIN is and what it now allows access to?
Microsoft Security Baselines for Windows devices contain a default minimum PIN length of 14 characters.
"WHAT!?" you say, "I only just managed to justify shorter passwords by implementing MFA, and now my previously six-digit PIN has to be 14 characters!?"
Yep. That ever-present, super convenient PIN code has become an Achilles heel.
The technology world is moving away from passwords, which can be used from anywhere...including by hackers. Even MFA codes can be obtained through sophisticated phishing methods. By comparison, logins that avoid passwords altogether are considered more secure.
As we transition to a password-less world, much value is placed on phishing-resistant logins that use biometrics, such as fingerprint or face ID. However, while these login methods seem fancy, we can always fall back to the safety-blanket we call the device PIN. If the finger or face scanner is not working (or you're accessing your significant other's device...naughty naughty!) we can use the device PIN to login instead.
"But hang on, isn't a device PIN the same as a password?"
Good question, I'm glad you asked. Passwords are usually tied to a cloud account of some sort, whether a personal Google, Facebook, or Microsoft account (Insta or TikTok for the younger set - is Snapchat still a thing?) or a company login, again often via Google or Microsoft.
Passwords can be used from anywhere. They're often tied to a multi-factor authentication system, but without specific restrictions applied by the company providing the account, they can still be used on any device from any location.
Device PINs, however, are tied specifically to the device on which they are created, along with any biometric data (fingerprint/face ID) you use to authenticate yourself. This authentication data can then be linked to any cloud accounts accessed from that device, removing the need for passwords and making those accounts easy and secure to log in to.
"So what's the problem!?"
Again, I'm glad you asked. These cloud accounts, linked to authentication data safely stored in your trusted devices, are now easily and securely accessible, so you feel safe - and you link even more accounts to the device: your work logins, your social media accounts, and, dare we imagine, your bank and financial systems.
Now we have much of our life tied to one little PIN. It's very difficult to fake a face ID or fingerprint...but how robust is your device PIN, and what does it allow access to?
Think about it.
Originally posted on my LinkedIn: https://www.linkedin.com/pulse/all-access-pass-james-robinson-vzfkc/
In 2024, it is commonly known that multi-factor authentication (MFA) is an important first line of defence in cyber security. Requiring multiple forms of verification significantly reduces the risk of unauthorised access, even if one factor (like a password) is compromised. Thankfully, most New Zealand organisations are focused on implementing MFA to enhance their security posture.
In 2019, Microsoft stated that MFA was a “simple action you can take to prevent 99.9 percent of attacks on your accounts.” Five years on, that may still be true, but the size of the 0.1% seems to have grown.
Effective cyber security employs a layered approach, called defence-in-depth. So while MFA is still a critical cybersecurity protection, it is by no means invincible and must be accompanied by complementary protections. Cyber attackers have developed sophisticated methods to circumvent MFA, and your cybersecurity strategy needs to take this into account.
One common method is MFA fatigue. Attackers bombard users with repeated authentication requests, hoping the user will eventually approve one out of frustration or confusion. This social engineering tactic exploits human error, making it a potent tool for bypassing MFA. Another social engineering technique is to reach out, from an already compromised account, to that account's contacts and request MFA codes. Posing as a trusted contact circumvents our normal psychological warning systems.
Yet another technique is one we hear of a lot: phishing. Attackers trick users into revealing their MFA codes by creating convincing yet fake login pages or sending deceptive emails. Once the user enters their credentials and MFA code, the attacker captures this information and gains access to the account.
More sophisticated man-in-the-middle (MitM) attacks are also prevalent. In these attacks, cybercriminals intercept the communication between the user and the authentication server. By doing so, they can capture login credentials and MFA codes, effectively bypassing the security measures.
Token theft is another method where attackers steal session tokens stored on a user’s device. These tokens can be used to authenticate the attacker without needing the MFA code again.
To combat MFA circumvention, organisations can implement several strategies: adopting phishing-resistant authentication methods (such as FIDO2 security keys), enabling number matching on push notifications to blunt MFA fatigue, applying conditional access policies that restrict where and from which devices sign-ins can occur, and monitoring for anomalous sign-in activity.
By understanding these attack vectors and adopting a defence-in-depth approach, organisations can significantly enhance their defence against MFA bypass attacks.
2 - Article originally posted for iT360 - https://it360.co.nz/mfa-critical-not-invincible/
Will insurance still pay out if we delay cybersecurity updates?
Deployment of security updates will be governed by those liable for the risks - the insurance companies.

The world experienced a global IT outage last Friday (19 July 2024) and into the weekend due to a defective cybersecurity software update on Windows computers running cybersecurity firm CrowdStrike's Falcon software. Countless articles and blog posts have raised questions around what Dr Shumi Akhtar, Associate Professor at the University of Sydney, has called the "fragility of our heavily digitised world." The conversation has inevitably included how we should be managing security updates, especially their rapid, often untested deployment to production systems.
In the ever-escalating conflict between shadowy hackers and those tasked with keeping our digital systems secure, we are advised to apply updates for our security tools as soon as possible, to minimise the chance of hackers exploiting new vulnerabilities or new hacking techniques. With the recent fault being caused by a software update, many are asking whether security updates should be deployed so rapidly.
What if we don't deploy security software updates as rapidly as advised?

There is a precedent for delaying updates, born out of an era when it felt like Windows updates caused as many issues as they resolved. We implement such things as pilot groups or update rings, which enable a managed deployment of updates to less critical devices - our guinea pigs or crash test dummies. Once stability and reliability are assured, we allow our more sensitive and critical systems to be updated.
So this begs the question...
Why don't we roll out security software updates in a tiered manner, like Windows updates?

At this point, we have to weigh the risks of doing what we've always done - deploying security updates as soon as possible - against the alternative: delaying updates to critical systems until they are proven elsewhere. If we deploy immediately, Friday's events demonstrate the risks of that approach and the costs involved - global impact, across a vendor's entire customer base. If we delay deployment, we risk leaving our organisations open to cyber attack.
Who will decide which approach we should take?

We will all have opinions on which approach is best and whether there are better ways to manage things, but ultimately...
Decisions will be governed by those who are at the most risk - company directors and, in turn, their protectors, the insurance companies.

We are yet to see guidance from insurance companies on how they view these two opposing risks - failed update vs cyber attack - and which they deem the more critical. For now, most insurance policies require demonstrating competent management of security systems in order to make a successful claim in the event of a cyber attack. If we delay security updates, and we are attacked in a way that the update would have prevented, have we demonstrated competence?
If we delay security updates, will our cybersecurity insurance policies still deliver?

For now, it would seem, we are best to maintain the status quo or accept the risk of a breach being an uninsured event.
Moving forward, insurance companies may take a different view and the question we might need to ask next could be...
If security updates cause business interruption, will insurance companies continue to provide protection?

For now we await a response from the companies that have provided insurance to those impacted by this recent global outage.
- What will be the priority moving forward, protection from cyber attack, or more careful deployment of protections?
- What new responsibilities will fall on companies to ensure they are covered for both cyber attacks and business disruption from faulty updates?
Speak to your insurance advisor and find out what their current position is, and be ready for that to change in the future.
DISCLAIMER: I'm no insurance expert and may be completely missing something in this conversation, so I'd love to hear from those better informed so that I and the technology community at large can make more educated decisions regarding the protection of their systems, data, employees, and customers.
Originally posted on my LinkedIn page: https://www.linkedin.com/pulse/update-what-insurance-impact-james-robinson-dgwpc/
Sometimes you want to know which wireless access point (AP) you are connected to. From a Windows laptop you can do this by running the following command:
netsh wlan show interfaces
In the output, look for the BSSID, which will be in the format of a MAC address. Then look for this ID either on the AP itself, if you are able to physically access it, or via whatever wireless controller software you use to manage your APs.
A key tenet of the agile, DevOps-focused project-to-product transformation mindset is the concept of "You build it, you run it". That is, if you and your team were responsible for creating the thing - typically a piece of code in the DevOps world - the same team needs to be responsible for running it on a day-to-day basis. This probably works in the case of software because most of the "running" can be automated and therefore also turned into code. "Running" the code then becomes a matter of continuously improving that code to be self-reliant and resilient. It makes sense for the team who knows the core code, i.e. what was "built", to also be intimately acquainted with the peripheral code, i.e. the code around the outside that automates the "running" of the core code, making it more resilient and reliable.
However, how does this idea translate into less automated products? Traditionally, highly skilled teams will build a product and leave the running, along with some heavy documentation, to an often lower-skilled team of individuals. The highly skilled teams are your "developers", whether or not it is software they are developing. The teams left to do the often relatively mundane running tasks are your "operators". DevOps seeks to bring these development and operations teams together, hence "You build it, you run it." Can we translate that model into more traditional products, or does the success of the "You build it, you run it" concept depend heavily on whether or not running the product can be automated?
Can we keep our highly skilled developers involved in the sometimes mundane running of their products, without risking boredom and frustration? Perhaps it is this very boredom and frustration that inspires them into continuous improvement of the products. This might be wishful thinking.
Even better, how do we up-skill our operators to a level where they can further develop the products they are responsible for operating? This feels like a more uplifting path.