10 December 2024

All Access Pass

 

Have you considered how robust your device PIN is and what it now allows access to?

Microsoft's Security Baselines for Windows devices specify a default minimum PIN length of 14 characters.

"WHAT!?" you say, "I only just managed to justify shorter passwords by implementing MFA, and now my previously-six-digit PIN has to be 14 characters!?"

Yep. That ever-present, super convenient PIN code has become an Achilles heel.

The technology world is moving away from passwords, which can be used from anywhere...including by hackers. Even MFA codes can be obtained through sophisticated phishing methods. By comparison, logins that avoid passwords altogether are considered more secure.

As we transition to a password-less world, much value is placed on phishing-resistant logins that use biometrics, such as fingerprint or face ID. However, while these login methods seem fancy, we can always fall back on the safety blanket we call the device PIN. If the finger or face scanner is not working (or you're accessing your significant other's device...naughty naughty!) we can use the device PIN to log in instead.

"But hang on, isn't a device PIN the same as a password?"

Good question, I'm glad you asked. Passwords are usually tied to a cloud account of some sort, whether a personal Google, Facebook, or Microsoft account (Insta or TikTok for the younger set. Is Snapchat still a thing?) or a company login, again often via Google or Microsoft.

Passwords can be used from anywhere. They're often tied to a multi-factor authentication system, but without specific restrictions applied by the company providing the account, they can still be used on any device from any location.

Device PINs, however, are tied specifically to the device on which they are created, along with any biometric data (fingerprint/face ID) you use to authenticate yourself. This authentication data can then be linked to any cloud accounts accessed from that device, removing the need for passwords and making those accounts easy and secure to log into.

"So what's the problem!?"

Again, I'm glad you asked. These cloud accounts, linked to authentication data safely stored in your trusted devices, are now easily and securely accessible, so you feel safe. Naturally, you link even more accounts to that device: your work logins, your social media accounts, and, dare we imagine, your bank and financial systems.

Now we have much of our life tied to one little PIN. It's very difficult to fake a face ID or fingerprint...but how robust is your device PIN, and what does it allow access to?
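To put rough numbers on that question, here's an illustrative back-of-the-envelope comparison (a sketch only; the 62-symbol alphabet is an assumption of upper- and lower-case letters plus digits for a 14-character alphanumeric PIN):

```python
import math

def keyspace_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy for a randomly chosen code: log2 of the keyspace."""
    return length * math.log2(alphabet_size)

# A 6-digit PIN: 10 symbols, 6 positions.
print(f"6-digit PIN:  {10**6:,} combinations (~{keyspace_bits(10, 6):.0f} bits)")

# A 14-character alphanumeric PIN (assumed 62-symbol alphabet).
print(f"14-char PIN:  {62**14:.2e} combinations (~{keyspace_bits(62, 14):.0f} bits)")
```

Roughly 20 bits versus roughly 83 bits: even before lockout policies and hardware-backed throttling are considered, the longer PIN's keyspace is about nineteen orders of magnitude larger.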

Think about it.

Originally posted on my LinkedIn: https://www.linkedin.com/pulse/all-access-pass-james-robinson-vzfkc/

23 October 2024

MFA: Critical, Not Invincible

 


How Helpful is MFA today?

In 2024, it is commonly known that multi-factor authentication (MFA) is an important first line of defence in cybersecurity. Requiring multiple forms of verification significantly reduces the risk of unauthorised access, even if one factor (like a password) is compromised. Thankfully, most New Zealand organisations are focused on implementing MFA to enhance their security posture.

In 2019, Microsoft stated that MFA was a “simple action you can take to prevent 99.9 percent of attacks on your accounts.” Five years on, that may still be true, but the size of the 0.1% seems to have grown.

Effective cyber security employs a layered approach, called defence-in-depth. So while MFA is still a critical cybersecurity protection, it is by no means invincible and must be accompanied by complementary protections. Cyber attackers have developed sophisticated methods to circumvent MFA, and your cybersecurity strategy needs to take this into account.

What are the ways that MFA can be thwarted?

One common method is MFA fatigue. Attackers bombard users with repeated authentication requests, hoping the user will eventually approve one out of frustration or confusion. This social-engineering tactic exploits human error, making it a potent tool for bypassing MFA. Another social-engineering technique is to reach out to contacts of an already-compromised account and request MFA credentials; posing as a trusted contact circumvents our normal psychological warning systems.

Yet another technique is one we hear a lot about: phishing. Attackers trick users into revealing their MFA codes by creating convincing yet fake login pages or sending deceptive emails. Once the user enters their credentials and MFA code, the attacker captures this information and gains access to the account.

More sophisticated man-in-the-middle (MitM) attacks are also prevalent. In these attacks, cybercriminals intercept the communication between the user and the authentication server. By doing so, they can capture login credentials and MFA codes, effectively bypassing the security measures.

Token theft is another method where attackers steal session tokens stored on a user’s device. These tokens can be used to authenticate the attacker without needing the MFA code again.

So how can we protect ourselves?

To combat MFA circumvention, organizations can implement several strategies:

  1. Educate users: Regular training on recognizing phishing attempts and the importance of not approving unexpected MFA requests can reduce the risk of MFA fatigue and phishing attacks.
  2. Use robust MFA methods: Implementing hardware tokens or biometric factors can provide stronger security compared to SMS-based MFA, which is more susceptible to interception. Ensuring your push-based MFA requires verification of something the user can see on the login screen helps combat MFA fatigue.
  3. Monitor for unusual activity: Continuous monitoring of login attempts and user behaviour can help detect and respond to suspicious activities promptly.
  4. Implement conditional access policies: Restrict access based on factors such as location, device, and risk level to add an extra layer of security.
  5. Employ advanced web-filtering and link-scanning systems: Utilizing these tools can help prevent users from accessing malicious websites and clicking on harmful links, thereby reducing the risk of phishing and MitM attacks.
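The number-matching idea in point 2 can be sketched in a few lines of Python (a toy illustration only, not any vendor's actual implementation): the login screen displays a short code that the user must re-enter in the authenticator app, so blindly tapping "Approve" on an attacker-triggered prompt fails.

```python
import secrets

def new_challenge() -> str:
    """Generate a two-digit code to display on the login screen."""
    return f"{secrets.randbelow(100):02d}"

def approve(displayed: str, entered: str) -> bool:
    """Approve the push only if the user typed the code shown on screen."""
    return secrets.compare_digest(displayed, entered)

challenge = new_challenge()
print(approve(challenge, challenge))  # user typed the displayed code: True
print(approve(challenge, "xx"))       # blind approval / wrong code: False
```

A fatigued user who never saw the login screen has nothing to type, so the spammed prompt cannot be approved by reflex.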

By understanding these attack vectors and adopting a defence-in-depth approach, organisations can significantly enhance their defence against MFA bypass attacks.


References

1 - https://www.microsoft.com/en-us/security/blog/2019/08/20/one-simple-action-you-can-take-to-prevent-99-9-percent-of-account-attacks/

2 - Article originally posted for iT360 - https://it360.co.nz/mfa-critical-not-invincible/

22 July 2024

To Update or Not to Update - What is the insurance impact?

Will insurance still pay out if we delay cybersecurity updates?

Deployment of security updates will be governed by those liable for the risks - the insurance companies.

The world experienced a global IT outage last Friday (19 July 2024) and into the weekend, caused by a defective update to cybersecurity firm CrowdStrike's Falcon software on Windows computers. Countless articles and blog posts have raised questions around what Dr Shumi Akhtar, Associate Professor at the University of Sydney, has called the "fragility of our heavily digitised world." The conversation has inevitably included how we should be managing security updates, especially their rapid, often untested deployment to production systems.

In the ever-escalating conflict between shadowy hackers and those tasked with keeping our digital systems secure, we are advised to apply updates for our security tools as soon as possible, to minimise the chance of hackers exploiting new vulnerabilities or new hacking techniques. With the recent fault having been caused by a software update, many are asking whether security updates should be deployed so rapidly.

What if we don't deploy security software updates as rapidly as advised?

There is a precedent for delaying updates, born of an era when it felt like Windows updates caused as many issues as they resolved. We implement such things as pilot groups or update rings, which enable a managed deployment of updates to less critical devices - our guinea pigs or crash test dummies. Once stability and reliability are assured, we allow our more sensitive and critical systems to be updated.

So this begs the question...

Why don't we roll out security software updates in a tiered manner, like Windows updates?

At this point, we have to weigh the risks of doing what we've always done - deploying security updates as soon as possible - against the alternative: delaying updates to critical systems until they are proven elsewhere. Friday's events demonstrate the risks and costs of immediate deployment: a faulty update has global impact, across a vendor's entire customer base. If we delay deployment, we risk leaving our organisations open to cyber attack.

Who will decide which approach we should take?

We all will have opinions on which approach is best and whether there are better ways to manage things, but ultimately...

Decisions will be governed by those who are most at risk: company directors and, in turn, their protectors, the insurance companies.

We are yet to see guidance from insurance companies on how they view these two opposing risks, failed update vs cyber attack, and which they deem to be most critical. For now, most insurance policies require demonstrating competent management of security systems in order to make a successful claim in the event of a cyber attack. If we delay security updates, and we are attacked in a way that could have been prevented by the update's application, have we demonstrated competence?

If we delay security updates, will our cybersecurity insurance policies still deliver?

For now, it would seem, we are best to maintain the status quo or accept the risk of a breach being an uninsured event.

Moving forward, insurance companies may take a different view and the question we might need to ask next could be...

If security updates cause business interruption, will insurance companies continue to provide protection?

For now we await a response from the companies that have provided insurance to those impacted by this recent global outage.

  • What will be the priority moving forward, protection from cyber attack, or more careful deployment of protections?
  • What new responsibilities will fall on companies to ensure they are covered for both cyber attacks and business disruption from faulty updates?

Speak to your insurance advisor and find out what their current position is, and be ready for that to change in the future.

DISCLAIMER: I'm no insurance expert and may be completely missing something in this conversation, so I'd love to hear from those better informed so that I and the technology community at large can make more educated decisions regarding the protection of their systems, data, employees, and customers.

Originally posted on my LinkedIn page: https://www.linkedin.com/pulse/update-what-insurance-impact-james-robinson-dgwpc/

 

28 September 2021

Identify which wireless access point you are connected to

Sometimes you want to know which wireless access point (AP) you are connected to. From a Windows laptop you can do this by running the following command:

 netsh wlan show interfaces

In the output, look for the BSSID, which will be in the format of a MAC address. Then look for this ID either on the AP itself, if you are able to physically access it, or via whatever wireless controller software you use to manage your APs.

29 July 2021

You Build It, You Run It

A key tenet of the agile, DevOps-focused project-to-product transformation mindset is the concept of "You build it, you run it". That is, if you and your team were responsible for creating the thing - typically a bit of code in the DevOps world - the same team needs to be responsible for running it on a day-to-day basis. This probably works in the case of software, because most of the "running" can be automated and therefore also turned into code. "Running" the code then simply involves continuous improvement of that code, to make it self-reliant and resilient. It would make sense for the team who knows the core code, i.e. what was "built", to also be intimately acquainted with the peripheral code, i.e. the code around the outside that automates the "running" of the core code, making it more resilient and reliable.

However, how does this idea translate to less automated products? Traditionally, highly skilled teams will build a product and leave the running, along with some heavy documentation, to an often lower-skilled team of individuals. The highly skilled teams are your "developers", whether or not it is software they are developing. The teams left to do the often relatively mundane running tasks are your "operators". DevOps seeks to bring these development and operations teams together, hence "You build it, you run it." Can we translate that model to more traditional products, or does the success of the "You build it, you run it" concept depend heavily on whether running the product can be automated?

Can we keep our highly skilled developers involved in the sometimes mundane running of their products, without risking boredom and frustration? Perhaps it is this very boredom and frustration that inspires them into continuous improvement of the products. This might be wishful thinking.

Even better, how do we up-skill our operators to a level where they can further develop the products they are responsible for operating? This feels like a more uplifting path.

28 July 2021

Product Management - Making Everyone's Day Better

I've been in a product management role for the last nine months and it has been a paradigm-shifting experience. Due to an unrelated opportunity I'll be moving on from this role, but someone who was interested in potentially replacing me asked me what I did. This is how I, as someone who is still figuring it out myself, described it to him. A seasoned product manager would probably tear it to pieces, but I can only describe what I've managed to gather in a short nine-month stint.

The Work

The analogy I like to use is a 600ml bottle of Coke. It is by no means a perfect analogy, but it helps create an initial picture.

The Product Manager's job is to weave between the following activities in any order at any time:

  • Work out if the client truly wants Coke

Assuming they do

  • Work out if 600ml is the optimal size (for client satisfaction, frequency of purchase, profitability, etc. All sorts of factors could help define "optimal")

Assuming the above are both true

  • Work with the marketing team to make sure the advertising material says "600ml Coke"
  • Work with the delivery (manufacturing) team to make sure the label says "600ml Coke"
  • Work with the delivery team to make sure the bottle is in fact 600ml
  • Work with the delivery team to make sure the bottle is in fact filled with Coke (few people want 800ml of Fanta, if they're expecting 600ml of Coke)

Then

  • Work out if the client still wants Coke. If yes, rinse and repeat above. 😊

If client wants "Fanta" or even just "Vanilla Coke"

This is an area I didn't get too much into, but we'd then start looking at

  • Work out whether we should run the products side-by-side
  • If not, work out plans for sunsetting the old product and bringing new ones online 

I think there are so many more intricate pieces to each of these items, and probably some major items missing from this list, but these are the things I had the privilege to work on.

Purpose

The purpose of all of the work above is to optimise the client experience - that is, just make it awesome for them. That leads to repeat business, referrals, improved relationships, increased profitability, and so on. Ultimately, our work tends to improve the experience of everyone involved, which is potent motivation.

  • The finance team find things easier to bill, as everything is well set up within the billing system.
  • The sales teams find things easier to sell, as they have collateral, know what they're selling (and what they're not), and know how out-of-product queries can be addressed quickly and easily. Their customers are happier and therefore like them more.
  • The delivery teams find things easier to deliver, as they know what is expected, what tools are available, what's included, what is not, where and how to find additional support.
  • The leadership team's job is easier as they have more engaged, more satisfied staff to manage.
  • Execs and owners are happier as more efficiently delivered products are more profitable (assuming we have priced right!), and staff are happier and more productive.

So you are a highly appreciated team member, which is an intense buzz.

Approach

So this is going to sound super lazy, but I found the key to the product management role was to try to do as little of "the work" as possible. This may be more of a goal than a reality.

Your role is to inspire, delegate, excite, support. You're managing people that you have no authority over, which is a thought-provoking challenge. However, people find what you're doing so helpful that they are almost clamouring to assist. You need to get teams to own their part of the bigger picture, as much as possible. For example, technical teams need to keep technical wikis and processes up to date, the marketing team has to create the collateral, and the sales team have to engage with what has been developed to keep up to speed with what the organisation can deliver. All the while, you tie it together using various tools such as Product wikis, presentations, communications, reinforcement of consistent language, and more.

You're a conductor. A coordinator. A consultant. A facilitator.

Summary

So: make the customer's experience excellent by inspiring teams of people to make their individual pieces the best they can be in service of that goal.

25 May 2020

Block port 53 (DNS) in an Azure Network Security Group

At times, you may want to block all outgoing traffic from a VNet in Azure. You configure a Network Security Group (NSG) with a Deny All outgoing policy. Upon testing (because you always test...right?), you find that DNS and the Windows Licensing Key Management Service are still able to traverse the NSG.

What's up with that!?

There's actually also a third service (the Azure Instance Metadata Service, IMDS) that can do the same. According to Microsoft:

"Basic infrastructure services like DHCP, DNS, IMDS, and health monitoring are provided through the virtualized host IP addresses 168.63.129.16 and 169.254.169.254. These IP addresses belong to Microsoft and are the only virtualized IP addresses used in all regions for this purpose. Effective security rules and effective routes will not include these platform rules."

However, the news is not all bad. The same article states that:

"To override this basic infrastructure communication, you can create a security rule to deny traffic by using the following service tags on your Network Security Group rules: AzurePlatformDNS, AzurePlatformIMDS, AzurePlatformLKM"

So there you have it: now you can REALLY block all outgoing traffic from your VNet. Oh wait...there still isn't one for DHCP. 🤷‍♂️

A word of warning, these services are used to provide key support to your Azure workloads, so proceed with caution.
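If you want to apply one of those deny rules from the command line, the shape of it with the Azure CLI looks something like the sketch below. The resource group and NSG names are placeholders, and you should confirm the exact syntax and a suitable priority against current Azure documentation.

```shell
# Sketch: deny outbound traffic to the Azure platform DNS service
# using the AzurePlatformDNS service tag. Names are placeholders.
az network nsg rule create \
  --resource-group my-rg \
  --nsg-name my-nsg \
  --name DenyAzurePlatformDNS \
  --priority 100 \
  --direction Outbound \
  --access Deny \
  --protocol '*' \
  --source-address-prefixes '*' \
  --source-port-ranges '*' \
  --destination-address-prefixes AzurePlatformDNS \
  --destination-port-ranges '*'
```

The AzurePlatformIMDS and AzurePlatformLKM tags can be used the same way in additional rules.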

References

https://docs.microsoft.com/en-us/azure/virtual-network/service-tags-overview#available-service-tags
https://docs.microsoft.com/en-gb/azure/virtual-network/security-overview#azure-platform-considerations
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/instance-metadata-service