August 27th, 2024

Microsoft security tools questioned for treating employees as threats

A report by Cracked Labs raises concerns that Microsoft's and Forcepoint's security tools normalize intrusive employee surveillance and blur the line between security and monitoring, and calls for a reevaluation of the relevant legal frameworks.

A report by Cracked Labs highlights concerns regarding Microsoft and Forcepoint's security tools, which are designed to enhance cybersecurity but may also normalize intrusive workplace surveillance. The report, titled "Employees as Risks," argues that such software treats employees as potential threats, blurring the lines between security measures and employee monitoring. It details how tools like Microsoft Sentinel and Purview can track extensive employee activities, including communication and behavior, raising ethical questions about the extent of surveillance. The report emphasizes that while organizations may use these tools for legitimate purposes, they can foster mistrust and lead to inaccuracies, such as false positives in risk assessments. Legal experts express concern that current data protection laws may not adequately address the implications of such surveillance technologies, which can infringe on employees' privacy and autonomy. The report calls for a reevaluation of workplace surveillance practices and the legal frameworks governing them, particularly in light of the growing sophistication of monitoring technologies.

- Microsoft and Forcepoint's security tools may normalize intrusive employee surveillance.

- The report raises ethical concerns about treating employees as potential threats.

- Current data protection laws may not sufficiently address the implications of workplace surveillance.

- Surveillance technologies can lead to mistrust and inaccuracies in employee monitoring.

- Legal experts advocate for a reevaluation of workplace surveillance practices and regulations.

Related

Windows: Insecure by Design

Ongoing security issues in Microsoft Windows include vulnerabilities like CVE-2024-30080 and CVE-2024-30078, criticized for potential remote code execution. Concerns raised about privacy with Recall feature, Windows 11 setup, and OneDrive integration. Advocacy for Linux desktops due to security and privacy frustrations.

Windows: Insecure by Design

The article discusses ongoing security issues with Microsoft Windows, including recent vulnerabilities exploited by a Chinese hacking group, criticism of continuous patch releases, concerns about privacy invasion with Recall feature, and frustrations with Windows 11 practices. It advocates for considering more secure alternatives like Linux.

Why privacy is important, and having "nothing to hide" is irrelevant (2016)

Privacy is crucial for democracy, eroded by global surveillance. "Nothing to hide" argument debunked. Mass surveillance harms freedom, leads to self-censorship, and risks misuse. Protecting personal data is vital.

Technology's grip on modern life is pushing us down a dimly lit path

A global technology outage caused by a CrowdStrike software update exposed vulnerabilities in interconnected systems, prompting calls for a balance between innovation and security to enhance digital resilience.

Every Microsoft employee is now being judged on their security work

Microsoft has prioritized security for all employees, affecting performance evaluations, promotions, and bonuses. Employees must integrate security into their work, while the Secure Future Initiative enhances overall security measures.

7 comments
By @Nerada - 8 months
"Employee surveillance" sounds a lot more nefarious than the reality of these systems for most organizations.

Your network admin has had access to the proxy, and by extension all your browsing history, since forever. Now your UEBA does that too, but mostly it just sits there and flags things like a user who normally hits a single host suddenly hitting 300 hosts on the network, or a user whose average upload is 500MB/week pushing 200GB in a single session.

Very few people care if you're using the corporate network to listen to YouTube Music (or even looking for other jobs), most just want to be notified of data exfiltration, compromised accounts, or malicious network activity.
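The baseline-versus-deviation check the commenter describes can be sketched roughly like this (hypothetical threshold and made-up numbers; real UEBA products use far richer behavioral models):

```python
# Minimal sketch of a per-user baseline anomaly check, the kind of signal a
# UEBA raises. The 3-sigma threshold and all figures here are illustrative.
from statistics import mean, stdev

def flag_anomaly(history_mb, current_mb, sigma=3.0):
    """Flag when current activity deviates wildly from the user's own baseline."""
    mu, sd = mean(history_mb), stdev(history_mb)
    if sd == 0:
        return current_mb > mu
    return (current_mb - mu) / sd > sigma

# A user averaging ~500 MB/week who suddenly uploads 200 GB in one session:
weekly_uploads = [480, 510, 495, 520, 500, 505]  # MB, made-up baseline
print(flag_anomaly(weekly_uploads, 200_000))     # True: flagged
print(flag_anomaly(weekly_uploads, 530))         # False: within normal range
```

The point of keying on each user's own history, rather than a global rule, is exactly what the comment notes: nobody gets flagged for streaming music, only for behavior wildly out of line with their own baseline.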

By @Animats - 8 months
This is the sort of thing that makes me miss the classified world.

Counterintelligence people definitely view employees as risks. But they're not your boss. They work for a different organization entirely. They're watching your boss, and your boss's boss, too. They only care about threats to national security. If they find other things, they log them, but don't tell your management. They have nothing to do with performance evaluation. The three-letter agencies worked out the rules on this stuff decades ago.

By @dugite-code - 8 months
If you have paid any attention to cyber security (or, well, anything) in the last 5-10 years, this should be expected.

"Insider threats" are typically the one group that any security firm can actually do anything about in an active manner. Every other threat group comes at you, not the other way around.

By @crvdgc - 8 months
> Both suggest targeting "disgruntled employees" and those with bad performance reviews as potential insider threats – Forcepoint even mentions "internal activists" and those who had a "huge fight with the boss" as risks.

> Forcepoint offers to assess whether employees are in financial distress, show "decreased productivity" or plan to leave the job, how they communicate with colleagues and whether they access "obscene" content or exhibit "negative sentiment" in their conversations.

This goes far beyond normal surveillance, which is more technical in nature. It's trying to combine mind reading and Minority Report to enforce a Stalinist level of thought control. How much of it can actually be delivered remains to be seen, though.

By @SoftTalker - 8 months
A good way to avoid malicious insiders is to pay well enough that employees won’t risk their jobs by violating the trust placed in them. That said, there’s a place for monitoring like this, to detect compromised accounts or malware activity.
By @michaelmrose - 8 months
Any test for a very rare condition risks an unreasonably high number of false positives when applied to a large population, even with a negligible false-positive rate. This is especially bad with a squishy, non-scientific topic.

If you have 50,000 employees and screen annually for a risk that is 1 in 1M with a 5% false positive rate, you are going to be very disappointed when over the next decade it identifies 25,000 would-be shooters while you have zero actual active shooters. Even better, you will probably start disregarding the test entirely and miss it if it actually happens.

As awesome as that is, the fact that skynet is always watching will probably cause people to manage their workplace personas to a psychotic degree, which will surely ratchet workplace stress up to new highs. Deprived of actual data on what triggers the eye of sauron, 100 wrong theories about how to avoid it will proliferate, and the studied population will both diverge from the norm the system was designed to operate on and become progressively worse.

A few years later, a study will prove that the AI inadvertently learned to discriminate against minorities, women, or people in other time zones through things the training population did without thinking, and the people pushing it will look like bigots. Instead of ejecting it, we will try to fix it. Either that doesn't work, or if it does, people will accuse skynet of being woke.

By @chris_wot - 8 months
How do they know an employee is in financial distress? Because the company pays them peanuts?