Companies ground Microsoft Copilot over data governance concerns
Many enterprises are pausing Microsoft Copilot implementations due to data governance concerns, with half of surveyed chief data officers restricting use over security issues and complex data access permissions.
Concerns over data governance are leading many large enterprises to pause or restrict the use of Microsoft Copilot tools. Jack Berkowitz, chief data officer of Securiti, reported that about half of the 20-plus chief data officers he surveyed have grounded their Copilot implementations due to security and oversight issues. While Microsoft markets Copilot as a productivity enhancer, the rapid deployment of generative AI has outpaced the establishment of necessary safety protocols. Companies face challenges with complex data access permissions, particularly in environments like SharePoint and Office 365, where sensitive information could be inadvertently summarized and exposed by the AI. Berkowitz emphasized that effective use of Copilot requires clean data and robust security measures, which many organizations lack due to historical data management practices. He noted that while some generative AI applications in customer service have shown positive returns, the overall sentiment among corporate clients is one of caution. To successfully integrate AI tools, companies need to enhance their data governance and observability, ensuring that they understand their data assets and access rights.
- Many enterprises are pausing Microsoft Copilot implementations due to data governance concerns.
- About half of surveyed chief data officers have restricted Copilot use.
- Security issues arise from complex data access permissions in existing systems.
- Effective use of AI tools requires clean data and robust security measures.
- Companies need improved data governance and observability for successful AI integration (a minimal audit sketch follows below).
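As a concrete illustration of what that observability might look like, here is a minimal sketch of an oversharing audit against the Microsoft Graph API. Everything specific to it is an assumption for the example: the DRIVE_ID placeholder, the GRAPH_TOKEN environment variable, and a token that already carries the Files.Read.All scope. It also only walks the root folder of the library; a real audit would recurse into subfolders.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: a Graph access token with Files.Read.All is already available;
# acquiring one (e.g. via MSAL) is out of scope for this sketch.
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
DRIVE_ID = "..."  # hypothetical: the document library to audit

def iter_collection(url):
    """Walk a paged Graph collection, following @odata.nextLink."""
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")

# Flag items whose sharing scope is broader than a direct user grant:
# these are the files Copilot can "legitimately" surface to a wide audience.
for item in iter_collection(f"{GRAPH}/drives/{DRIVE_ID}/root/children"):
    perms = f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions"
    for perm in iter_collection(perms):
        scope = perm.get("link", {}).get("scope")
        if scope in ("anonymous", "organization"):
            print(f"{item['name']}: shared at scope '{scope}' "
                  f"with roles {perm.get('roles')}")
```

Links scoped to "organization" are the interesting case here: nothing is technically misconfigured, yet Copilot can summarize the file for anyone in the tenant.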
Related
US intelligence community is embracing generative AI
The US intelligence community integrates generative AI for tasks like content triage and analysis support. Concerns about accuracy and security are addressed through cautious adoption and collaboration with major cloud providers.
GitHub Copilot is not infringing your copyright
GitHub Copilot, an AI tool, faces controversy for using copyleft-licensed code for training. Debate surrounds copyright infringement, AI-generated works, and implications for tech industry and open-source principles.
GitHub Copilot – Lessons
Siddharth discusses GitHub Copilot's strengths in pair programming and learning new languages, but notes its limitations with complex tasks, verbosity, and potential impact on problem-solving skills among new programmers.
If you give Copilot the reins, don't be surprised when it spills your secrets
Zenity's CTO revealed serious security flaws in Microsoft's Copilot and Copilot Studio, highlighting insecure default settings and risks of data breaches, while Zenity offers tools to test vulnerabilities.
Microsoft's Copilot falsely accuses court reporter of crimes he covered
Microsoft's Copilot falsely accused journalist Martin Bernklau of serious crimes, generating personal details and allegations. Despite attempts to remove the claims, they reappeared, prompting Bernklau to consider legal action.
The concern is that most corporate implementations of network roles and permissions are not up to date or accurate, so Copilot will show employees data they should not be allowed to see. Salary info is an example.
Basically, Copilot is following "the rules" (the technical settings), but corporate IT teams have not kept those technical rules in sync with the business rules. So they need to pause Copilot until they get their own rules straight.
Edit to add: if your employer has Copilot turned on, maybe try asking for sensitive stuff and see what you get. ;-)
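To make the gap between technical rules and business rules concrete, here is a hedged sketch that diffs an HR-maintained allow-list against what Microsoft Graph actually reports for one file. The drive and item IDs, the allow-list addresses, and the assumption that granted identities expose an email field are all hypothetical; real identity resolution against the directory is messier than this.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

DRIVE_ID = "..."  # hypothetical
ITEM_ID = "..."   # hypothetical: the salary spreadsheet

# The business rule: who HR says may read this file (made-up addresses).
ALLOWED = {"hr-lead@example.com", "payroll@example.com"}

resp = requests.get(
    f"{GRAPH}/drives/{DRIVE_ID}/items/{ITEM_ID}/permissions",
    headers=HEADERS,
)
resp.raise_for_status()

# The technical rule: who the ACLs actually grant access to.
for perm in resp.json().get("value", []):
    if perm.get("link"):
        # A sharing link bypasses any per-user allow-list entirely.
        print(f"drift: sharing link with scope '{perm['link'].get('scope')}'")
        continue
    user = perm.get("grantedToV2", {}).get("user", {})
    # NOTE: whether 'email' is populated depends on the identity type;
    # a real audit would resolve the user id against the directory.
    email = (user.get("email") or "").lower()
    if email and email not in ALLOWED:
        print(f"drift: {email} holds roles {perm.get('roles')}")
```

In both drift cases Copilot is behaving correctly; it is the allow-list that nobody ever encoded into the ACLs.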
Would you enable a search indexer on all your corporate data that doesn't have any way to control which documents are returned to which users? Probably not.
It's a known issue with SharePoint going back years and has various solutions[0] such as document level access controls or disabling indexing of content.
If we called it what it is, though, the C-levels probably wouldn't even care. They never cared about enterprise document search before, and they certainly didn't "pivot" to enterprise document search or report the progress of an enterprise document search implementation to the board.
0: https://sharepointmaven.com/3-ways-prevent-documents-appeari...
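For the disable-indexing option specifically, a library can be excluded from the search crawl (and therefore from Copilot's retrieval) by setting its NoCrawl flag. Below is a minimal sketch using the classic SharePoint REST API; the site URL and list title are made up, and it assumes you already hold a bearer token that is valid for the site.

```python
import os
import requests

SITE = "https://contoso.sharepoint.com/sites/hr"  # hypothetical site
LIST_TITLE = "Salary Reviews"                     # hypothetical library

headers = {
    # Assumption: an OAuth token valid for SharePoint is already in hand.
    "Authorization": f"Bearer {os.environ['SP_TOKEN']}",
    "Accept": "application/json;odata=verbose",
    "Content-Type": "application/json;odata=verbose",
    "IF-MATCH": "*",
    "X-HTTP-Method": "MERGE",  # the REST idiom for a partial update
}

# NoCrawl=True tells the search crawler to skip this list entirely.
body = {"__metadata": {"type": "SP.List"}, "NoCrawl": True}

resp = requests.post(
    f"{SITE}/_api/web/lists/getbytitle('{LIST_TITLE}')",
    headers=headers,
    json=body,
)
resp.raise_for_status()  # expect 204 No Content on success
```

Document-level access control is the better fix where it is feasible; NoCrawl is the blunt instrument for libraries that should never have been searchable in the first place.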
It's the same problem with a lot of the AI tools right now: using them on your code, letting them look at your documents, and so on. Unless you self-host, or use a 'private' service from Azure or AWS (which they say is safe...), who knows where that information ends up.
This is a major leak waiting to happen. It scares me to think what kind of data has been fed into ChatGPT or some code tool and is now just sitting in a log somewhere, in plaintext, waiting to be found later.
Everything else is wishful thinking, like trying to keep a secret whilst only telling one or two friends.
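Short of self-hosting, about the only client-side mitigation is scrubbing anything recognizable before it leaves the machine. A crude sketch of that idea follows; the patterns and the incident_notes.txt file are purely illustrative, and a real pipeline would use a dedicated secret scanner rather than three regexes.

```python
import re

# Illustrative patterns only: real secret scanning needs entropy checks,
# provider-specific token formats, and so on.
PATTERNS = [
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "[AWS_KEY]"),  # AWS access key ID
    (re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]+?"
                r"-----END [A-Z ]*PRIVATE KEY-----"), "[PRIVATE_KEY]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def scrub(text: str) -> str:
    """Replace recognizable secrets and PII with placeholders."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

with open("incident_notes.txt") as f:  # hypothetical input
    prompt = scrub(f.read())
# Only now hand `prompt` to whatever hosted model is in use.
```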
LLM-based AI is technically banned at my work, for somewhat good reason: most of our work involves confidential, controlled, or classified data. Even so, I've seen a lot of people, especially the juniors, still using ChatGPT every day.
Also noticed the UI has gotten a lot slower. I'm guessing the two things are related.
If my company wasn't locked into "Microsoft everything" this would push me the last inch to ditch VS completely. I already did at home.
Want to build AI tooling that leverages user data? Great!
- Does it gather their data for targeted ads? Neutral.
- Does it gather their data to be resold to others? -100 points; pay more tax, you're rent-seeking.
- Does it help the user not get phished? +100 points; you're actually offering something of value.
I don't believe humanoid robots in factories are as helpful, or nearly as profitable, as humanoid robots that will do my laundry for me.