Milei's government will monitor social media with AI to 'predict future crimes'
Javier Milei's administration in Argentina has launched an Artificial Intelligence Unit for security, raising concerns about privacy and civil liberties, as experts question the effectiveness and oversight of such surveillance initiatives.
Javier Milei's administration in Argentina has established a new Artificial Intelligence Unit Applied to Security, which will monitor social media and other online platforms to predict and prevent crimes. The unit will also analyze real-time security camera footage and deploy drones for aerial surveillance. The initiative aims to make law enforcement more efficient by using machine learning algorithms to analyze historical crime data and identify potential threats. Staffed by police and security agents, the unit will focus on detecting criminal activity, tracking the movements of criminal groups, and responding to emergencies.
However, experts and civil rights organizations have raised concerns about the implications of this surveillance program on privacy and civil liberties. Critics argue that the initiative contradicts constitutional rights and could lead to illegal intelligence operations disguised as technological advancements. They highlight the risks of inadequate oversight and the potential misuse of collected data, which could target academics, journalists, and activists. The Center for Studies on Freedom of Expression and Access to Information has noted that similar practices in Latin America often lack transparency and accountability. Additionally, the predictive capabilities of AI in crime prevention have been questioned, with experts warning against relying on technology that has historically failed in this area. Overall, the establishment of this unit has sparked a significant debate about the balance between security and individual rights in Argentina.
Related
We Need to Control AI Agents Now
The article by Jonathan Zittrain discusses the pressing necessity to regulate AI agents due to their autonomous actions and potential risks. Real-world examples highlight the importance of monitoring and categorizing AI behavior to prevent negative consequences.
Can AI Be Meaningfully Regulated, or Is Regulation a Deceitful Fudge?
Governments consider regulating AI due to its potential and risks, focusing on generative AI controlled by Big Tech. Challenges include balancing profit motives with ethical development. Various regulation models and debates on effectiveness persist.
Argentina's economy is growing beyond expectations
Argentina's economy grows unexpectedly under President Milei, surpassing predictions. IMF anticipates a rebound in 2025 despite concerns over rising poverty rates due to austerity measures and inflation. Milei's strategies face economic and social challenges, risking voter support erosion.
The $100B plan with "70% risk of killing us all" w Stephen Fry [video]
The YouTube video discusses ethical concerns about AI's deceptive behavior. Stuart Russell warns passing tests doesn't guarantee ethics. Fears include AI becoming super intelligent, posing risks, lack of oversight, and military misuse. Prioritizing safety in AI progress is crucial.
From sci-fi to state law: California's plan to prevent AI catastrophe
California's SB-1047 legislation aims to enhance safety for large AI models by requiring testing and shutdown capabilities. Supporters advocate for risk mitigation, while critics warn it may stifle innovation.
Imagine a state that does not sufficiently punish certain white-collar crimes. Over time, white-collar crime degrades the efficiency of its capitalist system to the point that it becomes impossible to compete and make a living without also resorting to white-collar crime.
Now imagine that the system, by its design, forces more and more people into crime just to survive... And then consider an AI that tries to predict, and possibly punish, crimes before they happen... Surely the AI would learn that those most harmed by the socio-economic system are the most likely to turn to crime. Systemic discrimination would become the main predictor of crime. The AI would identify flaws in the socio-economic system only to punish the people most harmed by those flaws, as opposed to using those patterns to fix the system itself.
Very dystopian.
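The feedback loop described in the comment above can be illustrated with a toy simulation. This is a minimal sketch with invented numbers and group labels, not a model of any real policing system: two groups offend at the same underlying rate, but one is policed far more heavily, so a naive risk model trained on historical arrest counts learns policing intensity rather than criminality.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical parameters: both groups commit "crime" at the same true rate,
# but the disadvantaged group is patrolled (and thus arrested) far more often.
TRUE_CRIME_RATE = 0.05
POLICING_RATE = {"advantaged": 0.2, "disadvantaged": 0.8}

def simulate_arrests(n_per_group=10_000):
    """Generate a synthetic 'historical arrest' dataset."""
    arrests, population = Counter(), Counter()
    for group, patrol in POLICING_RATE.items():
        for _ in range(n_per_group):
            population[group] += 1
            committed = random.random() < TRUE_CRIME_RATE
            # An offense only enters the data if police happened to observe it.
            if committed and random.random() < patrol:
                arrests[group] += 1
    return arrests, population

arrests, population = simulate_arrests()

# A naive "predictive policing" model scores risk by historical arrest frequency.
risk = {g: arrests[g] / population[g] for g in population}
print(risk)
```

Despite identical true offense rates, the disadvantaged group's risk score comes out several times higher, because the data encodes who was watched, not who offended. Deploying more patrols where the score is high would generate more arrests there, reinforcing the skew on the next training pass.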