Omnipresent AI cameras will ensure good behavior, says Larry Ellison
Larry Ellison proposed AI surveillance for law compliance, suggesting it would improve citizen behavior. His vision raises privacy concerns and requires advanced hardware, with $100 billion in AI investments expected over the next five years.
Larry Ellison, co-founder of Oracle, recently shared his vision for a future dominated by AI surveillance during a company financial meeting. He proposed a system where AI would monitor citizens through a network of cameras and drones, ensuring compliance with laws by both police and the public. Ellison suggested that this constant oversight would encourage better behavior among citizens, as everything would be recorded and reported. He also envisioned AI-controlled drones replacing police vehicles in high-speed pursuits. While Ellison framed this surveillance as beneficial, his comments raised concerns about privacy and civil liberties, echoing themes from George Orwell's "1984." The implementation of such systems would require significant advancements in AI hardware, particularly GPUs, which are currently in high demand. Ellison noted that major investments in AI are expected to reach $100 billion over the next five years, as companies race to integrate AI into various applications. This vision of AI surveillance parallels existing systems in places like China, where extensive monitoring has raised alarms about digital totalitarianism.
- Larry Ellison envisions a future with AI surveillance to ensure law compliance.
- He suggests that constant monitoring will encourage better behavior among citizens.
- Concerns about privacy and civil liberties are raised by his proposals.
- The implementation of AI surveillance systems requires advanced hardware, particularly GPUs.
- Significant investments in AI are anticipated, with companies expected to spend $100 billion in the next five years.
Related
Americans Are Uncomfortable with Automated Decision-Making
A Consumer Reports survey shows 72% of Americans are uncomfortable with AI in job interviews, and 66% with its use in banking and housing, highlighting concerns over transparency and data accuracy.
AI is growing faster than companies can secure it, warn industry leaders
At the DataGrail Summit 2024, leaders stressed that AI's rapid growth outpaces security measures, urging equal investment in safety systems to mitigate risks and prepare for future developments.
Elon Musk and Larry Ellison Begged Nvidia CEO Jensen Huang for AI GPUs at Dinner
Larry Ellison and Elon Musk met Nvidia's CEO to request more AI GPUs. Oracle plans a Zettascale supercluster with 131,072 GPUs and aims to secure power with nuclear reactors.
Omnipresent AI cameras will ensure good behavior, says Larry Ellison
Larry Ellison proposed AI surveillance for law compliance, suggesting it would improve citizen behavior. His vision includes AI drones replacing police vehicles, raising privacy concerns amid high demand for AI hardware.
Ellison declares Oracle 'all in' on AI mass surveillance
Larry Ellison asserts Oracle is set to lead in AI technologies for mass surveillance, proposing continuous police monitoring and citizen behavior tracking, while emphasizing Oracle's networking architecture for AI infrastructure.
He's not even trying to frame it as "the weight of crime will be lifted from the people so they can prosper". It's "citizens will be on their best behavior". I've got a suspicion that he envisions a separate world for himself that does not involve such monitoring.
The state in this case believes this is "good behaviour", but this would be shocking to most HN readers. This is a good example of why you should never give one person or organisation too much power.
Who gets to define what "Good behaviour" is?
If democratic self-governance relies on an informed citizenry, Penney wrote, then “surveillance-related chilling effects,” by “deterring people from exercising their rights,” including “…the freedom to read, think, and communicate privately,” are “corrosive to political discourse.”
.. “Governments, of course, know this. China.. wants people to self-censor, because it knows it can’t stop everybody. The idea is that if you don’t know where the line is, and the penalty for crossing it is severe, you will stay far away from it.. if your goal is to control a population,” Schneier says, “mass surveillance is awesome.”
.. The social challenge now, [Zuboff] says, is to insist on a new social contract.. “We have to create the political context in which privacy can be successfully defended, protected, and affirmed as a human right. Then we’d have a context in which the privacy battles can be won.”
Not the politics, not what he really thinks: a reason that matches a market opportunity he thinks Oracle can seize.
Technology this powerful is the bedrock of a successful hypothetical totalitarian state, a big prerequisite. What do we do once it's within reach?
"Every police officer is going to be supervised at all times, and if there's a problem, AI will report the problem and report it to the appropriate person."
Mercy without justice is the mother of dissolution.
- St. Thomas Aquinas
https://www.nytimes.com/2005/09/12/technology/oracles-chief-...
Or is he only worried about the behavior of people who aren't billionaires?
- ALL the focus is on asset allocation and surveillance of ordinary people, nothing on who controls AND OWNS the controllers, or the "AI" being mentioned;
- smartphones and mobile connectivity are the key to making people pay for the surveillance, obviously for the profit of the controller.
Instead of FTTH and stable, high-performance links for WFH, the focus is on mobile surveillance. I'm curious why we IT workers do not agree on a MASSIVE, WORLDWIDE STRIKE demanding mandatory WFH for all eligible jobs ("we run the nervous system of society, we build it, we will not be flesh-based bots of some manager in a smart-city lager"), and a focus on desktop computing instead of mobile, because that is where everything else happens.
It seems the important question is whether such systems will define what is right, or whether they will support the population in decentralized social decision-making and synchronization, the same way capitalism supports distributed allocation decisions.
... and certainly won't be throwing IoT remotes against the wall in anger, publicly berating employees, or storming off angrily in their F1s, etc, etc. Hypocritical jerk.