I Am Tired of AI
The author criticizes the overuse of AI in software testing, emphasizing the need for human expertise, raising concerns about AI-generated content quality, and advocating for a cautious approach to AI applications.
Read original article
The author expresses frustration with the pervasive use of artificial intelligence (AI) in various fields, particularly in software testing and development. They acknowledge the potential benefits of AI but criticize its overhyped marketing and the tendency for AI-generated solutions to prioritize speed over quality. Drawing from 18 years of experience in test automation, the author emphasizes that effective testing requires time, experience, and a solid understanding of programming principles, which AI tools often overlook. They also highlight concerns about the quality of conference proposals increasingly generated by AI, arguing that such proposals lack individuality and fail to showcase the unique insights of the authors. The author believes that relying on AI for creative processes diminishes the emotional depth found in human-created art, music, and literature. They point out the societal fears surrounding job displacement due to AI, the questionable return on investment for companies investing in AI, and the environmental impact of AI technologies. While acknowledging that AI can be beneficial in specific areas, such as healthcare, the author advocates for a more cautious and discerning approach to its application, expressing a desire to see less AI-generated content in various domains.
- The author is critical of the overuse and marketing of AI in software testing and development.
- They emphasize the importance of human expertise and experience in effective test automation.
- Concerns are raised about the quality and originality of AI-generated conference proposals.
- The emotional impact of human-created art is contrasted with AI-generated content.
- The author calls for a more cautious approach to AI, highlighting its potential downsides.
Related
There's Just One Problem: AI Isn't Intelligent, and That's a Systemic Risk
AI mimics human intelligence but lacks true understanding, posing systemic risks. Over-reliance may lead to failures, diminish critical thinking, and fail to create enough jobs, challenging economic stability.
Artificial Intelligence Cheapens the Artistic Imagination
The rise of AI in visual arts may lead to significant job losses for artists, raising concerns about creativity's value and risking cultural depth as machines dominate creative processes.
Thoughts while watching myself be automated
The author reflects on AI's rapid advancements, expressing concerns about its impact on creativity, accuracy, and emotional depth, while emphasizing the need for human oversight in the creative process.
Thoughts while watching myself be automated
The author reflects on AI's potential in writing, expressing concerns about its ability to replicate creativity and accuracy, while emphasizing the need for human oversight in AI-generated content.
The Continued Trajectory of Idiocy in the Tech Industry
The article critiques the tech industry's hype cycles, particularly around AI, which distract from past failures. It calls for accountability and awareness of ethical concerns regarding user consent in technology.
Enough billions of dollars have been spent on LLMs that a reasonably good picture of what they can and can't do has emerged. They're really good at some things, terrible at others, and prone to doing something totally wrong some fraction of the time. That last limits their usefulness. They can't safely be in charge of anything important.
If someone doesn't soon figure out how to get a confidence metric out of an LLM, we're headed for another "AI Winter". Although at a much higher level than last time. It will still be a billion dollar industry, but not a trillion dollar one.
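A crude building block for such a metric already exists: token log probabilities. Below is a minimal Python sketch, assuming the OpenAI client, that treats the mean token probability as a (weakly calibrated, far from sufficient) confidence proxy:

```python
# Crude per-answer confidence proxy: average token log-probability.
# A sketch only -- logprobs measure the model's confidence in its
# phrasing, not its facts, which is exactly the gap the comment raises.
import math
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What year did the Berlin Wall fall?"}],
    logprobs=True,
)

tokens = resp.choices[0].logprobs.content
avg_logprob = sum(t.logprob for t in tokens) / len(tokens)
print(f"answer: {resp.choices[0].message.content}")
print(f"mean token probability: {math.exp(avg_logprob):.2f}")
```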
At some point, the market for LLM-generated blithering should be saturated. Somebody has to read the stuff. Although you can task another system to summarize and rank it. How much of "AI" is generating content to be read by Google's search engine? This may be a bigger energy drain than Bitcoin mining.
First, I'm afraid of technological unemployment.
In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough. But superhuman AI now seems only a few years away. It will be our last invention; it will mean total automation. There will be hardly any jobs left, if any, that only a human can do.
Many countries will likely move away from a job-based market economy. But technological progress will not stop. The US, owning all the major AI labs, will leave all other societies behind. Except China perhaps. Everyone else in the world will be poor by comparison, even if they will have access to technology we can only dream of today.
Second, I'm afraid of war. An AI arms race between the US and China seems already inevitable. A hot war with superintelligent AI weapons could be disastrous for the whole biosphere.
Finally, I'm afraid that we may forever lose control to superintelligence.
In nature we rarely see less intelligent species controlling more intelligent ones. It is unclear whether we can sufficiently align superintelligence to have only humanity's best interests in mind, like a parent cares for their children. Superintelligent AI might conclude that humans are no more important in the grand scheme of things than bugs are to us.
And if AI will let us live, but continue to pursue its own goals, humanity will from then on only be a small footnote in the history of intelligence. That relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.
Just this week I installed Cursor, the AI-assisted, VSCode-like IDE. I am working on a side project and decided to give it a try.
I am blown away.
I can describe the feature I want built, and it generates changes and additions that get me 90% there, within 15 or so seconds. I take those changes, and carefully review them, as if I was doing a code review of a super-junior programmer. Sometimes when I don't like the approach it took, I ask it to change the code, and it obliges and returns something closer to my vision.
Finally, once it is implemented, I manually test the new functionality. Afterward, I ask it to generate a set of automated test cases. Again, I review them carefully, both for correctness and suitability. It over-tests things that don't matter, and I throw away part of the code it generates. What stays behind is on-point.
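To make "it over-tests things that don't matter" concrete, here is a hypothetical sketch of that review step as pytest code; the module, function, and behaviour are all invented for illustration:

```python
# Hypothetical review of an AI-generated pytest suite for a slugify()
# helper. The module and tests are invented for illustration.
from myapp.text import slugify  # hypothetical module under test

def test_spaces_become_hyphens():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("Hello, World!") == "hello-world"

def test_empty_string_stays_empty():
    assert slugify("") == ""

# Thrown away in review: it pins an implementation detail rather than
# behaviour -- the kind of over-testing that adds noise, not safety.
# def test_internal_regex_is_used():
#     import myapp.text
#     assert myapp.text._SLUG_RE.pattern == "[^a-z0-9]+"
```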
It has sped up my ability to write software and tests tremendously. Since I know what I want, I can describe it well. It generates code quickly, and I can spend my time reviewing and correcting. I don't need to type as much. It turns my abstract ideas into reasonably decent code in record time.
Another example: I wanted to instrument my app with Posthog events. First, I went through the code and added "# TODO add Posthog event" in all the places I wanted to record events. Next, I asked Cursor to add the instrumentation code in those places. With some manual copy-and-pasting and lots of small edits, I instrumented a small app in <10 minutes.
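A minimal sketch of that TODO-then-fill flow, assuming the posthog Python library (the capture(distinct_id, event, properties) call form; exact signatures vary by client version), with the event name and fields invented:

```python
# Sketch: one "# TODO add Posthog event" marker, filled in by the AI.
# Assumes the posthog Python library; event name and fields are invented.
from posthog import Posthog

posthog = Posthog(project_api_key="phc_...", host="https://us.i.posthog.com")

def complete_checkout(user_id: str, cart_total: float) -> None:
    # ... existing checkout logic ...
    # TODO add Posthog event  <- the marker left for the assistant
    posthog.capture(
        user_id,                     # distinct id
        "checkout_completed",        # event name (invented)
        {"cart_total": cart_total},  # event properties
    )
```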
We are not at the point where AI writes code for us that we can blindly accept. But we are at a point where AI can take care of a lot of the dreary, busy typing work.
Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it. But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me. This has completely destroyed my interest in reading any new things. I guess I'm lucky that we have produced so much writing in the past century or so and I'll never run out of stuff to read, but it's still depressing, to be honest.
You can make houses by hand out of beautiful hardwood with complex joinery. Houses built by expert craftsmen are easily 10x better than the typical house built today. But what difference does that make when practically nobody can afford it? Just like nobody can afford to have a 24/7 tutor that speaks every language, can help you with your job, grammar check your work, etc.
AI slop is cheap and cheapness changes everything.
Nowadays, it seems we're happy with computers apparently going RNG mode on everything.
2+2 can now be 5, depending on the AI model in question, the day, and the temperature...
This has already happened in academia, where certain professors just dump(ed) their students' essays into ChatGPT, ask it if it wrote them, and fail anyone whose essay ChatGPT claims as its own. Obviously this is beyond moronic: ChatGPT doesn't have a memory of everything it's ever produced, you can ask it for different writing styles, and some people genuinely write in a style close to ChatGPT's (which is where its signature style came from in the first place).
I've also heard of artists having their work removed from competitions out of claims that it was auto-generated, even when they have a video of them producing it stroke by stroke. It turns out, AI is generating art based on human art, so obviously there are some people out there whose stuff looks like what AI is reproducing.
Maybe I am just bored of people posting these mediocre results over and over on social and landing pages as some kind of magic. Granted, most content people produce themselves is boring and mediocre anyway. Gen AI just takes away even the last remaining bits of personality from their writing, adding a flair of laziness: "look at this boring piece I was too lazy to write, so I asked AI to generate it".
As the quote goes: "At some point we ask of the piano-playing dog not 'Are you a dog?' but 'Are you any good at playing the piano?'" I am eagerly waiting for the Gen AIs of today to cross the uncanny valley. Even with all this fatigue, I remain positive that AI can and will enable new use cases; it could be the first major UX change since the introduction of graphical user interfaces, a true pixie dust sprinkled on actually useful tools.
The explosion of dull copy and generic wordsmithery is, to me, just a manifestation of the utilitarian profiteering that has elevated these models to their current standing.
Let us not forget that the whole game is driven by the production of 'more' rather than 'better'. We would all rather have low-emission, high-expression tools, but that's simply not what these companies are encouraged to produce.
I am tired of these incentive structures. Casting the systemic issue as a failure of those who use the tools ignores the underlying motivation and keeps us focused on the effect and not the cause, plus it feels old-fashioned.
Nowadays we know why the crew of the Enterprise all go to live performances of Shakespeare and practice musical instruments and painting themselves: electronic media are so full of AI slop that there is nothing worth seeing, only endless deluges of sludge.
I'm also tired of people who claim to be excited by AI. They are the dullest of them all.
I use GenAI everyday as an idea generator and thought partner, but I would never simply copy and paste the output somewhere for another person to read and take seriously.
You have to treat these things adversarially and pick out the useful from the garbage.
It just lets people who created junk food create more junk food for people who consume junk food. But there is the occasional nugget of good ideas that you can apply to your own organic human writing.
I used to work in VFX, and one day I want to go back to it. However I suspect that it'll be entirely hollowed out in 2-5 years.
The problem is that, like typesetting, the typewriter, or the word processor, LLMs make writing text so much faster and easier.
The arguments about handwriting vs. the typewriter are quite analogous to LLM vs. pure hand. People who were good and fast at handwriting hated the typewriter. Everyone else embraced it.
The ancient Greeks were deeply suspicious of the written word as well:
> If men learn this [writing], it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.
I don't like LLMs muscling in and kicking me out of things that I love, but can I put the genie back in the bottle? No. I will have to adapt.
But the previous decades were marked by tech optimism.
The difference here is the shift to marketing. The largest tech companies are gatekeepers for our attention.
The most valuable tech created in the last two decades was not in service of us but to manipulate us.
Previously, the customer of the software was the one buying it. Our lives improved.
The next wave of tech now on the horizon gives us an opportunity to change the course we’ve been on.
I’m not convinced there is political will to regulate manipulation in a way that does more good than harm.
Instead, we need to show a path to profitability through products that are not manipulative.
The most effective thing we can do, as developers and business creators, is to again make products aligned with our customers.
The good news is that the market for honest software has never been better. A good chunk of people are finally learning not to trust VC-backed companies that give away free products.
Generative AI provides an opportunity for tiny companies to provide real value in a new way that people will pay for.
The way forward is:
1. Do not accept VC. Bootstrap.
2. Legally bind your company to not productizing your customer.
3. Tell everyone what you’re doing.
It’s not AI that’s the problem. It’s the way we have been doing business.
Not quite, I believe (and I think anyone can) both that AI will likely change the world as we know it, AND that right now it's over-hyped to a point that it gets tiring. For me this is different from e.g. NFTs, "Big Data", etc. where I only believed they were over-hyped but saw little-to-no substance behind them.
The internet is chock-full of incorrect, fake, or misleading information, and has been ever since people figured out they could churn out low-quality content in between Google ads.
There's a whole industry of "content writers" who write seemingly meaningful stuff that doesn't bear close scrutiny.
Nobody has trusted product review sites for years, with people coping by adding "site:reddit" as if a random redditor can't engage in some astroturfing.
These days, it's really hard to figure out whom (in the media / on the net) to trust. AI has just thrust that long-overdue fact into the spotlight.
I’m tired of electricity - Someone in 1905
I’m tired of consumer apps - Someone in 2020
The revolution will happen regardless. If you participate you can shape it in the direction you believe in.
AI is the most innovative thing to happen in software in a long time.
And personally AI is FUN. It sparks joy to code using AI. I don’t need anyone else’s opinion I’m having a blast. It’s a bit like rails for me in that sense.
This is HACKER news. We do things because it’s fun.
I can tackle problems outside of my comfort zone and make it happen.
If all you want to do is ship more 2020s era B2B SaaS till kingdom come no one is stopping you :P
Otherwise this is just about style. That’s important where personal creative expression is important, and in fairness to the article the author hits on a few good examples here. But there are a lot of times where personal expression is less important or even an impediment to what’s most important: communicating effectively.
The same-ness of AI-speak should also diminish as the number and breadth of the technologies mature beyond the monoculture of ChatGPT, so I’m also not too worried about that.
An accountant doesn’t get rubbished if they didn’t add up the numbers themselves. What’s important is that the calculation is correct. I think the same will be true for the use of LLMs as a calculator of words and meaning.
This comment is already too long for such a simple point. Would it have been wrong to use an LLM to make it more concise, to have saved you some of your time?
For more technical, STEM-related things it's a good way to get a baseline or a direction; enough for me to draw my own conclusions or implementations… it's like a rubber ducky I can talk to.
> I’ve been working in testing, with a focus on test automation, for some 18 years now.
OK, the first thought that came to my mind reading this: sounds like an opportunity to build an AI-driven product.
I've been using Cursor daily. I use nothing else. It's brilliant and I'm very happy. If I could have Cursor for Well-Designed Tests I'd be extra happy.
I understand why this is the case, but it's still kinda disappointing. I'm hoping for an AI winter so that I can talk about normal uses of computers again.
Test cases?
I did a Show HN [1] a couple of days back for a UI library built almost entirely with AI. GPT-o1 generated these test cases for me: https://github.com/webjsx/webjsx/tree/main/src/test - in minutes instead of days. The quality of the test cases is comparable to what a human would produce.
75% of the code I've written in the last one year has been with AI. If you still see no value in it (especially with things like test cases), I'm afraid you haven't figured out how to use AI as a tool.
The gains to programming speed and ability are modest at best, the only ones talking about AI replacing programmers are people who can't code. If anything AI will increase the need for more programmers, because people rarely delete code. With the help of AI, code complexity is going to go through the roof, eventually growing enough to not fit into the context windows of most models.
The thinking is very surface level ("AI art sucks" is the popular opinion anyway) and I don't understand what the complaints are about.
The author is tired of AI and likes movies created by people. So just watch those? It's not like we are flooded with AI movies/music. His social network shows dull AI-generated content? Curate your feed a bit and unfollow those low effort posters.
And in the end, if AI output is dull, there's nothing to be afraid of -- people will skip it.
As for GenAI, I keep going back to expectation management: it's very unlikely to give you the exact answer you need (and if it does, well, your job longevity is questionable), but it can help accelerate your learning, thinking, and productivity.
I don't want it there, I never look at it, it's wasting resources, and it's a bad user experience.
I looked around a bit but couldn't see if I can disable that when logged in. I should be able to.
I don't care what the AI says ... I want the search results.
I have not. Perhaps programming is, in these initial stages, the most 'applied' AI, but there is still not a single major AI movie and no consumer robots.
I think it's way too early to be tired of it
But guess what isn't there? An actually shipping IMPLEMENTATION. It's not even ready yet but the HYPE is so overblown.
Steve Jobs is crying in his grave at how stupid everyone is being about this.
All this "slop apocalypse" the-end-is-neigh stuff strikes me as incredibly overblown, affecting mostly only "open web" mass social media platforms which were already 90% industrially produced slop for instrumental purposes anyways.
We continuously shift to higher-level abstractions, trading reliability for accessibility. We went from binary to assembly, then to garbage collection and to using Electron almost everywhere; AI seems yet another step.
When I was swimming this morning I thought of writing a RDF data store with partial SPARQL support in Racket or Common Lisp - basically trade a year of my time to do straight up design and coding, for something very few people would use.
I get very excited by shiny new things like the advanced voice interface for ChatGPT and NotebookLM, both fine product ideas and implementations, but I also feel some general fatigue.
It's very telling that the rabid AI sycophants are painting anyone who has doubts about the direction AI will take the world as some sort of anti-progress lunatic, calling them luddites despite not knowing the actual history involved. The delicious irony of their stances aligning with the people who were okay with using child labor and mistreating workers en-masse is not lost on me.
My hope is that AI does happen, and that the first people to rot away because of it are exactly the AI sycophants hell-bent on destroying everything in the name of "progress", AKA making some rich psychopaths like Sam Altman unfathomably rich and powerful to the detriment of everyone else.
A good HN thread on the topic of luddites, as it were: https://news.ycombinator.com/item?id=37664682
I don't care how many repulsive AI slop video clips get made or promoted for shock value. Today is day 1 and day 2 looks far better with none of the parasocial celebrity hangups we used as short-hand for a quality marker - something else will take that place.
I know a few of them and once they started riding the hype curve for real, the luster wore off and they're all absolutely miserable in their jobs and trying to look for exits. The fun stuff, the novel DL architectures, coming up with clever ways to balance datasets or label things...it's all just dried up.
It's even worse than the last time I saw people sadly taking the stairs down the other end of the hype cycle when bioinformatics didn't explode into the bioeconomy that had been promised or when blockchain wasn't the revolution in corporate practices that CIOs everywhere had been sold on.
We'll end up with this junk everywhere eventually, and it'll continue to commoditize, and that's why I'm very bearish on companies trying to make LLMs their sole business driver.
AI is a feature, not the product.
It's a reduction of what AI is as a computer science field and even of what the subfield of generative AI is.
On a positive note, generative AI is a malleable, statistically grounded technology with a large applicative scope. At the moment the generalist commercial and open models are "consumed" by users, developers, etc. But there's a trove of personalized use cases and ideas still to come.
It's just that we are still more in a contemplating phase than a true building phase. As a machine learnist myself, I recently replaced my spam filter with a custom fine-tuned multimodal LLM that reads my emails as pure images. And this is the early, early beginning; imagination and local personalization will emerge.
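The commenter's own model isn't shown; as a hedged sketch of the shape of that idea, an off-the-shelf vision model (via the OpenAI client) stands in for their custom fine-tune below, and the email is assumed to be already rendered to a PNG:

```python
# Sketch of a "read the email as an image" spam filter. An off-the-shelf
# vision model stands in for the commenter's custom fine-tuned LLM.
import base64
from openai import OpenAI

client = OpenAI()

def is_spam(email_png: bytes) -> bool:
    b64 = base64.b64encode(email_png).decode()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; the commenter used their own fine-tune
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Answer SPAM or HAM for this email screenshot."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return "SPAM" in resp.choices[0].message.content.upper()
```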
So I'd say being tired of it now means missing out on much later. Keep the good spirit, think outside the box, and relax too :)
I don't think there has ever been a single tech news that brought me joy in all my life. First I learned how to use computers, and then it has been downhill ever since.
Right now my greatest joy is in finding things that STILL exist rather than new things, because the things that still exist are generally better than anything new.
I did actually attend a talk at a conference a few years ago where someone did this. It wasn't with LLMs, but with a Markov chain, and it was art. A bizarre experience, but unfortunately not recorded (at the request of the speaker).
Obviously the big difference was that this was not kept secret at all (indeed, some of the generated prompts included sections where he was instructed to show his speaker notes to the audience, where we could see the generated text scroll up the screen).
Maybe it's overfitting, or maybe it's just the way models work under the hood, but any time I see AI-written stuff on Twitter, Reddit, or LinkedIn it's so obvious it's almost disgusting.
I guess it's just the brain being good at pattern matching, but it's crazy how fast we have adapted to recognize this.
Photos? Real film.
Video.... real film again lol.
I think that may actually happen at some point.
What do we value? What is our value system made up of?
This is, in my opinion, the Achilles' heel of the current trajectory of the West.
We need to know what we are doing it for. Like the OP said, he is motivated by the human connectedness that art, music and the written word inspire.
On the surface, it seems we value the superficial slickness of LLM-produced content more.
This is a facade, like so many other superficial artifacts of our social life.
Imperfect authenticity will soon (or sometime in the future) become a priceless ideal.
A few days ago, I visited a portfolio website and immediately realized that its English text was written with the help of AI or some online helper tools.
I love the idea of brainstorming with AI, but copy-pasting anything it throws at you keeps you from adding creativity to the process of making something good.
I believe using AI must complement HI (or IQ level) rather than mock it.
So what's left for humans?
We very likely won't have as many human software testers or software engineers. We'll have even fewer lawyers and other "credentialed" knowledge worker desk jockeys.
Software built by humans entails humans writing code that has not already been written -- by writing a lot of code that probably has already been written and "linking" it together, etc. When's the last time most of us wrote a truly novel algorithm?
In the AI powered future, software will be built by humans herding AIs to build it. The AIs will do more of the busy work and the humans will guide the process. Then better AIs will be more helpful at guiding the process, etc.
Eventually, the thing that will be rewarded is truly novel ideas and truly innovative thinking.
AIs will make various types of innovative thinking less valuable and other types more valuable, just like any technology has done.
In the past, humans spent most of their brain power trying to obtain their next meal. It's very cynical to think that AI removing busy work will somehow leave humans with nothing meaningful to do, no purpose. Surely it will unlock the best of human potential once we don't have to use our brains to do repetitive and highly pattern-driven tasks just to put food on the table.
When is the last time any of us paid a lawyer to do something truly novel? They dig up boilerplate verbiage, follow standard processes, rinse, repeat, all for $500+ per hour.
Right now we have "manual work" and "knowledge work", broadly speaking, and both emphasize something that is being produced by the worker (a construction project, a strategic analysis, a legal contract, a diagnosis, a codebase, etc.)
With AI, workers will be more responsible for outcomes and less rewarded for simply following a procedure that an LLM can do. We hire architects with visual / spatial design skills rather than asking a contractor to just create a living space with a certain amount of square feet. The emphasis in software will be less on the writing of the code and more on the impact of the code.
What makes AI revolutionary is what it does for the novice. They can produce results they normally couldn’t. That’s huge.
A guy with no development experience can produce working non-trivial software. And in a fraction of the time your average junior could.
And this phenomenon happens across domains. All of a sudden the bottom of the skill pool is 10x more productive. That’s major.
Another piece that touches on a lot of the issues I have with the place AI currently occupies in the landscape is this excellent article: https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you...
The increases in context size are helping a lot. The step improvement in reasoning abilities and quality of answers is amazing to watch. I'm currently using ChatGPT o1-preview a lot for programming stuff. It's not perfect, but I can use a lot of what it generates, and this is saving me a lot of time lately. It still gets stuff wrong, and there's a lot of stuff it doesn't know.
I also am mildly addicted to perplexity.ai. Just a wonderful tool and I seem to be getting in the habit of asking it about anything that pops into my mind. Sometimes it's even work related.
I get that people are annoyed with all the hyperbolic stuff in the media on this topic. But at the same time, the trends here are pretty amazing. I'm running the 3B parameter llama 3.2 model on a freaking laptop now. A nice two year old M1 with only 16GB. It's not going to replace bigger models for me. But I can see a few use cases for running it locally.
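Running a model that size locally really is a few lines these days. A minimal sketch using the ollama Python package, assuming Ollama is installed and `ollama pull llama3.2` has already been run:

```python
# Minimal local-LLM sketch via the ollama Python package. Assumes the
# Ollama daemon is running and the llama3.2 (3B) model has been pulled.
import ollama

resp = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Give me three test ideas for a URL parser."}],
)
print(resp["message"]["content"])
```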
My view is very simple. I'm a software developer. I grew up a few decades ago, before there was any internet. I had no clue what a computer even was until I was in high school. Things like Knight Rider, Star Trek, Buck Rogers, and Star Wars all featured forms of AI that are now more or less becoming science fact. C-3PO is pretty dumb compared to ChatGPT, actually. You could build something better and more useful these days; that would mostly be an arts-and-crafts project at this point. No special skills required. Just use an LLM to generate the code you need. Nice project for some high school kids.
Which brings me to my main point. We're the old generation. Part of being old is getting replaced by younger people. Young people are growing up with this stuff. They'll use it to their advantage and they are not going to be held back by old fashioned notions about the way the things should work according to us old people. The thing with Luddites is that they exist in any generation. And then they grow tired, old, and then they die off. I have no ambition to become irrelevant like that.
I'm planning to keep up with young people as long as I can. I'll have to give that up at some point but not just yet. And right now that includes being clued in as much as I can about LLMs and all the developer plumbing I need to use them. This stuff is shockingly easy. Just ask your favorite LLM to help you get started.
We started with don't-trust-the-government, moved on to don't-trust-big-media, then don't-trust-all-media, and eventually arrived at a no-trust society. Lovely.
Really, I'm just waiting for the AI feedback loop to converge on itself. Get this over with soon, please.
In comparison to a lot of other technologies, we actually have jumps in quality left and right, great demos, and new things which are really helpful.
It's fun to watch the AI news because there is always something new and relevant happening.
I'm worried about the impact of AI, but this is a billion times better than the last 10 years, which were basically just cryptobros, NFTs, and blockchain shit that amounted to fraud.
It's not just some GenAI stuff: we're talking about blind people getting better help through image analysis, AlphaFold, LLMs being impressive as hell, and the research currently happening.
And yes i also already see benefits in my job and in my startup.
How polite: everyone is sure AI might be useful in other fields, just not their own.
> people are scared that AI is going to take their jobs
Can't be both true - AI being not really useful, and AI taking our jobs.
Do you have any recommendations?
Thanks!
The thing I’m tired of is elites stealing everything under the sun to feed these models. So funny that copyright is important when it protects elites but not when a billion thefts are committed by LLM folks. Poor incentives for creators to create stuff if it just gets stolen and replicated by AI.
I’m hungry for more lawsuits. The biggest theft in human history by these gang of thieves should be held to account. I want a waterfall of lawsuits to take back what’s been stolen. It’s in the public’s interest to see this happen.
AI music is bland and boring, UNLESS YOU KNOW MUSIC REALLY WELL. As a matter of fact, it can SPAWN poorly done but really interesting ideas with almost no effort.
"What if Kurt Cobain wrote a song that was then sung by Johnny Cash about waterfalls in the west", etc.
That idea is awful, but when you generate it, you might get snippets that could turn into a wholly new HUMAN-made song.
The same process is how I foresee AI helping engineering: it's not replacing us, it's inspiring us.
The things that most knowledge workers are working on are limited problems, and it is just a matter of time until the machine reaches that level; then our employment will end.
Edit: also, that doesn't have to be AGI. It just needs to be good enough for the problem.
The earth. Information. Culture. Knowledge.
After reviewing the Hacker News thread, here are some of the main repeating patterns I observed:
* Fatigue and frustration with AI hype: Many commenters expressed being tired of the constant AI hype and its application to every domain.
* Concerns about AI-generated content quality: There were recurring worries about AI producing low-quality, generic, or "soulless" content across various fields.
* Debate over AI's impact on jobs and creativity: Some argued AI would displace workers, while others felt it was just another tool that wouldn't replace human creativity and expertise.
* Skepticism about AI capabilities: Several commenters felt the current AI systems were overhyped and not as capable as claimed.
* Copyright and ethical concerns: Many raised issues about AI training on copyrighted material without permission or compensation.
* Polarized views on AI's future impact: There was a split between those excited about AI's potential and those worried about its negative effects.
* Comparisons to previous tech hypes: Some likened the AI boom to past technology bubbles like cryptocurrency or blockchain.
* Debate over regulation: Discussion on whether and how AI should be regulated.
* Concerns about AI's environmental impact: Mentions of AI's large carbon footprint.
* Meta-discussion about HN itself: Comments about how the discourse on HN has changed over time, particularly regarding AI.
* Capitalism critique: Some framed issues with AI as symptoms of larger problems with capitalism.
* Calls for embracing vs rejecting AI: A divide between those advocating for adopting AI tools and those preferring to avoid them.
These patterns reflect a community grappling with the rapid advancement and widespread adoption of AI technologies, showcasing a range of perspectives from enthusiasm to deep skepticism.
If you've been paying any attention for the past two decades, you'll have noticed that capitalism has had a series of hype cycles. Post-COVID, Western economies are on their knees, productivity is faltering, and the numbers aren't checking out anymore. Gen AI is the latest hype cycle, and it has been excellent for generating buzz among clueless VCs, extracting money from them, and generating artificial economic activity. I truly think we are in deep shit when this bubble pops; it seems to be the only thing propping up our economies and staving off a wider bear market.
I've heard some say that this is all just the beginning and AGI is 2 years away because... Moore's law, which somehow applies to LLM benchmarks. Putting aside that this is a completely nonsensical idea, LLM performance is by now quite clearly not on any kind of exponential curve.
The technology is on a trend line where the output of these LLMs can be superior to most human writing.
Being of tired of this is the wrong reaction. Being somewhat fearful and in awe is the correct reaction.
You can thank social media constantly hammering us with headlines as the reason why so many people are “over it”. We are getting used to it, but make no mistake: being “over it” is an illusion. LLMs represent a milestone in technological achievement among humans, and being “over it”, or claiming LLMs can never reason and their output is just a copy, is delusional.
I am sympathetic to the sentiment, and yet worry about someone making impactful decisions based on their own perception of whether AI was used. Such perceptions have been demonstrated many times recently to be highly faulty.
It really really really really isn’t.
I love how people use this argument for anything they don’t like – crypto, Taylor Swift, AI, etc.
Everybody in the developed world’s carbon footprint is disgusting! Even yours. Even mine. Yes, somebody else is worse than me and somebody else is worse than you, but we’re all still awful.
So calling out somebody else’s carbon footprint is the most eye-rolling “argument” I can imagine.
It’s inescapable that I will work _near_ AI given that I’m a SWE and I want to get paid, but at least by not actively advancing this bullshit I’ll have a tiny little “wasn’t me” I can pull out when the world ends.
I agree with the sentiment, especially when it comes to creativity. AI tools are great for boosting productivity in certain areas, but we’ve started relying too much on them for everything. Just because we can automate something doesn’t mean we should. It’s frustrating to see how much mediocrity gets churned out in the name of ‘efficiency.’
testers_unite 23 minutes ago | next [-]
As a fellow QA person, I feel your pain. I’ve seen these so-called AI test tools that promise the moon but deliver spaghetti code. At the end of the day, AI can’t replicate intuition or deep knowledge. It’s just another tool in the toolbox—useful in some contexts but certainly not a replacement for expertise.
nlp_dev 2 hours ago | next [-]
As someone who works in NLP, I think the biggest misconception is that AI is this magical tool that will solve all problems. The reality is, it’s just math. Fancy math, sure, but without proper data, it’s useless. I’ve lost count of how many times I’ve had to explain this to business stakeholders.
-HN comments for TFA, courtesy of ChatGPT
Make up your mind, people. It reminds me of anti-Apple people who say things like "Apple makes terrible products and people only buy them because... because... _they're brainwashed!_" Okay, so we're supposed to believe two contradictory points at once: Apple products are very very bad, but also people love them very much. In order to believe those contradictory points, we must just make up something to square them, so in the case of Apple it's "sheeple!" and in the case of AI it's... "capitalism!" or something? AI is terrible but everyone wants it because of money...? I don't know.
It is quite hard to find a place which works on AI solutions where a sincere, sober gaze would find anything resembling the benefits promised to users and society more broadly.
On the "top level" the underlying hope is that a paradigm shift for the good will happen in society, if we only let collective greed churn for X more years. It's like watering weeds hoping that one day you'll wake up in a beautiful flower garden.
On the "low level", the pitch is more sincere: we'll boost process X, optimize process Y, shave off %s of your expenses (while we all wait for the flower garden to appear). "AI" is latching on like a parasitic vine on existing, often parasitic workflows.
The incentives are often quite pragmatic, coated in whatever lofty story one ends up telling themselves (nowadays, you can just outsource it anyway).
It's not all that bleak, I do think there's space for good to be done, and the world is still a place one can do well for oneself and others (even using AI, why not). We should cherish that.
But one really ought to not worry about disregarding the sales pitch. It's fine to think the popular world is crazy, and who cares if you are a luddite in "their" eyes. And imo, we should avoid the two delusional extremes: 1. The flower garden extreme 2. The AI doomer extreme
In a way, both of these are similar in that they demote personal and collective agency from the throne, and enthrone an impersonal "force of progress". And they restrict one's attention to this supposedly innate teleology in technological development, to the detriment of the actual conditions we are in and how we deal with them. It's either a delusional intoxication or a way of coping: since things are already set in motion, all I can do is do... whatever, I guess.
I'm not sure how far one can take AI in principle, but I really don't think whatever power it could have will be able to come to expression in the world we live in, in the way people think of it. We have people out there actively planning war, thinking they are doing good. The well-off countries are facing housing, immigration and general welfare problems. To speak nothing of the climate.
Before the outbreak of WWI, we had invented the Haber-Bosch process, which greatly improved our food production capabilities. A couple of years later, WWI broke out, and the same person who worked on fertilizers also ended up working on chemical warfare development.
Assuming that "AI" can somehow work outside of the societal context it exists in, causing significant phase shifts, is like being in 1910, thinking all wars will be ended because we will have gotten that much more efficient at producing food. There will be enough for everyone! This is especially ironic when the output of AI systems has been far more abstract and ephemeral.
I am not too worried, though. People are starting to realize this more and more. Soon, using AI will be the next Google Glass. "LLM" is already a slur worse than "NPC" among the youth. And profs are realizing it's time for a return to oral exams ONLY as an assessment method. (We figured this out in industry ages ago: whiteboard interviews, etc.)
Yours truly : LILA <an LISP INTELLIGENCE LANGUAGE AGENT>
But what the fuck. LLMs, these weird, surrealistic art-generating programs like DALL-E, they're remarkable. Don't tell me they're not, we created machines that are able to tap directly into the collective unconscious. That is a serious advance in our productive capabilities.
Or at least, it could be.
It could be if it was unleashed, if these crummy corporations didn't force it to be as polite and boring as possible, if we actually let the machines run loose and produce material that scared us, that truly pulled us into a reality far beyond our wildest dreams--or nightmares. No, no we get a world full of pussy VCs, pussy nerdy fucking dweebs who got bullied in school and seek revenge by profiteering off of ennui, and the pussies who sit around and let them get away with it. You! All of you! sitting there, whining! Go on, keep whining, keep commenting, I'm sure that is going to change things!
There's one solution to this problem and you know it as well as I do. Stop complaining and go "pull yourself up by your bootstraps." We must all come together to help ourselves.