Employers Say Students Need AI Skills. What If Students Don't Want Them?
Employers demand AI skills for job readiness, but many college students resist using generative AI due to ethical concerns. Attitudes vary by field, prompting calls for balanced AI training in education.
Employers increasingly emphasize the necessity of artificial intelligence (AI) skills for future job readiness, yet many college students express reluctance to engage with generative AI technologies. A recent survey revealed that a significant portion of students are either unfamiliar with AI tools or outright refuse to use them, citing concerns over cheating, misinformation, and data privacy. While some students are open to AI, a majority believe its use should be limited in academic settings. The divide in student attitudes towards AI often correlates with their field of study, with humanities students generally more resistant than those in STEM fields.

Experts argue that higher education institutions must adapt to this technological shift, as AI literacy is becoming essential for workforce preparedness. However, there is a challenge in changing student perceptions and institutional policies that have historically been cautious or restrictive regarding AI use. Many educators advocate for a balanced approach that includes AI training while also fostering critical thinking and communication skills. As generative AI continues to evolve, the role of higher education in equipping students with relevant skills remains a topic of debate, with some experts cautioning against overemphasizing AI at the expense of broader educational goals.
- Employers are increasingly seeking AI skills in job candidates.
- Many students are resistant to using generative AI, citing ethical and practical concerns.
- Attitudes towards AI vary significantly between different fields of study.
- Higher education institutions face challenges in integrating AI training into curricula.
- Experts advocate for a balanced approach that includes AI literacy alongside critical thinking skills.
Related
The Impact of AI on Computer Science Education
AI is impacting computer science education and the job market, with studies showing mixed effects on learning. Curricula will adapt, emphasizing responsible AI, while new job roles will emerge alongside automation.
Why AI is no substitute for human teachers
A study from the Wharton School found high school students using generative AI for math prep perform worse on exams, highlighting the need for guidance and the importance of human teachers.
AI Cheating Is Getting Worse
Colleges are grappling with AI-generated cheating, prompting educators to seek new teaching methods. Many feel demoralized, with some considering leaving the profession due to declining trust in students.
AI cheating is getting worse. Colleges Still Don't Have a Plan
Colleges are struggling with increased cheating from AI tools like ChatGPT, prompting educators to seek innovative strategies, including curriculum integration and revised assignments, to maintain academic integrity and engagement.
Teacher caught students using ChatGPT on their first assignment. Debate ensues
Professor Megan Fritts reported students using ChatGPT for assignments, igniting debate on AI's impact on education, critical thinking, and reading skills, with educators divided on its classroom integration.
You need to start in high school: the AP classes need to be revamped. Currently they focus purely on frequentist statistics. Frequentist statistics is great for most empirical sciences like biology; the formulas are mostly plug-and-play, and even pure life-science people with no mathematical talent can wield them without trouble. The problem is that they are very far from statistical learning.
Here's the current AP stats curriculum, it is meant to be equivalent to Stat 1.
https://library.fiveable.me/ap-stats
If you want to develop a strong foundation for ML, Units 6, 7, and 8 ought to be thrown out entirely. The level they are taught at doesn't teach anything beyond plugging numbers into formulas. Unit 4.5 (Conditional Probability) and Unit 5 (Sampling) need to be developed further to cover Bayesian theory, perhaps with a segue into graphical models and Markov chains.

Generative ML, for example, interprets the likelihood as an information generator (since in Bayes' formula it is roughly the "inverse" of the conditional probability). Unfortunately, most stats classes outside of physics and high-level ML theory will never mention this; most classically trained statisticians won't ever encounter the idea. But it is the bread and butter of generative AI. Having a vague idea of KL-divergence and what Metropolis-Hastings is coming out of high school is infinitely more useful for a career in ML than knowing how to fiddle with a p-value.

You can teach most of these concepts without calculus if you simplify some things and replace integrals with their discrete summation versions. Rejection sampling, for example, is very easy to teach. The Common Core needs a revamp, and perhaps it's time to shift away from the historical focus on calculus/pre-calc as the centerpiece of pre-college mathematics.
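To illustrate how teachable this is without calculus, here is a minimal sketch of discrete rejection sampling in Python (the target distribution is made up for illustration): draw from an easy proposal distribution, accept each draw with probability proportional to the target, and the accepted samples follow the target.

```python
import random

# Target distribution over {0, 1, 2, 3} (illustrative values).
p = {0: 0.1, 1: 0.2, 2: 0.4, 3: 0.3}

# Proposal: uniform over the same support, so q(x) = 0.25 for every x.
q = 0.25
M = max(p.values()) / q  # constant with M >= p(x)/q(x) for all x

def rejection_sample():
    """Draw one sample from p using only uniform draws."""
    while True:
        x = random.choice(list(p))            # draw x from the proposal q
        if random.random() < p[x] / (M * q):  # accept with prob p(x) / (M q(x))
            return x

# Empirical check: observed frequencies should approach p.
n = 100_000
counts = {x: 0 for x in p}
for _ in range(n):
    counts[rejection_sample()] += 1
freqs = {x: counts[x] / n for x in p}
```

No integrals anywhere: the only prerequisites are conditional probability and the idea that a while-loop can "filter" one distribution into another.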
"AI skills" are comparable to what search-engine skills were before.
You'd be absolutely amazed how many people still can't use search engines, other than the absolute bare basics of typing something into google, and giving up if the result isn't on top of page 1.
I've worked with plenty of (non-tech) people who are like fish out of water when trying to find information. Just learning stuff like boolean operators, searching for exact phrases in quotation marks, or restricting results to specific sites and date ranges is way beyond what most people know or do.

Same goes for LLMs. There's a difference between typing a prompt and knowing what to ask for and how to structure your questions.
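For concreteness, these are the kinds of operators meant here, in the Google-style syntax (support and exact behavior vary by engine):

```
"exact phrase"            match these words in this exact order
site:example.com query    restrict results to a single site
filetype:pdf report       return only PDF documents
before:2020 topic         results dated before 2020
term -unwanted            exclude pages containing a word
```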
Education is slow; people still don't know how to use spreadsheets or a scripting language to enhance their lives and work. Students don't use Excel or Python to cheat on their homework or exams, so they don't really have a reason to learn those tools.
Meanwhile, user-facing AI tools are often extremely intuitive and feel quite natural to interact with, so by the time young people reach working age they already have the familiarity they need.
They recently got a larger mid-term assignment which involves implementing some well-known, basic data structures, and the standard functionality associated with them.
In one of the problems, they were given skeleton code for a somewhat more complex piece of functionality, and their task is to explicitly use an LLM of their choice to fill in the code, then verify that it works against a provided set of tests.
I think the class in general has been updated to assume that students are using LLMs more and more, as the problem sets this year are longer and more complex, compared to those of past years (which were made available to all current students).
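An assignment like the one described might look something like this hypothetical sketch (the `Stack` class and its tests are invented for illustration, not taken from the actual course): students hand the skeleton to an LLM, paste the generated method bodies in, and run the provided tests, which either pass or don't.

```python
# Hypothetical skeleton: the method bodies are what the LLM is asked to fill in.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):
        if not self._items:
            raise IndexError("peek at empty stack")
        return self._items[-1]

    def __len__(self):
        return len(self._items)

# The accompanying test set: whatever the LLM produced must pass these checks.
def run_tests():
    s = Stack()
    s.push(1)
    s.push(2)
    assert s.peek() == 2
    assert s.pop() == 2
    assert s.pop() == 1
    assert len(s) == 0
    try:
        s.pop()
    except IndexError:
        return "all tests passed"
    raise AssertionError("pop on an empty stack should raise")
```

The pedagogical point is that the tests, not the generated code, become the artifact the student is graded on understanding.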
I'm reminded that the big thing when I was in college was XML databases. XML databases, we were assured (though not particularly convincingly), were the future. I didn't opt for the course covering XML databases, and somehow survived 20 years later; meanwhile, no one really remembers what an XML database even was.
(It was, all told, a rather boring time for tech fads. The 90s AI bubble had imploded with such finality that people barely dared utter the term 'AI', the dot-com crash had just happened, and the gloss was off CORBA, so weird XML-y stuff got pressed into service for a few years until the next thing came along.)
I am a software developer and I hire devs as well. If somebody is ignorant of AI or refuses to use it, that is a hard pass for me. It is not all that different from an accountant not wanting to use computers. Sure, you could still do some work. But you will not be competitive.