July 31st, 2024

The Impact of AI on Computer Science Education

AI is impacting computer science education and the job market, with studies showing mixed effects on learning. Curricula will adapt, emphasizing responsible AI, while new job roles will emerge alongside automation.

AI is significantly influencing computer science education and the job market. A recent experiment at MIT demonstrated that students using AI tools like ChatGPT performed poorly on memory tests compared to those who engaged in problem-solving without AI assistance. This highlights the importance of struggling through challenges to foster learning. Experts agree that foundational knowledge in computer science remains essential, even as AI tools become prevalent. A study from MIT's CSAIL suggests that job automation due to AI may occur more slowly than anticipated, with only a small percentage of jobs at risk of being fully automated in the near future.

As AI becomes integrated into various sectors, computer science curricula will evolve to include new disciplines such as responsible AI and data science. The demand for skills like prompt engineering is rising, indicating a shift in job requirements. While generative AI is expected to disrupt many knowledge-based jobs, it will also create new roles focused on AI system design and oversight. The nature of computer science jobs will change, emphasizing requirements and specifications over traditional coding.
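
As an illustration of what "prompt engineering" means in practice, here is a minimal, hypothetical Python sketch; the task, the wording, and the function name are invented for this summary, not taken from the article:

    # Hypothetical sketch of "prompt engineering": the skill lies in
    # structuring the instruction, not in the surrounding code.

    def build_review_prompt(code_snippet: str) -> str:
        """Assemble a structured code-review prompt for an LLM."""
        return (
            "You are a senior engineer reviewing student code.\n"      # role
            "Rules: flag at most three issues and suggest a fix "
            "for each.\n"                                              # constraints
            "Answer as a numbered list.\n\n"                           # output format
            f"Code to review:\n{code_snippet}\n"                       # the payload
        )

    if __name__ == "__main__":
        print(build_review_prompt("def add(a, b): return a - b"))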

In summary, while AI poses challenges, it also presents opportunities for innovation in education and job creation. The integration of AI into learning environments is crucial for preparing students for future careers, necessitating a balance between utilizing AI tools and understanding fundamental concepts.

12 comments
By @djoldman - 3 months
Intuitive in hindsight but wasn't obvious to me prior to reading:

> One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components.

> Then, the students were tested on how they solved the problem from memory, and the tables turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade.

> Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed.

By @vouaobrasil - 3 months
AI absolutely is having a major effect on computer science education (and other domains, of course). But I feel the bigger problem is what AI highlights: the vast majority of students are in school only because it's society's way of making them cogs in the economic growth machine. That isn't very meaningful, and there are always jobs they can get that don't require being independent, creative, thinking people. I mean, after getting a PhD, I got a job that used about 5% of what I learned in school. Most of the programming I did, I had already learned in high school...

To be honest, I think the world is being so disrupted by AI because, before AI, we paved the way for it by making society operate as if people don't matter, curiosity doesn't matter, and being a thinking individual doesn't matter. (The exceptions are the 1% of very independent intellectual types who DO think and solve problems, but they are the exception: they have carved out niches where they can satisfy their intellectual urges, and they care enough about their curiosity to have found a place for themselves outside the majority.)

By @irhag - 3 months
ACM used to stand for technical excellence. Now you see an increasing number of articles that are basically mainstream commercials, ranging from defending CoC mobs to "AI isn't that bad, we have to roll over and adapt".

Other institutions like the ACLU have also been hollowed out in a similar manner.

By @PeterStuer - 3 months
"I firmly believe AI cannot be fully autonomous … there’s always going to be humans and machines working together and the machine is augmenting the human’s capabilities,”

For now, sure, but 'always'? What is the impossible part, really? What is so unique about human intelligence that it cannot be sufficiently modeled?

By @EncomLab - 3 months
I stand with Turing Award winner Alan Kay - "The real sciences - chemistry, physics, geography - don't have 'science' in their names. Hobbies aspiring to be sciences do." - and with MIT MacVicar Fellow Hal Abelson: "Computer Science is not really very much about computers. And it's not about computers in the same sense that physics isn't really about particle accelerators, and biology is not really about microscopes and petri dishes. It is about formalizing intuitions about process: how to do things."

Far too many students approach Computer Science as if it were a "science" about computers that can be learned through rote memorization and ChatGPT regurgitation. It's not - it's about understanding the art of formalizing an abstract problem and converting it into a form that is computable.
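
To make that concrete, here is a toy example of my own (nothing from the article): the everyday intuition "keep halving until you find it" formalized into a computable process.

    # Toy illustration: the intuition "repeatedly halve the search space"
    # formalized into a precise, computable process (binary search).

    def binary_search(sorted_items, target):
        """Return the index of target in sorted_items, or -1 if absent."""
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2              # split the remaining range
            if sorted_items[mid] == target:
                return mid
            if sorted_items[mid] < target:
                lo = mid + 1                  # discard the lower half
            else:
                hi = mid - 1                  # discard the upper half
        return -1

    if __name__ == "__main__":
        print(binary_search([2, 3, 5, 7, 11, 13], 11))  # prints 4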

By @msnkarthik - 3 months
Interesting. But will computer science education be relevant in the next 10 yrs, when everyone is going to be a prompt engineer? An analogy: we used to memorize 20-30 phone numbers, and with the advent of smartphones we didn't need to anymore, so we stopped - now we don't even know the numbers of our dear ones. Imagine losing your phone: your entire life will stall for a moment. Similarly with prompts, I feel people will stop understanding the underlying tech and only get dumber and dumber. Don't you think?

By @apples_oranges - 3 months
When I was struggling with some areas of maths at uni I realized, after checking other sources/books, that it was mostly due to how it was presented to me. After getting a fresh perspective most of it suddenly became obvious and I had learned it.

I believe people's brains function differently enough that, to some extent at least, they need material somewhat tailored to their "brain type".

LLM AI is great because it can be asked again and again to describe things differently. That's what makes it so powerful as a learning tool.

By @kkfx - 3 months
Is CS really taught anymore? A sincere, if slightly polemic, question from a not-so-old but still not-so-young architect/sysadmin. I've recently talked with some young PhD students in CS, and well...

- they do not have the most basic scripting skills, meaning that, no matter the language, they can't automate most of their common personal tasks (a minimal sketch of what I mean follows after this list)

- they do not know how to properly typeset documents; they have some hyper-basic LaTeX/R/Python knowledge, but not enough to quickly produce nice, polished docs

- they do not know the OS they use every day
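
For instance, here is the kind of everyday automation I have in mind - a minimal, hypothetical Python sketch with an invented folder path:

    # Hypothetical everyday automation: file the contents of a folder into
    # subfolders named after each file's extension. The path is invented.

    import shutil
    from pathlib import Path

    def sort_by_extension(folder: str) -> None:
        root = Path(folder)
        for item in list(root.iterdir()):     # snapshot before moving things
            if item.is_file() and item.suffix:
                dest = root / item.suffix.lstrip(".")   # e.g. "pdf", "png"
                dest.mkdir(exist_ok=True)
                shutil.move(str(item), str(dest / item.name))

    if __name__ == "__main__":
        sort_by_extension("/tmp/downloads")   # invented example path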

Ok, they have some (more or less) solid basic knowledge, even if not well connected enough to form a big picture, but they are definitively unable to understand a not-so-complex whole infra, so they can't even design a basic one. How can they reach a "philosophical level of knowledge" with such competences? How can they design the future if they do not even know enough of the past and the present?

No LLM - even one without hallucinations, without bias in the model, and so on - can fill that void. I know we still have no proper CS schools in the world, but at least the generations older than mine had comprehensive enough knowledge, and mine has at least learned, through experience, to float semi-submerged in technical debt (an apparently politically correct synonym for ignorance, because that's what it is). Current students, though, don't even have enough basic knowledge to build the experience that could correct the problem and fill the void.

Just as a silly example: if I pad a credible speech with a few buzzwords and some known-to-be-true elements here and there, I can fool nearly all of them, even those close to PhD graduation, to the point that they can't tell truth from fiction. I'm talking about Italian and French grad students, those typically described as among the most cultured in the world...

By @MaxGripe - 3 months
IMO, AI speeds up work and makes life easier, but it might not necessarily be great for humanity. I have my doubts, and I think only time will tell what the long-term effects are...

I don't know Assembler, but I can effectively use SQL. Do I need to know Assembler? Probably not, but I would certainly be a better programmer in general if I did.

I recently completed additional studies in New Media Art. I noticed that students can use Photoshop fluently, but few of them can correctly draw a human figure anatomically. Do they need this skill? Probably not, but they would certainly be better artists if they had it.

In my opinion, this is a general trend, not necessarily related to AI itself, where technology gives us quick results but paradoxically degrades other skills that were once achieved through hard work.

In a sense, we have an "instant gratification" society today.

By @space_oddity - 3 months
AI is changing our understanding of education and, overall, transforming the educational process. It's very difficult to label this process as either bad or good. It's just happening, and we need to adapt to it in a way that brings the most benefit and growth.

By @dooglius - 3 months
> Then, the students were tested on how they solved the problem

Did they not know about this in advance? It seems obvious that the students who used ChatGPT were not going to remember the code, so this was set up to give a negative result. The question ought to be whether they learned the concepts the exercise was trying to teach.

By @dailykoder - 3 months
tldr: Embrace the struggle and purposefully run into it.