Across the world, schools are wedging AI between students and their learning materials; in some countries, more than half of all schools have already adopted it (often an “edu” version of a model like ChatGPT or Gemini). This is usually done in the name of preparing kids for the future, despite the fact that no consensus exists on what preparing them for the future actually means where AI is concerned.

Some educators believe AI is not that different from previous cutting-edge technologies (like the personal computer and the smartphone), and that we need to push the “robots in front of the kids so they can learn to dance with them” (paraphrasing Harvard professor Houman Harouni). This framing ignores the obvious fact that AI is by far the most disruptive technology we have yet developed. Any technology whose own experts and developers (including Sam Altman a couple of years ago) have warned that serious regulation is needed to avoid potentially catastrophic consequences probably isn’t something we should take lightly. In very important ways, AI isn’t comparable to the technologies that came before it.

The reasoning we’re hearing from educators in favor of AI adoption doesn’t offer solid arguments for rushing to include it broadly in virtually all classrooms rather than offering something like optional college courses in AI for those interested. It also doesn’t sound like the sort of rigorous academic vetting many of us would expect from the institutions tasked with the important responsibility of educating our kids.

ChatGPT was released roughly three years ago, and anyone who uses AI generally recognizes that its actual usefulness is highly subjective. As much as it might feel like it’s been around for a long time, three years is hardly enough to get a firm grasp on what something this complex actually means for society or education. It’s a stretch to say it’s had enough time to establish its value as an educational tool, even if we had clear and consistent standards for its use, which we don’t. We’re still scrambling and debating over how we should be using it at all. We’re still in the AI wild west: untamed and largely lawless.

The bottom line is that the benefits of AI to education are anything but proven at this point. The same can be said of the vague notion that every classroom must have it right now to prevent children from falling behind. Falling behind how, exactly? What assumptions are being made here? Are they founded on solid, factual evidence or merely speculation?

The benefits to Big Tech companies like OpenAI and Google, however, seem fairly obvious. They get their products into the hands of customers while they’re young, cultivating brand loyalty early. They get a wealth of highly valuable data on those customers. They may even get to experiment on them, as they have been caught doing before. And they reinforce the corporate narrative behind AI: that it should be everywhere, a part of everything we do.

Some may want to assume that these companies are doing this as a public service, but their track record reveals a more consistent pattern: actions focused on market share, commodification, and the bottom line.

Meanwhile, educators are contending with documented problems in their classrooms, as many children seem to be performing worse and learning less.

The way people of all ages use AI has been shown to encourage “offloading” thinking onto it, which is not far from the opposite of learning. Even before AI, test scores and other measures of student performance were already plummeting. This seems like a terrible time to risk making our children guinea pigs in a broad experiment with poorly defined goals and unregulated, unproven technologies that, in their current form, may be more an impediment to learning than an aid.

This approach has the potential to leave children even less prepared for the unique and accelerating challenges our world is presenting us with, challenges that will require exactly the critical thinking skills currently being eroded, in adults and children alike, by the very technologies being pushed as learning tools.

This is one of the many situations that terrify me when I try to imagine the world we might actually be creating for ourselves and future generations, particularly given my personal experiences and what I’ve heard from others. One quick look at the state of society today will tell you that even we adults are becoming increasingly unable to determine what’s real anymore, in large part thanks to the way our technologies are influencing our thinking. Our attention spans are shrinking, and our ability to think critically is deteriorating along with our creativity.

I am personally not against AI. I sometimes use open-source models, and I believe there is a place for it if it’s done correctly and responsibly. But we are not regulating it even remotely adequately. Instead, we’re hastily shoving it into every classroom, refrigerator, toaster, and pair of socks in the name of making it all smart, as we ourselves grow ever dumber and less sane in response. Anyone else here worried that we might end up digitally lobotomizing our kids?

  • SoftestSapphic@lemmy.world · ↑13 · 4 hours ago

    AI highlights a problem with universities that we have been ignoring for decades already: learning is not the point of education. The point is to get a degree with as little effort as possible, because that’s the only valuable thing to take away from education in our current society.

  • tehn00bi@lemmy.world · ↑17 · 5 hours ago

    I just keep replaying in my head the moment when John Connor asks, “we’re not going to make it, are we?”

  • scarabic@lemmy.world · ↑8 · 4 hours ago (edited)

    We need to be able to distinguish between giving kids a chance to learn how to use AI, and replacing their whole education with AI.

    Right under this story in my feed is the one about the CEO who fired 80% of his staff because they didn’t switch over to AI fast enough. That’s the world these kids are being prepared for.

    I would rather they get some exposure to AI in the classroom where a teacher can be present and do some contextualizing. Kids are going to find AI either way. My kids have gotten reasonable contextualizing of other things at school, like not to trust Google blindly and not to cite Wikipedia as a source. Schools aren’t always great with new technology, but they aren’t always terrible either. My kids’ school seems to take a very cautious approach with technology, mostly teaching literacy and critical thinking about it. They aren’t throwing out textbooks, shoving AI at kids, and calling it learning.

    This is an alarmist post. AI’s benefits to education are far from proven. But it’s definitely high time for kids everywhere to get at least some education about it.

    • Disillusionist@piefed.world (OP) · ↑3 · 3 hours ago (edited)

      I do agree with your point that we need to educate people on how to use AI in responsible ways. You also mention the cautious approach taken by your kids’ school, which sounds commendable.

      As for the idea of preparing kids for an AI future in which employers might fire AI-illiterate staff, that sounds to me more like a problem of preparing people to enter the workforce, which is generally what college and vocational courses are meant to handle. I doubt many of us would have any issue if schools had approached AI education that way. It’s very different from the current move to include it broadly in virtually all classrooms without consistent guidelines.

      (I believe I read the same post about the CEO, BTW. It sounds like the CEO’s claim may well have been AI-washing, misrepresenting the actual reason for the firings.)

      [Edit to emphasize that I believe any AI education we do for employment purposes should be approached as optional vocational education, confined to those specific relevant courses, rather than broadly applied]

      • scarabic@lemmy.world · ↑2 · 3 hours ago (edited)

        I agree with you that education is not primarily workforce training. I just included that note as a bit of context, because it definitely made me chuckle to see these two posts right next to each other, each painting a completely different picture of AI: “so important you must embrace it or you will die” versus “what the hell is this shit, keep it away from children.”

        I fall somewhere in between. We should be very cautious with AI and judicious in its use.

        I just think that “cautious and judicious” means having it in schools - not keeping it out of schools. Toddler daycares should be angelic safe spaces where kids are utterly protected. Schools should actually have challenging material that demands critical thinking.

  • jpreston2005@lemmy.world · ↑10 ↓3 · 5 hours ago

    I gotta be honest. Whenever I find out that someone uses any of these LLMs or AI chatbots, hell, even Alexa or Siri, my respect for them instantly plummets. What these things are doing to our minds is akin to how your diet and cooking habits change once you start using DoorDash extensively.

    I say this with full understanding that I’m coming off as just some Luddite, but I don’t care. A tool is only as useful as the improvement it makes to your life, and offloading critical thinking does not improve your life. It actively harms your brain’s higher functions, making you a much easier target for propaganda and conspiratorial thinking. Letting children use this is exponentially worse than letting them use social media, and we all know how devastating the effects of that are… This would be catastrophically worse.

    But hey, good thing we dismantled the Department of Education! Wouldn’t want kids to be educated! Just make sure they know how to write a good AI prompt, because that will be so fucking useful.

    • Modern_medicine_isnt@lemmy.world · ↑2 ↓5 · 4 hours ago

      That sounds like a form of prejudice. I mean, even Siri and Alexa? I don’t use them, for different reasons… but a lot of people use them as voice-activated controls for lights, music, and such. I can’t see how they’re any different from the Clapper. As for the LLMs… they don’t do any critical thinking, so no one is offloading their critical thinking onto them. If anything, using them requires more critical thinking, because everyone who has ever used them knows how often they’re flat-out wrong.

      • jpreston2005@lemmy.world · ↑5 ↓4 · 4 hours ago

        Voice-activated light switches that constantly spy on you, harvesting your data for third parties?

        Claiming that using AI requires more critical thinking than not using it is a wild take, bro. Gonna have to disagree with all of what you said, hard.

        • Modern_medicine_isnt@lemmy.world · ↑2 · 4 hours ago

          You hit on why I don’t use them. But some people don’t care about that for a variety of reasons. Doesn’t make them less than.

          Anyone who tries to use AI without applying critical thinking fails at their task, because AI is just plain wrong so often. So they either stop using it, or they apply critical thinking to figure out when the results are usable. But we don’t have to agree on that.

  • StitchInTime@piefed.social · ↑12 · 6 hours ago

    When I was in school, I was fortunate enough to have educators who strongly emphasized critical thinking. I don’t think “AI” would be an issue if it were viewed as a research tool (taken with a grain of salt), backed by interactive activities that showcased how to validate what you’re getting.

    The unfortunate part is that instructors’ hands are more often than not tied, and the temptation on the student’s part to just “finish the work” quickly is real. Then again, I had a few rather attractive girls flirt with me to copy my work, and they didn’t exactly get far in life, so I have to wonder how much has truly changed.

  • Jason2357@lemmy.ca · ↑3 · 5 hours ago

    Comparing AI with phones is odd, since we shouldn’t have allowed phones in schools in the first place, and schools all over the world are now starting to ban them.

    • E_coli42@lemmy.world · ↑6 ↓15 · 6 hours ago

      Old man yells at cloud.

      I remember the “ban calculators” back in the day. “Kids won’t be able to learn math if the calculator does all the calculations for them!”

      The solution to almost anything disruptive is regulation, not a ban. Use AI at times when it can be a learning tool, and redesign school to be resilient to AI when it would not enhance learning. Have more open discussions in class, for a start, instead of handing kids a sheet of homework that can be done by AI when the kid gets home.

      • lemmy_outta_here@lemmy.world · ↑13 · 5 hours ago (edited)

        I remember the “ban calculators” back in the day

        US math scores have hit a historic low, and calculators are partially to blame. Calculators are good to use if you already have an excellent understanding of the operations. If you start learning math with a calculator in your hand, though, you may be prevented from developing a good understanding of numbers. There are “shortcut” methods for basic operations that are obvious if you are good with numbers. When I used to teach math, I had students who couldn’t tell me what 9 * 25 is without a calculator. They never developed the intuition that 10 * 25 is dead easy to find in your head, and that 9 * 25 = (10-1) * 25 = 250-25 = 225.
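
        Spelled out as code, the trick is just the distributive law; a trivial, purely illustrative C snippet:

        ```c
        #include <stdio.h>

        int main(void) {
            /* The mental shortcut: 9 * 25 = (10 - 1) * 25 = 250 - 25 = 225 */
            int direct   = 9 * 25;
            int shortcut = 10 * 25 - 25;
            printf("%d == %d\n", direct, shortcut); /* prints: 225 == 225 */
            return 0;
        }
        ```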

      • Jason2357@lemmy.ca · ↑5 · 4 hours ago

        Offloading onto technology always atrophies the skill it replaces. Calculators offloaded, very specifically, basic arithmetic. However, math =/= arithmetic. I used calculators, and I cannot do mental multiplication and division as fast or as well as older generations, but I spent that time learning to apply math to problems, understand number theory, and master more complex operations, including writing computer source code to do math-related things. It was always a trade-off.

        In Aristotle’s time, people spent their entire education memorizing literature, and the written word off-loaded that skill. This isn’t a new problem, but there needs to be something of value to be educated in that replaces what was off-loaded. I think scholars are much better trained today, now that they don’t have to spend years memorizing passages word for word.

        AI replaces thinking. That’s a bomb between the ears for students.

        • Fedegenerate@lemmynsfw.com · ↑1 · 2 hours ago (edited)

          It’s good that students are using AI to cheat, then. We won’t need to detect it, as the answers will be wrong.

      • Chulk@lemmy.ml · ↑6 · 5 hours ago

        Can’t remember the last time a calculator told me the best way to kill myself.

    • jpreston2005@lemmy.world · ↑4 · 4 hours ago

      When I was in medical school, the one thing that surprised me the most was how often a doctor will see a patient, get their history/work-up, and then step outside into the hallway to google symptoms. It was alarming.

      Of course, the doctor is far more aware of ailments, and his googling is more sophisticated than just typing in whatever the patient says (you have to know what info is important in the pt. history, because patients will include/leave out all sorts of info), but still. It was unnerving.

      I also saw a study way back when which found that hanging a decision-tree flow chart in emergency rooms and having nurses work through all the steps drastically improved patient care. Additionally, new programs can spot a cancerous mass on a radiograph/CT scan far before the human eye could discern it, and that’s great, but… we still need educated and experienced doctors, because a lot of stuff looks like other stuff, and sometimes the best way to tell things apart is through weird tricks like “smell the wound: does it smell fruity? Then it’s this. Does it smell earthy? Then it’s that.”

    • Disillusionist@piefed.world (OP) · ↑10 · 7 hours ago

      This is also the kind of thing that scares me. I think people need to seriously consider that we’re bringing up the next wave of professionals who will be in all these critical roles. These are the stakes we’re gambling with.

  • ArmchairAce1944@discuss.online · ↑1 · 4 hours ago

    I don’t trust AI for much of anything; it’s just a fun thing to chat with, someone to talk to. I use Venice AI because I can’t get my offline LLMs to work for some reason.

  • HiTekRedNek@lemmy.world · ↑1 ↓2 · 3 hours ago

    It’s cute that you think THIS is where “not taught how to think” begins.

    Ever since the first government-run school was created, it’s been about teaching what to think, not how to think.

    When “public education” was first proposed in the USA, it was seen as a way of teaching the “poor unfortunates” just enough to be able to run the equipment in the factories. If you wanted your child to have an ACTUAL education, you hired private tutors.

    But now, even that is frowned upon. “But kids need socializing!” Meanwhile, their definition of that is a little different from simply teaching your kids how to behave around others…

  • termaxima@slrpnk.net · ↑18 · 10 hours ago

    Children don’t yet have the maturity, the self-control, or the technical knowledge required to actually use AI to learn.

    You need to know how to search the web the regular way, and how to phrase questions so the AI explains things rather than just giving you the solution. You also need the self-restraint to use it only to teach you, never to do things for you, and the patience to think about the problem yourself first, then search the regular web, and only then ask the AI to clarify the few things you still don’t get.

    Many adults are already letting the chatbots de-skill them; I don’t trust that children would do any better.

    • Jakeroxs@sh.itjust.works · ↑5 · 7 hours ago (edited)

      My experience is that most adults don’t know how to search the internet for information either, lol.

      Also, I haven’t been in school for over a decade at this point, but the internet was ubiquitous then, and they didn’t teach shit about it. The adjacent classes (like game design) were run by a coach who barely knew how to work the Macs we were forced to use.

      Nor was critical thinking an important part of the teaching process; very rarely was the “why” explained. They were just trying to get through all the material required to prepare you for the state tests that determine whether you move on to the next grade.

    • LwL@lemmy.world · ↑2 · 8 hours ago

      I wonder if this might not be exactly the right way to teach them, though: when there’s actually someone to tell them “sorry, that AI answer is bullshit,” they can learn to use it as a resource rather than an answer provider. Adults fail at it, but they also don’t have a teacher (and kids aren’t stupid, just inexperienced).

  • iagomago@feddit.it · ↑7 ↓2 · 9 hours ago

    As a teacher in a school that has been quite aggressively pushing AI into our curriculum, I have to turn a blind eye to it when it comes to one simple fact of education as a work environment: bureaucracy. Gemini has so far been a lifesaver for checking the accuracy of forms and for producing standardized, highly readable versions of tests and texts, assessment grids, and all of the menial shit we are required to produce (which otherwise takes a substantial amount of time away from the core of the job: working with the kids).

    • jpreston2005@lemmy.world · ↑2 · 4 hours ago

      I think AI being used by teachers and administrators to offload menial tasks is great. Teachers are often working something like 90 hours a week just to meet all the requirements put upon them, and a lot of those tasks don’t require much thought, just a lot of time.

      In that respect, yeah, sure, go for it. But at this point it seems like they’re encouraging students to use these programs as a way to offload critical thinking and learning, and that… well, that’s horrifyingly stupid.

    • UnderpantsWeevil@lemmy.world · ↑7 ↓1 · 7 hours ago (edited)

      I mean, the bitter truth of all this is that the downsizing and resource-ratcheting of public schools created an enormous labor crisis prior to the introduction of AI. Teachers were swamped with prep work for classes, they were expected to juggle multiple subjects of expertise at once, and they were simultaneously educator and disciplinarian for class sizes that kept mushrooming with budget cuts. Students are subject to increasingly draconian punishments that keep them out of class longer, resulting in poorer outcomes in schools with harsher discipline. And schools use influxes of young new teachers to keep wages low, at the expense of experience.

      These tools take the pressure off people who have been in a cooker since the Bush 43 administration and the original NCLB school privatization campaign. AI in schools as a tool to bulk process busy work is a symptom of a deeper problem. Kids and teachers coordinating cheating campaigns to meet arbitrary creeping metrics set by conservative bureaucrats are symptoms of a deeper problem. The education system as we know it is shifting towards a much more rigid and ideologically doctrinaire institution, and the endless testing + AI schooling are tools utilized by the state to accomplish the transformation.

      Simply saying “No AI in Schools” does nothing to address the massive workload foisted on faculty. It does nothing to address how Teach-The-Test has taken over the educational philosophy of public schooling. And it does nothing to shrink class sizes, to retain professional teachers for the length of their careers (rather than firing older teachers to keep salaries low), or to maximize student attendance rates - the three most empirically proven ways to maximize educational quality.

      AI is a crutch for a broken system. Kicking the crutch out doesn’t fix the system.

      • Disillusionist@piefed.world (OP) · ↑1 · 6 hours ago

        I appreciated this comment; I think you made some excellent points. There is absolutely a broader, complex, and longstanding problem. To me, that makes it even more crucial to seriously consider what we introduce into such a vulnerable situation. A bad fix is often worse than no fix at all.

        AI is a crutch for a broken system. Kicking the crutch out doesn’t fix the system.

        A crutch is a very simple and straightforward piece of tech; it can even just be a stick. What concerns me is that AI is no stick: it’s the most complex technology we’ve yet developed. I’m reminded of the saying “the devil is in the details.” There are a great many details in AI.

  • Modern_medicine_isnt@lemmy.world · ↑2 ↓6 · 4 hours ago

    I couldn’t even finish the article. The mental gymnastics it would take to write it could only come from someone who never learned how to use AI. If anything, the article is a testament to why our children, and everyone else, should be taught how to use AI effectively.

  • SharkStudiosSK@lemmy.draktis.com · ↑11 ↓2 · 10 hours ago

    This may be an unpopular opinion, but in my class today even the teacher was using AI… to prepare the entire lecture. I believe that learning material should be prepared by the teacher, not some AI. Honestly, I see everybody using AI today to make the learning material, and then the students use AI to solve the assignments. The way the world is heading, everybody will just kinda “represent” an AI rather than think for themselves. Like, sure, use AI to find information quickly or something, but don’t depend on it entirely.

    • u/lukmly013 💾 (lemmy.sdf.org)@lemmy.sdf.org · ↑5 · 7 hours ago (edited)

      I asked a lecturer a question once; I think it was about what happens when you bit-shift signed integers.
      He asked an LLM and read out the answer.
      Similarly, he asked an LLM how to determine the memory size allocated by malloc. It said that was not possible, and that was the answer. But a 2009 answer from Stack Overflow begged to differ.
      At least he actually tried it out when I told him.
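
      (For the curious: the Stack Overflow answer was presumably pointing at glibc’s malloc_usable_size(), a non-portable extension, and the signed-shift question really is a good one. A quick sketch, assuming glibc:)

      ```c
      #include <malloc.h>  /* glibc-specific: declares malloc_usable_size() */
      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          /* Signed shifts: right-shifting a negative value is
             implementation-defined in C (usually an arithmetic shift),
             while left-shifting a negative value is undefined behavior
             in standard C. */
          int x = -8;
          printf("-8 >> 1 = %d\n", x >> 1); /* -4 on most platforms */

          /* The "not possible" answer vs. the 2009 trick: glibc can report
             the usable size of a heap block, which may exceed the size
             requested because of allocator rounding. */
          void *p = malloc(100);
          if (!p) return 1;
          printf("requested 100, usable %zu\n", malloc_usable_size(p));
          free(p);
          return 0;
      }
      ```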

      But at this point I’ve even had my father send me LLM-written slop that was clearly bullshit (made-up information about a non-existent internal system at our college), which he probably didn’t even read, since he copied everything, including the “AI answers may be inaccurate” disclaimer.