• 1 Post
  • 16 Comments
Joined 13 days ago
Cake day: December 30th, 2025

  • If the problem you have is a lack of specifics, I could flip your tactic around and ask you to point to a specific “kind of AI [that] is being discussed or how it is being used” that supports your implied stance that we shouldn’t be discussing this. But that would be playing games with you, the way you’re playing games with us.

    Your engagement on this issue is still clearly in bad faith, so I will point out that the clarification you’re demanding carries little weight within the context of this discussion. It reads like a common troll play: an attempt to draw a mark down a rabbit hole. It shouldn’t be too difficult for you to do an internet search or draw on your own personal experiences, especially given how intensely passionate you are about this issue.

    Understand that I don’t play these games. This is me leaving you to your checkerboard. Take care.

    [Edited for grammar]



  • I do agree with your point that we need to educate people on how to use AI responsibly. You also mention the cautious approach taken by your kids’ school, which sounds commendable.

    As for the idea of preparing kids for an AI future in which employers might fire AI-illiterate staff, that sounds to me more like a problem of preparing people to enter the workforce, which is generally what college and vocational courses are meant to handle. I doubt many of us would have any issue if AI education had been approached that way. It is very different from the current move to include it broadly in virtually all classrooms without consistent guidelines.

    (I believe I read the same post about the CEO, BTW. It sounds like the CEO’s claim was likely AI-washing, misrepresenting the actual reason for firing them.)

    [Edit to emphasize that I believe any AI education done for employment purposes should be treated as vocational education: optional and confined to the specific relevant courses, rather than broadly applied]


  • “While there are some linked sources, the author fails to specify what kind of AI is being discussed or how it is being used in the classroom.”

    One of the important points is that there are no consistent standards or approaches toward AI in the classroom. There are almost as many variations as there are classrooms. It isn’t reasonable to expect a comprehensive list of all of them, and it’s neither the point nor the scope of the discussion.

    I welcome specific and informed counterarguments to anything presented in this discussion; I believe many of us would. I frankly find it ironic how lacking in “nuance or level-headed discussion” your own comment seems.


  • I appreciated this comment; I think you made some excellent points. There is absolutely a broader, complex, and longstanding problem. To me, that makes it even more crucial to consider seriously what we introduce into that vulnerable situation. A bad fix is often worse than no fix at all.

    “AI is a crutch for a broken system. Kicking the crutch out doesn’t fix the system.”

    A crutch is a very simple and straightforward piece of tech; it can even just be a stick. What concerns me is that AI is no stick: it’s the most complex technology we’ve yet developed. I’m reminded of the saying “the devil is in the details”. There are a great many details in AI.


  • I see these as problems too. If you (as a teacher) put an answer machine in the hands of a student, it essentially tells that student that they’re supposed to use it. You can go out of your way to emphasize that they’re expected to use it the “right way” (a strange thing to try to sell students on, given that there aren’t consistent standards for how it should be used), but we’ve already seen that students (and adults) often choose the quickest route to the goal, which tends to mean letting the AI do the heavy lifting.


  • I wouldn’t put much past the current American administration. I haven’t been able to shake the impression that we might really be looking at the telegraphing of an invasion. From what we know and have seen, the administration is very much itching to apply the fullest extent of its powers. It’s defined by unprecedented and extraordinary use of extralegal action and complete disregard for how it might be seen by the world at large.

    They said for years that Russia wouldn’t move on Ukraine, and then green men marched in and took over Crimea. It’s no secret that America is becoming increasingly like Russia in every way. The US already has a significant military presence in Greenland, so a green men play would be really easy. Greenland also has a surprising number of politicians who openly say they prefer the security offered by Trump’s America over Denmark’s, even as they declare that they want independence (experts argue that independence might make them even more vulnerable to takeover right now). It’s easy to assume that at least some Greenland doors would open up to an American green men advance.

    Also, as far as consequences for taking over Greenland, we seem to be primarily looking at a breakup of NATO — something that is also on this US administration’s longstanding wish list. Experts don’t seem to think it’s ultimately likely to result in an actual war so much as make it crystal clear that the old rules no longer apply, and that the US isn’t a friend (the Article 5 debate is shaky, especially against the prospect of actually going to war against America, and especially while NATO is also dealing with the Russian war in Ukraine). On paper it kind of reads like a win-win-win situation for the current brazen, imperialist, and isolationist American kleptocracy.

    I’d say we at least need to take this stuff seriously.