• 0 Posts
  • 4 Comments
Joined 1 year ago
Cake day: January 19th, 2024

  • After reading, the gist of it seems to be:

    • Vanilla far-right indoctrinated dumbo (his vision: “Reds” welcome, “Blues” not, “Anti-Blue Propaganda” on public view screens)
    • Wants exploitative capitalism on steroids with companies controlling everyone’s lives completely
    • Claims current capitalism is only bad because it’s “woke capitalism”, which the “ruling class” is supposedly pushing
    • Wants tech bros to butter up police and give security staff jobs to their children as a favor, i.e. intentional social classism


    In short, just another out-of-touch entrepreneur who sells snake-oil cures to people suffering in the current system, so that they invite in the boot that stomps them down for good.


  • I love that example. Microsoft’s Copilot (based on GPT-4) immediately doesn’t disappoint:

    Microsoft Copilot: Two pounds of feathers and a pound of lead both weigh the same: two pounds. The difference lies in the material—feathers are much lighter and less dense than lead. However, when it comes to weight, they balance out equally.
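
    For the record, the comparison it botched is a single line of arithmetic; the sketch below only illustrates that 2 lb vs. 1 lb is settled before material ever enters into it:

    ```python
    # Two pounds of feathers vs. one pound of lead: weight is weight.
    feathers_lb, lead_lb = 2, 1
    print(feathers_lb > lead_lb)  # True -- the feathers weigh more; density is irrelevant
    ```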

    It’s annoying that for many things, like basic programming tasks, it manages to generate reasonable output, good enough to goad people into trusting it, yet it hallucinates obviously wrong stuff or follows completely insane approaches on anything off the beaten path. Every other day I have to spend an hour justifying to a coworker why I wrote code the way I did, because the AI has given him another “great” suggestion, like opening a hidden window with a UI control to query a database instead of going through our ORM.
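
    To make the contrast concrete, here’s a minimal sketch of the ORM path (SQLAlchemy-style; the model, table, and function names are hypothetical, since I’m not naming our actual stack):

    ```python
    # Hypothetical ORM-based data access (SQLAlchemy 2.x style).
    from sqlalchemy import Boolean, Integer, String, select
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class User(Base):
        __tablename__ = "users"
        id: Mapped[int] = mapped_column(Integer, primary_key=True)
        name: Mapped[str] = mapped_column(String)
        active: Mapped[bool] = mapped_column(Boolean)

    def active_users(session: Session) -> list[User]:
        # Plain, testable data access: no UI toolkit, no hidden windows,
        # transactions and SQL dialects handled by the ORM session.
        return list(session.scalars(select(User).where(User.active)))
    ```

    The “hidden window with a UI control” detour, by contrast, couples data access to a UI framework and silently bypasses whatever pooling, transactions, and model mapping the ORM already provides.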



  • Is this a case of “here, LLM trained on millions of lines of text from Cold War novels, fictional alien invasions, nuclear apocalypses and the like; please assume there is a tense diplomatic situation and write the next actions taken by either party”?

    But it’s good that the researchers made explicit what should already be clear: these LLMs aren’t thinking, reasoning “AI” being consulted; they just serve up a remix of likely sentences that might plausibly follow the gist of the provided prior text (the “context”). A corrupted hive mind of fiction authors, echoing whatever actions best served the telling of a story.
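
    That mechanic is easy to caricature in a few lines. A toy sketch: a real model replaces the lookup table below with a neural network scoring every token in its vocabulary, but the loop is the same:

    ```python
    import random

    # Toy stand-in for an LLM's next-token distribution, "trained" on
    # thriller plots. Real models compute these probabilities with a
    # neural net conditioned on the whole context.
    def next_token_probs(context: list[str]) -> dict[str, float]:
        if context[-1] == "launch":
            return {"the": 0.5, "missiles": 0.4, "an": 0.1}
        return {"negotiate": 0.3, "escalate": 0.4, "launch": 0.3}

    def generate(context: list[str], steps: int) -> list[str]:
        # Autoregressive loop: sample a likely continuation, append, repeat.
        for _ in range(steps):
            probs = next_token_probs(context)
            context.append(random.choices(list(probs), weights=list(probs.values()))[0])
        return context

    print(" ".join(generate(["we", "must"], 3)))
    ```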

    That being said, I could imagine /some/ use if an LLM were trained/retrained exclusively on verified information describing real actions and outcomes in 20th-century military history. It could serve as a brainstorming aid, pointing out possible actions, or possible responses by the opponent, that decision-makers might not have thought of.
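
    A minimal sketch of what that could look like, assuming a HuggingFace-style stack; the base model, corpus file, and output path are all hypothetical placeholders, not a recipe:

    ```python
    # Hypothetical: fine-tune a small causal LM on a curated, verified
    # military-history corpus, for use as a brainstorming aid only.
    from datasets import load_dataset
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in base model
    tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # "verified_history.txt": placeholder for vetted descriptions of real
    # 20th-century actions and outcomes, one passage per line.
    corpus = load_dataset("text", data_files={"train": "verified_history.txt"})

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, max_length=512)

    train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="history-brainstormer", num_train_epochs=1),
        train_dataset=train_set,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()
    ```

    Even then, its output would be prompts for human analysts to evaluate, not decisions.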