• 0 Posts
  • 10 Comments
Joined 3 months ago
Cake day: December 15th, 2024

  • I’m not an expert but know enough to converse. As I understand it:

    The B-1 should be more expensive to fly than the B-52 because of its variable wing geometry and the nature of its engines. But we spent a boatload of cash to make the B-1 cheaper: we put soft constraints on the performance envelope in mission design, then optimized the aircraft for it.

    We didn’t update the B-52 because it was far more expensive: replacing 8 engines designed in the 1950s with 2 or 4 modern engines would require redesigning the wing, tail, and cockpit, as well as re-manufacturing old parts that are now scarce. If we’d spent on engine modernization, it’d be cheaper to fly, because that style of airframe is almost always cheaper to fly than one that can comfortably sustain Mach 1. It’d even be cheaper to fly for the B-1’s mission, because we don’t ask the B-1 to leverage the Mach 1 speed it was designed for.

    It’s shit like this that helps my civilian self understand the meaning of FUBAR. A rare example of a well-run program is the C-130.

  • Objective: To evaluate the cognitive abilities of the leading large language models and identify their susceptibility to cognitive impairment, using the Montreal Cognitive Assessment (MoCA) and additional tests.

    Results: ChatGPT 4o achieved the highest score on the MoCA test (26/30), followed by ChatGPT 4 and Claude (25/30), with Gemini 1.0 scoring lowest (16/30). All large language models showed poor performance in visuospatial/executive tasks. Gemini models failed at the delayed recall task. Only ChatGPT 4o succeeded in the incongruent stage of the Stroop test.

    Conclusions: With the exception of ChatGPT 4o, almost all large language models subjected to the MoCA test showed signs of mild cognitive impairment. Moreover, as in humans, age is a key determinant of cognitive decline: “older” chatbots, like older patients, tend to perform worse on the MoCA test. These findings challenge the assumption that artificial intelligence will soon replace human doctors, as the cognitive impairment evident in leading chatbots may affect their reliability in medical diagnostics and undermine patients’ confidence.