• jj4211@lemmy.world
      15 hours ago

      A bit off topic, but that’s pretty much a result of “prompt stuffing”. Your prompt gets processed into a good old fashioned search query, and the search results are then appended to the prompt. From the LLM’s perspective, it basically looks like a request to rework the supplied source material in a way consistent with your prompt. The LLM is fed the correct answer, so it doesn’t have to come up with one; it just has to reword the input.
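
      Roughly, the flow looks like the sketch below (search_web and call_llm are made-up placeholders for whatever search backend and model endpoint a real system would use; the point is just where the retrieved text ends up):

          def search_web(query: str) -> list[str]:
              # Hypothetical: run the user's prompt through an ordinary search engine
              # and return the top result snippets as plain text.
              raise NotImplementedError

          def call_llm(prompt: str) -> str:
              # Hypothetical: send the final prompt to the model, return its completion.
              raise NotImplementedError

          def answer_with_stuffing(user_prompt: str) -> str:
              # 1. The prompt becomes a good old fashioned search query.
              snippets = search_web(user_prompt)

              # 2. The results get pasted ("stuffed") into the prompt itself.
              stuffed_prompt = (
                  "Answer using only the source material below.\n\n"
                  "Source material:\n" + "\n\n".join(snippets) + "\n\n"
                  "Question: " + user_prompt + "\nAnswer:"
              )

              # 3. The model mostly just rewords the material it was handed.
              return call_llm(stuffed_prompt)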