• dgdft@lemmy.world · 4 hours ago (edited)

    This is some pathetic chuddery you’re spewing…

    You wouldn’t assume that QA can read every email you send through their mail servers “just because”

    I absolutely would, and Microsoft explicitly reserves the right to do so in their standard T&C, both for emails and for any data passed through their AI products.

    https://www.microsoft.com/en-us/servicesagreement#14s_AIServices

    v. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.

    We don’t own Your Content, but we may use Your Content to operate Copilot and improve it. By using Copilot, you grant us permission to use Your Content, which means we can copy, distribute, transmit, publicly display, publicly perform, edit, translate, and reformat it, and we can give those same rights to others who work on our behalf.

    We get to decide whether to use Your Content, and we don’t have to pay you, ask your permission, or tell you when we do.

  • Sir. Haxalot@nord.pub · 3 hours ago (edited)

      Those seem to be the terms for the personal edition of Microsoft 365, though? I’m pretty sure the enterprise edition, the one with features like DLP and tagging content as confidential, comes with a separate agreement under which they don’t pass on the data.

      That’s basically the main selling point of paying extra for enterprise AI services over the free, publicly available ones.

      Unless this boundary has actually been crossed, in which case, yes, it’s very serious.

    • dgdft@lemmy.world · 1 hour ago

        This part applies to all customers:

        v. Use of Your Content. As part of providing the AI services, Microsoft will process and store your inputs to the service as well as output from the service, for purposes of monitoring for and preventing abusive or harmful uses or outputs of the service.

        And while Microsoft has many variations of licensing terms for different jurisdictions and market segments, what they generally promise opted-out enterprise customers is that they won’t use their inputs to train “public foundation models”. They still retain those inputs, and they reserve the right to use them to train proprietary or specialized models, like safety filters or summarizers that act as part of their broader AI platform, through which that data could leak down the line.

        That’s also assuming Microsoft are competent, good-faith actors — which they definitely aren’t.