  • This is just them going for regulatory capture. Again. The “tiered” country system, the controls on model weights, the centralization of regulation in Washington, the focus on datacenter buildout (instead of on-device inference), and more: it’s all a big middle finger to open, locally runnable weights without saying it.

    And this time they’re trying to justify it with anti-China sentiment more than “safety” fearmongering, even though this would let China run circles around the US in time (though not without OpenAI making a healthy profit first).

    They want to own your access, not let you have it.

    QwQ 32B did a decent job writing that out:

    spoiler

    OpenAI’s proposal contains elements that could inadvertently or intentionally hinder open-source/open-weights AI and smaller competitors, while also raising concerns about regulatory capture. Here’s a breakdown of key points:

    1. Regulatory Strategy (Preemption of State Laws):

    • Potential Issue: The proposal advocates federal preemption of state AI regulations to streamline compliance. While this could reduce fragmentation, it centralizes regulatory power, favoring larger companies with resources to engage in federal partnerships. Smaller players might struggle to meet federal standards or secure liability protections, creating an uneven playing field.
    • Risk of Regulatory Capture: The “voluntary partnership” framework could become a de facto requirement for accessing government contracts or protections, disadvantaging competitors not in the loop. This risks entrenching OpenAI and similar firms as preferred partners, stifling innovation from leaner, open-source alternatives.

    2. Export Controls (Tiered System):

    • Open-Source Concerns: While targeting Chinese models, the proposal emphasizes promoting “American AI systems” globally. This could pressure countries to adopt closed-source U.S. models over open-source alternatives (e.g., DeepSeek’s R1, despite its flaws). The focus on “democratic AI” might conflate national allegiance with openness, sidelining projects that prioritize technical transparency over geopolitical alignment.
    • Hardware Dependencies: Requirements for “hardware-enabled mechanisms” and restrictions on non-U.S. chips (e.g., Huawei) could lock AI development into proprietary ecosystems, disadvantaging open-source projects reliant on diverse or cost-effective hardware.

    3. Copyright Strategy:

    • Double-Edged Sword: OpenAI’s defense of fair use for training data aligns with its own needs but could backfire. If other countries adopt stricter copyright regimes (e.g., EU-style opt-outs), smaller players without OpenAI’s scale might struggle to access training data. Meanwhile, OpenAI’s push to “shape international policy” risks leveraging U.S. influence to globalize its preferred norms, potentially stifling innovations in jurisdictions with more restrictive rules.
    • IP Protectionism: The emphasis on safeguarding U.S. IP could lead to data silos, making it harder for open-source projects to train on global datasets unless they comply with U.S.-centric frameworks.

    4. Infrastructure Investments:

    • Bias Toward Scale: Proposals like AI Economic Zones and tax incentives prioritize large-scale, centralized infrastructure (e.g., hyperscale data centers). This disadvantages smaller players or open-source initiatives requiring decentralized, modular setups. The “National Transmission Highway Act” could further entrench big players with resources to navigate streamlined permitting processes.
    • Workforce Training: While beneficial overall, tying training to partnerships with AI companies might channel resources toward OpenAI’s ecosystem, sidelining open-source communities or academic projects.

    5. Government Adoption:

    • Contractual Barriers: Faster procurement for “frontier AI labs” and requirements for SOC 2/ISO certifications or facility clearances could exclude smaller firms lacking the resources or bureaucratic capacity to comply. The push for custom classified models might also favor established firms with existing security clearances.
    • On-Device Inference Risks: The focus on centralized infrastructure and export controls on chips (e.g., banning Huawei) could limit access to hardware optimized for on-device processing, disadvantaging open-source projects relying on efficient, edge-based solutions.

    6. Open Weights/Open Source Specific Risks:

    • Tiered Export Controls: While aimed at China, the framework might inadvertently penalize open-source projects that depend on global collaboration or diverse hardware. For instance, restrictions on “Tier III” nations could block cross-border contributions to open-source models.
    • Proprietary Ecosystem Push: The emphasis on “American rails” and closed partnerships could marginalize open-source efforts that resist geopolitical framing, even if technically superior.

    Conclusion: Regulatory Capture Concerns

    OpenAI’s proposals, while framed as pro-innovation, risk entrenching its own dominance and disadvantaging smaller, open-source competitors through:

    • Centralized Regulation: Federal preemption and partnerships may favor large firms with resources to engage.
    • Infrastructure Bias: Investments prioritize scale over flexibility, sidelining decentralized or cost-effective alternatives.
    • Export Controls: Could lock AI into U.S.-centric ecosystems, limiting open-source interoperability.
    • IP and Copyright Tactics: While defending fair use, they may push restrictive norms abroad, hindering smaller players.

    Verdict: While OpenAI positions itself as advocating for “freedom,” the proposals contain structural biases that could stifle open-source/open-weights innovation and enable regulatory capture. The focus on national competition with China overshadows neutral, inclusive frameworks, raising questions about whether the plan prioritizes U.S. corporate leadership over democratizing AI.

    And it was generated on my desktop. That I own, in my house, with the PC completely disconnected from the internet atm, with some settings and features OpenAI would never let me have.
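
    For anyone curious, here’s a minimal sketch of that kind of local setup, assuming an Ollama server running on its default port with a QwQ 32B build pulled under the tag qwq:32b (the tag, port, and prompt here are assumptions, not details from the comment above); any OpenAI-compatible local server, such as llama.cpp’s, works the same way:

        # Minimal sketch: query a locally hosted model through an
        # OpenAI-compatible endpoint. Assumes an Ollama server on its
        # default port (11434) with a QwQ 32B build pulled as "qwq:32b";
        # the port, model tag, and prompt are assumptions, adjust to taste.
        from openai import OpenAI

        client = OpenAI(
            base_url="http://localhost:11434/v1",  # local server, no cloud round-trip
            api_key="not-needed-locally",          # the client requires a value; local servers ignore it
        )

        response = client.chat.completions.create(
            model="qwq:32b",
            messages=[
                {
                    "role": "user",
                    "content": "Summarize the regulatory-capture risks in OpenAI's policy proposal.",
                }
            ],
            temperature=0.6,  # one of those knobs a hosted UI may not expose
        )
        print(response.choices[0].message.content)

    Since the endpoint speaks the same API as the hosted services, the same script can be pointed at any local backend just by changing base_url.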


  • It doesn’t have to be.

    That’s why pushing locally run, sanely implemented LLMs is so important, for the same reason you’d promote Lemmy instead of just telling people not to use Reddit.

    This is probably my biggest divergence from Lemmy’s political average: AI haters are going to bring this to reality, because they push out “dangerous” local LLMs in favor of crappy corporate UIs.