“The future ain’t what it used to be.”

-Yogi Berra

  • 1 Post
  • 231 Comments
Joined 2 years ago
Cake day: July 29th, 2023



  • Cool. Name one. A specific one that we can directly reference, where they themselves can make that claim. Not a secondary source, but a primary one. And specifically not the production companies either, keeping in mind that the argument I'm making is that copyright law was intended to protect those who control the means of production and the production system itself, not the artists.

    The artists I know, and I know several, make their money the way almost all people make money: by contracting for their time and services, by selling tickets and merchandise, and through Patreon subscriptions. In other words, the way artists and creatives have always made their money. The "product", in the sense of their music or art being a product, is given away practically for free, and actually for free in the case of the most successful artists I know personally. If they didn't give this "product" of their creativity away for free, they would not be able to survive.

    There is practically zero revenue through copyright. Production companies like Universal make money through copyright. Copyright was built for, has historically been intended for, and is currently used for the protection of production systems, not artists.














  • Too many cooks: Handwringing. Whataboutism.

    The authors misunderstand how to think about the elements of the fediverse. They're still taking a competitive view/worldview/framing, and when that's all you understand, sure. But the right way to understand the fediverse is as protocols, like email, with each branch as a flavor of email, or some other misguided metaphor. And fragmentation is only a problem when infinite growth or exponential scaling is your goal. If neither of those things is your goal, it's more of an annoyance.

    Commercial capture: More handwringing. Misidentification.

    Meta took a crack at capture. It doesn't seem to have worked. The fediverse is populated by the leavers, not the takers. The Internet happens at the edge, and the normies are always just catching up a few years too late. The point of the fediverse is that it's extraordinarily easy to vote with your feet. If the fediverse can fall victim to a 51% attack, fine, we'll just leave and do it again.

    Guilty by association: Again, more handwringing. Also, we should do that.

    Federated p2p, e2e-encrypted file sharing for the unsavory bits that governments and corporations don't want you to have sounds like a great idea.

    It's in the CIA field manual: when you want to destroy an organization from within, urge caution and raise every unfounded concern.





  • Yeah, I think it would be clearer if you kept the modes distinct.

    I like the focus on accuracy and its citations. I've tried Deep Research (in contrast with ChatGPT) a few times and its generations have been basically worthless.

    I definitely have a use for something like this, but as with most of the issues I have with the applied use of these products, it boils down to a few consistent problems: guardrails, a kind of cautious insistence on singular approaches, and a lack of agency.

    For example, I would love to be able to drop it a GitHub link and have it dig through the repo and write me an ipynb demonstrating the repo's capabilities. Or give it a script, sloppy with a bunch of my own garbage in it, and have it clean it up and make it nice. Deep Research is nowhere near capable of this, and I attribute that to an overly cautious development approach on the part of OpenAI. As well, because of structural limits, these models lack the kind of nested or branched thinking that would be required to hold onto big-picture goals and concepts.

    I do, however, think we'll see things change with the new GPTs coming out, which are much cheaper to run for inference. Basically, to do the kind of work that Deep Research claims to be doing, we need a more complex internal structure, with many GPTs running in both series and parallel, perhaps in more of a graph model (rough sketch at the end of this comment).

    I also don't think it will be OpenAI that does this. They've been too cautious in their development approach.

    At the end of the day, I want what Deep Research claims to be, but it's clearly not there yet.
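
    To make the "graph of models" idea a bit more concrete, here's a rough, hypothetical sketch in Python. It is not how Deep Research actually works internally; `call_model`, the node names, and the prompts are all placeholders I made up. The point is just the shape: a small DAG where independent model calls fan out in parallel and dependent calls run in series once their upstream results exist.

    ```python
    # Sketch of a "graph of models": several model calls wired into a small DAG.
    # Nodes with no unmet dependencies run in parallel; dependent nodes run in
    # series after their inputs are ready. `call_model` is a placeholder for
    # whatever inference backend you'd actually use (local model, API, etc.).

    from concurrent.futures import ThreadPoolExecutor


    def call_model(prompt: str) -> str:
        # Placeholder: swap in a real inference call here.
        return f"<answer to: {prompt[:40]}...>"


    # Each node lists its upstream dependencies; its prompt is built from their outputs.
    GRAPH = {
        "outline":   {"deps": [], "prompt": lambda d: "Outline the research question."},
        "search_a":  {"deps": ["outline"], "prompt": lambda d: f"Find sources for: {d['outline']}"},
        "search_b":  {"deps": ["outline"], "prompt": lambda d: f"Find counterarguments to: {d['outline']}"},
        "synthesis": {"deps": ["search_a", "search_b"],
                      "prompt": lambda d: f"Synthesize: {d['search_a']} / {d['search_b']}"},
    }


    def run_graph(graph):
        """Run nodes level by level: a node runs once all of its deps have results.
        Nodes that become ready at the same time run in parallel threads."""
        results = {}
        with ThreadPoolExecutor() as pool:
            while len(results) < len(graph):
                ready = [n for n, spec in graph.items()
                         if n not in results and all(dep in results for dep in spec["deps"])]
                futures = {n: pool.submit(call_model, graph[n]["prompt"](results)) for n in ready}
                for n, fut in futures.items():
                    results[n] = fut.result()
        return results


    if __name__ == "__main__":
        for node, output in run_graph(GRAPH).items():
            print(node, "->", output)
    ```

    The series/parallel structure is the whole argument: the "searches" can run side by side because they only need the outline, while the synthesis step has to wait for both, which is the kind of branching a single forward pass can't hold onto.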