• 0 Posts
  • 445 Comments
Joined 3 years ago
Cake day: June 30th, 2023




  • In a central banking system, the central bank can create and destroy money from nothing. All banks can do it, though banks that aren’t the central bank need to hold on to a reserve portion, which iirc is 10%, so they can loan out (effectively creating) 90% of deposits. This compounds: if you deposit $100, the bank can lend out $90 of it, and if that borrower deposits the $90 in their account, the bank can loan out another $81. So from the original $100 deposit, $271 now exists, and that $81 can be loaned against too.
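    The compounding above is just a geometric series. A quick sketch (assuming a 10% reserve requirement and that every loan gets re-deposited in full; all the numbers are illustrative):

```python
# Hypothetical sketch of the money-multiplier compounding described above,
# assuming a fixed reserve ratio and that every loan is re-deposited.
def total_money(initial_deposit, reserve_ratio, rounds):
    """Sum the money in existence after some number of deposit/loan rounds."""
    total = 0.0
    deposit = initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # the loanable (newly created) portion
    return total

# First three rounds of a $100 deposit: 100 + 90 + 81 = 271
print(round(total_money(100, 0.10, 3), 2))

# In the limit this converges to initial_deposit / reserve_ratio = $1000
print(round(total_money(100, 0.10, 1000), 2))
```

So with a 10% reserve, a $100 deposit can balloon into up to $1000 of money in circulation.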

    Congress can borrow money from the central bank or other banks. It’s also possible that they could seize the central bank and then just say they have the money and use that, though that’s how Germany ended up with stories of people using a wheelbarrow full of cash to buy a coffee or diners paying when they ordered because prices would have gone up by the time they finished eating.


  • My guess is the loud bass vibrates loose dust particles that might otherwise clog up pores, or maybe it helps with nutrient flow inside the plant. Like it’s affected by sound, not music specifically.

    Though music might be generally better than most loud sounds because it’s one of the few cases where sound can be loud but isn’t also associated with something that adds more dust to the air, which might even give a net negative result.




  • Over time, the more common mistakes would be integrated into the tree. If some people feel indigestion as a headache, then there will be a probability that “headache” is caused by “indigestion” and questions to try to get the user to differentiate between the two.

    And it would be a supplement to doctors rather than a replacement. Early questions could be handled by the users themselves, but at some point a nurse or doctor will take over and just use it as a diagnosis helper.


  • (Assuming you meant “you” instead of “I” for the 3rd word)

    Yeah, it fits more with the older definition of AI from before NNs took the spotlight, when it meant more of a normal program that acted intelligently.

    The learning part is being able to add new branches or leaf nodes to the tree, where the program isn’t learning on its own but is improving based on the experiences of the users.

    It could also be encoded as a series of probability multiplications instead of a tree, where it checks on whatever issue has the highest probability using the checks/questions that are cheapest to ask but affect the probability the most.
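    One way to sketch that "cheapest question that moves the probabilities most" idea is expected information gain per unit cost. Everything here (the conditions, questions, costs, and probabilities) is made up for illustration:

```python
# Hypothetical sketch: pick the question with the best entropy reduction
# per unit cost. All conditions, questions, and numbers are invented.
import math

def entropy(probs):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Assumed prior probabilities over candidate conditions.
prior = {"migraine": 0.5, "indigestion": 0.3, "dehydration": 0.2}

# Each question: a cost to ask, plus (answer probability, posterior over
# the three conditions) for each possible answer. Purely illustrative.
questions = {
    "nausea after eating?": {
        "cost": 1.0,
        "answers": [
            (0.4, [0.2, 0.7, 0.1]),
            (0.6, [0.7, 0.03, 0.27]),
        ],
    },
    "blood test": {
        "cost": 10.0,
        "answers": [
            (0.5, [0.9, 0.05, 0.05]),
            (0.5, [0.1, 0.55, 0.35]),
        ],
    },
}

def gain_per_cost(q):
    """Expected reduction in uncertainty divided by the question's cost."""
    before = entropy(prior.values())
    after = sum(p_ans * entropy(post) for p_ans, post in q["answers"])
    return (before - after) / q["cost"]

# The cheap symptom question beats the expensive test per unit cost here.
best = max(questions, key=lambda name: gain_per_cost(questions[name]))
print(best)
```

A real system would recompute this after every answer, so cheap questions get asked first and expensive tests only when they’re worth it.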

    Which could then be encoded as a NN, because both are just a series of matrix multiplications that a NN can approximate to arbitrary precision given enough parameters. Also, NNs are proven to be able to approximate any continuous function over some number of real-valued inputs if given enough neurons and connections, which means they can approximate any discrete function (which a decision tree is) as closely as you like.

    It’s an open question still, but it’s possible that the equivalence goes both ways, as in a NN can represent a decision tree and a decision tree can approximate any NN. So the actual divide between the two is blurrier than you might expect.

    Which is also why I’ll always be skeptical that NNs on their own can give rise to true artificial intelligence (though there’s also a part of me that wonders if we can be represented by a complex enough decision tree or series of matrix multiplications).



  • Yeah, even if you turn off randomization, you can still end up with variation based on differences in the prompt wording. And who knows what false correlations it overfitted to in the training data. Like one wording might bias it towards picking medhealth data while another wording might make it more likely to use 4chan data. Not sure if these models are trained on general internet data, but even if it’s just trained on medical encyclopedias, wording might bias it towards or away from cancers, or change how severe it estimates something to be.


  • Funny, because medical diagnosis is actually one of the areas where AI can be great, just not fucking LLMs. It’s not even really AI, but a decision tree that asks which symptoms are present and which are missing, eventually getting to the point where a doctor or nurse is required to do evaluations or tests to keep moving through the flowchart until you reach a leaf, where you either have a diagnosis (and ways to confirm or rule it out) or something new (at least to the system).
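    As a minimal sketch of that flowchart idea (the symptoms, questions, and outcomes here are all made up; a real tree would be curated by doctors):

```python
# Hypothetical sketch of a symptom flowchart as a yes/no decision tree.
# All questions and outcomes are invented for illustration.
class Node:
    """Internal nodes ask a yes/no question; leaves hold an outcome."""
    def __init__(self, question=None, yes=None, no=None, outcome=None):
        self.question, self.yes, self.no, self.outcome = question, yes, no, outcome

def walk(node, answer):
    """Follow the tree using an answer callback (e.g. prompting the user)."""
    while node.outcome is None:
        node = node.yes if answer(node.question) else node.no
    return node.outcome

# A tiny hand-built tree ending in triage suggestions.
tree = Node(
    question="Do you have chest pain?",
    yes=Node(outcome="Get to the ER ASAP"),
    no=Node(
        question="Fever for more than 3 days?",
        yes=Node(outcome="See a walk-in or family doctor this week"),
        no=Node(outcome="Monitor at home; seek help if symptoms worsen"),
    ),
)

answers = {"Do you have chest pain?": False, "Fever for more than 3 days?": True}
print(walk(tree, lambda q: answers[q]))
```

The "learning" part would be inserting new branches or leaves as users and doctors hit cases the tree doesn’t cover yet.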

    Problem is that this kind of system would need to be built up by doctors, though they could probably get a lot of the way there using journaling and some algorithm to convert the journals into the decision tree.

    The end result would be a system that can start triage at the user’s home to help determine the urgency of a medical visit (is this a “get to the ER ASAP”, a “go to a walk-in or family doctor in the next week”, an “it’s ok if you can’t get an appointment for a month”, or a “stay home, monitor it, and seek medical help if x, y, z happens”?). It could then hand that info to the HCW you see next, so they can recheck the things non-doctors often get wrong and pick up from there. Plus it helps doctors be more consistent, informs them when symptoms match things they aren’t familiar with, and makes it harder for incompetence or apathy to be excused with a “just get rid of them” response.

    Instead, people are trying to make AI doctors out of word correlation engines, like the Hardy Boys following a trail of random word associations (except reality isn’t written to make them right in the end because it’s funny, like in South Park).






  • Yeah, that was the most surprising part of switching to Linux. It generally takes the same effort or less to get my Linux install behaving like I want than it did with a Windows install. Plus Windows likes to nag you to set up shit you didn’t want in the first place.

    It was kinda funny, because that was the whole reason for switching in the first place, but there was this base assumption that Linux was going to be harder than Windows, just without the stupid MS shit thrown in. But no, it’s actually easier, just different in some ways that mean some skills don’t transfer.

    But LLMs are pretty good for bridging that gap. They aren’t perfectly reliable (made-up command line arguments are pretty common), but they’re good for getting command names from a description of what you want to do, which you can then learn about using pre-LLM methods.


  • I’d also argue that however the movie did after the fact, turning down roles you don’t understand is probably the smarter option. Maybe those other movies would have also flopped if they had Connery in those roles.

    Like I can’t picture him doing a good Gandalf. It wouldn’t be Gandalf, it would be Sean Connery in a wizard outfit. I can’t think of any roles where Connery played someone who wasn’t Sean Connery. He brought a lot of charisma to his roles but not a ton of range.


  • Also, every single name that gets released is a name that Trump was ok with releasing. From my pov, it just turns it into a more effective blackmail tool. He’s not afraid of what’s in the files. If it was going to ruin him, it would have already done so.

    Instead it just shows others who know they are in the files that a) he’s one of them (if they didn’t already know), b) that he can protect them, c) he isn’t protecting everyone in the files just because of point a.

    Hate to be realizing this, but I think everyone who thought the release of the Epstein files would help anything got played. Just like everyone who thought the Mueller investigation would threaten his first term or result in making a second term impossible.