No, that is certainly not “all we need to do” for such people.
Others have pointed out that it isn’t that simple, and I agree with everything they said, so no need to repeat it. But after all that, there will still be people who just don’t want any restrictions, no matter how reasonable. Like not screaming at the top of their lungs at 1am. Not a large group, but they will always exist. So you can’t “solve” homelessness. You can only solve involuntary homelessness.
Now here is the current state. Involuntary homelessness hasn’t been dealt with for a long time, and one effect is that a lot of the people who are currently homeless are unrecoverably mentally ill. Current medical science just can’t repair the damage that’s been done. This group is now similar to the group I mentioned above, in that they don’t want, or can’t handle, the normal restrictions of just living around other people.
So while the solutions mentioned can help some homeless people, and more importantly can drastically reduce the number of “new” homeless people, we still have the current unrecoverable cases to deal with. And they will not go willingly to any kind of help. So, do we force them to get help? That requires laws for them to break so they can be forced into treatment. Now I am not saying that is happening anywhere, because I don’t think it is. And as far as I know, there isn’t a place that has the mental health services capacity to help them if they tried. But in the long run, it will be a required part of the solution… eventually. If we as a society ever get serious about solving the problem.


Not so much refusing to follow orders, but how about going after the job in the first place? You don’t get a job like that without pursuing it and pulling strings. And you could resign at any time.


I imagine this as a system that uses spare renewable energy, like solar, to generate gas that can be used to smooth the output curve of a renewable power source. Its real value is that it reduces infrastructure needs, allowing its use in remote environments. But it does add a lot of additional failure points.
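To make the “smoothing” idea concrete, here’s a minimal toy sketch in Python. The efficiency numbers, function names, and units are all my own assumptions for illustration, not anything from a real system:

```python
# Toy power-to-gas dispatch: store surplus solar as gas, burn gas to
# cover shortfalls. All efficiencies and numbers are made-up assumptions.

ELECTROLYZER_EFF = 0.7   # assumed fraction of surplus power stored as gas
GENERATOR_EFF = 0.6      # assumed fraction of gas energy recovered as power

def dispatch(solar_kw: float, demand_kw: float, gas_kwh: float):
    """Return (delivered_kw, remaining_gas_kwh) for one time step."""
    if solar_kw >= demand_kw:
        # Surplus hour: convert the excess to gas instead of curtailing it.
        gas_kwh += (solar_kw - demand_kw) * ELECTROLYZER_EFF
        return demand_kw, gas_kwh
    # Deficit hour: cover the shortfall from stored gas, up to what's available.
    shortfall = demand_kw - solar_kw
    from_gas = min(shortfall, gas_kwh * GENERATOR_EFF)
    gas_kwh -= from_gas / GENERATOR_EFF
    return solar_kw + from_gas, gas_kwh

# Example: a sunny noon hour banks gas that a dim evening hour draws back down.
stored = 0.0
noon, stored = dispatch(solar_kw=120, demand_kw=80, gas_kwh=stored)
evening, stored = dispatch(solar_kw=10, demand_kw=80, gas_kwh=stored)
print(noon, evening, stored)  # 80, ~26.8, 0.0
```

The round-trip losses (0.7 × 0.6 here) are the tradeoff the comment alludes to: you pay in efficiency and extra failure points to avoid building transmission or battery infrastructure.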


Only when it is good for “business”


Next month’s story will be about ICE officers in Minnesota pulling guns on off-duty ICE officers who were brought in from the south and who just have a dark tan. That will be a real hoot.


It’s not really a trust thing. They are companies, not people. The top decision makers have a fiduciary responsibility to do whatever makes money. They can, and often do, get sued if they don’t. So you can expect them to sell you out. It’s literally their purpose for existing.


Overall, my theory lines up with yours. They knew the simple minds at the top would fire them, and somehow that benefits them. They are lawyers, after all, and getting hired into the jobs they had makes it very likely they were at least savvy at political maneuvering.


I didn’t need to reach at all. I broke it down into several simple examples. You just aren’t willing to open your mind and consider it.
I 100% agree that it confuses and misinforms many adults. That is why I think it is so important that kids be exposed to it and taught to think critically about what it tells them. It isn’t going to go away. And who knows, they might learn to apply that same critical thinking to what the talking heads on the internet tell them. But even if not, it would be worth it.


It really doesn’t. Try looking it up… Oh wait, you won’t, so here: https://www.merriam-webster.com/dictionary/prejudice 1-b nails it, but 1-a covers it too, with “individual” and “group”, which are listed even before race.


How about this. I think it is pretty well known that pilots and astronauts are trained in simulations where some of the information they get from “tools” or gauges is wrong. On the surface it is just simulating failures, but the larger purpose is to improve critical thinking. They are trained to take each piece of information in context, and if it doesn’t fit, question it. Sound familiar?
AI spits out lots of information with every response. Much of it will be accurate. But sometimes there will be a faulty basis that causes one or more parts of the information to be wrong. The wrongness almost always follows a pattern, though. In context, the information is usually obviously wrong, and if you learn to spot the faulty basis, you can even suss out which information is still good. Or you can just tell it where it went wrong, and it will often come back with the correct answer.
Talking to people isn’t all that different. There is a whole subreddit on Reddit for people who are confidently wrong. But spotting when a person is wrong is often harder, because the depth of their faulty basis can be so much deeper than an AI’s. And they are people, so you often can’t politely question the accuracy of what they are saying. Or they are just a podcast… I think you get where I am going.


Did you even read the comment I responded to? “Whenever I find out that someone uses any of these LLMs, or Ai chatbots, hell even Alexa or Siri, my respect for them instantly plummets.”
They are literally judging someone before they know any details other than that they use any form of AI at all. It could be a cybersecurity researcher, for all the commenter knows.


I think this pretty much answers the question “are humans evil?” Spoiler: yes.


But he doesn’t actually know their actions. He knows they “use” Siri, but he knows absolutely nothing about how. If they explained in detail how they use Siri, then it would not be prejudice. But just the phrase “I use Siri” is far from knowing their actions. It’s not like “I use an ice pick”, which has one generally understood use.


Read the word. Prejudice… pre-judice… pre-judgment. Judging someone on limited information that isn’t adequate to form a reasonable opinion. Hearing that someone uses Siri and thinking less of them on that tiny fact alone is prejudice. For all you know, Siri is part of how they make a living. Or any of a thousand other reasons someone may use it and still be a good, intelligent person.


You hit on why I don’t use them. But some people don’t care about that for a variety of reasons. Doesn’t make them less than.
Anyone who tries to use AI without applying critical thinking fails at their task, because AI is simply wrong too often. So they either stop using it, or they apply critical thinking to figure out when the results are usable. But we don’t have to agree on that.


I couldn’t even finish the article. The mental gymnastics it would take to write it could only come from someone who never learned how to use AI. If anything, the article is a testament to how our children, and everyone else, should be taught to use AI effectively.


That sounds like a form of prejudice. I mean, even Siri and Alexa? I don’t use them, for different reasons… but a lot of people use them as voice-activated controls for lights, music, and such. I can’t see how they are different from the Clapper. As for the LLMs… they don’t do any critical thinking, so no one is offloading their critical thinking to them. If anything, using them requires more critical thinking, because everyone who has ever used them knows how often they are flat-out wrong.


Well, in this case it probably isn’t money he is after, but attention and fame. That said, just tell him you had a past “incident” you don’t like to talk about, but that your image shouldn’t be on anything that might give away your current location. Lol.
That’s the problem. You can go through all the pain of moving to an alternative, but eventually it enshittifies too. You could go open source, but those solutions rarely have the polish to attract the large number of users needed for niche communities. And most users won’t understand why they are better anyway. So it’s just a horrible cycle.