

Bing / ChatGPT already does this. I’ve seen it hightail out of tasks or outright lie that it’s still working on something when it obviously isn’t.
You jest, but I’m keeping an open mind after the surprising popularity of Twitch. I could see people watching AIs play games for them and making temperature adjustments to things like proclivity toward side quests or how serious vs. slapstick the play would be.
Yeah, maybe. Switching infrastructure would be a headache and expensive, though. Last I checked, the off-the-shelf versions, which is how I would want to start at least, didn’t have Wi-Fi capability. Is there a turnkey version that does now?
This has been a huge letdown. I thought at the very least home assistants, which are marginally useful, could become less infuriating with an intelligence boost, but not at all. I’d be happy if I could simply upload a damn 64 KB thesaurus to my Alexa at this point so she wouldn’t ignore everything I say if I don’t remember the exact right commands.
This does seem to be exactly the problem. It is solvable, but I haven’t seen any that do it. They should be able to calculate a confidence value based on the number of corroborating sources, a quality ranking of those sources, and how much interpolation of data is being done vs. straightforward regurgitation of facts.
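For what it’s worth, a back-of-the-napkin sketch of the kind of scoring I mean. Every weight, constant, and parameter name here is made up for illustration, not taken from any real system:

```python
# Hypothetical confidence score for an AI-generated answer, blending
# source count, source quality, and how much interpolation was done.
# All weights and scaling constants are invented for this sketch.

def confidence_score(num_sources: int,
                     avg_source_quality: float,   # 0.0-1.0, e.g. from some domain ranking
                     interpolation_ratio: float   # 0.0 = pure regurgitation, 1.0 = pure inference
                     ) -> float:
    # Diminishing returns on corroboration: 1 source ~0.5, 4+ approaches 1.0
    corroboration = 1.0 - 0.5 ** num_sources
    # Directly sourced facts score higher than interpolated content
    directness = 1.0 - interpolation_ratio
    # Simple weighted blend; a real system would calibrate these empirically
    return 0.4 * corroboration + 0.3 * avg_source_quality + 0.3 * directness
```

The point isn’t these exact numbers, just that all three signals the comment lists are cheap to combine into one number the assistant could surface alongside its answer.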
By that argument, the time was a long time ago, then. Vivaldi still works with uBlock, so nothing has changed on their end. I think it’s still reasonable to use Vivaldi until they are forced onto Manifest V3. Despite being Chromium-based, they’ve always been privacy-focused and vocally pro ad blocking. As for the cult of Firefox, they’ve been showing their true colors lately. They are no saints, and their biggest funder is Google. Never forget to follow the money. I’m not personally convinced that a switch on purely ideological grounds is indicated.
I can’t even get past how gross it is to see this many cops standing in front of a business where there is no visible evidence of violent activity that might harm people. It just looks like a statement that this is our priority: corporate welfare for the uber-rich and the worst possible human specimens, at the cost of taxpayer dollars. It’s so gross I can’t even work my way to the nuances of how these slobs are dressed.
The guy is 99% bullshit. This is like calling water wet.
That Biden guy has a lot of free time on his hands now. I bet he’s built some gnarly botnets. Someone should look into this.
Hope you keep talking to those people. Wear them down and get them to change. Even if it’s a small company, losing corporate traffic is a hit to X, and we need to see more of that. I’m at the point where I’m repulsed when I see X links on business websites I need to interact with.
I’m just worried about how ZDNET presumes to know what I think.
Google kills things, that’s what they do. They can get fucked.
Always welcome something new at the interface between coffee and technology. I’ve only dabbled in coding, but my impression is that serious coders deal with the CLI more efficiently and sometimes feel bogged down by graphical interfaces. I’d have to be pretty caffeinated myself to interact with something like this. Also, the idea of writing something down to later enter it into a database seems inefficient, for my workflow at least. That’s my unsolicited opinion, but I think this will likely appeal quite nicely to a niche audience. Glad you’re sharing the code; if someone runs with a graphical mobile version of this (much to your chagrin, perhaps), I’d give that a go.
I hadn’t noticed.
It’s going to be so easy for him to turn that X into a swastika in a couple of years, though. It’s what’s known in tech circles as IED: “iterative evil design.”
I would also like to know more.
Sure. The goal is more perfect here, not perfect.
Well, yes. Garbage in, garbage out, of course.
I wouldn’t say definitely. AI is of course subject to bias based on its training, but humans very much are too, and inconsistently so. If you are putting a liver into a patient who has poorer access to healthcare, they are less likely to get as many life-years as someone with better access. If that correlates with race, is this the junction where you want to make a symbolic gesture about equality by using that liver in a situation where it is likely to fail? Some people would say yes. I’d argue that those efforts toward improved equality are better spent further upstream. It gets complicated quickly: if you want it to be objective and scientifically successful, I think the less human bias the better.
That’s not what the article is about. I think putting more objectivity into the decisions you listed, for example, benefits the majority. Human factors will lean toward minority factions consisting of people of wealth, power, similar race, how “nice” they might be, or how many vocal advocates they might have. This paper just states that current AIs aren’t very good at what we would call moral judgment.
It seems like algorithms would be the most objective way to do this, but I could see AI contributing by looking for more complicated outcome trends. E.g., “Hey, it looks like people with this gene mutation and chronically uncontrolled hypertension tend to live less than 5 years after cardiac transplant; consider weighting your existing algorithm by 0.5%.”
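Something like this toy sketch, where the gene flag, the hypertension factor, and the 0.5% figure are all just the hypothetical example from above, not real clinical criteria:

```python
# Sketch of an AI-flagged outcome trend nudging an existing allocation
# score. The risk factors and the 0.5% adjustment are purely
# illustrative, echoing the hypothetical example in the comment.

def adjusted_priority(base_score: float,
                      has_flagged_mutation: bool,
                      uncontrolled_hypertension: bool) -> float:
    # Down-weight by 0.5% only when both flagged risk factors co-occur;
    # otherwise the existing algorithm's score passes through untouched.
    if has_flagged_mutation and uncontrolled_hypertension:
        return base_score * (1.0 - 0.005)
    return base_score
```

The appeal of this shape is that the human-designed algorithm stays in charge; the AI only proposes small, auditable tweaks to its weights.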
I’m gonna go ahead and try without a Google search.