

And apparently the evidence for this is that she competed in beauty pageants? I swear, I’ll never understand these people


Exactly this. I don’t own any Steam hardware, nor do I expect to any time soon. However, I don’t know if I’d be running Linux as my main daily driver if not for how straightforward it is to game on Linux nowadays, thanks largely to Valve’s efforts in this area.
I did dual boot with Windows for a while, but I found that the inertia of rebooting made me more likely to just use Windows. When I discovered that basically all of my games were runnable through Proton, I got rid of Windows entirely.
I feel a lot of gratitude for the Steam Deck existing, because it makes things way easier. It’s not down to Valve’s efforts alone, but providing a solid starting point has led to a lot of community efforts and resources coalescing around it. For instance, there have been a couple of times where I’ve had issues running games but found the solution in adjusting the launch options, based on what helpful people on ProtonDB suggest. I also remember struggling for a while to figure out how to mod Baldur’s Gate 3, until I found a super useful guide written by and for Steam Deck users. The informational infrastructure around gaming on Linux is so much better than it used to be.
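For anyone curious, the sort of tweak ProtonDB users suggest is usually a one-liner pasted into a game’s Properties → Launch Options in Steam. As a purely illustrative example (the right variables depend entirely on the game), something like:

```
PROTON_USE_WINED3D=1 PROTON_LOG=1 %command%
```

asks Proton to fall back to its OpenGL-based renderer and to write out a debug log, with %command% standing in for the game’s normal launch command.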


I hate that I know the words hebephile and ephebophile. I only know them because of people who do weird mental gymnastics to justify their creepy behaviour. “Um actually, he’s an ephebophile, not a pedophile” is a huge red flag whenever I see it


The one bit of credit that I can give him is that he’s good at making people feel seen. At his rallies, he’ll ramble on about how awful things are, and then he’ll say something like “I see you, I see that you’re hurting. I’m going to fix it”. It’s hardly the peak of rhetoric, but it’s super effective when so many people are struggling but the system is constantly gaslighting them because “the economy is doing great 👍”.
I find it sad, because the people most likely to latch onto this are the ones being fucked over the most by the system. Often hatred starts as a seed of fear for one’s circumstances. I wish they would see that Trump is just exploiting them all the more.


Glad to see People Make Games cover this. They have a lot of reach


190 MB, according to the article. And when it was idling, it used only tens of MB


You see, it’s not racism if it’s based on ~~real, empirical data~~ a stubborn, intuition-driven bigotry based on thoroughly debunked science.


I’m reminded of a quote by Max Planck:
"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die”
Or more succinctly paraphrased:
“Science progresses one funeral at a time”
Thank you, James Watson, for this final contribution to science. We are better off without your festering bigotry.


I mean, Hitler at least seemed to be coming at things from a place of misguided patriotism.
Man, I feel gross just writing that out, but I think it’s true — I think I respect Hitler more than Trump.


You’re literally quoting marketing materials to me. For what it’s worth, I’ve already done more than enough research to understand where the technology is at; I dove deep into learning about machine learning in 2020, when AlphaFold 2 was taking the structural biology world by storm — I wanted to understand how it had done what it had, which started a long journey of accidentally becoming a machine learning expert (at least, compared to other biochemists and laypeople).
That knowledge informs the view in my original comment. I am (or at least, was) incredibly excited about the possibilities, and I do find much of this extremely cool. However, what has dulled my hype is how AI is being indiscriminately shoved into every orifice of society when the technology simply isn’t mature enough for that yet. Will there be some fields that experience blazing productivity gains? Certainly. But I fear any gains will be more than negated by losses in sectors where AI should not be deployed at all, or where it should be deployed far more judiciously.
Fundamentally, when considering its wider effect on society, I simply can’t trust the technology — because in the vast majority of cases where it’s being pushed, there’s a thoroughly untrustworthy corporation behind it. What’s more, there’s increasing evidence that this simply isn’t scalable. When you look at the actual money behind it, it becomes clear that the reason it’s being pushed as a magical universal multi-tool is that the companies making these models can’t make them profitable, but if they can drum up enough investor hype, they can keep kicking that can down the road. And you’re doing their work for them — you’re literally quoting advertising materials to me; I hope you’re at least getting paid for it.
I remain convinced that the models that are most prominent today are not going to be what causes mass automation on the scale you’re suggesting. They will, no doubt, continue to improve — there are so many angles of attack on that front: Mixture of Experts (MoE) and model distillation to reduce model size (this is what made DeepSeek so effective); Retrieval Augmented Generation (RAG) to reduce hallucinations and let output be grounded in a supplementary knowledgebase; reducing the harmful effects of training on synthetic data so you can do more of it before model collapse sets in. There are countless ways to incrementally improve things, but it’s just not enough to overcome the hard limits of these kinds of models.
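To make the RAG point concrete, here’s a deliberately toy sketch of the retrieval idea (nothing vendor-specific: the scoring function and the little knowledgebase below are invented for illustration, and real systems use dense embeddings and a vector store rather than word overlap):

```python
# Toy retrieval-augmented generation: pull the most relevant snippets from a
# small knowledgebase and prepend them to the prompt, so the model answers from
# supplied context rather than confabulating.

def score(query: str, doc: str) -> float:
    """Crude relevance score: fraction of query words that appear in the doc."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / max(len(q_words), 1)

def build_prompt(query: str, knowledgebase: list[str], top_k: int = 2) -> str:
    """Rank snippets by relevance and wrap the best ones around the question."""
    ranked = sorted(knowledgebase, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Refunds are processed within 14 days of the returned item arriving.",
    "Support is available by email between 09:00 and 17:00 UTC.",
    "Orders over 50 GBP ship free within the UK.",
]

# The assembled prompt would then be sent to whichever model you're using.
print(build_prompt("How long do refunds take?", kb))
```

Retrieval like that can only ever be as good as the material it draws from, which leads to my next point.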
My biggest concern, as a scientist, is that what additional progress there could be in this field is being hampered by the excessive evangelising of AI by investors and other monied interests. For example, if a company wanted to build a bot for low-risk customer service or an internal knowledgebase using RAG, the model would need access to high quality documentation to draw from — and speaking as someone who has contributed a few times to open-source software documentation, let me tell you that documentation is, on average, pretty poor quality (and open source is typically better than closed source for this, which doesn’t bode well). Devaluing human expertise and labour is just shooting ourselves in the foot: what is there to train on if most of the human writers are sacked?
Plus there’s the old notion that automation destroys low-skilled jobs but creates high-skilled roles to fix and maintain the “robots”. That isn’t even what’s happening, in my experience. Even people in highly skilled, not-currently-possible-to-automate jobs are being pushed towards AI pipelines that are systematically deskilling them; we have skilled computer scientists and data scientists who are unable to understand what goes wrong when one of these systems fucks up, because all the biggest models are closed boxes, and “troubleshooting” means acting like an entry-level IT technician and trying variations of turning it off and on again. It’s not reasonable to expect these systems to be perfect — after all, humans aren’t perfect. However, if we rely on systems that tend to make errors that are harder for human oversight to catch, while also reducing the number of people trying to catch them, that’s a recipe for trouble.
Now, I suspect this is where you might say “why bother having humans try to catch the errors when we have multimodal agentic models that are able to do it all”. My answer is that it’s a massive security hole. Humans aren’t great at vetting AI output, but we are tremendously good at breaking it. I feel like I read a paper about some ingeniously novel hack of AI every week (using “hack” as a general term for prompt injection, jailbreaks and the like). So I return to my earlier point: the technology is not mature enough for such widespread, indiscriminate rollout.
Finally, we have the problem of legal liability. There’s that old IBM slide that has done the rounds repeatedly over the last few years: “A computer can never be held accountable, therefore a computer must never make a management decision.” Often the reason we need humans keeping an eye on systems is that legal systems demand at least the semblance of accountability, and we don’t have legal frameworks for figuring out what the hell to do when AI or other machine learning systems mess up. There was a news story recently about police officers going to ticket an automated taxi (a Waymo, I think) that had broken traffic laws, only to find they didn’t know what to do once they realised it was driverless. Sure, parking fines can be sent to the company, and that doesn’t seem too hard to write regulations for, but with human drivers, racking up a large number of small violations typically leads to a larger punishment, such as one’s licence being suspended. What would the equivalent escalation even be for driverless vehicles? It seems that no-one knows, and concerns like these are causing regulators to reconsider their rollout. Sure, new laws can be passed, but our legislators are often tech illiterate, so I don’t expect them to easily solve what prominent legal and technology scholars are still grappling with. That process will take time, and the more we see high profile cases like suicides following chatbot conversations, the more cautious legislators will be. Public distrust of AI is growing, in large part because people feel it’s being forced on them, and that will just harm the technology in the long run.
I genuinely am still excited about the nuts and bolts of how all this stuff works. It’s that genuine enthusiasm which I feel situates me well to criticise the technology, because I’m coming from an earnest place of wanting to see humans make cool stuff that improves lives — that’s why I became a scientist, after all. This, however, does not feel like progress. Technology doesn’t exist in a vacuum, and if we don’t reckon with the real harms and risks of a new tool, we risk shutting ourselves off from the positive outcomes too.


Neat, I didn’t know there was a word for it


“Permanent tax on corporations that layoff people until they rehire to the same level.”
This is similar to what the historical Luddites were arguing for. (Probably worth clarifying that I say this as a good thing. The Luddites failed because they were working at a time when unions were literally illegal; the political conditions were just too stacked against them. However, there’s a lot of useful things we can learn from history, and this is one of them)
Edit: formatting


This sounds interesting. It reminds me of past workers’ movements, namely the Luddites and the UK miners’ strike. If you want to learn more about the Luddites and what they were asking for, the journalist Brian Merchant has a good book named “Blood in the Machine”.
Closer to my heart and my lived experience is the miners’ strike. I wasn’t born at the time, but I grew up in what I semi-affectionately call a “post-industrial shit hole”. A friend once expressed curiosity about what an alternative to shutting the mines would have been, especially in light of our increasing knowledge of the need to move away from fossil fuels. A big problem with how it actually happened is that entire communities were effectively built around the mines.
These communities often did have other sources of industry and commerce, but with the mines gone, it fucked everything up. There weren’t enough opportunities for people afterwards, especially because miners’ skills and experience couldn’t easily translate to other skilled work. Even if a heckton of money had been provided to “re-skill” out-of-work miners, that wouldn’t have been enough to absorb the economic calamity caused by abruptly closing a mine, precisely because of how locally concentrated the effect would be. If done all at once, for instance, you’d find a severe shortage of teachers and trainers, who would then find themselves in a similar position of needing to either move elsewhere to find work, or train in a different field. The key was that there needed to be a transition plan that acknowledged the human and economic realities of closing the mines.
Many argued, even at the time, that a gradual transition plan that actually cared about the communities affected would lead to much greater prosperity for all. Having grown up amongst the festering wounds of the miners strike, I feel this to be true. Up in the North of England, there are many who feel like they have been forgotten or discarded by the system. That causes people a lot of pain; I think it’s typical for people to want their lives to be useful in some way, but the Northern, working class manifestation of this instinct is particularly distinct.
Linking this back to your question, I think that framing it as compensation could help, but I would expect opposition to remain as long as people don’t feel like they have ways to be useful. A surprising contingent of the people who dislike social security payments that involve “getting something for nothing” are people who would themselves be beneficiaries of such payments. I link this perspective to the listlessness I described in ex-mining communities. Whilst the vast majority of us are chronically overworked (including those who may be suffering from underemployment due to automation), most people do actually want to work. Humans are social creatures, and our capacities are incredibly versatile, so it’s only natural for us to want to labour towards some greater good. I think that any successful implementation of universal basic income would require that we speak to this desire in people, and help to build a sense that having their basic living costs accounted for is an opportunity to do something meaningful with their time.
Voluntary work is the straightforward answer to this, and indeed, some of the most fulfilled people I know are those who can afford to work very little (or not at all), but are able to spend their time on things they care about. However, I see so many people not recognise what they’re doing as meaningful labour. For example, I go to a philosophy discussion group where there is one main person who liaises with the venue, collects the small fee every week (£3 per person), updates the online description for the event and keeps track of who is running each session, recruiting volunteers as needed. He doesn’t recognise the work he does as being that much work, and certainly doesn’t feel it’s enough to warrant the word “labour”. “It’s just something I do to help”; “You’re making it sound like something larger than it is — someone has to do it”. I found myself (affectionately) frustrated during this conversation because it highlights something I see everywhere: how capitalism encourages us to devalue our own labour, especially reproductive labour and other socially valuable labour. There are insufficient opportunities for meaningful contribution within the voluntary sector as it exists now, but so much of what people could and would be doing more of exists outside of that sector.
We need a cultural shift in how we think about work. However, it’s harder to facilitate that cultural shift in how we view labour if most people are forced to only see their labour in terms of wages and salaries. On the other hand, people are more likely to resist policies like UBI if they feel it presents a threat to their work-centred identity and their ability to conceive of their existence as valuable. It’s a tricky chicken-or-egg problem. Overall, this is why I think your framing could be useful, but is not likely to be sufficient to change people’s minds. I think that UBI or similar certainly is possible, but it’s hard to imagine it being implemented in our current context due to how radical it is. Far be it from me to shy away from radical choices, but I think it’s necessary to think of intermediary steps towards cultivating class consciousness and allowing people to conceive of a world where their intrinsic value is decoupled from their output under capitalism. For instance, I can’t fathom how universal basic income would work in a US without universal healthcare. It boggles my mind how badly health insurance acts to reinforce coercive labour relations. The best thing we can do to improve people’s opinion of universal basic income is to improve their material conditions.
Finally, on AI. I think my biggest disagreement with Automation Compensation as a framing device for UBI is that it inadvertently falls into the trap of “tech critihype”, which the linked author describes as “[inverting] boosters’ messages — they retain the picture of extraordinary change but focus instead on negative problems and risks.” Critihype may appear to criticise something, but actually ends up feeding the hype cycle, and in turn, is nourished by it. The problem with AI isn’t that it is going to end up replacing a significant chunk of the workforce, but rather that penny-pinching managers can be convinced that AI is (or will be) able to do that.
I like the way that Brian Merchant describes the real problem of AI on his blog:
"[…] the real AI jobs crisis is that the drumbeat, marketing, and pop culture of “powerful AI” encourages and permits management to replace or degrade jobs they might not otherwise have. More important than the technological change, perhaps, is the change in a social permission structure.”
This critical approach is extra important when we consider that the jobs and fields most heavily being affected by AI are in creative fields. We’ve probably all seen memes that say “I want an AI to automate doing the dishes so that I can do art, not automate doing art so I can spend more time doing the dishes”. Universal Basic Income would be limited in alleviating social angst unless we can disrupt the pervasive devaluation of human life and effort that the AI hype machine is powering.
Though I have ended up disagreeing with your suggestion, thanks for posing this question. It’s an interesting one to ponder, and I certainly didn’t expect to write this much when I started. I hope you find my response equally interesting.


What does OC mean in this context?


A friend has extremely asymmetrical breasts, so a bra that fits their larger breast doesn’t fit their smaller one. They have a gel insert to put into that cup to account for this, but they also made a little pocket pouch in the same shape/size.
A lot of pushup bras also have a little pocket for a smaller kind of gel insert. I know a couple people who find that pocket useful for hiding valuable and/or illicit things (e.g. drugs)
I love the glee that women show when displaying their pockets. One of the first things I did when learning to sew was add pockets to many of my garments, so I get to do this quite frequently. It’s a sweet slice of trivial solidarity


Neat! This was so fun to learn about, thank you for sharing. Xiaolin Wu did not live in vain after all, because of nerds like us


Some of the best artists I know are people who started out without a single iota of talent, but they practiced for long enough that they got good. I reckon that talent probably does exist, but it’s a far smaller component than many believe. Hard work beats talent when talent doesn’t work hard.
People who are most likely to emphasise talent in art tend to be people who wish they were good at art, but aren’t willing (or able) to put the time into improving; it feels oddly reassuring to tell oneself that it’s pointless to try if you don’t start out with talent, rather than being realistic and saying “I wish I were good at art, but I am choosing not to invest in that skill because it’s not one of my priorities”


I’m not sure how well this works for a bingo square, but something I thought of is how nigh on every paywalled article has an archive link, either in the post body itself or in the comments.
I find it heartwarming that such considerate behaviour seems to be the norm here. It inspires me to be one of those helpful people adding an unpaywalled link in the comments in the rare case of finding an interesting post where that hasn’t been done yet. It makes me feel like I’m part of a community.


I hope it’s an entertaining trash fire for you, at the very least