• 0 Posts
  • 143 Comments
Joined 2 years ago
Cake day: June 9th, 2023


  • At a pub recently, a woman who was also queuing for a drink was bopping along to the background music, and she started singing a lil. I reflexively looked in her direction, which seemed to cause her to stop singing, presumably out of self-consciousness. I was devastated that I had inadvertently impeded her ability to have fun.

    I decided that I would start humming along to the music in a low-key way, in hopes that she would feel less silly, and perhaps even think “the woman who looked over at me was just someone else who enjoys this song”. After a few minutes, she started singing again. Maybe it was nothing to do with my actions, but I was delighted nonetheless.

    There were no direct interactions between me and the singing woman. I received my drink and left the area. On reflection, it was a very British interaction.


  • I have a random question, if you would indulge my curiosity: why do you use ‘þ’ in place of ‘th’? It’s rare that I see people using thorn in a modern context, and I was wondering why you would go to the effort?

    (þis question brought to you by me reflecting on your use of þorn, and specifically how my initial instinctual response was to be irked because it makes þings harder to read (as someone who isn’t used to seeing ‘þ’). However, I quickly realised þat being challenged in þis way is one of þe þings I value about conversations on þis platform, and I decided þat being curious would be much more fun and interesting than being needlessly irritable (as it appears some oþers opt to be, given how I sometimes see unobjectionable comments of yours gaþer inexplicable downvotes. I have written þis postscriptum using “þ” because I þought it would be an amusing way to demonstrate þe good-faiþedness of my question, as I’m sure you get asked þis a lot))


  • A lot of them went into academia, the poor fuckers. My old university tutor comes to mind as the best of what they can hope for from that path. He did relatively well for himself as a scientist, but I reckon he was a far better scientist than his level of prestige in that area would suggest.

    There’s one paper he published that was met with little fanfare, but then a few years later, someone else published more or less the same research and it massively blew up. This wasn’t a case of plagiarism (as far as I can tell), nor a conscious attempt to replicate my tutor’s research. The general research climate at the time is a plausible explanation (perhaps my tutor was ahead of the times by a few years), but this doesn’t feel sufficient to explain it. I think it’s mostly that the author of this new paper is someone who is extremely ambitious, in a way that places a lot of value on gaining respect and prestige. I’ve spoken to people who worked in that other scientist’s lab, and apparently they can be quite vicious in how they act within their research community (though I am confident that there’s no personal beef between this researcher and my old tutor — they had presented at the same conference, but had had no interactions and seemed to be largely unaware of the other’s existence). Apparently this researcher does good science, but gives off the vibe that they care more for climbing the ranks than for doing good science; they can be quite nasty in how they respond to people whose work disrupts their own theories.

    I suspect that it’s a case of priorities. My tutor also does good research, but part of why he left such an impact on me was that he has such earnest care in his teaching roles. He works at a pretty prestigious university, and there are plenty of tutors there who do the bare minimum teaching necessary to get access to perks like fancy formal dinners, and the prestige of being a tutor — tutors who seem to regard their students as inconvenient obstacles to what they really care about. It highlights to me a sad problem in what we tend to value in the sciences, and academia more generally: the people who add the most to the growth of human knowledge are often the people who the history books will not care to remember.


  • As a society, we need to better value the labour that goes into our collective knowledge bases. Non-English Wikipedia is just one example of this, but it highlights the core of the problem: the system relies on a tremendous amount of skilled labour that cannot easily be done by just a few volunteers.

    Paying people to contribute would come with problems of its own (in a hypothetical world where this was permitted by Wikipedia, which I don’t believe it is at present), but it would be easier for people to contribute if the time they wanted to volunteer weren’t competing with their need to keep their heads above water financially. Universal basic income, or something similar, seems like one of the more viable ways to ease this tension.

    However, a big component of the problem is the less concrete side of how society values things. I’m a scientist in an area where we are increasingly reliant on scientific databases, such as the Protein Data Bank (PDB), where experimentally determined protein structures are deposited and annotated, as well as countless databases on different genes and their functions. Active curation of these databases is how we’re able to research a gene in one model organism, and then apply those insights to the equivalent gene in other organisms.

    For example, CG9536 is the name of a gene found in Drosophila melanogaster — fruit flies, a common model organism for genetic research due to the ease of working with them in a lab. Much of the research around this particular gene can be found on FlyBase, a database for D. melanogaster gene research. Despite fruit flies being super different to humans, many of their genes have human equivalents, and CG9536 is no exception; TMEM115 is what we call it in humans. The TL;DR answer of what this gene does is “we don’t know”, because although we have some knowledge of what it does, the tricky part about this kind of research is figuring out how genes or proteins interact as part of a wider system — even if we knew exactly what a gene does in a healthy person, for example, it’s much harder to understand what kinds of illnesses arise from a faulty version of it, or whether a gene or protein could be a target for developing novel drugs. I don’t know much about TMEM115 specifically, but I know someone who was exploring whether it could be relevant in understanding how certain kinds of brain tumours develop. Biological databases are a core component of how we can begin to make sense of the bigger picture.

    Whilst the data that fill these databases are produced by experimental research attached to published papers, there’s a tremendous amount of work that makes all these resources talk to each other. The FlyBase page above links out to TMEM115, and I can use these resources to synthesise research across fields that would previously have been siloed: the folks who work on flies have a different research culture than those who work in human gene research, or yeast, or plants etc. TMEM115 is also sometimes called TM115, and it would be a nightmare if a scientist reviewing the literature missed some important existing research that referred to the gene under a slightly different name.
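    To make the synonym problem concrete, here’s a toy sketch in Python of the kind of name normalisation a literature search needs. The synonym table and paper titles are made up for illustration; a real pipeline would pull synonyms from the databases themselves rather than hard-coding them:

    ```python
    # Toy gene-name normalisation: map every known alias to one canonical
    # symbol before matching, so that papers using an older or
    # organism-specific name aren't silently missed.
    SYNONYMS = {
        "CG9536": "TMEM115",   # D. melanogaster name for the equivalent gene
        "TM115": "TMEM115",    # older alternative human symbol
        "TMEM115": "TMEM115",  # canonical human symbol
    }

    def canonical(name: str) -> str:
        """Return the canonical symbol for a gene name, if we know one."""
        return SYNONYMS.get(name.upper(), name)

    # Hypothetical paper titles: a naive search for "TMEM115" finds neither.
    titles = [
        "TM115 localises to the Golgi apparatus",
        "A screen of CG9536 mutants in Drosophila",
    ]

    hits = [t for t in titles
            if any(canonical(w.strip(",.")) == "TMEM115" for w in t.split())]
    print(hits)  # both titles match once names are normalised
    ```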

    Making these biological databases link up properly requires active curation, a process that the philosopher of science Sabine Leonelli refers to as “data packaging”, a challenging task that includes asking “who else might find this data useful?” [1]. The people doing the experiments that produce the data aren’t necessarily the best people for figuring out how to package and label that data for others to use, because this inherently requires thinking in a way that spans many different research subfields. Crucially though, this infrastructure work gives a scientist far fewer opportunities to publish new papers, which means this essential labour is devalued in our current system of doing science.

    It’s rather like how some of the people who are adding poor-quality articles to non-English Wikipedia feel like they’re contributing, because using automated tools allows them to create more new articles than someone with actual specialist knowledge could. It’s the product of a culture of an ever-hungry “more” that fuels the production of slop, devalues the work of curators, and is degrading our knowledge ecosystem. The financial incentives that drive this behaviour play a big role, but I see them as a symptom of a wider problem: society’s desire to easily quantify value causes important work that’s harder to quantify to be systematically devalued (a problem we also see in how reproductive labour, i.e. the labour involved in managing a family or household, has historically been dismissed).

    We need to start recognising how tenuous our collective knowledge is. The OP discusses languages with few native speakers, which likely won’t affect many who read the article, but we’re at risk of losing so much more if we don’t learn this lesson. The more we learn, the more we need to invest in expanding our systems of knowledge infrastructure, as well as in maintaining what we already have.


    [1]: I am citing not the paper in which Sabine Leonelli coined the phrase “data packaging”, but her 2016 book “Data-Centric Biology: A Philosophical Study”. I don’t imagine that many people will read this large comment of mine, but if you’ve made it this far, you might be interested to check out her work. Though it’s not aimed at a general audience, it’s still fairly accessible, if you’re the kind of nerd who is interested in discussing the messy problem of making a database usable by everyone.

    If your appetite for learning is larger than your wallet, then I’d suggest that Anna’s Archive or similar is a good shout. Some communities aren’t cool with directly linking to resources like this, so know that you can check the Wikipedia page of shadow library sites to find a reliable link: https://en.wikipedia.org/wiki/Anna's_Archive






  • I don’t find the latency with Bluetooth headphones to be a problem if I’m just watching videos, but it’s super jarring if I’m doing something like gaming.

    It’s interesting because my current headphones (SteelSeries Arctis Nova Pro Wireless) can connect via Bluetooth, or wirelessly to a little dock that’s plugged into my PC (essentially a more complex dongle that has a few settings on it, plus a battery charger). This means that I can easily compare the Bluetooth latency to the dock’s latency, and it’s interesting to see the difference. I haven’t compared wired latency to the dock-wireless, but I certainly haven’t noticed any problems with the dock-wireless.
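    If you wanted to put rough numbers on that comparison, here’s a sketch of how I’d go about it (Python, assuming the numpy and sounddevice packages, with a mic held against one earcup). It measures round-trip latency rather than output latency alone, but that’s fine for comparing two outputs on the same setup:

    ```python
    # Rough latency probe: play a click through the selected output, record
    # it back through the mic, and find where the click lands.
    import numpy as np
    import sounddevice as sd

    RATE = 48000
    click = np.zeros(RATE, dtype=np.float32)  # one second of silence...
    click[:48] = 1.0                          # ...with a ~1 ms impulse at the start

    # Run sd.query_devices() and set device=(mic_id, output_id) to pick the
    # Bluetooth connection or the dock; None uses the system defaults.
    recording = sd.playrec(click, samplerate=RATE, channels=1, device=None)
    sd.wait()

    delay = int(np.argmax(np.abs(recording)))  # loudest sample ≈ click arrival
    print(f"Round-trip delay: {delay / RATE * 1000:.1f} ms")
    ```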

    A weird thing about these headphones is that the Bluetooth and the dock-wireless seem to work on different channels, because I can be connected to my phone’s audio by Bluetooth, and to my PC’s audio via the dock. I discovered this randomly after like a year of owning the headphones.

    They were quite expensive, but I rather like them, and would recommend them to someone who wants a “jack of all trades” pair of headphones. They were plug and play with Linux, which is a big part of why I got them.






  • I disagree, on the basis that sometimes, when I want to get my friends (the majority of whom are ex-emo kids) to start singing that song, the trigger phrase is for me to go “Wake me up” in that somewhat rough way the male vocalist does it, and then my friends are often compelled to sing Amy Lee’s part. It’s a simple spell, but quite unbreakable.



  • They don’t want people off the streets. The right thrive on stoking the fear and resentment of their base, and what better fuel for the fire than people on the lowest rungs of society?

    This is why I have been increasingly frustrated at the UK’s current government, who are shitting themselves about the rise of the right-wing Reform party, but refuse to understand that by capitulating to its “stop the boats” anti-immigrant rhetoric, they’re just yielding more ground to the reactionary right.

    I don’t expect establishment politicians to actually give a fuck about regular people, but they are actively at risk of losing their political power if they continue to ignore the actual root causes of the social malaise that Reform are exploiting. It’s beyond obvious that we are in dire need of investment in services and infrastructure, but I guess they’re afraid of pissing off their political donors and other people with unelected power (billionaires etc.).





  • “not that hard to do”

    Eh, I’m not so sure on that. I often find myself tripping up on the xkcd Average Familiarity problem, so I worry that this assumption is inadvertently a bit gatekeepy.

    It’s the unfortunate reality that modern tech makes it pretty hard for people to learn the kinds of skills necessary to customise their own tools. As a chronic tinkerer, I find it easy to underestimate how overwhelming it must feel for people who want to learn but have only ever been taught to interface with tech as a “user”. Coming from that kind of background requires a pretty high level of curiosity and drive to learn, and that’s a high bar to clear. I don’t know how techy you consider yourself to be, but I’d wager that anyone who cares about whether something is open source is closer to a techy person than the average person.


  • Sidestepping the debate about whether AI art is actually fair use, I do find the fair use doctrine an interesting lens through which to look at the wider issue — in particular, how deciding whether something is fair use isn’t a matter of comparing a case against a straightforward checklist, but of weighing it along a fairly dynamic spectrum.

    It’s possible that something could be:

    • Highly transformative
    • Drawn from a published work that is primarily factual in nature (such as a biography)
    • Distributed to a different market than the original work

    and still not be considered fair use, if it used the entirety of the base work without modification (in this case, the “highly transformative” part would pertain to how the chunks of the base work are presented).

    I’m no lawyer, but I find the theory behind fair use pretty interesting. In practice, it leaves a lot to be desired (consider how YouTube’s Content ID tramples on what would almost certainly be fair use: Google wants to avoid being taken to court by rights holders, so it preempts the problem by being overly harsh towards potential infringement). However, my broad point is that whether a court decides something is fair use relies on a holistic assessment that considers all four pillars of fair use, including how strongly each applies.

    AI trained on artists’ works is different to making a collage because of the scale of the scraping — a huge amount of copyrighted work has been used, and entire works were used, even if the processing of them were considered transformative (let’s say, for the sake of argument, that training an AI is highly transformative). The pillar that AI runs up against the most, though, is “the effect of the use upon the potential market”. AI has already had a huge impact on the market for artistic works, and it is having a hugely negative impact on people’s ability to make a living through their art (or other creative endeavours, like writing). What’s more, the companies who are pushing AI are making inordinate amounts of revenue, which makes the whole thing feel especially egregious.

    We can draw on the ideas of fair use to understand why so many people feel that AI training is “stealing” art whilst being okay with collage. In particular, it’s useful to ask what the point of fair use is. Why have a fair use exemption to copyright at all? One of the purposes of copyright is meant to be to encourage people to make more creative works — if you’re unable to make any money from your efforts because you’re competing with people selling your own work faster than you can, then you’re pretty strongly disincentivised to make anything at all. Fair use is a pragmatic exemption carved out of the recognition that if copyright is overly restrictive, it will end up making it disproportionately hard to make new stuff. Fair use is as nebulously defined as it is because it is, in theory, guided by the principle of upholding the spirit of copyright.

    Now, I’m not arguing that training an AI (or generating AI art) isn’t fair use — I don’t feel equipped to answer that particular question. As a layperson, it seems like current copyright laws aren’t really working in this digital age we find ourselves in, even before we consider AI. Though perhaps it’s silly to blame computers for this, when copyright wasn’t really helping individual artists much even before computers became commonplace. Some argue that we need new copyright laws to protect against AI, but Cory Doctorow makes a compelling argument about how this will just end up biting artists in the ass even worse than the AI. Copyright probably isn’t the right lever to pull to solve this particular problem, but it’s still a useful thing to consider if we want to understand the shape of the whole problem.

    As I see it, copyright exists because we, as a society, said we wanted to encourage people to make stuff, because that enriches society. However, that goal was in tension with the realities of living under capitalism, so we tried to resolve the tension through copyright laws. Copyright presented new problems, which led to the fair use doctrine, which comes with problems of its own, with or without AI. The reason people consider AI training to be stealing is that they understand AI as a dire threat to the production of creative works, and they attempt to articulate this through the familiar language of copyright. However, that’s a poor framework for addressing the problem that AI art poses. We would do better to strip this down to its ethical core so we can see the actual tension that people are responding to.

    Maybe we need a more radical approach to this problem. One interesting suggestion that I’ve seen is that we should scrap copyright entirely and implement a generous universal basic income (UBI), along with other social safety nets. If creatives were free to make things without worrying about fulfilling basic living needs, the problem of AI scraping would be far lower stakes for individual creatives. One problem with this is that most people would prefer to earn more than what even a generous UBI would provide, so they would probably still feel cheated by Generative AI. However, the argument goes that Generative AI cannot compare to human artists when it comes to producing novel or distinctive art, so the most reliable way to obtain meaningful art would be to give financial support to the artists (especially if an individual is after something of a particular style). I’m not sure how viable this approach would be in practice, but I think that discussing more radical ideas like this is useful in figuring out what the heck to do.


  • I get what you’re saying.

    I often find myself being the person in the room with the most knowledge about how Generative AI (and other machine learning) works, so I tend to be the person who answers questions from people wanting to check whether their intuition is correct. Yesterday, someone asked me whether LLMs have any potential uses, or whether the technology is fundamentally useless, and the way they phrased it allowed me to articulate something better than I had previously been able to.

    The TL;DR was that I actually think that LLMs have a lot of promise as a technology, but not like this; the way they are being rolled out indiscriminately, even in domains where it would be completely inappropriate, is actually obstructive to properly researching and implementing these tools in a useful way. The problem at the core is that AI is only being shoved down our throats because powerful people want to make more money, at any cost — as long as they are not the ones bearing that cost. My view is that we won’t get to find out the true promise of the technology until we break apart the bullshit economics driving this hype machine.

    I agree that even today, it’s possible for the tools to be used in a way that’s empowering for the humans using them, but it seems like the people doing that are in the minority. It seems pretty hard for a tech layperson to do that kind of stuff, not least of all because most people struggle to discern the bullshit from the genuinely useful (and I don’t blame them for being overwhelmed). I don’t think the current environment is conducive to people learning to build those kinds of workflows. I often use myself as a sort of anti-benchmark in areas like this, because I am an exceedingly stubborn person who likes to tinker, and if even I find something exhausting to learn, it seems unreasonable to expect the majority of people to be able to.

    I like the comic’s example of Photoshop’s background remover, because I doubt I’d know as many people who make cool stuff in Photoshop without helpful bits of automation like that (“cool stuff” in this case often means amusing memes or jokes, but for many, that’s the starting point for continuing to grow). I’m all for increasing the accessibility of an endeavour. However, the positive arguments for Generative AI often feel like they’re reinforcing gatekeeping rather than actually increasing accessibility; they implicitly divide people into the static categories of Artist and Non-Artist, and then argue that Generative AI is the only way for Non-Artists to make art. This promotes a sense of defeatism by suggesting that it’s not possible for a Non-Artist to ever gain worthwhile levels of skill. As someone who sits squarely in the grey area between “artist” and “non-artist”, this makes me feel deeply uncomfortable.