Off-and-on trying out an account over at @[email protected] due to scraping bots bogging down lemmy.today to the point of near-unusability.

  • 10 Posts
  • 899 Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • Hmm. While I don’t know what their QA workflow is, my own experience is that working with QA people to design a QA procedure for a given feature tends to require familiarity with the feature in the context of real-world knowledge and possible problems, and that human-validating a feature isn’t usually something done at massive scale, where you’d get a lot of benefit from heavy automation.

    It’s possible that one might be able to use LLMs to help write test code — reliability and security considerations there are normally less-critical than in front-line code. Worst case is getting a false positive, and if you can get more test cases covered, I imagine that might pay off.

    Square does an MMO, among their other stuff. If they can train a model to produce AI-driven characters that act sufficiently like human players, where they can theoretically log training data from human players, that might be sufficient to populate an MMO “experimental” deployment so that they can see if anything breaks prior to moving code to production.

    “Because I would love to be able to start up 10,000 instances of a game in the cloud, so there’s 10,000 copies of the game running, deploy an AI bot to spend all night testing that game, then in the morning we get a report. Because that would be transformational.”

    I think that the problem is that you’re likely going to need more-advanced AI than an LLM, if you want them to just explore and try out new features.

    One former Respawn employee who worked in a senior QA role told Business Insider that he believes one of the reasons he was among 100 colleagues laid off this past spring is because AI was reviewing and summarising feedback from play testers, a job he usually did.

    We can do a reasonable job of summarizing human language with LLMs today. I think that that might be a viable application.


  • At a meeting in April, xAI staff lawyer Lily Lim told employees that they would need to submit their biometric data to train the AI companion to be more human-like in its interactions with customers, according to a recording of the meeting reviewed by the Journal.

    Employees that were assigned as AI tutors were instructed to sign release forms granting xAI “a perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license” to use, reproduce, and distribute their faces and voices, as part of a confidential program code-named “Project Skippy.” The data would be used to train Ani, as well as Grok’s other AI companions.

    Huh.

    I wonder if xAI has transsexual employees, and if so, how socially-conservative users feel about conversing with a composite AI incorporating said data sources.






  • I don’t think that there’s a “too big”, if you can figure out a way to economically do it and fill it with worthwhile content.

    But I don’t feel like Cyberpunk 2077’s map size is the limiting factor. Like, there’s a lot of the map that just doesn’t see all that much usage in the game, even though it’s full of modeled and textured stuff. You maybe have one mission in the general vicinity, and that’s it. If I were going to ask for resources to be put somewhere in the game to improve it, it wouldn’t be on more map. It’d be on stuff like:

    • More-complex, interesting combat mechanics.

    • More missions on existing map.

    • More varied/interesting missions. Cyberpunk 2077 kinda gave me more of a GTA feel than a Fallout feel.

    • A home that one can build up and customize. I mean, Cyberpunk 2077 doesn’t really have the analog of Fallout 4’s Home Plate.

    • The city changing more over time and in response to game events.


  • From what I have read, he’s still likely to be able to line up enough votes to get his $1 trillion pay package (and the associated voting rights), despite a lot of major institutional investors being in opposition. But we’ll see when the vote goes through.

    I think that Tesla can probably get a more-effective CEO for less money, personally. Even if he leaves as CEO, he still owns 15% of Tesla and is fabulously wealthy as a result. I don’t feel like he’s getting a bad deal.

    I do think that there are some arguments that the SEC should pass some regulation to help ensure board-CEO independence; part of the issue is that the board, which is supposed to oversee Musk, has been considered to be acting on his behalf by quite a few people. I don’t think that it will happen under the present administration, though.


  • Oh, okay, I didn’t realize that you were trying to just ask people here about their search engine, rather than link to an article about Orion.

    Well, I use Kagi’s search engine. They basically do what I wish Google and YouTube and suchlike would do — just make their money by charging a fee and providing a service, rather than trying to harvest data and show ads. I use search more than any other service online, and there isn’t really a realistic way for me to run my own Web-spanning search engine and get reasonable, private results. I don’t really make use of most of their add-on features other than their “Fediverse Forums” thing that can search all Threadiverse hosts, which is helpful, and occasionally their Usenet search functionality. My principal interest in them is from a privacy standpoint, and I’m happy with them on that front; they don’t log or data-mine.

    EDIT: They do have some sort of way to issue searches without telling Kagi which user at Kagi you are, if you’re worried about them secretly retaining your search results anyway, which I think is technically interesting, but I really don’t care that much. If a wide range of websites adopted the system, that’d be interesting, maybe.

    EDIT2: Privacy Pass. Might be the protocol of the same name that CloudFlare uses. I’ve never really dug into it.
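
    For the curious, the unlinkability property that makes this sort of scheme interesting can be illustrated with a toy blind signature. This is a textbook RSA sketch of the general idea — real Privacy Pass uses a VOPRF rather than RSA blinding, and these numbers are far too small to be secure:

```python
# Toy RSA blind-signature demo of the idea behind Privacy Pass-style tokens:
# the server signs a token it never sees, so redeeming the token later
# can't be linked back to the request that issued it.
# (Textbook-sized numbers for illustration only -- not secure.)
p, q = 61, 53
n = p * q                  # 3233
e, d = 17, 2753            # public / private exponents, e*d = 1 mod phi(n)

token = 1234               # client's secret token, as a number < n
r = 7                      # random blinding factor, coprime with n

blinded = (token * pow(r, e, n)) % n                # client blinds the token
signed_blinded = pow(blinded, d, n)                 # server signs blindly
signature = (signed_blinded * pow(r, -1, n)) % n    # client strips the blind

valid = pow(signature, e, n) == token               # verifiable by anyone later
```

    The server only ever sees `blinded`, which is statistically unrelated to `token`, yet the unblinded signature still verifies.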

    EDIT3: Some of their functionality (user-customizable search bangs, for example) can also be done browser-side, if your browser supports it and you rig it up that way. Like, I had Firefox set up to make "!gm <query>" do a Google Maps search before Kagi did, and chuckled when I realized that they defaulted to the same convention that I had.
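
    The browser-side version is just string substitution. A minimal sketch (the bang mappings and fallback URL here are illustrative, not any engine’s official list):

```python
from urllib.parse import quote_plus

# Sketch of browser-side "bang" expansion, like Firefox keyword searches:
# a !prefix selects a URL template, and the rest of the query fills it in.
# (Mappings below are examples, not Kagi's or DuckDuckGo's actual bang list.)
BANGS = {
    "!gm": "https://www.google.com/maps/search/{q}",
    "!w": "https://en.wikipedia.org/wiki/Special:Search?search={q}",
}

def expand(query):
    bang, _, rest = query.partition(" ")
    template = BANGS.get(bang)
    if template:
        return template.format(q=quote_plus(rest))
    # no bang: fall through to whatever the default engine is
    return "https://example.com/search?q=" + quote_plus(query)
```
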

    EDIT4: Oh, their images search does let you view a proxied view of the image (so that the site with the result doesn’t know that you’re viewing the image) and lets one save the image. IIRC, Google Images used to do something like that, though I don’t believe they do now, so places like pinterest that try to make saving an image a pain are obnoxious. Firefox on the desktop still lets one save any image visible on a webpage (click the lock icon in the URL bar, click “Connection Secure”, click “More Information”, click “Media”, and then scroll through the list until you find the image in question), but I’d just as soon not jump through the hoops, and Kagi just eliminates the whole headache.

    EDIT5: They try to identify and flag paywalled sites in their results, unlike Google. For example, if you kagi for “the economist American policy is splitting, state by state, into two blocs”, you’ll get a result with a little dollar sign icon. This can be helpful, though archive.today will let one effectively bypass many paywalls, which somewhat reduces the obnoxiousness of getting paywalled results just mixed in with non-paywalled results on Google.



  • tal@lemmy.today to Comic Strips@lemmy.world: “Ancestors” (3 days ago)

    “If everything around you is dying, go to ground. Feed on their corpses to survive. Mate. When trees regrow, take advantage of them. No matter how long it seems, the hard times will end. In the end, it is not the largest or the strongest that will dominate, but the survivors.”

    Purgatorius, great-grand-daddy of us, 66 million years back

    In life, it would have resembled a squirrel or a tree shrew (most likely the latter, given that tree shrews are one of the closest living relatives of primates, and Purgatorius is considered to be the progenitor to primates)

    The oldest remains of Purgatorius date back to ~65.921 mya, or between 105 thousand to 139 thousand years after the K-Pg boundary.[4]

    https://en.wikipedia.org/wiki/Cretaceous–Paleogene_extinction_event

    The Cretaceous–Paleogene (K–Pg) extinction event,[a] formerly known as the Cretaceous-Tertiary (K–T) extinction event,[b] was a major mass extinction of three-quarters of the plant and animal species on Earth[2][3] approximately 66 million years ago. The event caused the extinction of all non-avian dinosaurs. Most other tetrapods weighing more than 25 kg (55 lb) also became extinct, with the exception of some ectothermic species such as sea turtles and crocodilians.

    A wide range of terrestrial species perished in the K–Pg mass extinction, the best-known being the non-avian dinosaurs, along with many mammals, birds,[22] lizards,[23] insects,[24][25] plants, and all of the pterosaurs.[26] In the Earth’s oceans, the K–Pg mass extinction killed off plesiosaurs and mosasaurs and devastated teleost fish,[27] sharks, mollusks (especially ammonites and rudists, which became extinct), and many species of plankton. It is estimated that 75% or more of all animal and marine species on Earth vanished.[28] However, the extinction also provided evolutionary opportunities: in its wake, many groups underwent remarkable adaptive radiation—sudden and prolific divergence into new forms and species within the disrupted and emptied ecological niches. Mammals in particular diversified in the following Paleogene Period,[29] evolving new forms such as horses, whales, bats, and primates.

    K–Pg boundary mammalian species were generally small, comparable in size to rats; this small size would have helped them find shelter in protected environments. It is postulated that some early monotremes, marsupials, and placentals were semiaquatic or burrowing, as there are multiple mammalian lineages with such habits today. Any burrowing or semiaquatic mammal would have had additional protection from K–Pg boundary environmental stresses.[94]

    Due to the wholesale destruction of plants at the K–Pg boundary, there was a proliferation of saprotrophic organisms, such as fungi, that do not require photosynthesis and use nutrients from decaying vegetation. The dominance of fungal species lasted only a few years while the atmosphere cleared and plenty of organic matter to feed on was present. Once the atmosphere cleared photosynthetic organisms returned – initially ferns and other ground-level plants.[172]



  • I do not game on phones, but my best experiences have, ironically, been with ‘gaming’ phones like the Razer Phone 2 and Asus phones. They have gigantic batteries, lots of RAM, and lean, stock UIs that let you disable/uninstall apps, hence they’re fast as heck and last forever. I only gave up my Razer Phone 2 because the mic got clogged up with dust, and I miss it.

    While I kind of agree (though I don’t really like the “gamer” aesthetics), Asus only offers two major updates and two years of patches, which is quite short.

    https://www.androidauthority.com/phone-update-policies-1658633/

    If someone games with their phone and plans to frequently upgrade for new hardware, they may not care. But if you get the hardware just to have a large battery and RAM, that may be a concern.

    EDIT: Also, no mmWave support, which may or may not matter to someone.




  • Sixteen percent of GDP…The United States has tethered 16% of its entire economic output to the fortunes of a single company

    That’s not really how that works. Those two numbers aren’t comparable to each other. Nvidia’s market capitalization, what investors are willing to pay for ownership of the company, is equal to sixteen percent of US GDP, the total annual economic activity in the US.

    They’re both dollar values, but it’s like comparing the value of my car to my annual income.

    You could say that the value of a company is somewhat-linked to the expected value of its future annual profit, which is loosely linked to its future annual revenue, which is at least more connected to GDP, but that’s not going to be anything like a 1:1 ratio, either.
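
    The car-versus-income analogy in (made-up) numbers:

```python
# Stock vs. flow, with made-up figures: a market cap is a price (a stock),
# GDP is activity per year (a flow). Dividing one by the other yields a
# quantity in "years", not a share of the economy.
car_value = 30_000        # what the car would sell for today (a stock)
annual_income = 60_000    # earned per year (a flow)
ratio = car_value / annual_income   # 0.5 "years", not "50% of my income"
```
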



  • But the software needs to catch up.

    Honestly, there is a lot of potential room for substantial improvements.

    • Gaining the ability to identify edges of the model that are not particularly relevant to the current problem and unloading them. That could bring down memory requirements a lot.

    • I don’t think — though I haven’t been following the area — that current models are optimized for being clustered. Hell, the software running them isn’t either. There’s some guy, Jeff Geerling, who was working on clustering Framework Desktops a couple months back, because they’re a relatively-inexpensive way to get a ton of VRAM attached to parallel processing capability. You can have multiple instances of the software active on the hardware, and you can offload different layers to different APUs, but currently, it’s basically running sequentially — at any given moment, no more than one APU is doing compute. I’m pretty sure that that’s something that can be eliminated (if it hasn’t already been). Then the problem — which he also discusses — is that you need to move a fair bit of data from APU to APU, so you want high-speed interconnects. Okay, so that’s true if what you want is to just run models designed for expensive, beefy hardware on a lot of clustered, inexpensive hardware…but you could also train models to optimize for this — say, use a network of neural nets with extremely-sparse interconnections between them and denser connections internal to them, with each APU running only one neural net.
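
    The idle-APU problem above is just classic pipeline fill/drain math. A toy timing model (hypothetical numbers, not Geerling’s actual measurements):

```python
# Why naive layer offloading leaves devices idle: one request walks the
# pipeline one stage at a time, so only a single APU computes at any moment.
# Splitting the batch into micro-batches overlaps the stages instead.
def naive_time(stages, per_stage):
    return stages * per_stage                 # strictly sequential

def pipelined_time(stages, per_stage, micro_batches):
    # fill + drain: (stages + micro_batches - 1) steps, each step working
    # on a 1/micro_batches slice of the batch
    return (stages + micro_batches - 1) * per_stage / micro_batches
```

    With 4 APUs and 1 time unit per stage, the sequential version takes 4 units; splitting into 4 micro-batches cuts that to 1.75, at the cost of the inter-APU transfers discussed above.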

    • I am sure that we are nowhere near being optimal just for the tasks that we’re currently doing, even using the existing models.

    • It’s probably possible to tie non-neural-net code in to produce very large increases in capability. To make up a simple example, LLMs are, as people have pointed out, not very good at giving answers to arithmetic questions. But…it should be perfectly viable to add a “math unit” that some of the nodes on the neural net interfaces with and train it to make use of that math unit. And suddenly, because you’ve just effectively built a CPU into the thing’s brain, it becomes far better than any human at arithmetic…and potentially at things that makes use of that capability. There are lots of things that we have very good software for today. A human can use software for some of those things, through their fingers and eyes — not a very high rate of data interchange, but we can do it. There are people like Musk’s Neuralink crowd that are trying to build computer-brain interfaces. But we can just build that software directly into the brain of a neural net, have the thing interface with it at the full bandwidth that the brain can operate at. If you build software to do image or audio processing in to help extract information that is likely “more useful” but expensive for a neural net to compute, they might get a whole lot more efficient.
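
    A minimal sketch of that “math unit” idea, with plain routing code standing in for the dispatch behavior that the net would actually have to learn (all names here are hypothetical):

```python
import ast
import operator
import re

# Hypothetical "math unit": a safe arithmetic evaluator the model can hand
# work to, instead of predicting digits token by token.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def math_unit(expr):
    # Evaluate only whitelisted binary arithmetic; no names, no calls,
    # no unary minus -- anything else is rejected.
    def ev(node):
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def answer(prompt):
    # Stand-in for the routing the net would learn: arithmetic-looking
    # prompts go to the exact tool, everything else falls back.
    m = re.fullmatch(r"\s*([\d\s.+\-*/()]+?)\s*=?\s*", prompt)
    if m:
        return str(math_unit(m.group(1)))
    return "(fall back to the language model)"
```

    The point is that the arithmetic answer is exact by construction, not a statistical guess.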


  • There’s loads of hi-res ultra HD 4k porn available.

    It’s still gonna have compression artifacts. Like, the point of lossy compression having psychoacoustic and psychovisual models is to degrade the stuff as far as you can without it being noticeable. That doesn’t impact you if you’re just viewing the content as-is, but it does become a factor if you’re transforming it or learning from it. Like, you’re viewing something in a reduced colorspace with blocks and color shifts and stuff.

    I can go dig up a couple of diffusion models finetuned off SDXL that generate images with visible JPEG artifacts, because they were trained on a corpus that included a lot of said material and didn’t have some kind of preprocessing to deal with it.

    I’m not saying that it’s technically-impossible to build something that can learn to process and compensate for all that. I (unsuccessfully) spent some time, about 20 years back, on a personal project to add neural net postprocessing to reduce visibility of lossy compression artifacts, which is one part of how one might mitigate that. Just that it adds complexity to the problem to be solved.
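
    As a toy version of why those artifacts can’t simply be decoded away, coarse quantization alone — no DCT, no chroma subsampling — already destroys information irreversibly:

```python
# Lossy compression's core move, stripped down: quantization. A smooth
# 8-bit ramp gets bucketed into 8 levels; the resulting banding is exactly
# the kind of structure a model trained on such images will pick up, and
# no decoder can recover the original values, because they're gone.
step = 32
gradient = list(range(256))                          # smooth ramp, 256 levels
quantized = [(v // step) * step for v in gradient]   # only 8 levels survive
max_error = max(g - q for g, q in zip(gradient, quantized))
levels = len(set(quantized))
```
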


  • I doubt that OpenAI themselves will do so, but I am absolutely confident that someone not only will be banging on this, but I suspect that they probably have already. In fact, IIRC from an earlier discussion, someone already was selling sex dolls with said integration, and I doubt that they were including local parallel compute hardware for it.

    kagis

    I don’t think that this is the one I remember, but it doesn’t really matter; I’m sure that there’s a whole industry working on it.

    https://www.scmp.com/tech/tech-trends/article/3298783/chinese-sex-doll-maker-sees-jump-2025-sales-ai-boosts-adult-toys-user-experience

    Chinese sex doll maker sees jump in 2025 sales as AI boosts adult toys’ user experience

    The LLM-powered dolls are expected to cost from US$100 to US$200 more than existing versions, which are currently sold between US$1,500 and US$2,000.

    WMDoll – based in Zhongshan, a city in southern Guangdong province – embeds the company’s latest MetaBox series with an AI module, which is connected to cloud computing services hosted on data centres across various markets where the LLMs process the information from each toy.

    According to the company, it has adopted several open-source LLMs, including Meta Platforms’ Llama AI models, which can be fine-tuned and deployed anywhere.