• 0 Posts
  • 449 Comments
Joined 1 year ago
Cake day: July 7th, 2023



  • It’s the second one. They are all in on this AI bullshit because they’ve got nothing else. There are no other exponential growth markets left. Capitalism has gotten so autocannibalistic that simply being a global monopoly in multiple different fields isn’t good enough. For investors it’s not about how big your company is, how reliable your yearly returns are, or how stable your customer base is; the only thing that matters is how fast your business is growing. But small businesses have no space to grow because of the monopolies filling every available space, and the monopolies are already monopolies. There are no worlds left to conquer. They’ve already turned every single piece of our digital lives into a subscription, blockchain was a total bust, the metaverse was a demented fever dream, VR turned out to be a niche toy at best; unless someone comes up with some brand new thing that no one has ever heard of before, AI is the last big boondoggle they have left to hit the public with.



  • Personally I think it’d be interesting to see this per capita, so here’s my back-of-the-napkin math for data centers per 1 million pop (c. 2022):

    • NL - 16.78
    • US - 16.15
    • AU - 11.72
    • CA - 8.63
    • GB - 7.68
    • DE - 6.22
    • FR - 4.63
    • JP - 1.75
    • RU - 1.74
    • CN - 0.32

    Worth noting of course that this only lists the quantity of discrete data centers and says nothing about the capacity of those data centers. I think it’d be really interesting to break down total compute power and total storage by country and by population.

    I’d also be interested to know what qualifies as a “data center.” For example, are ASIC-based crypto mining operations counted, even though their machinery cannot be repurposed to any other function? That would certainly account for a chunk of the US total (almost all of it in Texas).
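
    The per-capita figures above boil down to a one-line calculation. Here’s a sketch of it; the counts and population in the example are illustrative placeholders, not the real 2022 figures:

    ```python
    def per_million(data_centers: int, population: int) -> float:
        """Data centers per 1 million population."""
        return data_centers / (population / 1_000_000)

    # Hypothetical example: 5,000 data centers in a country of 330 million people
    rate = per_million(5_000, 330_000_000)
    print(round(rate, 2))  # 15.15
    ```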






  • I’m really excited for this game. Not just for the visuals, but for everything they’re doing with the mechanical design. The idea of playing as scavengers trapped between two warring factions is incredibly cool, and based on early previews it sounds like there are a lot of very clever design elements, especially in the AI, all built to back up that core idea. For example, enemies intelligently prioritize targets; a tank won’t focus on infantry if there’s an enemy tank present, and even when it does target the infantry it’ll use its machine guns, not the main cannon. Enemies will focus on you if you make yourself the biggest threat, but if you’re smart and follow the flow of battle you can keep their focus elsewhere.

    That’s really smart stuff, and by all accounts it works very well. I also really like what the studio is doing more broadly. They’re really trying to push back on a lot of the toxic practices in the gaming industry. I’ll be getting the game day one, mostly just to reward them for trying to do something different.



  • While truly defining pretty much any aspect of human intelligence is functionally impossible with our current understanding of the mind, we can create some very usable “good enough” working definitions for these purposes.

    At a basic level, “reasoning” would be the act of drawing logical conclusions from available data. And that’s not what these models do. They mimic reasoning by mimicking human communication. Humans communicate the process by which we reason (and have developed a lot of specialized language for doing so), so LLMs can replicate the appearance of reasoning by replicating the language around it.

    The way you can tell that they’re not actually reasoning is simple: their conclusions often bear no actual connection to the facts. There’s an example I linked elsewhere where the new model is asked to list states with W in their name. It does a bunch of preamble where it spells out very clearly what the requirements and process are: assemble a list of all states, then check each name for the presence of the letter W.

    And then it includes North Dakota, South Dakota, North Carolina and South Carolina in the list.

    Any human being capable of reasoning would absolutely understand that that was wrong, if they were taking the time to carefully and systematically work through the problem in that way. The AI does not, because all this apparent “thinking” is a smokescreen. They’re machines built to give the appearance of intelligence, nothing more.
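
    For contrast, the systematic check the model claims to be performing is trivial to carry out correctly (case-insensitive match shown here; a strict capital-W match would drop Delaware, Hawaii, and the “New …” states):

    ```python
    STATES = [
        "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
        "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
        "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
        "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
        "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
        "New Hampshire", "New Jersey", "New Mexico", "New York",
        "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
        "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
        "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
        "West Virginia", "Wisconsin", "Wyoming",
    ]

    # Check each name for the letter W, ignoring case.
    # The Dakotas and Carolinas are correctly excluded: no W anywhere.
    with_w = [s for s in STATES if "w" in s.lower()]
    print(with_w)
    ```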

    When real AGI, or even something approaching it, actually becomes a thing, I will be extremely excited. But this is just snake oil being sold as medicine. You’re not required to buy into their bullshit just to prove you’re not a technophobe.



  • Noted. I’ll have to play around with that sometime.

    Despite my obvious stance as an AI skeptic, I have no problem with putting it to use in places where it can be used effectively (and ethically). I’ve just found that in practice, those uses are vanishingly few. I’m not on some noble quest to rid the world of computers, I just don’t like being sold overhyped crap.

    I’m also hesitant to try to rebuild any part of my workflow around the current generation of these tools, when they obviously aren’t going to exist in a few years, or will exist but at an exorbitant price. The cost to run genAI is far, far higher than any entity (even Microsoft) has any willingness to sustain long term. We’re in the “give it away or make it super cheap to get everyone bought in” phase right now, but the enshittification will come hard and fast on this one, much sooner than anyone thinks. OpenAI are literally burning billions just in compute right now. It’s unsustainable. Short of some kind of magical innovation that brings those compute costs down a hundred or thousand fold, this isn’t going to stick around.



  • More and more advanced tools for automation are an important part of creating a post-scarcity future. If we can combine that with tearing down our current economic system - which inherently requires and thus has to manufacture scarcity - we can uplift our species in ways we can currently only imagine.

    But this ain’t it bud. If I ask you for water and you hand me a glass of warm piss, I’m not “against drinking water” for refusing to gulp it down.

    This isn’t AI. It isn’t - meaningfully and usefully - any form of automation at all. A bunch of conmen slapped the letters “AI” on the side of their bottle of piss and you’re drinking it down like it’s grandma’s peach tea.

    The people calling out the fundamental flaws with these products aren’t doing so because we hate the entire concept of automation, any more than someone exposing a snake-oil salesman hates medicine. What we hate is being lied to. The current state of this technology is bullshit and hype. It is not fit for human consumption (other than recreationally) and the money being pumped into it could be put to far better uses. OpenAI may have lofty goals, but they have utterly failed at achieving them, and right now any true desire to create AGI has been totally subsumed by the need to keep pumping out slightly better looking versions of the same polished turd in order to convince investors to keep paying for their staggeringly high hosting costs.