And that fraud is specifically rampant because of AI… so if companies aren't using AI to combat it, they'd never be able to keep up.
So many twists and turns here!
It's alright, I wasn't going to tell anyone I knew the best energy solution after reading Lemmy comments. I haven't voted at all in this thread.
Nuclear definitely requires a ton of commitment. It takes like 60 years to decommission one, right?
This is a pretty fucking stupid comment lmao
What the hell are you talking about?
Maybe true. But if we have increased energy demand, it might as well be nuclear.
Halting AI development might sound nice to many people, but we can't make that happen. Fraud alone is orders of magnitude more rampant. It's here to stay and we have to deal with it. I think this is a big win.
When your worldview is made up of article thumbnails and the sun is simply too bright to bother.
Personally, it's the result that matters to me, and whether or not it's entertaining, regardless of how it was made.
I'm gonna watch Harry Potter, but Draco is Macho Man Randy Savage.
I want to be constructive so:
Please consider the unintentional disinformation people create when they try to sound like they know what they're talking about. Contributing to discussion on complex topics is difficult.
It's perfectly natural to want to continue a conversation to the point where you fill in some details instead of researching a topic or not responding. But that's seriously harmful in the age of disinformation. There's plenty I don't know. But there are tools expressly created to identify AI content to avoid using it in model training. The consequences of using synthetic data are the only topic of the article you're commenting on. Either read the article or please don't feel like you need to come up with a response.
Hey, I just wanted to offer my condolences for your downvotes, and I'm here if you want to talk.
Can't blame me for asking :)
Seems like tools that recognize AI content, used to filter out synthetic input, would avoid model degradation.
If those tools are up to the task, then I'd agree it probably doesn't hinder model training. Not sure what the reality is, or whether the need for those tools creates a barrier to entry for a significant portion of those trying to create models with internet-crawled data.
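To make the idea concrete: the filtering step being discussed is basically a pass over the crawled corpus that drops documents a detector flags as likely AI-generated, before anything reaches training. A minimal sketch, where `looks_synthetic` is a toy stand-in heuristic (real systems use trained classifiers or watermark checks, not a marker-phrase match):

```python
def looks_synthetic(doc: str, threshold: float = 0.5) -> bool:
    """Toy stand-in for an AI-content detector: flag documents
    containing a telltale marker phrase. A real detector would
    return a probability from a trained classifier instead."""
    markers = ("as an ai language model", "i cannot assist with")
    score = 1.0 if any(m in doc.lower() for m in markers) else 0.0
    return score >= threshold

def filter_corpus(docs: list[str]) -> list[str]:
    """Keep only documents the detector does not flag, so less
    synthetic text ends up in the training corpus."""
    return [d for d in docs if not looks_synthetic(d)]

corpus = [
    "Local bakery wins award for sourdough.",
    "As an AI language model, I cannot share opinions.",
    "Trip report: hiking the coastal trail in fog.",
]
clean = filter_corpus(corpus)
print(len(clean))  # 2 of the 3 documents survive filtering
```

The barrier-to-entry worry maps onto the detector itself: the filtering loop is trivial, but a detector accurate enough to trust at web scale is the expensive part.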
By chance, is that based on other people's succinct social media comments on AI?
Kind of like how genuine thoughts and opinions on complex topics get boiled down into digestible concepts for others to understand, who then perpetuate those concepts without understanding them, so the meaning degrades and we don't think anymore, we just repeat stuff in social media comments.
Side note… this article sucks and seems like it was AI-generated. Repetitive, and no author credit? It just says it was originally posted elsewhere.
Generative AI isn't in danger of being killed as this clickbait title suggests… just hindered.
Idk why we need to say we're certain about someone's intentions. It could just be a mental stability thing, like many people have problems with. His tweets are just far more consequential than a regular person's. He might not be as strongly shaped by social norms, due to his fortunate circumstances.
(I'm not defending anything. I just want truth, and there's no point in extrapolating the situation further than what we see on the surface… just punish him according to what he did, or become a journalist and get relevant details to judge him further.)
Yeah, but glad is a feeling you have, not something that makes you try to create a narrative or change reality. It's concerning. Please fix everyone naught.
How could someone convince themselves that this isn't concerning or newsworthy? Why don't people want truth?
Sorry, but a new Pico headset wouldn't do much of anything. A new Meta headset or a new Valve headset would give a bump.
Really needs better content. The hardware is almost there (in terms of cost and accessibility of the experience).
It's slowly getting there. But the current population of VR users is characterized by the question: who would play the same limited experiences consistently, with hardware that's often cumbersome, and loading screens that aren't super long but become your entire existence, and it's annoying?
Meta sucks, but they have been a boon for VR development.
Thanks. I speed-read it, and I hope it's okay if I raise some quick thoughts.
I thought it was interesting how it mentioned LLMs aren't a mind formed in nature. I'd offer a dumb conjecture that AGI, while a mind, might still need an LLM as a component to actually handle the amount of data a society produces. Like you said, LLMs are useful if you know the answer, or at least suspect when to revisit a result. Maybe we're missing the biggest piece of AGI, but handling data is really important, and this still benefits us, right? I think we'll need more than a mind from our local nature to create god.
I'm a pretty skeptical person. When I used ChatGPT I was pretty blown away, and I wouldn't say I was leaning into the idea that it was sentient. I just saw an incredible new tool, and through using it, I now understand the pitfalls and can get awesome results that would never have been achieved with googling in the amount of time I spent. Almost all of the heavy lifting I have it do, I immediately verify through testing, and it's correct often enough to realize huge gains over googling, my local library, etc.
I think the criticisms of LLMs and their capabilities aren't inaccurate, but maybe short-sighted? I think criticisms should currently focus on their performance in how we're actually using them… not how laypeople might imagine using something they don't understand. Ultimately, any use case should be heavily tested and should perform more accurately than its human counterpart (where we're talking about replacement of humans, anyway). If we don't find the gains from those applications to justify the power use… or whatever… then we're always capable of recognizing that.
I think it's 100% valid to push back against idealized predictions, but I also think shit's gonna get crazy. I think there's a lot to be gained, and I question why LLMs can't be a stepping stone to greater computing milestones, even if LLMs themselves aren't a component of AGI in the end.
What I'm trying to be convinced of is that the criticisms aren't as overblown as the hype.
Interesting. What is the chance a nuclear plant goes boom? Sounds legit.