I’m also curious. A quick search came up with these. Not sure which one is most reliable/updated
Many things are called “AI models” nowadays (unfortunately due to the hype). I wouldn’t dismiss the tools and methodology yet.
That said, the article (or the researchers) did the analysis a disservice by not linking to the report (and code) that outlines the methodology and shows what the distribution of similarities looks like. I couldn’t find a link in the article, and a quick search didn’t turn up anything.
you should try asking the same question with xAI / Grok if possible. You could also ask ChatGPT about Altman.
welp, guess you’re right. It’s not common, but it’s not just a few people either.
tell me more about the “almost” part …
Based on this reddit comment, that website is not affiliated with the magic-wormhole CLI tool
hm, I think I’ve been doing it wrong then …
someone should make an alternate-history TV show where the ship made it. bonus points if it’s a parody.
I believe experiments like these should move slower and with more scrutiny. As in more animal testing before moving on to humans, esp. due to the controversies surrounding Neuralink’s last animal experiments.
I think porn generation (image, audio and video) will eventually be very realistic and very easy to make with only a few clicks and some well-crafted prompts. Things will be on a whole other level than what Photoshop used to be.
re: your last point, AFAIK, the TLDR bot is also not AI or LLM; it uses more classical NLP methods for summarization.
If you suspect that it’s been modified, try checking places like the Internet Archive or archive.today. The claims you’ve made are big, so back them up with sources.
Is there a database tracking companies that start out with good intentions and then eventually get bought out or sell out their initial values? I’m wondering what the deciding factors are, and how long it takes for them to turn.
re 1: out of curiosity, do you encounter DNS leaks when using WireGuard?
re 4: you can also check out https://starship.rs/, which helps configure the shell prompt very intuitively with a TOML file.
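A minimal starship.toml just to show the idea (option names from memory, so double-check them against the starship docs):

```toml
# ~/.config/starship.toml
add_newline = false            # no blank line between prompts

[character]
success_symbol = "[➜](bold green)"
error_symbol = "[✗](bold red)"

[directory]
truncation_length = 3          # show at most 3 parent directories
```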
what are the other alternatives to ENV that are more preferred in terms of security?
yeah, I guess the formatting and the verbosity seem a bit annoying? I wonder what alternative solutions could better engage people from mastodon, which is what this bot is trying to address.
edit: just to be clear, I’m not affiliated with the bot or its creator. This is just my observation from multiple posts I see this bot comments on.
I’m curious, why is this bot currently being downvoted for almost every comment it makes?
maybe port over some of your previous videos to grow content on peertube as well if it’s possible. not sure if there’s any legal issue with this tho.
Thanks for the suggestions! I’m actually also looking into llamaindex for more conceptual comparison, though didn’t get to building an app yet.
Any general suggestions for a locally hosted LLM to use with llamaindex, by the way? I’m also running into some issues with hallucination. I’m using Ollama with llama2-13b and the bge-large-en-v1.5 embedding model.
Anyway, aside from conceptual comparison, I’m also looking for more literal comparison. AFAIK, the choice of embedding model affects how similarity is defined. Most current LLM embedding models are fairly abstract, so the similarity will be conceptual: “I have 3 large dogs” and “There are three canines that I own” will probably score as very similar. Do you know which embedding model I should choose to get a more literal comparison?
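For the literal end of the spectrum, even plain token overlap (Jaccard) behaves the way I mean — the two dog sentences above score near zero lexically even though an embedding model would likely rate them close:

```python
def jaccard(a: str, b: str) -> float:
    """Literal, surface-level similarity: shared tokens / total tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)
```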
That aside, like you indicated, there are some issues. One of them involves length. I hope to find something that can iteratively build up from similar sentences to similar paragraphs. I can take a stab at coding it up, but I was wondering if there are similar frameworks out there already that I could model it after.
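Roughly what I have in mind, as a sketch — the embed() here is a throwaway bag-of-words stand-in (not the bge model); a real version would swap in actual sentence embeddings and a proper sentence splitter:

```python
import math
from collections import Counter

def embed(text):
    # stand-in embedding: bag-of-words counts, just for illustration
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def paragraph_similarity(para_a, para_b):
    """Score two paragraphs by averaging each sentence's best match in the other."""
    sents_a = [s for s in para_a.split(". ") if s]
    sents_b = [s for s in para_b.split(". ") if s]
    embs_a = [embed(s) for s in sents_a]
    embs_b = [embed(s) for s in sents_b]
    scores = [max(cosine(ea, eb) for eb in embs_b) for ea in embs_a]
    return sum(scores) / len(scores)
```

Averaging each sentence’s best match is just one aggregation choice; taking the max, or a softer alignment, would also work.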
You can also just post the 4-5 data items without claiming that the credibility or bias is low or high, then let people make the decision. Like this maybe:
“Based on source X, this source’s media bias is:
Methodology of X is at: “