The EU has such a law: the General Data Protection Regulation (GDPR), which works reasonably well. Pretty good place to start.
“might”
That word is carrying a mighty big load.
What’s one that doesn’t suck?
Historical background on current events: Heather Cox Richardson.
Yeah, lots of opinions, a few facts: here’s one of the discussions.
True in a way. However, there is a rather large collection of speculation on the Internet that is quite an undertaking to correct, and a large population of people and bots willing to speculate. Also, once voiced, each speculation takes on a life of its own. If it gets much more substantial, forget Skynet: we’re busy creating Specunet and its sidekick Confusionet – an insidious duo.
Be careful of printers with chipped toner cartridges, though. Older models still rock.
There’s certainly a history of Unix and Unix-like forks, which is rather simple compared to the Linux distro forks (go right to the big pic).
Huh, so it is; it was there last January. It used to follow this paragraph (still there today, anyway), which contains a similar criticism, with citation:
It is widely used and has sometimes been criticised for its methodology.[4] Scientific studies[5] using its ratings note that ratings from Media Bias/Fact Check show high agreement with an independent fact checking dataset from 2017,[6] with NewsGuard[7] and with BuzzFeed journalists.
So if those are considered fact-based, there’s no need to delve further.
However, Wikipedia editors consider Media Bias/Fact Check “generally unreliable”, recommending against its use over what some see as a breach of Wikipedia’s neutral point of view.
Or as Dijkstra puts it: “asking whether a machine can think is as dumb as asking if a submarine can swim”.
Alan Turing put it similarly: the question is nonsense. However, if you define “machine” and “thinking”, and redefine the question to mean “is machine thinking indistinguishable from human thinking?”, you can answer affirmatively, at least theoretically (rough paraphrasing). Though the current evidence suggests otherwise (e.g. AI learning from other AI drifts toward nonsense).
For more, see: Computing Machinery and Intelligence, and Turing’s original paper (which goes into the Imitation Game).
Oooooh, okay, I misread. Apologies.
Yet they (possibly) use AI to detect users’ AI answers.
The “running joke” used by millions for serious and playful projects? [edited for punctuation]
Let’s extend this thought experiment a little. Consider just forum posts; the numbers will be somewhat similar for articles and other writings, as well as photos and videos.
A bot creates how many more posts than a human? Being (ridiculously) conservative, we’ll say 10x more.
On day one: 10 humans are posting (for simplicity’s sake) 10 times a day, totaling 100 posts. One bot is posting 100 a day. That’s 200 human-and-bot posts in total, 50% of which are the bot’s.
In your (extended) example, at the end of a year: the 10 humans are still posting 100 times a day in total. The 10 bots are posting a total of 1,000 times a day. Bots are at roughly 91% of posts, humans at 9%.
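For the curious, here’s a minimal Python sketch of that arithmetic (all of the numbers are the thought experiment’s assumptions, not measurements):

```python
# Thought-experiment numbers from the posts above (assumptions, not data):
# 10 humans posting 10 times a day, each bot posting 10x what a human does.
HUMANS = 10
POSTS_PER_HUMAN = 10   # posts per human per day
BOT_MULTIPLIER = 10    # a bot out-posts a human by 10x (ridiculously conservative)

def bot_share(n_bots: int) -> float:
    """Fraction of all daily posts that come from bots."""
    human_posts = HUMANS * POSTS_PER_HUMAN
    bot_posts = n_bots * POSTS_PER_HUMAN * BOT_MULTIPLIER
    return bot_posts / (human_posts + bot_posts)

print(f"Day one, 1 bot:    {bot_share(1):.0%} of posts are bot posts")   # 50%
print(f"Year end, 10 bots: {bot_share(10):.0%} of posts are bot posts")  # 91%
```

Crank up BOT_MULTIPLIER or the bot count and the bot share races toward 100%.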
This statistic can lead you to think human participation on the Internet is difficult to find.
Returning to reality, consider how inhumanly prolific AI bots are: each can probably out-post humans by factors of millions or billions, under millions of aliases. If you find search engines, articles, forums, reviews, and such bonkers now, just wait a few years. Predicting general chaotic nonsense for the Internet is a rational conclusion, with very few islands of humanity. Unless bots are stopped.
Right now though, bots are increasing.
Exactly. A more accurate headline would be “Americans are Falling Behind on their Income.”
Yes, though in some locales there are “work crews” (slave labor) that clear brush, road litter, and such for businesses, organizations, the state, and individuals.
Back in 2000, there was something like that for the kernel with SELinux (Security-Enhanced Linux), which continues to live on in various distributions’ kernels. Not a full O/S though, and not generally regarded as a PoS.