

Anyone who has to ask is probably as bad as he is.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit before joining the Threadiverse as well.
Because a machine is expected to do it right the first time.
No, it isn’t. And it doesn’t have to, because as I pointed out it can check its work.
You’ve got a mistaken impression of how AI works, and of how machines in general work. They can make mistakes, and they can recognize and correct those mistakes. I’m a programmer, and I have plenty of first-hand experience; I’ve written code that does exactly this.
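To make that concrete, here’s a minimal sketch of the kind of check-and-fix loop I mean. The `generate` function is a hypothetical stand-in for a model call (not any real API); the point is that its output gets validated before it’s accepted, and a bad draft triggers another attempt:

```python
import json

def generate(attempt):
    """Hypothetical stand-in for a model call. The first attempt
    returns malformed JSON; a later attempt returns valid JSON."""
    if attempt == 0:
        return '{"status": "ok",}'   # trailing comma: invalid JSON
    return '{"status": "ok"}'

def checked_generate(max_attempts=3):
    """Generate-then-verify loop: output is checked, and on failure
    the work is redone instead of being shipped broken."""
    for attempt in range(max_attempts):
        text = generate(attempt)
        try:
            return json.loads(text)   # the "double check"
        except ValueError:
            continue                  # mistake detected; try again
    raise RuntimeError(f"no valid output after {max_attempts} attempts")

print(checked_generate())  # the bad first draft is caught and redone
```

Real validators are domain-specific (schema checks, unit tests, a second model pass), but the structure is the same: make a mistake, notice it, fix it.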
So if a machine is to take over that job, it had better do it right, reliably, and more cheaply.
Yes, that’s the plan.
And I’m inclined to think competition is a good sign, especially when the field has been drifting toward monopoly for years now.
You said:
As long as AI does not get it 100% right every time it is not touching my house. And yes, a professional doesn’t reach that rate either, but at least they know and doublecheck themselves and know how to fix things.
Well, why didn’t the human professional do it right the first time, then? If it’s okay for a human professional to make mistakes because they can double-check and fix them, why isn’t it okay for a machine to do likewise?
The halting problem is an abstract mathematical issue, in actual real-world scenarios it’s trivial to handle cases where you don’t know how long the process will run. Just add a check to watch for the process running too long and break into some kind of handler when that happens.
I’m a professional programmer, I deal with this kind of thing all the time. I’ve literally written applications using LLMs that do this.
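A minimal sketch of that pattern, assuming a worker function whose runtime we can’t predict in advance (the `possibly_long_task` here is a hypothetical stand-in, e.g. for an LLM call): rather than trying to prove the process halts, just enforce a wall-clock budget and hand control to a handler when it’s exceeded.

```python
import concurrent.futures

def possibly_long_task():
    # Stand-in for work whose runtime is unknown in advance
    # (hypothetical; e.g. an LLM call or an unbounded search).
    return sum(range(1_000_000))

def run_with_deadline(fn, timeout_s=5.0):
    """Practical answer to the halting problem: don't decide whether
    fn halts, just cap how long we're willing to wait for it."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            return None  # break into a handler: log, retry, or give up

print(run_with_deadline(possibly_long_task))
```

In production you’d typically kill the overrunning work (a subprocess or a cancellation token) rather than leave the thread running, but the shape is the same: a watchdog plus a handler, no halting-problem theory required.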
Where do I say anything about “offing” the board? That’s rather a leap. I’m talking about the US government attacking the corporate structure of Wikipedia, which isn’t paranoia because this article is literally about exactly that.
The term “artificial intelligence” has been in use since the 1950s and it encompasses a wide range of fields in computer science. Machine learning is most definitely included under that umbrella.
Why do you think an AI can’t double-check things and fix them when it notices problems? It’s a fairly straightforward process.
AIs can also double-check themselves and know how to fix things.
You’re still making the assumption that “they” are the same people. That’s the point here, Wikipedia-the-organization is being threatened.
Assuming that there’s just one single new site that pops up that everyone agrees to go to en masse, and that it has enough resources to handle the load, and that its administration is aligned with the same goals as the original.
It could happen, but it’s by no means as simple and easy as cloning a repository.
The tricky bit is that Wikipedia is a “living” document, constantly being updated and refined by a huge community of dedicated editors, and you can’t download the community and pop up a new one overnight. AI isn’t that good yet.
“But why is it necessary?”
“I concluded it. Didn’t you hear me?” <Makes a note to subpoena this annoying questioner later>
My point is that the “already fully prepared” requirement is extremely small and easy. “Having a car” is enough (or, in the event of one of these disaster scenarios, having someone else’s unattended car somewhere near you). So bringing it up as an objection to the usefulness of this hard drive is not really significant.
You’re overestimating the difficulty and expense necessary to support this device. You could probably power it from a car. A solar panel and inverter cost less than a hundred dollars.
There are an infinite number of things for which there is no evidence. Preparing for those things would be taking effort away from preparing for things that are actually real.
The first lunar astronauts spent 21 days in quarantine because we know that diseases are real and in the past there have been real examples of explorers bringing back new diseases from the places they visited. They didn’t simultaneously get ritually cleansed by a shaman because there is no evidence of actual lycanthropy being a thing.
Of the possibilities, I find
How do you find that? Through some kind of rigorous analysis, or just an intuitive feeling?
As I keep saying, the human mind is not good at intuitively handling very large or very small numbers and probabilities.
You’re analyzing a risk we could imagine; what you can’t do is analyze a risk we haven’t imagined yet.
What you can’t do is analyze a risk without doing an actual analysis. For that you need to collect data and work the numbers, not just imagine them.
Not miraculously; we know some of the causes that make this happen.
Yes, and all the causes that we know don’t apply to any nearby stars that might threaten us. You have to make up imaginary new causes in order to be frightened of a gamma ray burst.
A quick Googling puts them around $50. The PrepperDisk is priced at $270 (Canadian dollars in both cases). So add a second drive and it jumps to $320, plus the cost of whatever additional complexity there is to the motherboard to support it, plus extra development cost for the RAID controller. And the device itself becomes bulkier.
Sure, this satisfies the handful of people who were concerned about that. Everyone else ends up with what’s basically the same product but more expensive and bulkier. I can easily see the developers deciding that’s a net loss for sales.
Money. RAID 1 would make every PrepperDisk cost a lot more, since it would need double the storage space, and fewer people would buy it. Instead, keep it cheap and let the people who are truly concerned about redundancy solve the problem themselves by buying two.
If you’re concerned, why not just have two of them? That’s more resilient, since you can store them in different places.