Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 0 Posts
  • 39 Comments
Joined 13 days ago
Cake day: July 17th, 2025

  • “Study my brain. I’m sorry,” Tisch quoted Tamura as having written in the note. The commissioner noted that Tamura had fatally shot himself in the chest.

    He didn’t shoot himself in the head, presumably to preserve the brain. Reminds me of the “Texas Tower Shooter,” Charles Whitman.

    In his note, Whitman requested that an autopsy be performed on his remains to determine whether there had been a biological cause for his actions and for his continuing, increasingly intense headaches.

    During the autopsy, Dr. Chenar reported that he discovered a pecan-sized brain tumor, above the red nucleus, in the white matter below the gray center thalamus, which he identified as an astrocytoma with slight necrosis.

    I’ve heard a neuroscientist discuss this case and conclude that the tumor could very well have been the cause of his behavior.


  • I don’t think you even know what you’re talking about.

    You can define intelligence however you like, but if you come into a discussion using your own private definitions, all you get is people talking past each other and thinking they disagree when they don’t. Terms like this have a technical meaning for a reason. Sure, you can simplify things in a one-on-one conversation with someone who doesn’t know the jargon - but dragging those made-up definitions into an online discussion just muddies the water.

    The correct term here is “AI,” and it doesn’t somehow skip over the word “artificial.” What exactly do you think AI stands for? The fact that normies don’t understand what AI actually means and assume it implies general intelligence doesn’t suddenly make LLMs “not AI” - it just means normies don’t know what they’re talking about either.

    And for the record, the term is Artificial General Intelligence (AGI), not GAI.


  • Claims like this just create more confusion and lead to people saying things like “LLMs aren’t AI.”

    LLMs are intelligent - just not in the way people think.

    Their intelligence lies in their ability to generate natural-sounding language, and at that they’re extremely good. Expecting them to consistently output factual information isn’t a failure of the LLM - it’s a failure of the user’s expectations. LLMs are so good at generating text, and so often happen to be correct, that people start expecting general intelligence from them. But that was never what they were designed to do.

  • Trust what? I’m simply pointing out that we don’t know whether he actually did anything illegal. A lot of people seem convinced that he did - which they couldn’t possibly be certain of - or they’re hoping he did, which is a pretty awful thing to hope for once you stop and think about the implications. And then there are those who don’t even care whether he did anything or not; they just want him convicted anyway - which is equally insane.

    Also, being “on the list” is not the same thing as being a child rapist. We don’t even know what this list really is or why certain people are on it. Anyone connected to Epstein in any capacity would dread having that list released, regardless of the reason they’re on it, because the result would be total destruction of their reputation.